recovery of failing disk (fwd)
D. Hugh Redelmeier
hugh-pmF8o41NoarQT0dZR+AlfA at public.gmane.org
Tue Jul 30 03:33:03 UTC 2013
This also didn't get through.
---------- Forwarded message ----------
From: David Collier-Brown <davec-b-bJEeYj9oJeDQT0dZR+AlfA at public.gmane.org>
To: tlug-lxSQFCZeNF4 at public.gmane.org
Cc: D. Hugh Redelmeier <hugh-pmF8o41NoarQT0dZR+AlfA at public.gmane.org>
Date: Mon, 22 Jul 2013 15:09:26 -0400
Subject: Re: [TLUG]: recovery of failing disk
Reply-To: davecb-0XdUWXLQalXR7s880joybQ at public.gmane.org
On 07/22/2013 02:02 PM, D. Hugh Redelmeier wrote:
> We have a portable external hard drive that is failing. It has/had lots
> of stuff we want on it.
>
> GNU ddrescue seems like a great tool. I've used it to capture what I
> can from the drive.
>
> Actually, it is still going: it first captures the stuff without errors,
> then is willing to go back and retry the bad sectors. Interestingly
> enough, even with all the retrying that the kernel does, ddrescue's
> retrying ekes out a bit more. But it is very slow going.
>
> There are at least three programs with the name ddrescue. GNU
> ddrescue seems to be the most evolved. It's what I get on Fedora when
> I ask for ddrescue.
>
> Reading ddrescue(1) isn't enough. Sadly, you really should read the
> info documents. I find "pinfo" is a reasonably painless GNU info
> reader.
>
> The original disk drive had sectors of 512 bytes. So the recovery is
> in those units. What I got was a full-of-holes ext3 filesystem.
> There are two ways to fix this, and I'm not sure which is best. I
> did the first:
>
> copy the rescued partition image to a large-enough disk
> partition, raw (so the new partition got the old filesystem).
>
> I used dumpe2fs to figure out the filesystem's block size:
> 4096
>
> I constructed a bad-block list:
> ddrescuelog -l- -b4096 ddrescue-log > badblocklist
>
> I used e2fsck to fix the filesystem, taking into account the
> bad block list:
> e2fsck -v -f -L badblocklist /dev/sdk2
>
> The alternative would be to skip the bad-block list. e2fsck would
> then see 512-byte sectors of zeros where the errors had been.
>
> Which is better?
>
> In the first technique, e2fsck is told where the losses are, so it
> should be able to take good account of them. This is not the case for
> the second.
>
> Each loss in the first is rounded up to a multiple of 4096 bytes whereas
> in the second it is in terms of 512-byte sectors. So the first
> technique must be losing some precious data.
>
> In the second technique, holes in metadata would surely be caught by
> e2fsck. But holes in the data of a file would not be detected.
> --
> The Toronto Linux Users Group. Meetings: http://gtalug.org/
> TLUG requests: Linux topics, No HTML, wrap text below 80 columns
> How to UNSUBSCRIBE: http://gtalug.org/wiki/Mailing_lists
>
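The sequence Hugh describes can be sketched end to end. The device names
(/dev/sdj2 for the failing source, /dev/sdk2 for the destination), file
names, and the -r3 retry count below are illustrative assumptions, not
taken from his setup:

```shell
# 1. Image the failing partition; ddrescue records unreadable areas
#    in its map/log file and can retry them (-r3 = three retry passes).
ddrescue -f -r3 /dev/sdj2 partition.img ddrescue-log

# 2. Copy the image raw onto a large-enough partition, so the new
#    partition gets the old (hole-riddled) filesystem.
dd if=partition.img of=/dev/sdk2 bs=1M conv=fsync

# 3. Read the filesystem block size from the superblock (4096 here).
dumpe2fs -h /dev/sdk2 | grep 'Block size'

# 4. List the bad areas as 4096-byte filesystem block numbers.
ddrescuelog -l- -b4096 ddrescue-log > badblocklist

# 5. Repair, telling e2fsck which blocks are known to be bad.
e2fsck -v -f -L badblocklist /dev/sdk2
```

Note that step 4 rounds each bad 512-byte sector up to the enclosing
4096-byte block, which is the precision loss discussed above.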
I'd be inclined to map bad blocks to file names/offsets early in the
process, go through the fsck approach, and then work through the files...
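One way to do that early mapping is with debugfs from e2fsprogs, before
fsck has a chance to move anything. The block and inode numbers below are
hypothetical; in practice you would feed in the numbers from the
bad-block list:

```shell
# icheck: which inodes own filesystem blocks 12345 and 67890?
debugfs -R "icheck 12345 67890" /dev/sdk2

# ncheck: which path names correspond to those inodes?
debugfs -R "ncheck 131073 262145" /dev/sdk2
```

Files named by ncheck are the ones known to contain holes, so they can be
reviewed or restored from elsewhere after the fsck pass.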
--dave
--
David Collier-Brown, | Always do right. This will gratify
System Programmer and Author | some people and astonish the rest
davecb-0XdUWXLQalXR7s880joybQ at public.gmane.org | -- Mark Twain
(416) 223-8968