[GTALUG] GnuPG Woes...
jamon.camisso at utoronto.ca
Thu Jan 1 20:37:10 UTC 2015
On 2015-01-01 5:53 PM, Lennart Sorensen wrote:
> On Tue, Dec 30, 2014 at 04:53:09PM -0500, Peter King wrote:
>> I managed to locate two partial versions of the missing file, from which I
>> could reconstruct most of it. Still no idea about what went wrong, but given
>> that the partial versions decrypted without problem, my guess is a disk error
>> or something of the sort that corrupted the encrypted file, which was then
>> propagated to all my backups.
>> Moral of the Story (one moral among many): Keep static time-stamped backups as
>> well as current redundant copies. Will implement a scheme to do so this week,
>> a better New Year's resolution than most!
>> Thanks to all who offered suggestions.
> I suppose things like rsnapshot which keeps copies with hardlinks to
> save space really ought to be on a good filesystem. By default rsync
> doesn't compare files if the size and timestamp match, which of course
> means rsnapshot could happily think you have a good copy of a file but
> that has in fact gotten corrupt because the underlying disk is failing.
> Making it always do a read compare would make it much slower, so having
> the filesystem maintain the redundancy and checksums does seem more
> sensible.
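The quick-check behaviour described above is easy to demonstrate with plain
coreutils; the sketch below uses a scratch path (`/tmp/qcdemo`, chosen for
this example) and `cmp` standing in for the full content read that rsync's
`--checksum` option performs:

```shell
# Sketch: why a size+mtime "quick check" misses silent corruption.
set -e
d=/tmp/qcdemo; rm -rf "$d"; mkdir -p "$d"
printf 'good data' > "$d/src"
cp -p "$d/src" "$d/dst"
# Simulate bit rot: write different content of the same size, then
# restore the original mtime so the file looks untouched.
printf 'bad! data' > "$d/dst"
touch -r "$d/src" "$d/dst"
# Quick check (rsync's default): size and mtime match, so the file
# would be skipped and the corruption left in place.
[ "$(stat -c %s "$d/src")" = "$(stat -c %s "$d/dst")" ] \
    && echo "quick check: looks identical"
# A content compare (what rsync --checksum effectively does) catches it:
cmp -s "$d/src" "$d/dst" || echo "content compare: files differ"
```

The trade-off is exactly as stated: the content compare has to read every
byte on both sides, which is why it is so much slower than the default.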
The benefit of rsnapshot is that unless the original or subsequent
versions are deleted, it is possible to go back in time to a version of
the file that is intact. If the underlying disk is failing and
corrupting files, then ZFS or rsnapshot or tarballs won't make a
difference.
FWIW, I use rsnapshot for backups on top of ZFS (on Linux) on a
production remote backup server. Apart from lengthy delete times (which
is an issue with BTRFS as well, and with rsnapshot for any meaningful
amount of backups on any filesystem), it has been a reliable and
space-efficient backup system for a few years now.