Encryption, paranoia and virtual machines
Christopher Browne
cbbrowne-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org
Fri Nov 25 19:19:50 UTC 2011
On Fri, Nov 25, 2011 at 1:45 PM, Jamon Camisso
<jamon.camisso-H217xnMUJC0sA/PxXw9srA at public.gmane.org> wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
> On 11/25/2011 01:20 PM, Christopher Browne wrote:
>> No, it assumes that the data is encrypted before transmission. In
>> that case, the server never receives it in unencrypted form, so that
>> risk simply doesn't exist.
>>
>>> For example, assuming physical access, what is to prevent someone from
>>> running a forensic recovery tool on say /var/spool files, or on a swap
>>> partition if either location handled data destined for the encrypted
>>> database?
>>
>> What prevents this is that the data was encrypted before it was
>> received by this server.
>>
>>> For me at least, while encrypting the whole disk is definitely a
>>> shot-gun approach, the overhead is slight and reduces complexity. I
>>> certainly don't notice any performance issues with AES on an SSD.
>>
>> The problem is that encrypting the whole disk is the "Vitamin C as a
>> cure for cancer" cure; it is quite likely that it only provides any
>> protection in peoples' imaginations, rather than providing *actual*
>> protection.
>>
>> For "whole disk" encryption to function, there must be an
>> encryption key on the server, and if that is so, then that key
>> IS vulnerable to being found by the system administrators. Not "might
>> be vulnerable" - it right well *IS* vulnerable to anyone with physical
>> access, which makes any security being claimed into a mirage.
>>
>> It doesn't matter how many people think good thoughts about this form
>> of "security"; they may feel good about Craniosacral Therapy, or the
>> merits of homeopathy. "Good feelings" don't make any of it effectual.
>
> So here we have two different trust models at work. In the first case,
> the assumption is that data is not being tampered with or siphoned off
> at the origin *before* encryption. The source data is assumed to be
> trustworthy.
>
> This assumes a number of things about the network, physical security,
> encryption being used etc. Crucially, this also entails trust in the
> person(s) administering the box.
>
> The second case is the human one, where the attempt is to guard
> against the possible actions of a rogue administrator. This entails that
> every system said administrator can access is locked down such that they
> cannot interfere with the encryption, either at the source or
> destination. But then who locks down the box in the first place? Another
> administrator? A policy based configuration tool? Who hires that
> administrator or controls those files?
>
> Do you hire two administrators to check each other's work? What if they
> team up and go rogue?
>
> The crux of the issue is that at some point, a) the law of diminishing
> returns kicks into effect, and b) you have to trust a person or a
> machine or a network at some point. No matter how much work is done, at
> a certain moment it becomes a leap of faith that the systems or the
> people around you are trustworthy. I much prefer trusting in the latter,
> however naive that might sound.
The laws emerging around health care data are pointing towards that
trust being something you mightn't readily be able to afford.
And there are most definitely cases where you really, really, really,
really cannot assume that kind of trust.
http://www.wayner.org/node/15
Characteristic case: the rape crisis centre, where the admins in the
back room have NO business having access to the data, prurient or
otherwise. Victims won't talk freely if their skin crawls at the
thought of unknown people reading their accounts - data simply won't
get recorded unless it is secured against the admins, and that's not
"one layer of admins, but not the others that we do trust", that's
secured against access by ALL the admins. Your premise of an extra
layer of administrators doesn't work.
A less sensitive, but still interesting example is that of library
data collection: <http://www.wayner.org/node/31>
"One of the bigger challenges for librarians is protecting the privacy
of patrons while stopping them from stealing the books. In many cases,
there's not much interesting in our reading choices, but sometimes
there is. A spy might look for hints or clues in the list of books
taken out by researchers from the nearby army base. A blackmailer may
try to subvert some of the local police or security personnel by
looking at what they read. This may be why some librarians are so
careful to protect the choices of their customers.
Is it possible to protect the reading choices of library patrons from
hackers, insiders, and snoops while catching thieves? At first glance,
this seems difficult because the library must keep track of the books
on loan to defend itself against people who don't bring them back.
Some libraries try to delete all records after a book is returned, but
that doesn't stop the curious from looking at the list of books that
are currently checked out.
The surprising result is that the library doesn't need to keep a list
of what people are reading to stop theft. A few simple one way
functions can lock out even the most adept snoops. (A good one-way
function is the Secure Hash Algorithm or SHA and many toolkits now
come with implementations that implement it and a more general,
metaprotocol, the HMAC.)"
You don't store the list of individuals and the books they have
borrowed; you store SHA-1 digests of those records instead.
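A minimal sketch of that idea in Python, using the standard hmac and
hashlib modules. The key, patron IDs, and book IDs here are all
illustrative, and this simplifies Wayner's scheme: a keyed HMAC is used
rather than a bare SHA-1 digest, since patron and book identifiers come
from small spaces and an unkeyed hash could be reversed by brute force.

```python
import hmac
import hashlib

# Secret key held by the circulation system (hypothetical value).
# Without it, the stored tokens reveal nothing about who reads what.
SECRET_KEY = b"library-circulation-key"

def checkout_token(patron_id: str, book_id: str) -> str:
    """One-way token binding a patron to a borrowed book.

    Only this HMAC-SHA1 digest is stored, never the plaintext pair.
    """
    message = f"{patron_id}|{book_id}".encode()
    return hmac.new(SECRET_KEY, message, hashlib.sha1).hexdigest()

# The circulation database: a set of opaque tokens for books on loan.
on_loan = set()

# Checkout: record only the digest of the (patron, book) pair.
on_loan.add(checkout_token("patron-42", "ISBN-0-13-110362-8"))

# Return: recompute the token from the presented pair and remove it.
token = checkout_token("patron-42", "ISBN-0-13-110362-8")
if token in on_loan:
    on_loan.remove(token)
```

A snoop who copies the `on_loan` table sees only hex digests; matching
a token back to a patron and title requires both the secret key and a
guess at the exact pair, which is the lockout the quoted passage
describes.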
--
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"
--
The Toronto Linux Users Group. Meetings: http://gtalug.org/
TLUG requests: Linux topics, No HTML, wrap text below 80 columns
How to UNSUBSCRIBE: http://gtalug.org/wiki/Mailing_lists