OT: Can We Make OSes Reliable and Secure

D. Hugh Redelmeier hugh-pmF8o41NoarQT0dZR+AlfA at public.gmane.org
Fri Jun 2 22:06:13 UTC 2006


| From: Robert Brockway <rbrockway-wgAaPJgzrDxH4x6Dk/4f9A at public.gmane.org>

| On Fri, 2 Jun 2006, Lennart Sorensen wrote:

| > ... After all sometime in 2.4.x (I think x=10) the VM system was
| > completely replaced by someone else's code because the existing code
| > didn't work well, and the new one was simpler, more obvious, and clearly
| > worked.  The VM interface apparently is sufficiently abstracted that it
| > could just be replaced all at once without having to touch other code
| > that uses it.
| 
| My recollection of that event is that it was anything but clean.  It was quite
| a painful experience and took place over several months.  It's a perfect
| example of what I'm talking about - the monolithic design made the change
| hard.

Interesting example.  I'll talk about it in the abstract because I
don't actually know/remember the details.  (I do remember being
unconvinced that the new VM was a Good Thing.)

An efficient VM system has its fingers in a lot of pies.  For example,
folks generally think that file caching should be done by the VM
system.  So the VM system needs to know things about file access
behaviour (typically access patterns are different for files and main
memory).  A VM system needs to know what the scheduler intends to do
(to stage pages, for example; to swap working sets out, perhaps, etc).
The VM manager might profitably know about disk behaviour, at
quite a low level, to schedule page reads or writes optimally on the
physical hardware.  It may need to know about error behaviour of the
drive to know when it is safe to release a page frame, or perhaps kill
a process for which a page read failed.  I won't even go into OOM.
And this is all off the top of my head.
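
To make that coupling concrete, here is a rough C sketch of what the
interface between a pager and its backing stores might look like.  None
of these names are real Linux APIs; they are invented only to show how
hints about access patterns, I/O batching, error handling, and
scheduling all flow through the one "VM interface":

/* Hypothetical sketch -- none of these names are the real Linux API.
 * It only shows how many subsystems a "VM interface" touches:
 * filesystems, the block layer, the scheduler, and error handling. */

#include <stddef.h>

struct vm_page;                 /* one page frame */

enum vm_access_hint {
    VM_ACCESS_SEQUENTIAL,   /* file streaming: read ahead, drop behind */
    VM_ACCESS_RANDOM        /* anonymous memory: keep the working set in */
};

struct vm_backing_ops {
    /* Filesystem involvement: fill or flush one page. */
    int (*read_page)(struct vm_page *pg);
    int (*write_page)(struct vm_page *pg);

    /* Block-layer involvement: which neighbouring pages are cheap to
     * fetch in the same physical I/O, so page-ins can be batched. */
    size_t (*batch_hint)(struct vm_page *pg, struct vm_page **cluster,
                         size_t max);

    /* Error behaviour: a frame may be reused only once the write is
     * known durable; a failed read may mean killing the faulter. */
    void (*io_done)(struct vm_page *pg, int error);
};

/* Scheduler involvement: the VM wants to know who is about to run
 * (stage its pages in) and who is idle (swap its working set out). */
void vm_note_schedule(int pid, int runnable, enum vm_access_hint hint);

Each callback drags in another subsystem, which is why replacing the VM
is rarely as local a change as it sounds.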

A simple VM is really easy.  It turns out that the VM is so crucial
on many systems (and hence all general purpose operating systems
worthy of the name) that it is deemed worthwhile to go beyond easy.
"As simple as possible, but no simpler."

| I disagree.  A lot of problems have occurred because the Linux kernel (and
| indeed any monolithic kernel) is not very modular.

Actually, I suspect "microkernel" and "monolithic kernel" are being
used connotatively rather than denotatively.

Generally, I think microkernels are made up of a bunch of co-operating
sequential processes.  But what is the process model?

- separate address spaces?  This is often expensive in performance
  (MMU state changes do a lot of damage).  UNIX folks just assume
  that different processes have different address spaces, but that is
  unwarranted.

- communication by message passing?  Not necessarily: "monitors" are a
  very nice concept and are in some ways cleaner (typically monitor
  calls involve strong typing whereas message passing typically does
  not).

- If communication is by message passing, what are the limits on messages?
  High performance systems I've seen have strong limits on the size of
  messages to keep down the marshalling and demarshalling costs.  Some
  costs get externalized: messages might contain pointers to buffers, for
  example (if there are shared address spaces).  On systems with
  separate address spaces for components, big messages might be
  handled as a page being handed off from one process to another.
  (A rough sketch of such a fixed-size message appears after this list.)

- Do the programs (like ones written in C) need extra-lingual
  enforcement to make them safe to their neighbours (eg. separate
  address spaces)?  We in the UNIX world assume this, but it wasn't
  true in the Burroughs machines (from the B5000 of ~1960 on).  It
  isn't true of Java.
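
Here, purely for illustration, is what a fixed-size message in that
style might look like in C.  The struct and its fields are invented, not
taken from any real microkernel, but the trade-off is the one described
above: copy small payloads inline, hand off a whole page for big ones.

/* Invented for illustration; not a real microkernel's message format. */

#include <stdint.h>

#define MSG_INLINE_BYTES 48     /* small payloads are copied inline */

enum msg_kind {
    MSG_INLINE,                 /* payload lives inside the message */
    MSG_PAGE_GRANT              /* payload is a page handed to the receiver */
};

struct ipc_msg {
    /* Small and fixed-size: cheap to copy, nothing to marshal. */
    uint32_t sender;
    uint32_t opcode;
    uint32_t kind;              /* enum msg_kind */
    uint32_t length;            /* bytes of payload actually used */
    union {
        uint8_t  inline_data[MSG_INLINE_BYTES];
        uint64_t page_frame;    /* big transfer: remap, don't copy */
    } body;
};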

I've long had a dream that one could write a program in a clean,
modular fashion and that a processor (let's call it a compiler) could
implement it in a way that squeezed out inefficiency by carefully
implementing linkages.

|  Many of these problems
| simply can't occur when the only way system processes can communicate is
| through a tightly defined protocol.

Right.  That should be true in a monolithic kernel too.  I think that
various lint-like tools that are being brought to bear on the kernel
are trying to discover / enforce protocols.  How much better it would
be if those protocols were declared.
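
As a sketch of what "declared" might mean in practice (the names below
are made up, not from the kernel): the rules a lint tool has to
rediscover could instead be written down as a typed interface, so the
compiler checks at least the shape of the contract, even if the
ordering rules still live in comments.

/* Invented example of a declared protocol -- not real kernel code. */

#include <stddef.h>

struct blockdev;                /* opaque device handle */

struct blockdev_protocol {
    /* Contract: called exactly once before any submit(). */
    int (*open)(struct blockdev *dev);

    /* Contract: may be called concurrently; must not block while
     * holding the device's lock. */
    int (*submit)(struct blockdev *dev, void *buf, size_t len,
                  long sector);

    /* Contract: called exactly once, after all outstanding I/O is
     * done. */
    void (*close)(struct blockdev *dev);
};

/* The compiler now checks the argument and result types of every
 * operation; the ordering rules above still live only in comments,
 * which is just what the lint-like tools would have to enforce. */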

| > Unix design hasn't given up yet.  It is evolving though.  We have memory
| 
| I think *nix is great.  It is the best OS out there in common use today. But
| it is showing its age.  Plan 9 is a great example of how unix would have been
| if it had been started in the 1990s.  I reiterate, if we have not advanced
| beyond the current crop of OSes in 20 years I think we've made a big mistake.
| Most of the OSes on the horizon (experimental or otherwise) draw conceptually
| from unix while leaving behind much of the baggage.

UNIX (including Linux) is mostly "good enough" so it is hard to
replace it, even with something better.  Plan 9 didn't manage to.
Inferno didn't.  (It might be possible to replace it with something
worse, like MS Windows :-)

I think that there is a parallel with the x86.  It turned out to be
good enough.  We would all have preferred Alpha (wouldn't we?) from a
technical standpoint.

Heck, even X survived, and I'm startled to admit that it might be good
enough.  Yuck!

I like the ideas of Plan 9 a lot.  And I've had plenty of opportunity
to try it.  I've only once gotten so far as to try installing it on a
machine (I gave up at the first sign of trouble).  Why?  I've got a
tremendous investment in UNIX that I doubt is worth giving up.  The
only friend I know using Plan 9 is a Quaker -- I think that that is
no accident (no, not Pike; he's an old friend, but doesn't use Plan 9
AFAIK).