OT: Can We Make OSes Reliable and Secure

Christopher Browne cbbrowne-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org
Sun Jun 11 04:16:31 UTC 2006


On 6/6/06, Robert Brockway <rbrockway-wgAaPJgzrDxH4x6Dk/4f9A at public.gmane.org> wrote:
> On Mon, 5 Jun 2006, Lennart Sorensen wrote:
>
> > Then what took Hurd so long?
>
> Disagreements, lack of direction, general human related problems that can
> derail even the best project.

I don't think that was quite it.  There was agreement that they should
pursue using CMU Mach as the underlying kernel.  There was a
considerable delay because RMS was pursuing permission to use it at
much the same time that Microsoft was busy gutting the CMU Mach
project, both to staff their research division with Mach people and
(equally importantly) to keep Mach-based work from competing with
Windows NT.

By the time permission to use CMU Mach arrived, most of the people
interested in working on a "GNU kernel" had moved on to other projects
such as Linux.

> >> Sure.  Because a lot of hard work has gone into it.  I am emphasising that
> >> it is simply easier to make a microkernel system stable.
> >
> > Well other than QNX, I am still waiting to see a successful microkernel
> > system that gets used.
>
> OS-9 (not Mac OS9 but Microware OS-9)

There are others, but few that aren't essentially curiosities.

> > Well the unix user space is excellent.  Little tools (modular design
> > after all) that do one thing well, working together is great.  Of course
>
> That's part of the conceptual side which many OSes have taken on board (as
> per my earlier posts).
>
> There is cruft in *nix userspace.  Why exactly is the command to send a
> signal to a process called "kill"? The name is historical but is defined
> in POSIX now.  This one was highlighted to me just recently when I was
> providing training on the shell and command line tools.

That "kill" is poorly named is in no way a meaningful argument in
favor of using a microkernel.  Similarly, the fact that people get
confused as to why Unix often uses such directories as /bin, /sbin,
/usr/sbin, /usr/bin, and /usr/ucb to store differing programs is not a
reason for switching to a microkernel.
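
As an aside on "kill" itself: the confusion is mostly at the command
level, since the underlying POSIX call is just a general-purpose
signal sender.  A tiny C illustration, with a made-up PID, where
nothing actually dies:

    #include <signal.h>     /* kill(), SIGHUP */
    #include <stdio.h>      /* perror() */
    #include <sys/types.h>  /* pid_t */

    int main(void)
    {
        pid_t target = 12345;   /* hypothetical PID of some daemon */

        /* Despite the name, nothing need die here: SIGHUP
           conventionally asks a daemon to re-read its config. */
        if (kill(target, SIGHUP) == -1)
            perror("kill");

        return 0;
    }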

(The notion of a union filesystem was argued to be one of the good
things about Hurd, but non-microkernel systems have implemented it
too, such as BSD, Plan 9...
http://en.wikipedia.org/wiki/UnionFS)
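
To make the union idea concrete: the heart of it is just a layered
lookup, where the upper (writable) branch is consulted first and the
lower branch is the fallback.  A rough C sketch of that lookup, with
the branch paths purely hypothetical:

    #include <stdio.h>      /* snprintf(), printf() */
    #include <sys/stat.h>   /* stat() */

    /* Hypothetical branches of a two-layer union mount. */
    static const char *layers[] = { "/upper", "/lower" };

    /* Resolve a relative path by checking each layer in priority
       order; returns the first layer that actually has the file. */
    static const char *union_lookup(const char *rel,
                                    char *buf, size_t len)
    {
        struct stat st;
        for (int i = 0; i < 2; i++) {
            snprintf(buf, len, "%s/%s", layers[i], rel);
            if (stat(buf, &st) == 0)
                return buf;         /* found in this layer */
        }
        return NULL;                /* not present in any layer */
    }

    int main(void)
    {
        char buf[4096];
        const char *hit = union_lookup("etc/motd", buf, sizeof buf);
        printf("%s\n", hit ? hit : "not found");
        return 0;
    }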

> > there is no reason a microkernel based system couldn't use the same user
> > space, and work great.
>
> Yes, very true.  In fact I fully expect the popular MK (microkernel)
> systems of the coming decades to fully support a Linux
> look-alike environment among others.
>
> I've alluded to this a bit in earlier posts:
>
> My view of future OSes is MK based systems with low level services
> (functions we'd now have in a monolithic kernel) running in userspace and
> the ability to simultaneously abstract a large number of environments
> cleanly and have them talk.  What we today call virtualisation would only
> be a subset of the abstraction I'm talking about.
>
> In an MK (in general) you lose in performance but you gain in stability
> and flexibility.  Experiments in the area show the performance loss is
> usually no more than 10% for an MK.  A well designed MK like recent
> versions of the L4 family can actually perform better than a monolithic
> kernel in some cases.

In Hurd, the notion of having all sorts of services managed as daemons
separate from any central "kernel" is certainly an interesting one.
In principle, that ought to allow restarting those services without
needing to reboot.

Of course, in practice, the Linux NFS server moved from a separate
userspace server into the kernel because this made it significantly
faster.

The last performance numbers I heard from the L4 folks were that they
could, by careful design, have the microkernel be only 10-15% slower
than the equivalent monolithic kernel.  I haven't seen any papers
recording the "microkernel performing better" claim.

> > If I was to accidentally byte swap some data in one module (say it's a
> > module that does caching) before sending it to the filesystem module,
> > there is nothing the filesystem module can do to save me.  Bugs are
> > bugs, and they will hurt data in some cases.
>
> If the caching module passes bad data, yes that could cause corruption.
> It also has nothing to do with my original assertion...
>
> This part of the discussion arose when I pointed out that tampering in one
> part of a monolithic kernel broke another part.  I was (and still am)
> talking about code changes, not corrupt data.
>
> No amount of tampering in one module will corrupt code in another module
> of a well designed MK system.  As they only communicate through a well
> defined protocol there remains integrity in the commands exchanged between
> the modules.  For example a buffer-cache module could only ask the
> filesystem module to do specific actions (flush block X to disk now,
> whatever).  The language the modules speak can be arbitrarily constrained.
>
> A well defined MK system would probably even put constraints on how
> rapidly messages could be sent over the protocol to prevent an internal
> DoS.

The DragonFly BSD folk are trying to head down this sort of road,
using message passing (with a choice of async/sync approaches).
Again, this is a monolithic kernel; there is little that people
consider doing on microkernels that doesn't also seem to benefit
monolithic kernels...
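
To illustrate the constrained-protocol point above: if the only thing
one module can hand another is a message drawn from a small, fixed
vocabulary, then tampering on the sending side can at worst produce a
rejected request, never corrupted code on the receiving side.  A toy
C sketch (the opcodes and the rate limit are invented purely for
illustration):

    #include <stdint.h>
    #include <stdio.h>

    /* The only operations the buffer-cache module may request of
       the filesystem module; anything else is refused outright. */
    enum fs_op { FS_FLUSH_BLOCK, FS_READ_BLOCK, FS_SYNC };

    struct fs_msg {
        enum fs_op op;
        uint64_t   block;        /* block number, where relevant */
    };

    /* Filesystem-side dispatcher: validates each message before
       acting, with a crude budget as an internal-DoS guard. */
    static int fs_handle(const struct fs_msg *m)
    {
        static unsigned budget = 1000;  /* invented rate limit */

        if (budget == 0)
            return -1;                  /* too many requests; back off */
        budget--;

        switch (m->op) {
        case FS_FLUSH_BLOCK:
            printf("flush block %llu to disk\n",
                   (unsigned long long)m->block);
            return 0;
        case FS_READ_BLOCK:
            printf("read block %llu\n",
                   (unsigned long long)m->block);
            return 0;
        case FS_SYNC:
            printf("sync all dirty blocks\n");
            return 0;
        default:
            return -1;                  /* malformed request; refuse it */
        }
    }

    int main(void)
    {
        struct fs_msg m = { FS_FLUSH_BLOCK, 42 };
        return fs_handle(&m) == 0 ? 0 : 1;
    }

In a real MK system the message would travel over IPC (Mach ports, L4
IPC, or the like) rather than a direct function call, but the
constraint is the same: the receiver only ever interprets a closed
set of opcodes.
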
-- 
http://www3.sympatico.ca/cbbrowne/linux.html
Oddly enough, this is completely standard behaviour for shells. This
is a roundabout way of saying `don't use combined chains of `&&'s and
`||'s unless you think Gödel's theorem is for sissies'.
--
The Toronto Linux Users Group.      Meetings: http://tlug.ss.org
TLUG requests: Linux topics, No HTML, wrap text below 80 columns
How to UNSUBSCRIBE: http://tlug.ss.org/subscribe.shtml




