OT: Can We Make OSes Reliable and Secure

Lennart Sorensen lsorense-1wCw9BSqJbv44Nm34jS7GywD8/FfD2ys at public.gmane.org
Fri Jun 2 18:03:33 UTC 2006


On Fri, Jun 02, 2006 at 01:39:00PM -0400, Robert Brockway wrote:
> Yes it is a good article.  I don't agree with everything he said.  For one 
> thing I don't think that microkernels were dismissed as unacceptable at 
> all (and it seems an odd thing for Mr Tanenbaum to say actually).
> 
> Microkernels have been relegated to the sidelines out of a desire for 
> "more speed" "more speed".  Almost every research OS out there is based on 
> a microkernel.

It is simpler, when doing research, to use a microkernel where
everything is its own little well-defined module, so you can replace
one piece at a time to see what changing the design of one piece does
to the performance of the system.  Of course, having to pass everything
around between modules takes time, whereas something like the Linux
kernel will just pass a pointer to the memory whenever that is the most
efficient and reasonable thing to do.  This also means it is harder to
tear out and replace a subsystem in Linux, although it isn't always that
bad.  After all, sometime in 2.4.x (I think x=10) the VM system was
completely replaced by someone else's code because the existing code
didn't work well, and the new one was simpler, more obvious, and clearly
worked.  The VM interface apparently is sufficiently abstracted that it
could be replaced all at once without having to touch other code that
uses it.
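
To make that tradeoff concrete, here is a toy user-space C sketch (not
real kernel code, and the message format is invented) of the
difference: a monolithic kernel can hand a subsystem a pointer to the
data, while a microkernel that keeps the subsystem in its own address
space has to marshal the data into a message first.

/* Toy user-space illustration of the cost difference; nothing here is
 * real kernel code and the message format is invented. */
#include <stdio.h>
#include <string.h>

#define MSG_PAYLOAD 4096

struct message {
    int op;                              /* invented WRITE opcode */
    unsigned char payload[MSG_PAYLOAD];  /* copied across the boundary */
};

/* "Monolithic" style: same address space, the subsystem just gets a
 * pointer to the caller's buffer and reads it directly. */
static void fs_write_monolithic(const unsigned char *buf, size_t len)
{
    (void)buf;
    printf("monolithic: handled %zu bytes via a pointer\n", len);
}

/* "Microkernel" style: the data is marshalled into a message that the
 * kernel copies (or maps) into the server's address space. */
static void fs_write_microkernel(const unsigned char *buf, size_t len)
{
    struct message m;
    m.op = 1;
    memcpy(m.payload, buf, len);   /* the extra copy the other path avoids */
    printf("microkernel: sent a %zu byte message, op %d\n", len, m.op);
}

int main(void)
{
    unsigned char data[MSG_PAYLOAD] = { 0 };
    fs_write_monolithic(data, sizeof data);
    fs_write_microkernel(data, sizeof data);
    return 0;
}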

> I've long disagreed with the notion that a microkernel is inherently 
> harder to develop.  Certainly more up-front planning is required, but 
> with a well thought out and flexible structure and message passing 
> mechanism, development of the OS subsystems should actually move as 
> quickly (if not more quickly) than in a monolithic kernel, thanks in 
> part to the well defined interfaces present.  Monolithic kernels are 
> highly susceptible to unintentional breakage.  How many times have we 
> seen this on Linux[1]?

I think much of the breakage seen on Linux is just as likely on a
microkernel system.  One thing the microkernel system is less likely to
see is a pointer bug in one piece of code causing a problem at some
other random place in the system.  That certainly is useful, but I am
not convinced it is worth the tradeoff.
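
Something like this toy, deliberately contrived C example is what I
mean.  In one shared address space a stray pointer in the "network"
code can silently clobber state the "filesystem" code owns; with
separate address spaces the same bug would fault inside the buggy
server instead:

/* Contrived user-space sketch: the stray pointer is aimed at a fixed
 * victim to make the corruption deterministic.  In a real kernel the
 * victim is essentially random. */
#include <stdio.h>

static int net_retry_count = 3;    /* state owned by the "network" code */
static int fs_block_size = 4096;   /* state owned by the "filesystem" code */

static void buggy_network_init(void)
{
    /* Bug: this should point at net_retry_count, but a stray/stale
     * pointer ends up aimed at the filesystem's state instead. */
    int *p = &fs_block_size;
    *p = 0;
}

int main(void)
{
    buggy_network_init();
    printf("net_retry_count = %d, fs_block_size = %d (never touched by\n"
           "the filesystem code)\n", net_retry_count, fs_block_size);
    return 0;
}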

> Yes there is a performance hit.  Research consistently shows the 
> performance hit to be <10% on a well built system.  Given the increased 
> reliability (both in terms of the system staying up, and in terms of 
> correctness of code) this is well worth it IMHO.

The Linux kernel seems pretty good at staying up.  So does BSD.  If your
memory management module blows up, what is the rest of the system
supposed to do on a microkernel system?  Microkernels don't
automatically make things more stable; they do make it somewhat easier
and clearer to make sure that one piece doesn't mess up another.  Of
course, if one piece is critical to the operation of the system as a
whole, it has to be right no matter what, microkernel or not.
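
For the pieces that aren't critical, the kind of recovery a microkernel
buys you looks roughly like this POSIX C sketch (the ./disk_driver
server binary is hypothetical): a supervisor notices that a server
process died and simply restarts it.  It obviously doesn't help when
the dead piece is the memory manager.

/* Minimal supervisor sketch: run a (hypothetical) user-space driver
 * and restart it whenever it exits or crashes. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    for (;;) {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {
            /* child: become the driver server */
            execl("./disk_driver", "disk_driver", (char *)NULL);
            perror("execl");
            _exit(127);
        }
        int status;
        if (waitpid(pid, &status, 0) < 0) {
            perror("waitpid");
            return 1;
        }
        fprintf(stderr, "disk_driver exited (status %d), restarting\n",
                status);
        sleep(1);    /* back off a little before restarting it */
    }
}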

> I for one would be very disappointed if all of the OSes we use today were 
> not relegated to history by 2025.  We know how to build better systems - 
> we should go and do it.  Mind you there is no need to give up all our 
> lovely applications.  A future microkernel can easily create an instance 
> in which Linux apps can run at native speed.  Indeed such things exist now 
> in terms of virtualisation and a POSIX interface is a required component 
> of any serious microkernel system.

Unix design hasn't given up yet.  It is evolving, though.  We have
memory space separation at least.  Of course, some computer scientists
are still annoyed that IBM made the shared code/data memory design the
commonly accepted one, when other systems kept code and data in separate
memory spaces, which gave a lot of extra safety, since code could not
modify code; code could only modify data.  The code memory space was not
writeable by the code itself, only by the OS when loading the code into
memory.  Today's write-protected code pages and the like are just hacks
to try to deal with that old mistake.
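
For what it is worth, the hack looks roughly like this small POSIX C
sketch: a page is writable while it is being filled, then flipped to
read/execute-only with mprotect().  The sketch only changes
protections; nothing is ever executed.

/* Approximate the old code/data separation by flipping page
 * protections (the usual W^X hack). */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    unsigned char *buf = mmap(NULL, page, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    memset(buf, 0xc3, page);    /* pretend code was loaded here */

    /* Revoke write permission before the region could ever run. */
    if (mprotect(buf, page, PROT_READ | PROT_EXEC) != 0) {
        perror("mprotect");
        return 1;
    }

    printf("page at %p is now read/execute only\n", (void *)buf);
    munmap(buf, page);
    return 0;
}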

> [1] Now was it 2.4.15 where a filesystem patch broke the buffer-cache 
> resulting in filesystem corruption if you did not properly unmount before 
> system halt?  Such a problem cannot occur between (for example) the 
> filesystem code and buffer-cache in a microkernel any more than sshd can 
> take down your web server now (i.e., if it manages to do it, it is only 
> because the system allowed the behaviour).

A microkernel can have a buggy FS module too.  What is the difference?

Len Sorensen




