OT: Can We Make OSes Reliable and Secure

Christopher Browne cbbrowne-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org
Fri Jun 2 22:16:01 UTC 2006


On 6/2/06, Robert Brockway <rbrockway-wgAaPJgzrDxH4x6Dk/4f9A at public.gmane.org> wrote:
> On Fri, 2 Jun 2006, Lennart Sorensen wrote:
>
> > efficient and reasonable thing to do.  This also means it is harder to
> > tear out and replace a subsystem in linux, although it isn't always that
> bad.  After all, sometime in 2.4.x (I think x=10) the VM system was
> completely replaced by someone else's code because the existing code
> didn't work well, and the new one was simpler, more obvious, and clearly
> > worked.  The VM interface apparently is sufficiently abstracted that it
> > could just be replaced all at once without having to touch other code
> > that uses it.
>
> My recollection of that event is that it was anything but clean.  It was
> quite a painful experience and took place over several months.  It's a
> perfect example of whatr I'm talking about - the monolithic design made

Unfortunately, casting the IPC interfaces in stone introduces a new
set of restrictions and a new set of problems.

More vitally, the move to a microkernel design means that you have a
"broken system" for a substantial period of time.

What we saw happen with Hurd was that:

"Of course 5  years from now that will be different,  but 5 years from
now  everyone  will  be  running  free  GNU on  their  200  MIPS,  64M
SPARCstation-5."  -- Andrew Tanenbaum, 1992

became

"I am aware of the benefits  of a micro kernel approach.  However, the
fact remains  that Linux is  here, and GNU  isn't --- and  people have
been working on Hurd for a lot longer than Linus has been working on
Linux." -- Ted Ts'o, 1992.

And it is worth considering that it is now 2006, some fourteen years
on, and Hurd / "free GNU" (as Tanenbaum termed it) is still pretty
much a curiosity, with Really Substantial limitations.  (They are
still limited to 1GB filesystems, because of limitations in Mach's VM,
right?)

> > I think much of the breakage seen on linux is just as likely on a micro
> kernel system.  One thing the microkernel system is less likely to see
>
> I disagree.  A lot of problems have occurred because the Linux kernel (and
> indeed any monolithic kernel) is not very modular.  Many of these problems
> simply can't occur when the only way system processes can communicate is
> through a tightly defined protocol.  See my earlier example of sshd and
> httpd: one can only kill the other if the kernel allows it (ie, if there
> is a bug in the kernel).

For better or worse, Linux *isn't* a microkernel, and the people
involved in its development *aren't interested* in going through the
process of tearing everything down to turn it into one.

It's fair to say that there were some unfortunate historical factors
in the Hurd experience.  They were held up by about two years, right
when interest in Linux was moving from curiosity to seriousness,
because people at CMU were promising RMS that "real soon now" there'd
be a GPLable release of Mach, and that simply took an atrocious amount
of time.

By the time they had Mach code that they could use for anything,
Microsoft had bought out the CMU OS group to turn it into the
beginnings of Microsoft Research, and the prospective world view of
"Everyone will be porting Mach everywhere, and all other kernels will
be curiosities" turned into "Nobody cares about Mach anymore (except
Steve Jobs), and nobody wants to touch Rashid's code, either..."

Some of that could have turned out differently.  But it's worth
pointing out that in order to head towards a New World Order of
_Linux, The MicroKernel_, we'd have to see, among other things:

- People that have, for 15 years, been dead set *against*
microkernels, change their mind

- After changing their minds, at least a two-year hiatus without a
functioning system or *any* new feature of any sort while Linux got
refactored into microkernel components (something not dissimilar to
this happened with the refactoring of X.org between versions 6.9
and 7.0)

I don't see people considering either of those things to be anywhere
near acceptable.

> > is a pointer bug in one piece of code causing a problem at some other
> > random place in the system.  This certainly is useful, but I am not
> > convinced it is worth the tradeoff.
>
> I absolutely am.  I am convinced we'll see much more rapid system
> development under a well planned microkernel system.

Perhaps, but first you need an "as big as designing X11 or BSD"
superproject, involving hundreds of millions of dollars of academic
and professional funding and several years of work and waiting time,
before you see even the glimmerings of a system smart enough to
display "Hello, world!"

> > The linux kernel seems pretty good at staying up.  So does BSD.  If your
>
> Sure.  Because a lot of hard work has gone into it.  I am emphasising that
> it is simply easier to make a microkernel system stable.

That presumes you have already mustered the effort to build a
functional microkernel system in the first place.

> > Unix design hasn't given up yet.  It is evolving though.  We have memory
>
> I think *nix is great.  It is the best OS out there in common use today.
> But it is showing its age.  Plan 9 is a great example of how unix would
> have been if it had been started in the 1990s.  I reiterate, if we have
> not advanced beyond the current crop of OSes in 20 years I think we've
> made a big mistake.  Most of the OSes on the horizon (experimental or
> otherwise) draw conceptually from unix while leaving behind much of the
> baggage.

"Mistake" doesn't seem the right word to me.

There are some good arguments that we have *more primitive* systems
than what was the state of the art 20 years ago, when you consider the
likes of VMS, TOPS-20, Tenex, Genera, Multics, and Stratus.

It seems to me that the "market" for operating systems has been
scorched down to salted, ashen ground by two forces.  One is what
Microsoft did in forming MSR, where they basically gave OS researchers
piles of money to come to MSR and stop working on competing systems,
spending barrels of money to scare anyone out of the "top end" of
building expensive OSes.  The other is the popularity of Linux, which
lets people expect that they can get the functionality of Unix "for
free."

There are plenty of good things about Linux being free; one of those
things *isn't* "Because it encourages vendors to spend more on
developing competing OSes."
-- 
http://www3.sympatico.ca/cbbrowne/linux.html
Oddly enough, this is completely standard behaviour for shells. This
is a roundabout way of saying `don't use combined chains of `&&'s and
`||'s unless you think Gödel's theorem is for sissies'.
--
The Toronto Linux Users Group.      Meetings: http://tlug.ss.org
TLUG requests: Linux topics, No HTML, wrap text below 80 columns
How to UNSUBSCRIBE: http://tlug.ss.org/subscribe.shtml