Re: [WinMac] Re: Linux


Danny Thomas(D.Thomas[at]vthrc.uq.edu.au)
Sun, 17 Jan 1999 22:38:46 +1000


CHoogendyk@aol.com replied:

>Actually, it's my understanding that it is the microkernal architecture of
>Unix that makes it so portable. The C code for lots of stuff (like Sendmail)
>is pretty general and simply has to be recompiled on the target machine. The
>real work is in recoding the microkernal. Once that is done, the rest is
>relatively easy. I believe the Mach microkernal is just a rewrite of the Unix
>microkernal by folks at Carnegie Mellon that NeXt bought the rights to use.
I know this is off-topic, but UNIX does NOT use a microkernel.

A microkernel (it's kern-E-l) typically consists of a few small processes
(a few tens of K of code) running things like process scheduling and
memory management. All other kernel functionality is built on top of this.
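
To make "built on top" concrete, here is a rough sketch (hypothetical
names, not any real kernel's API) of the structure that implies: a service
like read() becomes a message to a file-server process instead of a call
into one big kernel. The ipc_* routines are stand-ins for whatever
primitive the microkernel provides (Mach uses ports and mach_msg), stubbed
out here so the sketch compiles:

    /* hypothetical sketch of microkernel structure: the kernel proper
     * only passes messages; services live in ordinary server processes */
    #include <stdio.h>
    #include <string.h>

    struct msg {
        int  op;                /* requested operation            */
        int  object;            /* which file/page/etc it's about */
        char data[64];          /* payload or reply               */
    };
    enum { OP_READ = 1 };

    /* stand-ins for the real IPC primitive; in a real microkernel
     * each call traps into the (tiny) kernel and context-switches
     * to the server process */
    static struct msg mailbox;
    static void ipc_send(int port, struct msg *m)    { mailbox = *m; }
    static void ipc_receive(int port, struct msg *m) { *m = mailbox; }

    /* what a libc read() might boil down to */
    static int fs_read(int fs_port, int file, char *buf, int len)
    {
        struct msg m = { OP_READ, file, "" };
        ipc_send(fs_port, &m);      /* over to the file server...     */
        ipc_receive(fs_port, &m);   /* ...and back again with a reply */
        memcpy(buf, m.data, len < 64 ? len : 64);
        return len;
    }

    int main(void)
    {
        char buf[64];
        fs_read(1, 42, buf, 64);
        printf("reply received from the 'file server'\n");
        return 0;
    }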

The Mach microkernel is not UNIX. It was a research project at Carnegie
Mellon to see what functionality was REALLY needed in a kernel (ie a
micro-kernel) and what could be moved to a personality layer above it, ie
most of a typical kernel. NeXT added a BSD UNIX personality on top of
Mach, but they could just as well have added a VMS personality.

I'm not too familiar with Linux but would expect its kernel size to be
comparable to the NetBSD kernels I've seen (BTW, like Linux, NetBSD runs
experimentally on iMacs): a binary of anywhere from a few hundred K to
over a meg, depending on the functionality and drivers included. All
kernel code runs in the same memory space.

By way of comparison, the kernel for AT&T V7 UNIX from about 20 years ago
was 40K. Of course it didn't have things like networking. That size was
comparable to MSDOS.

>If LinuxPPC is faster than MkLinux then that is just because they did a better
>job of porting Linux to the PowerPC.
It's probably because most microkernels involve extra context switches
going between the processes of the microkernel, whereas all parts of a
monolithic kernel run in the same address space (no switches). Apart from
the time they take, context switches also reduce the effectiveness of caches.
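
For a feel of that cost, the classic measurement is two processes bouncing
a byte over a pair of pipes, which forces (at least) two context switches
per round trip. A minimal POSIX sketch, with the pipe standing in for
microkernel IPC:

    /* minimal sketch: time context-switch round trips by ping-ponging
     * one byte between two processes over a pair of pipes */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/time.h>

    int main(void)
    {
        int p2c[2], c2p[2];     /* parent->child, child->parent */
        char b = 'x';
        int i, iters = 100000;
        struct timeval t0, t1;
        double us;

        if (pipe(p2c) < 0 || pipe(c2p) < 0) { perror("pipe"); exit(1); }

        if (fork() == 0) {      /* child just echoes the byte back */
            while (read(p2c[0], &b, 1) == 1)
                write(c2p[1], &b, 1);
            _exit(0);
        }

        gettimeofday(&t0, NULL);
        for (i = 0; i < iters; i++) {
            write(p2c[1], &b, 1);   /* switch to the child...   */
            read(c2p[0], &b, 1);    /* ...and switch back again */
        }
        gettimeofday(&t1, NULL);

        us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
        printf("%.2f us per round trip (>= 2 switches each)\n",
            us / iters);
        return 0;
    }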

>The implementation of the MacOS on PPC was extremely difficult because it is
>NOT a microkernal architecture. There were huge numbers of tool box routines
>that had to be converted and rewritten (and too small a team to do it). The
>magic bullet was when they found someone who was good enough to write the 68K
>emulator for the PPC. They did an unbelievable job of it (they had gone
>through several versions that were not good enough), and Apple stock soared
>when they delivered the first Power Macs. There is still code in the MacOS
>that is running in 68K emulation mode on a PowerPC. Each version of the MacOS
>gets a little more converted, and those parts that are converted run
>substantially faster.
Well, I think the biggest problem is that much of MacOS was written in 68K
assembly code. Still, Apple must have had some background for the
conversion from A/UX and MAE.

I think MacOS 8 still had about 85% of its code as 68K. I think 8.5 has
moved quite a bit more to PPC native. The actual percentage isn't too
critical, because what matters is how often code runs, not how much of it
there is: once the hot paths are native, most of the achievable speedup is
already there.

>The OS from NeXt had a real advantage because it WAS a microkernal
>architecture. Rhapsody was the code name for the OS that was to meld the MacOS
>interface with the NeXt OS microkernal and other services and utilities and
>also include the Mac toolbox in a way that would allow developers to not have
>to scrap everything. This is very soon to be out as MacOS X Server (see
>www.apple.com). Steve (and many major developers, such as Adobe) felt that it
>didn't go far enough. Steve pushed the Apple staff harder and came up with
>MacOS X which will not require developers to do substantial rewrites as
>Rhapsody would have. This will be out later this year.
>
>One of the reasons for Intel's constant load of backward compatibility in its
>chip architecture is that DOS and Windoze (following Dan's practice in
>distinguishing 3.1, 95, and 98) were, like the MacOS, not microkernal
>architectures, and rewriting them would not be economical. Note that DOS and
>Windoze do not run on Alpha or anything else.
Intel's backward compatibility is to support all the existing code out there.

Micro vs monolithic kernel is not a big issue in terms of portability.
What matters is how well the system is written for portability. While UNIX
has long had a reputation as a portable operating system, that was really
by way of comparison with the existing commercial offerings, which were
all written in assembler (eg IBM's OSes, DEC's VMS). I find it surprising
that it's taken 20+ years for a version of UNIX (NetBSD) to bring out
architecture-independent interfaces for drivers. That is, properly written
source code for a NetBSD driver for a PCI card will not have any
platform/architecture-specific ifdefs for it to compile and run on an
i386, DEC Alpha, ARM, or PPC. That source code will work on any NetBSD
platform with PCI slots, and that includes machines like an Alpha with
multiple PCI busses, or the wildly different ways DMA is done on an i386
compared with an Alpha. NetBSD also includes bus-independent interfaces,
so the same driver source code (again with no ifdefs) will work for a card
that comes in different versions, eg for ISA, EISA or PCI slots.
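
Here is a rough sketch (from memory, so check NetBSD's bus_space(9) and
pci(9) man pages; the "foo" device and register offsets are made up) of
the shape of such a driver's attach routine. The point is that the
hardware is only ever touched through opaque bus_space tag/handle pairs,
so nothing in the source is i386- or Alpha-specific:

    /* sketch of a machine-independent NetBSD PCI driver attach routine */
    #include <sys/param.h>
    #include <sys/device.h>
    #include <machine/bus.h>
    #include <dev/pci/pcireg.h>
    #include <dev/pci/pcivar.h>

    struct foo_softc {
        struct device      sc_dev;  /* base device (must come first)  */
        bus_space_tag_t    sc_iot;  /* how to reach this bus...       */
        bus_space_handle_t sc_ioh;  /* ...and where the registers are */
    };

    void
    foo_attach(struct device *parent, struct device *self, void *aux)
    {
        struct foo_softc *sc = (struct foo_softc *)self;
        struct pci_attach_args *pa = aux;

        /* map BAR 0; the bus layer fills in the tag and handle */
        if (pci_mapreg_map(pa, 0x10, PCI_MAPREG_TYPE_MEM, 0,
            &sc->sc_iot, &sc->sc_ioh, NULL, NULL)) {
            printf(": unable to map registers\n");
            return;
        }

        /* no i386/alpha ifdefs: endianness, address width, and
         * I/O- vs memory-mapped access all hide behind the tag */
        printf(": chip revision 0x%x\n",
            bus_space_read_4(sc->sc_iot, sc->sc_ioh, 0x00));
    }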

I believe that driver development under NeXT/OpenStep/Rhapsody is
similarly machine-independent.

MacOS was never written with portability in mind.

>Windows NT has some element of the Microkernal concept built into it. This is
>why they were able to develop NT for Alpha and PPC. It wasn't, however,
>terribly easy because Microsoft made things more complicated than necessary.
>IBM eventually dropped the NT on PPC project.
At one point MS was talking of NT being microkernel-like. I believe NT was
derived from Dave Cutler's work at Digital, which was a true micro-kernel.
NT has a monolithic kernel; for performance reasons, the video drivers
were moved into kernel address space for NT4, so any driver bug could
bring the whole system down.

In spite of the conspiracies some people like to believe in, I think the
main reasons for NT being dropped on non-Intel architectures were
commercial. Of course NT/Alpha continues, but DEC put a lot of resources
into this right from the early days of NT. As one example of the
commercial pressures, at one point MS was supplying a version of Office
for the then-available version of NT on MIPS platforms (eg SGI). That got
dropped when they sold fewer than 10 copies in one year.

cheers,
Danny Thomas

* Windows-MacOS Cooperation List *
* FAQ: <http://www.darryl.com/winmacfaq/> *
* Archives: <http://www.darryl.com/winmac/> *
* Subscribe: <mailto:winmac-on@xerxes.frit.utexas.edu> *
* Subscribe Digest: <mailto:winmac-digest@xerxes.frit.utexas.edu> *
* Unsubscribe: <mailto:winmac-off@xerxes.frit.utexas.edu> *


