XNU, the kernel of OS X, is a hybrid of Mach and select BSD code, and it leans substantially more towards a monolithic kernel design. What Mach typically handled in a microkernel manner with servers, namely things like the VFS, networking, etc., has been completely removed in XNU. Where once there were Mach servers there is now the FreeBSD Unified Buffer Cache, to which Apple has attached various FreeBSD subsystems like the FreeBSD VFS and network stack.
XNU is essentially a monolithic kernel, much like Linux. The real differences, in my opinion, lie in the IOKit object-oriented driver API, whereas Linux has no real driver API and drivers have complete access to all kernel functions, as drivers are simply kernel modules.
Bottom line, Mach is the most advanced manual shaving system in the world. The Mach3 Turbo is the world's first triple-blade razor, with three blades positioned independently to shave you progressively closer in a single stroke.
I believe you're thinking of the M3 Power. If you're getting electric pulses off the M3 Turbo, you're standing in a puddle in your bathroom and you have a bad ground to your electrical outlet.
Take the blade off and you have the world's finest lady pleasuring device. Trust me, it works.
I'm not quite sure why you think an inert, unmoving stick of plastic roughly shaped like a pencil would be a fine device to pleasure a lady with. There are plenty of other options in various textures (and some that vibrate) built just for that purpose.
I'm glad I'm not the only person to think that. Makes bugger all difference to my shave though.
In other news: Anybody else being persistently bugged to moderate or M2? Every single day I come back to find they want me to M2, and almost daily I find another 5 mod points.
This appendix on Mach is from the newest edition of the classic "Operating System Concepts," Seventh Edition by Silberschatz, Galvin, and Gagne (Wiley). ISBN: 0-471-69466-5. Published December 2004.
There are also free online chapters for FreeBSD and Nachos.
The therapy was working... I had almost blotted out the existence of Nachos altogether. Now it's all coming back. Why do you feel the need to hurt me so? *sob*
What's interesting is the utter irrelevance of the Slashdot posting to the book excerpt. Slashdot talks as if it's a detailed article on the internals of Darwin, the open-source core of OS X, NOT Mach. Darwin is based on Mach, yes, but it's not Mach.
And the article never once mentions OSX, or Apple, or Macs. One must seriously wonder...is the ability to read included in the job requirements for a slashdot editor?
The modular design of microkernels makes for easier design & debugging, and with some designs the freedom to make user space services that can only be in privileged space in monolithic designs, but does one want to pay the overhead for all that message passing? Now that we are getting into parallel processing at the consumer level with multicore and hyperthreaded chips, maybe the answer is yes.
The modular design of microkernels makes for easier design & debugging, and with some designs the freedom to make user space services that can only be in privileged space in monolithic designs, but does one want to pay the overhead for all that message passing?
No, which is why Apple's XNU runs in one address space for the most part (I don't even know whether there are parts which don't), and most message passing has been reduced to plain function calls. They still have the design advantages of s
While many devices are not supported, and the performance is not good, HURD/Mach is feature complete (and most of Debian runs on it, IIRC).
Because the performance was bad, the new HURD effort focuses on reimplementing on L4. Perhaps with a faster microkernel, Apple could have avoided the kludge of an in-kernel BSD peer.
If I am reading correctly, Mach is responsible for IPC in the Apple kernel. It would be interesting to see benchmarks of SYSV system calls to semaphores, queues, and shared memory (and pe
QNX uses shared memory to pass messages. Its message passing is very lightweight, and the resulting performance is far better than Linux.
In this day and age, there is no reason to use a macrokernel unless your hardware lacks the features needed for a microkernel. QNX has proved this quite nicely.
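Here's a toy of the shared-memory scheme described above, in plain Python (Unix-only, since it uses `fork`; `send_and_receive` and the rest are made-up illustrative names, nothing like QNX's actual message-passing API):

```python
import mmap
import os

def send_and_receive(msg: bytes) -> bytes:
    # An anonymous shared mapping plays the part of the shared buffer
    # (mmap(-1, ...) defaults to MAP_SHARED | MAP_ANON on Unix).
    buf = mmap.mmap(-1, max(64, len(msg)))
    buf[:len(msg)] = msg                 # sender writes the request in place
    pid = os.fork()
    if pid == 0:
        # Child = "receiver": it reads and replies in the very same
        # pages, so no per-byte copy goes through the kernel.
        buf[:len(msg)] = bytes(buf[:len(msg)]).upper()
        os._exit(0)
    os.waitpid(pid, 0)
    reply = bytes(buf[:len(msg)])
    buf.close()
    return reply

if __name__ == "__main__":
    print(send_and_receive(b"hello").decode())  # prints "HELLO"
```

The point of the sketch is only that the "message" never moves: both sides touch the same physical pages.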
So does Mach, and it's slow. I've never seen real-world measurements to suggest that QNX is fast. All we know is that the performance of the OS itself is good, and that's a VERY DIFFERENT measure.
The slow performance is due to a number of problems:
1) not all MMUs are really suited to this task. Many are slower to set up than just copying the memory around. Sun found the crossover to be at around 5k; below that, it was faster to just copy the memory physically.
2) MMUs/VM are based on pages, typically 2 or 4k. Thus passing in a single 32-bit int parameter still incurs whole-page overhead. You can tune this out, but it's still annoying.
3) Each copy takes TWO context switches - one to switch into the kernel to copy the memory across ports, another back out to the called program. This means that even the simplest "system calls" are twice as slow as under a monokernel, AT BEST.
4) Additionally the data has to be examined to see if it contains ports being passed around, and if so, they have to be translated because the port codes are private to a program (and thus different in the other one).
5) Using mapped memory ignores all the hardware specific solutions to these problems, many of which are built into modern processors.
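Point 1 can be put into a back-of-the-envelope model: remapping has a fixed page-table setup cost, while copying scales with message size, so below some crossover a plain copy wins. The constants here are made up so the crossover lands near Sun's ~5k figure; they are not measurements of any real hardware:

```python
REMAP_SETUP_COST = 5000    # assumed fixed cost to set up the MMU (arbitrary units)
COPY_COST_PER_BYTE = 1     # assumed cost to physically copy one byte

def cheaper_strategy(message_bytes: int) -> str:
    # Copying wins when its size-proportional cost undercuts the fixed
    # setup cost of remapping pages between address spaces.
    if message_bytes * COPY_COST_PER_BYTE < REMAP_SETUP_COST:
        return "copy"
    return "remap"

for size in (64, 1024, 4096, 8192):
    print(size, "->", cheaper_strategy(size))   # copy below ~5k, remap above
```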
It's exactly the sort of one-size-fits-all solution that you'd expect from a research project. One that doesn't work in the real world. One that should have been replaced, and was in L4, Spring, etc.
For instance, Spring included three different IPC systems, each tuned to certain types of data, each used in different ways on different CPUs. The "fast-path" used a half-switch into the kernel by mapping off registers, allowing IPC to degenerate into register passing largely identical to a procedure call. As long as the message fit within the limitations -- 8 registers, no port identifiers, etc. -- it was faster than a traditional Unix trap. These limitations seem serious, but were in fact used for 80% of calls and 60% of returns (you often say "getDiskSector(integer value)" which could fit into the fast-path, and get back 2k of data which wouldn't).
The benchmarks for Mach 4.0 showed it within 20% of the speed of a monolithic kernel of the same era. Check the site for more details, although I seem to recall that the project is no longer in active development.
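The Spring fast-path criterion described above boils down to a simple dispatch test. The numbers come from the post; the names are mine and purely illustrative:

```python
MAX_FAST_WORDS = 8   # the fast path fits in 8 registers, per the post above

def choose_path(num_words: int, carries_ports: bool) -> str:
    # Fast path only when the message fits in registers and carries no
    # port identifiers that would need translating between programs.
    if num_words <= MAX_FAST_WORDS and not carries_ports:
        return "fast"   # degenerates to register passing, like a call
    return "slow"       # general path: marshalling and copying

# "getDiskSector(integer value)" fits; the 2k of returned data doesn't.
print(choose_path(1, False))    # prints "fast"
print(choose_path(512, False))  # prints "slow"  (512 words = 2k of data)
```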
There is one very easy way to kill a microkernel's performance - force it to use a synchronous system call API (e.g. POSIX). With a synchronous system call API, a context switch is required for every system call. With an asynchronous API, the process simply writes messages into a buffer (or set of buffers for different kernel services) until it either needs to wait for a response or its quantum expires. At this point, you switch to the next context (perhaps a kernel server) and process the incoming messages. This reduces the total number of context switches (and, more importantly the number of mode switches). If you want to see good performance from QNX, then use the native system call API, not the POSIX wrapper.
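Here's a rough sketch of that batching idea: pile requests into a buffer and pay for one kernel entry only when you finally have to wait. All the names are made up for illustration; this is not QNX's actual native API:

```python
class AsyncSyscallChannel:
    """Toy model of an asynchronous system-call buffer."""

    def __init__(self):
        self.pending = []       # requests not yet handed to the kernel
        self.mode_switches = 0  # kernel entries we have actually paid for

    def submit(self, call: str, *args):
        # Asynchronous submit: just record the request; no mode switch yet.
        self.pending.append((call, args))

    def wait(self):
        # Only now do we enter the kernel -- one mode switch drains the
        # whole buffer instead of one switch per call.
        self.mode_switches += 1
        replies = [f"{call}{args} -> ok" for call, args in self.pending]
        self.pending.clear()
        return replies

chan = AsyncSyscallChannel()
for block in range(10):
    chan.submit("write", block)   # ten "system calls"...
replies = chan.wait()             # ...for a single kernel entry
assert len(replies) == 10 and chan.mode_switches == 1
```

With a synchronous API the same ten calls would have cost ten mode switches.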
My point was limited to the time for the switching itself. Perhaps I should have been more clear on this.
The "at best" is assuming that the foregoing issues don't cause things like cache faults. Passing parameters in registers won't. Thus the performance really can be MUCH worse than twice as slow, even in the case of minimal calls.
But even in that case the real-world performance of Mach is, in fact, much worse than twice as slow. I believe it was Chan that did all the big measurement runs, and - going on memory he
QNX is a real-time operating system - its message passing only has good performance when there aren't too many different types of messages to pass. The desktop versions of QNX work if you are only doing a couple of things, like browsing and doing email. But if you try to do the things that a Linux distro could easily do, like burning a CD while writing to a USB device and compiling a new kernel and running a dozen windows, it'll choke up: it's NOT suitable for a general-purpose desktop.
Check out this recent video discussing how Microsoft's "Singularity" research project works in this way. <URL:http://channel9.msdn.com/ShowPost.aspx?PostID=68302>
Um, if you want to warn anyone maybe you should warn the sys admin of the server that hosts the PDF file that you just put a link to on the main page of slashdot. I think they'll care a little more about the super slashdot effect (I'm coining that term for when a non-html file is linked to from slashdot - be it pdf, mpg, avi, etc.) than we will about taking the extra time to load.
... benefits and detriments exist for both monolithic and micro flavors. I doubt a conclusion could ever be made about which one is 'better'... because it all depends on context. "How will the system be used?" "What kind of environment will the system be operating under?" "What are the performance goals of the system?" "What types of hardware will the system(s) need to support?"
Each system has benefits... but they almost always rely on the existence of certain assumptions.
That is very true. Arguably, though, what you have is a continuum, with monolithic kernels at one extreme and exokernels (virtually everything in userspace) at the other extreme. Different requirements would need different designs, somewhere on that continuum, but there would be no "overall best" for all circumstances.
Actually, as kernels have started adding parallelism (such as SMP, clustering, etc), it becomes harder to really say exactly what sort of design a kernel really has. (Design is intrinsic and
Let's ask what OS is better. The Apple and Linux people will go at each other's throats and will in the end agree that they hate Window$.
I, for one, envision a different outcome. One which is more fair to everyone involved:
"Well ladies and gentlemen, I don't think any of our contestants this evening have succeeded in encapsulating the intricacies of kernel design, so I'm going to award the first prize this evening to the girl with the biggest tits."
I thought the article was relatively informative. Personally I love my Mac, and I think it's about time people stop fighting over which OS is better; use the right tool for the job, be that Linux, Mac, Windows, whatever. Anyway, figured I'd throw in a link to some other cool stuff about Mach. http://rentzsch.com/papers/overridingMacOSX [rentzsch.com] The page deals with code injection and function overriding within Mac OS X. I think something like it was on here not too long ago, but it's also pretty interesting stuff; I'd suggest the read.
I just had to comment, your post is very refreshing and what I remember the OLD days of Slashdot being like.
It really isn't so hard to say, all OSes have really cool things about them, and we should talk about these cool things, as they enrich the OSes we develop for or use.
Why can't something Microsoft does actually be cool, or something Apple does actually be outstanding, or a *nix project create a whole new paradigm that is fantastic?
Anyway, thanks for the feeling of the old days when people didn't bi
Mac OS X uses Mach, but it also uses a FreeBSD kernel and compiles them together. This eliminates the runtime characteristics of a Microkernel. This is actually quite common.
So, even though it uses Mach, you can't call it a Microkernel.
I suppose it's possible I'm underinformed, but I believe the "BSD subsystem" of OSX is not compiled "into the kernel" and is entirely a compatibility layer on top of it.
I suspect this is exactly how to never violate the microkernel design and still have BSD compat.
The BSD and Mach personalities run together in one and the same (kernel) address space. The BSD layer does not consist of merely some user-space libraries. See e.g. this graphic [apple.com].
Actually, the grandparent IS correct. I spent the last week studying the Mach and OS X designs, and I found the following things:
1. Mach is not a complete kernel. It requires someone to implement the areas which the Mach group were not researching. This has traditionally been done by compiling against BSD 4.3.
2. Mac OS X updated to the FreeBSD kernel instead of BSD 4.3 to gain a more modern kernel design with better hardware support.
3. OS 9 "Classic" is not a microkernel server, but rather a technology that Apple calls "Blue Box". Blue Box is a hardware virtualizer like VMware that is capable of communicating directly with the OS X desktop. Using this communication, the OS 9 desktop is made to disappear, making the application appear to run on the OS X desktop.
4. The combination of Mach and FreeBSD is called "XNU" by Apple. The complete OS is called Darwin, and the commercial variety with the NeXT and Mac APIs is called "Mac OS X".
I always thought that Linux was indeed only a kernel for the GNU OS
That is a true statement for the GNU project, but not for all of Linuxdom. Linux (the OS) was not started by the GNU folks. It was started as a separate project and incorporated items from the FSF (and BSD, etc.) into its release. From the beginning the whole OS has always been called "Linux" (search Google Groups for "linux 0.11 author:torvalds" and click on the "Linux information sheet" for an example of this).
Yes, RMS prefers to call the OS GNU/Linux, but that's because he's seeing things from the perspective of the GNU project incorporating the Linux kernel into their work. The rest of Linuxdom see Linux as the name of both the OS and the kernel, and qualify the name using the phrase "the Linux kernel" as an easy way to differentiate between the two.
So, the opening statement in the OS X story is false: Linux is an OS, and is used as such by folks every day. This is the reality of the situation, and it is, at best, wishful thinking on the part of folks who claim it is not to say otherwise.
> I agree that way back when, Linux was the name of the kernel, period
Not so. Here's a posting from Linus Torvalds about Linux -- from the beginning the term was used as both the name for the kernel and the whole OS:
LINUX INFORMATION SHEET (last updated 13 Dec 1991)
1. WHAT IS LINUX 0.11
LINUX 0.11 is a freely distributable UNIX clone. It implements a subset of System V and POSIX functionality. LINUX has been written from scratch, and therefore does not contain any AT&T or MINIX code--not in the kernel, the compiler, the utilities, or the libraries. For this reason it can be made available with the complete source code via anonymous FTP. LINUX runs only on 386/486 AT-bus machines; porting to non-Intel architectures is likely to be difficult, as the kernel makes extensive use of 386 memory management and task primitives.
[...]
2. LINUX features
- System call compatible with a subset of System V and POSIX
- Full multiprogramming (multiple programs can run at once)
- Memory paging with copy-on-write
- Demand loading of executables
- Page sharing of executables
- ANSI compliant C compiler (gcc)
- A complete set of compiler writing tools
(bison as yacc-replacement, flex as lex replacement)
- The GNU 'Bourne again' shell (bash)
- Micro emacs
- most utilities you need for development
(cat, cp, kermit, ls, make, etc.)
- Over 200 library procedures (atoi, fork, malloc, read, stdio, etc.)
- Currently 4 national keyboards: Finnish/US/German/French
- Full source code (in C) for the OS is freely distributable
- [...]
In this post, you see that Linus was effectively trying to rename GNU
That's certainly one cynical viewpoint, but is not what really happened. Linus started his own OS project and he named it as he pleased (or really those around him named it and he accepted the name). There's nothing wrong with naming your own project and then cherry picking the items you want to be in your project from the available choices. Keep in mind that the GNU folks were working on HURD at the time, and were not all that keen on
Apple's kernel is called XNU ("XNU is Not Unix"). It is based on Mach with a BSD compatibility layer included at the kernel level (as are various other subsystems usually implemented at a server level in true microkernels), not as a 'Mach server'. It does not use Mach as a microkernel. XNU is essentially a monolithic kernel. The Mach code takes care of inter-process communication, virtual memory, preemptive multi-tasking, etc. The BSD codebase of XNU handles user ids, file permissions, the TCP/IP stack, sockets, filesystems
It is based on Mach with a BSD compatibility layer
It's not just a "compatibility layer". A Mach system consists of multiple servers providing services to each other and to applications. The BSD server in XNU is an essential part of the system... it's the ringleader, and calls the shots from boot onwards.
Operating System Concepts is a great book for learning about what an OS is and the design choices that go into building one. We used that book way back in my college days, and it's one of the few textbooks I actually kept. Here's an excerpt from the (linked PDF) chapter on Mach:
Mach 2.5 is also the basis for the operating system on the NeXT workstation, the brainchild of Steve Jobs, of Apple Computer fame.
So... does anyone here know what Steve Jobs and Mach have been up to since their halcyon days at NeXT?
Although Darwin does use Mach at its heart, it also has large chunks of the BSD kernel bolted on to avoid Mach's typical performance hit. Consequently, OS X really isn't a microkernel, and you can't do all the cool microkernel tricks (load or unload almost anything dynamically, drivers can't crash the OS, etc.).
This approach doesn't make much logical sense to me, but it's what Steve and Avie wanted, and somehow, amazingly, it still just plain works.
Mach was never originally engineered for POSIX compliance, and yet the two main operating systems built from it, OS X (and Darwin) and HURD, have each tried hard to tame Mach and make it behave POSIX-compliant. This has sometimes produced interesting compatibility issues, especially on the contentious issue of POSIX threading, and has resulted in compatibility layers which weigh down the system further.
Given this compatibility effort, Mach is not a fair basis, either in HURD or OS X, for comparing the merits and performance of monolithic and microkernel architectures, because so much extra stuff was added to a design never intended for POSIX. Something like QNX4 and later, designed both as a microkernel and for POSIX, or perhaps a pure Mach system running applications designed specifically for Mach, might be a fairer basis on which to compare the value of microkernel vs monolithic architectures.
Mach on HURD is easier to grasp and test, since many of the lower-level Mach kernel services are still represented and usable there. Apple seems to be trying to hide as many of the lower-level Mach services from application developers as possible. Yet there are still many things that can only be done in the Mach kernel on OS X or Darwin (such as threads that can be cancelled during socket operations or sleeps). If one wanted a BSD/POSIX-compliant environment, I think Apple would have been far better off starting from the PPC/xBSD or Linux kernels, rather than trying to rope and rebuild Mach to fit into something it was never originally designed for.
Technically, that may be correct (Mach developed without aiming for POSIX compliance), but considering the title of the original Mach paper was "Mach: A New Kernel Foundation for UNIX Development," it's a silly statement. Mach was made explicitly for implementing UNIX-like operating systems.
The level of Mach-iness of OS X is another good question, though. From what I gather, OS X looks more like a monolithic BSD kernel ported to a Mach/PPC architecture (where instead of targeting the PPC architecture, OS X's
What's better? PHP or Python? What's better Pepsi or Coke? The answer is always the same. It depends what your goals/needs/desires are. Neither is "better" in the all encompassing good or bad definition unless you qualify it. Which one's better for performance? Probably the monolithic kernel. Which one's better for security? Probably the micro-kernel. But even then, you have to qualify both of those. Performance of what? Security of what?
I'm sick of all these stupid "which is better?" religious wars that geeks are always so interested in having. What's better? C++ or Java? What's better? IE or Mozilla?
They're all better because the more there are, the more choices you have. There, is that a satisfactory answer?
In general, monolithic kernels run in a single address space and use direct procedure calls / variable accesses to pass data and control flow between subsystems. This is true even if they support loadable modules (like Linux). Any driver or other subsystem in your kernel can (if it wants) access any other part of the kernel.
Although Mach itself is a microkernel, the "xnu" kernel which Darwin / MacOS X uses also hosts other components *in the same address space*. Some of the subsystems (e.g. the BSD subsystem) are large and resemble monolithic systems themselves. The overall system is not a "pure" microkernel, with lots of code moved out of privileged mode. Equally, it's not quite like a traditional monolithic UNIX because of the use of Mach and the other Darwin-specific components (e.g. a (relatively?) stable binary interface for drivers).
Monolithic kernels are dominant in practice (so far). Windows started off microkernel-y but has ended up rather monolithic (at least partly for performance reasons). Xnu (Darwin / MacOS kernel) also has strong monolithic leanings, despite being based on Mach.
The microkernel design still appeals, though. For some things (not all) it is beneficial to move stuff out into less-privileged units. (Small) examples of this in Linux include: FUSE (for implementing non-performance-critical filesystems in Linux userspace), udev instead of devfs, moving initialisation code to the initramfs instead of being in the kernel itself...
Other systems (e.g. Dragonfly BSD) are also seeking to move functionality to userspace where possible without undue complexity and / or performance cost.
Some argue that virtual machine monitors are a useful modern equivalent to microkernels. They perform a similar function (partitioning system software into multiple less privileged entities), although they do it in a more "pragmatic", less architecturally "pure" way.
Virtual machine monitors allow multiple virtual machines to use the same hardware. They have also been used for running Linux drivers in fault-resistant sandboxed virtual machines, with performance within a few percent of a traditional monolithic design (fully privileged drivers).
The L4 microkernel is being used as a virtual machine monitor for this work by one research group, Xen has these capabilities also.
Mach is painfully slow. It's an old microkernel and it uses async IPC (to allow for passing messages over the network). This is slow because you have to do a ton of context switches and copy the message between address spaces.
L4 [l4ka.org], on the other hand, uses sync IPC. It has a bunch of neat optimizations, but the main reason why it's faster than Mach is that it doesn't have to copy messages. You send an IPC and it goes into the part of your VM space that L4 sets aside for IPC; then L4 does a quick context switch to the target task, it processes the IPC, and you get your data back. (Simplified a ton.)
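A very loose model of that path (real L4 uses registers and per-thread message areas, not Python objects; every name here is illustrative):

```python
class Thread:
    def __init__(self, handler=None):
        self.msg_region = []    # stand-in for the per-thread IPC area
        self.handler = handler  # what this thread does with a message

def ipc_call(sender: Thread, receiver: Thread, payload):
    # The arguments land in the sender's dedicated message region...
    sender.msg_region[:] = [payload]
    # ...then the kernel switches straight to the receiver, which works
    # on the very same backing store -- no copy into a kernel buffer.
    receiver.msg_region = sender.msg_region
    receiver.msg_region[0] = receiver.handler(receiver.msg_region[0])
    # The reply is already sitting in the sender's region on the way back.
    return sender.msg_region[0]

doubler = Thread(handler=lambda n: n * 2)
client = Thread()
print(ipc_call(client, doubler, 21))   # prints 42
```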
OS X's kernel, "xnu", is /based/ on Mach 3.0 and obviously shares a few concepts with Mach, but is neither a pure microkernel, nor are all its components from Mach.
Amit Singh has a well-written page about XNU: http://www.kernelthread.com/mac/osx/arch_xnu.html
Mac OS X does not use Mach like a microkernel at all. I wish people'd get this through their thick skulls.
It uses Mach and BSD in THE SAME ADDRESS SPACE. As such, it's basically as monolithic as it gets. It just happens to incorporate Mach in the kernel space and uses it for threads and IPC.
Anyone who takes 10 minutes to look at the Darwin documentation would know this.
by Anonymous Coward on Monday May 16, 2005 @11:25AM (#12543798):
The kernel that Apple uses in OS X is called XNU.
It uses code FROM Mach, but it is not Mach and it is not a microkernel. NT (NT 4.0, NT 5.0 (Win2K), NT 5.1 (WinXP)) does not use a microkernel either.
The only OS that I know of that actually uses a Microkernel is GNU/Hurd.
The OS X kernel, called XNU, is a mixture of *BSD kernel code and Mach kernel code.
Yes, yes, there was an ancient debate between Linus and that other guy about micro- vs macro-kernels, and guess what, Linus was right.
Apple does NOT use a microkernel. It does not use Mach. It is BASED on Mach. It would be considered a kludge compared to Linux or FreeBSD, but it works out fine.
Similar to how Mustangs are based on Ford Falcons and Granadas from the late 70's and early 80's. Those cars were as much of a failure as Mach, however the Mustang is flashy and many people desire it. So go figure.
Just FYI, the other guy is Andrew Tanenbaum, who wrote Minix [cs.vu.nl]
As for winning or losing, you can see for yourself on Google Groups [google.ca] that it was not about winning, it was a discussion on the merits of both.
Although interesting, Mach was developed at a university and shows a huge number of problems as a result. Notably performance is terrible, due largely to the IPC performance. When people actually tried the "collection of servers" operating system in Mach 3, it was clear it was simply not a workable solution. Workplace OS, Star Trek and any number of other OS's died as a result.
What's sad about this is that the failure of Mach tainted ALL ukernels. By the mid-1990s the idea was basically dead. But what an idea! Don't have your machine on a network? Simply don't run the network program. Using a diskless system? Don't run the disk server. Don't want _VM_... no problem. You can use the exact same OS image to build anything from a minimal OS for a handheld to a full-blown multi-machine cluster, without even compiling. No pluggable kernels, no shared libraries, no stackable file systems, nothing but top and ls.
But it just didn't work. IIRC performance of a Unix app on a truly collection-of-servers Mach was 56% slower than BSD. Unusable. Of course you can compile the entire thing into a single app, the "co-located servers" idea, but then all the advantages of Mach go away, every single one.
Now, given this, the question has to be asked: why would anyone still use it? Don't get me wrong, there are real advantages to Mach, notably for Apple, who ship a number of multiprocessor machines. But the same support can be added to monokernels. Likewise Apple's version has support for soft realtime, which has also been added to monokernels. So in the end the Mac runs slower than it could, and I am hard pressed to find an upside.
Of course it didn't have to be this way. The problems in Mach stemmed from the development process, not the concepts within. As L4 shows, it is possible to make a cross-platform IPC system that is not a serious drag on performance. And Sun's Spring went further than anyone, really re-writing the entire OS into something I find really interesting, while still providing fast Unix at the same time. I'd love to see someone build Mac OS X on Spring...
Although interesting, Mach was developed at a university and shows a huge number of problems as a result.
Sad, but true. The developers of Mach chose to start with BSD and tried to hack it into a microkernel, one section at a time. This was a flop. Mach 2.5, which Apple uses, is basically BSD with some Mach features. Mach 3 is more of a microkernel, but is so awful that nobody uses it.
There are really only two microkernels that work - VM, for IBM mainframes, and QNX. In both cases, incredible care was put into getting the key primitives - interprocess communication and scheduling - right. If those are botched, the system never recovers.
Mach suffered from too much "cool idea" syndrome.
There's too much generality in key primitives that need to work fast. Message passing has too many options. The ability to build heterogeneous multiprocessor clusters out of whatever you have lying around complicates the simpler cases. And sharing memory across the network isn't worth the trouble.
It's clear from VM and QNX how a microkernel should work. Interprocess communication and scheduling need to play well together. Interprocess communication primitives should be like subroutine calls, not I/O operations. Try for an overhead of about 20%, and don't get carried away with the "zero copy" mania. Organize the I/O system so that the channel drivers that manage memory access are separate from the device drivers that manage the device functions.
Actually, Apple's kernel is a collection of parts from BSD, Mach, and IOKit. [freebsd.org] It's a monolithic kernel like Mach 2.5, not a microkernel like Mach 3.0, although some parts from the Mach 3.0 code base were supposedly used.
IOKit is written in the "embedded subset" of C++, an idea from 1999 that never caught on. Drivers are loadable kernel modules, as with Linux, but the structure is quite different.
Any driver can crash the kernel. It's not a microkernel at all.
by Anonymous Coward on Monday May 16, 2005 @11:47AM (#12544015):
Just because it runs on Mach doesn't mean that MacOS X is a microkernel architecture.
Just as an example, on the new MacMini hardware, sound level control is done in kernelspace (since HW doesn't support that anymore)! Whereas the LinuxPPC developers refuse to do things like this in kernelspace.
Actually in Linux many things are pushed out to userspace (think udev), making it much more microkernel-like than MacOSX.
(Not that Apple-Fanboys would understand anything of that)
Linux is in spirit a monolithic design, and MacOS is in spirit a mach-based microkernel design.
In reality, though, both MacOS X and Linux have departed from the architectures in mostly pragmatic ways. OS X is not a "pure" microkernel in the mach sense.
I can't believe people are modding you up for this.
The Linux kernel is monolithic. Linux modules do not run in user-mode. They are loaded into the kernel proper.
mkLinux was an Apple-sponsored effort to run Linux on Mach. The Linux kernel was modified to run in user-mode; it basically became an executable. In fact, you could run multiple instances of the same kernel (or different kernels) simultaneously.
The modules may be loaded into the kernel proper, but that does not make Linux necessarily monolithic, as the bindings are made on-the-fly and the failure of a given module does not automatically mean the failure of the kernel as a whole.
mkLinux is not the only microkernel Linux - L4Linux is still maintained and is much more advanced. Nor are these the only Linux kernels to run in userspace - UML Linux, for example, does just fine. It is not clear where Xen fits into the picture.
All in all, though, the situation with Linux is actually a highly complex one, and should not be regarded as being definitely anything.
"I can't believe people are modding you up for this."
I can. Maybe you're too young to remember when the term monolithic was commonly used to describe a kernel which, instead of using loadable modules, was linked as a single binary image. This was, and is, a valid use of the word. Here's an example [linuxjournal.com].
The first time I heard someone say that Solaris is monolithic, I thought that they were saying that, like SysVR3, it didn't support loadable modules. I didn't realize that, with the development of microkernels, the term "monolithic kernel" had started to be used in a different context.
You can't download the binary of a driver, tell the kernel to load it, and expect it to work unless the person who compiled it just so happened to have the exact same version info and, by some miracle, the same compile options.
Yes, distros like RedHat and SuSE do have binary drivers for download, BUT ONLY if you stay with the stock kernel.
Just because you can "load modules" doesn't mean you are suddenly a microkernel. God, it's like monolithic has become a swear word.
That is monolithic as well, but the term isn't being used the same way there. Monolithic essentially means made from a single piece. This CAN refer to modules as well, since kernel modules aren't built into the kernel binary, but in the monolithic-vs-microkernel debate it doesn't refer to how the kernel is built; rather, it refers to how the operating system kernel executes.
A modular Linux kernel loads as a single executable that then loads modules into its process space as needed. This is essentially a monolithic kernel: the OS runs as a single process.
Microkernels have the OS split into separate processes, mostly outside the core microkernel (which has the job of facilitating message passing between all these processes, plus low-level process management). The microkernel may or may not do I/O; sometimes separate processes do.
Hope that helps.
> Maybe I'm mixing terms here, but I was under the impression that linux is NOT monolithic - its quite modular. Monolithic translates to no modules, correct?
No: Both the modules and the rest of the kernel run in the same address space, so Linux is monolithic.
A microkernel approach puts some (most, for second-generation microkernels like L4) traditional kernel features into user space, where they cannot hurt the kernel directly by overwriting its memory.
Not the way the term is used in computer science. It's not enough to have a modular design or loadable modules for device drivers; a microkernel has a small core that abstracts OS services in an OS-agnostic way to higher layers, and loads the higher layers as needed.
Monolithic in this case also means interface-monolithic: basically, all the interfaces are defined as symbols to the linker, and all interfaces are C-native.
Micro-kernels are meant to use message passing and more abstracted interfaces, as well as separate address spaces, to ensure a bad module does not take down the entire kernel. Think of it like this: the modules run as only semi-privileged applications, handling their hardware and then giving control back to the micro-kernel, which does as little as possible to arbitrate control and schedule between the subsystems and user-mode applications. Drivers are no longer fully privileged, and the entire user-space can be considered a subsystem of the kernel.
It's different, and kinda hard to design for, but I can't wait for Hurd to release a Linux compat layer.
Linux modules all run in the same address space. Module functionality is invoked with a function call. A microkernel typically uses a message-passing approach, and modules are isolated from one another. A small context switch must occur when invoking something in a microkernel module. Hence the overhead of a microkernel is much greater than that of a monolithic kernel. However, many argue that the overheads are worth the organization and safety advantages the microkernel gives you -- especially nowadays, when for typical applications the OS accounts for a tiny fraction of overall runtime.
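The overhead comparison above can be illustrated with a toy userspace model. All names here (`disk_read`, `DiskServer`) are invented for illustration; this mimics only the control flow, not real kernel mechanics:

```python
# Toy model: a monolithic "module" is invoked with a plain function call,
# while a microkernel "server" pays two simulated context switches per
# request (caller -> server, server -> caller).
from queue import Queue

def disk_read(sector):
    """Monolithic style: a direct function call, zero switches."""
    return f"data@{sector}"

class DiskServer:
    """Microkernel style: an isolated server reached via messages."""
    def __init__(self):
        self.inbox = Queue()
        self.switches = 0  # counts simulated context switches

    def request(self, sector):
        self.inbox.put(sector)   # copy the request into the server's inbox
        self.switches += 1       # switch: caller -> server
        reply = f"data@{self.inbox.get()}"
        self.switches += 1       # switch: server -> caller
        return reply

server = DiskServer()
assert disk_read(7) == server.request(7) == "data@7"
assert server.switches == 2   # the monolithic call cost none
```

Both styles return the same answer; only the number of hand-offs differs, which is the point of the overhead argument.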
Maybe I'm mixing terms here, but I was under the impression that linux is NOT monolithic - its quite modular. Monolithic translates to no modules, correct?
No, you're mixing terms.
We say something has a modular design if it's divided into pieces that communicate with each other through small, well-defined APIs.
Linux's kernel modules are bits of kernel code that can be loaded into the kernel at runtime. Usually these modules are also examples of modularity, but they don't have to be. Modules have full access to the kernel's memory, so can do anything the kernel can.
In a microkernel drivers, filesystems, etc., all live in a completely separate address space from the kernel, so if, for example, a driver goes bonkers and starts writing to random pieces of memory, the kernel is protected. This forces the design to be somewhat "modular", but again isn't quite the same thing.
So: the Linux kernel supports kernel modules, and its design is to some degree "modular" (as any project that size would have to be), but no one would claim it to be a microkernel.
I'm going to ignore for a moment that "lith" means stone. A "monolithic" kernel means a kernel in one piece. As such, a modular design that clips together into a single unit may or may not be counted, depending on your choice of definition.
Personally, I wouldn't consider a modular kernel to be monolithic, because it can take many forms. It may have a single form at any given time, but over any period of time, it may vary that form. If we are going to give the Linux kernel a "technical" description, then "
First of all, having modules or not has no effect on being monolithic. The entire kernel is a single process that simply executes code; whether it's compiled into the kernel or loaded into the kernel as a module makes no difference here. Microkernels actually have separate processes for different parts of the kernel, and they cannot execute code from each other; they must communicate back and forth using some sort of message-passing system.
And second, no BSD-based kernel forces you to use modules. Have you actually tried any BSD? Modules are entirely optional, just like Linux. In fact, OpenBSD's kernel has support for modules, but nothing is actually compiled as a module, and using modules is unsupported.
Just because linux gives you the options of going modular or monolithic whereas most BSD based kernels do not (you will use modules, period)
It's a good thing you've already been modded to zero...
To clear up any confusion: no *BSD uses a microkernel. (The only part of OS X that is *BSD is the userland, which is derived from FreeBSD.) The *BSDs are basically in the same classification as Linux/Solaris/HP-UX or any other UNIX or *NIX clone. Which means: all the *BSDs are monolithic in nature, with some modular abilities added on in recent years. Like Linux, the *BSDs can load a kernel module upon request (either during boot, or upon superuser request). These modules can also be compiled into the kernel itself (which is sometimes a good idea, as it saves a small amount of memory and improves performance).
Anyhoo, back to the original topic: the Mach microkernel. Apple's implementation is excellent these days, but it definitely went through its struggles (which is one reason why we continue to see major speed improvements with new versions of OS X, even on older hardware). Creating a monolithic kernel is difficult enough, but to create a micro like Mach, and do it properly... that takes serious skills. Mad props, Apple engineers.
Actually, the Mac OS X Kernel is the Mac OS X Kernel, and the Linux Kernel is the Linux Kernel.
To couch them in terms of Monolithic versus Micro would be like trying to classify an economy as Capitalist or Communist.
Neither economy has ever existed in its pure form. Both descriptions also have political overtones that have precious little to do with their actual description.
Well, nope. You can insert newly compiled modules into a previously compiled kernel to get new features (that's how the many proprietary video drivers work, for example). But those are a) running in kernel space, not user space, and b) communicated with through predefined hooks, rather than a generic message-passing interface.
That's why linux modules, which are superficially like elements of a microkernel, are not really like them at all.
The Apple design is, however, what I'd call bad. They've taken a microkernel (Mach) and implemented a monolithic kernel beneath it, to run their legacy apps! It's ugly!
I would disagree with you there. Apple's design may not be beautiful, but it certainly has the best of both worlds.
The BSD layer, memory management, etc. are all built inside XNU (the OS X kernel), but at the same time it still functions as a microkernel, allowing things such as Kernel Extensions (kexts).
Actually that's an EXCELLENT point... by your rationale we should actually stop calling it OS X and just call it Mach, since the kernel apparently gets to name the whole OS. Oh wait, you say that the rest of the OS took a lot of hard work to develop? Maybe calling Linux just plain GNU would make more sense.
I hate flamewars, but as has been said by many people many times, RMS does not get to define open source. Open source existed long before RMS, and will exist after his demise. If you want to talk about fighting for open code, remember that the Regents of UCB fought for open-source code in a court case, and they can say that they won.
RMS is very important, but he's a zealot, and a lot of people don't agree with his views (I for one don't on a lot of issues). Don't get caught up in the whole Saint iGNUcious thing.
This is a tough fight (Score:4, Funny)
Re:This is a tough fight (Score:3, Funny)
Tune in next week to the same article (posted at a later dupe-date for your convenience) and FIND OUT!
We're also to believe that the Apple Metrosexuals plan to use a hypnotizing-GayRay against the dirty hippies!
SAME TIME SAME CHANNEL!
They have more in common than you may think... (Score:5, Informative)
XNU is essentially a monolithic kernel, much like Linux. The real differences, in my opinion, lie in the IOKit object oriented driver API, whereas Linux has no real driver API and drivers have complete access to all kernel functions as drivers are simply kernel modules.
commence the horse beating (Score:5, Funny)
Re:commence the horse beating (Score:5, Funny)
Re:commence the horse beating (Score:5, Funny)
--
Evan
Re:commence the horse beating (Score:3, Insightful)
I'm not quite sure why you think an inert, unmoving stick of plastic roughly shaped like a pencil would be a fine device to pleasure a lady with. There are plenty of other options in various textures (and some that vibrate) built just for that purpose.
--
Evan
Re:commence the horse beating (Score:5, Funny)
No, take my pants off and you have the world's finest lady-pleasuring device.
Trust me, it works.
Re:commence the horse beating (Score:5, Funny)
--
Evan
Re:commence the horse beating (Score:3, Funny)
Speaking of dead horse beating...
Re:commence the horse beating (Score:3, Interesting)
In other news: Anybody else being persistently bugged to moderate or M2? Every single day I come back to find they want me to M2, and almost daily I find another 5 mod points.
That's the reason Apple's come in those colours... (Score:2, Funny)
You got a light Mac?
No, but I've got a dark brown NeXT machine.
joke.
MirrorDot (Score:4, Informative)
Re:MirrorDot (Score:2, Funny)
Re:MirrorDot (Score:2)
Re:MirrorDot (Score:3, Insightful)
Complete Book reference (Score:5, Informative)
There are also free online chapters for FreeBSD and Nachos.
Link to Wiley's purchase page (given that we are
Ahhh!! Nachos!!! (Score:4, Funny)
Re:Ahhh!! Nachos!!! (Score:3, Funny)
WHY? WHY DID YOU HAVE TO REMIND ME?
The therapy was working.. I had almost blotted out the existence of Nachos altogether. Now it's all coming back. Why do you feel the need to hurt me so? *sob*
-Laxitive
Not in book 5th edition -Old Article (Score:2)
According to the preface of the 5th edition, "coverage of the Mach operating system (old chapter 20), which is a modern os.... is available on line."
Back into relevance? The new article doesn't mention Mac OS X, which doesn't mean it's completely out of date.
It should be noted that Apple hired Avie Tevanian to modify the Mach kernel and boost its performance.
http://everythi [everything2.com]
Re:Complete Book reference (Score:3, Insightful)
design is better, performance is worse (Score:5, Interesting)
Re:design is better, performance is worse (Score:3, Informative)
No, which is why Apple's XNU runs in one address space for the most part (I don't even know whether there are parts which don't), and most message passing has been reduced to plain function calls. They still have the design advantages of s
L4 performance? (Score:3, Interesting)
HURD abandoned Mach because of performance issues and is being reimplemented on L4 [l4ka.org].
If Apple had chosen L4, would it have been necessary from a performance perspective to include BSD at a peer level with the microkernel?
Is it now far too late for Apple to dump Mach?
HURD on Mach is done. (Score:3, Informative)
While many devices are not supported, and the performance is not good, HURD/Mach is feature complete (and most of Debian runs on it, IIRC).
Because the performance was bad, the new HURD effort focuses on reimplementing on L4. Perhaps with a faster microkernel, Apple could have avoided the kludge of an in-kernel BSD peer.
If I am reading correctly, Mach is responsible for IPC in the Apple kernel. It would be interesting to see benchmarks of SYSV system calls to semaphores, queues, and shared memory (and pe
Re:qnx does just fine with a u-kernel and message (Score:5, Interesting)
In this day and age, there is no reason to use a macrokernel unless your hardware lacks the features needed for a microkernel. QNX has proved this quite nicely.
Re:qnx does just fine with a u-kernel and message (Score:5, Informative)
So does Mach, and it's slow. I've never seen real-world measures to suggest that QNX is fast. All we know is that the performance of the OS itself is good, and that's a VERY DIFFERENT measure.
The slow performance is due to a number of problems:
1) Not all MMUs are really suited to this task. Many are slower to set up than just copying the memory around. Sun found the crossover to be at around 5k; below that, it was faster to just copy memory physically.
2) MMUs/VM are based on pages, typically 2 or 4k. Thus passing in a single 32-bit int parameter causes big page hits. You can tune this out, but it's still annoying.
3) Each copy takes TWO context switches - one to switch into the kernel to copy the memory across ports, another back out to the called program. This means that even the simplest "system calls" are twice as slow as under a monokernel, AT BEST.
4) Additionally the data has to be examined to see if it contains ports being passed around, and if so, they have to be translated because the port codes are private to a program (and thus different in the other one).
5) Using mapped memory ignores all the hardware specific solutions to these problems, many of which are built into modern processors.
It's exactly the sort of one-size-fits-all solution that you'd expect from a research project. One that doesn't work in the real world. One that should have been replaced, and was in L4, Spring, etc.
For instance, Spring included three different IPC systems, each tuned to certain types of data, each used in different ways on different CPUs. The "fast-path" used a half-switch into the kernel by mapping off registers, allowing IPC to degenerate into register passing largely identical to a procedure call. As long as the message fit within the limitations -- 8 registers, no port identifiers, etc. -- it was faster than a traditional Unix trap. These limitations seem serious, but were in fact used for 80% of calls and 60% of returns (you often say "getDiskSector(integer value)" which could fit into the fast-path, and get back 2k of data which wouldn't).
Maury
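The Spring dispatch rule described above can be sketched as a simple predicate. The limits (8 registers, no port identifiers) come from the comment; the function and its names are invented for illustration:

```python
# Hypothetical sketch of Spring's fast-path/slow-path choice: small,
# port-free messages go through registers; anything else takes the
# full message-copy path through the kernel.
FAST_PATH_REGS = 8

def choose_path(args, contains_ports):
    if len(args) <= FAST_PATH_REGS and not contains_ports:
        return "fast-path"   # register passing, close to a procedure call
    return "slow-path"       # full copy through the kernel

# "getDiskSector(integer value)" fits the fast path going in...
assert choose_path([7], contains_ports=False) == "fast-path"
# ...but a 2k reply (say, 512 words), or any message carrying ports, does not.
assert choose_path(list(range(512)), contains_ports=False) == "slow-path"
assert choose_path([1, 2], contains_ports=True) == "slow-path"
```

This matches the asymmetry in the quoted figures: most calls are small enough for the fast path, while bulky returns fall back to the slow path.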
Re:qnx does just fine with a u-kernel and message (Score:4, Interesting)
There is one very easy way to kill a microkernel's performance - force it to use a synchronous system call API (e.g. POSIX). With a synchronous system call API, a context switch is required for every system call. With an asynchronous API, the process simply writes messages into a buffer (or set of buffers for different kernel services) until it either needs to wait for a response or its quantum expires. At this point, you switch to the next context (perhaps a kernel server) and process the incoming messages. This reduces the total number of context switches (and, more importantly the number of mode switches). If you want to see good performance from QNX, then use the native system call API, not the POSIX wrapper.
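The sync-vs-async trade-off described above can be modelled in a few lines. Function names and the "work" performed are invented; only the switch-counting pattern reflects the argument:

```python
# Toy model: a synchronous API pays one mode switch per call, while an
# asynchronous API batches messages into a buffer and switches once.
def sync_calls(requests):
    """POSIX-style: one mode switch per system call."""
    switches, results = 0, []
    for r in requests:
        switches += 1          # trap into the kernel for each call
        results.append(r * 2)  # stand-in for the kernel's work
    return results, switches

def async_calls(requests):
    """Message-buffer style: queue everything, switch once to drain."""
    buffer = list(requests)    # messages written into a shared buffer
    switches = 1               # a single switch processes the whole batch
    results = [r * 2 for r in buffer]
    return results, switches

reqs = [1, 2, 3, 4]
assert sync_calls(reqs)[0] == async_calls(reqs)[0] == [2, 4, 6, 8]
assert sync_calls(reqs)[1] == 4 and async_calls(reqs)[1] == 1
```

Same results either way; the async variant just amortizes the switch cost across the batch, which is the gain the comment attributes to QNX's native API.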
Re:qnx does just fine with a u-kernel and message (Score:3, Informative)
The "at best" is assuming that the foregoing issues don't cause things like cache faults. Passing parameters in registers won't. Thus the performance really can be MUCH worse than twice, even in the case of minimal calls.
But even in that case the real-world performance of Mach is, in fact, much worse than twice as slow. I believe it was Chen that did all the big measurement runs, and - going on memory he
Re:qnx does just fine with a u-kernel and message (Score:3, Informative)
Re:design AND performance better with safe kernel (Score:3, Informative)
<URL:http://channel9.msdn.com/ShowPost.aspx
Why warn us? Super Slashdot Effect (Score:5, Funny)
Um, if you want to warn anyone, maybe you should warn the sysadmin of the server that hosts the PDF file you just linked from the main page of Slashdot. I think they'll care a little more about the super Slashdot effect (I'm coining that term for when a non-HTML file is linked from Slashdot - be it PDF, MPG, AVI, etc.) than we will about taking the extra time to load.
Re:Why warn us? Super Slashdot Effect (Score:4, Insightful)
As always... (Score:5, Insightful)
Each system has benefits... but they almost always rely on the existence of certain assumptions.
Re:As always... (Score:3, Interesting)
Actually, as kernels have started adding parallelism (such as SMP, clustering, etc), it becomes harder to really say exactly what sort of design a kernel really has. (Design is intrinsic and
Here's an idea for some news - (Score:4, Funny)
Smucks.
Re:Here's an idea for some news - (Score:4, Funny)
I, for one, envision a different outcome. One which is more fair to everyone involved:
Yaz.
mach inject (Score:5, Informative)
Right tool for the job (Score:2, Funny)
Yes. For instance, if you're wanting to test out the effects of the latest greatest spyware, use Windows and IE to do anything on the Internet.
Re:mach inject (Score:3, Insightful)
It really isn't so hard to say: all OSes have really cool things about them, and we should talk about these cool things, as they enrich the OSes we develop for or use.
Why can't something Microsoft does actually be cool, or something Apple does actually be outstanding, or a *nix project create a whole new paradigm that is fantastic?
Anyway, thanks for the feeling of the old days when people didn't bi
Mac OS X is Mach, but it is not a Microkernel (Score:5, Interesting)
So, even though it uses Mach, you can't call it a Microkernel.
Re:Mac OS X is Mach, but it is not a Microkernel (Score:5, Interesting)
I suspect this is exactly how to never violate the microkernel design and still have BSD compat.
Re:Mac OS X is Mach, but it is not a Microkernel (Score:5, Informative)
Re:Mac OS X is Mach, but it is not a Microkernel (Score:5, Informative)
1. Mach is not a complete kernel. It requires someone to implement the areas which the Mach group were not researching. This has traditionally been done by compiling against BSD 4.3.
2. Mac OS X updated to the FreeBSD kernel instead of BSD 4.3 to gain a more modern kernel design with better hardware support.
3. OS 9 "Classic" is not a microkernel server, but rather a technology that Apple calls "Blue Box". Blue Box is a hardware virtualizer, like VMware, that is capable of communicating directly with the OS X desktop. Using this communication, the OS 9 desktop is made to disappear, making the application appear to run on the OS X desktop.
4. The combination of Mach and FreeBSD is called "XNU" by Apple. The complete OS is called Darwin, and the commercial variety with the NeXT and Mac APIs is called "Mac OS X".
More Info:
Mach Kernel [cmu.edu]
Wikipedia: Mach [wikipedia.org]
Wikipedia: XNU [wikipedia.org]
Blue Box info [kernelthread.com]
Linux the OS that is not an OS? (Score:4, Informative)
Google's definition of Linux [google.com]
I think you have more of a chance to start a discussion on that statement than you do in regards to which kernel is "better".
Linux IS an OS, both historically and now (Score:5, Informative)
That is a true statement for the GNU project, but not for all of Linuxdom. Linux (the OS) was not started by the GNU folks. It was started as a separate project and incorporated items from the FSF (and BSD, etc.) into its release. From the beginning the whole OS has always been called "Linux" (search Google Groups for "linux 0.11 author:torvalds" and click on the "Linux information sheet" for an example of this).
Yes, RMS prefers to call the OS GNU/Linux, but that's because he's seeing things from the perspective of the GNU project incorporating the Linux kernel into their work. The rest of Linuxdom see Linux as the name of both the OS and the kernel, and qualify the name using the phrase "the Linux kernel" as an easy way to differentiate between the two.
So, the opening statement in the OS X story is false: Linux is an OS, and is used as such by folks every day. This is the reality of the situation, and it is, at best, wishful thinking on the part of folks who claim it is not to say otherwise.
Re:Linux the OS that is not an OS? (Score:5, Informative)
Not so. Here's a posting from Linus Torvalds about Linux -- from the beginning the term was used as both the name for the kernel and the whole OS:
LINUX INFORMATION SHEET
(last updated 13 Dec 1991)
1. WHAT IS LINUX 0.11
LINUX 0.11 is a freely distributable UNIX clone. It implements a
subset of System V and POSIX functionality. LINUX has been written
from scratch, and therefore does not contain any AT&T or MINIX
code--not in the kernel, the compiler, the utilities, or the libraries.
For this reason it can be made available with the complete source code
via anonymous FTP. LINUX runs only on 386/486 AT-bus machines; porting
to non-Intel architectures is likely to be difficult, as the kernel
makes extensive use of 386 memory management and task primitives.
[...]
2. LINUX features
- System call compatible with a subset of System V and POSIX
- Full multiprogramming (multiple programs can run at once)
- Memory paging with copy-on-write
- Demand loading of executables
- Page sharing of executables
- ANSI compliant C compiler (gcc)
- A complete set of compiler writing tools
(bison as yacc-replacement, flex as lex replacement)
- The GNU 'Bourne again' shell (bash)
- Micro emacs
- most utilities you need for development
(cat, cp, kermit, ls, make, etc.)
- Over 200 library procedures (atoi, fork, malloc, read, stdio, etc.)
- Currently 4 national keyboards: Finnish/US/German/French
- Full source code (in C) for the OS is freely distributable
- [...]
Nothing wrong with naming your own project (Score:3, Interesting)
That's certainly one cynical viewpoint, but is not what really happened. Linus started his own OS project and he named it as he pleased (or really those around him named it and he accepted the name). There's nothing wrong with naming your own project and then cherry picking the items you want to be in your project from the available choices. Keep in mind that the GNU folks were working on HURD at the time, and were not all that keen on
XNU, not Mach (Score:5, Informative)
Stop spreading the myth that XNU is a microkernel.
Not a "compatibility layer" (Score:5, Interesting)
It's not just a "compatibility layer". A Mach system consists of multiple servers providing services to each other and to applications. The BSD server in XNU is an essential part of the system... it's the ringleader, and calls the shots from boot onwards.
Re:Xnu, not mach (Score:3, Funny)
Great OS Book - but what's Steve up to now? (Score:5, Funny)
So... does anyone here know what Steve Jobs and Mach have been up to since their halcyon days at NeXT?
Re:Great OS Book - but what's Steve up to now? (Score:4, Funny)
OT: PDF link clicking extension (Score:2, Informative)
PDF Download [mozilla.org]
Debate? what debate? (Score:3, Insightful)
There is no debate. It has been well accepted that micro-kernels are the way to go.
--
Toby
Re:Debate? what debate? (Score:2, Funny)
Which is why nowadays it's impossible to find a widely used OS that isn't based on a microkernel!
Re:Debate? what debate? (Score:3, Insightful)
Mac != Mach (Score:4, Interesting)
This approach doesn't make much logical sense to me, but it's what Steve and Avie wanted, and somehow, amazingly, it still just plain works.
mach vs posix (Score:5, Informative)
Given this compatibility effort, Mach is not a fair comparison, either in Hurd or OS X, for weighing the merits and performance of monolithic versus microkernel architectures, because so much extra stuff was added to a design never intended for POSIX. Something like QNX4 and later, designed both as a microkernel and for POSIX, or perhaps a pure Mach system running applications designed specifically for Mach, might be a fairer basis for comparing the value of microkernel vs. monolithic architectures.
Mach on Hurd is easier to grasp and test, since many of the lower-level Mach kernel services are still represented and usable there. Apple seems to be trying to hide as many of the lower-level Mach services from application developers as possible. Yet there are still many things that can only be done in the Mach kernel on OS X or Darwin (such as threads that can be cancelled during socket operations or sleeps). If one wanted a BSD/POSIX-compliant environment, I think Apple would have been far better off starting from PPC/xBSD or Linux kernels, rather than trying to rope and rebuild Mach to fit into something it was never originally designed for.
Re:mach vs posix (Score:3, Informative)
The level of Mach-iness of OS X is another good question, though. From what I gather, OS X looks more like a monolithic BSD kernel ported to a Mach/PPC architecture (where instead of targeting the PPC architecture, OS X's
They're both better! (Score:5, Insightful)
What's better? PHP or Python? What's better Pepsi or Coke? The answer is always the same. It depends what your goals/needs/desires are. Neither is "better" in the all encompassing good or bad definition unless you qualify it. Which one's better for performance? Probably the monolithic kernel. Which one's better for security? Probably the micro-kernel. But even then, you have to qualify both of those. Performance of what? Security of what?
I'm sick of all these stupid "which is better?" religious wars that geeks are always so interested in having. What's better? C++ or Java? What's better? IE or Mozilla?
They're all better because the more there are, the more choices you have. There, is that a satisfactory answer?
Re:They're both better! (Score:4, Funny)
Perl.
MacOS / Darwin / xnu isn't a pure microkernel (Score:5, Insightful)
In general, monolithic kernels run in a single address space and use direct procedure calls / variable accesses to pass data and control flow between subsystems. This is true even if they support loadable modules (like Linux). Any driver or other subsystem in your kernel can (if it wants) access any other part of the kernel.
Although Mach itself is a microkernel, the "xnu" kernel which Darwin / MacOS X uses also hosts other components *in the same address space*. Some of the subsystems (e.g. the BSD subsystem) are large and resemble monolithic systems themselves. The overall system is not a "pure" microkernel, with lots of code moved out of privileged mode. Equally, it's not quite like a traditional monolithic UNIX because of the use of Mach and the other Darwin-specific components (e.g. a (relatively?) stable binary interface for drivers).
This article is troll. (Score:4, Funny)
Let's secretly watch (Score:5, Funny)
Now, let's see if he notices!
Re:Let's secretly watch (Score:4, Funny)
Monolithic more popular, microkernel still appeals (Score:4, Interesting)
The microkernel design still appeals, though. For some things (not all) it is beneficial to move stuff out into less-privileged units. (Small) examples of this in Linux include: FUSE (for implementing non-performance-critical filesystems in Linux userspace), udev instead of devfs, moving initialisation code to the initramfs instead of being in the kernel itself...
Other systems (e.g. Dragonfly BSD) are also seeking to move functionality to userspace where possible without undue complexity and / or performance cost.
Some argue that virtual machine monitors are a useful modern equivalent to microkernels. They perform a similar function (partitioning system software into multiple less privileged entities), although they do it in a more "pragmatic", less architecturally "pure" way.
Virtual machine monitors allow multiple virtual machines to use the same hardware. They have also been used for running Linux drivers in fault-resistant sandboxed virtual machines, with performance within a few percent of a traditional monolithic design (fully privileged drivers).
The L4 microkernel is being used as a virtual machine monitor for this work by one research group, Xen has these capabilities also.
Mach Sucks (Score:4, Informative)
Mach is painfully slow. It's an old microkernel and it uses async IPC (to allow for passing messages over the network). This is slow because you have to do a ton of context switches and copy the message between address spaces.
L4 [l4ka.org], on the other hand, uses sync IPC. It has a bunch of neat optimizations, but the main reason it's faster than Mach is that it doesn't have to copy messages. You send an IPC, it goes into the part of your VM space that L4 sets aside for IPC, L4 does a quick context switch to the target task, it processes the IPC, and then you get your data back. (Simplified a ton.)
So, microkernels rock. Mach sucks.
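The copy-count difference described above can be sketched in a toy form. This is purely illustrative (real Mach and L4 IPC involve far more machinery); only the copy counting reflects the argument:

```python
# Toy contrast: Mach-style async IPC copies the message twice
# (sender -> kernel -> receiver), while L4-style sync IPC leaves the
# message in a region visible to both sides and just switches tasks.
def mach_style_ipc(msg):
    kernel_buffer = bytes(msg)        # copy #1: sender -> kernel
    delivered = bytes(kernel_buffer)  # copy #2: kernel -> receiver
    return delivered, 2               # payload, number of copies

def l4_style_ipc(msg):
    shared_region = msg               # no copy; kernel only switches tasks
    return shared_region, 0

payload = b"read sector 7"
assert mach_style_ipc(payload)[0] == l4_style_ipc(payload)[0] == payload
assert mach_style_ipc(payload)[1] == 2 and l4_style_ipc(payload)[1] == 0
```

Both deliver the same payload; the async design pays for its network transparency with extra copies and context switches on every message.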
an old joke... (Score:5, Funny)
"Mach was the greatest intellectual fraud in the last ten years."
;login [usenix.org], 9/1990
"What about X?"
"I said `intellectual'."
OS X's kernel is not Mach. (Score:5, Informative)
Amit Singh has a well-written page about XNU: http://www.kernelthread.com/mac/osx/arch_xnu.html
NOT A MICROKERNEL (Score:5, Informative)
It uses Mach and BSD in THE SAME ADDRESS SPACE. As such, it's basically as monolithic as it gets. It just happens to incorporate Mach in the kernel space and use it for threads and IPC.
Anyone who takes 10 minutes to look at the Darwin documentation would know this.
I wish
Apple does NOT use the MACH kernel. (Score:5, Informative)
It uses code FROM Mach, but it is not Mach and it is not a microkernel. NT (NT 4.0, NT 5.0 (Win2K), NT 5.1 (WinXP)) does not use a microkernel either.
The only OS that I know of that actually uses a microkernel is GNU/Hurd.
The OS X kernel, called XNU, is a mixture of *BSD kernel code and Mach kernel code.
Yes, yes, there was an ancient debate between Linus and that other guy about micro- vs. macro-kernels, and guess what: Linus was right.
Apple does NOT use a microkernel. It does not use Mach. It is BASED on Mach. It would be considered a kludge compared to Linux or FreeBSD, but it works out fine.
Similar to how Mustangs are based on Ford Falcons and Granadas from the late '70s and early '80s. Those cars were as much of a failure as Mach, yet the Mustang is flashy and many people desire it. So go figure.
Re:Apple does NOT use the MACH kernel. (Score:3, Informative)
As for winning or losing, you can see for yourself on Google Groups [google.ca] that it was not about winning, it was a discussion on the merits of both.
Here's your answer, smart guy (Score:4, Insightful)
The problem with Mach (Score:5, Interesting)
What's sad about this is that the failure of Mach tainted ALL ukernels. By the mid-1990s the idea was basically dead. But what an idea! Don't have your machine on a network? Simply don't run the network program. Using a diskless system? Don't run the disk server. Don't want _VM_... no problem. You can use the exact same OS image to build anything from a minimal OS for a handheld to a full-blown multi-machine cluster, without even compiling. No pluggable kernels, no shared libraries, no stackable file systems, nothing but top and ls.
But it just didn't work. IIRC performance of a Unix app on a truly collection-of-servers Mach was 56% slower than BSD. Unusable. Of course you can compile the entire thing into a single app, the "co-located servers" idea, but then all the advantages of Mach go away, every single one.
Now, given this, the question has to be asked: why would anyone still use it? Don't get me wrong, there are real advantages to Mach, notably for Apple, who ship a number of multiprocessor machines. But the same support can be added to monolithic kernels. Likewise, Apple's version has support for soft realtime, which has also been added to monolithic kernels. So in the end the Mac runs slower than it could, and I am hard pressed to find an upside.
Of course it didn't have to be this way. The problems in Mach stemmed from the development process, not the concepts within. As L4 shows, it is possible to make a cross-platform IPC system that is not a serious drag on performance. And Sun's Spring went further than anyone, really rewriting the entire OS into something I find genuinely interesting, while still providing fast Unix at the same time. I'd love to see someone build Mac OS X on Spring...
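The collection-of-servers idea from the parent post can be sketched in a few lines. This is a toy model with invented names, nothing like real Mach code: each OS service is an optional server registered with a tiny message router, so "don't have a network? don't run the network server" falls out naturally.

```python
# Toy sketch of the "collection of servers" microkernel idea.
# All names here are made up for illustration -- real Mach looks nothing like this.

class Router:
    """Stands in for the microkernel: it only routes messages to servers."""
    def __init__(self):
        self.servers = {}

    def register(self, name, server):
        self.servers[name] = server

    def send(self, name, request):
        server = self.servers.get(name)
        if server is None:
            # The service simply isn't running -- a diskless or netless
            # system is just a configuration with fewer servers registered.
            return ("error", f"no '{name}' server running")
        return ("ok", server.handle(request))

class DiskServer:
    def handle(self, request):
        return f"read block {request}"

router = Router()
router.register("disk", DiskServer())
# No network server registered: same "kernel" image, smaller system.
print(router.send("disk", 7))      # ('ok', 'read block 7')
print(router.send("net", "ping"))  # ('error', "no 'net' server running")
```

The downside the parent describes is visible even here: every operation goes through message dispatch instead of a direct function call, and that indirection is exactly where the performance went.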
Re:The problem with Mach (Score:5, Interesting)
Sad, but true. The developers of Mach chose to start with BSD and tried to hack it into a microkernel, one section at a time. This was a flop. Mach 2.5, which Apple uses, is basically BSD with some Mach features. Mach 3 is more of a microkernel, but is so awful that nobody uses it.
There are really only two microkernels that work - VM, for IBM mainframes, and QNX. In both cases, incredible care was put into getting the key primitives - interprocess communication and scheduling - right. If those are botched, the system never recovers.
Mach suffered from too much "cool idea" syndrome. There's too much generality in key primitives that need to work fast. Message passing has too many options. The ability to build heterogeneous multiprocessor clusters out of whatever you have lying around complicates the simpler cases. And sharing memory across the network isn't worth the trouble.
It's clear from VM and QNX how a microkernel should work. Interprocess communication and scheduling need to play well together. Interprocess communication primitives should be like subroutine calls, not I/O operations. Try for an overhead of about 20%, and don't get carried away with the "zero copy" mania. Organize the I/O system so that the channel drivers that manage memory access are separate from the device drivers that manage the device functions.
This is how you get uptime measured in years.
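The "subroutine calls, not I/O operations" point above can be sketched as two API styles. This is an illustrative toy in Python, not real QNX or Mach code; the function names are invented. The I/O style builds a message, posts it, and separately collects a reply; the call style fuses send/receive/reply into one primitive that behaves like a function call into the server.

```python
# Toy contrast of the two IPC styles (invented API, for illustration only).
import queue

def io_style_add(requests, replies, a, b):
    # I/O-style IPC: construct a message with options/flags, post it to a
    # queue, then poll for the reply. Every exchange pays for queueing and
    # message parsing. (The "server" side is inlined here for brevity.)
    requests.put({"op": "add", "args": (a, b), "flags": 0})
    msg = requests.get()
    replies.put(msg["args"][0] + msg["args"][1])
    return replies.get()

def call_style_add(server, a, b):
    # Call-style IPC (QNX-like MsgSend): conceptually, block, hand the CPU
    # directly to the server, and return with the reply -- the shape of an
    # ordinary subroutine call.
    return server(a, b)

requests, replies = queue.Queue(), queue.Queue()
print(io_style_add(requests, replies, 2, 3))     # 5
print(call_style_add(lambda a, b: a + b, 2, 3))  # 5
```

Both produce the same answer; the point is that the call-style primitive has far less machinery on the fast path, which is how QNX keeps IPC overhead low enough to build everything on top of it.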
Re:The problem with Mach (Score:3, Informative)
Actually, Apple's kernel is a collection of parts from BSD, Mach, and IOKit. [freebsd.org] It's a monolithic kernel like Mach 2.5, not a microkernel like Mach 3.0, although some parts from the Mach 3.0 code base were supposedly used.
IOKit is written in the "embedded subset" of C++, an idea from 1999 that never caught on. Drivers are loadable kernel modules, as with Linux, but the structure is quite different.
Any driver can crash the kernel. It's not a microkernel at all.
Exokernels (Score:3, Interesting)
MacOS X is *not* a microkernel architecture (Score:4, Informative)
Just as an example, on the new Mac Mini hardware, sound level control is done in kernel space (since the HW doesn't support it anymore)! Whereas the LinuxPPC developers refuse to do things like this in kernel space.
Actually in Linux many things are pushed out to userspace (think udev), making it much more microkernel-like than MacOSX.
(Not that Apple fanboys would understand any of that)
Mono / Micro (Score:4, Insightful)
In reality, though, both MacOS X and Linux have departed from the architectures in mostly pragmatic ways. OS X is not a "pure" microkernel in the Mach sense.
Re:Mono / Micro (Score:3, Funny)
More like Linux is like a farming co-op, XNU is Monsanto.
Or, maybe Linux is like a monkey with a spanner, and XNU is like a komodo dragon with a toothache.
No, wait. Linux is like a pulmonary thrombosis! and XNU is the dropped sponge in a gastric bypass!
Damn! OK! I admit it! My analogies have fallen, and they can't get up.
I will now explain the difference with an interpretive dance, perhaps some origami.
Re:Monolithic (Score:5, Informative)
I can't believe people are modding you up for this.
The Linux kernel is monolithic. Linux modules do not run in user-mode. They are loaded into the kernel proper.
mkLinux was an Apple-sponsored effort to run Linux on Mach. The Linux kernel was modified to run in user-mode; it basically became an executable. In fact, you could run multiple instances of the same kernel (or different kernels) simultaneously.
Re:Monolithic (Score:5, Interesting)
mkLinux is not the only microkernel Linux - L4Linux is still maintained and is much more advanced. Nor are these the only Linux kernels to run in userspace - User-mode Linux (UML), for example, does just fine. It is not clear where Xen fits into the picture.
All in all, though, the situation with Linux is actually a highly complex one, and should not be regarded as being definitely anything.
Re:Monolithic (Score:5, Informative)
I can. Maybe you're too young to remember when the term monolithic was commonly used to describe a kernel which, instead of using loadable modules, was linked as a single binary image. This was, and is, a valid use of the word. Here's an example [linuxjournal.com].
The first time I heard someone say that Solaris is monolithic, I thought that they were saying that, like SysVR3, it didn't support loadable modules. I didn't realize that, with the development of microkernels, the term "monolithic kernel" had started to be used in a different context.
Re:Monolithic (Score:3, Informative)
You can't download the binary of a driver, tell the kernel to load it, and expect it to work unless the person who compiled it just so happened to have the exact same version info and, by some miracle, the same compile options.
Yes, distros like RedHat and SuSE do have binary drivers for download, BUT ONLY if you stay with the stock kernel.
Just because you can "load modules" doesn't mean you are suddenly a microkernel. God, it's like monolithic has become a swear word.
Re:Monolithic (Score:5, Informative)
Re:Monolithic (Score:2, Informative)
> that linux is NOT monolithic - its quite modular. Monolithic
> translates to no modules, correct?
No: Both the modules and the rest of the kernel run in the same address space, so Linux is monolithic.
A microkernel approach puts some (most, for second-generation microkernels like L4) traditional kernel features into user space, where they cannot hurt the kernel directly, by overwriting memory.
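The point above, that a separate address space means a buggy module can't overwrite kernel memory, can be modeled in a few lines. This is a toy illustration, not kernel code: the separate address space is simulated with a deep copy, since another process's writes likewise never reach the kernel's memory.

```python
# Toy illustration of why address-space separation matters.
# A deep copy stands in for "a separate address space" -- nothing here
# resembles a real kernel.
import copy

def buggy_module(state):
    state["uptime"] = None   # bug: scribbles over whatever memory it can see

# Monolithic style: the module shares the kernel's address space,
# so its bug corrupts kernel data directly.
kernel_state = {"uptime": 42}
buggy_module(kernel_state)
corrupted = kernel_state                  # {'uptime': None}

# Microkernel style: the server lives in another address space (modeled as
# a copy); its bug cannot reach the kernel's memory.
kernel_state = {"uptime": 42}
server_view = copy.deepcopy(kernel_state)
buggy_module(server_view)
survived = kernel_state                   # {'uptime': 42}

print(corrupted, survived)
```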
Re:Monolithic (Score:2)
Re:Monolithic (Score:4, Informative)
Basically, all the interfaces are defined as symbols to the linker, and all interfaces are native C.
Microkernels are meant to use message passing and more abstracted interfaces, as well as separate address spaces, to ensure a bad module does not take down the entire kernel. Think of it like this: the modules run as only semi-privileged applications, handling their hardware and then giving control back to the microkernel, which does as little as possible to arbitrate control and schedule between the subsystems and user-mode applications. Drivers are no longer fully privileged, and the entire user space can be considered a subsystem of the kernel.
It's different, and kinda hard to design for, but I can't wait for Hurd to release a Linux compat layer.
Re:Monolithic (Score:4, Informative)
Re:Monolithic (Score:5, Informative)
No, you're mixing terms.
So, the Linux kernel supports kernel modules, and its design is to some degree "modular" (as any project that size would have to be), but no one would claim it to be a microkernel.
--Bruce Fields
Mono = one (Score:3, Informative)
Personally, I wouldn't consider a modular kernel to be monolithic, because it can take many forms. It may have a single form at any given time, but over any period of time, it may vary that form. If we are going to give the Linux kernel a "technical" description, then "
You are very confused. (Score:4, Informative)
And second, no BSD-based kernel forces you to use modules. Have you actually tried any BSD? Modules are entirely optional, just like Linux. In fact, OpenBSD's kernel has support for modules, but nothing is actually compiled as a module, and using modules is unsupported.
What kind of crack have you been smoking... (Score:4, Informative)
To clear up any confusion: no *BSD uses a microkernel. (The only part of OS X that is *BSD is the userland, which is derived from FreeBSD.) The *BSDs are basically in the same classification as Linux/Solaris/HP-UX or any other UNIX or *NIX clone. Which means: all the *BSDs are monolithic in nature, with some modular abilities added on in recent years. Like Linux, the *BSDs can load a kernel module upon request (either during boot, or upon superuser request). These modules can also be compiled into the kernel itself (which is sometimes a good idea, as it saves a small amount of memory and improves performance).
Anyhoo, back to the original topic: the Mach microkernel. Apple's implementation is excellent these days, but it definitely went through its struggles (which is one reason why we continue to see major speed improvements with new versions of OS X, even on older hardware). Creating a monolithic kernel is difficult enough, but to create a microkernel like Mach, and do it properly... that takes serious skills. Mad props, Apple engineers.
Re:Monolithic (Score:4, Interesting)
To couch them in terms of Monolithic versus Micro would be like trying to classify an economy as Capitalist or Communist.
Neither economy has ever existed in its pure form. Both descriptions also have political overtones that have precious little to do with their actual description.
Re:Monolithic (Score:5, Informative)
a) running in kernel space, not user space
b) communicated with by predefined hooks, rather than a generic message-passing interface.
That's why linux modules, which are superficially like elements of a microkernel, are not really like them at all.
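The "predefined hooks" style can be sketched as follows. This is a toy, loosely modeled on the shape of Linux's file_operations tables; every name here is invented for illustration. The module hands the kernel a fixed table of entry points once, and from then on the kernel calls them as ordinary functions, with no messages and no address-space boundary.

```python
# Toy sketch of hook-table driver registration (invented names; loosely
# shaped like Linux's file_operations idea, not actual kernel code).

class FakeKernel:
    def __init__(self):
        self.drivers = {}

    def register_driver(self, name, ops):
        # The module registers a fixed table of predefined hooks.
        # After this, calls into the driver are plain function calls
        # in the same address space.
        self.drivers[name] = ops

    def read(self, name, nbytes):
        return self.drivers[name]["read"](nbytes)

def null_read(nbytes):
    # A trivial driver entry point, like reading from /dev/zero.
    return b"\x00" * nbytes

kernel = FakeKernel()
kernel.register_driver("null", {"read": null_read,
                                "write": lambda data: len(data)})
print(kernel.read("null", 4))   # b'\x00\x00\x00\x00'
```

Contrast this with the message-router sketch of a microkernel: here the hooks are compiled-in function pointers, which is fast, but any of them can also corrupt or crash the whole kernel.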
Re:Monolithic (Score:3, Interesting)
I would disagree with you there. Apple's design may not be beautiful, but it certainly has the best of both worlds.
The BSD layer, memory management, etc. are all built inside XNU (the OS X kernel), but at the same time it still functions as a microkernel, allowing things such as kernel extensions (kexts).
The problem with a fully MicroKer
Re:RMS, Is That You? (Score:3, Funny)
I don't think that is needed since more than enough people mock Apple already.
Re:RMS, Is That You? (Score:3, Insightful)
Re:RMS, Is That You? (Score:5, Insightful)
RMS is very important, but he's a zealot, and a lot of people don't agree with his views (I for one don't on a lot of issues). Don't get caught up in the whole Saint iGNUcious thing.