From the kernel architecture perspective there are two main categories of operating
systems, with a number of variations in between. Micro-kernel operating systems are characterized by running most of their services as user processes in user mode and keeping only the most basic scheduling and hardware management mechanisms in protected kernel space. Monolithic kernels, on the other hand, incorporate most of the operating system services into kernel space, where they share the same memory space.
The fundamental principle micro-kernel designs aim to follow is the separation of mechanism and policy. In a micro-kernel, the kernel’s role is to provide the minimal support for executing processes, inter-process communication and hardware management. All other services, and indeed policies, are then implemented as servers in user space, communicating with applications and with each other through inter-process communication mechanisms, usually messages. Micro-kernel based operating systems are usually characterized by strong modularity, a small kernel footprint and increased security and robustness, as a consequence of the strong isolation of components and the execution of most services in user space. On the other hand, micro-kernels traditionally incur higher overhead for access to operating system services, since each service request involves additional context switches between user mode and kernel mode.
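The message-based interaction described above can be sketched in a few lines of Python. This is a minimal, illustrative simulation rather than real kernel code: the MicroKernel class, the file server and the message format are invented for demonstration, and the single dispatch call stands in for what would in reality be two crossings of the user/kernel boundary.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    operation: str
    payload: dict

class MicroKernel:
    """Minimal kernel: it only routes messages between registered servers.
    All actual policy (file systems, drivers, ...) lives outside it."""
    def __init__(self):
        self.servers = {}

    def register(self, name, handler):
        self.servers[name] = handler

    def send(self, msg: Message, to: str):
        # In a real micro-kernel, each send/receive crosses the
        # user/kernel boundary; this is the traditional IPC overhead.
        return self.servers[to](msg)

# A "file server" implemented as an ordinary user-space handler.
def file_server(msg: Message):
    if msg.operation == "read":
        return f"contents of {msg.payload['path']}"
    return None

kernel = MicroKernel()
kernel.register("fs", file_server)

# An application asks the kernel to deliver its request to the file server;
# in a monolithic kernel this would instead be a direct function call.
reply = kernel.send(Message("app", "read", {"path": "/etc/motd"}), to="fs")
print(reply)  # contents of /etc/motd
```

The sketch makes the structural point concrete: the kernel knows nothing about files, only about message delivery, so the file server could crash and be restarted without bringing the kernel down.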
Monolithic kernels excel primarily at speed: as all the services of the operating system execute in kernel mode, they can share memory and perform direct function calls, without the need for inter-process communication mechanisms such as message passing. As most of the OS functions are packed together, monolithic kernels tend to be bigger, more complex and hence more difficult to test and maintain, also requiring a careful, holistic approach to overall design. There have been several long debates around these two approaches, most famously between Linus Torvalds, the inventor and gatekeeper of Linux (on the monolithic side), and Andrew Tanenbaum, a respected professor and author (advocate of micro-kernel architectures), dating back to 1992 with a revived exchange in 2006. The essence of the debate revolves around the maintainability, security, efficiency and complexity of operating systems, with valid arguments brought forward by both camps. The argument for micro-kernels is primarily based on an emphasis on reliability and security, supported by as little data sharing as possible and strict decomposition and isolation of operating system components. The counter-argument brought forward by Torvalds builds on the fact that algorithm design for distributed, share-nothing systems is inherently more complex, and hence micro-kernels, with their emphasis on isolation, would suffer on the maintainability and performance front.
I was in the U.S. for a couple of weeks, so I haven’t commented much on LINUX (not that I would have said much had I been around), but for what it is worth, I have a couple of comments now.
As most of you know, for me MINIX is a hobby, something that I do in the evening when I get bored writing books and there are no major wars, revolutions, or senate hearings being televised live on CNN. My real job is a professor and researcher in the area of operating systems.
You use this as an excuse for the limitations of minix? Sorry, but you loose: I’ve got more excuses than you have, and Linux still beats the pants of minix in almost all areas. Not to mention the fact that most of the good code for PC minix seems to have been written by Bruce Evans.
Re 1: you doing minix as a hobby – look at who makes money off minix, and who gives Linux out for free. Then talk about hobbies. Make minix freely available, and one of my biggest gripes with it will disappear. Linux has very much been a hobby (but a serious one: the best type) for me: I get no money for it, and it’s not even part of any of my studies in the university. I’ve done it all on my own time, and on my own machine.
Re 2: your job is being a professor and researcher: That’s one hell of a good excuse for some of the brain-damages of minix. I can only hope (and assume) that Amoeba doesn’t suck like minix does.
I think it is a gross error to design an OS for any specific architecture, since that is not going to be around all that long.
“Portability is for people who cannot write new programs”
-me, right now (with tongue in cheek)
This subject was revisited in 2006 after Tanenbaum wrote a cover story for Computer magazine titled “Can We Make Operating Systems Reliable and Secure?”. While Tanenbaum himself has mentioned that he did not write the article to renew the debate on kernel design, the juxtaposition of the article and an archived copy of the 1992 debate on the technology site Slashdot caused the subject to be rekindled. Torvalds posted a rebuttal of Tanenbaum’s arguments via an online discussion forum, and several technology news sites began reporting the issue. This prompted Jonathan Shapiro to respond that most of the field-proven reliable and secure computer systems use a more microkernel-like approach.
Here are some important links that make this whole debate an interesting read: