Since Zee managed to screw up my scheme to become rich and famous on his back (not to mention my pie-in-the-sky idea of the OS to end all OSes, based on the "Cache Kernel"), I guess it's time to do the same to GJ (who says sociopaths ain't persistent?) :P
Looking at the L4 Wikipedia entry:
The poor performance of first-generation microkernels, such as Mach, led a number of developers to re-examine the entire microkernel concept in the mid-1990s. The asynchronous in-kernel-buffering process communication concept used in Mach turned out to be one of the main reasons for its poor performance. This induced developers of Mach-based operating systems to move some time-critical components, like file systems or drivers, back inside the kernel[citation needed]. While this somewhat ameliorated the performance issues, it plainly violates the minimality concept of a true microkernel (and squanders their major advantages).
Detailed analysis of the Mach bottleneck indicated that, among other things, its working set is too large: the IPC code expresses poor spatial locality; that is, it results in too many cache misses, of which most are in-kernel.[2] This analysis gave rise to the principle that an efficient microkernel should be small enough that the majority of performance-critical code fits into the (first-level) cache (preferably a small fraction of said cache).
So Mach, the early microkernel everyone jumped on (including M$; OSF/1 was supposed to be based on it as well), had major issues with its implementation of IPC, which apparently were the main reason for its poor performance.
So, looks like the L4 crowd managed to have their cake and eat it too: the original L4 kernels were written in assembly language, but later implementations used C++ while apparently keeping most of the performance gains.
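To make the IPC difference concrete, here's a toy sketch in C of the two designs the quote contrasts. This is purely illustrative: the function names and the single static buffer are my inventions, not real Mach or L4 APIs. The point is the copy count and the extra in-kernel state that Mach-style buffering drags into the cache.

```c
#include <string.h>

#define MSG_SIZE 64

/* Mach-style asynchronous IPC (conceptually): the sender copies the
   message into an in-kernel buffer and continues; the receiver copies
   it out later. Two copies, plus kernel buffer bookkeeping that
   competes for cache space -- the "poor spatial locality" above. */
static char kernel_buffer[MSG_SIZE];

void mach_style_send(const char *msg) {
    memcpy(kernel_buffer, msg, MSG_SIZE);   /* copy #1: sender -> kernel */
}

void mach_style_receive(char *out) {
    memcpy(out, kernel_buffer, MSG_SIZE);   /* copy #2: kernel -> receiver */
}

/* L4-style synchronous IPC (conceptually): the kernel blocks the sender
   until the receiver is ready (rendezvous), then moves the payload
   directly between the two parties. One copy, no in-kernel buffering,
   so the whole IPC path stays tiny and cache-friendly. */
void l4_style_transfer(const char *sender_msg, char *receiver_buf) {
    memcpy(receiver_buf, sender_msg, MSG_SIZE);  /* single direct copy */
}
```

Obviously the real kernels do address-space switches, scheduling, and register-based fast paths that this sketch skips entirely; it only shows why "buffer in the kernel" costs more than "hand off directly".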
According to this:
While these results demonstrate that the poor performance of systems based on first-generation microkernels is not representative for second-generation kernels such as L4, this constitutes no proof that microkernel-based systems can be built with good performance. It has been shown that a monolithic Linux server ported to L4 exhibits only a few percent overhead over native Linux.[15] However, such a single-server system exhibits few, if any, of the advantages microkernels are supposed to provide by structuring operating system functionality into separate servers.
A number of commercial multi-server systems exist, in particular the real-time systems QNX and Integrity. No comprehensive comparison of performance relative to monolithic systems has been published for those multiserver systems. Furthermore, performance does not seem to be the overriding concern for those commercial systems, which instead emphasize reliably quick interrupt handling response times (QNX) and simplicity for the sake of robustness. An attempt to build a high-performance multiserver operating system was the IBM Sawmill Linux project.[16] However, this project was never completed.
It has been shown in the meantime that user-level device drivers can come close to the performance of in-kernel drivers even for such high-throughput, high-interrupt devices as Gigabit Ethernet.[17] This seems to imply that high-performance multi-server systems are possible.
So at the very least they showed that the microkernel concept isn't inherently flawed (which seems to be the excuse for why so many monolithic kernels are still around, with the rest being hybrids like the Windows kernel or Apple's XNU), and that your device drivers can run in non-privileged mode with few performance penalties. The article does imply that it remains to be seen whether you can have a microkernel-based system with multiple "servers" (or virtual computers, if you like) running with acceptable performance.
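For what "drivers in non-privileged mode" looks like in practice, here's a hand-wavy sketch: the kernel delivers each hardware interrupt as an IPC message to an ordinary user process, which handles it and acknowledges. Again, the types and names here are made up for illustration, not any real microkernel's driver API.

```c
#include <stddef.h>

/* Illustrative only: a "user-level driver" main loop. In a real system
   the kernel would convert each IRQ into an IPC message delivered to
   this unprivileged process; here the pending messages are just an
   array. */

typedef struct {
    int irq;        /* interrupt line that fired */
    int handled;    /* set by the driver once processed */
} irq_msg;

/* Handle one interrupt message. A real NIC driver would read device
   registers through mapped MMIO and drain the RX ring here; we just
   mark the message handled and report one packet for illustration. */
int handle_irq(irq_msg *msg) {
    msg->handled = 1;
    return 1;
}

/* Driver loop over a batch of pending interrupt messages; returns the
   total number of packets processed. */
int driver_loop(irq_msg *msgs, size_t n) {
    int packets = 0;
    for (size_t i = 0; i < n; i++)
        packets += handle_irq(&msgs[i]);
    return packets;
}
```

The whole performance question the article cites [17] boils down to whether that extra IPC round-trip per interrupt (or per batch of interrupts) is cheap enough to keep up with something like Gigabit Ethernet, and apparently it is.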
Andrew Tanenbaum (hell, I didn't know he was still alive) released Minix 3, which is microkernel based.
Apparently Tanenbaum and Linus Torvalds had some arguments regarding monolithic vs. microkernels:
Part 1
Part 2
I can see from part 2 that microkernels had a pretty bad rep in the early 2000s.