Cache Kernel and Ring Architectures

Re: Cache Kernel and Ring Architectures

Postby FZR1KG » Mon Aug 11, 2014 10:05 pm

Gullible Jones wrote: Not overthinking IMO, just thinking from a different angle.

http://www.xkcd.com/1200/

The compromise stays in the VM, but if the VM has access to important resources then so does the attacker. I'm basically thinking of a business situation - you have a ton of client data in a database with no support for the new OS, someone wants to break into that database.

Really nasty stuff (keylogging, turning the machine into a Tor exit node, etc.) would be harder (since presumably the new OS is logging network traffic from the VMs). But my point is basically that virtualizing the old OS is only secure if the resources it has access to are unimportant.


The virtualised O/S is only going to be as secure as that O/S itself.
The whole point is to start again with a more secure system.

e.g.
Let's say you stay with the current Windows O/S and another person doesn't.
Both get attacked by a nasty virus.
The virtual one loses all its virtual data. Thankfully you can always back up the entire state, but let's say you didn't.
It crashes the system and destroys everything in its space.
It can't, however, touch the other three O/S's you have running, or anything outside its own scope.

Now, the other person who ran without the virtual O/S loses everything, including any other O/S's he had installed.

So which option would you prefer?

Option 1, staying without a virtual O/S, means you'll forever be on this mouse wheel.
Option 2, going with the virtualised system, gives you a shot at getting out of it.

IOW, it can't magically give Windows better security or better stability; that's a job for Windows.
What it can do is isolate an insecure and unreliable O/S from everything else.
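
As a rough sketch of why that's true (assuming a Linux/KVM host for the example; nothing here is specific to the Windows case above), the guest's entire "physical" memory is just an ordinary buffer that the host allocates and registers with the hypervisor, so anything a compromised guest scribbles stays inside that buffer:

```c
/* Minimal sketch, assuming a Linux host with KVM (/dev/kvm).
 * The guest's "physical" RAM is just a buffer this process owns. */
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    if (kvm < 0) { perror("open /dev/kvm"); return 1; }

    int vm = ioctl(kvm, KVM_CREATE_VM, 0UL);
    if (vm < 0) { perror("KVM_CREATE_VM"); return 1; }

    /* 64 KiB of "guest RAM": an anonymous mapping owned by this host process. */
    size_t ram_size = 0x10000;
    void *ram = mmap(NULL, ram_size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (ram == MAP_FAILED) { perror("mmap"); return 1; }

    /* Tell KVM that guest physical addresses 0x0..0xFFFF are backed by it. */
    struct kvm_userspace_memory_region region = {
        .slot            = 0,
        .guest_phys_addr = 0,
        .memory_size     = ram_size,
        .userspace_addr  = (unsigned long)ram,
    };
    if (ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region) < 0) {
        perror("KVM_SET_USER_MEMORY_REGION");
        return 1;
    }

    /* Whatever a guest vCPU later writes to its "physical" memory lands in
     * this buffer and nowhere else: not in other VMs, not in the host kernel. */
    printf("guest RAM is backed by host buffer %p (%zu bytes)\n", ram, ram_size);

    munmap(ram, ram_size);
    close(vm);
    close(kvm);
    return 0;
}
```

Everything else works the same way: the guest only ever sees the resources the host explicitly maps in, which is exactly the isolation being described.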

Re: Cache Kernel and Ring Architectures

Postby Cyborg Girl » Mon Aug 11, 2014 10:23 pm

I think I forgot what I was complaining about? I'm confused. :lol:

I do understand the concept (not the theory but I'll be working on that). Not sure how it would work out in practice though, as there have IIRC been vulnerabilities in modern Intel CPU architecture (like, on the die) that could allow VM breakout. I will see if I can find articles about that.

Edit: I think this might be the one I was thinking of: http://www.symantec.com/security_respon ... ?bid=53856

Re: Cache Kernel and Ring Architectures

Postby FZR1KG » Mon Aug 11, 2014 10:27 pm

Gullible Jones wrote: I think I forgot what I was complaining about? I'm confused. :lol:

I do understand the concept (not the theory but I'll be working on that). Not sure how it would work out in practice though, as there have IIRC been vulnerabilities in modern Intel CPU architecture (like, on the die) that could allow VM breakout. I will see if I can find articles about that.


That's what I meant about bugs.
The thing is, it's far easier to fix that sort of bug than an obscure software one.

In any case, if something does break out, it can be patched and fixed very quickly.
I'm not saying that any implementation will be perfect. What I'm saying is that if it's implemented correctly, and there's no reason it can't be eventually, nothing will break out.

Re: Cache Kernel and Ring Architectures

Postby Cyborg Girl » Mon Aug 11, 2014 10:31 pm

Gotcha. And please pardon the kneejerk criticism; I've seen a lot of snake oil and bad security practice lately.

Re: Cache Kernel and Ring Architectures

Postby Sigma_Orionis » Mon Aug 11, 2014 10:46 pm

Lately? This business has been like that since the 80s :P

Re: Cache Kernel and Ring Architectures

Postby Sigma_Orionis » Tue Aug 12, 2014 12:36 am

Since Zee managed to screw up my scheme to become rich and famous on his back (not to mention my pie-in-the-sky idea of the OS to end all OSes, based on the "Cache Kernel"), I guess it's time to do the same to GJ (who says sociopaths ain't persistent?) :P

Looking at the L4 Wikipedia Entry:

The poor performance of first-generation microkernels, such as Mach, led a number of developers to re-examine the entire microkernel concept in the mid-1990s. The asynchronous in-kernel-buffering process communication concept used in Mach turned out to be one of the main reasons for its poor performance. This induced developers of Mach-based operating systems to move some time-critical components, like file systems or drivers, back inside the kernel[citation needed]. While this somewhat ameliorated the performance issues, it plainly violates the minimality concept of a true microkernel (and squanders their major advantages).

Detailed analysis of the Mach bottleneck indicated that, among other things, its working set is too large: the IPC code expresses poor spatial locality; that is, it results in too many cache misses, of which most are in-kernel.[2] This analysis gave rise to the principle that an efficient microkernel should be small enough that the majority of performance-critical code fits into the (first-level) cache (preferably a small fraction of said cache).


So Mach, the early microkernel everyone jumped on (including M$; OSF/1 was supposed to be based on it as well), had major issues with its implementation of IPC, which apparently was the main reason for its poor performance.
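
As a toy illustration of the difference (this is a sketch of the idea, not real Mach or L4 code): Mach-style IPC copies each message into an in-kernel queue and later copies it back out, while L4-style synchronous IPC makes the sender and receiver rendezvous and moves the message across in a single step, ideally keeping it in a handful of registers:

```c
/* Toy model only: contrasts Mach-style buffered IPC (two copies through an
 * in-kernel queue) with L4-style synchronous IPC (one direct copy at a
 * rendezvous between sender and receiver). */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MSG_WORDS 8
#define QUEUE_LEN 64

struct message { uint64_t mr[MSG_WORDS]; };      /* "message registers" */

/* Mach-style port: the kernel buffers messages, growing its working set. */
struct port { struct message queue[QUEUE_LEN]; unsigned head, tail; };

static void buffered_send(struct port *p, const struct message *m)
{
    p->queue[p->tail++ % QUEUE_LEN] = *m;        /* copy #1: sender -> kernel */
}

static void buffered_receive(struct port *p, struct message *out)
{
    *out = p->queue[p->head++ % QUEUE_LEN];      /* copy #2: kernel -> receiver */
}

/* L4-style: no kernel buffer at all; the sender blocks until the receiver
 * is waiting, then the message moves across in one step. */
static void synchronous_ipc(const struct message *sender_mrs,
                            struct message *receiver_mrs)
{
    memcpy(receiver_mrs, sender_mrs, sizeof *receiver_mrs);  /* single copy */
}

int main(void)
{
    struct message m = { .mr = { 42 } }, got;
    static struct port p;                        /* zero-initialised */

    /* Mach-style round trip: two copies plus a kernel-resident queue. */
    buffered_send(&p, &m);
    buffered_receive(&p, &got);

    /* L4-style round trip: one copy, nothing buffered in the kernel. */
    synchronous_ipc(&m, &got);
    printf("received %llu\n", (unsigned long long)got.mr[0]);
    return 0;
}
```

Fewer copies and no in-kernel message queue is precisely the smaller working set and better cache behaviour the analysis above is pointing at.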

So it looks like the L4 crowd managed to have their cake and eat it too: the original L4 kernels were written in assembly language, but later implementations used C++ while apparently keeping a good deal of the performance gains.

According to this:

While these results demonstrate that the poor performance of systems based on first-generation microkernels is not representative for second-generation kernels such as L4, this constitutes no proof that microkernel-based systems can be built with good performance. It has been shown that a monolithic Linux server ported to L4 exhibits only a few percent overhead over native Linux.[15] However, such a single-server system exhibits few, if any, of the advantages microkernels are supposed to provide by structuring operating system functionality into separate servers.

A number of commercial multi-server systems exist, in particular the real-time systems QNX and Integrity. No comprehensive comparison of performance relative to monolithic systems has been published for those multiserver systems. Furthermore, performance does not seem to be the overriding concern for those commercial systems, which instead emphasize reliably quick interrupt handling response times (QNX) and simplicity for the sake of robustness. An attempt to build a high-performance multiserver operating system was the IBM Sawmill Linux project.[16] However, this project was never completed.

It has been shown in the meantime that user-level device drivers can come close to the performance of in-kernel drivers even for such high-throughput, high-interrupt devices as Gigabit Ethernet.[17] This seems to imply that high-performance multi-server systems are possible.


So at the very least they showed that the microkernel concept wasn't inherently flawed (which seems to be the excuse for why so many monolithic kernels are still around, with the rest being hybrids like the Windows kernel or Apple's XNU), and that your device drivers can run in non-privileged mode with few performance penalties. The article does imply that it remains to be seen whether you can have a microkernel-based system with multiple "servers" (or virtual computers, if you like) running with acceptable performance.
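
For a feel of what a user-level driver looks like in practice, here's a minimal sketch assuming Linux's UIO framework with some device already bound at /dev/uio0 (that device node and the 4 KiB region size are assumptions for illustration): the whole "driver" is an ordinary unprivileged process that maps the device's registers and waits for interrupts:

```c
/* Rough sketch, assuming Linux's UIO framework with a device already bound
 * at /dev/uio0 (both are assumptions for illustration). The whole "driver"
 * runs as an ordinary unprivileged process. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/uio0", O_RDWR);
    if (fd < 0) { perror("open /dev/uio0"); return 1; }

    /* Map the device's first memory region (its register window).
     * The real size would be read from /sys/class/uio/uio0/maps/map0/size. */
    size_t map_size = 4096;
    volatile uint32_t *regs = mmap(NULL, map_size, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* Block until the device raises an interrupt; UIO delivers it as a
     * 32-bit event count read from the same file descriptor. */
    uint32_t events;
    if (read(fd, &events, sizeof events) == (ssize_t)sizeof events)
        printf("interrupt #%u, first register = 0x%08x\n",
               events, (unsigned)regs[0]);

    munmap((void *)regs, map_size);
    close(fd);
    return 0;
}
```

The interrupt itself still passes through the kernel, but all the driver logic lives in user space, so a driver crash takes out one process rather than the whole system, which is the robustness argument for microkernels and multi-server designs.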

Andrew Tanenbaum (hell, I didn't know he was still alive) released Minix 3, which is microkernel-based.

Apparently Tanenbaum and Linus Torvalds had some arguments regarding monolithic vs. microkernels:

Part 1
Part 2

I can see from part 2 that microkernels had a pretty bad rep in the early 2000s :)

Re: Cache Kernel and Ring Architectures

Postby FZR1KG » Tue Aug 12, 2014 1:54 am

This reminds me of the Australian government spending $1 million to find out why termite nests in the tropical areas of Australia face north-south.
All they needed to do was ask me and I would have told them for free.

The future is in virtualisation. It has been since computers were first developed and remains so now.
I can even prove it mathematically (long standing joke).
