For FZ: Linux desktops then and now

Postby Cyborg Girl » Mon Aug 04, 2014 3:45 am

KDE circa 2009: http://upload.wikimedia.org/wikipedia/c ... esktop.png
Similar to Windows XP, maybe a little more garish.

KDE circa 2013: http://en.wikipedia.org/wiki/File:KDE_4.png
"omg glass translucency shadows omg!"

Gnome circa 2009: http://upload.wikimedia.org/wikipedia/c ... u_9.10.png
Fairly standard desktop, somewhere between Win2k and MacOS 9.

Gnome circa 2013: http://upload.wikimedia.org/wikipedia/c ... l-3.10.png
"Look at me look at me guys, I'm an iPad!"

Oh, Ubuntu used Gnome 2 until 2011 or so. Now it uses Unity, another 3D Mac-alike desktop:
http://upload.wikimedia.org/wikipedia/c ... _Final.png

Needless to say performance is dreadful on most laptops, since they usually don't have the hardware juice for it. As for desktops, I've run into enough display server crashes and kernel panics with several different machines that I don't even bother with a full desktop environment.

I shouldn't complain (these desktops are free, and I don't use any of them)... But try telling family members that, by the way, your snappy Windows 2000 and XP desktops will perform like a drunk sloth after you switch to Linux. People are never happy to hear that they must upgrade their hardware again in order to do the same damn thing.
Cyborg Girl
Boy Genius
 
Posts: 2138
Joined: Mon May 27, 2013 2:54 am

Re: For FZ: Linux desktops then and now

Postby FZR1KG » Mon Aug 04, 2014 4:16 pm

Gullible Jones wrote:Needless to say performance is dreadful on most laptops, since they usually don't have the hardware juice for it. As for desktops, I've run into enough display server crashes and kernel panics with several different machines that I don't even bother with a full desktop environment.

I shouldn't complain (these desktops are free, and I don't use any of them)... But try telling family members that, by the way, your snappy Windows 2000 and XP desktops will perform like a drunk sloth after you switch to Linux. People are never happy to hear that they must upgrade their hardware again in order to do the same damn thing.



Not sure why you are addressing this to me, as what I was saying was something completely different.
What I said was:

Now, many people go on about things being compatible as a limiting factor. I call crap. You can design a CPU that can execute non-native instructions almost as fast as the native CPU.
Downloadable microcode is one example.
Most O/S's now are written in a high-level language, making porting pretty easy. Nothing like a re-write, even if the instruction set is different.
It was certainly possible back then, and within a year or so it would be far faster than the original, in many cases simply due to the speed of the newer processor.


I was talking about a new CPU that was not constrained like the Intel devices.
Development costs would be way down, since it would not need layer upon layer of compatibility with older hardware.
The new processor should be able to run the old CPU's code almost as fast as, or faster than (depending on the design type), the one it was replacing.
However, it would run rings around the old CPU when fed its own native code rather than emulating the old CPU.
So porting the O/S would instantly make the O/S faster. Porting other programs is usually just a matter of a new compiler for the new CPU and some basic conversions for the differences between compilers, and you get your old software running far faster on the new machine than it ever did on the old one.

All you have to do is market it correctly, something marketing guys rarely did because, back then at least, few of them had the engineering skills to comprehend what they were trying to sell.
FZR1KG
 

Re: For FZ: Linux desktops then and now

Postby Sigma_Orionis » Tue Aug 05, 2014 2:02 am

One thing to keep in mind.

End Users expect object code compatibility. They won't pay for the same software they already bought (in the paid software world, which is mostly Windows).

This is so common and pervasive that if you snoop around open source software, where the Unix/Linux version is offered as source code, 9 times out of 10 you'll see they also provide a Windows binary.

Most IT departments fall into the "End User" category. They tend to use commercial shrink-wrapped software, so they don't have access to the source code. If they're lucky, they can migrate to the same version for a different architecture without spending extra, or for a nominal fee. If they're not lucky, they have to buy a new license, and that's not going to make the bean-counters happy.

In extreme cases (which are all too common) the version they use is so hopelessly outdated that they can't migrate; more often than not the original software vendor went belly up or was bought by some other company, which promptly discontinued the product.

There are other cases that are much more fun, particularly with applications like ERPs and CRMs. Those things can be customized by the customer (who usually doesn't have the expertise, so they hire outside consultants to do it). Depending on the application (and on the general stupidity of the Customer and/or the Consultants) they might have customized the application so much that it can't be migrated (PeopleSoft is a good example of that).

This is all too common. I've been in several migrations of large applications and it's always the same story.

Yeah, I know, most IT departments suck, that's because most organizations like it that way. No matter how much they complain.
Sic Transit Gloria Mundi
Sigma_Orionis
Resident Oppressed Latino
 
Posts: 4496
Joined: Mon May 27, 2013 2:19 am
Location: The "Glorious Socialist" Land of Chavez

Re: For FZ: Linux desktops then and now

Postby SciFiFisher » Tue Aug 05, 2014 4:54 am

Sigma_Orionis wrote:One thing to keep in mind.

End Users expect object code compatibility. They won't pay for the same software they already bought. (in the paid software world, which is mostly windows). [...] Yeah, I know, most IT departments suck, that's because most organizations like it that way. No matter how much they complain.


They don't actually like it. They just don't want to spend the money to do things right. Almost good enough is cheaper. :P


There are other cases that are much more fun, particularly with applications like ERPs and CRMs, those things can be customized by the customer (who usually doesn't have the expertise, so they hire outside consultants to do it). Depending on the application (and on the general stupidity of the Customer and/or the Consultants) they might have customized the application so much that it can't be migrated (Peoplesoft is a good example of that).


You forgot the really important part. The application was customized by a former employee (or contractor) who has long since gone. They didn't bother to create a manual detailing what they did. And they sure as hell didn't bother leaving a copy of the code lying around. So every time the application crashes, everyone prays that it will restore intact from the backup. And that the backup isn't more than a few days old. :lol:
"To create more positive results in your life, replace 'if only' with 'next time'." — Author Unknown
"Experience is a hard teacher because she gives the test first, the lesson afterward." — Vernon Law
SciFiFisher
Redneck Geek
 
Posts: 4889
Joined: Mon May 27, 2013 5:01 pm
Location: Sacramento CA

Re: For FZ: Linux desktops then and now

Postby FZR1KG » Tue Aug 05, 2014 5:40 am

Sigma_Orionis wrote:One thing to keep in mind.

End Users expect object code compatibility. They won't pay for the same software they already bought. (in the paid software world, which is mostly windows).

This is so common and pervasive, that if you snoop around open source software you'll find that the Unix/Linux version is offered in source code, 9 out of 10 times you'll see they provide a Windows Binary.

Most IT departments fall into the "End User" category. They tend to use commercial shrink-wrapped software, so the don't have access to the source code. IF they're lucky, they might be able migrate to the same version for different architecture they might not have to spend extra, or maybe a nominal fee. If they're not lucky they might have to buy a new license, that's not going to make the bean-counters happy.

In extreme cases (which are all too common) the version they use is so hopelessly outdated that they can't migrate, more often than not the original software vendor went belly up or was bought up by some other company which promptly discontinued the product.

There are other cases that are much more fun, particularly with applications like ERPs and CRMs, those things can be customized by the customer (who usually doesn't have the expertise, so they hire outside consultants to do it). Depending on the application (and on the general stupidity of the Customer and/or the Consultants) they might have customized the application so much that it can't be migrated (Peoplesoft is a good example of that).

This is all too common. I've been in several migrations of large applications and it's always the same story.

Yeah, I know, most IT departments suck, that's because most organizations like it that way. No matter how much they complain.



You're kind of making my point here. :P

The industry is rife with obsolete software running on no-longer-supported O/S's that can't be migrated to newer hardware.
I have programs that run on some Windows versions but not others, so I have to run a virtual O/S just to access them.
That's how I do my PCB and schematic designs: with software that is about 25 years old, still usable on an old O/S but totally unusable, even with every compatibility setting, on new O/S's.
So why make a big deal about a new CPU that is software compatible and merely slower than the latest whiz-bang multiprocessor computer, when it is still 100 times faster than the crappy old system they are using right from the start?

They want hardware compatibility that goes back to 1974, yet the O/S isn't written to be compatible with older versions of itself beyond a few iterations.
It's a marketing ploy using smoke and mirrors to sell you a new O/S, nothing more.
FZR1KG
 

Re: For FZ: Linux desktops then and now

Postby Sigma_Orionis » Tue Aug 05, 2014 6:02 am

SciFiFisher wrote:They don't actually like it. They just don't want to spend the money to do things right. Almost good enough is cheaper. :P


Same difference. They don't have the criteria to know whether it's almost good enough. They find the results acceptable; ergo, they like it.

SciFiFisher wrote:You forgot the really important part. The application was customized by a former employee (or contractor) who has long since gone. They didn't bother to create a manual detailing what they did. And they sure as hell didn't bother leaving a copy of the coding lying around. So every time the application crashes everyone prays that it will restore intact from the backup. And that the back up isn't more than a few days old. :lol:


And if by any chance the backup worked, they'll keep ignoring it till the next crisis. Therefore they must like it :P

And I make sure they know they ignored it. Why? Because I will not accept responsibility for a disaster caused by THEIR lack of interest. So I make sure the disaster is properly documented, with recommendations they won't follow, so they can't blackmail me into fixing it for free. Doesn't make me popular, but hey, I'm a sysadmin, I'm socially incompetent to begin with. So they're happy with their house of cards and I'm happy because I won't fix it for free. I think it's called WIN-WIN :P

FZR1KG wrote:You're kind of making my point here. :P

The industry is rife with obsolete software running on no longer supported O/S's that can't be migrated to newer hardware.
I have programs that run in some windows versions but not others so have to run a virtual O/S just to access them.
That's how I do my PCB and schematic designs. With software that is about 25 years old but still usable on an old O/S but totally unusable even with all compatibility setting's with news O/S's.
So why make a big deal about a new CPU that is software compatible but just slower than the latest wiz bang multi processor computer but is 100 times faster than the crappy old system they are using right from the start?

They want hardware compatibility that goes back to 1974 but the O/S isn't written to be compatible with older versions of itself beyond a few iterations.
It's a marketing ploy using smoke and mirrors to sell you a new O/S, nothing more.


No I'm not. You keep placing the blame on the vendor. I keep telling you that the vendor does it that way because it is what the customer wants; the vendor doesn't care, because its business is to make money, not to make good systems. The vendor makes money, the idiot (I mean the customer) gets his house of cards, again: win-win :P
Sic Transit Gloria Mundi
Sigma_Orionis
Resident Oppressed Latino
 
Posts: 4496
Joined: Mon May 27, 2013 2:19 am
Location: The "Glorious Socialist" Land of Chavez

Re: For FZ: Linux desktops then and now

Postby FZR1KG » Tue Aug 05, 2014 5:26 pm

Sigma_Orionis wrote:No I'm not. You keep placing the blame on the vendor. I keep telling you that the vendor does it that way because it is what the customer wants, the vendor doesn't care because it's business is to make money not make good systems. The vendor makes money, the idiot (i mean the customer) gets his house of cards, again: win-win :P


Ah grasshopper, you forget the first rule of marketing: you tell the customer what to think so you can sell your product. That's what they did, and it's bullshit.
Surely you were there for the big marketing push when home computers first appeared:
You can store recipes on it... rofl
Sure you can, but spending $4000 on a PC to store recipes, when a $2 book will do it without power, is totally retarded.
You can do budgeting with it... rofl
Sure you can, but how does your budget look now that you need a PC, regular software updates, the extra power consumption, modems and an internet connection that cost you more overall than hiring an accountant or a bookkeeper?

etc etc.

The computer market 30 years ago was in its infancy. No one but engineers, computer specialists or hobbyists knew anything about it.
The industry taught the market what it knows. They taught it crap to sell their crap.

They are hypocrites, which is my point here.
They claim the market wants backward compatibility, thus forcing hardware to be backward compatible, yet they don't follow their own reasoning because their O/S is not backward compatible.
FZR1KG
 

Re: For FZ: Linux desktops then and now

Postby Sigma_Orionis » Tue Aug 05, 2014 6:16 pm

Ah Young Padawan.

You're talking about the Home PC Market, in which 99% of PCs WILL be replaced by tablets within the next few years.

I'm talking about the CORPORATE market, which ought to have the resources, both technical and financial, to do it much better.

I can assure you it's exactly the way I describe it. A behemoth the size of GM can grab Oracle by the ear and say "YOU WANT OUR BUSINESS, WE WANT IT THAT WAY". And Oracle WILL comply so they can sell a couple of billion US$ in licenses, the same with IBM, SAP, M$, you name it.
Sic Transit Gloria Mundi
Sigma_Orionis
Resident Oppressed Latino
 
Posts: 4496
Joined: Mon May 27, 2013 2:19 am
Location: The "Glorious Socialist" Land of Chavez

Re: For FZ: Linux desktops then and now

Postby FZR1KG » Tue Aug 05, 2014 7:36 pm

Last I checked, the business market never spewed nonsense about hardware compatibility.
What they wanted was software compatibility.
Hardware was changed regularly to provide more RAM, more ROM, faster processing, more I/O, etc., and the O/S was just recompiled to suit.
The corporate world actually made great advances to new and better hardware, they pushed it forward requiring more powerful machines.
The race back then was better/faster hardware not compatible hardware.

The questions that were asked were: will it run the O/S that we have, and does it have XYZ compilers/programming languages?
Never: does it have a backward-compatible instruction set on the CPU?
That last one is the market MS made, and MS is now heavily involved in the corporate market as well.
FZR1KG
 

Re: For FZ: Linux desktops then and now

Postby Sigma_Orionis » Tue Aug 05, 2014 9:18 pm

Really?

SUN's (now Oracle's) SPARC64-based servers are backward compatible with the original 32-bit SPARC CPU, so you can run your old binaries.

HP/Intel's Itanium, despite being a 64-bit processor from the start, can run 32-bit code, so applications like Siebel can be recompiled without having to be modified to use the new 64-bit system calls (meaning the OS keeps the 32-bit system calls available in the API). As a matter of fact, when Intel was pretending to have everybody jump from the x86 line to Itanium, because they said 64 bits would ONLY be done on Itanium, they tried to sell people the idea of a compatibility mode with the x86 line. Until AMD made the 64-bit extensions for the x86 architecture.

HP's old PA-RISC supported both 32- and 64-bit binaries from the old days. HP bent over backwards so much that their Itanium-based servers implemented a PA-RISC compatibility mode in software (called Aries).

IBM's POWER7 can still run old binary code from the late 80s. At one point I had to speculate on the possibility of running Oracle's RDBMS version 8.0.5 (which is from the late 1990s) on modern IBM Power hardware (answer: it can, and IBM will support it, despite Oracle having discontinued support for 8.0.5 more than 10 years ago).

AFAIK the only ones that tried a move like that, compatibility be damned, were SUN in the early 90s, when moving from the Motorola 68000-based SUN3s to the SPARC-based SUN4s (which included a change of OS, from SunOS 4 (BSD-based) to Solaris (SysV.4-based), not to mention the shift in Sun's business from engineering workstations to servers), and DEC, when they moved from the old VAX processors to the then-new Alpha processor.
Sic Transit Gloria Mundi
Sigma_Orionis
Resident Oppressed Latino
 
Posts: 4496
Joined: Mon May 27, 2013 2:19 am
Location: The "Glorious Socialist" Land of Chavez

Re: For FZ: Linux desktops then and now

Postby FZR1KG » Wed Aug 06, 2014 4:06 am

Ah young student, there is a difference between changing bit width and changing instruction set.
Likewise from single CPU to multi-CPU to achieve throughput: compatibility there is a side effect rather than a design goal.
The fact that marketing can spin it that way is beside the point.
Not to mention that all of your examples came after the compatibility (clone) wars began in the early 1980s.
At that time there were few corporate systems on the 8086 bandwagon; most preferred to stay with minis and mainframes from major vendors.
The change of corporate computing from centralized to distributed happened after the clone wars.
By that stage much of the corporate mentality was influenced by the "importance" of compatibility.

So if you're going to give examples of compatibility being a high priority, you have to look before they got on that bandwagon,
in the days when CPU's were built from discrete components rather than a single chip.
Hardware back then advanced at phenomenal rates, and software bent around whatever hardware could run it fastest.
Now the reverse is true:
we get hardware developed to run certain O/S's faster.

To me this is backwards.
Designing a CPU is far harder than writing an O/S.
I know, I've done both. Hardware is magnitudes more difficult to design, debug, produce and update.
The O/S, however, is as far as I'm concerned now at the point, and has been for some time, where its development is self-perpetuating.
MS and other companies realised that if they wrote a good O/S then no one would ever need to upgrade or get another.
Suddenly the O/S had a GUI instead of a GUI running on an O/S.
MS tried to tie IE in with the O/S at one point. How retarded is that?
They pushed compatibility to keep their market share based on FUD, and actively did things to keep it that way.
They didn't want compatibility, they wanted dependence.

For decades the computer industry went smoothly from one CPU to another, and the main difference was a speed/memory increase.
That was the heyday of CPU development, not this crap where doubling the data bus is sold as some phenomenal feat.
CPU's got faster and faster, did more and addressed more. They had downloadable microcode, so you could choose the optimal instruction set for the application.
Back then, when we thought supercomputer, we thought MIPS in a single sequential process more so than anything else, though I/O was also key.
Now the common attitude is to think having 250,000 small CPU's is the same as having a CPU that is 250,000 times faster than the baseline.
That's all well and good, but when you have a sequential problem it's slow as fuck, and not in a good way.

Look at it this way: back in the days before the clone wars, no one thought much about compatibility, because most systems could be made compatible.
All you had to do was recompile your programs, and in some cases just run them as they were, interpreted.
If I had a Fortran program that ran in 1959, it would run faster in 1999.
Now that's compatibility.
I can't even get a 2005 C++ program to work with the 2010 version, from the same company. Guess who? Yep, MS.
Five years later and my code is junk and needs to be re-written. I had trouble compiling it under C++ 2008, three years later.
I found this out after I got a new PC, which naturally comes with a new O/S whether I like it or not, and my software doesn't run on it.

Here's hardware compatibility in the modern age: people keeping old PC's and laptops so they can run older software, because the later O/S's won't run it.
I have two laptops in the USA and two in Australia that I keep because they run older MS O/S's, so I can run software that no longer runs on the new ones.
Then I hear how the latest CPU's need to be compatible with the 4004 so they can run old code and stay binary compatible. Bullshit.
It's a joke that has gone on far too long, and people need to wake up, because they are being duped into accepting a fallacy.
FZR1KG
 

Re: For FZ: Linux desktops then and now

Postby FZR1KG » Wed Aug 06, 2014 4:31 am

Oh, forgot to add:
Don't mind me
I'm in the middle of a shit storm of stupid compatibility issues with software, using two laptops and different versions of compilers, because what I have here is a system that was claimed to be "compatible" but didn't turn out to be.
So I'm probably a bit more touchy than I should be... die you fucking piece of shit software mother fucking prick of a thing!!!! ...sorry. All good now.
I have programmer's Tourette's at the moment.
My humble appolog...fucking compile you shit of a fucking thing!!!
FZR1KG
 

Re: For FZ: Linux desktops then and now

Postby Sigma_Orionis » Wed Aug 06, 2014 4:50 am

FZR1KG wrote:Ah young student, there is a difference between going from bit width to instruction set change......


Tell that to the IBM System/360 introduced in 1964

Direct from the horse's mouth

Mainframe customers tend to have a very large financial investment in their applications and data. Some applications have been developed and refined over decades. Some applications were written many years ago, while others may have been written "yesterday." The ability of an application to work in the system or its ability to work with other devices or programs is called compatibility.

The need to support applications of varying ages imposes a strict compatibility demand on mainframe hardware and software, which have been upgraded many times since the first System/360™ mainframe computer was shipped in 1964. Applications must continue to work properly. Thus, much of the design work for new hardware and system software revolves around this compatibility requirement.

The overriding need for compatibility is also the primary reason why many aspects of the system work as they do, for example, the syntax restrictions of the job control language (JCL) that is used to control batch jobs. Any new design enhancements made to JCL must preserve compatibility with older jobs so that they can continue to run without modification. The desire and need for continuing compatibility is one of the defining characteristics of mainframe computing.

Absolute compatibility across decades of changes and enhancements is not possible, of course, but the designers of mainframe hardware and software make it a top priority. When an incompatibility is unavoidable, the designers typically warn users at least a year in advance that software changes might be needed.


FZR1KG wrote:Oh, forgot to add:
Don't mind me
I'm in the middle of a shit storm of stupid compatibility issues with software, using two laptops and different versions of compilers because what I have here is a system that was claimed to be "compatible" but didn't turn out to be.
So I'm probably a bit more touchy that I should be...die you fucking piece of shit software mother fucking prick of a thing!!!!...sorry. All good now.
I have programmers turrets at the moment.
My humble appolog...fucking compile you shit of a fucking thing!!!


No worries dude, arguments like these are great, makes me research and find stuff that is new to me and we get to pass the time.

BTW MostlySucks has a free version of their Visual C++ what-have-you 2013 thing; it might help.

Right now I am seeing how the idiots at corporate intend to take a datacenter that has given them 4 years of almost continuous trouble-free operation and turn it into shit. So, yeah, I hear you.

Besides, you and I are way too old and have been too long in this business to bitch at each other over some geeky point that 90% of the world doesn't understand, much less give a damn about :)
Sic Transit Gloria Mundi
Sigma_Orionis
Resident Oppressed Latino
 
Posts: 4496
Joined: Mon May 27, 2013 2:19 am
Location: The "Glorious Socialist" Land of Chavez

Re: For FZ: Linux desktops then and now

Postby FZR1KG » Wed Aug 06, 2014 6:50 am

Sigma_Orionis wrote:
FZR1KG wrote:Ah young student, there is a difference between going from bit width to instruction set change......


Tell that to the IBM System/360 introduced in 1964

Direct from the horse's mouth

Mainframe customers tend to have a very large financial investment in their applications and data. Some applications have been developed and refined over decades. Some applications were written many years ago, while others may have been written "yesterday." The ability of an application to work in the system or its ability to work with other devices or programs is called compatibility.

The need to support applications of varying ages imposes a strict compatibility demand on mainframe hardware and software, which have been upgraded many times since the first System/360™ mainframe computer was shipped in 1964. Applications must continue to work properly. Thus, much of the design work for new hardware and system software revolves around this compatibility requirement.

The overriding need for compatibility is also the primary reason why many aspects of the system work as they do, for example, the syntax restrictions of the job control language (JCL) that is used to control batch jobs. Any new design enhancements made to JCL must preserve compatibility with older jobs so that they can continue to run without modification. The desire and need for continuing compatibility is one of the defining characteristics of mainframe computing.

Absolute compatibility across decades of changes and enhancements is not possible, of course, but the designers of mainframe hardware and software make it a top priority. When an incompatibility is unavoidable, the designers typically warn users at least a year in advance that software changes might be needed.



Um, you realise that most of that series had downloadable microcode, basically what I was talking about, right?
Others had different instruction sets but provided emulators in hardware.
If you wanted compatibility you could have it. If you wanted speed you could have it.
You couldn't have both. If your program was written for a smaller CPU, the bigger CPU would run it, but as slow or slower on a clock-by-clock basis. Getting the extra speed needed faster clocking or conversion to the new instruction set.
Most of the time the microcode was loaded by a bootloader, so changing to a different instruction set required a reset. Not practical, but possible if you needed it. We don't need to go that far anymore, since technology has come a long way since then.

They also offered microcode for scientific applications: you could load it and get scientific (floating-point) calculations instead of the BCD math that the business instruction set ran. The two were obviously not compatible.

That's the whole point about downloadable microcode, though back in those days it was simply microcode, usually stored in core memory (magnetic cores). The really fast machines used hardwired logic rather than microcode, and thus needed emulation hardware for backward compatibility.
It was when modern CPU's arrived on a single chip that a separate bus or downloadable microcode became more difficult, so they settled on the cheaper alternative of ROM-based microcode due to silicon real estate. At that point the decision to stay with one CPU really locked you down at the machine-code level, but you could still emulate or simulate.

But just to be clear, I'm not saying compatibility isn't a top priority.
What I'm saying is that it does not translate to CPU instruction-set compatibility.
The IBM 360 you linked to is a classic example.
So are the Crays, the Primes, the Honeywells, the Cybers, etc.
They ran different CPU's, which were obviously not machine-code compatible, but the software, microcode or extra hardware made them compatible.
I can't get software compatibility now even though the hardware is machine-code compatible, yet they got software compatibility without hardware compatibility back then. We've gone backwards, not forwards, in that regard.

As a matter of interest, did you know that Motorola produced a 68000-360, a CPU with a scaled-down IBM 360 instruction set?
The microcode/nanocode was in ROM rather than downloadable, but there's what I was talking about:
a CPU of the modern era that could run code from as far back as the late 50's, because it ran changeable microcode.
Now if they did that in downloadable form, the PC would be hardware compatible with almost any CPU of the past, simply because its address and data bus widths are more than enough to handle any instruction set.

If you want hardware compatibility, that's the way to do it.
Imagine if we'd gone down that path what things would be like today:
1) I could have any O/S I like on my laptop without compromising its new features.
2) With multiprocessing CPU's I could run different O/S's at the same time on the same hardware.
3) If I needed a specific instruction set, I could download it. I could run IBM 360 programs or Cyber 18/20 programs faster than they were ever run.

Basically, the computer industry was slowed down. The trend changed from producing better, faster and more flexible CPU's to CPU's with backward machine-code compatibility, when there was no real need for it, as your example of the IBM 360 shows.

Sigma wrote:
No worries dude, arguments like these are great, makes me research and find stuff that is new to me and we get to pass the time.

BTW MostlySucks has a free version of their Visual C++ what-have-you 2013 thing, it might help.

Right now I am seeing how the idiots at corporate pretend to take a Datacenter that has given them 4 years of almost continous trouble-free operation and turn it into shit. So, yeah, I hear you.

Besides, you and I are way too old and have been too long in this business to bitch at each other over some geeky point that 90% of the world doesn't understand, much less give a damn about :)


Ain't that the truth. Not that I'm bitching at you. It's the industry I'm pissed off with.
Remember, I was a design engineer back in the days when CPU development was really starting to take off, and I knew how to design CPUs and their peripherals, such as caches. So watching the industry go down the path it did was frustrating as hell. Back in those days I'd debate with other design engineers about the trends and we'd speculate about where it was heading. So far our predictions back then have been pretty much spot on. Not bad for predictions made close to three decades ago.
The only one that hasn't happened yet is one I predicted would happen but hasn't quite, though the trend is heading that way, so it's just a matter of time. Maybe I should check again, it's been a while; it may have happened since I last checked. PM me if you want to know what that was, as I don't want to post it publicly.

I tried the 2013 C++ but it's not compatible with the 2005 version either.
The 2008 version lets you update your code, but all it does is break it.
The later versions don't even bother trying.

It is fun going down memory lane though.
Most people don't even know what microcode is, or if they have heard of it, have no idea what it does or how it works, yet their lives revolve around machines that use it. Not that I care, I just find it amusing at times. Especially when hearing people talk about computers and you know they don't really know much about them, but they think they do and are happy to share their wealth of knowledge with everyone. Little do they know the person next to them, who looks like a blue-collar laborer, designed such systems for a living before they were even born.
Even more amusing is when people know I'm an electronics engineer and ask my advice on which computer to get.
I don't know. Why the hell would I?
I don't get involved with the latest CPU or benchmarks because I find it boring and I'm kind of pissed off with the whole industry anyway.
People overclocking CPUs to get them to run a bit faster. I know a guy who spent days tweaking a PC to get a few percent more performance.
Yay, he managed to tweak it to do benchmark software faster. Excellent. Good job. Here's a dog biscuit for you. lol

I'm also probably very biased. I'm a hardware guy. I love design. Digital and analog. I love to make new designs but that side of the industry is dying or so specialized it's not worth the effort. So I guess watching what happened was distressing to me.
Maybe I have 8086 PTSD?
Ever seen Charlton Heston at the end of "Planet of the Apes"?
That's me banging my head asking, "Why this piece of shit processor? Why? WHY!!! arrrgggghhhhh. Damn you to hell!!!"
FZR1KG
 

Re: For FZ: Linux desktops then and now

Postby FZR1KG » Wed Aug 06, 2014 6:53 am

Oh, on NPR today I heard about a guy who wrote a book about the most important company in the world, Intel... arrrgghhhh DAMN YOU ALL TO HELLL!!!!

Link: http://www.harpercollins.com/9780062226 ... el-trinity
FZR1KG
 

Re: For FZ: Linux desktops then and now

Postby Sigma_Orionis » Wed Aug 06, 2014 3:24 pm

FZR1KG wrote:Um, you realise that most of that series had downloadable microcode, basically what I was talking about, right?
Others had different instruction sets but provided emulators in hardware.
If you wanted compatibility you could have it. If you wanted speed you could have it.
You couldn't have both. If your program was written for a smaller CPU, the better CPU would run it, but as slow or slower on a clock-by-clock basis. Getting the extra speed needed faster clocking or conversion to the new instruction set.
Most of the time the microcode was loaded by a bootloader, so changing to a different instruction set required a reset. Not practical, but possible if you needed it. We don't need to go that far anymore, since technology has come a long way since then.

They also offered microcode for scientific applications: you could just download it and get scientific (floating-point) calculations instead of the BCD math the business instruction set ran. The two were not compatible, obviously.

That's the whole point of downloadable microcode, though back in those days it was simply microcode that was usually stored in core memory (magnetic cores). The really fast machines used hardwired logic rather than microcode and thus needed emulation hardware for backward compatibility.
It was when modern CPUs came to be on a single chip that having a separate bus or downloadable microcode became more difficult, so they settled on the cheaper alternative of ROM-based microcode due to silicon real estate. At that point the decision to stay with one CPU really locked you down at the machine-code level, but you could still emulate or simulate.

But just to be clear, I'm not saying compatibility isn't a top priority.
What I'm saying is that it does not translate to CPU instruction set compatibility.
The IBM 360 you linked to is a classic example.
So are the Crays, the Primes, the Honeywells, the Cybers, etc.
They ran different CPUs which were obviously not machine-code compatible, but the software, microcode, or extra hardware made them compatible.
I can't get software compatibility now even though the hardware is machine-code compatible, yet they got software compatibility without hardware compatibility back then. We've gone backwards, not forwards, in that regard.

As a matter of interest, did you know that Motorola produced the 68000-360, a CPU with a scaled-down IBM 360 instruction set?
The microcode/nanocode was in ROM rather than downloadable, but that's what I was talking about:
CPUs of the modern era that can run code from as far back as the late '50s, because they ran changeable microcode.
Now if they did that in a downloadable form, the PC would be hardware compatible with almost any CPU of the past, simply because its address and data bus widths are more than enough to execute any instruction set.

If you want hardware compatibility, that's the way to do it.
Imagine if we'd gone down that path what things would be like today.
1) I could have any O/S I like on my laptop without compromising its new features.
2) With multiprocessing CPUs I could run different O/Ss at the same time on the same hardware.
3) If I needed a specific instruction set, I could download it. I could run IBM 360 programs or Cyber 18/20 programs faster than they were ever run.

Basically the computer industry was slowed down. The trend changed from producing better, faster, and more flexible CPUs to CPUs with backward machine-code compatibility, when there was no real need for it, as your example of the IBM 360 shows.


Actually, I think that IBM (to use my example) implemented 99% of the compatibility customers wanted through the use of hypervisors. After all, they virtually (pun not intended :P ) invented the whole damned virtual machine thing. And they created an OS specialized to run as a hypervisor: the VM operating system, which is the spiritual granddaddy of what I use on Intel hardware: VMware.

By using VMware, I run stuff that was written for whatever version of Windows Server, or Linux (or almost any other OS written for IBM-compatible Intel hardware), on modern hardware with virtually no changes.

I've got a Payroll System based on Peoplesoft 7.5, that thing is close to 15 years old. It's only certified on Windows 2000, and the Database it uses (Oracle 8.0.5) won't run on a modern version of Linux (originally the database was on an IBM RS/6000 S70).

We moved it to a couple of VMs over VMware; the application server still runs on Windows 2000, and the RDBMS server runs on Red Hat Linux 7.3 (fortunately I could upgrade the RDBMS server to Oracle 8.1.7 with no compatibility issues).

Of course, there's a performance penalty, but these days CPUs run at gigahertz speeds and you can have 4 or 6 CPUs on the same slab of silicon (so 24- or 32-processor machines are affordable for someone who is not a multinational bank), so you fix it with powerful CPUs (and a good I/O subsystem over a SAN too :P ).

Hey, I am pretty sure that, barring copyright problems, you could solve the binary compatibility issues through the use of downloadable microcode. The thing is that customers want it all: full compatibility and the new stuff, running at the same time. And they want it dirt cheap too.

Of course, what you don't spend on specialized hardware, you spend on software licenses; VMware is expensive as hell.

FZR1KG wrote:Ain't that the truth. Not that I'm bitching at you. It's the industry I'm pissed off with.
Remember, I was a design engineer back in the days when CPU development was really starting to take off, and I knew how to design CPUs and their peripherals, such as caches. So watching the industry go down the path it did was frustrating as hell. Back in those days I'd debate with other design engineers about the trends and we'd speculate about where it was heading. So far our predictions back then have been pretty much spot on. Not bad for predictions made close to three decades ago.
The only one that hasn't happened yet is one I predicted would happen but hasn't quite, though the trend is heading that way, so it's just a matter of time. Maybe I should check again, it's been a while; it may have happened since I last checked. PM me if you want to know what that was, as I don't want to post it publicly.


You betcha I want to know :)

I do agree that the industry has stagnated; we're still implementing stuff that was invented in the '60s and '70s (hardware protection rings and virtual machines, for example).

FZR1KG wrote:I tried the 2013 C++ but it's not compatible with the 2005 version either.
The 2008 version lets you update your code, but all it does is break it.
The later versions don't even bother trying.


The solution that worked for me probably won't work for you; my problem was for server machines, yours is a workstation problem.
How do you interface with the PSoC hardware? Serial port? USB?

And I'm pretty sure the customer won't accept running your old code on a Windows XP VM through VMware Player or client Hyper-V or VirtualBox (all of which are free, of course).

FZR1KG wrote:It is fun going down memory lane though.
Most people don't even know what microcode is, or if they have heard of it, have no idea what it does or how it works, yet their lives revolve around machines that use it. Not that I care, I just find it amusing at times. Especially when hearing people talk about computers and you know they don't really know much about them, but they think they do and are happy to share their wealth of knowledge with everyone. Little do they know the person next to them, who looks like a blue-collar laborer, designed such systems for a living before they were even born.


I would welcome you to my world, but I think you got here about the same time I did ;)

FZR1KG wrote:Even more amusing is when people know I'm an electronics engineer and ask my advice on which computer to get.
I don't know. Why the hell would I?


Wait till they start asking you about Smartphones and Tablets. :P

FZR1KG wrote:I don't get involved with the latest CPU or benchmarks because I find it boring and I'm kind of pissed off with the whole industry anyway.
People overclocking CPUs to get them to run a bit faster. I know a guy who spent days tweaking a PC to get a few percent more performance.
Yay, he managed to tweak it to do benchmark software faster. Excellent. Good job. Here's a dog biscuit for you. lol


Oh yeah, you should have listened to the people who were excited about Plug and Play or hot-plug PCI, when Tandem was doing stuff like that 20 years before those came out.

And let's not talk about how everyone is excited about "the cloud". Virtualization (which is at the heart of cloud hardware) and centralized access are 1970s stuff. Now it just has a pretty layer of eye candy.

FZR1KG wrote:I'm also probably very biased. I'm a hardware guy. I love design. Digital and analog. I love to make new designs but that side of the industry is dying or so specialized it's not worth the effort. So I guess watching what happened was distressing to me.
Maybe I have 8086 PTSD?
Ever seen Charlton Heston at the end of "Planet of the Apes"?
That's me banging my head asking, "Why this piece of shit processor? Why? WHY!!! arrrgggghhhhh. Damn you to hell!!!"


Hey, I prefer stuff solved in hardware. Since it's more expensive to implement and change than software, it tends to be less buggy.

And yes, I have used Charlton Heston's "Planet of the Apes" line plenty of times.

Edited for Clarity (TWICE!)
Last edited by Sigma_Orionis on Wed Aug 06, 2014 3:32 pm, edited 2 times in total.
Sic Transit Gloria Mundi
User avatar
Sigma_Orionis
Resident Oppressed Latino
 
Posts: 4496
Joined: Mon May 27, 2013 2:19 am
Location: The "Glorious Socialist" Land of Chavez

Re: For FZ: Linux desktops then and now

Postby Sigma_Orionis » Wed Aug 06, 2014 3:25 pm

FZR1KG wrote:On, on NPR today I heard about a guy that wrote a book about the most important company in the world, Intel... arrrgghhhh DAMN YOU ALL TO HELLL!!!!

Link: http://www.harpercollins.com/9780062226 ... el-trinity


What can you expect from the same people who think that Steve Jobs was a legendary programmer? :roll:

Re: For FZ: Linux desktops then and now

Postby Sigma_Orionis » Wed Aug 06, 2014 3:35 pm

And here is a trip down memory lane for you (in more ways than one):

The Dimension 68000

This thing really tried to be all things to all people. Read it and weep.

Re: For FZ: Linux desktops then and now

Postby FZR1KG » Wed Aug 06, 2014 4:02 pm

Sigma wrote:Actually, I think that IBM (to use my example) implemented 99% of the compatibility customers wanted through the use of hypervisors. After all, they virtually (pun not intended :P ) invented the whole damned virtual machine thing. And they created an OS specialized to run as a hypervisor: the VM operating system, which is the spiritual granddaddy of what I use on Intel hardware: VMware.

By using VMware, I run stuff that was written for whatever version of Windows Server, or Linux (or almost any other OS written for IBM-compatible Intel hardware), on modern hardware with virtually no changes.

I've got a Payroll System based on Peoplesoft 7.5, that thing is close to 15 years old. It's only certified on Windows 2000, and the Database it uses (Oracle 8.0.5) won't run on a modern version of Linux (originally the database was on an IBM RS/6000 S70).

We moved it to a couple of VMs over VMware; the application server still runs on Windows 2000, and the RDBMS server runs on Red Hat Linux 7.3 (fortunately I could upgrade the RDBMS server to Oracle 8.1.7 with no compatibility issues).

Of course, there's a performance penalty, but these days CPUs run at gigahertz speeds and you can have 4 or 6 CPUs on the same slab of silicon (so 24- or 32-processor machines are affordable for someone who is not a multinational bank), so you fix it with powerful CPUs (and a good I/O subsystem over a SAN too :P ).

Hey, I am pretty sure that, barring copyright problems, you could solve the binary compatibility issues through the use of downloadable microcode. The thing is that customers want it all: full compatibility and the new stuff, running at the same time. And they want it dirt cheap too.

Of course, what you don't spend on specialized hardware, you spend on software licenses; VMware is expensive as hell.


Virtualization is a fancy name for simulation via a debugger, even if the "debugger" is pretty crude, being basically a special instruction that does a system call.

You will notice though:
With CP-40, the hardware's supervisor state was virtualized as well, allowing multiple operating systems to run concurrently in separate virtual machine contexts.


Exactly where I was saying we'd be now had they not gone down the path they did with the PC.
I mean hell, the O/S I wrote way back in the early '80s did the same thing on a Z80, a CPU that didn't have special system instructions. I used the RST 38 instruction, which was not used on my system: it was designed as a call into ROM in the lowest part of memory, but my system had special hardware that made it execute ROM from high memory, so nobody used the RST instructions. As a bonus, its opcode was 0xFF, which as a jump displacement is -1, so you could also make conditional RST 38 instructions using the JR (jump relative) instruction.
That way I could run CP/M, my own O/S, or the native interpreter, and debug code by inserting the instruction.
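A rough sketch of that RST 38 trick, in Python rather than Z80 assembly (only the 0xFF and 0x76 opcode values come from real Z80 encodings; the service numbers and handler table are invented for illustration): a one-byte opcode traps into whatever O/S owns the handler table, which is all a "system call" instruction really is.

```python
# Toy fetch-execute loop: opcode 0xFF (RST 38h on a Z80) traps to the OS,
# 0x76 (HALT) stops the machine. All other opcodes are ignored here.

RST38 = 0xFF
HALT = 0x76

def execute(memory, pc, os_services):
    """Run until HALT, dispatching 0xFF traps through a service table."""
    log = []
    while True:
        op = memory[pc]
        pc += 1
        if op == RST38:
            svc = memory[pc]          # byte after the trap selects a service
            pc += 1
            log.append(os_services[svc]())
        elif op == HALT:
            return log
        # other opcodes: no-ops in this sketch

# Hypothetical service table standing in for "whichever O/S is loaded"
services = {0x01: lambda: "console write", 0x02: lambda: "disk read"}
prog = [0x00, RST38, 0x01, RST38, 0x02, HALT]
print(execute(prog, 0, services))     # ['console write', 'disk read']
```

Swapping in a different `services` table is the software analogue of booting CP/M versus the homebrew O/S: the trap opcode stays the same, only the handler behind it changes.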

I have noticed, though, that there's been a push toward virtualization software in the last decade or so. Only about 45 years behind the rest of the industry. :roll:
Likewise, as you mentioned, hardware provided back in the '60s, and still provided today, is not used by MS O/Ss.
Maybe the push toward virtualization will encourage that, and some young buck will come up with the brilliant concept of using extra layers of rings and running an O/S inside an O/S, so that Windows-type O/Ss are the secondary O/S instead of the first, and suddenly we can have multiple O/Ss running on one system at the same time. That will, however, take MS getting its monumental head out of its ass long enough to realise that what it's been passing off as an O/S is actually a clusterfuck devil child of a shitty O/S and a GUI rolled into one, then passed off as some great system.
FZR1KG
 

Re: For FZ: Linux desktops then and now

Postby FZR1KG » Wed Aug 06, 2014 4:14 pm

Sigma_Orionis wrote:
FZR1KG wrote:Oh, on NPR today I heard about a guy who wrote a book about the most important company in the world, Intel... arrrgghhhh DAMN YOU ALL TO HELLL!!!!

Link: http://www.harpercollins.com/9780062226 ... el-trinity


What can you expect from the same people who think that Steve Jobs was a legendary programmer? :roll:


W.T.F. ?
FZR1KG
 

Re: For FZ: Linux desktops then and now

Postby FZR1KG » Wed Aug 06, 2014 4:21 pm

Sigma wrote:The solution that worked for me, probably won't work for you, my problem was for server machines. Yours is a workstation problem.
How do you interface with the PSoC hardware? Serial Port? USB?


RS232
When I designed it, RS232 was standard on laptops and USB was a PITA as well as expensive.
Now USB is the go and RS232 is done via a USB-to-RS232 converter.
Not that I mind, as RS232 was never really designed for high-speed data transfers. It was meant for short-distance work in semi-noisy environments.
The one good thing about it, though, is that you can make the interface electrically isolated via opto-couplers.
Not possible (practically) with USB, since it's source-powered.

The problem, though, is that even my exe file won't run under the later O/Ss.
So I can't run the exe and I can't re-compile. No software compatibility, and no object code compatibility either.
But they still push the CPU compatibility line. What's the good of that if the O/S won't support it?
FZR1KG
 

Re: For FZ: Linux desktops then and now

Postby Cyborg Girl » Wed Aug 06, 2014 4:24 pm

You guys have utterly lost me with the hardware stuff, however

Sigma wrote:Hey I prefer stuff solved in hardware. Since, it's more expensive to implement and change than software it tends to be less buggy


Don't know about pure hardware (i.e. etched in), but my experience is that flashable firmware tends to be buggy as shit, at least on the desktop/small server end. e.g. Pretty much all my laptops can be reliably made to crash during POST by various methods (usually plugging and unplugging a USB device works).

Re virtualization, I think the deal is that it's now widely available on cheapo commodity hardware, and said commodity hardware has gotten quite powerful; so dividing up your tasks into VMs is usually feasible, and pays for itself when you're running a lot of stuff. But yes, it is old tech, and it always annoys me when interviewers treat it as cutting-edge must-know material that a veteran sysadmin couldn't learn in ~5 minutes.

(I'm not so sure about the security end of things though. Hypervisors usually present a very small attack surface, but I've seen networks where having access to one VM would effectively give SSH access everywhere. By comparison it might very well be more secure to run all your services on one Linux server, with AppArmor restrictions for each service. Of course, with Linux kernel vulnerabilities a dime a dozen these days...)
User avatar
Cyborg Girl
Boy Genius
 
Posts: 2138
Joined: Mon May 27, 2013 2:54 am

Re: For FZ: Linux desktops then and now

Postby Sigma_Orionis » Wed Aug 06, 2014 4:38 pm

FZR1KG wrote:Virtulization is a fancy name for simulation via a debugger, even though the debugger is pretty crude being basically a special instruction that does a system call.


Well, yeah. How else can you write an OS for a system with a new processor that has no software yet? By using what we now call "virtualization" :P

FZR1KG wrote:You will notice though:
With CP-40, the hardware's supervisor state was virtualized as well, allowing multiple operating systems to run concurrently in separate virtual machine contexts.



Exactly where I was saying we'd be now had they not gone down the path they did with the PC.
I mean hell, the O/S I wrote way back in the early '80s did the same thing on a Z80, a CPU that didn't have special system instructions. I used the RST 38 instruction, which was not used on my system: it was designed as a call into ROM in the lowest part of memory, but my system had special hardware that made it execute ROM from high memory, so nobody used the RST instructions. As a bonus, its opcode was 0xFF, which as a jump displacement is -1, so you could also make conditional RST 38 instructions using the JR (jump relative) instruction.
That way I could run CP/M, my own O/S, or the native interpreter, and debug code by inserting the instruction.


Extremely cool, considering that in those days that stuff was considered "esoteric", even more cool that you implemented it yourself. I don't know enough to do stuff like that, I just use it :)

FZR1KG wrote:I have noticed, though, that there's been a push toward virtualization software in the last decade or so. Only about 45 years behind the rest of the industry. :roll:
Likewise, as you mentioned, hardware provided back in the '60s, and still provided today, is not used by MS O/Ss.
Maybe the push toward virtualization will encourage that, and some young buck will come up with the brilliant concept of using extra layers of rings and running an O/S inside an O/S, so that Windows-type O/Ss are the secondary O/S instead of the first, and suddenly we can have multiple O/Ss running on one system at the same time. That will, however, take MS getting its monumental head out of its ass long enough to realise that what it's been passing off as an O/S is actually a clusterfuck devil child of a shitty O/S and a GUI rolled into one, then passed off as some great system.


Well yeah, I bitched about that as well.

Now, supposedly the Pro and Enterprise editions of Windows 8 provide a type 2 hypervisor, and it supports XP as a guest OS. And if (as I'd bet is the case) your Windows 8.1 is the "entry level" edition, you could use VMware Player or VirtualBox. I suppose you could make a kludge between a Windows XP VM and a type 2 hypervisor that allows USB passthrough, to connect your USB/RS232 adapter and get your software running. Of course, that requires a Windows XP VLK license, which you probably don't have, which explains why you're knee deep trying to get that code to run on Windows 8.1.

It Sucks

Re: For FZ: Linux desktops then and now

Postby FZR1KG » Wed Aug 06, 2014 4:43 pm

Gullible Jones wrote:You guys have utterly lost me with the hardware stuff, however

Sigma wrote:Hey I prefer stuff solved in hardware. Since, it's more expensive to implement and change than software it tends to be less buggy


Don't know about pure hardware (i.e. etched in), but my experience is that flashable firmware tends to be buggy as shit, at least on the desktop/small server end. e.g. Pretty much all my laptops can be reliably made to crash during POST by various methods (usually plugging and unplugging a USB device works).


Flashable firmware is software, GJ, not hardware.
They call it firmware because it's not easy to change and it's non-volatile, so it stays if power is removed, for years.
The hardware aspect is the logic gates, the CPU, etc. They are fixed and can't be changed.
Though when we get into FPGAs, PLAs, and PALs, that simplification gets a little confusing.
Even with the PSoC I use there is no clear line, as it's programmable hardware.



GJ wrote:Re virtualization, I think the deal is that it's now widely available on cheapo commodity hardware, and said commodity hardware has gotten quite powerful; so dividing up your tasks into VMs is usually feasible, and pays for itself when you're running a lot of stuff. But yes, it is old tech, and it always annoys me when interviewers treat it as cutting-edge must-know material that a veteran sysadmin couldn't learn in ~5 minutes.


I was doing it on a Z80 in the early '80s with my own software. No reason it can't be done with any CPU. IBM did it with their early stuff, which is less powerful than modern simple single-chip micros. What it requires is that you start your design with that in mind. The hardware to do this has been around since CPUs were invented; only its efficiency/throughput varies, and most would be surprised how little it's affected.

GJ wrote:(I'm not so sure about the security end of things though. Hypervisors usually present a very small attack surface, but I've seen networks where having access to one VM would effectively give SSH access everywhere. By comparison it might very well be more secure to run all your services on one Linux server, with AppArmor restrictions for each service. Of course, with Linux kernel vulnerabilities a dime a dozen these days...)


This is why modern CPUs and older mainframes have multiple rings.
The real O/S runs at level 0; the others run at level 1 or 2, depending on how many rings the CPU supports.
There is a reason MS uses only two rings: it was not designed to use virtualization from the ground up.
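A minimal sketch of the ring idea (the ring assignments and operation names here are illustrative, not any real O/S's layout): each operation demands a privilege level, and a caller sitting in a less-privileged ring gets trapped instead of served, which is what lets a ring-0 hypervisor sit safely underneath guest O/Ss.

```python
# Toy ring model: lower ring number = more privilege. Ring 0 is the
# hypervisor / real O/S; guests and applications sit in outer rings.
# These operation-to-ring mappings are invented for illustration.

RING_REQUIREMENTS = {
    "io_port_access":    0,   # real O/S or hypervisor only
    "page_table_update": 0,
    "driver_call":       1,   # guest-O/S kernel
    "syscall":           2,   # guest-O/S services
    "user_compute":      3,   # applications
}

def attempt(current_ring, operation):
    """Allow the operation only if the caller's ring is privileged enough."""
    required = RING_REQUIREMENTS[operation]
    if current_ring <= required:
        return "ok"
    return "trap to ring %d" % required   # hardware faults to the inner ring

print(attempt(0, "io_port_access"))    # ok
print(attempt(3, "page_table_update")) # trap to ring 0
```

With only two rings in use, a guest O/S has to share a ring with either the hypervisor or the applications; extra rings let each layer trap the one outside it, which is the "O/S inside an O/S" arrangement described above.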
FZR1KG
 

Re: For FZ: Linux desktops then and now

Postby Sigma_Orionis » Wed Aug 06, 2014 4:45 pm

Gullible Jones wrote:Don't know about pure hardware (i.e. etched in), but my experience is that flashable firmware tends to be buggy as shit, at least on the desktop/small server end. e.g. Pretty much all my laptops can be reliably made to crash during POST by various methods (usually plugging and unplugging a USB device works).


I've never seen that happen on the hardware I use, so while not impossible, it's rare. Hell, one of the laptops at work wouldn't get past the POST sequence because the former user SOMEHOW broke the plastic guides in two USB ports and the connectors were shorted. And as Zee says, it's firmware; that is, simply software burned into an EPROM.

Gullible Jones wrote:Re virtualization, I think the deal is that it's now widely available on cheapo commodity hardware, and said commodity hardware has gotten quite powerful; so dividing up your tasks into VMs is usually feasible, and pays for itself when you're running a lot of stuff. But yes, it is old tech, and it always annoys me when interviewers treat it as cutting-edge must-know material that a veteran sysadmin couldn't learn in ~5 minutes.


Yup, spot on, as Zee would say. Virtualization was originally expensive to do and, as Zee says, REQUIRED designing stuff to be virtualized. Why do you think AMD and Intel provide hardware support for it? Because it wasn't designed correctly originally.

Gullible Jones wrote:(I'm not so sure about the security end of things though. Hypervisors usually present a very small attack surface, but I've seen networks where having access to one VM would effectively give SSH access everywhere. By comparison it might very well be more secure to run all your services on one Linux server, with AppArmor restrictions for each service. Of course, with Linux kernel vulnerabilities a dime a dozen these days...)

Virtualization is NOT inherently secure, AND each virtual machine SHOULD be secured separately.

I've read about ways to hack your way into the hypervisor OS from a hacked VM.

Edited for clarity THRICE!
Last edited by Sigma_Orionis on Wed Aug 06, 2014 4:51 pm, edited 3 times in total.
