Long live the king

Postby Cyborg Girl » Thu Jul 31, 2014 9:29 pm

of operating systems, that is. At least in the eyes of many a sysadmin...

http://www.infoworld.com/t/enterprise-a ... ife-247456

Here's to hoping for an x86-64 port.

Mind, I have only used OpenVMS in the most brief and cursory fashion, and spent most of that time being annoyed by weird CLI keybindings and lack of tab completion... But those things can be fixed. Linux's tendency to harbor kernel vulnerabilities, not so much.

Re: Long live the king

Postby Sigma_Orionis » Thu Jul 31, 2014 11:01 pm

In the eyes of THIS sysadmin, THIS is the King and Patron Saint of Operating Systems.

VMS IS very good, no doubt about it, even though I always found it irritating as hell. But it would be stupid of me not to "Render to Caesar the things that are Caesar's".

I do have an OpenVMS VM (running on SIMH) on my PC. I even joined DECUS (these days it's called ENCOMPASS) and got a hobbyist license so I could play with it.

So, this sysadmin is eagerly waiting for the fall of 2015, when there MIGHT be an operational Multics emulator available.
Sic Transit Gloria Mundi

Re: Long live the king

Postby Cyborg Girl » Fri Aug 01, 2014 12:56 am

But but... OS that actually runs servers and stuff! And has a more flexible permissions system than UNIX! And, don't forget, is invulnerable to a variety of common memory exploits!

:P

Edit: though I did find out (just today actually) that Multics had similar capabilities to VMS, with 6 hardware privilege rings.

Edit 2: hey, I wonder if it would be possible to emulate hardware privilege rings in kernel space (without using managed code for userspace programs)?

Re: Long live the king

Postby Sigma_Orionis » Fri Aug 01, 2014 2:36 am

Gullible Jones wrote:But but... OS that actually runs servers and stuff! And has a more flexible permissions system than UNIX! And, don't forget, is invulnerable to a variety of common memory exploits!

:P

Edit: though I did find out (just today actually) that Multics had similar capabilities to VMS, with 6 hardware privilege rings.


It was 8 rings, and all those features were thought up at least 5 years before DEC started working on the project where VMS was born :P

Gullible Jones wrote:Edit 2: hey, I wonder if it would be possible to emulate hardware privilege rings in kernel space (without using managed code for userspace programs)?


According to this it's apparently possible (I don't know about the userspace stuff though), and guess who did it first? Multics, on a GE 645, which didn't have hardware support for it.
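
Just to make the idea concrete, here's a toy sketch in C (all names made up, and this is only the gist, not how Multics or any real kernel actually implements it): emulating rings in software boils down to tagging each protection domain with a ring number and checking it on every cross-domain call.

    #include <stdint.h>
    #include <stdbool.h>

    /* Toy model: lower ring number = more privileged, as in Multics. */
    typedef struct {
        uint8_t ring;                /* ring the caller currently runs in */
    } context_t;

    typedef struct {
        uint8_t max_caller_ring;     /* least privileged ring allowed through */
        void  (*entry)(void);        /* the gated entry point */
    } gate_t;

    /* Software "call gate": check the ring before transferring control. */
    bool call_gate(const context_t *ctx, const gate_t *g)
    {
        if (ctx->ring > g->max_caller_ring)
            return false;            /* caller not privileged enough */
        g->entry();                  /* a real system would also switch stacks etc. */
        return true;
    }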
Sic Transit Gloria Mundi

Re: Long live the king

Postby Sigma_Orionis » Fri Aug 01, 2014 2:44 pm

Finally managed to understand what you meant by "Managed Code in Userspace" (guess I wasn't in full form yesterday).

From what I understand, what Microsoft termed "managed code" (i.e. the use of runtime modules that provide an additional layer of abstraction, isolating object code from the rest of the system) wasn't in use in those days. Hell, one of the complaints about Multics was how complex and unwieldy it was (from the late-60s, early-70s point of view anyway). And its primary development language (a subset of PL/I) doesn't make any mention of using runtime modules in that way; it mentions direct generation of native object code. So, without being an expert in the matter (SURPRISE! :P), I'd say their implementation of the ring architecture did not use managed code.
Sic Transit Gloria Mundi

Re: Long live the king

Postby FZR1KG » Fri Aug 01, 2014 4:09 pm

Ah the OS issue.
IMHO effort should be put into writing a new OS, one that takes advantage of the primary CPU hardware security available.
Scrap the past, start again with the knowledge that currently exists about both CPU hardware and exploits.

For me, writing a program is like writing a novel. Almost everyone can read and write at some level, but few can actually write a good best seller.
What we have today are OS's that are the "50 Shades of Grey" of the computer world: crap written for the masses.

While some of the old OS's are good, they really aren't up to scratch for what would ideally be a new standard to take us away from the poor OS's of today:
bugs, security issues, poor use of the CPU protections provided, regular crashes, poor real-time response, lack of speed, etc.

Oh, and times change: many older OS's were criticized for being too large, at sizes over 100k.
Just think about that. Multics at its largest was about 0.5 meg, unless I'm mistaken. A true monster!! :roll:

I've seen a lot of things in this industry, from shit OS's that take hundreds of megs, to compilers that generate over a meg of code for a hello world program, in straight text, no graphics. W.T.F.
Even with all that, we need to run security patches, bug fixes and antivirus software that all need regular updates and often cause problems themselves.

And don't get me started on device drivers. I have issues with the touchpad on this Samsung Chronos, as have many others.
It's been over a year and no one has a solution. The device driver is over 100 megs in size.
I design with small CPU's that implement touch devices; they run on less than 16K and give you all the information you want via I2C, serial or USB.
W.T.F. is wrong with the computer world when a small CPU can already control all the hardware to give you XY positioning, yet your driver is 100 megs and can't seem to work right?

Re: Long live the king

Postby Sigma_Orionis » Fri Aug 01, 2014 4:59 pm

Well, I for one don't think that Multics is going anywhere. Despite the influence it had on modern OSs, it was never popular; basically, GE got out of the IT business more than 30 years ago, and Honeywell/Bull didn't know what to do with it. The last Multics installation in the world (at a Canadian Ministry of Defense site, of all places) was shut down for good 15 years ago.

Secondly, Multics was pretty much tied to the hardware it was designed around. And, to a point, so is OpenVMS.

I just want an emulator to play with, because I want to see it running, nothing else.

Whatever the future brings, it most probably won't be OpenVMS or any other old OS, and it certainly won't be Multics.

A lot of the features both had will most certainly be implemented in future designs. Hell, the whole concept of what we call "Virtualization" these days was being used by IBM in the late 70s.

Zee's metaphor of software development as writing a novel looks to me like a pretty good analogy. Once in a while you get something that is both technically good and popular. The rest of the time you get stuff that is popular but on the inside it sucks. But, because it's popular (and in this business, compatibility kills good every time), huge efforts are made to find workarounds for its shortcomings.

And regarding the size of modern software, it has to do with several things.

- Portability
Most stuff these days is written in high-level languages. As Zee has pointed out earlier, compilers produce object code that is optimized "just enough", but that's about it. I don't know how common this is, but in most C compilers (and C doesn't really qualify as a high-level language) you usually have to decide whether to optimize for size, for speed, or for a balance between the two; there's a rough sketch of that trade-off below. I think that in most compilers for other languages the object code is optimized for speed, which includes lots of bloat.

Of course, your software is infinitely more maintainable (and of course portable) if your code is in a high-level language, and even more so if it's in an object-oriented language (at least theoretically anyways :P), at the cost of bloat. That's why these days it's normal for a server to have 64-128 GB of RAM and disk space in the high-gigabyte range.
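
The rough sketch of that size-vs-speed choice mentioned above (gcc shown purely as an example compiler; the function itself is just a stand-in):

    /* Build the same translation unit two ways and compare the object code:
     *   gcc -Os -c checksum.c   -> optimize for size
     *   gcc -O2 -c checksum.c   -> optimize for speed (may unroll or vectorize
     *                              the loop, usually producing larger code)
     */
    #include <stddef.h>
    #include <stdint.h>

    uint32_t checksum(const uint8_t *buf, size_t len)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i < len; i++)
            sum += buf[i];
        return sum;
    }

Same source, noticeably different object code size: that's the trade-off the compiler makes you pick.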

- APIs.

This of course is related to the above: you use APIs to do a lot of stuff, otherwise you spend all your time reinventing the wheel instead of getting something done. The cost? Bloat.

- GUIs.

Sure Zee, I can believe you can have low-overhead GUIs, but that usually requires the programmer to do all the heavy lifting. End users like GUIs... a lot, and the more trimmings the better. How much of the bloat comes in the form of graphic images (JPGs, GIFs)? You end up using bloated APIs to do the GUI grunt work, and on top of that you add lots more bloat in the form of pre-rendered images to have a responsive GUI.

Let's take an extreme example: video games. Those things these days are huge monsters: the 3D engine, the code that is actually the game, and TONS of pre-rendered images and sounds.

Now take a look at this:

a 3D engine demo that is just 64K in size: no APIs and no pre-rendered stuff, but no practical implementation either, and HUGE CPU/GPU overhead.

Edited for spelling and clarity (TWICE!)
Sic Transit Gloria Mundi

Re: Long live the king

Postby FZR1KG » Fri Aug 01, 2014 6:26 pm

I'm not making a judgement on Multics or older OS's at all, or on your desire to play with them.
It's fun after all.

My main point about OS's not taking advantage of CPU hardware seems to have been misunderstood.
The x86 architecture has more rings of security than current OS's use, and that started around the 286 or 386; not that it matters much, since they were only a few years apart DECADES ago lol
I just checked: the 386 was released in 1985. Thirty years ago!
OS development still hasn't taken advantage of the extra security that was available even back then!

I understand the point about OS's being tied to the CPU's, but 30 years have now gone by in which we could have had decent security and the stability it gives.
The current batch of CPU's (not just the x86 series) have these features and more, yet OS's still fail to fully or correctly implement even the most basic ones, which, as I said, have been around for decades.
Now, I understand that the software industry is highly competitive and fast-paced, which obviously explains the 30 years that have gone by with no one implementing these things on the popular systems. That's sarcasm there, in case anyone missed it.
It's what happens when you get one or two companies that basically become monopolies unto themselves.
Microsoft monopolized the PC world and Apple had its proprietary OS.

The end user had a choice of two systems: MuchShit or the fruit.
Since they were incompatible, those two companies effectively had monopolies.
One I can kind of understand, the fruit: they kept things proprietary. That's their business model.
The PC, however, was open to any developer, but the incentive to develop for it was removed by what could best be described as predatory practices.

We had the "too big to fail" concept in companies, but with software giants it's more "too unreliable to crash".

*Edited for clarity*

Re: Long live the king

Postby Sigma_Orionis » Fri Aug 01, 2014 9:04 pm

Yeah, Linux does the same thing. Unix-based systems have always used two rings. OS/2 used three. Not even OpenBSD, with all its security orientation, does it differently (AFAIK it uses only two rings).

And I'm talking about open source stuff, not controlled by M$ or Apple.

I presume the usual excuses for doing it that way are performance (which these days is moot, if you ask me) and compatibility. They'd probably have to rewrite the APIs, which would probably break something. And end users don't like that......
Sic Transit Gloria Mundi

Re: Long live the king

Postby FZR1KG » Fri Aug 01, 2014 9:54 pm

You have to break a few API's to make an OS :P

Re: Long live the king

Postby Sigma_Orionis » Fri Aug 01, 2014 10:18 pm

Tell that to BeOS.

They broke all the APIs and made their OS; nobody cared :P
Sic Transit Gloria Mundi

Re: Long live the king

Postby FZR1KG » Fri Aug 01, 2014 10:41 pm

You'll also notice MicroScum's heavy handed tactics to remove the competition right there.

Re: Long live the king

Postby Sigma_Orionis » Fri Aug 01, 2014 10:59 pm

Yeah, indeed, you can see it right there in the Wiki entry.

Yet even its open source incarnation lives under the table.

I'm not going to argue about M$'s heavy-handed anti-competition tactics; those are well documented. Yet, when push comes to shove, in this business compatibility kills technical excellence, EVERY TIME.
Sic Transit Gloria Mundi

Re: Long live the king

Postby FZR1KG » Sat Aug 02, 2014 1:09 am

I totally agree and for good reason.
Where I differ is that I don't believe the two are mutually exclusive.

Re: Long live the king

Postby Sigma_Orionis » Sat Aug 02, 2014 1:39 am

Nope, that's not where we differ: once in a while, something comes along that is technically excellent and very compatible.

Where we differ is that IMO "technically good" is relative over time.

Let's suppose that tomorrow morning, some company in Elephant Breath, OH, or three kids in Lithuania, comes up with an OS that is technically excellent and manages, through an impressive engineering feat (say a cache-kernel-based OS that implements "personality modules" on modern commodity ARM CPU based systems), to be compatible with everything a modern organization uses today. And somehow they manage to become wildly popular, used by thousands of organizations all over the world. And furthermore, they become a worldwide corporation that "isn't evil" (like Google CLAIMED to be at one time).

Over time, I'd bet 10 to 1 that it will not keep up with whatever innovations take place on the hardware side. Why do I think so? Because the CUSTOMERS will want the thing to be compatible above all things. So our model corporation will compromise good engineering for compatibility, over and over, and in the end it will turn the thing into a monstrosity.

Sigma Orionis' First Law of IT:

Over time Technical Debt can only increase, just like Entropy.
Sic Transit Gloria Mundi

Re: Long live the king

Postby FZR1KG » Sat Aug 02, 2014 2:18 am

Sigma_Orionis wrote:Nope, that's not where we differ, once in a while, something comes along that is technically excellent and very compatible.


I didn't mean you vs. me, I meant you and me vs. the rest of the industry in general.
I should have made that clearer.

Regarding where companies will tend to go: yes and no.
Hardware is a lot slower to develop than software; at least it should be.
Designing hardware takes monumental resources, and even small changes are expensive.

Totally the reverse of software.
There should be no reason for the hardware to be ahead of the software, other than when some new revelation comes in.
30 years, however, tells me there are serious problems with the way OS design is approached.
Even longer if you consider older CPU's.

As for compatibility, that's an easy problem. Certainly nothing that should hinder advancement to the point where it's 30 years or more behind the hardware.
That's just insane.

Re: Long live the king

Postby Sigma_Orionis » Sat Aug 02, 2014 3:47 am

And why are software changes so cheap?

Because you reuse the code.

If you don't reuse the code, you end up having to rewrite the thing from scratch. Which of course is in many, many, many cases unacceptable.

So, if you've got crummy code (that was designed when things were immature, which on the software side is ALL THE BLOODY TIME), you reuse it; what is worse, you build stuff around it.

One good case in point, which I have mentioned before: the maiden flight of Ariane 5.

Software failure. Why? Because there were unforeseen problems with software that had been developed for Ariane 4:

Ariane 5's inertial reference system is essentially the same as a system presently flying on Ariane 4. The part of the software that caused the interruption in the inertial system computers is used before launch to align the inertial reference system and, in Ariane 4, also to enable a rapid realignment of the system in case of a late hold in the countdown. This realignment function, which does not serve any purpose on Ariane 5, was nevertheless retained for commonality reasons and allowed, as in Ariane 4, to operate for approx. 40 seconds after lift-off.


Bit rot indeed: since it was code that wasn't even supposed to be used, it wasn't protected against overflows.

You know the rest of the story.
Sic Transit Gloria Mundi

Re: Long live the king

Postby FZR1KG » Sat Aug 02, 2014 11:24 pm

Hate to say it, but it's pretty easy to mess up that code when you look at the naming conventions of the variables.
I don't know enough about application programming on, say, a PC to give naming-convention advice, but I've been around interfacing software for most of my career, and the general rule there is to make it completely clear what bit depth each variable is by using the naming convention.
e.g.
uHorizontalVelocity instead of HorizontalVelocity
The u signifying an unsigned 16-bit int that anyone familiar with C code would recognize.

Also, standard macros need to be used for the conversions.
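
To illustrate the idea with made-up names (this is just a sketch of the convention plus a range-checked conversion macro, not the actual Ariane code):

    #include <stdint.h>

    /* The prefix tells you the storage type at a glance:
       uXxx -> unsigned 16-bit int, dXxx -> 64-bit double (hypothetical names). */
    static double   dHorizontalBias;   /* wide intermediate value        */
    static uint16_t uHorizontalBias;   /* what actually gets stored/sent */

    /* Standard conversion macro: clamp instead of silently overflowing. */
    #define DBL_TO_U16(x)  ((uint16_t)((x) <= 0.0     ? 0.0     : \
                                       (x) >= 65535.0 ? 65535.0 : (x)))

    void update_bias(double measured)
    {
        dHorizontalBias = measured;
        uHorizontalBias = DBL_TO_U16(dHorizontalBias);  /* never wraps around */
    }

Whether you clamp, saturate or trap on out-of-range values is a design choice, but the point is that the conversion is explicit and the width is visible in the name.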

Funnily enough, I'm in the process now, ok, about day 6 of converting one of my projects to a large memory model. Since getting it to compile and run, every time I write to the e2prom the CPU resets. I've just now tracked it down to the system routines failing in the large memory model. Same code works in the small model, fails in the large. System calls are meant to be bullet proof.
Pass the parameters and it does the rest. The parameters are obviously correct since they work in the small memory model but the software fails.
Strange part is, this can't be just their code since I'm not the only person that uses these functions. But it appears that I am the only person with these problems.
No surprise. I was a beta developer since 2001 for this company and this project was the one that found most of the bugs in their system. I just wished that after near 14 years this would no longer be the case but it appears I'm back to being beta test boy except the product is now officially complete. No work has been done on it for a few years now.
Lucky me. I may have to go a long way in finding the problem and developing a workaround.

Re: Long live the king

Postby Sigma_Orionis » Sun Aug 03, 2014 2:11 am

That sucks, big time.

Large Memory Model? What CPU are you using, a 286? :P

I presume that whatever it is, it has segmented memory addressing.
Sic Transit Gloria Mundi

Re: Long live the king

Postby FZR1KG » Sun Aug 03, 2014 5:30 am

It's a PSoC.
It has an M8C core, 32k of flash and 2k of RAM.
All the flash is done in 64-byte read/write blocks.
RAM is a single 256-byte page in the small memory model; the large memory model uses registers to access pages, with two main modes of accessing the pages and a (third) option of staying in single-page mode with a separate stack page.
I'm using the simplest of the three, page-zero addressing with a separate stack page. Migration is almost brain-dead simple. Except that it crashes. Go figure.
The C compiler uses its native mode, which varies depending on the compiler.
I use the PSoC because it has hardware-programmable digital and analog blocks on the chip. So I implemented things like bandpass filters, ADC's, muxes, PWM motor control and a UART for serial comms. Almost everything on one chip, and I can reconfigure it if I need to.
As a design engineer it's a blessing.
The same part can be reprogrammed to have different analog and digital circuitry on board, so it really cuts development costs in both time and money.

Sadly, I always find these stupid bugs that seem to follow me around.
When I first started, I found a long-standing bug they had been chasing in the C compiler.
Sometimes it failed to switch memory banks when accessing I/O.
The very first program I wrote crashed on it: my I/O version of hello world, flashing an LED.
Then it would crash randomly.
I accidentally found the bug because I was putting nop's in, trying to debug code that seemed to make no sense at all.
It turned out to be an issue with timing alignment of code across the 64-byte flash block boundaries. Some instructions failed due to internal chip timing if they crossed blocks.
As I put nop's in, it shifted the alignment and made the CPU run or not run depending on how many nop's I placed. At that point I knew it was a hardware issue and not my code.
Luckily, by that stage I knew most of the design team there on a first-name basis, so I passed the findings straight on to the guys who could do something about it.
They wrote code that would realign things by adding nops as part of the assembler. Same thing I was doing to get mine to run.
They introduced a new series without the alignment issue, which I beta tested.
Then later came a new compiler, and a new chip with more flash and RAM.
I finished the project without needing to test the large memory models.
So I find it absolutely no surprise that now that I've gone to the larger memory model, my project crashes the system.
Mind you, I had hell trying to get this just to compile in the large memory model without errors.
Sometimes it would give no errors but fail to work. Rebuilding then gave the errors again.
Other times it gave errors regardless. It took some time to figure out its issues with the code, because the error reporting was actually incorrect and led me on wild goose chases.
It kept telling me the boot ROM was too small, but it wasn't the boot ROM at all.

Why am I doing all this?
Because the guys using the product started using a different motor, one from China that uses fewer windings than the others, so it presents as almost a dead short for a lot longer.
Start-up current causes major voltage drops, and the inputs end up dropping out as the voltage falls below the threshold for a logic 1.
To fix it I need to buffer the inputs. Basically, put a low-pass filter on them in software.
That means one byte per input; three inputs, so I need three bytes. Bare minimum; 6 would be ideal.
I can't put in those three bytes because I run out of available stack space.
Yes, the code is that tight.
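
For what it's worth, the sort of one-byte-per-input software low-pass filter I mean might look something like this (names, thresholds and the bit-per-input packing are all made up for the sake of the example):

    #include <stdint.h>

    /* One 8-bit counter per input: count up while the raw pin reads high,
       count down while it reads low, and only flip the filtered state when
       the counter reaches either end. */
    #define NUM_INPUTS  3
    #define FILTER_MAX  16             /* persistence, in sample periods */

    static uint8_t filt[NUM_INPUTS];   /* the three bytes in question */
    static uint8_t state;              /* filtered input bits         */

    void filter_inputs(uint8_t raw)    /* raw = current pin readings, one bit per input */
    {
        for (uint8_t i = 0; i < NUM_INPUTS; i++) {
            if (raw & (1u << i)) {
                if (filt[i] < FILTER_MAX) filt[i]++;
            } else {
                if (filt[i] > 0) filt[i]--;
            }
            if (filt[i] == FILTER_MAX) state |=  (uint8_t)(1u << i);
            if (filt[i] == 0)          state &= (uint8_t)~(1u << i);
        }
    }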

Just migrating to the separate stack page model would easily give me the memory, but it crashes on eeprom writes, and the product needs to be configurable as well as able to learn codes that it stores in flash.
After a week of this crap, I'm about ready to use it as a long-distance target. Considering this problem is one of the big reasons I'm not currently in the Pacific, that doubles the temptation.

What shits me is that I know once I find it, it'll be some stupid thing that's obvious to either me or the design team for the development software, and we'll all wonder how it slipped through the cracks for the last 7 years.

Re: Long live the king

Postby Sigma_Orionis » Sun Aug 03, 2014 12:19 pm

Wow Dude, THAT's messy.

Is it common to write multitasking stuff on that thing? I can't wrap my head around the idea of RAM implemented in 256-BYTE PAGES; I'm just trying to understand why. As you probably know, most CPUs used for computers tend to favor flat RAM models. Hell, I remember people bitching about the 80286 segmented memory architecture. Apparently it was implemented as a way to protect multiple pieces of code from stepping on each other's toes.
Sic Transit Gloria Mundi

Re: Long live the king

Postby FZR1KG » Sun Aug 03, 2014 5:06 pm

It's a chip designed for interfacing with the real world via digital and analog means, rather than as a general-purpose CPU.
Even though it uses an M8C core (I complained, but the design team said it was chosen due to the popularity of the core), it has many interrupts and hardware to cater for critical real-time applications.
IOW, the device is designed for multitasking, as all CPU cores designed for interfacing to the real world are, but it's not like what you find on general-purpose CPU's.
So there are few security concerns, the only one being flash-level security to separate supervisor functions from user code, plus the ability for a user to define, in 64-byte blocks, how each part of the flash is handled, e.g. write protected, field upgradable, no protection, factory protection. All of these just protect the flash from being accidentally overwritten.

The 256-byte memory model for RAM is basically inherited from the CPU core, which is your compatibility point.
The instruction set came from a time when it was expensive or impossible to have large RAM on a chip.
Remember that single-chip stuff was amazing when it came out. I was designing interfacing boards where the CPU had no ROM or RAM and needed latches etc. to interface to the external memory.
When the single-chip stuff came out they implemented what they could; that meant limited RAM, ROM and a simple 8-bit addressing mode for RAM.

The designs were meant to replace complex designs such as IR remote controls for TVs, which required little in the way of RAM, usually just a few variables and a stack.
Another example would be dashboard instruments in cars.

You'd be surprised, but much of the industry uses 8085-, 8051- and 6800-based cores because development tools exist for them that have proved reliable over the years/decades.

Where I differ is that when designing a new chip such as the PSoC (I was involved during the development stage, before the release of the parts), my choice would have been to use a new/newer core as well, one optimized to the hardware. They chose to go with an established core. It was a startup called Cypress MicroSystems, which was eventually taken over by the parent company. An established core makes development easier, but not better. In the long run I think they regretted that decision, as it severely limited the device's acceptance in the community, IMHO.
It was the analog and digital blocks that kept it alive until they got new cores, but that happened much later; again IMHO, formed by discussing the PSoC with other engineers.

Regarding the 286, it was again a compatibility issue. People wanted compatibility with the 8086.
Which IMHO is a load of shit as a concept.
Their reasoning was that it could run 8086 code natively, and that if they extended the segmentation modes (the 8086 was segmented already) they could run multiple 8086 programs with minimal security issues and minimal development time. Turns out that didn't quite work out the way they wanted, and the 386 was born.
Naturally, it now had to be backwards compatible with the 286, which hadn't been able to fulfill its design goal.
Basically a fuck-up built on a fuck-up, but that's the whole IBM PC: it was developed to help reduce IBM's tax. IOW, what we all have now is a system that was designed as a tax write-off.
Any design engineer at the time just looked at the abortion that it was and rejected it as overpriced junk. It remained that way for years.
The business world, however, saw it came from IBM and embraced it.

Remember that the Amiga was at the time far, far cheaper (around $600 compared to about $6000), faster, had a linear memory model, had a graphics processor, and blew the PC apart in performance by magnitudes, especially in the graphics department. While the PC was still on CGA and EGA graphics cards, barely doing simple 2D work, the Amiga was doing hi-res 3D in real time. It also had real sound, not a stupid beep controlled by on/off in software.

So many other systems, so many better CPU's, but we got a segmented pile of shit for years. It took the PC about a decade to come close to the graphics and sound capability of systems that came before it. Then came the abortion of an OS known as Windows. My laptop has Windows 8.1 and it can't even implement window re-sizing correctly. My code, which runs on my other PC (Vista), won't run on the new laptop (8.1), even though it was developed with Microsoft products. The computer industry is shit in a blender.

Now, many people go on about compatibility being a limiting factor. I call crap. You can design a CPU that executes non-native instructions almost as fast as the native CPU.
Downloadable microcode is one example.
Most OS's now are written in a high-level language, making porting pretty easy. Nothing like a rewrite, even if the instruction set is different.
It was certainly possible back then, and within a year or so the port would be far faster than the original, in many cases simply due to the speed of the newer processor.
They tried it with the PowerPC, but MicroScum already had the OS market and didn't want to port their OS to the newer CPU, eventually bringing it to decline.
Go figure: a company that refuses to sell its product to expand sales.

IOW, business politics and behind-the-scenes handshake deals kill the advancement of technology.
Along with idiots who line up to get the newest OS when it's released like it's some form of magic pill. :roll:

Ok, rant done. Feel much better now. Time to go back to finding out why this stupid thing crashes.

Re: Long live the king

Postby Sigma_Orionis » Mon Aug 04, 2014 2:35 am

Nope, not surprised; literature from the day made it very clear that the 8051 and 8085 were popular in microcontrollers, and the 6800 was probably Motorola's most popular CPU.

Well, yeah, the IBM 5150 (aka the IBM PC) was really a hobbyist system made by IBM to compete with Apple, one that could be upgraded to a "business system" to compete with CP/M-based micros. It was a really rushed-out job, because IBM (which was technically looked down upon by everyone else anyway; remember, it was the Microsoft of its day) never took it seriously.

IBM's supposed business machine was the IBM 5100, which was a typical IBM monster; it went for on the order of US$10,000, they probably sold about 50, and it was discontinued before the PC came out.

As for the PowerPC, M$ used to provide Windows NT for the PowerPC, but since there was no software for it, nobody used it, so when Windows 2000 came out they happily dropped support for the PowerPC, MIPS and DEC Alpha chips. I do know some customers (particularly telecom companies) that tried to use the Alpha versions of NT; I never saw the other two versions, though I do know M$ had versions for them since Windows NT 3.1. I have also read that it was hard to port Intel-based applications to the RISC line of Windows NT.

I suppose M$ stuck with support for the Intel Itanium series because Intel pushed them into it; they tried to say that 64-bit Windows would only exist on the Itanium line. Well, that didn't work: everybody except HP ignored Itanium, so M$ being M$ dropped support for it when Windows 2008 came out. And when Intel (which in turn was pushed by AMD) came out with a 64-bit processor compatible with the 32-bit line, M$ promptly ported Windows 2003 to it.

From what I have heard, the 64-bit extension to the 32-bit Intel x86 made by AMD and Intel is supposedly a hack. I don't know enough to validate that one way or the other.

As for the 286: I thought compatibility with the 8086 was provided through "8086 real mode" (and yes, I am pretty well aware that thing was segmented to provide compatibility with the 8080; not because I coded in assembler or C, but because some of the segments were reserved for hardware devices and it was a PITA to deal with), and that "protected mode" (which was also segmented) was not compatible at all. From what you tell me, then, did the 286 protected mode provide some sort of backwards compatibility with the 8086, or was it simply easier to leave it all segmented?
Sic Transit Gloria Mundi

Re: Long live the king

Postby SciFiFisher » Mon Aug 04, 2014 2:55 am

FZR1KG wrote:Remember that the Amiga was at the time far, far cheaper (around $600 compared to about $6000), faster, had a linear memory model, had a graphics processor, and blew the PC apart in performance by magnitudes, especially in the graphics department. While the PC was still on CGA and EGA graphics cards, barely doing simple 2D work, the Amiga was doing hi-res 3D in real time. It also had real sound, not a stupid beep controlled by on/off in software.



I loved my Amiga. It was a hand-me-down from my former FIL. It was years ahead of anything PC. Sadly, finding programs that ran on it where I was living was almost impossible. The nearest store that sold software and accessories for it was almost 300 miles away. And this was definitely pre-eBay and pre-Amazon. :o
"To create more positive results in your life, replace 'if only' with 'next time'." — Author Unknown
"Experience is a hard teacher because she gives the test first, the lesson afterward." — Vernon Law

Re: Long live the king

Postby Cyborg Girl » Mon Aug 04, 2014 3:07 am

You guys are starting to lose me. :lol:

FZ wrote:My laptop has Windows 8.1 and it can't even implement window re-sizing correctly.


Might it have to do with Windows 8 leaning heavily on the GPU for everything graphical? If you have shoddy drivers or a shoddy GPU it may not perform well. Not sure why the resizing breaks, though; maybe some kind of optimization for GPU rendering that is hostile to CPUs.

If you think the situation re graphics is bad on Windows, though, you should take a look at Linux. Right now the major Linux distros all use 3D desktops by default, and fast, stable 3D support does not exist on Linux for most hardware. Part of the reasoning was that "demand will create supply," i.e. showing that Linux graphics drivers are broken will result in them getting fixed... Nope, still broken today.

(The other part was "ZOMG end users want eyecandy!")

Actually I should start a new topic for that, you'll get a kick out of it.
