Comments on: True danger of viruses and worms
Whenever I hear about new computer viruses and worms on the attack, my mind drifts back a few decades...
As a young OS kernel engineer at Hewlett Packard's Computer Systems Division, we had some core principles of computer science, known by all on the team, that we strictly applied to our operating system designs.
For example, code, data, and stack segments were isolated; that is, "firewalled". They were separate, hardware-enforced areas of the memory map. You could not overflow a stack buffer into the code segment as a way to hijack the CPU. You'd get a segmentation fault first, and besides, the VM would not even let you map stack accesses into code segments.
In addition, code segments were read-only. During execution, you didn't get to write into a code segment for any reason whatsoever. If you tried, the memory system would throw an exception faster than it could load the next instruction. (Only the highly privileged loader module could write a code segment, and while it did, it was just a data segment - you couldn't jump the CPU over to it.)
That was 1980, folks.
I can see some of you old-timers shaking your heads, "yep, that's how we did it." It didn't matter if you were an IBM-er, CDC-er, DEC-er, or Burroughs-er. You knew the principles. HP didn't invent most of those concepts. They were the well established, accepted design practices of the day.
So, here we are 29 years later...
And, we are all worried about the "Conficker worm"... in actuality, a threat that shouldn't even exist.
And what's the true danger here? Is it really the worm? Companies big and small make a lot of money on these viruses and worms. OS companies, virus-protection companies, computer system vendors, network operators, and IT consultants... all win huge when a worm wreaks havoc on the net.
Yes, I have to admit, I'm getting a bit jaded with age. Do you think there will be a cure for the disease when patches make a lot more money? It's not just software. Do you think cancer will be cured when medicine makes billions on partial treatments?
What's a computer user to do?
Fortunately, there are alternatives, but you have to be willing to jump ship from your Windows insecurity blanket.
I'm not concerned about Conficker affecting my Mac OS X or various Linux boxes. I think you guru folks know why. Although not perfect, those are more or less real operating systems built on some of those principles of the 1970s and '80s. (Old-timers: let's not debate here whether Unix derivatives are real OSes... let's just admit that they are widespread now, ok?)
Again, sorry if this sounds like I've spent too much time racking the wine barrels in the basement. Maybe it's all just par for the course these days... the way our technologies, industries, banks, and governments are headed. (Well, actually, they've pretty much already arrived.)
Or, as I learned long ago (from lawyers, if I dare say):
It's not what you once knew to be the truth, it's what you pretend to know now that really matters.
Well, personally, I don't buy that brand of dog food. But, the unaware eat it up. So, maybe I'm not the only one who's jaded?
The alternatives have to be viable, of course. I don't believe viability currently includes anything but x86-64 based CPUs. Unless one has a lot of money to back up something new. :)
Also, those who throw in the towel and say that Microsoft already won on the desktop with Windows...have no right to lead the rebellion with any would-be alternative OS. One has to have the guts to say, "Windows is dead meat." And then prove it handily.
You're probably right about the unwholesome activities done by some companies regarding worms and viruses. It would not surprise me, anyway--it's akin to the military-industrial complex (though it's crumbling now) stuffing money into the pockets of politicians for that pro-war vote, so they can either get kickbacks or make a bundle from the stocks of companies manufacturing implements of war.
At any rate, I think it's whatever one simply *decides* to go do--neverminding whatever competition is out there. There are other alternatives, of course. http://i275.photobucket.com/albums/jj307/eyeam2000/Eyechip.jpg :D
Way back when I started (in the 1970s, on IBM mainframes), you could modify executable code. Code segments were not read-only, nor were they necessarily separated from data segments. For an assembler programmer, certain idioms involving self-modifying code were common.
But the operating system had sufficient controls to prevent application level code from finding any way of elevating itself to any form of higher-level privilege. And, of course, there was no way of overwriting another application's code.
That made it pretty safe all round.
Yes, well, I think the antivirus software makes a lot of money for some people. Most new PCs you buy already have a 30-day or 60-day free antivirus installed, and of course they want you to take out a subscription. And you know what? If there were no dangerous virus out there, they could have someone create one. Why not? Keep yourself employed. Yes, it's corrupt. But it's not only virus scanners. They have antivirus, antispyware, antiadware, antimalware, antiphishing, firewalls, etc. All marketing to sell more. A threat is a threat to me. I know something about computers, but even this confuses me. Another thing is that your computer gets even slower, from all the background tasks and bad programming. OK, I have to stop, but Carl got me started.
A young guy here with a question.. why wouldn't Unix derivatives be considered OS's?
(Not looking to start a fight, but the question of what it means to be an OS is fuzzy for me, so I'm curious)
I think that remark goes back to such things as the LISP machines debate. Unix was originally devised partly as a game platform, so it can be doubted even among comparable systems, and there were also quite different all-LISP platforms that were pushed out by Unix in a classic worse-is-better scenario.
Carl, I gotta say, as a former Amiga user you are somewhat of a hero to me, but it's like a punch to the stomach to hear you speak so highly of Unix. Are you insinuating that buffer overflows are not a problem in Unix (or its clones)? I would beg to differ.
Regardless, the ultimate security weapon is knowledge, and any company whose computers suffer a large-scale infection is showing a serious lack of proper administration. Yet so few will point the finger at themselves, especially when the only ones who can understand the problem are the very same administrators who were at fault in the first place.
In this exploit competition, no one was able to exploit browsers on Linux last year, so Linux was dropped as a target this year.
It also shows that the idea of OS X being 'safer' is actually totally wrong: Safari and Firefox on OS X are the easiest targets, with the least OS-level mitigation of any platform out there.
MS actually addresses the issue much more seriously and makes it very hard to exploit, but it's still possible.
To quote the organizers: "You can do what you want on a remote computer with Firefox on OS X." There is nothing in the OS stopping you.
The tightest browser is Chrome, since it has a very powerful sandbox, which really is effective at mitigating exploits.
Actually, most people don't _know_ that there are existing secure OSs:
"As a young OS kernel engineer at Hewlett Packard's Computer Systems Division, we had some core principles of computer science...that we strictly applied to our operating system designs...code, data, and stack segments where isolated; that is "firewalled". They were separate, hardware enforced areas of the memory map...[and] code segments were read-only"
I was interested to hear you say that, in light of the fact that the exec.library had neither of those features.
I'll say that again, because it sounds vaguely important.
The very heart of the AmigaOS had no concept of memory protection whatsoever. Although the loader module did indeed divide programs into code and data segments, once loaded these memory areas had identical privilege as far as Exec was concerned. Any loaded code was free to overwrite anything, including exec.library (to say nothing of DOS, Intuition, et al.), which meant that an Amiga was fundamentally unsecurable.
I'm not complaining, exactly. Given what the Amiga was supposed to be, and given its hardware limitations, dispensing with real protection made perfect sense. The 68k had no notion of hardware protection (at least until the '020's weird module scheme, which didn't last), and the performance penalties of providing such functionality in software would have been punitive. Minimizing supervisor-state switches was a good idea from a performance point of view.
But from your comments I wonder if you think there's no way to make an Amiga-styled kernel architecture (everything is a linked list, all communication is by message passing, completely dynamic API, etc.) with decent memory protection. Because I've withered brain cells trying to envision such a thing--one that would work on modern CPUs, where hardware memory protection is available. I wonder now if such a thing is flatly impossible.