Can random code damage your hardware when run in Real Mode? If yes, give some examples of how. What happens when the ACPI tables are overwritten? Is it possible to disable the CPU fan through code and have the CPU overheat and break, or does the BIOS have moron-proof failsafes in case someone does that?
Name:
Anonymous2011-12-15 11:43
Some time ago it was possible to burn out a CRT display.
Name:
Anonymous2011-12-15 11:55
look at nVidia's display driver version 196.75 (very old/outdated by now)
it accidentally turned off the fan of many graphics cards, leading them to overheat and get damaged
Name:
Anonymous2011-12-15 12:01
What is the point of damaging hardware? Are you a competitor who wants to damage nVidia's reputation?
Name:
Anonymous2011-12-15 12:17
>>4
it was accidental, the driver wasn't supposed to turn off the fan
and it was an official driver from nVidia, not a competitor
There was talk, back in the day, about a virus that could damage your monitor. A lot of monitors didn't have any hardware protection against setting a refresh rate higher than they could handle. So if you circumvented Windows (probably Windows 3.1 at the time) and just slammed the VGA register directly, you could run a 60Hz monitor at 120Hz, for example, which could potentially damage the CRT. Yeah, the old "tube" monitors which none of you script kiddies have probably ever seen.
Name:
Anonymous2011-12-15 14:42
>>12
>that feel when i had to replace my oldest CRT last year since the color was so off
>>1 Real Mode
Why are you still writing for an obsolete CPU mode? Nothing modern runs in Real Mode past early boot.
>>16
Yeah, pretty much... There were no video drivers so you just wrote bytes right into the framebuffer. And you had to do things like poll a VGA register to know when the "gun" was in vertical retrace (not drawing anything) and do your screen-flip then to avoid tearing.
Yes, if you happen to poke the VRM controller/clock generators in the wrong way.
(Ask yourself how you can change voltages and such in the BIOS. I still prefer DIP switches or jumpers for this sort of stuff that shouldn't be under the control of software.)
http://sunnyday.mit.edu/papers/therac.pdf
First thought: Why does a single-purpose machine controller even need a scheduler and multitasking?
Second thought: Why isn't "overcomplicated design" high on the list of stupid shit they did?
>>28 Why does a single-purpose machine controller even need a scheduler and multitasking?
You have never programmed microcontrollers, have you?
Of course it needs scheduling and multitasking, because it performs a hell of a lot of asynchronous tasks: sending control data, reading responses, reading user input, displaying whatever information it displays (most probably with the entire display stack driven by the single chip, so asynchronous on several levels), maybe even dumping logs somewhere (which means two more asynchronous processes).
Of course you can also write an ad hoc, buggy reimplementation of half of a multitasking system yourself.
>>33
Do not bother educating Cudder, he'll never reach Satori that way. Hell, don't even respond to him at all, maybe then he'll go back to /jp/.
>>36
Polling is shit. You waste processor time instead of using asynchronous interrupts which only use the processor when the device actually needs something. Now imagine that on a critical system like life support or a nuclear missile.
>>37
What the fuck time is being wasted? One instruction per loop? Get the fuck out of here -- context switching will waste significantly more time than that.