>>80
Gawd, does your thick skull not understand the difference between caring about performance in terms of seconds and caring about performance in terms of the order (as in big-O) of the algorithm? It is much easier to code when you can ignore the underlying machine and just transcribe what you have written down on paper. Of course you should care about the general system you program on, but caring about whether the machine does an in-place swap or not is a waste of mental effort, since both are O(1) operations.
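To make the swap point concrete, here's a minimal sketch (Python is my pick, nothing from the thread): one "machine-flavored" swap with an explicit temporary and one without. Both are O(1) no matter how long the list is, so agonizing over the difference buys you nothing complexity-wise.

```python
def swap_tuple(a, i, j):
    # Pythonic swap: no visible temporary, the interpreter handles it
    a[i], a[j] = a[j], a[i]

def swap_temp(a, i, j):
    # Explicit temporary, closer to what the machine actually does
    tmp = a[i]
    a[i] = a[j]
    a[j] = tmp

# Both are constant-time regardless of len(a); only style differs.
xs = [1, 2, 3]
swap_tuple(xs, 0, 2)   # [3, 2, 1]
swap_temp(xs, 0, 1)    # [2, 3, 1]
print(xs)              # → [2, 3, 1]
```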
Mind, I am not denying that performance matters in some places (e.g. servers, embedded systems, realtime, and, I wish, desktop systems), but it doesn't matter all that much in many fields of computer science. You should really not generalize the practices of your little field to what happens in other fields.
In terms of learning, knowing how the machine works is a small part of what computer science is all about. It is mainly about algorithms and data structures, and about solving real problems like computer vision, statistical analysis, linguistics and so on. Those fields care about how you solve the problem, not about how fast it runs on a computer, especially when there is so much damn processing power lying around everywhere.
If you start out from the machine, students are going to feel frustrated and stuck in it instead of learning about the fucking magic of computer science. The machine is mostly irrelevant, so programming around it is boring and limiting.