>>16
Just to add on to this:
She says interpreted languages skip the optimization step (at least at compile time). This is wrong: interpreted languages do optimize the code quite a lot at compile time, especially for embedded VMs. For example, with J2ME and Android, the compiler does a lot of static analysis (the pre-verification step) to strip things like unnecessary bounds checks from the compiled bytecode. Compilation for BlackBerry in particular can be quite slow for exactly this reason.
There is this recurring assumption that interpreters run slower just because they have to do more runtime work. This is not necessarily true. Some JIT VMs can actually end up with faster code in a lot of scenarios because they can exhibit much better cache locality. They profile the code to figure out which functions are hotspots, compile those, then pack the compiled code together in memory so the hot path stays resident in the instruction cache and runs super fast. Most ahead-of-time compilers are extremely dumb in this respect: profile-guided optimization isn't that good, and some compiled languages bloat the output code heavily, which is devastating for cache performance (I'm looking at you, C++). Compiled languages can't reorganize the machine code at runtime.
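To make the hotspot idea concrete, here's a toy sketch of the profile-then-compile loop. Everything here is illustrative: the `ToyJit` class, the 1000-call threshold, and the "compiled" stand-in function are my own inventions, not any real VM's internals (real JITs count method invocations and back-edges and emit native code).

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.IntUnaryOperator;

// Toy model of hotspot detection: run a function through the slow
// "interpreted" path until it gets hot, then swap in a cached
// "compiled" version and take the fast path from then on.
public class ToyJit {
    static final int HOT_THRESHOLD = 1000;  // illustrative, not a real VM constant
    static final Map<String, Integer> callCounts = new HashMap<>();
    static final Map<String, IntUnaryOperator> compiledCache = new HashMap<>();

    static int run(String name, IntUnaryOperator interpreted,
                   IntUnaryOperator compiled, int arg) {
        IntUnaryOperator hot = compiledCache.get(name);
        if (hot != null) return hot.applyAsInt(arg);  // fast path: already "compiled"

        // Slow path: bump the profile counter; past the threshold,
        // cache the specialized version for all future calls.
        int count = callCounts.merge(name, 1, Integer::sum);
        if (count >= HOT_THRESHOLD) compiledCache.put(name, compiled);
        return interpreted.applyAsInt(arg);
    }

    public static void main(String[] args) {
        // square(x): slow repeated-addition form vs. fast multiply form.
        IntUnaryOperator interp = x -> { int r = 0; for (int i = 0; i < x; i++) r += x; return r; };
        IntUnaryOperator comp = x -> x * x;

        for (int i = 0; i < 2000; i++) run("square", interp, comp, 7);
        System.out.println(compiledCache.containsKey("square")); // prints "true": it went hot
        System.out.println(run("square", interp, comp, 9));      // prints "81" via the fast path
    }
}
```

The cache-locality win the comment describes comes from the step this toy skips: a real VM also lays the generated code for hot methods out contiguously, which an ahead-of-time compiler can only approximate with profile data gathered in a separate run.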
Also, the platform independence of interpreted languages is kind of a joke at this point. Java VMs are starting to get their own bytecode formats for crying out loud, so J2SE != MIDP != BlackBerry != Android. Even for the desktop stuff, the rule is now write once, test everywhere.