>>6
Isn't the big problem with multiple cores the whole "only certain tasks can be parallelized" and "thrashing the RAM" thing?
For shit like raw number crunching I can see how you get near-linear scaling. When we're talking about AI or other state-dependent calculations it's much worse: the RAM throughput needs to be ridiculously high, and the parallelism is fine-grained rather than coarse-grained. I know they use threading for shit like audio and input handling, but when a single core can handle everything except the physics, AI, and graphics load, there's no point "properly" threading shit like audio.
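The "only certain tasks can be parallelized" limit is basically Amdahl's law. A quick back-of-envelope sketch (the parallel fractions here are made-up illustrative numbers, not measurements of any real game code):

```python
# Amdahl's law: speedup on n cores when a fraction p of the work parallelizes.
def amdahl_speedup(p, n):
    """Speedup = 1 / ((1 - p) + p / n)."""
    return 1.0 / ((1.0 - p) + p / n)

# Raw number crunching (almost fully parallel) vs. state-dependent AI code
# (mostly serial) on a hypothetical 8-core chip.
for p in (0.99, 0.75, 0.50):
    print(f"parallel fraction {p:.2f}: 8 cores -> {amdahl_speedup(p, 8):.2f}x")
```

Even at 75% parallelizable you only get about 2.9x out of 8 cores, which is the whole problem in one number.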
If they had something like a 64 MB shared L2 or L3 cache I might see how 8 cores could be efficient, but otherwise it's going to be basically impossible to program for.
Now, on the other hand, if they used the spare cores for physics and GPU offload when they're not parallelizing a few AIs or something, I can see that being useful, but efficiently using 8 cores on real tasks is pretty much impossible. Also, as we've seen from the AMD 4xxx series, deep pipelining is only useful if you customize your compiler and the code can actually be pipelined, and from hyperthreading we know it turns to shit beyond 2 threads per core because eventually you DO need to calculate, not just hide memory stalls. Faster RAM just lessens the usefulness of hyperthreading, since you wait less and less on data transfers.
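The hyperthreading point (extra threads only help while another thread is stalled on memory, and faster RAM shrinks that window) can be shown with a toy utilization model. The stall fractions below are invented for illustration, and this ignores real SMT details like shared caches and issue ports:

```python
# Toy SMT model: each thread computes for fraction (1 - s) of the time and
# stalls on memory for fraction s. Extra hardware threads can fill the stall
# slots, but the core's execution units cap total throughput at 1.0.
def smt_throughput(stall_fraction, threads):
    """Aggregate throughput of one core running `threads` SMT threads."""
    demand = threads * (1.0 - stall_fraction)  # compute time the threads want
    return min(1.0, demand)                    # core can't be more than 100% busy

# Memory-bound code (s=0.6): a 2nd thread doubles throughput, a 4th adds little.
# Compute-bound code (s=0.1, e.g. with fast RAM): a 2nd thread is nearly useless.
for s in (0.6, 0.1):
    for t in (1, 2, 4):
        print(f"stall={s}, threads={t}: throughput={smt_throughput(s, t):.2f}")
```

With 2 threads the core is already near saturation in both cases, which matches the "basically shit with more than 2 threads" experience: once the execution units are busy, more thread contexts buy nothing.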
Maybe if they could find some way to efficiently pipeline AND use hyperthreading as pseudo-cores, but at that point they could just use Bulldozer modules and scale much more effectively instead of trying to build a processor that dynamically pipelines tasks.