Name: Anonymous 2014-01-29 12:23
When NVIDIA launched the GeForce 2 in 2000, Jen-Hsun Huang called it a "major step" toward achieving "Pixar-level animation" in real time, only to be rebutted by Pixar's Tom Duff:
"These guys just have no idea what goes into `Pixar-level animation.' (That's not quite fair, their engineers do, they come and visit all the time. But their managers and marketing monkeys haven't a clue, or possibly just think that you don't.)
`Pixar-level animation' runs about 8 hundred thousand times slower than real-time on our renderfarm cpus. (I'm guessing. There's about 1000 cpus in the renderfarm and I guess we could produce all the frames in TS2 in about 50 days of renderfarm time. That comes to 1.2 million cpu hours for a 1.5 hour movie. That lags real time by a factor of 800,000.)
Do you really believe that their toy is a million times faster than one of the cpus on our Ultra Sparc servers? What's the chance that we wouldn't put one of these babies on every desk in the building? They cost a couple of hundred bucks, right? Why hasn't NVIDIA tried to give us a carton of these things? -- think of the publicity mileage they could get out of it!
Don't forget that the scene descriptions of TS2 frames average between 500MB and 1GB. The data rate required to read the data in real time is at least 96Gb/sec. Think your AGP port can do that? Think again. 96 Gb/sec means that if they clock data in at 250 MHz, they need a bus 384 bits wide. NBL!
At Moore's Law-like rates (a factor of 10 in 5 years), even if the hardware they have today is 80 times more powerful than what we use now, it will take them 20 years before they can do the frames we do today in real time. And 20 years from now, Pixar won't be even remotely interested in TS2-level images, and I'll be retired, sitting on the front porch and picking my banjo, laughing at the same press release, recycled by NVIDIA's heirs and assigns."
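His renderfarm arithmetic checks out. A quick sanity check in Python (the 1000-CPU count and the 50-day render estimate are his own stated guesses, not measured figures):

# Duff's estimates: ~1000 renderfarm CPUs, ~50 days to render all of Toy Story 2
cpus = 1000
render_days = 50
cpu_hours = cpus * render_days * 24      # 1,200,000 CPU-hours of work
movie_hours = 1.5                        # running time of the finished film
slowdown = cpu_hours / movie_hours       # how far behind real time the farm is
print(f"{slowdown:,.0f}x slower than real time")   # 800,000x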
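The bandwidth claim follows too, assuming the usual 24 frames per second (which the quote leaves implicit) and his low-end 500 MB scene description:

frame_bytes = 500 * 10**6                # low-end scene description size, 500 MB
fps = 24                                 # film frame rate, implicit in the quote
gbits_per_sec = frame_bytes * fps * 8 / 10**9
print(f"{gbits_per_sec:.0f} Gb/s")       # 96 Gb/s, far beyond any AGP port

bus_clock_hz = 250 * 10**6               # his hypothetical 250 MHz bus clock
bus_width_bits = gbits_per_sec * 10**9 / bus_clock_hz
print(f"{bus_width_bits:.0f}-bit bus")   # 384-bit bus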
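And the 20-year horizon is just the remaining gap expressed in factor-of-10 steps, granting NVIDIA the 80x head start he concedes:

import math

gap = 800_000                            # real-time shortfall from the renderfarm estimate
head_start = 80                          # his generous allowance for the GeForce 2
remaining = gap / head_start             # 10,000x still to go
steps_of_10x = math.log10(remaining)     # 4 factor-of-10 improvements needed
years = steps_of_10x * 5                 # at 10x every 5 years
print(f"{years:.0f} years")              # 20 years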
"These guys just have no idea what goes into `Pixar-level animation.' (That's not quite fair, their engineers do, they come and visit all the time. But their managers and marketing monkeys haven't a clue, or possibly just think that you don't.)
`Pixar-level animation' runs about 8 hundred thousand times slower than real-time on our renderfarm cpus. (I'm guessing. There's about 1000 cpus in the renderfarm and I guess we could produce all the frames in TS2 in about 50 days of renderfarm time. That comes to 1.2 million cpu hours for a 1.5 hour movie. That lags real time by a factor of 800,000.)
Do you really believe that their toy is a million times faster than one of the cpus on our Ultra Sparc servers? What's the chance that we wouldn't put one of these babies on every desk in the building? They cost a couple of hundred bucks, right? Why hasn't NVIDIA tried to give us a carton of these things? -- think of the publicity milage they could get out of it!
Don't forget that the scene descriptions of TS2 frames average between 500MB and 1GB. The data rate required to read the data in real time is at least 96Gb/sec. Think your AGP port can do that? Think again. 96 Gb/sec means that if they clock data in at 250 MHz, they need a bus 384 bits wide. NBL!
At Moore's Law-like rates (a factor of 10 in 5 years), even if the hardware they have today is 80 times more powerful than what we use now, it will take them 20 years before they can do
the frames we do today in real time. And 20 years from now, Pixar won't be even remotely interested in TS2-level images, and I'll be retired, sitting on the front porch and picking my banjo, laughing at the same press release, recycled by NVIDIA's heirs and assigns. "