>>15
Of course it's used, but only to limited precision (so no true real numbers, just a finite subset of the rationals), and thus with plenty of rounding error; but most of that error is not significant enough to matter as far as the generated images are concerned.
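To put a number on "not significant enough", here's a quick Python sketch (my own illustration, not anything from the thread) showing that double-precision rounding error sits around 10^-16, far below the 1/255 step an 8-bit color channel can even resolve:

    # rounding error of doubles vs. what an 8-bit image can resolve
    x = 0.1 + 0.2
    print(x)              # 0.30000000000000004, not exactly 0.3
    err = abs(x - 0.3)
    print(err)            # ~5.6e-17, the accumulated rounding error
    print(err < 1 / 255)  # True: smaller than one 8-bit color step, so invisible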
Just like in physics: real numbers are used, but past a certain cut-off (say a few dozen orders of magnitude down, around 10^-50), experimental data will start to deviate, and it's not like we can measure anything with such precision anyway. It's actually quite likely that the infinite precision of real numbers doesn't exist in nature (and may very well be impossible, as it would suggest the existence of hypercomputation), and that our world is digital or relational at its lowest level (see Loop Quantum Gravity or spin networks for examples).
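For scale (again just an illustration in computational terms, not a physics claim): doubles carry roughly 16 significant decimal digits, so a relative deviation of 10^-50 rounds away entirely, and you'd need arbitrary precision to even represent it:

    import sys
    from decimal import Decimal, getcontext

    print(sys.float_info.epsilon)  # ~2.22e-16, smallest relative step of a double
    print(1.0 + 1e-50 == 1.0)      # True: a 10^-50 deviation is invisible to doubles
    getcontext().prec = 60         # ~60 significant decimal digits instead
    print(Decimal(1) + Decimal("1e-50"))  # now the 10^-50 term survives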
This does not mean that I share the opinion of >>1. My personal view on the matter is classical finitism (allowing the countably infinite to exist), combined with the more general position of mathematical monism: mathematical structures are all that exist (read Tegmark's Ultimate Ensemble / Mathematical Universe papers, for example; a more extreme variant would be Schmidhuber's algorithmic ToEs, which only allow computational universes to exist). However, despite all this, I do think calculus and the infinite precision provided by real numbers are of practical use, both for modeling all kinds of problems and for discovering more general mathematical truths, even though the base of this entire system may well turn out to be inconsistent (and thus would not actually exist, nor could it exist physically). It's like using a system that is almost always right but might be wrong about certain edge cases, even if we don't yet know of any; for all practical purposes we are fine to keep using it (unless we can find something that does the job as well as the current one and is consistent, which is unlikely).