

floating point rounding errors...

Name: Anonymous 2006-10-08 17:08

I understand floating point rounding errors are unavoidable in computer graphics and, well, anything dealing with floating point arithmetic in general.  They are a fact of life in computer science and we all have to deal with them.

I always thought, however, that accuracy was guaranteed up to 6 decimal digits, such that if you were to compare two technically-equal numbers you could define an epsilon value of 1e-6 (or 1e-5, I forget) and compare the two values like so:

(abs(number1 - number2) < epsilon) --> then number1 = number2

Fine, that's reasonable.  But I wrote a Matrix class whose inverse function seems to work fine... and indeed, the inverse of the inverse of a Matrix yields the original Matrix.  Mathematically it's doing this perfectly, but the rounding errors are "leaking" outside of that acceptable epsilon range somewhere!

Here's output of what I have for an example:

Matrix M is:
|  10.5  45  -5  34000
|  4  5.5  2  45000
|  650  -2  4  31000
|  0  3.5  0  -2100

Matrix M.inverse() is:
|  0.00056584802  -0.0032629438  -0.13246518  -5.4382399e-06
|  0.001664508  0.008365741  -0.16660795  1.3942901e-05
|  0.001539564  0.00010419058  0.0011145427  1.7365096e-07
|  0.0037797824  0.23055643  -1.4419483  -9.1929731e-05

Matrix M.inverse().inverse() is:
|  10.500001  45.000008  -5.0000005  34000.004
|  4  5.4999995  2.0000002  45000.004
|  650.00006  -2.000001  4.0000005  31000
|  6.0025807e-09  3.5000005  -5.666517e-09  -2100.0002

And a cell-by-cell comparison of the first few cells, stopping at the point where the difference between the values exceeds the floating point rounding error epsilon; mat is the original matrix, and n is the inverse of the inverse of mat (which, again, is "mathematically" correct, but computer rounding errors are fucking shit up!!):

mat[0][0] = 10.5        n[0][0] = 10.500001
        fabs(mat[r][c] - n[r][c]) = 9.53674316e-07
mat[0][1] = 45  n[0][1] = 45.0000076
        fabs(mat[r][c] - n[r][c]) = 7.62939453e-06
mat[0][2] = -5  n[0][2] = -5.00000048
        fabs(mat[r][c] - n[r][c]) = 4.76837158e-07
mat[0][3] = 34000       n[0][3] = 34000.0039
        fabs(mat[r][c] - n[r][c]) = 0.00390625

Oops... 34000.0039?  34000.000000054 would have been acceptable, but this error is orders of magnitude outside the epsilon range, so the comparison of the two matrices fails even with the epsilon test!

I know I probably won't ever actually use values this high in an actual affine matrix, but I'm still concerned, because a bug is a bug.

I would use doubles, but I'm not entirely sure it's worth converting my entire codebase from floats to doubles JUST BECAUSE of specific rounding errors that I know good and well other software has solved without resorting to doubles.

One solution, I'm guessing, is to reduce the number of floating point operations as much as possible.  The more arithmetic you do on numbers that already carry rounding error, the more "corrupt" the accuracy gets over time -- and I guess that's what happened here: the error accumulated until it "leaked" into the range of typically-guaranteed precision.

Any other ideas?

Name: Anonymous 2006-10-08 18:21

32-bit floats give you about seven significant decimal digits of accuracy (log10(2^24) ~= 7.2, counting the implicit leading mantissa bit), which, AFAICS, fits your output perfectly. Note: it's not the number of digits after the decimal separator but the total number of significant digits -- otherwise you could just multiply your values by a million to get six extra digits of accuracy.

If you need more accuracy, just use doubles.

Also, use doubles for intermediate calculations (or even the native x87 extended format, which is 10 bytes IIRC). That might make things faster, too.

Name: Anonymous 2006-10-08 18:59

>>2

Doubles faster?  I'll have to read up on that and verify it.  I'm by no means a hardware guru, but I'm pretty sure operations on a 64-bit (8-byte) datatype would be slightly (or even significantly?) slower than operations on 32-bit datatypes on a 32-bit CPU.  Then again, I'm assuming home desktop CPUs like the Pentium 4 are 32-bit... I could be totally wrong.

And what "10 byte native format" are you talking about?

Until I figure this out, is there a more reliable epsilon comparison formula I can use that takes the magnitude of the values into account?

Name: Anonymous 2006-10-08 20:16

Accuracy isn't guaranteed up to any number of significant figures; the problem is the conversion from decimal to binary. Some numbers -- such as 0.1 -- cannot be stored in a finite binary representation.

Of course, the sane thing to do is to utilize a matrix from your language's stdlib or similar ;)

Name: Anonymous 2006-10-08 21:06

C and C++ don't have a Matrix class or datatype in their standard libraries.  In fact, I think the only language I've ever seen with native linear algebra datatypes "in-the-box" was FORTRAN... and even that's only a maybe.

Name: Anonymous 2006-10-08 21:33

> And what "10 byte native format" are you talking about?
http://en.wikipedia.org/wiki/Extended_precision

Although a quick benchmark shows that (at least on my machine), there is no significant difference in speed between singles and doubles, and using the 80-bit format made things slightly slower (~15%).

Name: Anonymous 2006-10-08 22:00

>>5

Hence the 'or similar' :p

Name: Anonymous 2006-10-08 22:07

...something like Boost (C++) or GNU's Scientific Library (C/C++), for example.

Name: Anonymous 2006-10-08 22:41

BLAS

Also, take a numerical computing class.

Name: Anonymous 2009-01-14 12:49

LISP
