Name: Anonymous 2006-10-08 17:08
I understand floating point rounding errors are unavoidable in computer graphics and, well, anything dealing with floating point arithmetic in general. They are a fact of life in computer science and we all have to deal with them.
I always thought, however, that accuracy was guaranteed for about 6 significant decimal digits, such that if you wanted to compare two technically-equal numbers you could define an epsilon value of 1e-6 (or 1e-5, I forget which) and compare the two values like so:
(abs(number1 - number2) < epsilon) --> then number1 = number2
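For reference, that check written out in C++ looks like this (a minimal sketch; `nearly_equal_abs` is just an illustrative name, not anyone's official API):

```cpp
#include <cmath>

// Absolute-epsilon comparison: treats two floats as equal when their
// difference is below a fixed threshold, regardless of their magnitude.
bool nearly_equal_abs(float a, float b, float epsilon = 1e-6f) {
    return std::fabs(a - b) < epsilon;
}
```

The fixed threshold is the weak spot: it implicitly assumes the values being compared are around magnitude 1.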
Fine, that's reasonable. But I wrote a Matrix class, and the inverse function seems to work fine... however, the inverse of the inverse of a Matrix should yield the original Matrix, and while my code is doing this perfectly in the mathematical sense, the rounding errors are "leaking" outside of that acceptable epsilon range somewhere!
Here's output of what I have for an example:
Matrix M is:
| 10.5 45 -5 34000
| 4 5.5 2 45000
| 650 -2 4 31000
| 0 3.5 0 -2100
Matrix M.inverse() is:
| 0.00056584802 -0.0032629438 -0.13246518 -5.4382399e-06
| 0.001664508 0.008365741 -0.16660795 1.3942901e-05
| 0.001539564 0.00010419058 0.0011145427 1.7365096e-07
| 0.0037797824 0.23055643 -1.4419483 -9.1929731e-05
Matrix M.inverse().inverse() is:
| 10.500001 45.000008 -5.0000005 34000.004
| 4 5.4999995 2.0000002 45000.004
| 650.00006 -2.000001 4.0000005 31000
| 6.0025807e-09 3.5000005 -5.666517e-09 -2100.0002
And here's a cell-by-cell comparison of the first few cells, stopping at the point where the difference between the values exceeds the epsilon range. mat is the original matrix, and n is the inverse of the inverse of mat (which, again, is "mathematically" correct, but computer rounding errors are fucking shit up!!):
mat[0][0] = 10.5 n[0][0] = 10.500001
fabs(mat[r][c] - n[r][c]) = 9.53674316e-07
mat[0][1] = 45 n[0][1] = 45.0000076
fabs(mat[r][c] - n[r][c]) = 7.62939453e-06
mat[0][2] = -5 n[0][2] = -5.00000048
fabs(mat[r][c] - n[r][c]) = 4.76837158e-07
mat[0][3] = 34000 n[0][3] = 34000.0039
fabs(mat[r][c] - n[r][c]) = 0.00390625
Oops... 34000.0039? Something like 34000.000000054 would have been acceptable, but an error of 0.0039 blows way past the epsilon, and the comparison of the two matrices fails even with the epsilon check!
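Incidentally, that 0.00390625 is exactly 2^-8, which is the spacing between adjacent floats in the range [32768, 65536) -- in other words, 34000.0039 is just one ulp away from 34000, as close as a float can possibly get without being equal. So the result is fine; it's the fixed epsilon that's wrong at that magnitude. The usual fix is to scale the tolerance by the values being compared. A sketch of a relative comparison (my naming, purely illustrative):

```cpp
#include <algorithm>
#include <cmath>

// Relative-epsilon comparison: the tolerance scales with the operands,
// so "about 5-6 significant digits of agreement" works at any magnitude.
bool nearly_equal_rel(float a, float b, float rel_eps = 1e-5f) {
    float diff  = std::fabs(a - b);
    float scale = std::max(std::fabs(a), std::fabs(b));
    return diff <= rel_eps * scale;  // also true when both are exactly 0
}
```

With this, 34000 vs 34000.0039 compares equal (relative error roughly 1.1e-7), while genuinely different values still fail.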
I know I probably won't ever actually use values this high in an actual affine matrix, but I am still concerned because a bug is a bug.
I would use doubles, but I'm not entirely sure it's worth converting my entire codebase from floats to doubles JUST BECAUSE of specific rounding errors that I know full well other software has solved without resorting to doubles.
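If converting everything feels too drastic, a middle ground (assuming your inverse goes through cofactors/determinants or dot products -- I'm guessing at the implementation here) is to keep the storage in float but carry the intermediate arithmetic in double, rounding only once at the end. A sketch; `det2` is a made-up helper name:

```cpp
// Storage stays float; intermediates go through double. The product of
// two floats is exact in double (24+24 significand bits < 53), so only
// the final subtraction and conversion introduce rounding.
inline float det2(float a, float b, float c, float d) {
    double ad = static_cast<double>(a) * d;
    double bc = static_cast<double>(b) * c;
    return static_cast<float>(ad - bc);
}
```

This is exactly the kind of cancellation-heavy step where an all-float version silently loses digits.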
One solution, I'm guessing, is to cut down the number of floating point operations wherever possible -- the more arithmetic you perform on numbers that already carry rounding error, the more "corrupt" the accuracy gets over time -- and I guess what happened here is that it got so "corrupt" that the rounding errors "leaked" into the digits I assumed were guaranteed.
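For what it's worth, one well-known way other software keeps float accuracy from degrading over long sums (and the dot products inside matrix routines are exactly that) without switching to doubles is compensated, a.k.a. Kahan, summation. A sketch under my own naming (note it must not be compiled with -ffast-math, which optimizes the compensation away):

```cpp
#include <cstddef>

// Kahan compensated summation: carries the low-order bits lost by each
// float addition in a separate compensation term, so the error stays
// roughly constant instead of growing with the number of terms.
float kahan_sum(const float* x, std::size_t n) {
    float sum  = 0.0f;
    float comp = 0.0f;               // running compensation (lost low bits)
    for (std::size_t i = 0; i < n; ++i) {
        float y = x[i] - comp;       // re-inject previously lost bits
        float t = sum + y;           // low-order bits of y are lost here...
        comp = (t - sum) - y;        // ...and recovered here algebraically
        sum = t;
    }
    return sum;
}
```

The same trick applied inside a dot product is how a lot of numerical code stays in single precision without the error snowballing.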
Any other ideas?