Due to the binary representation of decimal numbers, the number stored in the computer is slightly different from the one the user specified.
These small discrepancies become very noticeable when large exponents are applied to these numbers.
For example, when using bfloats, if 1 is multiplied by 10^10 ten times in a for loop, the resulting number is:
9.999892e+99
This is already incorrect in the 5th significant digit. If the for loop runs for 100 or 1000 iterations instead of only 10, the resulting numbers are:
100: 9.998843e+999
1000: 9.98847e+9999
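For illustration, the same kind of drift can be reproduced with a stand-in representation: a float32 significand paired with a separate integer exponent. This is only a sketch of the effect, not the actual bfloat type, so the exact digits will differ:

```python
import numpy as np

# Sketch only (assumed representation, not the actual bfloat type): a value is
# a float32 significand plus a separate Python-int binary exponent, so the
# magnitude can grow without overflowing. All rounding happens in the float32
# multiply; renormalising by powers of two is exact.

def mul(a, b):
    """Multiply two (significand, exponent) pairs and renormalise."""
    sig = np.float32(a[0] * b[0])    # the product is rounded to float32 here
    m, e = np.frexp(sig)             # exact split: sig == m * 2**e, m in [0.5, 1)
    return np.float32(m), a[1] + b[1] + int(e)

m, e = np.frexp(np.float32(1e10))    # 10^10 happens to be exactly representable
ten_to_10 = (np.float32(m), int(e))

acc = (np.float32(0.5), 1)           # the value 1.0
for _ in range(10):                  # exact arithmetic would give exactly 1e100
    acc = mul(acc, ten_to_10)

# 1e100 still fits in a double, so the drifted result can be printed directly.
print(f"{float(acc[0]) * 2.0 ** acc[1]:e}")   # drifts away from 1.000000e+100
```

Every product of two significands gets rounded back to float32, and those rounding errors compound with each iteration, which is where the drift comes from.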
Similar behaviour occurs when dividing instead of multiplying.
A number of questions arise:
How does the drift differ between the different types (e.g. bfloat vs logspace)?
How relevant is this drift to computation in general? It is only this extreme in these examples because we are repeatedly performing the same operation with the same numbers. In general, one would expect the 'down-rounding' to cancel out the 'up-rounding' when different numbers get converted to their binary representations.
Are there specific computations that should be avoided? For example, is computing 1*10^100 in a single step better than computing 1*(10^10)^10?
When computing probabilities related to genomic analyses, certain numbers will occur often (0, 0.25, 0.5, 0.75, 1.0). How accurate are the binary representations of these numbers? Are there easy tricks that make the issue go away, for example computing the denominators separately and only converting to a decimal value in the last step, e.g. computing 1/(4^10) instead of 0.25^10, since 4 can be represented exactly in binary? (See the sketch below.)
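On the last point, a quick way to check which of those recurring constants are even subject to representation error is to recover the exact rational value of the stored IEEE double. This is a sketch using Python's standard fractions module; the library's own bfloat/logspace types may of course behave differently:

```python
from fractions import Fraction

# Fraction(x) gives the exact rational value of the double that x is stored
# as, so comparing it with the intended decimal shows whether any error crept
# in during conversion.
for x in (0.0, 0.25, 0.5, 0.75, 1.0, 0.1):
    if Fraction(x) == Fraction(str(x)):
        print(f"{x} is stored exactly")
    else:
        print(f"{x} is stored as {Fraction(x)}")

# 0.25 is a power of two (2^-2), so multiplying by it repeatedly only shifts
# the binary exponent and never rounds; with IEEE doubles both routes below
# give the same result bit-for-bit.
p = 1.0
for _ in range(10):
    p *= 0.25
print(p == 1 / (4 ** 10))   # True
```

With IEEE doubles, 0, 0.25, 0.5, 0.75, and 1.0 all have exact binary representations, so the 1/(4^10) trick mainly matters for constants like 0.1 or 0.2 that have no finite binary expansion; whether the same holds for the bfloat and logspace types would need to be checked separately.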