Floating points, precision, and significant digits

I read Chris Vance’s blog entry about floating point numbers and inaccurate arithmetic.  It got me thinking about precision and significant digits, and a simple example shows that even significance arithmetic has its limits.

Define the significant digits of a number as all the digits whose values are known exactly, plus the first digit that is an estimate (typically the result of rounding away the subsequent digits).  Under that definition, a value written as 2222 should be read as lying somewhere between 2221.50 and 2222.50 (i.e., the set of values that rounds to 2222 at four significant digits, using round-to-even methodology).
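
To make that interpretation concrete, here is a minimal Python sketch (the helper name `implied_interval` is my own) that computes the interval implied by a value quoted to n significant digits:

```python
import math

def implied_interval(value: float, sig_digits: int) -> tuple[float, float]:
    """Return the (low, high) interval of values that round to `value`
    when expressed to `sig_digits` significant digits."""
    if value == 0:
        raise ValueError("zero has no defined magnitude")
    # Decimal position of the least significant quoted digit.
    exponent = math.floor(math.log10(abs(value))) - (sig_digits - 1)
    half_ulp = 0.5 * 10 ** exponent
    return value - half_ulp, value + half_ulp

print(implied_interval(2222, 4))  # (2221.5, 2222.5)
print(implied_interval(444, 3))   # (443.5, 444.5)
```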

Let’s take our value of 2222 and multiply it by 444.  The raw result is 986,568, which we should properly display as 987,000 (since the multiplicand 444 has only 3 significant digits).  But properly read, 444 could be any value between 443.50 and 444.50, so the minimum possible product is 2221.50 x 443.50 = 985,235.25 and the maximum is 2222.50 x 444.50 = 987,901.25.  The spread between those bounds is about 2,666, far wider than the ±500 that “987,000 to three significant digits” implies, so even in this simple case the true uncertainty is greater than significance arithmetic can express!
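
A quick sketch of that check in Python (the endpoints are the ones `implied_interval` above would produce; for positive operands, interval multiplication just multiplies the endpoints):

```python
# Implied intervals: 2222 at 4 sig digits, 444 at 3 sig digits.
lo_a, hi_a = 2221.5, 2222.5
lo_b, hi_b = 443.5, 444.5

# For strictly positive operands, the product interval is
# the product of the endpoints.
product_lo, product_hi = lo_a * lo_b, hi_a * hi_b
print(product_lo, product_hi)   # 985235.25 987901.25
print(product_hi - product_lo)  # 2666.0 -- wider than the +/-500
                                # implied by "987,000 to 3 sig digits"
```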

The rule of thumb that programmers should follow is this, IMO: Results should always be rounded to the number of digits that is the lesser of the float type’s precision and the least number of significant digits among the operands.  And remember, accuracy already suffers in the conversion of a decimal value to its binary representation.  Even then, I wouldn’t trust it 100%!
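
As a rough sketch of that rule (the helper name `round_to_sig` is my own), plus a reminder of the decimal-to-binary loss:

```python
import math
from decimal import Decimal

def round_to_sig(value: float, sig_digits: int) -> float:
    """Round `value` to `sig_digits` significant digits, using Python's
    built-in round(), which rounds halves to even."""
    if value == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(value))) - (sig_digits - 1)
    return round(value / 10 ** exponent) * 10 ** exponent

# Round the raw product to the lesser operand's significance (3 digits).
print(round_to_sig(2222 * 444, 3))  # 987000

# The decimal literal 0.1 has no exact binary representation,
# so accuracy is lost before any arithmetic even happens:
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
```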
