Monday, January 25, 2010

Fractions and decimals---why two notations?

Although both forms represent the same kind of number, rational form ("fractions") and decimal form ("decimals") are used rather differently.

Fractions are used mostly in theoretical work, including pure mathematics, and also by users of mixed units.

Decimals, on the other hand, are de rigueur in technical work, including engineering, experimental physics, and applied mathematics.

So, oddly enough, users of the number convention largely work with fractions, while users of the quantities convention largely work with decimals.

The reason for the difference in use is not simply an accident of history.  In technical work, one is usually working with measured values, and these are usually approximate; in theoretical work (except for computational experiments) one's numbers are usually exact.

The arithmetic of decimals is somewhat simpler than the arithmetic of fractions, and decimals are much more readily compared for size than fractions are.  These are huge advantages when working with approximate numbers.
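To make the comparison point concrete, here is a small sketch in Python (the particular fractions 5/7 and 7/10 are my own arbitrary examples, not from the post):

```python
from fractions import Fraction

# Comparing fractions for size needs a common denominator or cross-multiplication:
# 5*10 = 50 versus 7*7 = 49, so 5/7 > 7/10.
print(Fraction(5, 7) > Fraction(7, 10))   # True

# The decimal forms compare at a glance, digit by digit:
print(0.714 > 0.700)                      # True
```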

The arithmetic of fractions, on the other hand, remains exact even where the corresponding decimal expansion never terminates.  To say `\frac{1}{3}` in decimal is to say `0.3333...`  In theoretical work, we usually prefer exactness, so the inexactness of a finitely truncated decimal value (e.g. `0.333`) becomes a problem.
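A minimal sketch of this in Python, using the standard `fractions` module (the particular sums are my own illustration):

```python
from fractions import Fraction

# Exact rational arithmetic: one third, added three times, is exactly one.
third = Fraction(1, 3)
print(third + third + third == 1)      # True

# The truncated decimal 0.333 carries its error through the calculation:
print(0.333 + 0.333 + 0.333 == 1.0)    # False
```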

In technical calculations, on the other hand, exactness is rare, and we manage inaccuracy by limiting digits.  We stop writing digits when we reach the limit of accuracy of the value itself if it is measured, of the other values in the calculation, or of the purpose the calculation is meant to serve.
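Python's standard `decimal` module makes this digit-limiting explicit: you set a working precision, and arithmetic stops producing digits beyond it.  (The three-figure precision below is an arbitrary choice for illustration.)

```python
from decimal import Decimal, getcontext

# Pretend our measurements are good to three significant figures.
getcontext().prec = 3

print(Decimal(1) / Decimal(3))                 # 0.333 -- digits stop at the accuracy limit
print(Decimal("2.00") * Decimal("3.14159"))    # 6.28 -- rounded to the working precision
```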
