Written by
Professor John Beachy,
Department of Mathematical Sciences,
Northern Illinois University

## Using a calculator

Solving a system of linear equations on a computer or calculator is surprisingly difficult. Inverting a matrix or performing certain other matrix operations can lead to numerical errors that require a lot of theory to understand. Our department has an entire undergraduate course, Math 434, Numerical Linear Algebra, that covers numerical techniques in linear algebra. We do not have enough time to discuss numerical algorithms in MATH 240, and if you do not know the relevant theory, you must be very cautious and skeptical about the answer when you just press a button on your calculator.

## Solving a system of equations

If you have been trained to believe that a calculator will always give you the correct answer, you may be in for a shock if you try to solve a system of equations by just plugging the coefficients into your calculator and pressing a button. Even a system of two equations in two unknowns can present problems for the program used by your calculator. Here is one example.

We will try to solve this system of equations.

```
416785x + 415872y = 1
415872x + 414961y = 0
```
The coefficients in the problem have six significant digits. Since the TI-85 calculator stores more than twice that many significant digits internally, solving the system would seem to present no problem. Using the equation solver on the TI-85 gives the following "answer".
```
Calculator answer    x = 414843.184536   y = -415753.925885

Exact solution       x = 414961          y = -415872
```
You can "check" the calculator answer by substituting it back into the system. Be sure to use the values stored in the calculator. Then, to the limits of the calculator's accuracy, everything checks out, and, in fact, both solutions appear to be correct.
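The exact solution quoted above can be confirmed independently with exact arithmetic. Here is a minimal sketch in Python (not the TI-85's algorithm): Cramer's rule with arbitrary-precision integers, so that no intermediate result is ever rounded.

```python
# Coefficients of the system, kept as exact Python integers.
a1, b1, c1 = 416785, 415872, 1   # 416785x + 415872y = 1
a2, b2, c2 = 415872, 414961, 0   # 415872x + 414961y = 0

# Cramer's rule; every product below is computed exactly.
det = a1 * b2 - b1 * a2
x = (c1 * b2 - b1 * c2) // det
y = (a1 * c2 - c1 * a2) // det

print(det, x, y)   # 1 414961 -415872
```

The determinant is exactly 1, so the divisions are exact; a fixed-precision solver has to round the eleven-digit products along the way, and that is roughly where the calculator loses this answer.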

To find the exact solution, we can use elementary row operations on the system of equations. The first goal is to reduce the size of the numbers while retaining integer values. Here are the results.

```
416785x + 415872y = 1
415872x + 414961y = 0

   913x +    911y = 1
415872x + 414961y = 0

   913x +    911y =    1
   457x +    456y = -455

   456x +    455y =  456
   457x +    456y = -455
```
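These integer row operations are easy to replay mechanically. A short sketch in Python (the row-helper name is my own), carrying out each step above in exact integer arithmetic:

```python
# Each equation is stored as [x-coefficient, y-coefficient, right-hand side].
r1 = [416785, 415872, 1]
r2 = [415872, 414961, 0]

def row_sub(u, v, m=1):
    """Return the row u - m*v, computed exactly with integers."""
    return [ui - m * vi for ui, vi in zip(u, v)]

r1 = row_sub(r1, r2)        # ->  913x + 911y = 1
r2 = row_sub(r2, r1, 455)   # ->  457x + 456y = -455
r1 = row_sub(r1, r2)        # ->  456x + 455y = 456

print(r1, r2)   # [456, 455, 456] [457, 456, -455]
```

Each intermediate result matches the systems displayed above.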
Now the determinant is (456)(456) - (457)(455), which with n = 456 has the form
```
n² - (n+1)(n-1) = n² - (n² - 1) = 1
```
and therefore, since each row operation above only added a multiple of one row to another (which leaves the determinant unchanged), the original coefficient matrix also has determinant 1. The inverse of the original matrix is
```
 _                   _ -1        _                   _
|                     |         |                     |
|   416785   415872   |         |   414961  -415872   |
|                     |    =    |                     |
|   415872   414961   |         |  -415872   416785   |
|_                   _|         |_                   _|
```
and multiplying by the inverse gives us the exact solution x = 414961 and y = -415872. It is also possible to continue the row reduction.
```
456x + 455y =    456
457x + 456y =   -455

456x + 455y =    456
   x +    y =   -911

   x        = 414961
   x +    y =   -911

   x        = 414961
          y = -415872
```
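Because the determinant is 1, the inverse displayed above is just the 2-by-2 adjugate: swap the diagonal entries and negate the off-diagonal ones. A quick exact check in Python that it really is the inverse, and that it reproduces the solution:

```python
A    = [[416785, 415872], [415872, 414961]]
Ainv = [[414961, -415872], [-415872, 416785]]   # the adjugate; det(A) = 1

def matmul2(P, Q):
    """Multiply two 2x2 integer matrices exactly."""
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

print(matmul2(A, Ainv))   # [[1, 0], [0, 1]] -- the identity

# Applying the inverse to the right-hand side (1, 0) picks out its first
# column, giving the exact solution.
x, y = Ainv[0][0], Ainv[1][0]
print(x, y)   # 414961 -415872
```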

This problem is not unique to the TI-85. Using MATLAB on a SUN workstation also gives an answer that differs substantially from the correct one. The difficulties are inherent in the problem. To look at the problem from a geometric point of view, we can compute the slope of each line. To 12 significant digits, we get

```
-416785 ÷ 415872 = -1.00219538704

-415872 ÷ 414961 = -1.00219538704
```
To most calculators, the lines appear to be parallel, and so there should be no solution at all! Because the angle between the two lines is very small, a small change in the coefficients caused by roundoff error can make a very large difference in the solution. From a geometric point of view, shifting the two lines just a little bit can make a big difference in the point of intersection.
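The near-parallelism is easy to quantify. A sketch in Python, comparing the two slopes first in floating point and then exactly:

```python
from fractions import Fraction

# Slopes of the two lines in double precision.
s1 = -416785 / 415872
s2 = -415872 / 414961
print(abs(s1 - s2))   # on the order of 1e-12

# Exactly, the slopes differ by -1/(415872 * 414961), about -5.8e-12;
# the numerator is the determinant of the system, which we showed equals 1.
diff = Fraction(-416785, 415872) - Fraction(-415872, 414961)
print(diff == Fraction(-1, 415872 * 414961))   # True
```

The slopes differ only in the twelfth significant digit, far beyond the six-digit precision of the coefficients themselves.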

## Inverting a matrix

In the previous problem, we were able to find an exact inverse for the coefficient matrix.
```
 _                   _ -1        _                   _
|                     |         |                     |
|   416785   415872   |         |   414961  -415872   |
|                     |    =    |                     |
|   415872   414961   |         |  -415872   416785   |
|_                   _|         |_                   _|
```
Using the matrix inversion routine on the TI-85 gives the following answer. (Note that the first column contains the same answers we obtained as the solution of the previous system of equations.)
```
 _                   _ -1        _                             _
|                     |         |                               |
|   416785   415872   |         |  414843.184536 -415753.925885 |
|                     |    =    |                               |
|   415872   414961   |         | -415753.925885  416666.666667 |
|_                   _|         |_                             _|
```
To illustrate some of the inherent difficulties in doing Gaussian elimination using floating point arithmetic, we will look at the row reduction of a standard "badly behaved" matrix.

The matrix given below is called a Hilbert matrix. It is a well-known example of a matrix that causes problems for numerical algorithms. To help understand these problems, we will do an exact row reduction side by side with a row reduction done using floating point arithmetic. To see how the errors in the approximations compound, we will use a highly simplified example, in which the floating point arithmetic is carried out to only three significant digits.
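Both starting matrices can be set up directly. A sketch in Python (`sig3` is my own helper name), building the 4-by-4 Hilbert matrix exactly and then rounded to three significant digits:

```python
from fractions import Fraction

def sig3(x):
    """Round a float to three significant digits."""
    return float(f"{x:.3g}")

n = 4
# Entry (i, j) of the Hilbert matrix is 1/(i + j + 1), indexing from 0.
H_exact = [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]
H_float = [[sig3(1 / (i + j + 1)) for j in range(n)] for i in range(n)]

print(H_float[0])   # [1.0, 0.5, 0.333, 0.25]
print(H_float[3])   # [0.25, 0.2, 0.167, 0.143]
```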

```
Exact calculations in this column           Floating point calculations here

       _                       _            _                       _
      |                         |          |                         |
      |   1    1/2   1/3   1/4  |          | 1.00   .500  .333  .250 |
      |                         |          |                         |
      |  1/2   1/3   1/4   1/5  |          |  .500  .333  .250  .200 |
(1)   |                         |          |                         |
      |  1/3   1/4   1/5   1/6  |          |  .333  .250  .200  .167 |
      |                         |          |                         |
      |  1/4   1/5   1/6   1/7  |          |  .250  .200  .167  .143 |
      |_                       _|          |_                       _|
```
~>
```
       _                       _            _                         _
      |                         |          |                           |
      |   1  1/2   1/3   1/4    |          | 1.00  .500   .333   .250  |
      |                         |          |                           |
      |   0  1/12  1/12  3/40   |          | 0.00  .0830  .0830  .0750 |
(2)   |                         |          |                           |
      |   0  1/12  4/45  1/12   |          | 0.00  .0830  .0890  .0837 |
      |                         |          |                           |
      |   0  3/40  1/12  9/112  |          | 0.00  .0750  .0837  .0805 |
      |_                       _|          |_                         _|
```
~>
```
       _                        _           _                           _
      |                          |         |                             |
      |   1  1/2   1/3    1/4    |         | 1.00  .500   .333    .250   |
      |                          |         |                             |
      |   0  1/12  1/12   3/40   |         | 0.00  .0830  .0830   .0750  |
(3)   |                          |         |                             |
      |   0   0    1/180  1/120  |         | 0.00  .000   .00600  .00870 |
      |                          |         |                             |
      |   0   0    1/120  9/700  |         | 0.00  .000   .00870  .0127  |
      |_                        _|         |_                           _|
```
~>
```
       _                        _           _                           _
      |                          |         |                             |
      |   1  1/2   1/3    1/4    |         | 1.00  .500   .333    .250   |
      |                          |         |                             |
      |   0  1/12  1/12   3/40   |         | 0.00  .0830  .0830   .0750  |
(4)   |                          |         |                             |
      |   0   0    1/120  9/700  |         | 0.00  .000   .00870  .0127  |
      |                          |         |                             |
      |   0   0    1/180  1/120  |         | 0.00  .000   .00600  .00870 |
      |_                        _|         |_                           _|
```
~>
```
       _                         _          _                             _
      |                           |        |                               |
      |   1  1/2   1/3    1/4     |        | 1.00  .500   .333    .250     |
      |                           |        |                               |
      |   0  1/12  1/12   3/40    |        | 0.00  .0830  .0830   .0750    |
(5)   |                           |        |                               |
      |   0   0    1/120  9/700   |        | 0.00  .000   .00870  .0127    |
      |                           |        |                               |
      |   0   0      0   -1/4200  |        | 0.00  .000   .000   -.0000600 |
      |_                         _|        |_                             _|
```
In the original matrix, labeled (1), some of the decimal entries are already inaccurate. In matrix (3), the three-digit computation that produces the third entry in row 4 is this:

.837×10⁻¹ - (.904)(.830×10⁻¹) = .837×10⁻¹ - .750×10⁻¹ = .870×10⁻²

Comparing this to the correct value of 1 ÷ 120 = .00833 (to 3 digits) shows that it has only one correct digit.

In matrix (5), the last entry of row 4 is computed as follows:

.870×10⁻² - (.690)(.127×10⁻¹) = .870×10⁻² - .876×10⁻² = -.006×10⁻²

Because we have to subtract two values that are nearly equal, the answer has even less accuracy. The cumulative errors in reducing just 3 rows produce a value of -.0000600 instead of (-1) ÷ 4200 = -.000238 (to 3 digits).
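The whole comparison above can be reproduced mechanically. A Python sketch (the helper names are my own) that runs Gaussian elimination with partial pivoting twice over the 4-by-4 Hilbert matrix: once in exact rational arithmetic, and once with every intermediate quotient, product, and difference rounded to three significant digits:

```python
from fractions import Fraction

def sig3(x):
    """Round a float to three significant digits, simulating the
    three-digit floating point arithmetic used above."""
    return float(f"{x:.3g}")

def eliminate(M, rnd):
    """Gaussian elimination with partial pivoting; rnd is applied to
    every quotient, product, and difference."""
    M = [row[:] for row in M]
    n = len(M)
    for k in range(n - 1):
        # Partial pivoting: move the largest pivot candidate into row k
        # (this is the swap between matrices (3) and (4) above).
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = rnd(M[i][k] / M[k][k])
            M[i][k] = 0 * M[i][k]
            for j in range(k + 1, n):
                M[i][j] = rnd(M[i][j] - rnd(m * M[k][j]))
    return M

n = 4
exact = eliminate([[Fraction(1, i + j + 1) for j in range(n)]
                   for i in range(n)], rnd=lambda x: x)
approx = eliminate([[sig3(1 / (i + j + 1)) for j in range(n)]
                    for i in range(n)], rnd=sig3)

print(exact[3][3])    # -1/4200, the exact final pivot
print(approx[3][3])   # -6e-05, the -.0000600 computed above
```

The rounded run reproduces every entry of matrices (2) through (5), including the final pivot -.0000600 in place of the exact -1/4200.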

The method for defining a Hilbert matrix can be extended to larger sizes, and the 10 by 10 Hilbert matrix presents substantial problems for even a very sophisticated numerical algorithm. You can experiment on your calculator by inverting Hilbert matrices of larger and larger sizes.
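Even in ordinary double precision (about 16 significant digits) the same experiment scales badly. A Python sketch, assuming nothing beyond plain Gaussian elimination with partial pivoting (not necessarily the routine your calculator uses): solve Hx = b, where b is chosen so that the exact solution is all ones, and watch the error grow with the size n:

```python
def hilbert(n):
    """The n-by-n Hilbert matrix in double precision."""
    return [[1.0 / (i + j + 1) for j in range(n)] for i in range(n)]

def solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / M[i][i]
    return x

def hilbert_error(n):
    """Max deviation from the exact all-ones solution of Hx = (row sums)."""
    H = hilbert(n)
    b = [sum(row) for row in H]   # exact solution: x = (1, 1, ..., 1)
    return max(abs(v - 1.0) for v in solve(H, b))

for n in (4, 8, 12):
    print(n, hilbert_error(n))    # the error grows dramatically with n
```

For small n the computed solution is correct to many digits; by n = 12 the condition number of the Hilbert matrix is so large that even double precision loses essentially all accuracy.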
