In this section we first establish the connection between Szego polynomials and unitary upper Hessenberg matrices. This connection makes it possible to downdate the recurrence coefficients when a node-weight pair is deleted from the inner product (1.1) by applying a QR algorithm for unitary upper Hessenberg matrices. Conversely, when a node-weight pair is added to the inner product, the recurrence coefficients can be updated by applying an inverse QR algorithm for unitary upper Hessenberg matrices. The algorithmic details are discussed at the end of the section.

The connection between the Szego polynomials determined by (1.1) and a unitary Hessenberg matrix can be seen as follows. Using the basis of Szego polynomials, we can write

where for , and . Let for , and define the upper Hessenberg matrix . Also define the unitary matrix by

Substitution of , , into (2.1) yields the equation

or, equivalently,

where . Thus, **H** is a unitary
upper Hessenberg matrix with positive subdiagonal elements that is uniquely
determined by the inner product (1.1).
Also note that the first **n** columns of **U** make up the
matrix **Q** defined by (1.8).
Algorithms for the solution of the least-squares problem (1.6)
can therefore be viewed in terms of the spectral decomposition
(2.4).

It is fairly straightforward to show, by using the recurrence relations
(1.2)--(1.3),
that **H** can be written as a product
of **m** elementary unitary transformations that are defined by the
recursion coefficients.
We have

where the , , are Givens matrices

and is the diagonal matrix

We refer to the
representation (2.5) as the
*Schur parametric form* of **H**.
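Since the displayed formulas did not survive, the following sketch shows one common convention for assembling **H** from its Schur parameters: Givens blocks built from parameters gamma_j of modulus less than one (with complementary parameters sigma_j, the square roots of 1 - |gamma_j|^2), followed by a trailing diagonal factor with a unimodular parameter. The function name and the signs inside the 2x2 blocks are assumptions; conventions in the literature vary.

```python
import numpy as np

def schur_to_hessenberg(gammas):
    """Assemble a unitary upper Hessenberg matrix from Schur parameters.

    gammas: m complex parameters with |gamma_j| < 1 for j < m-1 and
    |gamma_{m-1}| = 1.  The sign convention of the 2x2 Givens blocks is
    one common choice; other papers use different signs.
    """
    m = len(gammas)
    H = np.eye(m, dtype=complex)
    # H = G_1(gamma_1) * ... * G_{m-1}(gamma_{m-1}) * (trailing factor)
    for j, g in enumerate(gammas[:-1]):
        s = np.sqrt(1.0 - abs(g) ** 2)          # complementary parameter
        G = np.eye(m, dtype=complex)
        G[j:j + 2, j:j + 2] = [[-g, s], [s, np.conj(g)]]
        H = H @ G
    G = np.eye(m, dtype=complex)
    G[-1, -1] = -gammas[-1]                      # trailing diagonal factor
    return H @ G
```

With this convention the product is unitary, upper Hessenberg, and its subdiagonal entries are exactly the positive complementary parameters, which is what makes the parameterization unique.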

The development of efficient algorithms for eigenproblems for
unitary Hessenberg matrices is facilitated by the fact that
every unitary upper Hessenberg matrix with nonnegative subdiagonal
elements has a unique Schur parameterization.
For example, when the implicitly shifted QR algorithm is applied to
find the spectral decomposition of
a unitary Hessenberg matrix, a sequence of intermediate unitary Hessenberg
matrices is generated that converges to a diagonal matrix.
In [5], the QR algorithm for unitary Hessenberg matrices
is formulated in terms of the Schur
parameters of the intermediate matrices. This results in an
implementation that requires only O(m) arithmetic operations per iteration
on a matrix of order **m**.

Assume for the moment that the matrix **H** and scaling factor
are given, but that the
abscissas and weights of the inner product
(1.1) are not explicitly known. Then it follows from
(2.4) and (2.2) that the nodes are the
eigenvalues of **H**, while the normalized square root of each
weight equals
the first component of the corresponding normalized eigenvector.
(We assume that each eigenvector is scaled to have unit Euclidean
length and a nonnegative first component.)
The unitary Hessenberg QR algorithm can be used to compute
the matrix and vector , without computing the
rest of **U**, with not more than O(m) arithmetic operations per iteration.
Throughout this paper denotes the axis
vector in .
The QR algorithm determines one abscissa-weight pair at a time,
and for each pair computed the order of the unitary upper Hessenberg matrix
is reduced by one, so that the reduced
Hessenberg matrix corresponds with the abscissa-weight pairs that have
not yet been determined.
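As a dense cross-check of this correspondence, one can recover the abscissa-weight pairs with a general eigensolver instead of the specialized O(m)-per-iteration algorithm of [5]: the nodes are the eigenvalues of **H** and each weight is the total weight times the squared magnitude of the first component of the unit eigenvector. The function name is illustrative, and the O(m^3) cost is the price of the dense shortcut.

```python
import numpy as np

def nodes_and_weights(H, total_weight=1.0):
    """Recover abscissas and weights from a unitary upper Hessenberg
    matrix H.  Dense O(m^3) stand-in for the unitary Hessenberg QR
    algorithm of [5]."""
    vals, vecs = np.linalg.eig(H)
    vecs = vecs / np.linalg.norm(vecs, axis=0)   # ensure unit columns
    return vals, total_weight * np.abs(vecs[0, :]) ** 2

# Example: the cyclic down-shift is unitary upper Hessenberg with
# positive subdiagonal; its nodes are the m-th roots of unity, each
# carrying equal weight.
m = 5
S = np.roll(np.eye(m), 1, axis=0)
nodes, weights = nodes_and_weights(S)
```

For the cyclic shift the eigenvectors are the Fourier vectors, so all first components have squared magnitude 1/m and the weights come out equal.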

We remark that in the case that the nodes of the discrete
inner product are real, then the analogue of the matrix
**H** is a real symmetric tridiagonal matrix **T** with positive subdiagonal
elements.
This matrix **T** contains recurrence coefficients for orthonormal polynomials
that satisfy a three-term recurrence relation.
Golub and Welsch [4] proposed
the use of the QR algorithm for symmetric tridiagonal matrices for the
computation of the abscissa-weight pairs associated with **T**. This
algorithm also determines one abscissa-weight pair at a time, and reduces the
order of **T** when such a pair has been found.
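The real-node analogue can be sketched the same way. The following dense version of the Golub-Welsch construction computes Gauss quadrature nodes and weights from the Jacobi matrix **T** with a symmetric eigensolver (the method of [4] instead works one pair at a time with the tridiagonal QR algorithm); the Legendre recurrence coefficients used in the example are the standard ones.

```python
import numpy as np

def golub_welsch(alpha, beta, mu0):
    """Gauss quadrature from the Jacobi matrix T with diagonal alpha and
    positive subdiagonal beta; mu0 is the total weight of the measure.
    Nodes are the eigenvalues of T, weights are mu0 times the squared
    first components of the unit eigenvectors."""
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    vals, vecs = np.linalg.eigh(T)
    return vals, mu0 * vecs[0, :] ** 2

# Example: 3-point Gauss-Legendre on [-1, 1] (total weight mu0 = 2);
# the Legendre subdiagonal entries are k / sqrt(4k^2 - 1).
k = np.arange(1, 3)
beta = k / np.sqrt(4.0 * k**2 - 1.0)
nodes, weights = golub_welsch(np.zeros(3), beta, 2.0)
```

The classical values, nodes 0 and plus/minus sqrt(3/5) with weights 8/9 and 5/9, fall out directly.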

Conversely, the construction of **H** from the
inner product (1.1) can be regarded as an inverse eigenvalue
problem. In particular, given the matrix
and the
vector , we can perform
a sequence of elementary unitary similarity transformations
whose composition results in an unitary matrix **U**
such that the matrix on the right-hand side of

is of upper Hessenberg form with positive subdiagonal elements.
(The remaining entry represents an arbitrary scalar that remains unchanged.)
In other words, , and is a
unitary upper Hessenberg matrix **H**. Consequently,
**H** is the matrix corresponding with the inner product (1.1).
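This inverse eigenvalue problem can also be solved densely, which is useful for checking: run the Arnoldi process on the diagonal matrix of nodes, started at the vector of normalized square-root weights. Because that matrix is unitary, the resulting Hessenberg matrix is the unitary **H** with positive subdiagonal, and the accumulated orthonormal basis plays the role of **U**. This is an illustration, not the IUQR algorithm of [1], and the function name is an assumption.

```python
import numpy as np

def hessenberg_from_nodes(nodes, weights):
    """Dense solution of the inverse eigenvalue problem: Arnoldi on
    D = diag(nodes), started at the normalized square-root weights,
    yields H = Q^* D Q, unitary upper Hessenberg with positive
    subdiagonal.  The IUQR algorithm of [1] achieves the same result
    in O(m^2) operations."""
    z = np.asarray(nodes, dtype=complex)
    w = np.asarray(weights, dtype=float)
    m = len(z)
    Q = np.zeros((m, m), dtype=complex)
    H = np.zeros((m, m), dtype=complex)
    Q[:, 0] = np.sqrt(w / w.sum())
    for j in range(m):
        v = z * Q[:, j]                       # apply D = diag(z)
        for i in range(j + 1):                # modified Gram-Schmidt
            H[i, j] = np.vdot(Q[:, i], v)
            v -= H[i, j] * Q[:, i]
        if j + 1 < m:
            H[j + 1, j] = np.linalg.norm(v)   # positive subdiagonal
            Q[:, j + 1] = v / H[j + 1, j]
    return H, Q
```

Distinct unimodular nodes and positive weights guarantee that the process runs to completion without breakdown, which mirrors the uniqueness statement above.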

The transformation of to **H** can
be performed using O(m^2) arithmetic operations with
the inverse unitary QR (IUQR) algorithm described in [1].
The IUQR algorithm is an updating procedure because it incorporates
node-weight pairs one at a time.
After the **j**th stage of the algorithm has been executed, the
unitary
Hessenberg matrix corresponding with the inner product determined by the
first **j** nodes and weights has been obtained.

In [10], the approximation problem (1.6) is solved using the IUQR
algorithm to construct the
Schur parameters of the unitary Hessenberg matrix **H**.
The elementary unitary matrices are accumulated against the
vector during the algorithm
to obtain the vector of Fourier coefficients without
explicitly forming **U**. In this way we obtain
the interpolating polynomial
that is the solution of (1.6) with **n = m** in
O(m^2) arithmetic operations.
Of course, the partial sums of the Fourier expansion of the interpolating
polynomial yield
the solution (1.10) of (1.6) for each **n < m**.
Moreover, if one is interested in computing the solution only for **n < m**,
then the algorithm can be *curtailed* so that only O(mn) arithmetic
operations are required for the computation of the
required parameters. See [10] for details.
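The interpolation property with **n = m** is easy to verify densely. In the sketch below (an illustration with assumed normalization by the total weight, not the O(m^2) method of [10]), the orthonormal basis from the Arnoldi construction of **H** encodes the Szego polynomial values at the nodes, so the Fourier coefficients of the interpolant are obtained by one unitary matrix-vector product against the square-root-weighted data.

```python
import numpy as np

def arnoldi_q(z, w):
    """Orthonormal basis Q with first column sqrt(w / sum(w)), built by
    Arnoldi on diag(z); its columns carry the weighted values of the
    orthonormal Szego polynomials at the nodes (assumed convention)."""
    m = len(z)
    Q = np.zeros((m, m), dtype=complex)
    Q[:, 0] = np.sqrt(w / w.sum())
    for j in range(m - 1):
        v = z * Q[:, j]
        for i in range(j + 1):
            v -= np.vdot(Q[:, i], v) * Q[:, i]
        Q[:, j + 1] = v / np.linalg.norm(v)
    return Q

rng = np.random.default_rng(2)
z = np.exp(2j * np.pi * np.arange(5) / 5)     # nodes on the unit circle
w = rng.uniform(0.5, 2.0, 5)                  # positive weights
f = rng.standard_normal(5) + 1j * rng.standard_normal(5)  # data values

Q = arnoldi_q(z, w)
g = np.sqrt(w / w.sum()) * f
c = Q.conj().T @ g                            # Fourier coefficients
# Evaluating the degree-(m-1) least-squares solution at the nodes
# recovers the data exactly: it interpolates.
values = (Q @ c) / np.sqrt(w / w.sum())
```

Because Q is unitary, applying Q after Q-star restores g, and undoing the square-root weighting returns the original data values, which is precisely the interpolation property.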

We now turn to the algorithmic details of our downdating scheme.
Assume that we have solved the
least-squares problem (1.6) with **n = m**
by the method described in [10],
so that sets of Schur parameters and of
complementary parameters
corresponding with the inner product (1.1), as well as sets of
Fourier coefficients of the interpolating polynomial
are explicitly known. In the downdating problem, we seek to solve
the corresponding least-squares problem
with one term deleted from (1.1).
In particular, let be the node that is to be
deleted from the inner product (1.1). In order to simplify
some formulas that follow, we assume without loss of generality
that .

Introduce the inner product

and discrete norm

We seek the polynomial such that

Denote the family of orthonormal Szego polynomials with positive leading coefficient associated with (2.7) by . Also let and denote the sets of recurrence coefficients for the , and let and , where is the total weight of the measure that defines (2.7). Then from the above discussion, there is a unique unitary Hessenberg matrix and a unique unitary matrix such that

and

where denotes the axis vector in . Moreover, the recurrence coefficients and are the Schur parameters and complementary parameters, respectively, of . The optimal polynomial is then given by

where the vector of Fourier coefficients is given by and .

Our scheme for downdating is based on the observation
that and
can be computed from **H** and **U** by applying one step of the
QR algorithm with "ultimate" shift , together with
some permutation similarity transformations, to determine a unitary
matrix **W** such that

where . Then

and

Thus, the downdated Fourier coefficients are obtained by accumulating each elementary unitary transformation against . An efficient implementation is obtained by using the QR algorithm for unitary Hessenberg matrices described in [5].

We now assume that for some **l**, , and let
.
One step of the QR algorithm with shift
applied to the matrix **H** determines a unitary upper Hessenberg
matrix such that

because is an eigenvalue of **H**.
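The deflation produced by one QR step with the ultimate shift can be checked densely. The sketch below (an illustration, not the O(m) algorithm of [5]) uses the cyclic shift matrix, whose eigenvalues are the m-th roots of unity, factors the shifted matrix, and recombines the factors: the last row collapses to the shift times the last axis vector, and the leading principal block carries the remaining eigenvalues.

```python
import numpy as np

m = 5
# A simple unitary upper Hessenberg matrix: the cyclic down-shift,
# whose eigenvalues are the m-th roots of unity.
H = np.roll(np.eye(m), 1, axis=0).astype(complex)
shift = 1.0 + 0.0j                    # ultimate shift: an eigenvalue of H

Q, R = np.linalg.qr(H - shift * np.eye(m))
H1 = R @ Q + shift * np.eye(m)        # one step of the shifted QR algorithm
# Since the shift is an eigenvalue, the last diagonal entry of R is
# (numerically) zero, so the last row of H1 is shift times e_m^T and
# the node deflates; the remaining nodes sit in the leading block.
```

As the text notes, this plain QR step does not transform the weight vector as required, which is why the algorithm below works with the reversed matrix instead.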
It is easy to see, however, that
the QR algorithm applied to **H** will not yield the required
, because the vector will not be transformed
as required by (2.10).
On the other hand, an
RQ algorithm for **H**, in which the transforming matrix would
be a product of Givens matrices in the reverse order of
(2.12) below, would yield the
desired similarity transformation.

Instead of modifying the unitary Hessenberg QR algorithm of [5]
to perform an RQ iteration,
we apply the QR algorithm to the unitary Hessenberg matrix
, where is the
*reversal matrix* of order **m**.
It is easily seen that if **H** is given by (2.5), then

where for and
.
Applying the QR algorithm to this matrix is equivalent
to applying the RQ algorithm to **H**. One iteration of
the QR algorithm with shift applied to
generates a unitary upper Hessenberg matrix

such that

Moreover, can be taken to be an arbitrary unimodular number because deflation has taken place [5]. Let denote the complementary parameter to for .

Observe that only the last two components of are nonzero, and that can be chosen so that these two components are given by

Finally, we transform the right-hand side of (2.13) by
similarity using
, where is the reversal
matrix of order **m-1**, and transpose the result. In this way we
transform **H** to , where
.
Moreover, , and by the uniqueness of the reduction,
,
, and
.
The vector of Fourier coefficients is then determined
from (2.11) by applying to incrementally.

Observe that our downdating procedure requires knowledge
of the node to be deleted but not of the corresponding weight.
We can therefore compare the computed value of to
the actual value in
order to assess the accuracy of the computations.
If is not close to , then the downdated polynomials
and might not be accurate.
Another accuracy check is provided by the computed value of
in the algorithm. This quantity is the last diagonal element of the
upper triangular matrix in the QR factorization of , which
is mathematically zero when is an eigenvalue
of **H**.
If the computed value of is not "tiny", then the computed downdated
polynomial might be inaccurate.
We therefore consider and
as being part of the output of the algorithm.

The algorithm overwrites the Fourier coefficients
with intermediate quantities. Algorithm 2.1
requires O(m) arithmetic operations
and the evaluation of **m-1** square roots.

We have already noted that if the nodes are real, then the analogue of
the matrix **H** is a real symmetric tridiagonal matrix **T** with positive
subdiagonal elements. In analogy with Algorithm 2.1,
downdating of the orthonormal polynomials associated with **T**, as
well as of Fourier coefficients, can be carried out by algorithms based on
the QR algorithm for symmetric tridiagonal matrices. This observation may
be new.

In certain data-fitting
applications it may be desirable to update the polynomial ,
given by (1.9)--(1.10) with **n=m**, by
replacing certain
abscissa-weight
pairs. This can be carried out by successively removing an
abscissa-weight pair by
Algorithm 2.1, and then adding a new abscissa-weight
pair using
one step of Algorithm 3.1 in [10].
This combination of algorithms yields a
sliding window scheme.
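A dense reference implementation of such a sliding window simply recomputes **H** from the current set of node-weight pairs, which is useful for testing the fast scheme. The sketch below is an illustration with a hypothetical function name; the paper's combination of Algorithm 2.1 with the updating step of [10] avoids the O(m^3) recomputation per slide.

```python
import numpy as np

def hessenberg_from_window(z, w):
    """Recompute the unitary Hessenberg matrix for the current window of
    node-weight pairs by Arnoldi on diag(z) (dense O(m^3) reference)."""
    z = np.asarray(z, dtype=complex)
    w = np.asarray(w, dtype=float)
    m = len(z)
    Q = np.zeros((m, m), dtype=complex)
    H = np.zeros((m, m), dtype=complex)
    Q[:, 0] = np.sqrt(w / w.sum())
    for j in range(m):
        v = z * Q[:, j]
        for i in range(j + 1):
            H[i, j] = np.vdot(Q[:, i], v)
            v -= H[i, j] * Q[:, i]
        if j + 1 < m:
            H[j + 1, j] = np.linalg.norm(v)
            Q[:, j + 1] = v / H[j + 1, j]
    return H

# Slide a window of 4 node-weight pairs over a stream of 8 pairs.
stream_z = np.exp(2j * np.pi * np.arange(8) / 11)
stream_w = np.linspace(1.0, 2.0, 8)
window = [0, 1, 2, 3]
for new in range(4, 8):
    window = window[1:] + [new]       # delete oldest pair, add newest
    H = hessenberg_from_window(stream_z[window], stream_w[window])
```

After each slide, the eigenvalues of the recomputed matrix are exactly the nodes currently in the window, which is the invariant the fast downdate/update combination maintains.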
