A range of orthonormal basis formulations has been developed in previous chapters, and their utility as a theoretical tool for quantifying both bias and variance error has been illustrated. The latter was shown to arise often in situations where the original model structure was not formulated in terms of an orthonormal basis. This chapter turns to a different issue: the practical (as opposed to theoretical) dividend of employing orthonormally parameterized model structures. In particular, it is natural to ask why, when writing computer code for system identification purposes, the model should be parameterized in an orthonormal form rather than in a simpler, but mathematically equivalent, "fixed denominator" form. In fact, as discussed here, a key motivation for employing these orthonormal forms is their improved numerical properties; namely, for white input, perfect conditioning of the least-squares normal equations is achieved by design. For the more usual case of a coloured input spectrum, however, it is not clear how their numerical conditioning compares with that of simpler and perhaps more natural model structures. This chapter therefore poses the question: what is the benefit of model structures that are orthonormal with respect to white spectra, but not with respect to the more common coloured spectra? The answer found here, via theoretical and empirical argument, is that in numerical conditioning terms the orthonormal model structures are particularly robust to spectral colouring, while the simpler, more natural forms are particularly fragile in this regard. Of course, the significance of this improved conditioning is open to question. In response, the chapter concludes with an example of adaptive estimation in which improved conditioning yields a clear dividend: an improved convergence rate.
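The white-input conditioning claim can be checked numerically. The following is a minimal sketch (not taken from the chapter): it compares the condition number of the sample normal-equation matrix for an orthonormal Laguerre parameterization against that of a "fixed denominator" parameterization sharing the same repeated pole, both driven by the same white input. The pole location `a = 0.8` and model order `n = 6` are illustrative choices, as is the particular fixed-denominator basis `z^-k / (1 - a z^-1)^n`.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
N, n, a = 50_000, 6, 0.8          # samples, model order, pole (illustrative)
u = rng.standard_normal(N)        # white input of unit variance

# Orthonormal Laguerre regressors:
#   B_0(z) = sqrt(1 - a^2) / (1 - a z^-1),
#   B_k(z) = B_{k-1}(z) * (z^-1 - a) / (1 - a z^-1)   (all-pass sections)
phi_on = np.empty((n, N))
x = lfilter([np.sqrt(1 - a**2)], [1.0, -a], u)
phi_on[0] = x
for k in range(1, n):
    x = lfilter([-a, 1.0], [1.0, -a], x)
    phi_on[k] = x

# Fixed-denominator regressors: f_k(z) = z^-k / (1 - a z^-1)^n
den = np.array([1.0])
for _ in range(n):
    den = np.convolve(den, [1.0, -a])
phi_fd = np.empty((n, N))
for k in range(n):
    b = np.zeros(k + 1)
    b[k] = 1.0                    # numerator z^-k
    phi_fd[k] = lfilter(b, den, u)

# Condition numbers of the sample normal-equation matrices R = E[phi phi^T]
cond_on = np.linalg.cond(phi_on @ phi_on.T / N)
cond_fd = np.linalg.cond(phi_fd @ phi_fd.T / N)
print(f"orthonormal basis:  cond(R) = {cond_on:.3f}")
print(f"fixed denominator:  cond(R) = {cond_fd:.3e}")
```

For white input the Laguerre covariance matrix is the identity up to sampling error, so its condition number is close to 1 by construction, whereas the near-collinear fixed-denominator regressors produce a condition number that is orders of magnitude larger.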
Springer-Verlag New York, 2005, pp. 161-188.