Bernard Friedland

Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ, USA

Keywords: Observer, Full-order observer, Luenberger observer, Residual, Algebraic Riccati equation, Doyle-Stein condition, Bass-Gura formula, Observability matrix, Discrete-time algebraic Riccati equation, Separation principle, State estimation, Metastate.


1. Introduction

2. Linear Observers

3. The Separation Principle

4. Nonlinear Observers


Related Chapters



Biographical Sketch

2. Linear Observers

2.1 Continuous-Time Systems

Consider a linear, continuous-time dynamic system

$\dot{x} = Ax + Bu$,                                                (1)

with the observation (output) vector

$y = Cx$.                                                           (2)

The more generic output

$y = Cx + Du$

can be treated by defining a modified output

$\bar{y} = y - Du = Cx$

and working with $\bar{y}$ instead of $y$. (The direct coupling $D$ from the input to the output is absent in many physical plants.)

A full-order observer for the linear process defined by (1) and (2) has the generic form

$\dot{\hat{x}} = \hat{A}\hat{x} + Ky + Hu$,                         (3)

where the dimension of the state $\hat{x}$ of the observer is equal to the dimension of the process state $x$.

The matrices $\hat{A}$, $K$, and $H$ appearing in (3) must be chosen to conform with the required property of an observer: the observer state must converge to the process state independent of the state $x$ and the input $u$. To determine these matrices, let

$e = x - \hat{x}$                                                   (4)

be the estimation error. From (1), (2), and (3),

$\dot{e} = \hat{A}e + (A - \hat{A} - KC)x + (B - H)u$.              (5)

From (5) it is seen that for the error to converge to zero independent of $x$ and $u$, the following conditions must be satisfied:

$\hat{A} = A - KC$,                                                 (6)

$H = B$.                                                            (7)

When these conditions are satisfied, the estimation error is governed by

$\dot{e} = (A - KC)e$,                                              (8)

which converges to zero if $\hat{A} = A - KC$ is a “stability matrix”, i.e., if (8) is an asymptotically stable dynamic system. When $\hat{A}$ is constant, this means that its eigenvalues must lie in the (open) left half-plane.

Note that the initial state of (8) is

$e(0) = x(0) - \hat{x}(0)$;

hence, if the initial state of the process under observation is known precisely (i.e., $\hat{x}(0) = x(0)$), then the estimation error is zero thereafter. Due to the possibility of disturbances (not necessarily the “white noise” assumed in the Kalman filter), however, the true state $x$ may depart from the solution to (1) during the operation of the observer. Hence knowledge of the initial state $x(0)$ does not eliminate the need for an observer in practical situations.
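The error dynamics (8) are easy to check numerically. The sketch below (Python with NumPy/SciPy; the double-integrator plant and the gain are illustrative choices, not taken from the text) verifies that $A - KC$ is a stability matrix and that the error decays from a nonzero $e(0)$:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative plant: double integrator with position measurement.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])

# Trial observer gain, chosen by hand; K is the only design freedom
# once conditions (6) and (7) are imposed.
K = np.array([[3.0],
              [2.0]])

A_hat = A - K @ C                   # closed-loop observer matrix
eigs = np.linalg.eigvals(A_hat)     # eigenvalues -1 and -2: a stability matrix

# The error evolves as e(t) = exp(A_hat * t) e(0), per (8).
e0 = np.array([[1.0], [1.0]])       # nonzero initial error x(0) - xhat(0)
e_t = expm(A_hat * 5.0) @ e0        # essentially zero after 5 time units
```

Observe that the decay rate is set entirely by the eigenvalues of $A - KC$, independent of the process state and input, exactly as conditions (6) and (7) require.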

Since the matrices $A$, $B$, and $C$ are defined by the plant, the only freedom in the design of the observer is in the selection of the gain matrix $K$.

To emphasize the role of the observer gain matrix, and to account for the requirements (6) and (7), the observer can be written as

$\dot{\hat{x}} = A\hat{x} + Bu + K(y - C\hat{x})$.                  (9)

Figure 1: Full-order observer for linear process.

A block-diagram representation of (9), as given in Figure 1, aids in the interpretation of the observer. Note that the observer comprises a model of the process with an added input:

$K(y - \hat{y})$.

The quantity

$r = y - \hat{y}$,                                                  (10)

often called the residual, is the difference between the actual observation $y$ and the “synthesized” observation

$\hat{y} = C\hat{x}$

produced by the observer. The observer can be viewed as a feedback system designed to drive the residual to zero: as the residual is driven to zero, the input to (9) due to the residual vanishes and the state of (9) behaves like the state of the original process.
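This feedback interpretation can be seen in simulation. The following sketch runs the process and the observer (9) side by side with forward-Euler integration; the harmonic-oscillator plant, gain, and input are illustrative assumptions:

```python
import numpy as np

# Illustrative plant: undamped oscillator with position measurement.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[2.0], [3.0]])     # puts the eigenvalues of A - KC at -1 +/- j*sqrt(3)

dt, steps = 1e-3, 10_000
x  = np.array([[1.0], [0.0]])    # true state, unknown to the observer
xh = np.zeros((2, 1))            # observer state, deliberately wrong at t = 0
u  = 1.0                         # constant input, known to the observer

for _ in range(steps):
    y  = C @ x                               # actual observation
    r  = y - C @ xh                          # residual: actual minus synthesized
    x  = x + dt * (A @ x + B * u)            # process (1)
    xh = xh + dt * (A @ xh + B * u + K @ r)  # observer (9)

err = np.linalg.norm(x - xh)     # residual feedback has driven the error down
```

Because (6) and (7) force the observer to copy $A$ and $B$, the only coupling between the two simulations is the residual term; as it vanishes, the observer state tracks the true state.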

The fundamental problem in the design of an observer is the determination of the observer gain matrix $K$ such that the closed-loop observer matrix

$\hat{A} = A - KC$                                                  (11)

is a stability matrix, as defined above.

There is considerable flexibility in the selection of the observer gain matrix. Two methods are standard: optimization and pole-placement.

2.1.1 Optimization

Since the observer given by (9) has the structure of a Kalman filter (see Kalman Filters), its gain matrix can be chosen as a Kalman filter gain matrix, i.e.,

$K = PC'R^{-1}$,                                                    (12)

where $P$ is the covariance matrix of the estimation error and satisfies the matrix Riccati equation

$\dot{P} = AP + PA' - PC'R^{-1}CP + Q$,                             (13)

where $R$ is a positive-definite matrix and $Q$ is a positive-semidefinite matrix. The matrices $R$ and $Q$ are, respectively, the spectral density matrices of the white noise processes driving the observation (the “observation noise”) and the system dynamics (the “process noise”).

The initial condition on (13),

$P(0) = P_0$,

is the initial state covariance matrix, chosen to reflect the uncertainty of the state at the starting time $t_0$.

In many applications the steady-state covariance matrix $\bar{P}$ is used in (12). This matrix is given by setting $\dot{P}$ in (13) to zero; the resulting equation,

$A\bar{P} + \bar{P}A' - \bar{P}C'R^{-1}C\bar{P} + Q = 0$,

is known as the algebraic Riccati equation (ARE). Algorithms to solve the ARE are included in popular control-system software packages such as Matlab.
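As a sketch of the steady-state computation, SciPy's `solve_continuous_are` handles the controller-form ARE; the filter ARE obtained from (13) is its dual, reached by passing $A'$ and $C'$. The plant and noise intensities below are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
Q = np.eye(2)           # assumed process-noise spectral density
R = np.array([[1.0]])   # assumed observation-noise spectral density

# Dual call: solves A P + P A' - P C' R^-1 C P + Q = 0, i.e. (13) with Pdot = 0.
P = solve_continuous_are(A.T, C.T, Q, R)
K = P @ C.T @ np.linalg.inv(R)        # steady-state gain (12)

eigs = np.linalg.eigvals(A - K @ C)   # stable by construction
```

With $(A, C)$ detectable and $Q$ positive definite, the resulting observer matrix $A - KC$ is guaranteed to be a stability matrix.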

In order for the gain matrix given by (12) and (13) to be genuinely optimal, the process noise and the observation noise must be white, with the matrices $Q$ and $R$ being their spectral densities. It is rarely possible to determine these spectral density matrices in practical applications. Hence the matrices $Q$ and $R$ can be treated as design parameters which can be varied to achieve overall system design objectives.

If the observer is to be used as a state estimator in a closed-loop control system, an appropriate form for the matrix $Q$ is

$Q = Q_0 + q^2 BB'$.                                                (14)

As has been shown by Doyle and Stein, as $q \to \infty$, this observer tends to “recover” the stability margins assured by a full-state feedback control law obtained by quadratic optimization.
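A small numerical experiment illustrates the trend (not the full Doyle-Stein recovery argument): with the weighting of (14), the steady-state observer gain grows monotonically as $q$ increases. The double-integrator plant below is an illustrative assumption:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q0 = np.eye(2)
R = np.array([[1.0]])

gains = []
for q in (1.0, 10.0, 100.0):
    Q = Q0 + q**2 * (B @ B.T)                 # weighting (14)
    P = solve_continuous_are(A.T, C.T, Q, R)  # dual call solves the filter ARE
    gains.append(np.linalg.norm(P @ C.T @ np.linalg.inv(R)))

# gains grows with q: the observer relies progressively more on the measurement.
```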

2.1.2. Pole-Placement

An alternative to solving the algebraic Riccati equation to obtain the observer gain matrix is to select $K$ to place the poles of the observer, i.e., the eigenvalues of $\hat{A}$ in (11). (See Pole Placement Control.)

When there is a single observation, $K$ is a column vector with exactly as many elements as eigenvalues of $\hat{A}$. Hence specification of the eigenvalues of $\hat{A}$ uniquely determines the gain matrix $K$. A number of algorithms can be used to determine the gain matrix, some of which are incorporated into the popular control-system design software packages. Some of the algorithms have been found to be numerically ill-conditioned, so caution should be exercised in using the results.

The author of this chapter has found the Bass-Gura formula effective in most applications. This formula gives the gain matrix as

$K = (WO)^{-1}(\bar{a} - a)$,                                       (15)

where

$a = [a_1 \ a_2 \ \cdots \ a_n]'$                                   (16)

is the vector formed from the coefficients of the characteristic polynomial of the process matrix $A$:

$|sI - A| = s^n + a_1 s^{n-1} + \cdots + a_n$,                      (17)

and $\bar{a} = [\bar{a}_1 \ \bar{a}_2 \ \cdots \ \bar{a}_n]'$ is the vector formed from the coefficients of the desired characteristic polynomial

$\bar{D}(s) = s^n + \bar{a}_1 s^{n-1} + \cdots + \bar{a}_n$.        (18)

The other matrices in (15) are given by

$O = [C' \ (CA)' \ \cdots \ (CA^{n-1})']'$,                         (19)

which is the observability matrix of the process, and

$W = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ a_1 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{n-1} & a_{n-2} & \cdots & 1 \end{bmatrix}$.                (20)

The determinant of $W$ is 1, so it is never singular. If the observability matrix $O$ is also nonsingular, the inverse required in (15) exists. Hence a gain matrix $K$ can be found which places the observer poles at arbitrary locations if (and only if) the process for which an observer is sought is observable.

Ackermann’s algorithm (cited by Kailath and incorporated in the Matlab suite) is an alternative to the Bass-Gura algorithm.

Numerical problems occur with both the Bass-Gura algorithm and Ackermann's algorithm when the observability matrix is nearly singular. Other numerical problems can arise in the determination of the characteristic polynomial of $A$ for high-order systems, and in the determination of $\bar{a}$ when the individual poles, and not the characteristic polynomial, are specified. In such instances it may be necessary to use an algorithm designed to handle difficult numerical calculations, such as the algorithm of Kautsky and Nichols, which is included in the Matlab suite.
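The formula (15)-(20) transcribes directly into NumPy. The sketch below (an undamped-oscillator plant and arbitrarily chosen desired poles, both illustrative) confirms that the resulting gain places the observer poles as requested:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-4.0, 0.0]])   # illustrative plant, |sI - A| = s^2 + 4
C = np.array([[1.0, 0.0]])
n = A.shape[0]

a    = np.poly(A)[1:]               # [a_1, ..., a_n] from (16)-(17)
abar = np.poly([-3.0, -5.0])[1:]    # desired coefficients (18): s^2 + 8s + 15

# Observability matrix (19): rows C, CA, ..., CA^(n-1).
O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

# Unit-triangular Toeplitz matrix (20); its determinant is 1.
W = np.eye(n)
for i in range(1, n):
    for j in range(i):
        W[i, j] = a[i - j - 1]

# Gain (15): K = (W O)^-1 (abar - a).
K = np.linalg.solve(W @ O, (abar - a).reshape(-1, 1))

placed = np.sort(np.linalg.eigvals(A - K @ C).real)   # the requested poles
```

Solving the linear system with `np.linalg.solve` rather than forming the explicit inverse is the numerically preferred route, though the conditioning caveats above still apply when $O$ is nearly singular.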

When two or more quantities are observed, there are more elements in the gain matrix than eigenvalues of $\hat{A}$, so specification of the eigenvalues of $\hat{A}$ does not uniquely determine the gain matrix $K$. In addition to placing the eigenvalues, more of the “eigenstructure” of $\hat{A}$ can be specified. This method of selecting the gain matrix is fraught with difficulty, however, and the use of the algebraic Riccati equation is usually preferable. The Kautsky-Nichols algorithm can also handle more than a single observation input; it uses the additional degrees of freedom afforded by the multiple inputs to achieve enhanced robustness in the observer.
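The Kautsky-Nichols algorithm is available in SciPy as `scipy.signal.place_poles`; by duality, placing the eigenvalues of $A' - C'K'$ places those of $A - KC$. A sketch with an illustrative three-state, two-output plant:

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative plant with two measured quantities.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

# Solve the dual (state-feedback) problem for A', C'; place_poles exploits the
# extra degrees of freedom of the multi-output case to improve robustness.
result = place_poles(A.T, C.T, [-2.0, -3.0, -4.0])
K = result.gain_matrix.T          # 3 x 2 observer gain matrix

eigs = np.sort(np.linalg.eigvals(A - K @ C).real)   # the requested pole set
```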


2.2 Discrete-Time Systems

©Copyright 2004 Eolss Publishers. All rights reserved.