5. The Eigensystem Realization Algorithm



The basic development of state-space realization is attributed to Ho and Kalman, who introduced the important principles of minimum realization theory. The Ho-Kalman procedure uses a Hankel matrix to construct a state-space representation of a linear system from noise-free data. The methodology was modified and substantially extended by Juang and Pappa into the Eigensystem Realization Algorithm (ERA), which identifies modal parameters from noisy measurement data.

5-1) Hankel Matrices

System realization begins by forming the generalized Hankel matrix composed of the Markov parameters:

\begin{align} \label{Eq: Hankel matrix k} \boldsymbol{H}_k^{(p, q)} = \begin{bmatrix} h_{k+1} & h_{k+2} & \cdots & h_{k+q}\\ h_{k+2} & h_{k+3} & \cdots & h_{k+q+1}\\ \vdots & \vdots & \ddots & \vdots\\ h_{k+p} & h_{k+p+1} & \cdots & h_{k+p+q-1}\\ \end{bmatrix} = \boldsymbol{O}^{(p)}A^{k}\boldsymbol{R}^{(q)}. \end{align}

For the case when \(k=0\),

\begin{align} \label{Eq: Hankel matrix 0} \boldsymbol{H}_0^{(p, q)} = \begin{bmatrix} h_{1} & h_{2} & \cdots & h_{q}\\ h_{2} & h_{3} & \cdots & h_{q+1}\\ \vdots & \vdots & \ddots & \vdots\\ h_{p} & h_{p+1} & \cdots & h_{p+q-1}\\ \end{bmatrix} = \boldsymbol{O}^{(p)}\boldsymbol{R}^{(q)}. \end{align}
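The block Hankel construction above can be sketched numerically. The helper below (the name `hankel_block` and the list-of-arrays storage are illustrative choices, not from the original) assembles \(\boldsymbol{H}_k^{(p,q)}\) from Markov parameters \(h_k\) of size \(m\times r\):

```python
import numpy as np

def hankel_block(h, k, p, q):
    """Assemble the generalized block Hankel matrix H_k^{(p,q)}.

    h : list of Markov parameters, h[i] is the (m x r) array h_i
        (h[0] would be the feedthrough term and is not used here).
    The block in position (i, j) is h_{k+i+j+1}, so the result
    has size (p*m) x (q*r) with h_{k+1} in the top-left corner.
    """
    m, r = h[1].shape
    H = np.zeros((p * m, q * r))
    for i in range(p):
        for j in range(q):
            H[i*m:(i+1)*m, j*r:(j+1)*r] = h[k + i + j + 1]
    return H
```

For noise-free data generated by a state-space model, this matrix equals \(\boldsymbol{O}^{(p)}A^{k}\boldsymbol{R}^{(q)}\) exactly, which is easy to verify on a small example.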

If \(pm\geq n\) and \(qr\geq n\), the matrices \(\boldsymbol{R}^{(q)}\) and \(\boldsymbol{O}^{(p)}\) have rank at most \(n\). If the system is controllable and observable, the block matrices \(\boldsymbol{R}^{(q)}\) and \(\boldsymbol{O}^{(p)}\) have rank exactly \(n\). Therefore,

\begin{align} \text{rank}\left[\boldsymbol{H}_0^{(p, q)}\right] = \text{rank}\left[\boldsymbol{O}^{(p)}\boldsymbol{R}^{(q)}\right] \leq \min\left(\text{rank}\left[\boldsymbol{O}^{(p)}\right], \text{rank}\left[\boldsymbol{R}^{(q)}\right]\right) = n. \end{align}

Since \(\text{rank}\left[\boldsymbol{R}^{(q)}\right] = n\) (\(\boldsymbol{R}^{(q)}\) has full row rank because the system is assumed controllable), multiplying both sides by the pseudo-inverse \({\boldsymbol{R}^{(q)}}^\dagger\) yields

\begin{align} n = \text{rank}\left[\boldsymbol{O}^{(p)}\right] = \text{rank}\left[\left(\boldsymbol{O}^{(p)}\boldsymbol{R}^{(q)}\right){\boldsymbol{R}^{(q)}}^\dagger\right] \leq \text{rank}\left[\boldsymbol{O}^{(p)}\boldsymbol{R}^{(q)}\right] = \text{rank}\left[\boldsymbol{H}_0^{(p, q)}\right]. \end{align}

Hence we have

\begin{align} \text{rank}\left[\boldsymbol{H}_0^{(p, q)}\right] = n. \end{align}

If the system order is \(n\), the minimal dimension of the state matrix \(A\) is \(n\times n\) and therefore, for any \(k\geq 0\),

\begin{align} \text{rank}\left[\boldsymbol{H}_k^{(p, q)}\right] = n. \end{align}

Thus, the number of dominant singular values of the Hankel matrix indicates the unknown order of the reduced model to be identified.
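This rank-based order estimate can be checked on a small numeric example. The system matrices below are made-up illustrations; with noise-free Markov parameters, the singular values of \(\boldsymbol{H}_0^{(p,q)}\) beyond the \(n\)-th drop to machine precision:

```python
import numpy as np

# Markov parameters of a made-up 3rd-order system: h_{k+1} = C A^k B
rng = np.random.default_rng(1)
n, m, r = 3, 2, 2
A = np.diag([0.9, 0.5, -0.3])
B = rng.standard_normal((n, r))
C = rng.standard_normal((m, n))
h = [C @ np.linalg.matrix_power(A, k) @ B for k in range(20)]

# Block Hankel matrix H_0 with p = q = 5 block rows/columns
p = q = 5
H0 = np.block([[h[i + j] for j in range(q)] for i in range(p)])

# Count dominant singular values relative to the largest one
s = np.linalg.svd(H0, compute_uv=False)
order = int(np.sum(s > 1e-10 * s[0]))
```

With noisy data the gap between "true" and "noise" singular values is less clear-cut, which is why a singular value plot (or more advanced indicators) is inspected in practice.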

5-2) Hankel Norm Approximation

As described in the previous section, a singular value decomposition of the Hankel matrix provides insight into the order of the system. Although more advanced methods for distinguishing true modes from noise modes exist, a simple singular value plot often allows the engineer to determine the system order. This motivates the following approximation:

\begin{align} \boldsymbol{H}_0^{(p, q)} &= \boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^\intercal = \begin{bmatrix} \boldsymbol{U}^{(n)} & \boldsymbol{U}^{(0)} \end{bmatrix}\begin{bmatrix} \boldsymbol{\Sigma}^{(n)} & \boldsymbol{0}\\ \boldsymbol{0} & \boldsymbol{\Sigma}^{(0)} \end{bmatrix}\begin{bmatrix} {\boldsymbol{V}^{(n)}}^\intercal\\ {\boldsymbol{V}^{(0)}}^\intercal \end{bmatrix} \\ &= \boldsymbol{U}^{(n)}\boldsymbol{\Sigma}^{(n)}{\boldsymbol{V}^{(n)}}^\intercal+\underbrace{\boldsymbol{U}^{(0)}\boldsymbol{\Sigma}^{(0)}{\boldsymbol{V}^{(0)}}^\intercal}_{\simeq\boldsymbol{0}} \\ &\simeq \boldsymbol{U}^{(n)}\boldsymbol{\Sigma}^{(n)}{\boldsymbol{V}^{(n)}}^\intercal \end{align}

where \(\boldsymbol{U}^{(n)}\) and \(\boldsymbol{V}^{(n)}\) have orthonormal columns:

\begin{align} {\boldsymbol{U}^{(n)}}^\intercal\boldsymbol{U}^{(n)} = {\boldsymbol{V}^{(n)}}^\intercal\boldsymbol{V}^{(n)} = \boldsymbol{I}^{(n)}. \end{align}

Since \(\boldsymbol{H}_0^{(p, q)}\) factors into the product of the observability and controllability matrices, a balanced factorization leads to

\begin{align} \label{Eq: factorization OpRq} \boldsymbol{H}_0^{(p, q)} = \boldsymbol{U}^{(n)}\boldsymbol{\Sigma}^{(n)}{\boldsymbol{V}^{(n)}}^\intercal = \boldsymbol{O}^{(p)}\boldsymbol{R}^{(q)} \Rightarrow \left\lbrace\begin{array}{ll} \boldsymbol{O}^{(p)} &\hspace{-0.7em}= \boldsymbol{U}^{(n)}{\boldsymbol{\Sigma}^{(n)}}^{1/2}\\ \boldsymbol{R}^{(q)} &\hspace{-0.7em}= {\boldsymbol{\Sigma}^{(n)}}^{1/2}{\boldsymbol{V}^{(n)}}^\intercal \end{array}\right.. \end{align}

This choice makes both \(\boldsymbol{O}^{(p)}\) and \(\boldsymbol{R}^{(q)}\) balanced. Notice that \(\boldsymbol{R}^{(q)}{\boldsymbol{R}^{(q)}}^\intercal = {\boldsymbol{O}^{(p)}}^\intercal\boldsymbol{O}^{(p)} = \boldsymbol{\Sigma}^{(n)}\). The fact that these controllability and observability products are equal and diagonal implies that the realized system is as controllable as it is observable. Such a realization is called internally balanced: the signal transfer from the input to the state and from the state to the output is balanced.
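The balanced factorization in Eq. (\ref{Eq: factorization OpRq}) is a few lines of NumPy. The example system below is made up for illustration; the key point is that the truncated SVD factors reproduce \(\boldsymbol{H}_0^{(p,q)}\) and satisfy \(\boldsymbol{R}^{(q)}{\boldsymbol{R}^{(q)}}^\intercal = {\boldsymbol{O}^{(p)}}^\intercal\boldsymbol{O}^{(p)} = \boldsymbol{\Sigma}^{(n)}\):

```python
import numpy as np

# Noise-free Markov parameters of a made-up 3rd-order system
rng = np.random.default_rng(2)
n, m, r = 3, 2, 2
A = np.diag([0.9, 0.5, -0.3])
B = rng.standard_normal((n, r))
C = rng.standard_normal((m, n))
h = [C @ np.linalg.matrix_power(A, k) @ B for k in range(20)]

p = q = 5
H0 = np.block([[h[i + j] for j in range(q)] for i in range(p)])

# Truncate the SVD to the n dominant singular values
U, s, Vt = np.linalg.svd(H0)
Un, Sn, Vtn = U[:, :n], np.diag(s[:n]), Vt[:n, :]

# Balanced factors: O = U_n Sigma^{1/2},  R = Sigma^{1/2} V_n^T
O = Un @ np.sqrt(Sn)      # observability factor, (p*m) x n
R = np.sqrt(Sn) @ Vtn     # controllability factor, n x (q*r)
```

Splitting \(\boldsymbol{\Sigma}^{(n)}\) evenly between the two factors is precisely what makes the realization internally balanced; an uneven split (e.g. \(\boldsymbol{O}^{(p)} = \boldsymbol{U}^{(n)}\boldsymbol{\Sigma}^{(n)}\)) would still factor \(\boldsymbol{H}_0^{(p,q)}\) but lose that property.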

5-3) Minimum Realization

With \(k=1\) in Eq. (\ref{Eq: Hankel matrix k}), one obtains that

\begin{align} \boldsymbol{H}_1^{(p, q)} = \boldsymbol{O}^{(p)}A\boldsymbol{R}^{(q)} = \boldsymbol{U}^{(n)}{\boldsymbol{\Sigma}^{(n)}}^{1/2}A{\boldsymbol{\Sigma}^{(n)}}^{1/2}{\boldsymbol{V}^{(n)}}^\intercal, \end{align}

and a solution for the state matrix \(A\) becomes

\begin{align} \hat{A} = {\boldsymbol{O}^{(p)}}^\dagger\boldsymbol{H}_1^{(p, q)}{\boldsymbol{R}^{(q)}}^\dagger = {\boldsymbol{\Sigma}^{(n)}}^{-1/2}{\boldsymbol{U}^{(n)}}^\intercal\boldsymbol{H}_1^{(p, q)}\boldsymbol{V}^{(n)}{\boldsymbol{\Sigma}^{(n)}}^{-1/2}. \end{align}

Moreover, the first \(r\) columns of \(\boldsymbol{R}^{(q)}\) give the input matrix \(B\), whereas the first \(m\) rows of \(\boldsymbol{O}^{(p)}\) give the output matrix \(C\). Defining \(O_i\) as the \(i\times i\) null matrix, \(I_i\) as the \(i\times i\) identity matrix, and

\begin{align} {\boldsymbol{E}^{(m)}}^\intercal &= \begin{bmatrix} I_m & O_m & \cdots & O_m \end{bmatrix},\\ {\boldsymbol{E}^{(r)}}^\intercal &= \begin{bmatrix} I_r & O_r & \cdots & O_r \end{bmatrix}, \end{align}

a minimum realization is given by

\begin{align} \hat{A} &= {\boldsymbol{O}^{(p)}}^\dagger\boldsymbol{H}_1^{(p, q)}{\boldsymbol{R}^{(q)}}^\dagger = {\boldsymbol{\Sigma}^{(n)}}^{-1/2}{\boldsymbol{U}^{(n)}}^\intercal\boldsymbol{H}_1^{(p, q)}\boldsymbol{V}^{(n)}{\boldsymbol{\Sigma}^{(n)}}^{-1/2},\\ \hat{B} &= \boldsymbol{R}^{(q)}\boldsymbol{E}^{(r)} = {\boldsymbol{\Sigma}^{(n)}}^{1/2}{\boldsymbol{V}^{(n)}}^\intercal\boldsymbol{E}^{(r)},\\ \hat{C} &= {\boldsymbol{E}^{(m)}}^\intercal\boldsymbol{O}^{(p)} = {\boldsymbol{E}^{(m)}}^\intercal\boldsymbol{U}^{(n)}{\boldsymbol{\Sigma}^{(n)}}^{1/2},\\ \hat{D} &= h_0. \end{align}
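The complete realization above can be sketched end to end. The example uses made-up, noise-free Markov parameters from a known 3rd-order system, so the identified model reproduces the data exactly; with measured data the match is only approximate:

```python
import numpy as np

# Noise-free Markov parameters: h_0 = D, h_k = C A^{k-1} B for k >= 1
rng = np.random.default_rng(3)
n, m, r = 3, 2, 2
A = np.diag([0.9, 0.5, -0.3])
B = rng.standard_normal((n, r))
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, r))
h = [D] + [C @ np.linalg.matrix_power(A, k - 1) @ B for k in range(1, 25)]

# Shifted Hankel matrices H_0 and H_1
p = q = 5
H0 = np.block([[h[1 + i + j] for j in range(q)] for i in range(p)])
H1 = np.block([[h[2 + i + j] for j in range(q)] for i in range(p)])

# SVD of H_0 truncated to order n
U, s, Vt = np.linalg.svd(H0)
Un, Vn = U[:, :n], Vt[:n, :].T
S_half = np.diag(np.sqrt(s[:n]))
S_half_inv = np.diag(1.0 / np.sqrt(s[:n]))

# Minimum realization (ERA)
A_hat = S_half_inv @ Un.T @ H1 @ Vn @ S_half_inv
B_hat = (S_half @ Vn.T)[:, :r]   # first r columns of R^{(q)}
C_hat = (Un @ S_half)[:m, :]     # first m rows of O^{(p)}
D_hat = h[0]
```

The slicing with `[:, :r]` and `[:m, :]` plays the role of the selector matrices \(\boldsymbol{E}^{(r)}\) and \(\boldsymbol{E}^{(m)}\). Note that \(\hat{A}\) is only similar to the original \(A\); what is preserved is the input-output map, i.e. the Markov parameters \(\hat{C}\hat{A}^{k-1}\hat{B} = h_k\).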

The realized discrete-time model represented by the matrices \(\hat{A}\), \(\hat{B}\), \(\hat{C}\) and \(\hat{D}\) can be transformed to a continuous-time model. The system frequencies and damping may then be computed from the eigenvalues of the estimated continuous-time state matrix. The eigenvectors allow the realization to be transformed to modal space, yielding the complex (or damped) mode shapes and the initial modal amplitudes (or modal participation factors).
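The discrete-to-continuous conversion of a single eigenvalue can be sketched as follows: a discrete eigenvalue \(z\) obtained at sample time \(\Delta t\) maps to a continuous eigenvalue \(s = \ln(z)/\Delta t\), from which the natural frequency and damping ratio follow. The pole value below is a made-up example:

```python
import numpy as np

dt = 0.1                              # sample time [s]
s_true = -0.05 + 2.0j                 # made-up continuous-time pole
z = np.exp(s_true * dt)               # corresponding discrete eigenvalue of A_hat

# Map back to continuous time and extract modal parameters
s = np.log(z) / dt                    # continuous-time eigenvalue
omega = np.abs(s)                     # natural frequency [rad/s]
zeta = -s.real / np.abs(s)            # damping ratio
```

The principal branch of the logarithm recovers \(s\) uniquely only when the imaginary part of \(s\,\Delta t\) stays within \((-\pi, \pi]\), i.e. when the sampling satisfies the Nyquist condition for that mode.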

5-4) Summary
In summary, the ERA proceeds as follows: (i) form the block Hankel matrices \(\boldsymbol{H}_0^{(p,q)}\) and \(\boldsymbol{H}_1^{(p,q)}\) from the Markov parameters; (ii) compute the singular value decomposition of \(\boldsymbol{H}_0^{(p,q)}\) and determine the model order \(n\) from the dominant singular values; (iii) form the balanced factors \(\boldsymbol{O}^{(p)}\) and \(\boldsymbol{R}^{(q)}\) and compute the minimum realization \(\hat{A}\), \(\hat{B}\), \(\hat{C}\), \(\hat{D}\); (iv) transform to continuous time and extract the frequencies, damping ratios and mode shapes from the eigensystem of \(\hat{A}\).