Model Reduction of Structured Dynamical Systems by Projecting onto the Dominant Eigenspace of the Gramians

This paper studies the structure-preserving (second-order to second-order) model order reduction of second-order systems by projection onto the dominant eigenspace of the Gramians of the system. The projectors which create the reduced-order model are generated cheaply from the low-rank Gramian factors. The low-rank Gramian factors are computed efficiently by solving the corresponding Lyapunov equations of the system using the rational Krylov subspace method. The efficiency of the theoretical results is then illustrated by numerical experiments.


Introduction
In this paper we consider model order reduction of second-order linear time-invariant (LTI) continuous-time systems of the form

  M ẍ(t) + D ẋ(t) + K x(t) = B u(t),   y(t) = C x(t),   (1)

where M, D, K ∈ ℝ^(n×n) are large and sparse matrices, B ∈ ℝ^(n×p) is the input matrix and C ∈ ℝ^(m×n) is the output matrix. Correspondingly, x(t) ∈ ℝ^n, u(t) ∈ ℝ^p and y(t) ∈ ℝ^m (p, m ≪ n) are referred to as the state, input and output vectors, respectively. Systems of the form (1) arise in various disciplines of engineering, e.g., multibody dynamics, mechanics, electromechanics and electrical circuits [1,2,3,4]. In mechanics or multibody dynamics the matrices M, D, K are known as the mass, damping and stiffness matrices, while in the simulation of RLCK circuits they are referred to as the conductance, capacitance and susceptance matrices, respectively. In real-life applications, the number of differential equations and variables in (1) often exceeds tens of millions. For instance, mathematical models nowadays are often generated by the finite element method (FEM), and many systems are composed of large distinct devices, which makes the models larger and increases their complexity. Due to this computational complexity, simulation, analysis and controller design of such large-scale systems are often infeasible within a reasonable amount of time and with limited computational resources. Therefore, it is necessary to approximate the system (1) by a lower-dimensional system that preserves the essential dynamics of the original system and is efficient for practical implementation. In the control literature, this procedure of approximation is known as model order reduction (MOR) [3,4].

The most common MOR techniques for second-order systems are based on the linearization of (1) by a phase-space transformation, e.g., introducing a new state variable z(t) = [x(t)ᵀ, ẋ(t)ᵀ]ᵀ, which results in the equivalent first-order state-space system

  E ż(t) = A z(t) + B̃ u(t),   y(t) = C̃ z(t),   (2)

where

  E = [ F  0 ; 0  M ],  A = [ 0  F ; −K  −D ],  B̃ = [ 0 ; B ],  C̃ = [ C  0 ].

Here E, A ∈ ℝ^(2n×2n), B̃ ∈ ℝ^(2n×p) and C̃ ∈ ℝ^(m×2n). Note that F could be an arbitrary non-singular matrix; in most cases the preferable choice is F = I_n (the n×n identity matrix). MOR techniques are then applied to (2) to obtain a reduced-order model (ROM) with Ê, Â ∈ ℝ^(k×k), B̂ ∈ ℝ^(k×p), Ĉ ∈ ℝ^(m×k) and reduced state ẑ(t) ∈ ℝ^k (k ≪ 2n). Unfortunately, this linearization approach comes with several drawbacks; for example, the ROM is no longer in second-order form, and reconstructing the second-order structure from it is formidable. On the other hand, for engineering applications it is necessary to preserve important properties of the original system such as structure, symmetry, stability, passivity and definiteness. This motivates second-order to second-order, i.e., structure-preserving, model order reduction (SPMOR) of the second-order system (1).

Over the years, a great deal of effort has been devoted to second-order to second-order SPMOR and many research articles have been published. The literature contains several remarkable SPMOR techniques for second-order systems, such as balanced truncation (BT) [5,6], the second-order Arnoldi method [7] and moment-matching approximation based on Krylov subspaces [8]. In general, however, most of these methods do not guarantee that the stability of the original system is preserved. The author in [9] shows that the structure-preserving ROM obtained by projecting the system onto the dominant eigenspace of the Gramians (PDEG) preserves the stability of the original system. In this paper we generalize the idea of the PDEG method by creating the projector cheaply from the low-rank factors of the Gramians of the system. The Gramian factors are computed by solving the continuous-time algebraic Lyapunov equations obtained from the second-order system. To solve such Lyapunov equations efficiently, we modify the rational Krylov subspace method (RKSM), which originated in [10] for solving the Lyapunov equations of standard systems. One of the expensive tasks in the RKSM is solving a large-scale linear system at each iteration. This paper shows that, by splitting such linear systems, their dimension can be reduced from 2n to n; moreover, this splitting idea dramatically reduces the overall computation time. The convergence of the RKSM depends heavily on the choice of good shift parameters. Unlike the technique in [10], this paper discusses two criteria for the selection of the shift parameters, namely heuristic and adaptive approaches. We also present a goal-oriented termination procedure for the RKSM algorithm, which is significant for the implementation of the PDEG method. To assess the efficiency of our theoretical results, numerical experiments are provided.
Throughout the paper we use the following notation, which is common in the matrix analysis literature. The sets of real and complex numbers are denoted by ℝ and ℂ, respectively; ℂ⁻ denotes the open left complex half-plane, and j is the imaginary unit. The transpose of a matrix A is denoted by Aᵀ and its conjugate transpose by A*. The n×n identity matrix is denoted by I_n. For vectors or matrices, ‖.‖₂, ‖.‖∞ and ‖.‖_F denote the 2-norm, infinity-norm and Frobenius norm, respectively.

Preliminaries
The purpose of this section is to establish notation and introduce basic concepts from the literature that will be used in the remainder of the paper.

Second-order system Gramians and Lyapunov equations
The Gramians for the second-order system (1) are defined in [11]; see also, e.g., [6]. From the system (2) we obtain the two continuous-time algebraic Lyapunov equations

  A P Eᵀ + E P Aᵀ = −B̃ B̃ᵀ,   (6a)
  Aᵀ Q E + Eᵀ Q A = −C̃ᵀ C̃,   (6b)

where P ∈ ℝ^(2n×2n) and Q ∈ ℝ^(2n×2n) are called the controllability and the observability Gramians of the system, respectively. Due to the structure of E, A, B̃ and C̃, the controllability Gramian P and the observability Gramian Q can be compatibly partitioned as

  P = [ P_p  P_pv ; P_pvᵀ  P_v ],   Q = [ Q_p  Q_pv ; Q_pvᵀ  Q_v ].

Based on their physical interpretation, the authors in [11] define P_p, P_v as the second-order controllability position and velocity Gramians, and Q_p, Q_v as the second-order observability position and velocity Gramians, respectively. Note that the second-order Gramians are the main ingredient of the PDEG-based SPMOR.
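For a small instance, the first-order realization and the Gramian partition above can be checked directly with a dense solver. The following sketch uses hypothetical toy matrices (any symmetric positive definite M, K with Rayleigh-type damping D gives a stable system, an assumption made here purely for illustration); the generalized equation (6a) is reduced to standard form by multiplying with E⁻¹ on both sides, so that SciPy's dense Lyapunov solver applies.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical toy data standing in for the mass, damping and stiffness
# matrices of (1).
n = 4
M = np.eye(n)
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # spring chain
D = 0.1 * M + 0.1 * K                                    # Rayleigh damping
B = np.ones((n, 1))

# First-order realization (2) with F = I_n.
E = np.block([[np.eye(n), np.zeros((n, n))],
              [np.zeros((n, n)), M]])
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-K, -D]])
Bt = np.vstack([np.zeros((n, 1)), B])

# Generalized Lyapunov equation (6a), A P E^T + E P A^T = -Bt Bt^T,
# reduced to standard form via E^{-1}.
Brhs = np.linalg.solve(E, Bt)
P = solve_continuous_lyapunov(np.linalg.solve(E, A), -Brhs @ Brhs.T)

# Compatible partition: position and velocity controllability Gramians.
P_p, P_v = P[:n, :n], P[n:, n:]
```

The same reduction applies to (6b) after transposing the roles of B̃ and C̃.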

Rational Krylov subspace method for solving the Lyapunov equations
As mentioned earlier, the PDEG method discussed in this paper computes the projectors from the low-rank factors of the Gramians. These low-rank Gramian factors can be computed by solving the Lyapunov equations (6) using the rational Krylov subspace method (RKSM) introduced in [10]. In the following we briefly introduce the method for solving the controllability Lyapunov equation (6a); the same procedure can also be applied to the observability Lyapunov equation (6b).
Applying the Arnoldi process with a modified Gram–Schmidt (re)orthogonalization procedure, the RKSM constructs an orthonormal basis V_m of the m-dimensional rational Krylov subspace

  K_m = span{ (A − s₁E)⁻¹B̃, (A − s₂E)⁻¹B̃, …, (A − s_mE)⁻¹B̃ },

where s_i ∈ ℂ, i = 1, …, m, are the shift parameters. We project the second-order Lyapunov equation (6a) onto this subspace by V_m, and we seek an approximate solution of the form P ≈ P̂ = V_m Y V_m* by imposing the Galerkin condition, i.e., requiring that the residual

  ℛ = A P̂ Eᵀ + E P̂ Aᵀ + B̃ B̃ᵀ

be orthogonal to V_m. The matrix Y can be uniquely determined by formally solving the low-dimensional Lyapunov equation

  A_m Y E_mᵀ + E_m Y A_mᵀ = −B_m B_mᵀ,

where A_m = V_m* A V_m, E_m = V_m* E V_m and B_m = V_m* B̃. This low-dimensional Lyapunov equation can be solved by direct techniques such as the Bartels–Stewart [12] or the Hammarling method [13]. Now, since Y is symmetric and positive definite, we can write Y = Ỹ Ỹ* and thus obtain P ≈ (V_m Ỹ)(V_m Ỹ)* = Z Z* with Z = V_m Ỹ. If we take the eigendecomposition of Y,

  Y = [W₁ W₂] [ Λ₁ 0 ; 0 Λ₂ ] [W₁ W₂]*,

where Λ₁ contains the largest eigenvalues and Λ₂ contains the negligible ones, then by truncating Λ₂ we can ensure that the computed Z contains the smallest possible number of columns. The summary of the RKSM is presented in Algorithm 1.
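A minimal dense sketch of the Galerkin loop just described, written under the assumption that (6a) is the generalized Lyapunov equation above; the shifted solves, the modified Gram–Schmidt orthogonalization, the projected small Lyapunov equation and the eigenvalue truncation of Y mirror the steps in the text, while shift selection and stopping are deliberately omitted:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def rksm_lyap(E, A, B, shifts):
    """Galerkin RKSM sketch for A P E^T + E P A^T = -B B^T; returns Z with P ~ Z Z^T."""
    V = np.zeros((A.shape[0], 0))
    for s in shifts:
        W = np.linalg.solve(A - s * E, B)      # one shifted solve per iteration
        for _ in range(2):                     # modified Gram-Schmidt, repeated
            W = W - V @ (V.T @ W)
        Q, _ = np.linalg.qr(W)
        V = np.hstack([V, Q])                  # extend the orthonormal basis V_m
    Am, Em, Bm = V.T @ A @ V, V.T @ E @ V, V.T @ B
    Fm = np.linalg.solve(Em, Bm)               # reduce projected equation to standard form
    Y = solve_continuous_lyapunov(np.linalg.solve(Em, Am), -Fm @ Fm.T)
    lam, U = np.linalg.eigh(Y)                 # eigendecomposition of Y ...
    keep = lam > 1e-12 * abs(lam).max()        # ... truncating negligible eigenvalues
    return V @ (U[:, keep] * np.sqrt(lam[keep]))

# Toy second-order system (assumed data): mass-spring chain with Rayleigh damping.
n = 3
M = np.eye(n)
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
D = 0.1 * M + 0.1 * K
E = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), M]])
A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -D]])
Bt = np.vstack([np.zeros((n, 1)), np.array([[1.0], [2.0], [3.0]])])
Z = rksm_lyap(E, A, Bt, [0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
```

With as many distinct shifts as the full first-order dimension, the basis spans the whole space, so Z Zᵀ reproduces the exact Gramian, which makes the sketch easy to check.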

The PDEG method for MOR of second-order system
The motivation for and the procedure of the PDEG method can be found in [9]. Here we show how to construct the projectors from the low-rank factors of the system Gramians. Let us consider R ∈ ℝ^(2n×k) and L ∈ ℝ^(2n×k) (where k ≪ 2n) as the low-rank factors of P and Q, i.e., R Rᵀ ≈ P and L Lᵀ ≈ Q. The solutions P and Q of the Lyapunov equations (6) can always be approximated by their low-rank factors, since the right-hand-side matrices of the Lyapunov equations are rank deficient. This is crucial, since computing the low-rank Gramian factors is not only computationally cheap but also saves computer memory. Note that the Gramian factors R and L can be computed by the RKSM as discussed above. Splitting R and L compatibly with the partitions of P and Q as R = [R_pᵀ, R_vᵀ]ᵀ and L = [L_pᵀ, L_vᵀ]ᵀ, we have P_p ≈ R_p R_pᵀ. Computing the (thin) singular value decomposition

  R_p = U Σ Wᵀ,  Σ = diag(σ₁, …, σ_k),  σ_i ≥ σ_(i+1) for i = 1, …, k−1,

we can write

  P_p ≈ R_p R_pᵀ = U Σ² Uᵀ,

which is the eigendecomposition of P_p. Here Σ² is a diagonal matrix containing the eigenvalues of P_p in decreasing order, and U is an orthogonal matrix whose columns are the corresponding eigenvectors. Now we define Σ₁² = diag(σ₁², …, σ_r²) as the r largest eigenvalues of P_p and construct

  T = U₁ = [u₁, …, u_r],

where u_i, i = 1, …, r, are the eigenvectors corresponding to the eigenvalues σ_i². Considering this projector T, we get the second-order reduced matrices as follows:

  M̂ = Tᵀ M T,  D̂ = Tᵀ D T,  K̂ = Tᵀ K T,  B̂ = Tᵀ B,  Ĉ = C T.   (15)

The above approach formulates an r-dimensional ROM by projecting the system onto the dominant eigenspace of the controllability position Gramian P_p. In short, this procedure is called CP-SPMOR (controllability position SPMOR). The same procedure can also be applied to the controllability velocity Gramian P_v and to the observability position and velocity Gramians Q_p and Q_v, respectively; the corresponding MOR approaches are called CV-SPMOR, OP-SPMOR and OV-SPMOR. The summary of CP-SPMOR is presented in Algorithm 2.
Note that in each case the corresponding projector T can be constructed from the respective low-rank Gramian factor. The transformation T is a contra-gradient transformation, since using this transformation it can be shown that the Gramians of the ROM are diagonal [9]; thus the transformation T in (15) is a balancing transformation. It can also be shown that the reduced-order matrices M̂, D̂ and K̂ are all symmetric and that the ROM preserves definiteness as well. Since the ROM is symmetric and preserves definiteness, it is also dissipative. Then, according to Bendixson's theorem [14], the stability of the ROM is guaranteed.
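The CP-SPMOR construction reduces, for the position part of the low-rank factor, to one thin SVD followed by a few projections. A minimal sketch, assuming the standard symbols M, D, K, B, C of (1) and a precomputed factor R with R Rᵀ ≈ P (here replaced by a random matrix purely for illustration):

```python
import numpy as np

def cp_spmor(M, D, K, B, C, R, r):
    """Reduce (M, D, K, B, C) by projecting onto the dominant eigenspace of P_p."""
    n = M.shape[0]
    R_p = R[:n, :]                       # position block of the low-rank factor
    U, sigma, _ = np.linalg.svd(R_p, full_matrices=False)
    T = U[:, :r]                         # eigenvectors of P_p ~ R_p R_p^T
                                         # for the r largest eigenvalues sigma^2
    return T.T @ M @ T, T.T @ D @ T, T.T @ K @ T, T.T @ B, C @ T

# Toy usage with assumed data; a random factor stands in for the RKSM output.
rng = np.random.default_rng(0)
n, r = 6, 2
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M, D = np.eye(n), 0.1 * np.eye(n) + 0.1 * K
B, C = np.ones((n, 1)), np.ones((1, n))
R = rng.standard_normal((2 * n, 4))
Mh, Dh, Kh, Bh, Ch = cp_spmor(M, D, K, B, C, R, r)
```

Since M, D and K are symmetric here, the reduced matrices inherit the symmetry, in line with the discussion above.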

Updated SO-RKSM for computing the low-rank Gramian factors
The key ingredient of the PDEG method discussed in the above section is the low-rank Gramian factors of the system. These Gramian factors can be computed using the RKSM, which is summarized in Algorithm 1. In this section, we show how to update this algorithm to increase its efficiency and capability.

Handling the second order structure
In Algorithm 1, the main computational effort is to solve the linear systems in steps 1 and 3. Due to the block structure of E and A, these linear systems can be split into lower-dimensional subsystems, as shown in the following. In step 1, v = (A − s₁E)⁻¹B̃ can be computed by solving the linear system (A − s₁E)v = B̃. Partitioning v = [v⁽¹⁾ᵀ, v⁽²⁾ᵀ]ᵀ and exploiting the block structure of E, A and B̃, this is equivalent to solving

  (s₁²M + s₁D + K) v⁽¹⁾ = −B,   v⁽²⁾ = s₁ v⁽¹⁾.

Analogously, the linear system (A − s_jE)v = r of dimension 2n × 2n in step 3, with r = [r⁽¹⁾ᵀ, r⁽²⁾ᵀ]ᵀ, is equivalent to

  (s_j²M + s_jD + K) v⁽¹⁾ = −r⁽²⁾ − (D + s_jM) F⁻¹ r⁽¹⁾,   v⁽²⁾ = s_j v⁽¹⁾ + F⁻¹ r⁽¹⁾.   (18)

Certainly, solving a linear system of dimension n × n is more efficient than solving one of dimension 2n × 2n. The smaller linear systems can now be solved at low cost using a sparse direct solver [15]. The resulting reformulated technique is shown in Algorithm 3.
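The equivalence of the split n × n solve and the full 2n × 2n solve can be verified numerically; the following sketch uses hypothetical toy matrices and a generic non-singular F (here simply F = I_n):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = np.eye(n)
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
D = 0.1 * M + 0.1 * K
F = np.eye(n)                            # the free matrix F of the linearization
E = np.block([[F, np.zeros((n, n))], [np.zeros((n, n)), M]])
A = np.block([[np.zeros((n, n)), F], [-K, -D]])

s = 2.0
r1, r2 = rng.standard_normal((n, 1)), rng.standard_normal((n, 1))
r = np.vstack([r1, r2])

# Full 2n x 2n solve of (A - s E) v = r.
v_full = np.linalg.solve(A - s * E, r)

# Split n x n solve (18): one quadratic matrix polynomial in s.
Finv_r1 = np.linalg.solve(F, r1)
v1 = np.linalg.solve(s**2 * M + s * D + K, -(r2 + (D + s * M) @ Finv_r1))
v2 = s * v1 + Finv_r1
print(np.allclose(v_full, np.vstack([v1, v2])))   # → True
```

Only one sparse factorization of s²M + sD + K per shift is needed, which is where the reported speed-up comes from.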

Selection of the shifts
The convergence speed of the RKSM algorithm is strongly influenced by the selection of the shift parameters. The author in [10] presents a strategy to compute the shift parameters. However, this approach is not efficient, since it requires solving an eigenvalue problem at each iteration to compute the next shift. Therefore, we present two efficient shift selection strategies, namely the heuristic [16] and the adaptive [17] approach. Note that both strategies were developed for, and are frequently used in, the low-rank alternating direction implicit iteration (LR-ADI) [3]. Here we investigate them for the RKSM.

Heuristic shifts
The heuristic shifts, also known as Penzl shifts in honour of their author, were first proposed in [16]; there, k₊ ≪ 2n Ritz values of E⁻¹A and k₋ ≪ 2n reciprocal Ritz values of A⁻¹E are obtained using the Arnoldi process. Although computing the Ritz values is an expensive task, this approach is applicable to large-scale systems. The complete algorithm of the heuristic procedure is presented in [16].
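A plain Arnoldi process suffices to produce the required Ritz values. A sketch with the operator passed as a callable, so that E⁻¹A and A⁻¹E never need to be formed explicitly (the demo matrix and starting vector are assumptions for illustration):

```python
import numpy as np

def arnoldi_ritz(Aop, b, k):
    """k-step Arnoldi process; returns the Ritz values of the linear operator Aop."""
    n = b.shape[0]
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = Aop(V[:, j])
        for i in range(j + 1):                  # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                 # lucky breakdown: invariant subspace
            return np.linalg.eigvals(H[:j + 1, :j + 1])
        V[:, j + 1] = w / H[j + 1, j]
    return np.linalg.eigvals(H[:k, :k])

# For the heuristic shifts one would call, with E, A, b as in Section 2,
#   arnoldi_ritz(lambda v: np.linalg.solve(E, A @ v), b, k_plus)         # Ritz values of E^-1 A
#   1.0 / arnoldi_ritz(lambda v: np.linalg.solve(A, E @ v), b, k_minus)  # reciprocal Ritz values
# Demo on a diagonal matrix, whose Ritz values reproduce its eigenvalues:
ritz = arnoldi_ritz(lambda v: np.diag([1.0, 2.0, 3.0, 4.0]) @ v, np.ones(4), 4)
```

The min-max selection of the actual shifts from these candidate values follows [16] and is omitted here.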

Adaptive shifts
Another elegant shift selection strategy is the adaptive approach, which was first proposed in [17]; an updated version can be found in [18]. This approach is superior to the heuristic approach since it does not require computing Ritz values; instead, the procedure generates and updates the shifts independently within the algorithm itself. We first compute a set of initial shifts by projecting the matrix pencil A − λE onto the range of a random matrix. Once the initial set of shifts has been used, we project the pencil onto the range of the basis V_m generated with the initial shifts and compute the eigenvalues of the projected pencil. Note that, since our system is stable, by Bendixson's theorem [14] the eigenvalues of the projected pencil lie in ℂ⁻. We use these eigenvalues to compute a desired number of optimal shifts by solving the min-max problem of [16] and use them as the next set of shifts. This procedure is continued until the algorithm has converged to the given tolerance.
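The initial-shift step can be sketched as follows; the toy matrices, the size of the random projection basis and the simple mirroring of the projected eigenvalues into the right half-plane are all assumptions made for illustration, and the min-max selection of [16] is omitted:

```python
import numpy as np
from scipy.linalg import eig

# Hypothetical toy second-order system (assumed data).
n = 4
M = np.eye(n)
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
D = 0.1 * M + 0.1 * K
E = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), M]])
A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -D]])

# Initial shifts: eigenvalues of the pencil (A, E) projected onto a random
# orthonormal subspace, mirrored into the right half-plane.
rng = np.random.default_rng(0)
X, _ = np.linalg.qr(rng.standard_normal((2 * n, 4)))
mu = eig(X.T @ A @ X, X.T @ E @ X, right=False)    # projected pencil eigenvalues
shifts = np.sort(np.abs(mu.real))                  # simple mirroring heuristic

# Next round: project the pencil onto the RKSM basis V_m built with `shifts`
# and repeat, as described in the text.
```

In the actual algorithm the subsequent projections use the computed RKSM basis rather than a random matrix, so the shifts adapt to the problem as the iteration proceeds.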

Stopping criteria
We now concentrate on the stopping criteria of Algorithm 3. We propose two stopping criteria, both of which are computationally efficient. The first investigates the Lyapunov residual, and the second monitors the Ritz values of the system.

Residual computations
The most common stopping criterion for algorithms solving this kind of matrix equation is to terminate when the normalized residual norm is sufficiently small. In the case of Algorithm 3, computing the norm of the residual at each step makes the algorithm expensive, since the Lyapunov residual itself is a large and dense matrix. The following proposition computes the norm of the Lyapunov residual in an elegant and cheap way.

Proposition 4.3.1: Let the columns of V_m ∈ ℝ^(2n×m) contain the basis of K_m, and let P̂ = V_m Y V_mᵀ be the approximate solution of the Lyapunov equation (6a) in K_m. Then the Frobenius norm of the residual ℛ as defined in (20) satisfies ‖ℛ‖_F = ‖R̂‖_F, where R̂ is the upper triangular matrix resulting from the QR factorization of a small matrix assembled from the projected quantities.

Proof: Let V_m be the orthogonal matrix whose columns form the basis of the rational Krylov subspace, let H̄_(m+1,m) = [H_mᵀ, h_(m+1,m) e_m]ᵀ be the coefficient matrix produced by the Arnoldi process, and let D_m = diag(s₂, s₃, …, s_(m+1)). With this notation, the residual expression of [10] applies. Starting from the Arnoldi relation (A − s_(m+1)E)⁻¹V_m = V_(m+1) H̄_(m+1,m) and applying the derivation discussed in [19], the claimed expression follows. ∎

Once ‖ℛ‖_F has been computed, we check, e.g., whether the normalized residual norm satisfies ‖ℛ‖_F / γ < τ, where τ is a prescribed tolerance, 0 < τ ≪ 1. A typical choice of γ is ‖B̃B̃ᵀ‖₂; however, if ‖B̃‖ is very small, another reasonable choice is ‖B̃B̃ᵀ‖₂ + 2‖A‖₂‖E‖₂‖P̂‖₂.
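For small problems the criterion can be validated against a dense evaluation of the residual. The following sketch (toy data assumed) confirms that the exact dense solution drives the normalized residual to machine precision, which is the quantity the cheap QR-based formula tracks without ever forming ℛ:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def normalized_residual(E, A, Bt, Z):
    """Dense evaluation of ||R||_F / ||Bt Bt^T||_2 for P_hat = Z Z^T.

    Only feasible for small problems; Proposition 4.3.1 avoids forming R.
    """
    P_hat = Z @ Z.T
    R = A @ P_hat @ E.T + E @ P_hat @ A.T + Bt @ Bt.T
    return np.linalg.norm(R, 'fro') / np.linalg.norm(Bt @ Bt.T, 2)

# Sanity check on a toy system: the exact solution gives residual ~ 0.
n = 3
M = np.eye(n)
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
D = 0.1 * M + 0.1 * K
E = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), M]])
A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -D]])
Bt = np.vstack([np.zeros((n, 1)), np.array([[1.0], [2.0], [3.0]])])
Brhs = np.linalg.solve(E, Bt)
P = solve_continuous_lyapunov(np.linalg.solve(E, A), -Brhs @ Brhs.T)
lam, W = np.linalg.eigh(P)
Z = W * np.sqrt(np.clip(lam, 0.0, None))        # dense "low-rank" factor of P
print(normalized_residual(E, A, Bt, Z) < 1e-8)  # → True
```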

Focusing on the Ritz values
Although Proposition 4.3.1 computes the Lyapunov residual norm in an efficient way, it is not suitable in all cases, because the reduced-order model constructed via the PDEG can still be accurate even when the Lyapunov residual is large. In PDEG, since the quantities of interest are the dominant eigenvalues of the Gramians, we can monitor the Ritz values of the Gramians with respect to their relative change compared to the previous iteration. Therefore, the algorithm can be stopped when

  ‖λ⁽ᵐ⁾ − λ⁽ᵐ⁻¹⁾‖ / ‖λ⁽ᵐ⁾‖ < ε,

where λ⁽ᵐ⁾ = [λ₁⁽ᵐ⁾, …, λ_r⁽ᵐ⁾] is the vector containing the leading r eigenvalues at step m of Algorithm 3 and ε is a constant (0 < ε ≪ 1).
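The criterion amounts to a few lines; a sketch, with zero-padding of the shorter vector as an implementation assumption for the case where the number of retained Ritz values grows between iterations:

```python
import numpy as np

def ritz_change(prev, curr):
    """Relative change of the leading Ritz values between consecutive iterations."""
    r = max(len(prev), len(curr))
    p, c = np.zeros(r), np.zeros(r)
    p[:len(prev)] = prev                 # zero-pad so the vectors are comparable
    c[:len(curr)] = curr
    return np.linalg.norm(c - p) / np.linalg.norm(c)

# Inside the RKSM loop one would stop as soon as
# ritz_change(lam_prev, lam_curr) < eps for a prescribed 0 < eps << 1.
```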

Numerical experiments
In this section we evaluate the PDEG-based SPMOR and the SO-RKSM for solving the second-order Lyapunov equations numerically. All experiments were carried out in MATLAB® R2015a on an Intel® Core i7-7700HQ @ 2.80 GHz CPU with 16.0 GB DDR4 RAM.
To assess the accuracy of the PDEG approach, we present the Bode plots of the transfer-function matrices of the original and reduced systems for the frequency interval ω ∈ [ω_min, ω_max], together with the corresponding absolute error plots. We also compute the maximum absolute and relative errors between the original and reduced systems, defined by

  ‖G − Ĝ‖_H∞   and   ‖G − Ĝ‖_H∞ / ‖G‖_H∞,

where the H∞-norm is defined by ‖G‖_H∞ = sup_(ω∈ℝ) σ_max(G(jω)), with σ_max the maximum singular value. We consider three test examples: the International Space Station model (ISSM), the building model (BM) and the triple chain oscillator model (TCOM); see [20,21] for detailed descriptions.
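On a discrete frequency grid, the quantities above reduce to the largest singular value of the transfer-function error at each sample. A sketch assuming the standard symbols of (1), with toy data for illustration:

```python
import numpy as np

def second_order_tf(M, D, K, B, C, w):
    """Transfer function G(jw) = C (-w^2 M + j w D + K)^{-1} B of system (1)."""
    return C @ np.linalg.solve(-w**2 * M + 1j * w * D + K, B)

def sampled_hinf_error(sys1, sys2, freqs):
    """max over the grid of sigma_max(G1(jw) - G2(jw)): a sampled H-inf error."""
    return max(np.linalg.svd(second_order_tf(*sys1, w) - second_order_tf(*sys2, w),
                             compute_uv=False)[0] for w in freqs)

# Toy usage with assumed data; identical systems give zero sampled error.
n = 3
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M, D = np.eye(n), 0.1 * np.eye(n) + 0.1 * K
B, C = np.ones((n, 1)), np.ones((1, n))
err = sampled_hinf_error((M, D, K, B, C), (M, D, K, B, C), np.logspace(-1, 2, 50))
```

The sampled maximum only lower-bounds the true H∞-norm, so in practice the grid should cover the frequency range of interest densely.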
Example 5.1. This is a structural model of a component of the International Space Station, the so-called part 1R of the Russian service module. The system has n = 135 states.
Example 5.2. This is a small building model with 8 floors, each with 3 degrees of freedom. Thus, the system has n = 24 states.
Example 5.3. The triple chain oscillator is a mechanical system with three coupled chains of masses interconnected with springs and dampers. The system matrices M, D and K are all symmetric, of dimension n = 1501.
We apply the CP-SPMOR and CV-SPMOR to all three test examples. Table 1 summarizes the specifications of the original models, the dimensions of the reduced models, and the maximum absolute and relative errors. Figure 1 depicts the Bode and absolute error plots of the original and reduced models. Clearly, there is no visual distinction between the Bode plots of the original and reduced models. In Figures 1e and 1f, although the Bode plots match very well, the absolute error plots show some inaccuracy. The main reason for this inaccuracy is that the low-rank controllability Gramian factor has not fully converged, since the maximum number of iterations was reached. The absolute error inherits this inaccuracy because it is evaluated using the approximate low-rank Gramian factor.
However, the reduced model is still satisfactory due to the good match of the Bode plots. Furthermore, the PDEG approach guarantees the preservation of stability in the reduced model; for the ISSM example, Figure 2a shows that the eigenvalues of the reduced model match those of the original model precisely.
Next we investigate the SO-RKSM for solving the Lyapunov equations. The convergence of the SO-RKSM based on both the residual and the relative change of the Ritz values is shown in Figure 2b. Certainly, the proposed residual-based criterion takes far fewer iteration steps and less time than the criterion based on the relative change of the Ritz values. Moreover, as described in Section 4.2, the choice of the shift parameters accelerates the convergence of the SO-RKSM; Figure 3 shows the iteration steps versus elapsed time of the SO-RKSM for both the heuristic and the adaptive shifts.

Conclusions
In this paper we have investigated structure-preserving model reduction based on the PDEG method. We have shown that the projectors which carry out the PDEG method can be created cheaply from the low-rank factors of the system Gramians. To compute the low-rank Gramian factors efficiently, we have reformulated the RKSM. Inside the RKSM algorithm, a large-scale linear system must be solved at each iteration; we have shown how to split this system into smaller subsystems to accelerate the computation. To improve the convergence of the RKSM, we have investigated two shift-parameter computation strategies. Besides, a goal-oriented stopping criterion for the RKSM has also been proposed. The efficiency and capability of the theoretical results have been demonstrated by numerical experiments using the data of several models.

Figure 1: Comparisons of original and reduced systems of different model examples in the frequency domain.

Figure 3: Comparisons between the heuristic and adaptive shift computation strategies.

Table 1: Model examples, reduced models, absolute and relative errors.