
7 Eigenvalue Analysis
While the meaning of each state variable is clarified in Part III, in this example it is relevant to note only how the most associated state variables are determined. Since the full participation factor matrix would occupy too much space, Table 7.2 shows a selection of eigenvalues, namely eigenvalues 19, 20 and 23-28. Generally, only a few state variables participate with pij > 0.1 in a given eigenvalue, which eases the choice of control actions.
Table 7.2 Selection of eigenvalue participation factors for the IEEE 14-bus system

State variable   pk,19    pk,20    pk,23    pk,24    pk,25    pk,26    pk,27    pk,28
δ Syn 1          0.0002   0.0002   0.0024   0.0024   0.0043   0.0043   0.0827   0.0827
ω Syn 1          0.0002   0.0002   0.0024   0.0024   0.0043   0.0043   0.0831   0.0831
e′q Syn 1        0.0003   0.0003   0.0022   0.0022   0.0021   0.0021   0.0480   0.0480
ψd Syn 1         0.0001   0.0001   0.0008   0.0008   0.0007   0.0007   0.0123   0.0123
ψq Syn 1         0.0001   0.0001   0.0008   0.0008   0.0011   0.0011   0.0130   0.0130
δ Syn 2          0.0040   0.0040   0.2158   0.2158   0.0550   0.0550   0.0516   0.0516
ω Syn 2          0.0041   0.0041   0.2168   0.2168   0.0553   0.0553   0.0518   0.0518
e′q Syn 2        0.0001   0.0001   0.0023   0.0023   0.0006   0.0006   0.0008   0.0008
ψd Syn 2         0.0002   0.0002   0.0052   0.0052   0.0012   0.0012   0.0012   0.0012
e′d Syn 2        0.0010   0.0010   0.0470   0.0470   0.0117   0.0117   0.0066   0.0066
ψq Syn 2         0.0019   0.0019   0.0848   0.0848   0.0193   0.0193   0.0098   0.0098
δ Syn 3          0.0025   0.0025   0.0045   0.0045   0.1480   0.1480   0.1654   0.1654
ω Syn 3          0.0025   0.0025   0.0045   0.0045   0.1487   0.1487   0.1660   0.1660
e′q Syn 3        0        0        0        0        0        0        0.0005   0.0005
ψd Syn 3         0        0        0.0001   0.0001   0        0        0.0007   0.0007
e′d Syn 3        0.0009   0.0009   0.0014   0.0014   0.0454   0.0454   0.0400   0.0400
ψq Syn 3         0.0016   0.0016   0.0025   0.0025   0.0747   0.0747   0.0591   0.0591
δ Syn 4          0.2618   0.2618   0.0207   0.0207   0.0443   0.0443   0.0241   0.0241
ω Syn 4          0.2637   0.2637   0.0208   0.0208   0.0446   0.0446   0.0242   0.0242
e′q Syn 4        0        0        0        0        0        0        0.0001   0.0001
ψd Syn 4         0.0001   0.0001   0.0001   0.0001   0.0001   0.0001   0.0003   0.0003
e′d Syn 4        0.0047   0.0047   0.0003   0.0003   0.0008   0.0008   0.0004   0.0004
ψq Syn 4         0.2334   0.2334   0.0151   0.0151   0.0305   0.0305   0.0113   0.0113
δ Syn 5          0.0759   0.0759   0.1289   0.1289   0.1148   0.1148   0.0373   0.0373
ω Syn 5          0.0764   0.0764   0.1296   0.1296   0.1156   0.1156   0.0375   0.0375
e′q Syn 5        0        0        0        0        0        0        0.0001   0.0001
ψd Syn 5         0        0        0.0001   0.0001   0        0        0.0002   0.0002
e′d Syn 5        0.0013   0.0013   0.0019   0.0019   0.0018   0.0018   0.0005   0.0005
ψq Syn 5         0.0627   0.0627   0.0860   0.0860   0.0724   0.0724   0.0158   0.0158
vm AVR 1         0        0        0        0        0        0        0.0004   0.0004
vf AVR 1         0.0003   0.0003   0.0024   0.0024   0.0022   0.0022   0.0452   0.0452
vr1 AVR 1        0.0001   0.0001   0.0006   0.0006   0.0005   0.0005   0.0088   0.0088
vr2 AVR 1        0        0        0        0        0        0        0.0007   0.0007
vm AVR 2         0        0        0        0        0        0        0        0
vf AVR 2         0        0        0        0        0        0        0.0001   0.0001
vr1 AVR 2        0        0        0        0        0        0        0        0
vr2 AVR 2        0        0        0        0        0        0        0        0
vm AVR 3         0        0        0        0        0        0        0        0
vf AVR 3         0        0        0        0        0        0        0.0001   0.0001
vr1 AVR 3        0        0        0        0        0        0        0        0
vr2 AVR 3        0        0        0        0        0        0        0        0
vm AVR 4         0        0        0        0        0        0        0        0
vf AVR 4         0        0        0        0        0        0        0.0001   0.0001
vr1 AVR 4        0        0        0        0        0        0        0        0
vr2 AVR 4        0        0        0        0        0        0        0        0
vm AVR 5         0        0        0        0        0        0        0        0
vf AVR 5         0        0        0        0        0        0        0.0001   0.0001
vr1 AVR 5        0        0        0        0        0        0        0        0
vr2 AVR 5        0        0        0        0        0        0        0        0

7.2.3 Analysis in the Z-Domain
The state matrix in (7.16) leads to the computation of the eigenvalues in the S-domain, i.e., the system is stable if ℜ{λh} < 0, h = 1, 2, . . . , nx. Computing the eigenvalues in the Z-domain can provide some numerical advantages, as discussed in the following subsection. The Z-domain can also ease the visualization of stiff systems since, in the Z-domain, if the system is stable, all the eigenvalues lie inside the unit circle [147]. To obtain the Z-domain eigenvalues, a bilinear transformation is performed:
AZ = (AS + χ Inx)(AS − χ Inx)⁻¹    (7.35)
where χ is a weighting factor that, based on heuristic considerations, can be set to χ = 8. Computing AZ is more expensive than computing AS, but using AZ can be useful for speeding up the determination of the maximum-amplitude eigenvalue (e.g., by means of a power method), especially in the case of unstable equilibrium points with only one eigenvalue outside the unit circle.
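As a sketch, the transformation (7.35) can be written in NumPy as follows; the function name and the 2×2 test matrix are illustrative and not part of the book's code:

```python
import numpy as np

def z_domain_matrix(A_S, chi=8.0):
    """Bilinear map (7.35): A_Z = (A_S + chi*I)(A_S - chi*I)^-1."""
    n = A_S.shape[0]
    I = np.eye(n)
    # Solve (A_S - chi*I)^T X = (A_S + chi*I)^T so that X^T = (A_S + chi*I)(A_S - chi*I)^-1,
    # avoiding an explicit matrix inversion.
    return np.linalg.solve((A_S - chi * I).T, (A_S + chi * I).T).T

# A stable S-domain matrix (eigenvalues -1 and -3) maps inside the unit circle:
A_S = np.array([[-1.0, 0.0], [0.0, -3.0]])
A_Z = z_domain_matrix(A_S)
moduli = np.abs(np.linalg.eigvals(A_Z))  # both moduli are below 1
```

An S-domain eigenvalue λ maps to (λ + χ)/(λ − χ), so negative real parts land strictly inside the unit circle.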
Example 7.4 Eigenvalues of the IEEE 14-Bus System in the Z-Domain
Figure 7.4 shows the eigenvalue analysis for the IEEE 14-bus test system in the Z-domain. The figure also shows two dotted curves indicating the circumference with unitary radius (the stability limit in the Z-domain) and the locus of eigenvalues with a damping ζ = 5%. The latter curve can be computed as follows.

Fig. 7.4 Eigenvalues of the IEEE 14-bus system in the Z-domain
From (7.33) and (7.35), it can be deduced that the transformed value αZ ± jβZ in the Z-domain of a given pair of complex eigenvalues α ± jβ is:

αZ ± jβZ = (−ζω0 ± jω0√(1 − ζ²) + χ) / (−ζω0 ± jω0√(1 − ζ²) − χ)    (7.36)
Thus, imposing a fixed damping ζc and parametrizing (7.36) with ω0, each value of ω0 yields a point in the Z-domain pertaining to the locus with ζ = ζc.
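As a sketch, the constant-damping locus can be traced by sweeping ω0 and applying the bilinear map point by point; the sweep range below is arbitrary:

```python
import numpy as np

def damping_locus(zeta_c, chi=8.0, omegas=np.linspace(0.1, 50.0, 200)):
    """Z-domain locus of S-domain eigenvalues with fixed damping zeta_c, cf. (7.36)."""
    # S-domain eigenvalues with damping zeta_c, parametrized by omega_0
    lam = -zeta_c * omegas + 1j * omegas * np.sqrt(1.0 - zeta_c**2)
    # Bilinear map of each point, as in (7.35)
    return (lam + chi) / (lam - chi)

# Locus for zeta = 5%, as plotted in Figure 7.4; positive damping keeps
# every point strictly inside the unit circle.
locus = damping_locus(0.05)
```

With ζc = 0 the S-domain points are purely imaginary and the locus lies exactly on the unit circle, which is consistent with the stability limit.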
As expected from the analysis presented in Example 7.1, Figure 7.4 shows a poorly damped pair of complex eigenvalues. Figure 7.4 also shows an eigenvalue equal to −1, which is the Z-domain equivalent of a zero eigenvalue in the S-domain.
7.3 Computing the Eigenvalues
The most common methods for computing all eigenvalues of a matrix are the QR algorithm, the Arnoldi iteration and, if the matrix is Hermitian, the Lanczos method [74, 313]. However, computing all eigenvalues generally requires the use of the Gram-Schmidt orthonormalization method and, thus, can be a lengthy process if the dynamic order of the system is high.
To reduce the computational effort, it is possible to compute only a few eigenvalues with a particular property, i.e., largest or smallest magnitude, largest or smallest real or imaginary part. All or some of these options may already be available in some scientific-oriented scripting languages such as Matlab, but currently have to be implemented in general-purpose scripting languages such as Python.
7.3.1 Power Method
A very simple method that allows determining the eigenvalue with the greatest absolute value, although it may show a slow convergence rate, is the power method, which works as follows.
Given a matrix A and an initial vector ν(0), a generic iteration of the power method is given by:
ν(i+1) = Aν(i) / ‖Aν(i)‖₂    (7.37)
If A has an eigenvalue λk whose magnitude is strictly greater than that of all other eigenvalues of A, i.e., |λk| > |λh|, h ≠ k (in this case λk is called the dominant eigenvalue), and ν(0) has a non-zero component in the direction of the eigenvector νk associated with λk, then ν(i) → νk for i → ∞. Finally, the eigenvalue λk is computed as:
λk = νkᵀ A νk / (νkᵀ νk)    (7.38)
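The iteration (7.37) together with the estimate (7.38) can be sketched in NumPy as follows; the diagonal test matrix is illustrative:

```python
import numpy as np

def power_method(A, tol=1e-9, max_iter=1000):
    """Power method: dominant eigenvalue via (7.37) and the quotient (7.38)."""
    v = np.ones(A.shape[0])            # nu(0): must have a component along nu_k
    lam_old = np.inf
    for _ in range(max_iter):
        w = A @ v
        v = w / np.linalg.norm(w)      # normalized iteration (7.37)
        lam = v @ A @ v / (v @ v)      # eigenvalue estimate (7.38)
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam, v

# The dominant (largest-magnitude) eigenvalue of diag(2, -5) is -5:
lam, _ = power_method(np.diag([2.0, -5.0]))
```

Note that for a negative dominant eigenvalue the iterate ν(i) flips sign at each step, but the Rayleigh quotient (7.38) is insensitive to the sign and still converges.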
The rationale of the power method is relatively simple and is worth outlining briefly. Assume that the initial vector ν(0) can be written as a linear combination of all eigenvectors νh of the matrix A:
ν(0) = c1ν1 + c2ν2 + · · · + ck νk + · · · + cnνn |
(7.39) |
where, by hypothesis, ck ≠ 0. Assume also that A is diagonalizable and can be written as NΛN⁻¹.⁶ Then, at the ith iteration:
ν(i) = A(i)ν(0) / ‖A(i)ν(0)‖₂
     = (NΛN⁻¹)(i)ν(0) / ‖(NΛN⁻¹)(i)ν(0)‖₂
     = NΛ(i)N⁻¹ν(0) / ‖NΛ(i)N⁻¹ν(0)‖₂
     = (λk(i)ck / |λk(i)ck|) · (νk + (1/(ck λk(i))) NΛ(i)b) / ‖νk + (1/(ck λk(i))) NΛ(i)b‖₂    (7.40)

where the superscript (i) denotes the ith power and:

b = c1e1 + · · · + ck−1ek−1 + ck+1ek+1 + · · · + cnen    (7.41)

From observing that:

lim_{i→∞} (1/λk(i)) Λ(i) = lim_{i→∞} diag(λ1(i)/λk(i), . . . , 1, . . . , λn(i)/λk(i)) = diag(0, . . . , 1, . . . , 0)    (7.42)

it follows that:

(1/(ck λk(i))) NΛ(i)b → (1/ck) N diag(0, . . . , 1, . . . , 0) b = 0    (7.43)

since, by construction (7.41), the kth component of b is zero. Hence, ν(i) converges, up to a sign, to the direction of νk.

⁶ A similar proof holds if A is decomposed into its Jordan canonical form.

7.3.2 Inverse Iteration

The inverse iteration is a variant of the power method described above and allows finding the eigenvalue λk and the associated eigenvector νk if a good estimate λ̃k of the eigenvalue is known.

By way of introduction, observe that to find the minimum-magnitude eigenvalue of a given matrix A, it suffices to apply the power method iteration (7.37) to A⁻¹. In fact, the eigenvalues of A⁻¹ are the inverses of the eigenvalues of A. Clearly, this algorithm works only if the matrix A is not singular.
The inverse iteration consists in applying the power method iteration (7.37) to the matrix (A − λ̃k In)⁻¹. In fact, the eigenvalues of (A − λ̃k In)⁻¹ are (λ1 − λ̃k)⁻¹, . . . , (λk − λ̃k)⁻¹, . . . , (λn − λ̃k)⁻¹. Thus, if λ̃k is sufficiently close to λk, then (λk − λ̃k)⁻¹ is the biggest eigenvalue of (A − λ̃k In)⁻¹, which is the necessary condition for the power method to converge.
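A minimal NumPy sketch of the inverse iteration, with an illustrative shift and test matrix (a practical implementation would factorize A − λ̃k In once and reuse the factors at every step):

```python
import numpy as np

def inverse_iteration(A, shift, tol=1e-9, max_iter=200):
    """Power method applied to (A - shift*I)^-1; converges to the eigenvalue nearest the shift."""
    n = A.shape[0]
    M = A - shift * np.eye(n)
    v = np.ones(n)
    lam_old = np.inf
    for _ in range(max_iter):
        w = np.linalg.solve(M, v)      # apply (A - shift*I)^-1 without forming the inverse
        v = w / np.linalg.norm(w)
        lam = v @ A @ v / (v @ v)      # Rayleigh quotient of the original A
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam

# The eigenvalue of diag(1, 4, 10) nearest the shift 3.5 is 4:
A = np.diag([1.0, 4.0, 10.0])
lam = inverse_iteration(A, shift=3.5)
```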
7.3.3 Rayleigh's Iteration

The Rayleigh iteration is an improvement of the inverse iteration. One chooses an initial value λ̃k(0); then, at each iteration, both the eigenvector ν(i+1) and the eigenvalue estimate λ̃k(i+1) are computed, as follows:
ν(i+1) = (A − λ(i)In)⁻¹ν(i) / ‖(A − λ(i)In)⁻¹ν(i)‖₂    (7.44)

λ(i+1) = (ν(i+1))ᵀ A ν(i+1) / ((ν(i+1))ᵀ ν(i+1))    (7.45)
where (7.45) is called the Rayleigh quotient. The convergence characteristics of this method are generally better (i.e., cubic) than those of the inverse iteration.
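A NumPy sketch of the scheme (7.44)-(7.45); the guard against an exactly singular shifted matrix and the diagonal test matrix are illustrative additions:

```python
import numpy as np

def rayleigh_iteration(A, shift0, tol=1e-10, max_iter=50):
    """Rayleigh iteration: the shift is updated with the quotient (7.45) at every step."""
    n = A.shape[0]
    v = np.ones(n) / np.sqrt(n)
    lam = shift0
    for _ in range(max_iter):
        try:
            w = np.linalg.solve(A - lam * np.eye(n), v)   # step (7.44)
        except np.linalg.LinAlgError:
            return lam                                    # shift hit an eigenvalue exactly
        v = w / np.linalg.norm(w)
        lam_new = v @ A @ v / (v @ v)                     # step (7.45)
        if abs(lam_new - lam) < tol:
            return lam_new
        lam = lam_new
    return lam

A = np.diag([1.0, 4.0, 10.0])
lam = rayleigh_iteration(A, shift0=3.5)   # converges to 4 in very few steps
```

Updating the shift at each step is what turns the linear convergence of the inverse iteration into the (locally) cubic convergence mentioned above.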
Example 7.5 Inverse and Rayleigh's Iterations for the IEEE 14-Bus System

For the sake of example, consider the matrix AZ defined in (7.35), ν(0) = [1, 1, . . . , 1]ᵀ, λ̃k(0) = −0.9 and a tolerance ε = 10⁻⁵. The inverse iteration converges to the eigenvalue −0.9545 in 13 iterations, while the Rayleigh iteration converges in 5.
From the scripting viewpoint, both inverse and Rayleigh’s iterations require practically the same code. For example, the inverse iteration is as follows:
from cvxopt.umfpack import linsolve
from cvxopt.base import matrix
from cvxopt.blas import dotu
b = matrix(1, (system.DAE.nx, 1), 'd')
mold = 99999
m0 = -0.9
iteration = 0
while 1:
    linsolve(As - m0*In, b)
    nor = (dotu(b, b))**0.5
    b = b/nor
    m = dotu(b, As*b)/dotu(b, b)
    if abs(m - mold) < 1e-5:
        break
    mold = m
    iteration += 1
To obtain the Rayleigh iteration, it suffices to substitute the 12th line of the code above (the computation of m) with:
m0 = m = dotu(b, As*b)/dotu(b, b)
7.4 Power Flow Modal Analysis
Besides small-signal stability analysis, eigenvalues and eigenvectors can also be used for assessing sensitivities. In particular, an interesting approach is the modal analysis of the power flow Jacobian matrix [104, 211, 354].
Let us consider the classical power flow model defined in Section 4.3 (e.g., constant PQ loads and constant PV generators). After solving the power flow analysis, the Jacobian matrix can be easily computed. If Newton's method is used, the Jacobian matrix is available as a byproduct of the solution algorithm. The eigenvalue analysis is performed on a reduced matrix, as follows. Recalling (4.45), the Jacobian matrix can be divided into four sub-matrices:
gy = [gp,θ, gp,v; gq,θ, gq,v]    (7.46)
In case of the classical power flow model, one can associate a physical meaning to each sub-matrix, since load and generator powers are constant. In fact, consider the linearization of the power flow equations with constant power injections:
[Δp; Δq] = [gp,θ, gp,v; gq,θ, gq,v] [Δθ; Δv]    (7.47)
The basic assumption of [104] is to consider Δp ≈ 0. This is reasonable if one is interested only in the relationship between reactive powers and bus voltage magnitudes. Furthermore, recalling the assumptions of the fast-decoupled power flow (see Section 4.4.7 of Chapter 4), the pθ world is quite decoupled from the qv one. Thus, one can define a reduced power flow Jacobian matrix as follows:
JLF = gq,v − gq,θ gp,θ⁻¹ gp,v    (7.48)
where it is assumed that gp,θ is non-singular. The sensitivity analysis follows from the observation that:

Δq = JLF Δv    (7.49)

hence:

Δv = JLF⁻¹ Δq = N Λ⁻¹ N⁻¹ Δq    (7.50)
and defining the modal reactive power and voltage variations as:

Δqm = N⁻¹ Δq    (7.51)
Δvm = N⁻¹ Δv
it follows that the modal sensitivity for each eigenvalue λk is:
dvm,k / dqm,k = 1 / λk ,  k ∈ BPQ    (7.52)
where BPQ is the set of PQ load buses. If λk > 0, then an increase in the injected reactive power leads to a bus voltage increase, which is the normal situation for systems with inductive branches and loads. If λk < 0, the voltage decreases if the reactive power injected at the bus increases. This is considered an unstable behavior (at least for inductive systems) and is actually typical of power flow solutions on the lower part of the nose curve (see Section 5.1 of Chapter 5). A special case is λk = 0, which can be viewed as an infinite sensitivity of the voltage with respect to the reactive power. Actually, this case corresponds to a saddle-node bifurcation.
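The chain (7.48)-(7.52) can be sketched with illustrative stand-in sub-matrices (in practice the blocks are extracted from the power flow Jacobian of the solved case):

```python
import numpy as np

# Illustrative 2-bus stand-ins for the Jacobian blocks of (7.46); not real system data.
g_ptheta = np.array([[10.0, -4.0], [-4.0, 9.0]])
g_pv     = np.eye(2)
g_qtheta = 0.5 * np.eye(2)
g_qv     = np.array([[8.0, -3.0], [-3.0, 7.0]])

# Reduced Jacobian (7.48): J_LF = g_qv - g_qtheta * g_ptheta^-1 * g_pv
J_LF = g_qv - g_qtheta @ np.linalg.solve(g_ptheta, g_pv)

# Modal decomposition (7.50)-(7.52): each eigenvalue lambda_k gives the
# modal V-Q sensitivity 1/lambda_k.
lams, N = np.linalg.eig(J_LF)
sensitivities = 1.0 / lams

# Participation factor of bus i in mode k: N[i,k] * N^-1[k,i]
P = N * np.linalg.inv(N).T
```

All eigenvalues of this toy J_LF are positive, i.e., Δv/Δq > 0 at every load bus, and each column of P sums to one by construction.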
Example 7.6 Power Flow Modal Analysis for the IEEE 14-Bus System
Figure 7.5 and Table 7.3 show the results of the modal analysis of the power flow Jacobian matrix as well as the participation factors for the IEEE 14-bus test system. In this case study, only static power flow data are considered, i.e., loads are modelled as constant PQ and generators as constant PV or slack.
Figure 7.5 and Table 7.3 only show 9 eigenvalues. In fact, the 14-bus system has 5 generators that keep the voltage constant at the buses where they are connected. As expected, all eigenvalues are positive, thus indicating that ∂v/∂q > 0 at all load buses.
Fig. 7.5 Eigenvalues of the power flow Jacobian matrix for the IEEE 14-bus system

7.4.1 Singular Value Decomposition

The Singular Value Decomposition (SVD) is an important kind of matrix factorization and has several applications, for example, computing the pseudo-inverse, approximating matrices, and determining the rank of a matrix. The SVD consists in a factorization of a matrix A in the form:
A = UΣVᴴ    (7.53)
where U is a unitary (but not diagonal) matrix, Σ is a diagonal matrix whose diagonal elements are the singular values (non-negative real numbers), and Vᴴ is the Hermitian transpose (conjugate transpose) of V, which is also a unitary matrix.
The singular values can be interpreted as “gain controls” that multiply the input signals filtered by the orthonormal V and that pass these signals to the orthonormal U that generates the output signals.
Although the main applications of the SVD are found in signal processing and statistics, the feature that is relevant in the context of small-signal stability is that the computation of the SVD, or, better, of the minimum singular value of a matrix, is much more efficient than the corresponding eigenvalue computation [313]. Moreover, the following relevant property of the determinant:
det(A) = det(NΛN⁻¹) = det(UΣVᴴ)    (7.54)

hence, since unitary matrices have unit-modulus determinant:

det(A) = det(Λ),  |det(A)| = det(Σ)
has an interesting application in case det(A) = 0. In fact, if A is singular, there exists a zero singular value. Since all singular values are non-negative,
Table 7.3 Power flow modal analysis and participation factors for the IEEE 14-bus system

Eigenvalue   ph,4     ph,5     ph,7     ph,9     ph,10
65.3424      0.5416   0.4517   0.0066   0.0001   0
39.9528      0        0.0006   0.1531   0.6147   0.2153
21.9828      0.0756   0.1517   0.4942   0.0030   0.2216
18.9217      0.0005   0.0007   0.0002   0.0002   0.0046
16.4317      0.2835   0.3223   0.0202   0.0476   0.1614
2.7060       0.0082   0.0040   0.0699   0.1999   0.2394
5.5693       0.0024   0.0013   0.0166   0.0314   0.1157
7.6621       0        0        0.0001   0.0001   0.0379
11.3351      0.0881   0.0677   0.2392   0.1030   0.0041

Eigenvalue   ph,11    ph,12    ph,13    ph,14
65.3424      0        0        0        0
39.9528      0.0076   0        0.0001   0.0085
21.9828      0.0534   0        0.0001   0.0003
18.9217      0.0021   0.1781   0.7652   0.0485
16.4317      0.1530   0.0024   0.0057   0.0040
2.7060       0.1108   0.0190   0.0324   0.3164
5.5693       0.1281   0.3392   0.1636   0.2017
7.6621       0.1168   0.4512   0.0306   0.3634
11.3351      0.4282   0.0101   0.0023   0.0573
σk = 0 is the minimum singular value of A. On the other hand, if one looks for the minimum singular value and min{σk} > 0, then A is certainly non-singular. Thus, if one is interested only in knowing whether A is singular or not, computing the minimum singular value is a numerically efficient option. This property has been used in voltage stability studies for determining the distance to saddle-node bifurcations [39, 45].
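A minimal sketch of this singularity test on toy matrices (not the power flow Jacobian):

```python
import numpy as np

# A rank-deficient matrix has a (numerically) zero minimum singular value.
A = np.array([[2.0, 1.0], [4.0, 2.0]])   # second row is twice the first
sigma = np.linalg.svd(A, compute_uv=False)

# A non-singular matrix has min singular value strictly greater than zero.
B = np.array([[2.0, 1.0], [0.0, 3.0]])
sigma_B = np.linalg.svd(B, compute_uv=False)
```

Only the singular values are needed here, so `compute_uv=False` skips the computation of U and V.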
Example 7.7 Minimum Singular Value Index for the IEEE 14-Bus System
Example 5.4 in Chapter 5 shows the continuation power flow analysis for the IEEE 14-bus system using a distributed slack bus model and neglecting reactive power limits of PV generators. In this case, the maximum loading condition is due to a SNB. As discussed above, at the SNB point, the minimum singular value of the power flow Jacobian matrix is zero. Thus, the minimum singular value of the Jacobian matrix can be used as an index for evaluating the proximity to the point of collapse.
Figure 7.6 shows the behavior of the minimum singular value of the power flow Jacobian matrix. As expected, as the loading level μ increases, the minimum singular value decreases. It is numerically quite difficult to find exactly the saddle-node bifurcation point; hence, the curve shown in Figure 7.6 only gets very close to zero.

Fig. 7.6 Minimum singular value of the power flow Jacobian matrix computed during the CPF analysis for the IEEE 14-bus system
7.5 Summary
This section summarizes the most relevant concepts related to small-signal stability analysis.
Solver method: There are several methods for computing the eigenvalues of a matrix. Methods that compute all eigenvalues are based on the QR algorithm or on some of its variants, such as the Arnoldi iteration. In this chapter, three simple methods for computing a reduced number of eigenvalues are described, namely the power method, the inverse iteration and the Rayleigh iteration. The latter is quite efficient if a good estimate of the eigenvalues of interest is known. In practical applications, it is not necessary to compute all eigenvalues since only the smallest ones are of interest. Furthermore, if one is only interested in knowing whether a matrix has a zero eigenvalue or not, computing the minimum singular value is generally more efficient than finding the minimum eigenvalue.
Matrix type: Typically, the matrix used for studying small-signal stability is the state matrix. For ODE systems, the Jacobian fx coincides