
6.2 Optimal Power Flow Model


Example 6.1 Standard Optimal Power Flow Problem

A typical, relatively general OPF-based problem can be represented using the following nonlinear constrained optimization problem:

Minimize_z    ϕ = −( Σ_{h∈D} c_{Lh}(p_{Lh}) − Σ_{h∈S} c_{Gh}(p_{Gh}) )                (6.25)

subject to    g(θ, v, q_G, p_G, p_L) = 0                        Power flow equations
              p_G^min ≤ p_G ≤ p_G^max                           Generator p limits
              q_G^min ≤ q_G ≤ q_G^max                           Generator q limits
              p_L^min ≤ p_L ≤ p_L^max                           Load p limits
              |φ_ij(θ, v)| ≤ φ_ij^max                           Flow limits
              |φ_ji(θ, v)| ≤ φ_ji^max
              v^min ≤ v ≤ v^max                                 Voltage limits

 

where z = (θ, v, q_G, p_G, p_L); c_G and c_L are vectors of functions of the generator and load powers, respectively; q_G stands for the generator reactive powers; v and θ represent the bus voltage magnitudes and phase angles; p_G and p_L represent the bounded generator and load active powers; and φ_ij and φ_ji represent the active powers (or apparent powers or currents) flowing through the lines in the two directions. In the security context, power transfer limits are usually determined based only on power flow-based voltage stability studies [107] and can be obtained using the N − 1 contingency analysis described in Subsection 5.4.5 of Chapter 5.

In spite of its simplicity, problem (6.25) can tackle a variety of important problems.

1. If cG are generation cost functions, cL = 0 and p_L^min = p_L^max, (6.25) allows solving the classical economic dispatch while ensuring security limits such as voltage limits and transmission line thermal limits.

2. If cG(pG) = pG and cL(pL) = pL, (6.25) allows minimizing power system losses.

3. If cG(pG) = −pG and cL = 0, (6.25) allows maximizing the power production. This problem is useful to evaluate the allowable penetration of energy resources in the power system (e.g., renewable and distributed generation).

4. If cG(pG) and cL(pL) have the meaning of offers and bids, respectively, rather than costs, then the objective function becomes the social benefit and (6.25) allows solving the security-constrained market dispatch [353].

The demand is said to be inelastic if p_L^min = p_L^max (which is the common case), and elastic if p_L^min < p_L^max.
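For illustration, the following Python sketch shows how the four cases above correspond to different choices of the functions cG and cL in the objective of (6.25). The function shapes and numerical values are arbitrary assumptions, not data from the book.

import numpy as np

def objective(p_G, p_L, c_G, c_L):
    """phi = -(sum_h c_Lh(p_Lh) - sum_h c_Gh(p_Gh)), as in (6.25)."""
    return -(np.sum(c_L(p_L)) - np.sum(c_G(p_G)))

p_G = np.array([0.8, 0.5])   # generator active powers (pu), illustrative
p_L = np.array([0.6, 0.6])   # load active powers (pu), illustrative

# 1. Economic dispatch: c_G are generation costs, c_L = 0
cost = objective(p_G, p_L, lambda p: 10.0 * p + 5.0 * p**2, lambda p: 0.0 * p)
# 2. Loss minimization: c_G(p_G) = p_G, c_L(p_L) = p_L  ->  phi = sum(p_G) - sum(p_L)
losses = objective(p_G, p_L, lambda p: p, lambda p: p)
# 3. Maximum power production: c_G(p_G) = -p_G, c_L = 0  ->  phi = -sum(p_G)
production = objective(p_G, p_L, lambda p: -p, lambda p: 0.0 * p)
# 4. Market dispatch: c_G = offers, c_L = bids  ->  phi = -(social benefit)
social = objective(p_G, p_L, lambda p: 12.0 * p, lambda p: 20.0 * p)
print(cost, losses, production, social)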


Example 6.2 Maximization of the Distance to Voltage Collapse

The following optimization problem is implemented to represent system security through the use of voltage stability conditions, based on what was proposed in [43, 46, 47]:

Minimize_z    ϕ = −μ                                                                  (6.26)

subject to    g(θ, v, q_G, p_G, p_L) = 0                        PF equations
              g^c(θ^c, v^c, q_G^c, k_G^c, μ, p_G, p_L) = 0      Max. load PF equations
              μ^min ≤ μ                                         Loading level
              p_G^min ≤ p_G ≤ p_G^max                           Generator p limits
              p_L^min ≤ p_L ≤ p_L^max                           Load p limits
              φ_ij(θ, v) ≤ φ_ij^max                             Flow limits
              φ_ji(θ, v) ≤ φ_ji^max
              φ_ij(θ^c, v^c) ≤ φ_ij^max
              φ_ji(θ^c, v^c) ≤ φ_ji^max
              q_G^min ≤ q_G ≤ q_G^max                           Generator q limits
              q_G^min ≤ q_G^c ≤ q_G^max
              v^min ≤ v ≤ v^max                                 Voltage limits
              v^min ≤ v^c ≤ v^max

where z = (μ, θ, v, q_G, θ^c, v^c, q_G^c, p_G, p_L).

In this case, a second set of power flow equations and constraints, marked with the superscript c, is introduced to represent the system at the limit or critical conditions associated with the loading margin μ that drives the system to its maximum loading condition. The critical power flow equations g^c can include a line outage. The maximum or critical loading point can be associated either with a thermal or bus voltage limit, or with a voltage stability limit (collapse point) corresponding to a system singularity (saddle-node bifurcation) or to system controller limits such as generator reactive power limits (limit-induced bifurcation) [39, 259]. Thus, for the current and maximum loading conditions, the generator and load powers are defined as follows:

p_G^c = (1 + μ + k_G^c) p_G,        p_L^c = (1 + μ) p_L

where k_G^c represents a scalar variable that distributes the system losses associated only with the solution of the critical power flow equations in proportion to the power injections obtained in the solution process (distributed slack bus model). It is assumed that the losses corresponding to the maximum loading level defined by μ are distributed among all generators.

For the sake of example, consider the Lagrangian function L associated with problem (6.26), with all inequalities transformed into equalities through the vector of slack variables s as in (6.16).

L = ϕ − ρ^T g(θ, v, q_G, p_G, p_L)                                                    (6.27)
      − ρ_c^T g^c(θ^c, v^c, q_G^c, μ, p_G, p_L)
      − π_{μ^min} (μ − μ^min − s_{μ^min})
      − π_{p_G^max}^T (p_G^max − p_G − s_{p_G^max})
      − π_{p_G^min}^T (p_G − p_G^min − s_{p_G^min})
      − π_{p_L^max}^T (p_L^max − p_L − s_{p_L^max})
      − π_{p_L^min}^T (p_L − p_L^min − s_{p_L^min})
      − π_{φ_ij^max}^T (φ_ij^max − φ_ij − s_{φ_ij^max})
      − π_{φ_ji^max}^T (φ_ji^max − φ_ji − s_{φ_ji^max})
      − π_{φ_ij^{c,max}}^T (φ_ij^max − φ_ij^c − s_{φ_ij^{c,max}})
      − π_{φ_ji^{c,max}}^T (φ_ji^max − φ_ji^c − s_{φ_ji^{c,max}})
      − π_{q_G^max}^T (q_G^max − q_G − s_{q_G^max})
      − π_{q_G^min}^T (q_G − q_G^min − s_{q_G^min})
      − π_{q_G^{c,max}}^T (q_G^max − q_G^c − s_{q_G^{c,max}})
      − π_{q_G^{c,min}}^T (q_G^c − q_G^min − s_{q_G^{c,min}})
      − π_{v^max}^T (v^max − v − s_{v^max})
      − π_{v^min}^T (v − v^min − s_{v^min})
      − π_{v^{c,max}}^T (v^max − v^c − s_{v^{c,max}})
      − π_{v^{c,min}}^T (v^c − v^min − s_{v^{c,min}})

where ρ and ρ_c ∈ R^{n_g}, and all the other π (with π_k > 0 for all k) correspond to the Lagrangian multipliers. The slack variables s have to satisfy the strict positivity condition s > 0.

6.3 Nonlinear Programming Solvers

As indicated in the previous section, the problem of finding a local minimizer of (6.7) or (6.16) is equivalent to solving (6.10)-(6.14) or (6.17)-(6.22), respectively. These are sets of nonlinear equalities and inequalities. The main challenges from the solution method viewpoint are twofold:


1. The inequality constraints (6.12) and (6.14) or (6.21) and (6.22) considerably complicate the solution process.

2. The conditions (6.10) or (6.17) contain the Jacobian matrices ϕ_z, g_z and h_z. Thus, any solution method that involves the calculation of L_zz (such as any Newton's method) implies setting up the Hessian matrices ϕ_zz, g_zz and h_zz. In the case of the power flow equations, calculating g_zz is not a trivial task.

In the following sections, only two solution methods are described, namely the reduced gradient method and the primal-dual interior point method.

6.3.1 Generalized Reduced Gradient Method

The generalized reduced gradient (GRG) method was one of the first methods used in power system analysis [80]. The GRG method is also used in well-assessed solvers such as CONOPT [81]. This method works for constrained nonlinear problems and resembles the solution approach of the simplex method used for linear programming [96]. The main idea of the GRG method is to divide the variables z into two subsets, one of basic (or dependent) variables and one of non-basic (or independent) variables. In mathematical terms, basic variables are those variables that are unequivocally determined once the vector of non-basic variables is assigned. Defining basic and non-basic variables is generally easy in physical problems. As a matter of fact, according to Sections 1.4 and 6.2, z = [y^T, η^T]^T. Thus y are the basic variables and η are the non-basic ones. For example, in the standard optimal power flow problem, y are the bus voltages, while η are the generator active powers.

In order to describe the reduced gradient method, consider for simplicity the following optimization problem:

Minimize_{y,η}    ϕ(y, η)                                                             (6.28)

subject to        g(y, η) = 0

 

How to handle inequalities is explained later on. The reduced gradient r(η) of (6.7), with r(η) : R^{n_η} → R^{n_η}, is defined as:

r(η) = dϕ/dη = ϕ_η + ϕ_y (dy/dη)                                                      (6.29)

 

 

 

 

Differentiating g(y, η), and assuming that the current point (y^(i), η^(i)) is feasible and satisfies g(y^(i), η^(i)) = 0, one has:

g_y|_i dy + g_η|_i dη = 0                                                             (6.30)

Thus, (6.29) can be rewritten as:

r(η) = ϕ_η − ϕ_y g_y^{-1} g_η                                                         (6.31)

6.3 Nonlinear Programming Solvers

141

The reduced gradient is used as a direction along which to find a small move from the current value of η^(i) that is able to decrease the objective function ϕ. For the current feasible point (y^(i), η^(i)), the step size Δη_k^(i) for k = 1, . . . , n_η is:

Δη_k^(i) = { 0          if η_k^(i) = 0 and r_k(η) > 0                                 (6.32)
           { −r_k(η)    otherwise

Then, the basic variable step size Δy^(i) is computed as:

Δy^(i) = −g_y^{-1} g_η Δη^(i)                                                         (6.33)

Thus, the projection move is:

z̃^(i+1) = z^(i) + Δz^(i)                                                              (6.34)

where Δz^(i) = [[Δy^(i)]^T, [Δη^(i)]^T]^T.

Due to the nonlinearity of the constraints g, the projection move provides a new point that does not satisfy g(ỹ^(i+1), η̃^(i+1)) = 0. It is thus necessary to apply a restoration move that brings the current point back to the constraint boundary. A possibility is to use a linear approximation of the constraints g(z^(i+1)):

 

g(z^(i+1)) ≈ g(z̃^(i+1)) + g_z (z^(i+1) − z̃^(i+1))                                     (6.35)

Since the equations g(z) are not linear and the Jacobian matrix g_z is not square, the correction z^(i+1) can be found using Newton's method and iterating the following equation until max{abs(g(z^(i+1)))} is sufficiently small:

z^(i+1) = z̃^(i+1) − g_z^T (g_z g_z^T)^{-1} g(z^(i+1))                                  (6.36)

The whole reduced gradient procedure ends if max{abs(Δz^(i))} < ε, where ε is a given tolerance, or if the maximum number of moves is completed.
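The following Python sketch illustrates the projection and restoration moves on a small equality-constrained toy problem. The problem itself, the step size and the tolerances are illustrative assumptions, not taken from the book; the restoration uses the pseudo-inverse form of (6.36).

import numpy as np

phi = lambda y, e: (y - 1.0)**2 + (e - 2.0)**2
g   = lambda y, e: np.array([y - e**2])          # single equality constraint
g_z = lambda y, e: np.array([[1.0, -2.0 * e]])   # Jacobian [g_y, g_eta]

y, eta, alpha = 1.0, 1.0, 0.1                    # y basic, eta non-basic
for i in range(200):
    phi_y, phi_e = 2.0 * (y - 1.0), 2.0 * (eta - 2.0)
    gy, ge = 1.0, -2.0 * eta
    r = phi_e - phi_y * ge / gy                  # reduced gradient (6.31)
    d_eta = -alpha * r                           # move on the non-basic variable
    d_y = -(ge / gy) * d_eta                     # basic variable step (6.33)
    y, eta = y + d_y, eta + d_eta                # projection move (6.34)
    for _ in range(20):                          # restoration move, cf. (6.36)
        gz = g_z(y, eta)
        dz = -gz.T @ np.linalg.solve(gz @ gz.T, g(y, eta))
        y, eta = y + dz[0], eta + dz[1]
        if abs(g(y, eta)[0]) < 1e-10:
            break
    if max(abs(d_y), abs(d_eta)) < 1e-8:
        break

print(round(y, 3), round(eta, 3), round(phi(y, eta), 3))   # approx. 1.358 1.165 0.825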

In case the optimization problem contains inequalities, the procedure described above uses the vector of all binding constraints, i.e., one has to substitute g with g_a = [g^T, h̃^T]^T, where h̃ are the inequalities that are binding at step i. The main difficulty is to find a projection move that does not violate inactive constraints. With this aim, the projection move (6.34) is modified using Haug and Arora's procedure [125], which is a combination of the projection and the restoration moves:

Δz^(i+1) = α Δz^(i) − g_{a,z}^T (g_{a,z} g_{a,z}^T)^{-1} g_a(z^(i+1))                  (6.37)

where α is computed by imposing a given reduction γ of the objective function:

α = γ ϕ(z^(i)) / [ (Δz^(i))^T ϕ_z(z^(i)) ]                                             (6.38)

142

6 Optimal Power Flow Analysis

After the restoration move, new constraints may become binding, and the vector g_a has to be updated before performing the next projection step.

Example 6.3 Continuation Power Flow as Reduced Gradient Method

This example states the formal analogy between the reduced gradient method described above and the homotopy predictor-corrector method described in Section 5.4 of Chapter 5. A similar proof was originally presented in [37] and further formalized in [21].

The analogy can be shown straightforwardly by observing that, in the continuation power flow analysis, the non-basic variable is the loading level μ and the objective function is ϕ = −μ. Since ϕ_y = 0 and ϕ_μ = −1, the reduced gradient (6.31) becomes:

r(μ) = −g_y^{-1} g_μ                                                                   (6.39)

Hence, the reduced gradient r coincides with the tangent vector τ used for the predictor step (i.e., the projection move) discussed in Section 5.4. As for the restoration move discussed above, it is just another version of the corrector steps described in the same Section 5.4. Moreover, the PV generator reactive power limits discussed in Example 5.4 are inequality constraints. Whenever a reactive power limit (e.g., q_{Gh}^max) is reached, the constraint q_{Gh} ≤ q_{Gh}^max becomes binding and the vector y is updated to include the voltage magnitude v_h at the PV generator bus. Alternatively, if v_h is already defined as a variable, the constraint v_h = v_{Gh}^ref is removed from g.

6.3.2 Interior Point Method

Although interior point methods (IPMs) were formalized in the late sixties [95], the nineties were the dawn of most IPM-based applications for power system analysis [8, 111, 148, 209, 311, 352]. IPM-based OPF problems have proved to be robust, especially in large networks, as the number of iterations increases only slightly with the number of constraints and the network size.

In particular, in [251, 311, 312], the authors present a comprehensive investigation of the use of primal-dual IPM for nonlinear problems, and describe the application of Newton’s direction and Mehrotra’s predictor-corrector to the OPF. The latter allows reducing the number of iterations to obtain the final solution. Both methods are described in this section.

The main idea of the primal-dual IPM discussed in [95] is the introduction of a logarithmic barrier function that allows incorporating the inequality constraints into the objective function. In this way, inequalities are implicitly taken into account. The objective function modified by means of the logarithmic barrier function is:


 

ϕ̂(z, μ̂) = ϕ(z) − μ̂ Σ_{k=1}^{n_h} ln(−h_k(z))                                          (6.40)

where μ̂ > 0 is the barrier parameter. The logarithmic function ensures that h(z) < 0. In order to effectively minimize the objective function, μ̂ is decreased monotonically to zero during the iterative process of the IPM.
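As a minimal numerical illustration of (6.40), the sketch below evaluates the barrier objective for a scalar problem and shows how its minimizer approaches the constrained solution as μ̂ decreases. The example problem is an assumption chosen for clarity, not one from the book.

import numpy as np

def phi_hat(z, mu_hat, phi, h):
    """Barrier objective (6.40); defined only where all h_k(z) < 0."""
    hz = h(z)
    if np.any(hz >= 0):
        return np.inf            # outside the strict interior of the feasible region
    return phi(z) - mu_hat * np.sum(np.log(-hz))

# Example: phi(z) = z^2 with the single inequality h(z) = 1 - z <= 0 (i.e., z >= 1)
phi = lambda z: z**2
h   = lambda z: np.array([1.0 - z])
for mu in (1.0, 0.1, 0.01):
    # as mu_hat -> 0, the barrier minimizer approaches the constrained one (z* = 1)
    zgrid = np.linspace(1.0 + 1e-6, 3.0, 20001)
    zmin = zgrid[np.argmin([phi_hat(z, mu, phi, h) for z in zgrid])]
    print(mu, round(zmin, 3))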

Applying the logarithmic function to the transformed problem (6.16), one obtains:

 

Minimize_z    ϕ(z) − μ̂ Σ_{k=1}^{n_h} ln(s_k)                                           (6.41)

subject to    g(z) = 0
              s + h(z) = 0,    s > 0

 

The logarithmic terms impose strict positivity on the slack variables. First order optimality conditions for (6.41) are:

ϕ_z(z) + ρ^T g_z(z) + π^T h_z(z) = 0                                                   (6.42)
g(z) = 0
s + h(z) = 0
π − μ̂ S^{-1} e = 0

where S = diag(s_1, s_2, . . . , s_{n_h}) and e = [1, 1, . . . , 1]^T. The complementarity constraints can be rewritten as:

S π − μ̂ e = 0                                                                          (6.43)

where μ̂ e with μ̂ > 0 is a perturbation of the standard complementarity conditions.

The primal-dual IPM consists in the following steps.

1. Initial guess. Set i ← 0 and choose a starting point z^(0), s^(0), π^(0) and μ̂^(0). The initial guess must satisfy the strict positivity condition.

2. Computing variable directions. Compute the Jacobian matrix of the first-order optimality conditions (6.42) and compute the variable directions.

3. Updating variables. Update primal and dual variables using a step length along the directions computed in the previous step.

4. Reducing the barrier parameter. A new barrier parameter μ̂^(i+1) is computed based on the current slack and dual variables s^(i) and π^(i), respectively.

5. Convergence test. Check whether the new point is a local minimizer. If it is, the algorithm ends; otherwise, set i ← i + 1, update the barrier parameter μ̂^(i) and go back to Step 2.

Each step is briefly described in the following subsections.
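Before the detailed description, the following self-contained Python sketch runs the five steps above on a toy one-variable problem (minimize z² subject to z ≥ 1). The problem, the fixed centering parameter σ and the tolerances are illustrative assumptions; a real OPF implementation works with the sparse system (6.48) instead.

import numpy as np

def kkt(z, pi, s, mu):
    # residuals of (6.42)-(6.43): dual feasibility, primal feasibility, complementarity
    return np.array([2.0 * z - pi,          # phi_z + pi * h_z, with h(z) = 1 - z, h_z = -1
                     s + (1.0 - z),         # s + h(z) = 0
                     s * pi - mu])          # perturbed complementarity (6.43)

def step_length(x, dx, gamma=0.99995):
    # largest alpha in (0, 1] keeping x + alpha*dx strictly positive
    return min(1.0, gamma * (-x / dx)) if dx < 0 else 1.0

# Step 1: initial guess satisfying the strict positivity conditions
z, pi, s, sigma = 2.0, 1.0, 1.0, 0.1
mu = sigma * s * pi

for i in range(50):
    # Step 2: Newton direction from the Jacobian of the optimality conditions
    J = np.array([[2.0, -1.0, 0.0],
                  [-1.0, 0.0, 1.0],
                  [0.0,  s,   pi]])
    dz, dpi, ds = np.linalg.solve(J, -kkt(z, pi, s, mu))
    # Step 3: update primal and dual variables with separate step lengths
    aP, aD = step_length(s, ds), step_length(pi, dpi)
    z, s, pi = z + aP * dz, s + aP * ds, pi + aD * dpi
    # Step 4: reduce the barrier parameter based on the complementarity gap (n_h = 1)
    mu = sigma * s * pi
    # Step 5: convergence test on the unperturbed KKT residuals
    if np.max(np.abs(kkt(z, pi, s, 0.0))) < 1e-8:
        break

print(i, round(z, 6), round(pi, 6), round(s, 8))   # expected: z ~ 1, pi ~ 2, s ~ 0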


Initial Guess

IPMs do not require that the initial point z be feasible; however, the strict positivity conditions s > 0 and π > 0 must be satisfied, otherwise the method does not converge. Some heuristics can also help obtain convergence. In [311], the following initializations are proposed:

1. Primal variables z can be obtained as the solution of a power flow problem or by computing the middle point between the upper and the lower limit of the bounded variables.

2. The slack variables s are initialized so as to satisfy the strict positivity constraint. Rewriting the inequalities as:

   h^min ≤ ĥ(z) ≤ h^max                                                                (6.44)

   the slack variables associated with the lower limits, say s_min, are obtained as:

   s_min^(0) = min{ max{ γ Δh, ĥ(z^(0)) − h^min }, (1 − γ) Δh }                         (6.45)

   where Δh = h^max − h^min and γ = 0.25. Then, the slack variables associated with the upper limits are set as:

   s_max^(0) = Δh − s_min^(0)                                                           (6.46)

3. The dual variables π^(0) are given by:

   π^(0) = μ̂^(0) [S^(0)]^{-1} e                                                         (6.47)

4. The dual variables ρ_k^(0) are set to 1 if associated with an active power constraint, and to 0 otherwise.
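A compact sketch of these initialization heuristics is given below; the array names and the example voltage limits are illustrative, not from the book.

import numpy as np

def init_slacks(h_z0, h_min, h_max, gamma=0.25):
    """Slacks for double-bounded constraints h_min <= h(z) <= h_max, eqs. (6.45)-(6.46)."""
    dh = h_max - h_min
    s_min0 = np.minimum(np.maximum(gamma * dh, h_z0 - h_min), (1.0 - gamma) * dh)
    s_max0 = dh - s_min0
    return s_min0, s_max0

def init_duals(mu0, s0):
    """pi(0) = mu(0) [S(0)]^{-1} e, eq. (6.47)."""
    return mu0 / s0

# e.g., voltage-magnitude limits evaluated at a power flow solution
v0 = np.array([1.06, 0.98, 0.92])
s_min0, s_max0 = init_slacks(v0, h_min=np.full(3, 0.9), h_max=np.full(3, 1.1))
pi0 = init_duals(0.1, np.concatenate([s_min0, s_max0]))
print(s_min0, s_max0, pi0)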

Computing Variable Directions

At each step i of the IPM, the variable directions are used for following the path of minimizers parametrized by μ̂^(i). The most common method to compute the directions is Newton's method, which consists in solving the following linear system obtained from (6.42) and (6.43):

 

 

L_ξξ [Δz^T, Δρ^T, Δπ^T, Δs^T]^T = −[L_z^T, L_ρ^T, L_π^T, L_s^T]^T                       (6.48)

where

L_ξξ = [ L_zz    g_z^T    h_z^T    0       ]
       [ g_z     0        0        0       ]
       [ h_z     0        0        I_{n_h} ]
       [ 0       0        S        Π       ]

 

 

where ξ = [z^T, ρ^T, π^T, s^T]^T and the superscript i has been omitted for simplicity. The size of the system (6.48) can be reduced to a 2n_z × 2n_z system by approximating L_π ≈ 0 and L_s ≈ 0 as follows:


[ L̂_zz    g_z^T ] [ Δz ]        [ L_z ]
[ g_z      0    ] [ Δρ ]  =  −  [ L_ρ ]                                                 (6.49)

that provides Δz and Δρ, plus the following direct equations:

Δs = −h_z Δz                                                                            (6.50)
Δπ = −S^{-1} Π Δs

where:

L̂_zz = L_zz + h_z^T S^{-1} Π h_z                                                        (6.51)

 
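The reduction (6.49)-(6.51) amounts to a few lines of linear algebra, as in the following sketch. The function name and the small random test data are illustrative assumptions, not part of the book.

import numpy as np

def reduced_newton_step(Lzz, gz, hz, s, pi, Lz, Lrho):
    nz, ng = Lzz.shape[0], gz.shape[0]
    SinvPi = np.diag(pi / s)                          # S^{-1} Pi
    Lzz_hat = Lzz + hz.T @ SinvPi @ hz                # (6.51)
    K = np.block([[Lzz_hat, gz.T],
                  [gz, np.zeros((ng, ng))]])          # coefficient matrix of (6.49)
    sol = np.linalg.solve(K, -np.concatenate([Lz, Lrho]))
    dz, drho = sol[:nz], sol[nz:]
    ds = -hz @ dz                                     # (6.50)
    dpi = -(pi / s) * ds
    return dz, drho, ds, dpi

rng = np.random.default_rng(0)
nz, ng, nh = 6, 2, 4
A = rng.normal(size=(nz, nz))
dz, drho, ds, dpi = reduced_newton_step(
    Lzz=A @ A.T + nz * np.eye(nz),                    # symmetric positive definite block
    gz=rng.normal(size=(ng, nz)), hz=rng.normal(size=(nh, nz)),
    s=rng.uniform(0.5, 1.5, nh), pi=rng.uniform(0.5, 1.5, nh),
    Lz=rng.normal(size=nz), Lrho=rng.normal(size=ng))
print(dz.shape, drho.shape, ds.shape, dpi.shape)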

Newton directions obtained from (6.48) or (6.49) are generally sufficient to lead to convergence. However, sparse matrix factorization is the most time-consuming operation of the whole algorithm. Thus, it is worth looking for methods that allow reducing the number of iterations and, consequently, the number of factorizations. In this vein, Mehrotra's predictor-corrector method is a good option. Mehrotra's method consists in computing the variable directions in two steps but needs only one factorization of the matrices in (6.48) or (6.49), thus leading to a computational burden similar to that of the standard Newton's directions. Details on Mehrotra's method can be found in [190].

Predictor Step: The predictor step is obtained as follows:

L_ξξ [Δz_p^T, Δρ_p^T, Δπ_p^T, Δs_p^T]^T = −[L_z^T, L_ρ^T, L_π^T, L_s^T]^T                (6.52)

 

 

 

 

 

 

 

 

 

The prediction provided by (6.52) is also called the affine-scaling direction. Using this direction, it is possible to estimate a new barrier parameter value μ̂_p^(i). How to update the barrier parameter is explained in the following subsection.

Corrector Step: The corrector step is obtained as follows:

 

 

L_ξξ [Δz^T, Δρ^T, Δπ^T, Δs^T]^T = −[L_z^T, L_ρ^T, L_π^T, (L_s + ΔS_p Δπ_p − μ̂_p^(i) e)^T]^T      (6.53)

with ΔS_p = diag(Δs_p).

 

 

 

 

 

 

 

 

where the term μ̂_p^(i) e is called the centering direction and helps keep the current point away from the boundary of the feasible region, while ΔS_p Δπ_p is called the corrector direction and, in some measure, compensates for the nonlinearity not taken into account in the affine-scaling direction.

Since the matrix L_ξξ in (6.52) and (6.53) is the same, only one factorization is needed and thus the corrector step does not entail a relevant extra computing time. Nevertheless, the variable directions obtained using Mehrotra's method allow reducing the number of iterations with respect to Newton's directions.
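The practical point, reusing a single factorization of L_ξξ for both the predictor and the corrector solves, can be sketched as follows. The helper names, the toy system and the simple rule used to estimate μ̂_p (a σ-scaled affine complementarity gap, cf. (6.58) below) are assumptions for illustration, not the book's implementation.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

def max_step(x, dx, gamma=0.99995):
    # largest step in (0, 1] keeping x + alpha*dx strictly positive, cf. (6.55) below
    neg = dx < 0
    return 1.0 if not neg.any() else min(1.0, gamma * float(np.min(-x[neg] / dx[neg])))

def mehrotra_directions(L_xixi, resid, s, pi, sigma=0.1):
    """resid = [L_z, L_rho, L_pi, L_s]; unknowns ordered as [dz, drho, dpi, ds]."""
    nh = len(s)
    lu_piv = lu_factor(L_xixi)                    # one factorization serves both solves
    d_p = lu_solve(lu_piv, -resid)                # predictor (affine-scaling) direction (6.52)
    dpi_p, ds_p = d_p[-2 * nh:-nh], d_p[-nh:]
    aP, aD = max_step(s, ds_p), max_step(pi, dpi_p)
    mu_p = sigma * float((s + aP * ds_p) @ (pi + aD * dpi_p)) / nh   # trial barrier estimate
    corr = np.zeros_like(resid)
    corr[-nh:] = ds_p * dpi_p - mu_p              # Delta S_p * Delta pi_p - mu_p * e
    return lu_solve(lu_piv, -(resid + corr))      # corrector direction (6.53), matrix reused

# Toy system: phi = z^2, h(z) = 1 - z <= 0, no equality constraints (rho omitted)
z, pi, s, mu = 2.0, np.array([1.0]), np.array([1.0]), 0.1
L_xixi = np.array([[2.0, -1.0, 0.0],              # rows: L_z, L_pi, L_s, as in (6.48)
                   [-1.0, 0.0, 1.0],
                   [0.0, s[0], pi[0]]])
resid = np.array([2.0 * z - pi[0], s[0] + (1.0 - z), s[0] * pi[0] - mu])
print(mehrotra_directions(L_xixi, resid, s, pi))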

Updating Variables

The new primal and dual variables are computed based on the previously computed directions:

z^(i+1) = z^(i) + α_P^(i) Δz                                                             (6.54)
s^(i+1) = s^(i) + α_P^(i) Δs
ρ^(i+1) = ρ^(i) + α_D^(i) Δρ
π^(i+1) = π^(i) + α_D^(i) Δπ

 

 

where α_P^(i) ∈ (0, 1] and α_D^(i) ∈ (0, 1] are the step length parameters for the primal and dual variables, respectively. The maximum values of the step lengths can be estimated using the following heuristic rules:

α_P^(i) = min{ 1, γ min_k{ −s_k^(i)/Δs_k  if Δs_k < 0 } }                                (6.55)
α_D^(i) = min{ 1, γ min_k{ −π_k^(i)/Δπ_k  if Δπ_k < 0 } }

 

 

 

where γ ∈ (0, 1) is a safety factor that ensures that the next point satisfies the strict positivity condition. A typical value for the safety factor is γ = 0.99995. In NLP problems, such as the optimal power flow, primal and dual variables are interdependent due to the dual feasibility conditions. In this case, using the same step length for both primal and dual variables can help obtain convergence:

α_P^(i) = α_D^(i) = min{ α_P^(i), α_D^(i) }                                              (6.56)

However, separate step lengths have proved to work well in [111].
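A direct transcription of the step length rules is sketched below; the variable names and test values are illustrative.

import numpy as np

def step_length(x, dx, gamma=0.99995):
    """Largest alpha in (0, 1] such that x + alpha*dx remains strictly positive."""
    neg = dx < 0
    return 1.0 if not neg.any() else min(1.0, gamma * float(np.min(-x[neg] / dx[neg])))

s, ds = np.array([0.2, 1.0, 0.05]), np.array([-0.5, 0.3, -0.01])
pi, dpi = np.array([0.8, 0.1]), np.array([-1.6, 0.4])
alpha_P = step_length(s, ds)            # primal step length, cf. (6.55)
alpha_D = step_length(pi, dpi)          # dual step length
alpha = min(alpha_P, alpha_D)           # common step length, as in (6.56)
print(alpha_P, alpha_D, alpha)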

Reducing the Barrier Parameter

The barrier parameter μ̂^(i) has to be updated (and hopefully reduced) at each iteration. The new value of the barrier parameter is computed based on the complementarity gap ĉ^(i), that is, the residual of the complementarity conditions:

ĉ^(i) = [s^(i)]^T π^(i)                                                                  (6.57)

The complementarity gap ĉ^(i) → 0 as the primal variables approach a local minimizer z → z*. Then, the new barrier parameter is computed as:

μ̂^(i+1) = σ^(i+1) ĉ^(i) / n_h                                                            (6.58)
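A minimal sketch of (6.57)-(6.58), assuming for illustration a fixed centering parameter σ:

import numpy as np

def new_barrier(s, pi, sigma=0.1):
    gap = float(s @ pi)               # complementarity gap (6.57)
    return sigma * gap / len(s)       # barrier parameter update (6.58)

print(new_barrier(np.array([0.01, 0.02]), np.array([1.5, 0.5])))   # 0.1 * 0.025 / 2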