
Mathematical Appendix

We refer the reader to Chiang (1984, pp. 385-386) for the multiple-constraint case.

We can also demonstrate the interpretation of the Lagrange multiplier. Using a superscript "0" to denote values in the optimum, we can write the Lagrangian as:

L^0 = f(x1^0, x2^0) + λ^0 [c - g(x1^0, x2^0)].    (A.56)

We now ask what happens to the optimum if the constraint constant c is changed marginally. Differentiating (A.56) with respect to c yields:

dL^0/dc = λ^0.    (A.57)

Recall that the constraint holds with equality in the optimum, so λ^0 measures the effect on the optimized objective function of a small change in the constraint constant c. For example, if the objective function is utility and c is income, then λ^0 measures the marginal utility of income.

A.4.3 Non-linear programming

We now turn to non-linear programming. We first look at the case with only non-negativity constraints on the choice variables. Then we study general inequality constraints. We focus on first-order conditions throughout; second-order conditions and constraint qualifications are discussed in the references below.

Non-negativity constraints

Consider the problem of choosing x to maximize f(x) subject only to the non-negativity constraint x ≥ 0. Three cases can arise. These have been illustrated in Figure A.1 (see Chiang, 1984, p. 723).

Figure A.1. Non-negativity constraints

In panel (a) the function attains a maximum for a strictly positive value of x (an interior solution because the maximum does not lie on a boundary). The first-order condition is as before:

f'(x0) = 0 and x0 > 0.    (interior solution)

In panel (b) the function attains a maximum on the boundary of the

 

feasible region. In panel (b) we thus have:

 

f'(x0) = 0 and x0 = 0.

(boundary solution)

 

Finally, in panel (c) we also have a boundary solution but one for which the function f(x) continues to rise for negative (infeasible) values of x. Hence, at that point we

have:

f'(x0) < 0 and x0 = 0.

(boundary solution)

 

These three conditions, covering the interior solution and both types of boundary solutions, can be combined in a single statement:

f'(x0) ≤ 0, x0 ≥ 0, x0 f'(x0) = 0.

(A.58)

There are two key things to note about this statement. First, as is evident from Figure A.1, we can safely exclude the case of f'(x0) > 0 from consideration. If f'(x0) > 0


even for x0 = 0 then this can never be a maximum as raising x by a little would also raise the objective function (see point D in panel (a)). The second key result concerns the third condition in (A.58), saying that at least one of x0 or f'(x0) must be zero.

When there are n choice variables the problem becomes one of choosing xi (i = 1, 2, ..., n) in order to maximize f(x1, x2, ..., xn) subject to the non-negativity constraints xi ≥ 0 (i = 1, 2, ..., n). The first-order conditions associated with this problem are straightforward generalizations of (A.58):

fi ≤ 0, xi ≥ 0, xi fi = 0,   i = 1, 2, ..., n,    (A.59)

where fi ≡ ∂f/∂xi.
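The conditions in (A.58) are mechanical to verify. A minimal sketch (the quadratic objectives below are our own illustrative choices, not examples from the text):

```python
def kt_check(fprime, x0, tol=1e-9):
    """Check f'(x0) <= 0, x0 >= 0 and x0 * f'(x0) = 0, as in (A.58)."""
    fp = fprime(x0)
    return fp <= tol and x0 >= -tol and abs(x0 * fp) <= tol

# Panel (a): f(x) = -(x - 2)^2 peaks at x0 = 2 > 0 with f'(x0) = 0.
assert kt_check(lambda x: -2.0 * (x - 2.0), 2.0)

# Panel (c): f(x) = -(x + 1)^2 keeps rising for infeasible x < 0,
# so the constrained maximum is x0 = 0 with f'(0) = -2 < 0.
assert kt_check(lambda x: -2.0 * (x + 1.0), 0.0)

# Point D: f'(0) > 0 at x0 = 0 can never be a maximum.
assert not kt_check(lambda x: 2.0 - 2.0 * x, 0.0)
```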

General inequality constraints

Suppose that the objective function is given by (A.43) and the set of non-linear constraints is given by:

gl (xi, x2, • , xn) < Cl,

g2 (xi, x2, • • • xn) < C2,

(A.60)

gm (xi , x2 , ..., xn ) < Cm,

where ci are constants and the gi 0 functions are continuous and possess continuous derivatives (j = 1, 2, , m). The Lagrangian associated with the problem is:

L = f(x1, x2, ..., xn) + Σ_{j=1}^{m} λj [cj - gj(x1, x2, ..., xn)],    (A.61)

where λj is the Lagrange multiplier associated with the inequality constraint cj ≥ gj(·). The first-order conditions for a constrained maximum are:

Li ≤ 0, xi ≥ 0, xi Li = 0,   i = 1, 2, ..., n,
Lλj ≥ 0, λj ≥ 0, λj Lλj = 0,   j = 1, 2, ..., m,    (A.62)

where Li ≡ ∂L/∂xi and Lλj ≡ ∂L/∂λj.

For a minimization problem, the Lagrangian is the same as before but the first-order conditions are:

Li ≥ 0, xi ≥ 0, xi Li = 0,   i = 1, 2, ..., n,
Lλj ≤ 0, λj ≥ 0, λj Lλj = 0,   j = 1, 2, ..., m.    (A.63)

 

We refer the reader to Chiang (1984, pp. 731-755) for a detailed discussion of second-order conditions and the restrictions that the constraint functions must satisfy (the so-called constraint qualification proviso).
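As an illustration of (A.62), consider the hypothetical problem of maximizing f(x1, x2) = x1·x2 subject to x1 + x2 ≤ 2 and non-negativity (our own example, not from the text). The candidate optimum x1 = x2 = 1 with λ = 1 satisfies all three sets of conditions:

```python
# Candidate optimum of: max x1*x2 subject to x1 + x2 <= 2, x1 >= 0, x2 >= 0
x1, x2, lam = 1.0, 1.0, 1.0

L1 = x2 - lam          # dL/dx1
L2 = x1 - lam          # dL/dx2
Llam = 2.0 - x1 - x2   # dL/dlambda = c - g(x1, x2)

tol = 1e-12
# Conditions (A.62): complementary slackness in x and in lambda
assert L1 <= tol and x1 >= 0.0 and abs(x1 * L1) <= tol
assert L2 <= tol and x2 >= 0.0 and abs(x2 * L2) <= tol
assert Llam >= -tol and lam >= 0.0 and abs(lam * Llam) <= tol
```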


A.4.4 Literature

Basic: Klein (1998, chs 9-11), Chiang (1984, chs 9-12, 21), and Sydsæter and Hammond (1995, chs 17-18). Intermediate: Dixit (1990, chs 2-8) and Intriligator (1971, chs 2-4). Advanced: de la Fuente (2000, chs 7-8).

A.5 Single Differential Equations

In this section we show how to solve the most commonly encountered differential equations. We follow standard procedure in the economics literature by using the Newtonian 'dot' notation to indicate derivatives with respect to time, i.e. ẏ(t) ≡ dy(t)/dt and ÿ(t) ≡ d²y(t)/dt², etc.

A.5.1 First-order (constant coefficients)

Homogeneous

Suppose we have the following differential equation in y(t):

ẏ(t) + ay(t) = 0,

(A.64)

where a is a constant. This is called a homogeneous differential equation because the constant on the right-hand side is zero. To solve this equation, we must find a path for y(t) such that the exponential rate of growth in y(t) is constant, i.e. ẏ(t)/y(t) = -a. Since growth must be exponential it is logical to try a solution of the exponential type:

y(t) = Ae^{αt},

(A.65)

where A ≠ 0 and α are constants to be determined. Clearly the trial solution must solve (A.64). This implies that:

αAe^{αt} + aAe^{αt} = 0  ⇒  (α + a) Ae^{αt} = 0  ⇒  α = -a,    (A.66)

where the result follows from the fact that Ae^{αt} ≠ 0. Suppose we are also given an initial value for y(t), say y(0) = y0 (a constant). Then it follows from our trial solution, y(t) = Ae^{-at}, that y(0) = A = y0 (since e^{-at} = 1 for t = 0) so that the full

solution of the homogeneous differential equation is:

y(t) = y0 e^{-at}.

(A.67)

 


 

Non-homogeneous

 

Now suppose that the differential equation is non-homogeneous:

 

ẏ(t) + ay(t) = b,

(A.68)

where b ≠ 0. We look for the solution in two steps. First we find the complementary function, yc(t), which is the path for y(t) which solves the homogeneous part of the differential equation. Next, we find the so-called particular solution, yp(t), to the general equation. By adding the complementary function and the particular solution we obtain the general solution. In case we want to impose the initial condition this can be done after the general solution is found.

Since the complementary function solves the homogeneous part of the differential equation it makes sense to try yc(t) = Ae^{-at}. The particular integral is found by trial and error starting with the simplest possible case. Try yp(t) = k (a constant) and substitute it in the differential equation:

ẏp(t) + a yp(t) = b  ⇒  0 + ak = b  ⇒  k = b/a  (for a ≠ 0).    (A.69)

 

 

Hence, provided a ≠ 0, our simplest trial solution works and the general solution is given by:

y(t) [= yc(t) + yp(t)] = Ae^{-at} + b/a,  (for a ≠ 0).    (A.70)

 

 

If we have the initial condition y(0) = y0 (as before) then we find that A = y0 - b/a. What if a = 0? In that case the complementary function is yc(t) = Ae^{-0·t} = A, a constant, so it makes no sense to assume that the particular solution is also a constant. Instead we guess that yp(t) = kt (a time trend). Substituting it in the differential equation (A.68) (with a = 0 imposed) we obtain:

ẏp(t) + a yp(t) = b  ⇒  k = b  (for a = 0).    (A.71)

Hence, the trial works and the general solution is:

y(t) = A + bt, (for a = 0).

(A.72)

(Imposing the initial condition y(0) = y0 we obtain A = y0.) The thing to note about the general solution is that we could have obtained it by straightforward integration. Indeed, by rewriting (A.68) and setting a = 0 we get dy(t) = b dt, which can be integrated:

∫ dy(t) = ∫ b dt  ⇒  y(t) = A + bt,    (A.73)

where A is the constant of integration. Of course, equations (A.72) and (A.73) are the same but in the derivation of the latter no inspired guessing is needed.
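The general solution (A.70) can be checked numerically. The sketch below (the parameter values are our own choice) compares the closed form with a crude forward-Euler integration of ẏ(t) = b - ay(t):

```python
import math

a, b, y0 = 0.5, 2.0, 1.0

def y_exact(t):
    # (A.70) with A = y0 - b/a pinned down by the initial condition y(0) = y0
    return (y0 - b / a) * math.exp(-a * t) + b / a

# Independent check: forward-Euler integration of ydot(t) = b - a*y(t)
dt, y = 1e-4, y0
for _ in range(int(2.0 / dt)):
    y += dt * (b - a * y)

assert abs(y - y_exact(2.0)) < 1e-3
```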



A.5.2 First-order (variable coefficients)

Assume that the differential equation has the following form:

ẏ(t) + a(t)y(t) = b(t),

(A.74)

where a and b are now both functions of time. Though the expression does not have constant coefficients it is nevertheless linear in the unknown function y(t) and its time derivative ẏ(t). This linearity property makes the solution relatively straightforward. We first solve the homogeneous equation for which b(t) ≡ 0. Assuming that a(t) is continuous we can rewrite equation (A.74) as:

ẏ(t)/y(t) = -a(t),    (A.75)

 

 

from which we conclude that:

 

log |y(t)| = A - ∫ a(t) dt,

(A.76)

where we have used the fact that ∫ dy(t)/y(t) = log |y(t)| and where A is the constant of integration. Assuming that y(t) > 0, as is often the case in economic applications, we find that the general solution for y(t) is:

y(t) = Ae^{-∫ a(t) dt}.

(A.77)

The non-homogeneous equation (A.74) can also be solved readily because it possesses an integrating factor, e^{F(t)}, where F(t) is given by:

F(t) ≡ ∫ a(t) dt.

(A.78)

First we note the following result:

 

(d/dt)[e^{F(t)} y(t)] = e^{F(t)} ẏ(t) + y(t) e^{F(t)} Ḟ(t) = e^{F(t)} [ẏ(t) + a(t)y(t)],

(A.79)

where we have used the fact that Ḟ(t) = a(t). Next, by multiplying both sides of (A.74) by the integrating factor e^{F(t)} and using (A.79) we obtain:

(d/dt)[e^{F(t)} y(t)] = b(t) e^{F(t)}.    (A.80)

Finally, by integrating both sides of (A.80) we obtain:

 

e^{F(t)} y(t) = A + ∫ b(t) e^{F(t)} dt  ⇒  y(t) = e^{-F(t)} [A + ∫ b(t) e^{F(t)} dt],

(A.81)

where A is again the constant of integration.
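A quick numerical sanity check of (A.81): taking a(t) = 2t and b(t) = t (an illustrative choice of ours, not from the text) gives F(t) = t² and ∫_0^t τ e^{τ²} dτ = (e^{t²} - 1)/2, so the closed form can be compared with a direct Euler integration:

```python
import math

y0 = 2.0

def y_exact(t):
    # (A.81) with a(t) = 2t, b(t) = t: F(t) = t^2 and the integral of
    # tau*exp(tau^2) from 0 to t equals (exp(t^2) - 1)/2
    return math.exp(-t * t) * (y0 + (math.exp(t * t) - 1.0) / 2.0)

# Independent check: forward-Euler integration of ydot = b(t) - a(t)*y
n, dt = 100000, 1e-5
y = y0
for k in range(n):
    t = k * dt
    y += dt * (t - 2.0 * t * y)

assert abs(y - y_exact(1.0)) < 1e-3
```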

 

 


A.5.3 Leibnitz's rule

In the text we occasionally make use of Leibnitz's rule for differentiation under the integral sign (Spiegel, 1974, p. 163). Suppose that the function f (x) is defined as follows:

f(x) ≡ ∫_{u1(x)}^{u2(x)} g(t, x) dt,   a ≤ x ≤ b.    (A.82)

Then, if (i) g(t, x) and ∂g/∂x are continuous in both t and x (in some region including u1 ≤ t ≤ u2 and a ≤ x ≤ b) and (ii) u1(x) and u2(x) are continuous and have continuous derivatives (for a ≤ x ≤ b), then df/dx is given by:

df/dx = ∫_{u1(x)}^{u2(x)} [∂g(t, x)/∂x] dt + g(u2, x) du2/dx - g(u1, x) du1/dx.    (A.83)

 

 

 

Often u1 and/or u2 are constants so that one or both of the last two terms on the right-hand side of (A.83) vanish. See also Sydsæter and Hammond (1995, pp. 547-549) for examples of Leibnitz's rule.
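Leibnitz's rule is easy to verify on a concrete example. Taking g(t, x) = tx, u1(x) = 0 and u2(x) = x² (our own choice), we have f(x) = x⁵/2, and (A.83) should reproduce df/dx = 5x⁴/2:

```python
# g(t, x) = t*x with u1(x) = 0 and u2(x) = x^2 gives f(x) = x^5/2,
# so (A.83) should reproduce df/dx = 5*x^4/2.
x = 1.3

integral_term = x**4 / 2.0               # integral of dg/dx = t over [0, x^2]
boundary_term = (x**2 * x) * (2.0 * x)   # g(u2, x) * du2/dx; the u1 term vanishes
leibnitz = integral_term + boundary_term

# Central finite difference of f(x) = x^5/2 as an independent check
h = 1e-6
fd = ((x + h)**5 / 2.0 - (x - h)**5 / 2.0) / (2.0 * h)

assert abs(leibnitz - fd) < 1e-5
```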

A.5.4 Literature

Basic: Klein (1998, ch. 14), Chiang (1984, chs. 13-15), and Sydsæter and Hammond (1995, ch. 21). Intermediate: Apostol (1967, ch. 8), Kreyszig (1999, chs. 1-5), Boyce and DiPrima (1992), and de la Fuente (2000, chs. 9-11).

A.6 Systems of Differential Equations

The main purpose of this section is to demonstrate how useful Laplace transform techniques can be to (macro) economists. Whilst the technique is not much more difficult than the method of comparative statics that most students are familiar with, it enables one to study thoroughly (the properties of) low-dimensional¹ dynamic models in an analytical fashion.

A.6.1 The Laplace transform

The Laplace transform is a tool used extensively in engineering contexts and a very good source is the engineering mathematics textbook by Kreyszig (1999). The

1 By "low-dimensional" we mean that the characteristic polynomial of the Jacobian matrix of the system must be of order four or less. For such polynomials closed-form solutions for the roots are available. For higher-order polynomials Abel's Theorem proves that finite algebraic formulae do not exist for the roots. See the amusing historical overview of this issue in Turnbull (1988, pp. 114-115).


Laplace transform is extremely useful for solving (systems of) differential equations. Intuitively, the method works in three steps: (i) the difficult problem is transformed into a simple problem, (ii) we use (matrix) algebra to solve the simple problem, and (iii) we transform back the solution obtained in step (ii) to obtain the ultimate solution of our hard problem. Instead of having to work with difficult operations in calculus (in step (i)) we work with algebraic operations on transforms. This is why the Laplace transform technique is called operational calculus.

The major advantage of the Laplace transform technique lies in the ease with which time-varying shocks can be studied. In economic terms this makes it very easy to identify the propagation mechanism that is contained in the economic model. As we demonstrate in Chapter 15 this is important, for example, in models in the real business cycle (RBC) tradition.

Suppose that f (t) is a function defined for t > 0. Then we can define the Laplace transform of that function as follows: 2

L{f, s} ≡ ∫_0^∞ e^{-st} f(t) dt.    (A.84)

In economic terms L{f, s} is the discounted present value of the function f(t), from the present to the indefinite future, using s as the discount rate. Clearly, provided the integral on the right-hand side of (A.84) exists, L{f, s} is well defined and can be seen as a function of s.

Here are some simple examples. Suppose that f(t) = 1 for t ≥ 0. What is L{f, s}? We use the definition in (A.84) to get:

 

L{f, s} = L{1, s} = ∫_0^∞ 1 × e^{-st} dt = [-(1/s) e^{-st}]_0^∞ = 1/s,

for s > 0. We have found our first Laplace transform, i.e. L{1, s} = 1/s. Despite the ease with which it was derived, the transform of unity, L{1, s}, is an extremely useful one to remember. Let us now try to find a more challenging one. Suppose that f(t) = e^{at} for t ≥ 0. What is L{f, s}? We once again use the definition in (A.84) and get:

L{f, s} = L{e^{at}, s} = ∫_0^∞ e^{at} e^{-st} dt = ∫_0^∞ e^{-(s-a)t} dt = [-(1/(s-a)) e^{-(s-a)t}]_0^∞ = 1/(s - a),

 

 

provided s > a (otherwise the integral does not exist and the Laplace transform is not defined).

2 Some authors prefer to use the notation F(s) for the Laplace transform of f(t). Yet others use notation similar to ours but suppress the s argument and write L{f} for the Laplace transform of f(t). We adopt our elaborate notation since we shall need to evaluate the transforms for particular values of s below.
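Both transforms can be confirmed by brute-force numerical integration of (A.84), truncating the integral at a large T (the crude midpoint quadrature below is our own sketch, not from the text):

```python
import math

def laplace(f, s, T=200.0, n=200000):
    """Crude midpoint-rule version of (A.84), truncated at t = T."""
    dt = T / n
    return dt * sum(math.exp(-s * (k + 0.5) * dt) * f((k + 0.5) * dt)
                    for k in range(n))

s, a = 2.0, 0.5
assert abs(laplace(lambda t: 1.0, s) - 1.0 / s) < 1e-3            # L{1, s} = 1/s
assert abs(laplace(lambda t: math.exp(a * t), s) - 1.0 / (s - a)) < 1e-3
```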


Table A.1. Commonly used Laplace transforms

f(t)                                      L{f, s}              valid for:
1                                         1/s                  s > 0
t^{n-1}/(n-1)!                            1/s^n                n = 1, 2, ...; s > 0
e^{at}                                    1/(s-a)              s > a
t e^{at}                                  1/(s-a)^2            s > a
t^{n-1} e^{at}/(n-1)!                     1/(s-a)^n            n = 1, 2, ...; s > a
(e^{at} - e^{bt})/(a-b)                   1/((s-a)(s-b))       s > a, s > b, a ≠ b
(a e^{at} - b e^{bt})/(a-b)               s/((s-a)(s-b))       s > a, s > b, a ≠ b
u(t-a) = {0 for 0 ≤ t < a; 1 for t ≥ a}   e^{-as}/s            s > 0

 

 

So now we have found our second Laplace transform and in fact we already possess the two transforms used most often in economic contexts. Of course there are very many functions for which the technical work has been done already by others and the Laplace transforms are known. In Table A.1 we show a list of commonly used transforms. Such a table is certainly quite valuable but even more useful are the general properties of Laplace transforms which allow us to work with them in an algebraic fashion. Let us look at some of the main properties.

Property 1 Linearity. The Laplace transform is a linear operator. Hence, if the Laplace transforms of f (t) and g(t) both exist, then we have for any constants a and b that

L{af + bg, s} = aL{f , s} + bL{g, s}.

(P1)

The proof is too obvious to worry about.

The usefulness of (P1) is easily demonstrated: it allows us to deduce more complex transforms from simple transforms. Suppose that we are given a Laplace transform and want to figure out the function in the time domain which is associated with it. Assume that L{f, s} = 1/((s - a)(s - b)), a ≠ b. What is f(t)? We use the method of partial fractions to split up the Laplace transform:

1/((s - a)(s - b)) = (1/(a - b)) [1/(s - a) - 1/(s - b)].    (A.85)

 


Now we apply (P1) to equation (A.85), which is in a format we know, and derive:

L{f, s} = (1/(a - b)) [1/(s - a) - 1/(s - b)] = (1/(a - b)) [L{e^{at}, s} - L{e^{bt}, s}],    (A.86)

 

 

where we have used Table A.1 to get to the final expression. But (A.86) can now be inverted to get our answer:

f(t) = (e^{at} - e^{bt})/(a - b).    (A.87)

This entry is also found in Table A.1.
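The inversion can be cross-checked numerically: transforming the candidate f(t) in (A.87) forward through (A.84) should recover 1/((s - a)(s - b)). A sketch with illustrative values a = 0.5, b = -1 (our own choices):

```python
import math

a, b, s = 0.5, -1.0, 2.0

def f(t):
    # Candidate inverse transform from (A.87)
    return (math.exp(a * t) - math.exp(b * t)) / (a - b)

def laplace(g, s, T=200.0, n=200000):
    # Crude midpoint-rule version of (A.84), truncated at t = T
    dt = T / n
    return dt * sum(math.exp(-s * (k + 0.5) * dt) * g((k + 0.5) * dt)
                    for k in range(n))

# Transforming f(t) forward should recover the transform we started from
assert abs(laplace(f, s) - 1.0 / ((s - a) * (s - b))) < 1e-3
```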

But we have now performed an operation (inverting a Laplace transform) for which we have not yet established the formal validity. Clearly, going from (A.87) to (A.86) is valid but is it also allowed to go from (A.86) to (A.87), i.e. is the Laplace transform unique? The answer is "no" in general but "yes" for all cases of interest. Kreyszig (1999, p. 256) states the following sufficient condition for existence.

Property 2 Existence. Let f(t) be a function that is piecewise continuous on every finite interval in the range t ≥ 0 and satisfies:

|f(t)| ≤ M e^{γt},

for all t ≥ 0 and for some constants γ and M. Then the Laplace transform exists for all s > γ.

With "piecewise continuous" we mean that, on a finite interval a ≤ t ≤ b, f(t) is defined on that interval and is such that the interval can be subdivided into finitely many sub-intervals in each of which f(t) is continuous and has finite limits (Kreyszig, 1999, p. 255). Figure A.2 gives an example of a piecewise continuous function. The requirement mentioned in the property statement is that f(t) is of exponential order γ as t → ∞. Functions of exponential order cannot grow in absolute value more rapidly than M e^{γt} as t gets large. But since M and γ can be as large as desired the requirement is not much of a restriction (Spiegel, 1965, p. 2).

Armed with these results we derive the next properties. The first one says that discounting very heavily will wipe out the integral (and thus the Laplace transform) of any function of exponential order. The second one settles the uniqueness issue.

Property 3 If L{f, s} is the Laplace transform of f(t), then:

lim_{s→∞} L{f, s} = 0.    (P3)

 

Property 4 Unique inversion [Lerch's theorem]. If we restrict ourselves to functions f(t) which are piecewise continuous in every finite interval 0 ≤ t ≤ N and of exponential order for t > N, then the inverse Laplace transform of L{f, s}, denoted by L^{-1}[L{f, s}] = f(t), is unique.

Let us now push on and study some more properties that will prove useful later on.


Figure A.2. Piecewise continuous function

Property 5 Transform of a derivative. If f(t) is continuous for 0 ≤ t ≤ N and of exponential order γ for t > N and ḟ(t) is piecewise continuous for 0 ≤ t ≤ N then:

L{ḟ, s} = sL{f, s} - f(0),    (P5)

for s > γ.

PROOF: Note that we state and prove the property for the simple case with f (t) continuous for t > 0. Then we have by definition: 3

L{ḟ, s} = ∫_0^∞ e^{-st} ḟ(t) dt = [e^{-st} f(t)]_0^∞ + s ∫_0^∞ e^{-st} f(t) dt
= lim_{t→∞} e^{-st} f(t) - f(0) + sL{f, s}.

But for s > γ the discounting by s dominates the exponential order of f(t) so that lim_{t→∞} e^{-st} f(t) = 0 and the result follows.
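Property (P5) can also be checked numerically for a concrete function, say f(t) = e^{at} with a = 0.5 (our own choice), for which ḟ(t) = a e^{at}:

```python
import math

a, s = 0.5, 2.0
f = lambda t: math.exp(a * t)          # f(0) = 1
fdot = lambda t: a * math.exp(a * t)   # time derivative of f

def laplace(g, s, T=200.0, n=200000):
    # Crude midpoint-rule version of (A.84), truncated at t = T
    dt = T / n
    return dt * sum(math.exp(-s * (k + 0.5) * dt) * g((k + 0.5) * dt)
                    for k in range(n))

lhs = laplace(fdot, s)            # L{fdot, s}
rhs = s * laplace(f, s) - f(0.0)  # s*L{f, s} - f(0), as in (P5)
assert abs(lhs - rhs) < 1e-3
```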

Of course, we can use (P5) repeatedly. For second- and third-order time derivatives of f(t) we obtain:

L{f̈, s} = sL{ḟ, s} - ḟ(0) = s [sL{f, s} - f(0)] - ḟ(0) = s²L{f, s} - sf(0) - ḟ(0),
L{d³f/dt³, s} = s³L{f, s} - s²f(0) - sḟ(0) - f̈(0).    (P6)

We can now illustrate the usefulness of the properties deduced so far and introduce the three-step procedure mentioned above by means of the following prototypical example.

3 We use integration by parts, i.e. ∫ u dv = uv - ∫ v du, and set u = e^{-st} and v = f(t).

A.6.2 Simple application

Suppose we have the following second-order differential equation:

ÿ(t) + 4ẏ(t) + 3y(t) = 0,

which must be solved subject to the initial conditions:

y(0) = 3,   ẏ(0) = 1.

Here goes the three-step procedure.

Step 1: Set up the Laplace transform of the differential equation. Transforming both sides and noting (P6) we obtain:

L{ÿ, s} + 4L{ẏ, s} + 3L{y, s} = 0  ⇒
[s²L{y, s} - sy(0) - ẏ(0)] + 4 [sL{y, s} - y(0)] + 3L{y, s} = 0  ⇒
[s² + 4s + 3] L{y, s} = sy(0) + ẏ(0) + 4y(0).

By substituting the initial conditions we obtain an equation in which L{y, s} and s are the only unknowns:

[s² + 4s + 3] L{y, s} = 3s + 13.

Step 2: Solve the transformed equation for L{y, s}. Noting that s² + 4s + 3 = (s + 1)(s + 3) on the left-hand side, we can solve for L{y, s} quite easily by means of partial fractions:

L{y, s} = (3s + 13)/((s + 1)(s + 3)) = 5/(s + 1) - 2/(s + 3).

Step 3: Invert the solution in s-space. We have now written L{y, s} in a format which appears in Table A.1, so inversion yields the solution in the time domain:

y(t) = L^{-1}[L{y, s}] = 5e^{-t} - 2e^{-3t}.

Of course we could also have solved this simple problem with conventional techniques, so for this example the Laplace transform method holds no particular advantage.4

4 It is straightforward to verify that the solution obtained in s-space is indeed the one we would have found by tackling the problem directly.
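The inverted solution can be verified by finite differences: it should reproduce the initial conditions and satisfy the differential equation along the path (the step size h and the test points below are our own choices):

```python
import math

def y(t):
    # Solution obtained by inverting the Laplace transform
    return 5.0 * math.exp(-t) - 2.0 * math.exp(-3.0 * t)

h = 1e-5

# Initial conditions: y(0) = 3 and ydot(0) = 1
ydot0 = (y(h) - y(-h)) / (2.0 * h)
assert abs(y(0.0) - 3.0) < 1e-9
assert abs(ydot0 - 1.0) < 1e-6

# The differential equation yddot + 4*ydot + 3*y = 0 holds along the path
for t in (0.5, 1.0, 2.0):
    ydot = (y(t + h) - y(t - h)) / (2.0 * h)
    yddot = (y(t + h) - 2.0 * y(t) + y(t - h)) / h**2
    assert abs(yddot + 4.0 * ydot + 3.0 * y(t)) < 1e-3
```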
