
We introduce a custom function that implements the sweep method:
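The custom function itself is a MathCad worksheet routine. Purely for illustration, an equivalent sweep (Thomas) routine in Python might look as follows; the names sweep, a, b, c, d (subdiagonal, main diagonal, superdiagonal and right-hand side) are chosen for this sketch and are not taken from the manual:

```python
import numpy as np

def sweep(a, b, c, d):
    """Solve a tridiagonal system a_i*x_{i-1} + b_i*x_i + c_i*x_{i+1} = d_i.

    a -- subdiagonal coefficients (a[0] is not used),
    b -- main-diagonal coefficients,
    c -- superdiagonal coefficients (c[-1] is not used),
    d -- right-hand side.
    """
    n = len(b)
    alpha = np.zeros(n)      # forward-sweep coefficients
    beta = np.zeros(n)
    alpha[0] = -c[0] / b[0]
    beta[0] = d[0] / b[0]
    for i in range(1, n):                      # forward sweep
        denom = b[i] + a[i] * alpha[i - 1]
        alpha[i] = -c[i] / denom if i < n - 1 else 0.0
        beta[i] = (d[i] - a[i] * beta[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = beta[-1]
    for i in range(n - 2, -1, -1):             # backward sweep
        x[i] = alpha[i] * x[i + 1] + beta[i]
    return x

# quick check on a small diagonally dominant system; the exact solution is [1, 1, 1, 1]
print(sweep(np.array([0.0, 1.0, 1.0, 1.0]),
            np.array([4.0, 4.0, 4.0, 4.0]),
            np.array([1.0, 1.0, 1.0, 0.0]),
            np.array([5.0, 6.0, 6.0, 5.0])))
```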

Problem 1. Solve the system with a sweep method.

Variant | Elements on the main diagonal (b_i), under the diagonal (a_i) and over the diagonal (c_i) | Right-hand side of the system

[Table: for each variant N, the values of b_i, a_i, c_i and of the right-hand side of the system are specified.]

To implement the calculations according to the described algorithm, approximately 8n arithmetic operations are required, whereas in the Gauss method this number is approximately (2/3)n³.


§ 8 Jacobi iteration method

If a system of linear algebraic equations has a high order and its matrix is not tridiagonal, the use of direct solution methods is not always justified. In such cases iterative algorithms are often used, which make it possible to obtain a solution with the required accuracy. Consider a system of linear algebraic equations:

AX = C,

where A is a nonsingular square matrix whose diagonal elements are nonzero.

Such a system can be converted to the form:

X = BX + c,

where B is a square matrix of the same dimension as A; c is a column vector.

This transformation is carried out as follows: the first variable is expressed from the first equation of the system, the second variable from the second equation, and so on. As the initial approximation, the zero vector or the right-hand-side vector can be taken. Then the calculation formula of the Jacobi method is as follows:

X^(k+1) = BX^(k) + c.
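As an illustration of this iteration formula, a minimal sketch in Python could look as follows (the lab itself works in MathCad); the function name jacobi, the tolerance eps and the iteration limit max_iter are illustrative choices, not part of the manual:

```python
import numpy as np

def jacobi(A, rhs, x0=None, eps=1e-6, max_iter=500):
    """Jacobi iteration X^(k+1) = B X^(k) + c built from the system A X = rhs."""
    A = np.asarray(A, dtype=float)
    rhs = np.asarray(rhs, dtype=float)
    d = np.diag(A)                       # diagonal elements must be nonzero
    B = -A / d[:, None]                  # divide every row by its diagonal element
    np.fill_diagonal(B, 0.0)
    c = rhs / d                          # transformed right-hand side
    x = np.zeros_like(c) if x0 is None else np.asarray(x0, dtype=float)  # zero vector by default
    for _ in range(max_iter):
        x_new = B @ x + c
        if np.linalg.norm(x_new - x, np.inf) < eps:   # simple stopping criterion
            return x_new
        x = x_new
    raise RuntimeError("no convergence: the norm of B is probably not less than 1")
```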

Example. [MathCad worksheet; in one of the computed cases there is no convergence.]

Problem 2. Solve the system by the Jacobi method.

Variant | System matrix A | Vector b

[Table: for each of variants 1 to 10, the system matrix A and the right-hand-side vector b are specified.]

§ 9 Conditions for the applicability of the sweep method and the Jacobi method

Let us give sufficient conditions on the coefficients of the system under which the calculations by the forward sweep formulas can be carried through to the end (none of the denominators in the sweep coefficients vanishes).

In particular, this guarantees the existence of a solution to the system and its uniqueness.

Theorem. Let the coefficients of the system satisfy the conditions of diagonal dominance:

|b_k| ≥ |a_k| + |c_k|,   |b_k| > |a_k|,   1 ≤ k ≤ m,

where a_k, b_k, c_k are the superdiagonal, diagonal and subdiagonal elements of the system matrix.

Then the backward sweep is stable with respect to the input data.

Now consider the convergence conditions for the Jacobi method.

Theorem. Let the condition ‖B‖ < 1 be fulfilled. Then:

1) the solution X of the system exists and is unique;

2) the Jacobi method converges, and for an arbitrary initial approximation X^(0) the error estimate

‖X^(n) − X‖ ≤ ‖B‖^n · ‖X^(0) − X‖

is valid.


Note that the application of the method is justified when ‖B‖ ≤ 1/2. However, in real cases ‖B‖ turns out to be close to unity, and therefore the factor that multiplies ‖X^(n) − X^(n−1)‖ in the a posteriori error estimate satisfies

‖B‖ / (1 − ‖B‖) ≫ 1.

Then the value ‖X^(n) − X^(n−1)‖ turns out to be small not because the approximations are close to the solution, but because the method converges slowly.

Problem 3. Find the norms of the vector x (obtained in Problem 2). Find the norms of the matrix A (the matrix of the system from Problem 2). Consider the condition number of the matrix A (from Problem 2).
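For reference, the required norms and the condition number can be computed in a few lines with NumPy; the matrix A and the vector x below are small stand-in values, not the data of a particular variant:

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # stand-in for the system matrix from Problem 2
x = np.array([1.0, -2.0])                # stand-in for the solution vector from Problem 2

# vector norms: the 1-norm, the Euclidean norm and the maximum norm
print(np.linalg.norm(x, 1), np.linalg.norm(x, 2), np.linalg.norm(x, np.inf))

# induced matrix norms and the condition number cond(A) = ||A|| * ||A^(-1)||
print(np.linalg.norm(A, 1), np.linalg.norm(A, np.inf))
print(np.linalg.cond(A, np.inf))
```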


Laboratory session № 3

APPROXIMATION OF FUNCTIONS

§ 10 Basic concepts and definitions

The functions used in mathematical models can be defined either analytically or in tabular form, in which case the function is known only for certain discrete values of the argument. In practice, the values of the function may be needed at points other than those specified in the table.

The replacement of a function f(x) by a simpler function φ(x) is called approximation. The approximating function φ(x) is constructed in such a way that the deviations of φ(x) from f(x) in the given region are the smallest.

The most commonly used is the so-called mean-square approximation, for which the quantity

M = ∫_a^b (f(x) − φ(x))² dx

takes the smallest value.

An approximation in which the approximating function is constructed on a given discrete set of points {x_i} is called a point approximation.

To obtain the point mean-square approximation of a function y = f(x) given by a table, the approximating function φ(x) is obtained from the condition that the value

S = ∑_{i=0}^{n} (y_i − φ(x_i))²

is a minimum, where y_i are the values of the function f(x) at the points x_i.
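As a small illustration of this criterion (with made-up table values, and taking φ(x) to be a first-degree polynomial), the coefficients minimizing S can be found with NumPy's least-squares polynomial fit:

```python
import numpy as np

# tabulated data (illustrative values only)
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.array([1.0, 1.6, 2.1, 2.9, 3.8])

coeffs = np.polyfit(x, y, deg=1)       # coefficients of the best first-degree phi(x)
phi = np.poly1d(coeffs)
S = np.sum((y - phi(x)) ** 2)          # the minimized sum of squared deviations
print(coeffs, S)
```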

Another type of point approximation is interpolation, in which the approximating function takes at the given points x_i the same values y_i as the function f(x), i.e.

φ(x_i) = y_i,   i = 0, …, n.


The problem of interpolation is to find approximate values of the tabulated function at arguments x that do not coincide with the nodes by calculating the values of φ(x). If x ∈ [x_0, x_n], then finding the approximate value of the function f(x) is called interpolation; if x ∉ [x_0, x_n], then the process is called extrapolation.

It is known that through n + 1 points on the plane one can draw a curve that is the graph of a polynomial of degree n, and such a polynomial is unique.

For example, through two points on a plane one can draw exactly one straight line (a first-degree polynomial), through three points a parabola (a second-degree polynomial), and so on.

If within the entire interpolation interval [x_0, x_n] containing n + 1 nodes a single polynomial of degree n is constructed, then we speak of global interpolation.

Lagrange proposed to build an interpolation polynomial as follows:

L_n(x) = ∑_{i=0}^{n} y_i ∏_{j≠i} (x − x_j) / (x_i − x_j).
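A direct Python transcription of this formula (illustrative, with a function name lagrange not taken from the manual) can serve as a check of hand calculations:

```python
import numpy as np

def lagrange(x, xi, yi):
    """Value of the Lagrange interpolation polynomial L_n at the point x."""
    xi = np.asarray(xi, dtype=float)
    yi = np.asarray(yi, dtype=float)
    total = 0.0
    for i in range(len(xi)):
        mask = np.arange(len(xi)) != i
        # basis polynomial: product over j != i of (x - x_j) / (x_i - x_j)
        basis = np.prod((x - xi[mask]) / (xi[i] - xi[mask]))
        total += yi[i] * basis
    return total

# the polynomial passes exactly through the nodes: lagrange(xi[k], xi, yi) == yi[k]
print(lagrange(1.5, [0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 2.0, 5.0]))
```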

In order to avoid a polynomial of high degree, the interpolation segment is divided into several parts, and an independent local polynomial of low degree is built on each partial interval. Piecewise linear interpolation involves the construction of a straight-line segment on each interval of the approximation; in the engineering calculation package MathCad, the built-in lspline function is used for its implementation. Piecewise quadratic interpolation involves the construction, on an interval containing three points, of an approximation in the form of a parabola. A significant disadvantage of piecewise interpolation is that at the junction points of different interpolation polynomials the first derivative turns out to be discontinuous.

This disadvantage is eliminated by using a special type of local interpolation – interpolation by splines.

A spline is a function that is represented by a polynomial of some degree on each partial interval and is continuous on the whole segment together with several of its derivatives.


On the interval [x_{i−1}, x_i], the cubic spline can be represented as:

s_i(x) = a_i + b_i(x − x_{i−1}) + c_i(x − x_{i−1})² + d_i(x − x_{i−1})³.

Conditions for matching splines at node points:

1) the equality of the values of the splines and the approximated function at the nodes:

s_i(x_{i−1}) = y_{i−1},   s_i(x_i) = y_i;

2) continuity of the first and second derivatives of the splines at the nodes:

s_i′(x_i) = s_{i+1}′(x_i),   s_i″(x_i) = s_{i+1}″(x_i).

An additional condition is zero curvature of the spline at the boundary points. In the engineering calculation package MathCad, the built-in cspline function is used for the implementation, while the lspline function finds the coefficients from the condition of free ends of the spline.
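In Python the same kind of spline can be sketched with SciPy's CubicSpline; bc_type='natural' imposes the zero-curvature condition at the boundary points described above (the node values below are illustrative):

```python
import numpy as np
from scipy.interpolate import CubicSpline

xi = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # interpolation nodes (illustrative)
yi = np.sin(xi)

# 'natural' boundary conditions: zero second derivative at both ends
s = CubicSpline(xi, yi, bc_type='natural')

print(s(2.5))                  # spline value between the nodes
print(s(0.0, 2), s(4.0, 2))    # second derivatives at the ends are zero
```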

Example. Stages of construction of piecewise linear interpolation, spline interpolation and global interpolation.


1) Construction of piecewise linear interpolation:
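The corresponding MathCad worksheet is not reproduced here; in Python, piecewise linear interpolation of a table reduces to a single call of np.interp (node values illustrative):

```python
import numpy as np

xi = np.array([0.0, 1.0, 2.0, 3.0])
yi = np.array([1.0, 3.0, 2.0, 5.0])

xq = np.array([0.5, 1.5, 2.5])        # query points between the nodes
print(np.interp(xq, xi, yi))          # -> [2.0, 2.5, 3.5]
```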

2) Construction of the Lagrange polynomial. For this, it is necessary to write the corresponding polynomial as the function PL(x).
