- •Lecture 1.
- •§1. Determinants and their properties
- •The minor of an element of a determinant
- •§2. Matrices and operations on them. Inverse matrix
- •Definition 4. (Definition of a matrix)
- •§3. Systems of linear equations
- •Lecture 2.
- •Vectors. Elementary operations on vectors. The scalar, vector and mixed products of vectors.
- •§1. Elementary operations on vectors
- •Vector Addition and Multiplication of a vector by a scalar
- •§2. The inner, vector and mixed products of vectors
- •Inner Product and its Properties
- •Vector Product and its Properties
- •The vector product in coordinates
- •The triple product in coordinates
- •The simplest problem of analytic geometry
- •Division of an interval in a given ratio
- •3. Second-Order Curves in the Plane. Ellipse
- •Parabola
- •Lecture 4 Function. The limit of a function. Fundamental theorems on limits. Infinitely small and infinitely large quantities
- •1. Functions
- •2. The Theory of Limits
- •Infinitesimals and bounded functions.
- •3. Fundamental Theorems on Limits
- •4. Continuity of Functions
- •Lecture 5 The derivative of a function. Geometric and mechanical meaning of the derivative. Table of derivatives. The differential of a function
- •1. The derivative of a function
- •Implicit Function Derivative
- •2. Differential of a function
- •3. Higher Derivatives
- •1. Properties of differentiable functions
- •L’Hospital’s Rule for the 0/0 Form
- •L’Hospital’s Rule for the ∞/∞ Form
- •Taylor’s formula
- •2. Monotonic conditions. Extremum of function
- •3. Convexity and concavity. Point of inflection
- •4. Asymptotes
- •5. General Scheme for the Investigation of the Graph of a Function
- •1. Concept of functions of several variables
- •2. Partial derivatives. Total differential
- •3. Differentiation of composite and implicit functions. Tangent plane and normal to a surface
- •Implicit function of two variables
- •4. Partial derivatives and higher order differentials
- •5. Extrema of functions of two variables
- •Lecture 8 Antiderivative. Indefinite integral and its properties. Table of integrals. Main methods of integration
- •1. Antiderivative and indefinite integral
- •2. Main methods of integration
- •Integration by substitution (or change of variable)
- •Integration by parts
- •3. Integration of fractional rational functions
- •I. Integrating Proper Rational Functions
- •II. Integrating Improper Rational Functions
- •4. Integration of irrational functions
- •5. Integration of trigonometric functions
- •1. Concept of definite integral
- •2. Main properties of definite integrals
- •Integration by substitution
- •Integration by Parts
- •3. Applications of definite integrals
- •The areas of plane figures
- •Lecture 11 Differential equations of the first and second order. Homogeneous and non-homogeneous linear differential equations. Linear differential equations with constant coefficients
- •1. Problems that lead to differential equations
- •2. Equations with separable variables
- •3. Bernoulli’s equation
- •4. Equations that allow reduction of order
- •5. Homogeneous equation
- •1. Numerical series
- •6. Types of series
- •2. The convergence of the sum of the series
- •Integral test
- •3. Alternating series
- •4. Power series
- •Main concepts. Definition of probability
- •2. Properties of probability
- •Lecture 15. Elements of Mathematical Statistics. Random variables, their types. Distribution laws of random variables
- •1. Random variables, their types
- •2. Distribution laws of random variables
§3. Systems of linear equations
Solving systems of linear equations by Cramer's rule
The solution to the system
$$\begin{cases}
a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = b_1,\\
a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n = b_2,\\
\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\\
a_{n1}x_1 + a_{n2}x_2 + \dots + a_{nn}x_n = b_n
\end{cases} \qquad (3)$$
is given by
$$x_i = \frac{\Delta_i}{\Delta}, \qquad i = 1, 2, \dots, n,$$
where Δ is the determinant of the coefficient matrix of the system and $\Delta_i$ is the determinant obtained from Δ by replacing its $i$-th column with the column of constants $b_1, b_2, \dots, b_n$, provided that Δ ≠ 0.
Notes:
1. Cramer's rule works on systems that have exactly one solution.
2. Cramer's rule gives a precise formula for finding the solution of an independent system.
3. Δ is the determinant made up of the original coefficients of the unknowns $x_i$, $i = 1, 2, \dots, n$; it is used in the denominator for every $x_i$, $i = 1, 2, \dots, n$.
4. $\Delta_1$ is obtained by replacing the first (or $x_1$) column of Δ by the constants $b_1, b_2, \dots, b_n$; $\Delta_2$ is found by replacing the second (or $x_2$) column of Δ by the constants, and so on; $\Delta_n$ is found by replacing the $n$th (or $x_n$) column of Δ by the constants.
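As an illustration of the formula $x_i = \Delta_i/\Delta$, here is a minimal Python sketch (not part of the lecture) that applies Cramer's rule with NumPy; the function name `cramer_solve` and the 3×3 example system are invented for demonstration.

```python
import numpy as np

def cramer_solve(A, b):
    """Solve the square system A x = b by Cramer's rule (requires det A != 0)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = A.shape[0]
    delta = np.linalg.det(A)               # principal determinant Δ
    if np.isclose(delta, 0.0):
        raise ValueError("Δ = 0: Cramer's rule is not applicable")
    x = np.empty(n)
    for i in range(n):
        A_i = A.copy()
        A_i[:, i] = b                       # replace the i-th column of Δ by the constants
        x[i] = np.linalg.det(A_i) / delta   # x_i = Δ_i / Δ
    return x

# Invented example system with the unique solution (2, 3, -1).
A = [[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]
b = [8, -11, -3]
print(cramer_solve(A, b))                   # ≈ [ 2.  3. -1.]
```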
Solving systems of linear equations by the matrix method
Consider a system (3) of n equations with n unknowns. Let us find a solution of system (3) by using matrices.
The matrix method applies only when the number of equations equals the number of unknowns. Let us write system (3) in matrix form; for this purpose we introduce the principal matrix A, the column matrix of unknowns X, and the column matrix of free terms B:
$$A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n}\\ a_{21} & a_{22} & \dots & a_{2n}\\ \dots & \dots & \dots & \dots\\ a_{n1} & a_{n2} & \dots & a_{nn} \end{pmatrix}, \qquad X = \begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{pmatrix}, \qquad B = \begin{pmatrix} b_1\\ b_2\\ \vdots\\ b_n \end{pmatrix}.$$
Then system (3) can be written in the form of the matrix equation AX = B.
Two matrices of the same size are equal if and only if each element of one matrix equals the corresponding element of the other matrix. To find the matrix X, we multiply both sides of the matrix equation by the inverse matrix $A^{-1}$ on the left:
$$A^{-1}AX = A^{-1}B.$$
Since $A^{-1}A = E$ is the identity matrix, we have
$$X = A^{-1}B.$$
Thus, to solve the given system of equations by the matrix method, it is sufficient to find the inverse matrix $A^{-1}$ and multiply it by B on the right.
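For comparison, here is a minimal sketch of the matrix method in Python (again not from the lecture): NumPy's `np.linalg.inv` computes $A^{-1}$, and $X = A^{-1}B$ is the matrix product; the example system is the same invented one as above.

```python
import numpy as np

A = np.array([[2, 1, -1],
              [-3, -1, 2],
              [-2, 1, 2]], dtype=float)
B = np.array([8, -11, -3], dtype=float)

A_inv = np.linalg.inv(A)   # inverse matrix A^{-1}; exists only when det A != 0
X = A_inv @ B              # multiply A^{-1} by B on the right: X = A^{-1} B
print(X)                   # ≈ [ 2.  3. -1.]
```

In numerical practice `np.linalg.solve(A, B)` is usually preferred to forming the inverse explicitly, but the two lines above mirror the formula derived in the text.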
Solving systems of linear equations by Gauss's method
Solving a system of n equations with n unknowns by Cramer's rule, we must compute n+1 determinants of order n. This is laborious work.
Moreover, Cramer's method cannot be used when the principal determinant equals zero or when the number of equations does not equal the number of unknowns. In such cases, Gauss's method of successive elimination of unknowns, which can also be carried out using matrices, is used.
Consider the Gauss method in the case where the number of equations coincides with that of unknowns (3).
Suppose that $a_{11} \neq 0$; let us divide the first equation by this coefficient:
$$x_1 + \frac{a_{12}}{a_{11}}x_2 + \dots + \frac{a_{1n}}{a_{11}}x_n = \frac{b_1}{a_{11}}. \qquad (*)$$
Multiplying the resulting equation by $-a_{21}$ and adding it to the second equation of system (3), we obtain
$$\Bigl(a_{22} - \frac{a_{12}}{a_{11}}a_{21}\Bigr)x_2 + \dots + \Bigl(a_{2n} - \frac{a_{1n}}{a_{11}}a_{21}\Bigr)x_n = b_2 - \frac{b_1}{a_{11}}a_{21}.$$
Similarly, multiplying equation (*) by $-a_{n1}$ and adding it to the last equation of system (3), we obtain
$$\Bigl(a_{n2} - \frac{a_{12}}{a_{11}}a_{n1}\Bigr)x_2 + \dots + \Bigl(a_{nn} - \frac{a_{1n}}{a_{11}}a_{n1}\Bigr)x_n = b_n - \frac{b_1}{a_{11}}a_{n1}.$$
In the end, we obtain a new system of equations with $n-1$ unknowns $x_2, x_3, \dots, x_n$:
$$\begin{cases}
a'_{22}x_2 + a'_{23}x_3 + \dots + a'_{2n}x_n = b'_2,\\
\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\\
a'_{n2}x_2 + a'_{n3}x_3 + \dots + a'_{nn}x_n = b'_n,
\end{cases} \qquad (4)$$
where $a'_{ij} = a_{ij} - \dfrac{a_{1j}}{a_{11}}a_{i1}$ and $b'_i = b_i - \dfrac{b_1}{a_{11}}a_{i1}$.
System (4) is obtained from system (3) by applying linear transformations of equations; hence this system is equivalent to (3), i.e., any solution of system (4) is a solution of the initial system of equations.
To get rid of $x_2$ in the third, the fourth, …, the $n$th equation, we multiply the second equation of system (4) by $1/a'_{22}$ and then, multiplying the resulting equation by the negative coefficients of $x_2$ in the remaining equations and adding, we eliminate $x_2$ from them. Performing this procedure n times, we reduce the system of equations to the triangular form
$$\begin{cases}
x_1 + c_{12}x_2 + c_{13}x_3 + \dots + c_{1n}x_n = d_1,\\
\qquad\; x_2 + c_{23}x_3 + \dots + c_{2n}x_n = d_2,\\
\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\\
\qquad\qquad\qquad\qquad\qquad\;\; x_n = d_n,
\end{cases}$$
where $c_{ij}$ and $d_i$ denote the coefficients obtained after the eliminations.
We determine $x_n$ from the last equation, substitute it into the preceding equation and obtain $x_{n-1}$, and so on; going up, we determine $x_1$ from the first equation. This is the classical Gauss method.
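Finally, here is a minimal Python sketch of the Gauss method as described above: forward elimination reduces the system to triangular form, and back substitution then recovers $x_n, x_{n-1}, \dots, x_1$. It assumes the leading coefficients never vanish (no row pivoting), and the function name `gauss_solve` and the example numbers are invented for illustration.

```python
import numpy as np

def gauss_solve(A, b):
    """Solve A x = b by successive elimination of unknowns and back substitution."""
    A = np.asarray(A, dtype=float).copy()
    b = np.asarray(b, dtype=float).copy()
    n = len(b)
    # Forward elimination: reduce the system to triangular form.
    for k in range(n):
        pivot = A[k, k]                    # assumed nonzero (no pivoting in this sketch)
        A[k, k:] /= pivot                  # divide the k-th equation by its leading coefficient
        b[k] /= pivot
        for i in range(k + 1, n):
            factor = A[i, k]
            A[i, k:] -= factor * A[k, k:]  # add the k-th equation multiplied by -a_ik
            b[i] -= factor * b[k]
    # Back substitution: determine x_n, then x_{n-1}, ..., x_1.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = b[i] - A[i, i + 1:] @ x[i + 1:]
    return x

# The same invented example system; its unique solution is (2, 3, -1).
A = [[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]
b = [8, -11, -3]
print(gauss_solve(A, b))                   # ≈ [ 2.  3. -1.]
```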
