- •Introduction to adjustment calculus (Third Corrected Edition)
- •Introduction
- •2. Fundamentals of the mathematical theory of probability
- •If D' ⊂ D, then P(D') ≤ 1
- •Is called the mean (average) of the actual sample. We can show that m also equals:
- •3.1.4 Variance of a Sample
- •Is called the variance (dispersion) of the actual sample. The square root
- •In the interval [6,10] is nine. This number
- •In this case, the new histogram of the sample £ is shown in Figure 3.5.
- •Is usually called the r-th moment of the pdf (random variable); more precisely, the r-th moment of the pdf about zero. On the other hand, the r-th central moment of the pdf is given by:
- •3.2.4 Basic Postulate (Hypothesis) of Statistics, Testing
- •3.3.4 Covariance and Variance-Covariance Matrix
- •X_i and X_j of a multivariate X as
- •It is not difficult to see that the variance-covariance matrix can also be written in terms of the mathematical expectation as follows:
- •3.3.6 Mean and Variance-Covariance Matrix of a Multisample The mean of a multisample (3.48) is defined as
- •4.2 Random (Accidental) Errors
- •It should be noted that the term "random error" is used rather freely in practice.
- •In order to be able to use the tables of the standard normal
- •X, we first have to standardize X, i.e. to transform X to t using
- •Is a normally distributed random
- •4.10 Other Measures of Dispersion
- •The average or mean error a of the sample l is defined as
- •5. Least-squares principle
- •5.2 The Sample Mean as "The Maximum Probability Estimator"
- •5.4 Least-Squares Principle for Random Multivariate
- •In very much the same way as we postulated
- •The relationship between e and e for a mathematical model
- •6.4.4 Variance Covariance Matrix of the Mean of a Multisample
- •Itself and can be interpreted as a measure of confidence we have in the correctness of the mean £. Evidently, our confidence increases with the number of observations.
- •6.4.6 Parametric Adjustment
- •In this section, we are going to deal with the adjustment of the linear model (6.67), i.e.
- •It can be easily linearized by Taylor's series expansion, i.e.
- •In which we neglect the higher order terms. Putting ΔX for X - X°, ΔL for
- •The system of normal equations (6.76) has a solution X
- •In sections 6.4.2 and 6.4.3. In this case, the observation equations will be
- •In matrix form we can write
- •In metres.
- •6.4.7 Variance-Covariance Matrix of the Parametric Adjustment Solution Vector, Variance Factor and Weight Coefficient Matrix
- •i.e. we know the relative variances and covariances of the observations only. This means that we have to work with the weight matrix P = k Σ_L⁻¹
- •If we develop the quadratic form VᵀPV, considering the observations L to be influenced by random errors only, we get an estimate k̂ for the assumed factor k, given by
- •Variance factor k plays. It can be regarded as the variance of unit
- •In metres,
- •Is satisfied. This can be verified by writing
- •6.4.10 Conditional Adjustment
- •In this section we are going to deal with the adjustment of the linear model (6.68), i.e.
- •For the adjustment, the above model is reformulated as:
- •Is not as straightforward, as it is in the parametric case (section 6.4.6)
- •V ∈ Rⁿ
- •Into the above vector we get 0.0, 0.0 in metres.
- •In metres.
- •Areas under the standard normal curve from 0 to t
- •Van der Waerden, B.L., 1969: Mathematical Statistics, Springer-Verlag.
In matrix form we can write

\[
\underset{6,1}{V} \;=\; \underset{6,3}{A}\,\underset{3,1}{X} \;-\; \underset{6,1}{L},
\]

where

\[
\underset{6,1}{V} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_6 \end{bmatrix}, \qquad
\underset{3,1}{X} = \begin{bmatrix} H_a \\ H_b \\ H_c \end{bmatrix}, \qquad
\underset{6,1}{L} = \begin{bmatrix} 6.16 \\ 12.57 \\ 6.41 \\ 1.09 \\ 11.58 \\ 5.07 \end{bmatrix},
\]

and the design matrix A is

\[
\underset{6,3}{A} = \begin{bmatrix}
 0 &  1 & 0 \\
 0 &  0 & 1 \\
 0 & -1 & 1 \\
 1 &  0 & 0 \\
-1 &  0 & 1 \\
-1 &  1 & 0
\end{bmatrix}.
\]
Since we have no information about the correlation between the observed height differences h_i, we will treat them as uncorrelated. Hence, the variance-covariance matrix of the observed quantities will be

\[
\underset{6,6}{\Sigma_L} = \mathrm{diag}\,(4,\ 2,\ 2,\ 4,\ 2,\ 4),
\]

with the understanding that the constant factor k is assumed equal to one. The corresponding weight matrix is given as:

\[
\underset{6,6}{P} = \mathrm{diag}\,(0.25,\ 0.5,\ 0.5,\ 0.25,\ 0.5,\ 0.25).
\]
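As a numerical cross-check, the quantities defined so far can be set up directly. The following is a minimal sketch in Python with NumPy; it is not part of the original text, and the array names (A, L, Sigma_L, P, k) are simply chosen to mirror the notation above.

```python
import numpy as np

# Design matrix A (6 x 3): one row per observed height difference,
# one column per unknown height.
A = np.array([
    [ 0,  1,  0],
    [ 0,  0,  1],
    [ 0, -1,  1],
    [ 1,  0,  0],
    [-1,  0,  1],
    [-1,  1,  0],
], dtype=float)

# Observation vector L (6 x 1), in metres.
L = np.array([6.16, 12.57, 6.41, 1.09, 11.58, 5.07])

# Variance-covariance matrix of the observations.
Sigma_L = np.diag([4.0, 2.0, 2.0, 4.0, 2.0, 4.0])

# Weight matrix P = k * Sigma_L^(-1), with the constant factor k assumed one.
k = 1.0
P = k * np.linalg.inv(Sigma_L)   # = diag(0.25, 0.5, 0.5, 0.25, 0.5, 0.25)
```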
The normal equations are

\[
\underset{3,3}{N}\,\underset{3,1}{X} = \underset{3,1}{U},
\]

yielding the solution

\[
\underset{3,1}{X} = \underset{3,3}{N^{-1}}\,\underset{3,1}{U},
\]
where

\[
\underset{3,3}{N} = \underset{3,6}{A^{T}}\,\underset{6,6}{P}\,\underset{6,3}{A}.
\]

Thus:

\[
N =
\begin{bmatrix}
0 & 0 & 0 & 1 & -1 & -1 \\
1 & 0 & -1 & 0 & 0 & 1 \\
0 & 1 & 1 & 0 & 1 & 0
\end{bmatrix}
\begin{bmatrix}
0.25 & 0 & 0 & 0 & 0 & 0 \\
0 & 0.5 & 0 & 0 & 0 & 0 \\
0 & 0 & 0.5 & 0 & 0 & 0 \\
0 & 0 & 0 & 0.25 & 0 & 0 \\
0 & 0 & 0 & 0 & 0.5 & 0 \\
0 & 0 & 0 & 0 & 0 & 0.25
\end{bmatrix}
\begin{bmatrix}
 0 &  1 & 0 \\
 0 &  0 & 1 \\
 0 & -1 & 1 \\
 1 &  0 & 0 \\
-1 &  0 & 1 \\
-1 &  1 & 0
\end{bmatrix}
\]

and

\[
N =
\begin{bmatrix}
0 & 0 & 0 & 0.25 & -0.5 & -0.25 \\
0.25 & 0 & -0.5 & 0 & 0 & 0.25 \\
0 & 0.5 & 0.5 & 0 & 0.5 & 0
\end{bmatrix}
\begin{bmatrix}
 0 &  1 & 0 \\
 0 &  0 & 1 \\
 0 & -1 & 1 \\
 1 &  0 & 0 \\
-1 &  0 & 1 \\
-1 &  1 & 0
\end{bmatrix}.
\]

Finally:

\[
\underset{3,3}{N} =
\begin{bmatrix}
 1.00 & -0.25 & -0.50 \\
-0.25 &  1.00 & -0.50 \\
-0.50 & -0.50 &  1.50
\end{bmatrix}.
\]
Note that N is a symmetric, positive-definite matrix.
Hence:

\[
\underset{3,3}{N^{-1}} =
\begin{bmatrix}
1.6 & 0.8 & 0.8 \\
0.8 & 1.6 & 0.8 \\
0.8 & 0.8 & 1.2
\end{bmatrix}.
\]
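A short numerical check of N and its inverse, again only a sketch assuming Python/NumPy and the A and P given above:

```python
import numpy as np

A = np.array([[0, 1, 0], [0, 0, 1], [0, -1, 1],
              [1, 0, 0], [-1, 0, 1], [-1, 1, 0]], dtype=float)
P = np.diag([0.25, 0.5, 0.5, 0.25, 0.5, 0.25])

# Normal-equation matrix N = A^T P A
N = A.T @ P @ A

# N is symmetric and positive-definite, so a Cholesky factorization succeeds.
np.linalg.cholesky(N)

N_inv = np.linalg.inv(N)
print(N)      # ≈ [[ 1.00 -0.25 -0.50], [-0.25  1.00 -0.50], [-0.50 -0.50  1.50]]
print(N_inv)  # ≈ [[ 1.6   0.8   0.8 ], [ 0.8   1.6   0.8 ], [ 0.8   0.8   1.2 ]]
```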
Computing U = AᵀPL, we get

\[
\underset{3,1}{U} =
\begin{bmatrix}
0 & 0 & 0 & 0.25 & -0.5 & -0.25 \\
0.25 & 0 & -0.5 & 0 & 0 & 0.25 \\
0 & 0.5 & 0.5 & 0 & 0.5 & 0
\end{bmatrix}
\begin{bmatrix}
6.16 \\ 12.57 \\ 6.41 \\ 1.09 \\ 11.58 \\ 5.07
\end{bmatrix}
\]

and

\[
\underset{3,1}{U} =
\begin{bmatrix}
-6.7850 \\ -0.3975 \\ 15.2800
\end{bmatrix}.
\]
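The right-hand-side vector U = AᵀPL can be verified the same way; a minimal sketch assuming the same NumPy setup as before:

```python
import numpy as np

A = np.array([[0, 1, 0], [0, 0, 1], [0, -1, 1],
              [1, 0, 0], [-1, 0, 1], [-1, 1, 0]], dtype=float)
P = np.diag([0.25, 0.5, 0.5, 0.25, 0.5, 0.25])
L = np.array([6.16, 12.57, 6.41, 1.09, 11.58, 5.07])

# Right-hand-side vector of the normal equations: U = A^T P L
U = A.T @ P @ L
print(U)   # ≈ [-6.785  -0.3975  15.28]
```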
Performing the multiplication N⁻¹U, we get X as:

\[
\underset{3,1}{X} =
\begin{bmatrix}
1.6 & 0.8 & 0.8 \\
0.8 & 1.6 & 0.8 \\
0.8 & 0.8 & 1.2
\end{bmatrix}
\begin{bmatrix}
-6.7850 \\ -0.3975 \\ 15.2800
\end{bmatrix}
=
\begin{bmatrix}
1.05 \\ 6.16 \\ 12.59
\end{bmatrix}.
\]
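The same solution is obtained by solving the normal equations numerically. In the sketch below (Python/NumPy assumed), np.linalg.solve is used instead of forming N⁻¹ explicitly; that is a choice of the sketch, not of the text, and both routes give the same estimates.

```python
import numpy as np

A = np.array([[0, 1, 0], [0, 0, 1], [0, -1, 1],
              [1, 0, 0], [-1, 0, 1], [-1, 1, 0]], dtype=float)
P = np.diag([0.25, 0.5, 0.5, 0.25, 0.5, 0.25])
L = np.array([6.16, 12.57, 6.41, 1.09, 11.58, 5.07])

N = A.T @ P @ A
U = A.T @ P @ L

# Solve N X = U for the unknown heights.
X = np.linalg.solve(N, U)
print(np.round(X, 2))   # ≈ [ 1.05  6.16 12.59]  -> heights in metres
```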
Therefore, we have obtained the following estimates:

\[
\hat{H}_a = 1.05 \text{ m}, \qquad \hat{H}_b = 6.16 \text{ m}, \qquad \hat{H}_c = 12.59 \text{ m}.
\]
By substituting the values of X we get the residual vector V for the observed height differences from the equation

\[
V = A X - L.
\]

Namely:

\[
\underset{6,1}{V} =
\begin{bmatrix}
0.00 \\ 0.02 \\ 0.02 \\ -0.04 \\ -0.04 \\ 0.04
\end{bmatrix} \text{ m}.
\]
The adjusted observations ĥ are computed from

\[
\hat{h}_i = h_i + v_i, \qquad i = 1, 2, \ldots, 6,
\]

and we get:

\[
\hat{h} =
\begin{bmatrix}
6.16 \\ 12.57 \\ 6.41 \\ 1.09 \\ 11.58 \\ 5.07
\end{bmatrix}
+
\begin{bmatrix}
0.00 \\ 0.02 \\ 0.02 \\ -0.04 \\ -0.04 \\ 0.04
\end{bmatrix}
=
\begin{bmatrix}
6.16 \\ 12.59 \\ 6.43 \\ 1.05 \\ 11.54 \\ 5.11
\end{bmatrix} \text{ m}.
\]
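Finally, a sketch (Python/NumPy assumed) that reproduces the residuals and the adjusted observations from the estimated heights obtained above:

```python
import numpy as np

A = np.array([[0, 1, 0], [0, 0, 1], [0, -1, 1],
              [1, 0, 0], [-1, 0, 1], [-1, 1, 0]], dtype=float)
L = np.array([6.16, 12.57, 6.41, 1.09, 11.58, 5.07])
X = np.array([1.05, 6.16, 12.59])   # estimated heights from the solution above

# Residuals V = A X - L, and adjusted observations h_hat = h + v.
V = A @ X - L
h_hat = L + V
print(np.round(V, 2))      # ≈ [ 0.00  0.02  0.02 -0.04 -0.04  0.04]
print(np.round(h_hat, 2))  # ≈ [ 6.16 12.59  6.43  1.05 11.54  5.11]
```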