- Introduction to adjustment calculus (Third Corrected Edition)
- Introduction
- 2. Fundamentals of the mathematical theory of probability
- 3.1.4 Variance of a Sample
- 3.2.4 Basic Postulate (Hypothesis) of Statistics, Testing
- 3.3.4 Covariance and Variance-Covariance Matrix
- 3.3.6 Mean and Variance-Covariance Matrix of a Multisample
- 4.2 Random (Accidental) Errors
- 4.10 Other Measures of Dispersion
- 5. Least-squares principle
- 5.2 The Sample Mean as "The Maximum Probability Estimator"
- 5.4 Least-Squares Principle for Random Multivariate
- 6.4.4 Variance-Covariance Matrix of the Mean of a Multisample
- 6.4.6 Parametric Adjustment
- 6.4.7 Variance-Covariance Matrix of the Parametric Adjustment Solution Vector, Variance Factor and Weight Coefficient Matrix
- 6.4.10 Conditional Adjustment
- Areas under the standard normal curve from 0 to t
6.4.4 Variance-Covariance Matrix of the Mean of a Multisample
We have seen in section 6.4.3 that the mean L̄ of a sample L also has a standard deviation S_L̄ associated with it. This standard deviation is √n times smaller than the standard deviation S_L of the sample L itself, and can be interpreted as a measure of the confidence we have in the correctness of the mean L̄. Evidently, our confidence increases with the number of observations.
We can now ask ourselves the following question: does the mean L̄ of a multisample L also have a variance-covariance matrix associated with it? The answer is that there is nothing to prevent us from defining it by generalising the discovery from the last section. We get
where

$S^2_{\bar\ell_i} = \frac{1}{n_i}\, S^2_{\ell_i}$

and

$S_{\bar\ell_i \bar\ell_j} = \frac{1}{n}\, S_{\ell_i \ell_j}$ .
Here we have to require again that $n_i = n_j$, i.e. that both components of the multisample have the same number of elements (see section 3.3.5). Obviously, if this requirement is satisfied for all the pairs of components, we have

$n_1 = n_2 = \ldots = n_s = n$
and

$\Sigma_{\bar L} = \frac{1}{n}\, \Sigma_L$ .

By analogy, the variance-covariance matrix obtained via the covariance law (see section 6.3.1) from the variance-covariance matrix of the mean of the multisample is associated with the mean of the derived multisample, or statistical estimate X̄. We say that

$\Sigma_{\bar X} = B\, \Sigma_{\bar L}\, B^T$    (6.63)

is the variance-covariance matrix of the statistical estimate X̄, i.e. of the solution of the uniquely determined mathematical model
X = F(L) .
Similar statements can be made for the other laws of propagation of errors. The development of these is left to the student, who should also compare the results of this section with the solution of Example 6.14.
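The two relations of this section, Σ_L̄ = (1/n) Σ_L and the covariance law Σ_X̄ = B Σ_L̄ Bᵀ, can be sketched numerically. All matrices below are invented for illustration only:

```python
import numpy as np

# Hypothetical multisample L with two components, each of n = 5 elements,
# and an invented 2x2 variance-covariance matrix Sigma_L (uncorrelated).
n = 5
Sigma_L = np.array([[0.0040, 0.0],
                    [0.0,    0.0056]])   # cm^2

# Variance-covariance matrix of the mean of the multisample:
# Sigma_Lbar = (1/n) * Sigma_L
Sigma_Lbar = Sigma_L / n

# Covariance law for a linear estimate X = B L:
# Sigma_Xbar = B Sigma_Lbar B^T, with an invented matrix B.
B = np.array([[1.0, 0.5],
              [0.0, 2.0]])
Sigma_Xbar = B @ Sigma_Lbar @ B.T

print(Sigma_Lbar)
print(Sigma_Xbar)
```

Note that Σ_X̄ comes out symmetric, as any variance-covariance matrix must.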
Example 6.15:

Let us take again the experiment described in Examples 6.12, 6.13 and 6.14. This time we shall be interested in deriving the variance-covariance matrix of the solution vector X̄.
Solution: First we evaluate Σ_L̄ from eq. (6.61). We obtain

$S^2_{\bar a} = \frac{1}{5}\, S^2_a = \frac{0.0040}{5}\ \mathrm{cm}^2 = 0.0008\ \mathrm{cm}^2$ ,

$S^2_{\bar b} = \frac{1}{5}\, S^2_b = \frac{0.0056}{5}\ \mathrm{cm}^2 = 0.0011\ \mathrm{cm}^2$ .

Since $S_{ab} = 0$ we get

$\Sigma_{\bar L} = \begin{pmatrix} 0.0008 & 0 \\ 0 & 0.0011 \end{pmatrix} \mathrm{cm}^2 = \frac{1}{5}\, \Sigma_L$ .
Now $\Sigma_{\bar X}$ can be evaluated from equation (6.63), and we have

$\Sigma_{\bar X} = B \left( \frac{1}{5}\, \Sigma_L \right) B^T = \frac{1}{5}\, B\, \Sigma_L\, B^T = \frac{1}{5}\, \Sigma_X$ ,

or

$\Sigma_{\bar X} = \begin{pmatrix} 0.00081\ \mathrm{cm}^2 & 0.01079\ \mathrm{cm}^3 \\ 0.01079\ \mathrm{cm}^3 & 21.5125\ \mathrm{cm}^4 \end{pmatrix}$ .
Thus the standard deviations of the estimates d̄ and ᾱ are given by

$S_{\bar d} = \sqrt{0.00081\ \mathrm{cm}^2} = 0.028\ \mathrm{cm}$ ,

$S_{\bar\alpha} = \sqrt{21.5125\ \mathrm{cm}^4} = 4.64\ \mathrm{cm}^2$ .
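The arithmetic of this example is easy to check: each variance of a mean is one fifth of the corresponding sample variance, and the standard deviations are the square roots of the diagonal elements of Σ_X̄.

```python
import math

# Checking the numbers of the example above.
n = 5
S2_a = 0.0040            # cm^2 (sample variance of a)
S2_b = 0.0056            # cm^2 (sample variance of b)

S2_abar = S2_a / n       # 0.0008 cm^2
S2_bbar = S2_b / n       # 0.00112 cm^2, rounded to 0.0011 in the text

S_dbar = math.sqrt(0.00081)       # cm
S_alphabar = math.sqrt(21.5125)   # cm^2

print(round(S_dbar, 3), round(S_alphabar, 2))
```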
6.4.5 The Method of Least-Squares, Weight Matrix
The least-squares principle, as applied to the trivial identity transformation, i.e. the sample mean, can be generalized to other mathematical models. Taking the general formulation of the problem of adjustment as described in section 6.4.1, i.e.
F(L + V, X) = 0,
we can again ask for such X that would make the value of the quadratic form of the weighted discrepancies, $V^T P V$, minimum, i.e.

$\min_{X \in R^u} V^T P V$ .    (6.64)

The condition (6.64), for the majority of mathematical models, is enough to specify such $X = \hat X$ uniquely. The approach to adjustment using this condition became known as the method of least-squares.
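This minimum property can be illustrated numerically for the linear parametric model A X = L treated later in this chapter (eq. (6.67)), whose least-squares solution follows from the normal equations of section 6.4.6. All numerical values below are invented for the illustration:

```python
import numpy as np

# For the model A X = L, the least-squares estimate Xhat makes the
# quadratic form V^T P V smaller than any other choice of X.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
L = np.array([0.98, 2.03, 2.99, 4.02])
P = np.diag([1.0, 2.0, 1.5, 1.0])      # some positive weights

# Normal-equations solution (anticipating section 6.4.6)
Xhat = np.linalg.solve(A.T @ P @ A, A.T @ P @ L)

def quad_form(X):
    V = A @ X - L                       # discrepancies V for this choice of X
    return V @ P @ V

print(quad_form(Xhat))
print(quad_form(Xhat + np.array([0.05, -0.05])))   # any perturbation is worse
```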
The question remains as to how to choose the matrix P. In the case of the sample mean we have used

$P = \mathrm{diag}\,(K/S^2_{\ell_1},\ K/S^2_{\ell_2},\ \ldots,\ K/S^2_{\ell_m})$ ,

that is

$P = K\ \mathrm{diag}\,(1/S^2_{\ell_1},\ 1/S^2_{\ell_2},\ \ldots,\ 1/S^2_{\ell_m})$ .
Using the notation developed for the multisample, this can be rewritten as:
$P = K\, \Sigma_{\bar L}^{-1}$    (6.65)

which indicates that the matrix P is obtained by multiplying the constant K by the inverse of the variance-covariance matrix of the means of the observations. This is, in our case, a diagonal matrix, as we have postulated the sample L to be uncorrelated.
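For uncorrelated observations, eq. (6.65) reduces to the elementwise weights K/S² used above, since inverting a diagonal matrix just inverts its diagonal. A minimal sketch, with invented variances:

```python
import numpy as np

# P = K * inv(Sigma_Lbar) for a diagonal Sigma_Lbar is the same as
# placing the weights K / S^2 directly on the diagonal.
K = 0.01
S2 = np.array([0.0008, 0.0011, 0.0025])   # invented variances of the means
Sigma_Lbar = np.diag(S2)

P = K * np.linalg.inv(Sigma_Lbar)         # eq. (6.65)
P_direct = np.diag(K / S2)                # elementwise weights

print(P)
```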
We again notice that, mathematically, there is not much difference between a sample and a multisample; they can hence be treated in much the same way. Thus, there is no basic difference between the apparently trivial adjustment of the sample mean and the general problem of adjustment. The only difference is that in the first case X is a vector of one component, while generally it may have many components.
This gives rise to the question of what the role of K would be (K having been a scalar equal to $S^2_{\bar x}$ in the adjustment of the mean of a sample) in the least-squares method, where X has several constituents. Let us just say at this time that we usually compute the weight matrix P, as it is called in the method of least-squares, as
$P = k\, \Sigma_{\bar L}^{-1}$    (6.66)
where k is an arbitrarily chosen constant, the meaning of which will be shown later. This can be done because, as will also be shown later, the solution X̂ is independent of k, since k does not change the ratios between the weights, or variances, of the individual observations.
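The independence of the solution from k can be verified numerically. The sketch below uses the parametric model A X = L (eq. (6.67)) and its normal-equations solution from section 6.4.6; the matrices A, L and Σ are invented for the illustration:

```python
import numpy as np

# The least-squares solution Xhat = inv(A^T P A) A^T P L is unchanged
# when the weight matrix P = k * inv(Sigma) is scaled by any constant k.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])              # n = 3 observations, u = 2 unknowns
L = np.array([1.02, 1.98, 3.05])
Sigma = np.diag([0.0008, 0.0011, 0.0025])

def solve(k):
    P = k * np.linalg.inv(Sigma)        # weight matrix, eq. (6.66)
    N = A.T @ P @ A                     # normal-equation matrix
    return np.linalg.solve(N, A.T @ P @ L)

X1 = solve(1.0)
X2 = solve(1234.5)                      # a very different choice of k
print(X1, X2)
```

The constant k cancels exactly in the normal equations, which is why only the ratios of the weights matter.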
In this course we shall be dealing with only two particular mathematical models which are the most frequently encountered in practice. In these models, we shall use the following notation:
n for the number of constituents of the primary, or original, multisample L;
u for the number of constituents of the derived, or unknown (to be derived), multisample X;
r for the number of independent equations (relationships) that can be formulated between the constituents of L and X.

Moreover, we shall consider these models to be linear. The first model is
A X = L , (6.67)
in which A is an n by u matrix, X is a u by 1 vector and L is an n by 1 vector (n = r > u). The adjustment of this model is usually called parametric adjustment, adjustment of observation equations, or adjustment of indirect observations, etc.
The second model is
B L = C    (6.68)

in which B is an r by n matrix, and L and C are n by 1 and r by 1 vectors, respectively (r < n). The adjustment of this model is known as conditional adjustment, adjustment of condition equations, etc.
The two mathematical models are evidently quite special since they are both linear. Fortunately many problems in practice, although non-linear by nature, can be linearized. This is the reason why the two treated models are important.
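The linearization mentioned above can be sketched numerically. Assuming an invented distance-observation function and approximate values of the unknowns, a first-order Taylor expansion reduces the non-linear model to the parametric form (6.67):

```python
import numpy as np

# A non-linear observation model F(X) = L is reduced to the linear form
# A dX = dL by a first-order Taylor expansion about approximate values X0.
def F(X):
    # three distance "observations" of an unknown point (x, y)
    # from the known stations (0,0), (3,0) and (0,3) -- invented setup
    x, y = X
    return np.array([np.hypot(x, y),
                     np.hypot(x - 3.0, y),
                     np.hypot(x, y - 3.0)])

X_true = np.array([1.0, 2.0])
L = F(X_true)                    # simulated, error-free observations

X0 = np.array([1.1, 1.9])        # approximate values of the unknowns
eps = 1.0e-6
# numerical Jacobian A = dF/dX at X0 (the design matrix of eq. (6.67))
A = np.column_stack([(F(X0 + eps * e) - F(X0)) / eps for e in np.eye(2)])
dL = L - F(X0)                   # "observed minus computed" misclosures

# least-squares correction from the linearized model A dX = dL
dX = np.linalg.solve(A.T @ A, A.T @ dL)
X_adj = X0 + dX
print(X_adj)
```

One such step already brings the approximate values very close to the true point; in practice the linearization is iterated until the corrections become negligible.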