- Introduction to Adjustment Calculus (Third Corrected Edition)
- Introduction
- 2. Fundamentals of the Mathematical Theory of Probability
- 3.1.4 Variance of a Sample
- 3.2.4 Basic Postulate (Hypothesis) of Statistics, Testing
- 3.3.4 Covariance and Variance-Covariance Matrix
- 3.3.6 Mean and Variance-Covariance Matrix of a Multisample
- 4.2 Random (Accidental) Errors
- 4.10 Other Measures of Dispersion
- 5. Least-Squares Principle
- 5.2 The Sample Mean as "The Maximum Probability Estimator"
- 5.4 Least-Squares Principle for Random Multivariate
- 6.4.4 Variance-Covariance Matrix of the Mean of a Multisample
- 6.4.6 Parametric Adjustment
- 6.4.7 Variance-Covariance Matrix of the Parametric Adjustment Solution Vector, Variance Factor and Weight Coefficient Matrix
- 6.4.10 Conditional Adjustment
- Areas under the Standard Normal Curve from 0 to t
is satisfied. This can be verified by writing

$$
\phi(\hat{L}, S;\, L) \;=\; \frac{1}{(2\pi)^{n/2}\,\prod_{i=1}^{n} S_i}\,
\exp\!\left[-\frac{1}{2}\sum_{i=1}^{n}\frac{v_i^2}{S_i^2}\right]
\;=\; \frac{1}{(2\pi)^{n/2}\,\prod_{i=1}^{n} S_i}\,
\exp\!\left[-\frac{1}{2k}\,V^T P V\right],
$$

which is a maximum if both $V^T P V$ and $\mathrm{trace}(\hat{\Sigma})$ are minimum. This is valid for any fixed $k$.
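The equivalence of the two exponents above can be checked numerically. The following is a minimal sketch (all numerical values are invented for illustration), assuming independent observations with standard deviations $S_i$ and the weight matrix defined as $P = k\,\Sigma_L^{-1}$:

```python
import numpy as np

# Verify that -1/2 * sum(v_i^2 / S_i^2) equals -(1/(2k)) * V^T P V
# when P = k * Sigma_L^{-1}, so maximizing the likelihood is the same
# as minimizing V^T P V, for any fixed variance factor k.
rng = np.random.default_rng(0)
n = 5
S = rng.uniform(0.5, 2.0, n)            # standard deviations S_i
v = rng.normal(0.0, 1.0, n)             # residual vector V

k = 3.7                                 # arbitrary fixed variance factor
Sigma_L = np.diag(S**2)                 # variance-covariance matrix
P = k * np.linalg.inv(Sigma_L)          # weight matrix P = k * Sigma_L^-1

exponent_direct = -0.5 * np.sum(v**2 / S**2)
exponent_quadratic = -(1.0 / (2.0 * k)) * (v @ P @ v)

print(np.isclose(exponent_direct, exponent_quadratic))  # -> True
```

Because $k$ cancels between $P$ and the $1/(2k)$ factor, the minimizer of $V^T P V$ does not depend on the chosen variance factor.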
6.4.9 Relative Weights, Statistical Significance of A Priori and A Posteriori Variance Factors
We have seen in section 6.4.6 that the choice of the a priori variance factor $\sigma_0^2$, or $k$, does not influence the estimated solution vector $\hat{X}$. Also, in section 6.4.7 we have seen that the same holds true even for the estimated variance-covariance matrix $\hat{\Sigma}_{\hat{X}}$. Hence, for the purpose of getting the solution vector $\hat{X}$ along with its $\hat{\Sigma}_{\hat{X}}$, we can assume any relative weights, i.e. $P = \sigma_0^2\,\Sigma_L^{-1}$, with $\sigma_0^2$ chosen arbitrarily. On the other hand, the matrix of normal equations, i.e. $N = A^T P A$, and the estimated variance factor, i.e. $\hat{\sigma}_0^2 = \hat{V}^T P \hat{V}/\mathrm{df}$, are influenced by the selection of $\sigma_0^2$.
These features of $\sigma_0^2$ are used in practice for two different purposes. The first is to render the magnitude of the elements of the normal-equation matrix $N$ such as to make the numerical process of its inversion the most precise. This is accomplished by choosing the value of $\sigma_0^2$ such as to make the average of the elements of $N$ close to one.
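This scaling can be sketched as follows. Since $N = A^T P A$ is linear in $\sigma_0^2$, one admissible choice is the reciprocal of the average element magnitude of the unscaled matrix (the design matrix and variances below are hypothetical, for illustration only):

```python
import numpy as np

# Since P = sigma0^2 * Sigma_L^-1, the normal-equation matrix
# N = A^T P A scales linearly with sigma0^2. Choosing sigma0^2 so
# that the average magnitude of N's elements is near one improves
# the numerical conditioning of the inversion.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])                     # design matrix (hypothetical)
Sigma_L = np.diag([4.0e-6, 9.0e-6, 4.0e-6])    # observation variances in m^2

N1 = A.T @ np.linalg.inv(Sigma_L) @ A          # N computed with sigma0^2 = 1
sigma0_sq = 1.0 / np.mean(np.abs(N1))          # rescale so mean |N_ij| ~ 1
P = sigma0_sq * np.linalg.inv(Sigma_L)
N = A.T @ P @ A

print(round(np.mean(np.abs(N)), 6))            # -> 1.0
```

The rescaled $N$ yields the same solution vector $\hat{X}$, since the factor cancels in $N^{-1}A^T P\,l$.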
The second purpose is to test the consistency of the mathematical model with the observations and to test the correctness of the assumed variance-covariance matrix $\Sigma_L$. Usually, if we do not have any idea about the value of the variance factor $\sigma_0^2$, we assume $\sigma_0^2 = 1$. Then, after performing the least-squares adjustment, we get $\hat{\sigma}_0^2 = \hat{V}^T P \hat{V}/\mathrm{df}$ as an estimate of the assumed $\sigma_0^2$. The ratio $\hat{\sigma}_0^2 / \sigma_0^2$ provides some testimony about the correctness of $\Sigma_L$ and the consistency of the model. This ratio should approach 1. By assuming, in particular, $\sigma_0^2 = 1$, we should end up with $\hat{\sigma}_0^2 = 1$ as well. If this is not satisfied, we start looking into the assumed $\Sigma_L$ and use the $\hat{\sigma}_0^2$ obtained from the adjustment instead of $\sigma_0^2$ in computing the weights. If the resulting new variances and covariances of the observations are beyond the expected range known from experience, we have to start examining the consistency of the mathematical model with the observations, i.e. whether it really represents the correct relationship between the observed and the unknown quantities.
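The test described above can be illustrated with a small simulated parametric adjustment (the data below are simulated, not taken from the text): repeated observations of a single quantity, weights formed from an assumed $\Sigma_L$ with $\sigma_0^2 = 1$, followed by the computation of $\hat{\sigma}_0^2 = \hat{V}^T P \hat{V}/\mathrm{df}$.

```python
import numpy as np

# Simulated check of the a posteriori variance factor: if the assumed
# Sigma_L matches the actual observation noise and the model is correct,
# sigma0_hat^2 = V^T P V / df should come out close to the assumed
# sigma0^2 = 1.
rng = np.random.default_rng(42)
n = 200
true_value = 100.0
sigma = 0.02                                   # true std. dev. in metres
l = true_value + rng.normal(0.0, sigma, n)     # observations

A = np.ones((n, 1))                            # parametric model: l = x + v
P = np.eye(n) / sigma**2                       # P = sigma0^2 * Sigma_L^-1, sigma0^2 = 1
N = A.T @ P @ A
x_hat = np.linalg.solve(N, A.T @ P @ l)        # least-squares estimate (weighted mean)
v = A @ x_hat - l                              # residual vector
df = n - 1                                     # degrees of freedom
sigma0_hat_sq = (v @ P @ v) / df

print("a posteriori variance factor:", round(float(sigma0_hat_sq), 3))
```

A ratio far from 1 would, as argued above, point either to a wrong $\Sigma_L$ or to an inconsistency between the model and the observations.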
This approach is also used to help detect existing "systematic errors" in the observations $L$ that manifest themselves as deviations from the mathematical model. These deviations cause an "overflow" into the value of the quadratic form $\hat{V}^T P \hat{V}$ and, consequently, into $\hat{\sigma}_0^2$.