- •Introduction to Adjustment Calculus (Third Corrected Edition)
- •Introduction
- •2. Fundamentals of the mathematical theory of probability
- •If D′ ⊂ D, then P(D′) ≤ 1
- •is called the mean (average) of the actual sample. We can show that m can also be written as:
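The equations surrounding this fragment are missing from this copy; a hedged reconstruction in the usual notation of such notes is the sample mean of n values ℓᵢ, together with its equivalent form as a frequency-weighted sum over the k distinct values ℓⱼ occurring with counts cⱼ:

```latex
m \;=\; \frac{1}{n}\sum_{i=1}^{n}\ell_i
  \;=\; \sum_{j=1}^{k}\frac{c_j}{n}\,\ell_j .
```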
- •3.1.4 Variance of a Sample
- •is called the variance (dispersion) of the actual sample. Its square root is called the standard deviation.
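A hedged reconstruction of the definition this fragment refers to (the 1/n convention is assumed here; some texts divide by n − 1):

```latex
s^2 \;=\; \frac{1}{n}\sum_{i=1}^{n}\bigl(\ell_i - m\bigr)^2 ,
\qquad
s \;=\; \sqrt{s^2}\quad\text{(standard deviation)} .
```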
- •In the interval [6,10] is nine. This number
- •In this case, the new histogram of the sample ℓ is shown in Figure 3.5.
- •is usually called the r-th moment of the pdf (random variable); more precisely, the r-th moment of the pdf about zero. On the other hand, the r-th central moment of the pdf is given by:
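In standard notation, the two moments mentioned here are (a reconstruction of the missing displays):

```latex
\mu'_r \;=\; E\!\left[x^{\,r}\right],
\qquad
\mu_r \;=\; E\!\left[(x-\mu)^{\,r}\right],
\qquad \mu \;=\; \mu'_1 .
```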
- •3.2.4 Basic Postulate (Hypothesis) of Statistics, Testing
- •3.3.4 Covariance and Variance-Covariance Matrix
- •Xᵢ and Xⱼ of a multivariate X as
- •It is not difficult to see that the variance-covariance matrix can also be written in terms of the mathematical expectation as follows:
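The expression alluded to is, in the usual notation (a reconstruction of the missing display), with μ_X = E[X]:

```latex
\Sigma_X \;=\; E\!\left[(X-\mu_X)(X-\mu_X)^{T}\right] .
```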
- •3.3.6 Mean and Variance-Covariance Matrix of a Multisample
- •The mean of a multisample (3.48) is defined as
- •4.2 Random (Accidental) Errors
- •It should be noted that the term "random error" is used rather freely in practice.
- •In order to be able to use the tables of the standard normal
- •X, we first have to standardize X, i.e. to transform X to t using
- •Is a normally distributed random
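The standardization referred to in these fragments is the usual transformation (reconstruction of the missing display):

```latex
t \;=\; \frac{x-\mu}{\sigma} ,
```

so that if x ∼ N(μ, σ²) then t ∼ N(0, 1), and the tabulated areas under the standard normal curve can be used.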
- •4.10 Other Measures of Dispersion
- •The average or mean error a of the sample l is defined as
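The missing definition presumably follows the classical convention, in which the average (mean) error is the mean absolute deviation of the sample (a hedged reconstruction):

```latex
a \;=\; \frac{1}{n}\sum_{i=1}^{n}\bigl|\ell_i - m\bigr| .
```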
- •5. Least-squares principle
- •5.2 The Sample Mean as "The Maximum Probability Estimator"
- •5.4 Least-Squares Principle for Random Multivariate
- •In very much the same way as we postulated
- •The relationship between e and e for a mathematical model
- •6.4.4 Variance-Covariance Matrix of the Mean of a Multisample
- •itself and can be interpreted as a measure of confidence we have in the correctness of the mean ℓ̄. Evidently, our confidence increases with the number of observations.
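The central result of this section is presumably the standard one (reconstruction): for the mean of n independent samples sharing the variance-covariance matrix Σ,

```latex
\Sigma_{\bar{\ell}} \;=\; \frac{1}{n}\,\Sigma ,
```

which is why the confidence in the mean grows with the number n of observations.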
- •6.4.6 Parametric Adjustment
- •In this section, we are going to deal with the adjustment of the linear model (6.67), i.e.
- •It can be easily linearized by Taylor's series expansion, i.e.
- •in which we neglect the higher order terms. Putting ΔX for X − X°, ΔL for
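The linearization sketched here is the usual first-order Taylor expansion about the approximate values X° (a hedged reconstruction of the missing display):

```latex
F(X) \;=\; F(X^{0})
  \;+\; \left.\frac{\partial F}{\partial X}\right|_{X=X^{0}}\!\bigl(X - X^{0}\bigr)
  \;+\; \dots
```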
- •The system of normal equations (6.76) has a solution X
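The normal-equation solution referred to here can be sketched numerically. Below is a minimal illustration for a small linear model L = AX with weight matrix P; all matrices and values are invented for the example, not taken from the text.

```python
import numpy as np

# Illustrative linear model L = AX observed with weights P
# (all numerical values are assumptions made up for this sketch).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])           # design matrix: 3 observations, 2 unknowns
P = np.diag([1.0, 1.0, 2.0])         # weight matrix of the observations
L = np.array([1.02, 2.01, 2.98])     # observed values

# Normal equations: (A^T P A) X = A^T P L
N = A.T @ P @ A                      # normal-equation matrix
U = A.T @ P @ L                      # right-hand side
X_hat = np.linalg.solve(N, U)        # least-squares solution vector

V = A @ X_hat - L                    # residuals
k_hat = (V @ P @ V) / (len(L) - A.shape[1])   # a posteriori variance factor
```

With P = I this reduces to the ordinary normal equations N = AᵀA; the variance factor estimate divides the quadratic form VᵀPV by the n − u degrees of freedom, as discussed in section 6.4.7.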
- •In sections 6.4.2 and 6.4.3. In this case, the observation equations will be
- •In matrix form we can write
- •In metres.
- •6.4.7 Variance-Covariance Matrix of the Parametric Adjustment Solution Vector, Variance Factor and Weight Coefficient Matrix
- •i.e. we know the relative variances and covariances of the observations only. This means that we have to work with the weight matrix P = k Σℓ⁻¹
- •If we develop the quadratic form VᵀPV 3) considering the observations ℓ to be influenced by random errors only, we get an estimate k̂ for the assumed factor k given by
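The estimate this sentence introduces is presumably the standard a posteriori variance factor, for n observations and u unknown parameters (hedged reconstruction):

```latex
\hat{k} \;=\; \frac{\hat{V}^{T} P\,\hat{V}}{\,n-u\,} ,
```

i.e. the weighted quadratic form of the residuals divided by the number of degrees of freedom.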
- •variance factor k plays. It can be regarded as the variance of unit weight.
- •In metres,
- •Is satisfied. This can be verified by writing
- •6.4.10 Conditional Adjustment
- •In this section we are going to deal with the adjustment of the linear model (6.68), i.e.
- •For the adjustment, the above model is reformulated as:
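The reformulated condition model BV + W = 0 can be solved by correlates (see footnote 5). A minimal numerical sketch for a levelling loop, in which the measured height differences around a closed loop must sum to zero; all values are illustrative assumptions, not data from the text.

```python
import numpy as np

# Conditional adjustment of a levelling loop: the three measured height
# differences around a closed loop must sum to zero.
# All numbers are invented for this sketch.
L = np.array([1.234, 2.345, -3.573])   # measured height differences [m]
P = np.eye(3)                          # equal weights assumed

B = np.array([[1.0, 1.0, 1.0]])        # one condition: sum of differences = 0
W = B @ L                              # misclosure (value of the condition at L)

# Residuals by correlates:  V = -P^{-1} B^T (B P^{-1} B^T)^{-1} W,
# which enforces B(L + V) = 0 while minimizing V^T P V.
Q = np.linalg.inv(P)
K = np.linalg.solve(B @ Q @ B.T, W)    # vector of correlates
V = -Q @ B.T @ K

L_adj = L + V                          # adjusted observations satisfy the condition
```

With equal weights the misclosure is simply distributed evenly over the three observations, which matches the intuition behind adjustment by correlates.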
- •is not as straightforward as it is in the parametric case (section 6.4.6)
- •V ∈ Rⁿ
- •into the above vector we get 0.0, 0.0 in metres.
- •In metres.
- •Areas under the standard normal curve from 0 to t
- •Van der Waerden, B.L., 1969: Mathematical Statistics, Springer-Verlag.
Wells, D.E. and Krakiwsky, E.J., 1971: The Method of Least Squares, Department of Surveying Engineering, U.N.B., Lecture Notes No. 18.
Wilks, S.S., 1963 (2nd printing): Mathematical Statistics, Wiley and Sons.
Wonnacott, T.H. and Wonnacott, R.J., 1972 (2nd edition): Introductory Statistics, Wiley & Sons.
1From matrix algebra we know that if A is a symmetric matrix and X is a vector we get:
∂(AX)/∂X = A   and   ∂(XᵀAX)/∂X = 2XᵀA.
+ Note that the normal equations can be obtained directly from the mathematical model by pre-multiplying it by AᵀP.
2A matrix, say N, is positive definite if the value of the quadratic form YᵀNY is positive for any non-zero vector Y (of the appropriate dimension).
3Here, the vector V is the vector of residuals from the least squares adjustment.
Its elements are called (Hansen's) weight coefficients.
Note that X is called uncorrelated when N⁻¹ is diagonal, i.e. when N is diagonal. In such a case, we can solve the normal equations separately for each component of X, which satisfies our intuition. The correlation of X is only remotely related to the correlation of L.
4If we have a non-linear model F(L) = 0, it can again be linearized by Taylor's series expansion, yielding:
F(L) = F(L°) + (∂F/∂L)|₍L₌L°₎ (L − L°) + ... ,
in which we again neglect the higher order terms. Putting V = (L − L°), B for ∂F/∂L and W = F(L°), we end up with the linearized condition equations of the form BV + W = 0, which is the same as (6.103).
5This is why the conditional adjustment is sometimes called adjustment by correlates.
6It can be shown that, similarly, Σ_V = k Q_V, where Q_V is the weight coefficient matrix of the residuals.