Introduction to Adjustment Calculus (Third Corrected Edition)
The system of normal equations (6.76) has a solution \hat{X} given by

\hat{X} = N^{-1} U = (A^T P A)^{-1} (A^T P L)   (6.77)

if the normal-equation matrix, N = A^T P A, has an inverse. Note that N is a symmetric, positive-definite matrix.
To discuss the influence of the weight matrix P on the solution vector \hat{X}, let us use a different weight matrix, say P', such that

P' = \gamma P   (6.78)

where \gamma is an arbitrary constant. Substituting (6.78) into (6.77) we get:

\hat{X}' = (A^T P' A)^{-1} (A^T P' L)
         = (A^T \gamma P A)^{-1} (A^T \gamma P L)   (6.79)
         = \frac{1}{\gamma} (A^T P A)^{-1} \gamma (A^T P L)
         = \hat{X} .
This result indicates that the factor k in equation (6.66), used for computing the weight matrix P from \Sigma_L, can be chosen arbitrarily without any influence on \hat{X}, which verifies the statement we made earlier, in section 6.4.4.
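The invariance of the solution under a scaling of the weight matrix can be checked numerically. The sketch below assumes NumPy; the design matrix, observations and weights are invented purely for illustration:

```python
import numpy as np

# Hypothetical design matrix A, observation vector L and weight matrix P
A = np.array([[1.0, 0.0], [-1.0, 1.0], [0.0, -1.0]])
L = np.array([10.0, 5.0, -15.0])
P = np.diag([2.0, 1.0, 4.0])

def solve_parametric(A, P, L):
    """Solve the normal equations N X = U, with N = A^T P A and U = A^T P L."""
    N = A.T @ P @ A
    U = A.T @ P @ L
    return np.linalg.solve(N, U)

X1 = solve_parametric(A, P, L)          # solution with P
X2 = solve_parametric(A, 3.7 * P, L)    # solution with P' = gamma * P

print(np.allclose(X1, X2))  # the arbitrary factor gamma cancels out
```

Any positive scalar may replace 3.7; the factor cancels between N^{-1} and U exactly as in (6.79).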
It should be noted that the vector of discrepancies V, as defined in (6.70), becomes after the minimization the vector of residuals (see 4.8) of the observed quantities. As such, it should again be denoted by a different symbol, say R, to show that it is no longer a vector of variables (a function of X) but a vector of fixed quantities. Some authors use \hat{V} for this purpose, and this is the convention we are going to use (see also 6.4.2). The values \hat{v}_i are computed directly from equation (6.70), in the same units as those of the vector L. The adjusted observations are then given by \hat{L} = L + \hat{V}.
We should keep in mind that one of the main features of the parametric method of adjustment is that the estimate of the vector of unknown parameters, i.e. X, is a direct result of this adjustment as given by equation (6.77).
At this stage, it is worthwhile to go back to the trivial problem of adjustment: the sample mean. According to equation (6.79), we can choose the weights of the individual observations to be inversely proportional to their respective variances, with an arbitrary constant k of proportionality. This indicates that the weights do not have to equal the experimental probabilities, for which \sum_{i=1}^{n} p_i = 1, as we required in sections 6.4.2 and 6.4.3. In this case, the observation equations will be
x = \ell_1 + \hat{v}_1 , with weight p_1 ,
x = \ell_2 + \hat{v}_2 , with weight p_2 ,
  \vdots
x = \ell_n + \hat{v}_n , with weight p_n ,

or, in matrix form,

A X = L + \hat{V} ,

where A = (1, 1, \ldots, 1)^T and L = (\ell_1, \ell_2, \ldots, \ell_n)^T, with weight matrix P = diag (p_1, p_2, \ldots, p_n).
Substituting in equation (6.77) we get the solution, i.e. the weighted mean of the sample, as

\bar{x} = \frac{\sum_{i=1}^{n} p_i \ell_i}{\sum_{i=1}^{n} p_i} ,   (6.80)

which agrees with the result in section 6.4.2 when \sum_{i=1}^{n} p_i equals one.

Formula (6.80) is the general formula used to compute the weighted mean of a sample of weighted observations.
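Formula (6.80) is short enough to sketch in a few lines. The following check assumes NumPy, and the observations and weights are invented for illustration; it also verifies that normalizing the weights to sum to one leaves the result unchanged:

```python
import numpy as np

# Illustrative sample and weights (the weights need not sum to one)
ell = np.array([12.34, 12.30, 12.38, 12.33])   # observations
p   = np.array([4.0, 2.0, 1.0, 2.0])           # weights, e.g. inversely proportional to variances

# Weighted mean, formula (6.80)
x_bar = np.sum(p * ell) / np.sum(p)

# Normalizing the weights so that they sum to one gives the same value,
# in agreement with the development of section 6.4.2
p_norm = p / p.sum()
assert np.isclose(x_bar, np.sum(p_norm * ell))

print(x_bar)
```

Scaling all the weights by an arbitrary constant k likewise cancels between numerator and denominator, which is the scalar counterpart of the matrix result (6.79).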
Example 6.16: Let us have a levelling line connecting two junction points, G and J, the elevations of which, H_G and H_J, are known. The levelling line is divided into three sections, d_1, d_2 and d_3 long. The level differences along the three sections were observed, with results h_1, h_2 and h_3. The observations h_i are considered uncorrelated, with variances proportional to the corresponding lengths d_i, i = 1, 2, 3. It is required to determine the adjusted values of the elevations of points 1 and 2, i.e. \hat{H}_1 and \hat{H}_2 respectively, using the parametric adjustment.

Solution:

From the given data we have: number of observations n = 3; number of unknowns u = 2. Therefore, we have one redundant observation. The independent relationships between the observations and the unknowns are written as follows (each relation corresponds to one observation):
h_1 = H_1 - H_G ,
h_2 = H_2 - H_1 ,
h_3 = H_J - H_2 .
The above relations can be rewritten in the general form used in the previous development:

A X = L ,
(3,2) (2,1) (3,1)

where X = (H_1, H_2)^T and

H_1 = h_1 + H_G = L_1 ,
-H_1 + H_2 = h_2 = L_2 ,
-H_2 = h_3 - H_J = L_3 .

Putting this in matrix form, we get

A = \begin{pmatrix} 1 & 0 \\ -1 & 1 \\ 0 & -1 \end{pmatrix} ,   L = \begin{pmatrix} h_1 + H_G \\ h_2 \\ h_3 - H_J \end{pmatrix} .
The corresponding set of observation equations is:

H_1 = H_G + (h_1 + v_1) ,
-H_1 + H_2 = (h_2 + v_2) ,
-H_2 = -H_J + (h_3 + v_3) .

These observation equations can be written in matrix form as:

V = A X - L ,
(3,1) (3,2) (2,1) (3,1)

where

V = (v_1, v_2, v_3)^T ,   X = (H_1, H_2)^T ,   L = \begin{pmatrix} h_1 + H_G \\ h_2 \\ h_3 - H_J \end{pmatrix} ,

and the design matrix A is given by

A = \begin{pmatrix} 1 & 0 \\ -1 & 1 \\ 0 & -1 \end{pmatrix} .
(3,2)
We assumed that the observed values h_1, h_2 and h_3 are uncorrelated. We will also assume that H_G and H_J are errorless. Hence:

\Sigma_L = diag (S^2_{h_1}, S^2_{h_2}, S^2_{h_3}) .

But S^2_{h_i} is proportional to d_i, i = 1, 2, 3; thus

\Sigma_L = k \, diag (d_1, d_2, d_3) .

Further, we choose k = 1 and we get

P = k \Sigma_L^{-1} = diag (1/d_1, 1/d_2, 1/d_3) .
Applying the method of least squares, the normal equations are

N \hat{X} = U ,
(2,2) (2,1) (2,1)

where

N = A^T P A = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \end{pmatrix} \begin{pmatrix} 1/d_1 & 0 & 0 \\ 0 & 1/d_2 & 0 \\ 0 & 0 & 1/d_3 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -1 & 1 \\ 0 & -1 \end{pmatrix} .

This gives

N = \begin{pmatrix} \frac{1}{d_1} + \frac{1}{d_2} & -\frac{1}{d_2} \\ -\frac{1}{d_2} & \frac{1}{d_2} + \frac{1}{d_3} \end{pmatrix}
and

U = A^T P L = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \end{pmatrix} \begin{pmatrix} 1/d_1 & 0 & 0 \\ 0 & 1/d_2 & 0 \\ 0 & 0 & 1/d_3 \end{pmatrix} \begin{pmatrix} h_1 + H_G \\ h_2 \\ h_3 - H_J \end{pmatrix} .

Hence

U = \begin{pmatrix} \frac{h_1 + H_G}{d_1} - \frac{h_2}{d_2} \\ \frac{h_2}{d_2} - \frac{h_3 - H_J}{d_3} \end{pmatrix} .
The solution \hat{X} is given by \hat{X} = N^{-1} U, where

N^{-1} = \frac{d_1 d_2 d_3}{d_1 + d_2 + d_3} \begin{pmatrix} \frac{1}{d_2} + \frac{1}{d_3} & \frac{1}{d_2} \\ \frac{1}{d_2} & \frac{1}{d_1} + \frac{1}{d_2} \end{pmatrix} .

Performing the multiplication N^{-1} U and realizing that \hat{X} = (\hat{H}_1, \hat{H}_2)^T, we obtain:

\hat{H}_1 = H_G + h_1 + \frac{d_1}{\sum_i d_i} \left( H_J - H_G - \sum_i h_i \right) ,

\hat{H}_2 = H_J - h_3 - \frac{d_3}{\sum_i d_i} \left( H_J - H_G - \sum_i h_i \right) .
Now, we compute the residuals \hat{v}_i from the equation \hat{V} = A \hat{X} - L. Finally, we compute the adjusted observations from \hat{L} = L + \hat{V}. Remembering that H_G and H_J are assumed errorless, we get:

\hat{h}_i = h_i + \hat{v}_i ,   i = 1, 2, 3.
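The whole of Example 6.16 can be verified numerically. The sketch below assumes NumPy; the numerical values of H_G, H_J, the observed differences h_i and the lengths d_i are invented for illustration. It solves the normal equations directly and compares the result with the closed-form expressions for \hat{H}_1 and \hat{H}_2 derived above:

```python
import numpy as np

# Invented data for the levelling line G - 1 - 2 - J
H_G, H_J = 100.000, 102.500           # known junction elevations (m)
h = np.array([1.105, 0.897, 0.502])   # observed level differences (m)
d = np.array([1.2, 0.8, 1.0])         # section lengths (km)

A = np.array([[1.0, 0.0], [-1.0, 1.0], [0.0, -1.0]])   # design matrix
L = np.array([h[0] + H_G, h[1], h[2] - H_J])           # reduced observations
P = np.diag(1.0 / d)                  # weights inversely proportional to d_i

# Normal equations and solution
N = A.T @ P @ A
U = A.T @ P @ L
X = np.linalg.solve(N, U)             # X = (H1_hat, H2_hat)

# Closed-form results derived in the text
w = H_J - H_G - h.sum()               # misclosure of the line
H1 = H_G + h[0] + d[0] / d.sum() * w
H2 = H_J - h[2] - d[2] / d.sum() * w
assert np.allclose(X, [H1, H2])

# Residuals and adjusted observations
V = A @ X - L
h_adj = h + V
print(X, V)
```

A useful sanity check: the adjusted differences h_adj sum exactly to H_J - H_G, i.e. the misclosure is distributed over the sections in proportion to their lengths.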
A local levelling network composed of 6 sections, shown in Figure 6.7, was observed. Note that the arrowheads indicate the direction of increasing elevation. The following table summarizes the observed differences in heights h_i along with the corresponding length of each section.
Section No.   Stations        h_i (m)   length l_i (km)
              from    to
1             a       c        6.16      4
2             a       d       12.57      2
3             c       d        6.41      2
4             a       b        1.09      4
5             b       d       11.58      2
6             b       c        5.07      4
Assume that the variances S^2_{h_i}, i = 1, 2, ..., 6, are proportional to the corresponding lengths l_i. The elevation H_a of station a is considered to be 0 metres. It is required to adjust this levelling net by the parametric method of adjustment and deduce the least-squares estimates \hat{H}_b, \hat{H}_c and \hat{H}_d for the elevations H_b, H_c and H_d of the points b, c and d.
Solution:

From the given data we have: number of independent observations n = 6; number of unknowns u = 3. Hence we have 3 redundant observations, i.e. 3 degrees of freedom. Our mathematical model in this case is linear, i.e.

A X = L ,
(6,3) (3,1) (6,1)

where X = (H_b, H_c, H_d)^T.
The 6 independent observation equations will be (one equation for each observed quantity):

h_1 + v_1 = H_c - H_a = H_c - 0.0 = H_c ,
h_2 + v_2 = H_d - H_a = H_d - 0.0 = H_d ,
h_3 + v_3 = H_d - H_c ,
h_4 + v_4 = H_b - H_a = H_b - 0.0 = H_b ,
h_5 + v_5 = H_d - H_b ,
h_6 + v_6 = H_c - H_b .
The above set of equations can be rewritten in the following form, after substituting the values of h_i:

v_1 = H_c - 6.16 ,
v_2 = H_d - 12.57 ,
v_3 = -H_c + H_d - 6.41 ,
v_4 = H_b - 1.09 ,
v_5 = -H_b + H_d - 11.58 ,
v_6 = -H_b + H_c - 5.07 .
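The observation equations above translate directly into a design matrix, and the network can then be adjusted numerically. The sketch below assumes NumPy and uses the data from the table, with the unknowns ordered as (H_b, H_c, H_d) and H_a = 0 held fixed:

```python
import numpy as np

# Each row is one observation equation v_i = (row of A) . X - h_i,
# with X = (H_b, H_c, H_d) and H_a = 0 held fixed
A = np.array([
    [ 0.0,  1.0, 0.0],   # h1: a -> c
    [ 0.0,  0.0, 1.0],   # h2: a -> d
    [ 0.0, -1.0, 1.0],   # h3: c -> d
    [ 1.0,  0.0, 0.0],   # h4: a -> b
    [-1.0,  0.0, 1.0],   # h5: b -> d
    [-1.0,  1.0, 0.0],   # h6: b -> c
])
L = np.array([6.16, 12.57, 6.41, 1.09, 11.58, 5.07])   # observed h_i (m)
lengths = np.array([4.0, 2.0, 2.0, 4.0, 2.0, 4.0])     # section lengths (km)

# Variances proportional to lengths, hence weights inversely proportional
P = np.diag(1.0 / lengths)

# Normal equations and least-squares solution
N = A.T @ P @ A
U = A.T @ P @ L
X = np.linalg.solve(N, U)   # estimates (H_b, H_c, H_d)

V = A @ X - L               # residuals
print(X)
```

The least-squares condition is characterized by A^T P V = 0, which provides a convenient check on any implementation of this adjustment.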