- Introduction to Adjustment Calculus (Third Corrected Edition)
- Introduction
- 2. Fundamentals of the Mathematical Theory of Probability
- 3.1.4 Variance of a Sample
- 3.2.4 Basic Postulate (Hypothesis) of Statistics, Testing
- 3.3.4 Covariance and Variance-Covariance Matrix
- 3.3.6 Mean and Variance-Covariance Matrix of a Multisample
- 4.2 Random (Accidental) Errors
- 4.10 Other Measures of Dispersion
- 5. Least-Squares Principle
- 5.2 The Sample Mean as "The Maximum Probability Estimator"
- 5.4 Least-Squares Principle for Random Multivariate
- 6.4.4 Variance-Covariance Matrix of the Mean of a Multisample
- 6.4.6 Parametric Adjustment
- 6.4.7 Variance-Covariance Matrix of the Parametric Adjustment Solution Vector, Variance Factor and Weight Coefficient Matrix
- 6.4.10 Conditional Adjustment
- Areas under the Standard Normal Curve from 0 to t
3.3.4 Covariance and Variance-Covariance Matrix

Before we start describing the variance-covariance matrix, let us define another statistical quantity needed for this matrix. This quantity is called the covariance, and it is defined for any two components x^j and x^k of a multivariate X as

$$\sigma_{jk} = \int_{R^s} (x^j - \mu_j)(x^k - \mu_k)\,\phi(x)\,dx = E^*\big((x^j - \mu_j)(x^k - \mu_k)\big) \in R \,; \quad j, k = 1, 2, \ldots, s. \qquad (3.42)$$
We note three things in equation (3.42). First, if j = k we see that the expressions for the covariances become identical with those for the variances, namely:

$$\sigma_{jk} = \sigma_{kj} = \sigma_j^2 \,, \quad \text{for } j = k.$$
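As a numerical illustration of the covariance definition (3.42) in its discrete form, the following Python sketch computes a covariance and, for j = k, a variance from a small joint pmf. The pmf and names are invented for illustration, not taken from the text.

```python
# Hypothetical discrete bivariate: joint pmf phi(x1, x2) on four points.
phi = {
    (0, 0): 0.2, (0, 1): 0.1,
    (1, 0): 0.1, (1, 1): 0.6,
}

def expectation(f):
    """E*(f(x)): sum of f(x) * phi(x) over the definition set."""
    return sum(f(x) * p for x, p in phi.items())

mu1 = expectation(lambda x: x[0])   # mean of the first component
mu2 = expectation(lambda x: x[1])   # mean of the second component

# covariance sigma_12 = E*((x1 - mu1)(x2 - mu2)), the discrete analogue of (3.42)
sigma12 = expectation(lambda x: (x[0] - mu1) * (x[1] - mu2))

# for j = k the same formula yields the variance sigma_1^2
sigma1_sq = expectation(lambda x: (x[0] - mu1) ** 2)
```

With these numbers, sigma12 works out to 0.11 and sigma1_sq to 0.21; setting j = k really does reduce the covariance formula to the variance.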
Secondly, if the components of the multivariate are statistically independent, the covariances (j ≠ k) are all equal to zero. To show this, let us write

$$\sigma_{jk} = \int_{R^s} (x^j - \mu_j)(x^k - \mu_k) \prod_{\ell=1}^{s} \phi_\ell(x^\ell)\,dx^\ell$$
$$= \int_R (x^j - \mu_j)\,\phi_j(x^j)\,dx^j \cdot \int_R (x^k - \mu_k)\,\phi_k(x^k)\,dx^k$$
$$= \Big[\int_R x^j \phi_j(x^j)\,dx^j - \mu_j\Big] \cdot \Big[\int_R x^k \phi_k(x^k)\,dx^k - \mu_k\Big]$$
$$= [\mu_j - \mu_j]\,[\mu_k - \mu_k] = 0\,.$$
Finally, noting that for a pair of components of a statistically independent multivariate we have

$$\sigma_{jk} = E^*\big((x^j - \mu_j)(x^k - \mu_k)\big) = 0\,, \qquad (3.43)$$

we can write:

$$\sigma_{jk} = E^*\big(x^j x^k - x^j \mu_k - \mu_j x^k + \mu_j \mu_k\big)$$
$$= E^*(x^j x^k) - \mu_k E^*(x^j) - \mu_j E^*(x^k) + \mu_j \mu_k$$
$$= E^*(x^j x^k) - \mu_k \mu_j - \mu_j \mu_k + \mu_j \mu_k$$
$$= E^*(x^j x^k) - \mu_j \mu_k = E^*(x^j x^k) - E^*(x^j)\,E^*(x^k) = 0\,.$$

Hence, for statistically independent components x^j and x^k, we get

$$E^*(x^j x^k) = E^*(x^j) \cdot E^*(x^k)\,, \qquad (3.44)$$

or more generally, for r independent components we get

$$E^*\Big(\prod_{\ell=1}^{r} x^\ell\Big) = \prod_{\ell=1}^{r} E^*(x^\ell)\,. \qquad (3.45)$$

Equation (3.45) completes the list of properties of the E* operator stated in section 3.2.3.
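The factorization property (3.44) can be checked numerically on a small discrete example. The marginal pmfs below are made up for illustration; the joint pmf is built as their product, which is exactly what statistical independence means.

```python
# Invented marginal pmfs of two components.
phi1 = {0: 0.3, 1: 0.7}        # marginal pmf of x1
phi2 = {-1: 0.5, 2: 0.5}       # marginal pmf of x2

# independence: the joint pmf factorizes, phi(x1, x2) = phi1(x1) * phi2(x2)
joint = {(a, b): p * q for a, p in phi1.items() for b, q in phi2.items()}

E = lambda f: sum(f(a, b) * p for (a, b), p in joint.items())

mu1, mu2 = E(lambda a, b: a), E(lambda a, b: b)
sigma12 = E(lambda a, b: (a - mu1) * (b - mu2))

# property (3.44): E*(x1 x2) = E*(x1) * E*(x2), hence the covariance vanishes
assert abs(E(lambda a, b: a * b) - mu1 * mu2) < 1e-12
assert abs(sigma12) < 1e-12
```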
As we stated in section 3.3.3, the variance (σ²) of a multivariate is not enough to fully characterize the statistical properties of the multivariate on the level of second moments. To get the same amount of statistical information as given by the variance alone (in the univariate case), we also have to take the covariances into account. The variances and covariances can be assembled into one matrix, called the variance-covariance matrix or just the covariance matrix.
The variance-covariance matrix of a multivariate X is usually denoted by Σ*_X and looks as follows:

$$\Sigma_X^* = \begin{bmatrix} \sigma_1^2 & \sigma_{12} & \sigma_{13} & \cdots & \sigma_{1s} \\ \sigma_{21} & \sigma_2^2 & \sigma_{23} & \cdots & \sigma_{2s} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \sigma_{s1} & \sigma_{s2} & \sigma_{s3} & \cdots & \sigma_s^2 \end{bmatrix} \qquad (3.46)$$
It is not difficult to see that the variance-covariance matrix can also be written in terms of the mathematical expectation as follows:

$$\Sigma_X^* = E^*\big[(X - E^*(X))\,(X - E^*(X))^T\big]\,, \qquad (3.47)$$

which is the expectation of a dyadic product of two vectors. Note that the superscript T in the above formula stands for transposition in matrix operations. The proof of equation (3.47) is left to the student.
Note that the variance-covariance matrix is always symmetric: the diagonal elements are the variances of the components and the off-diagonal elements are the covariances between the different pairs of components. The necessary and sufficient condition for the variance-covariance matrix to be diagonal, i.e. for all the covariances to be zero, is the statistical independence of the multivariate. The variance-covariance matrix is one of the most fundamental quantities used in adjustment calculus. It is positive-definite (with diagonal elements always positive) and its inverse exists if and only if there is no absolute correlation between components.
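As a sketch of how (3.46) and (3.47) fit together, the following Python fragment builds the variance-covariance matrix of a discrete bivariate directly from the dyadic-product expectation. The pmf and all names are invented for illustration.

```python
# Invented joint pmf of a bivariate: {(x1, x2): probability}.
phi = {(0.0, 1.0): 0.25, (1.0, 2.0): 0.5, (2.0, 4.0): 0.25}

s = 2  # dimension of the multivariate
mu = [sum(x[j] * p for x, p in phi.items()) for j in range(s)]

# Sigma[j][k] = E*((x^j - mu_j)(x^k - mu_k)): entrywise form of the
# dyadic-product expectation E*[(X - mu)(X - mu)^T] in (3.47)
Sigma = [[sum((x[j] - mu[j]) * (x[k] - mu[k]) * p for x, p in phi.items())
          for k in range(s)] for j in range(s)]

# symmetric, variances on the diagonal, covariances off the diagonal
assert Sigma[0][1] == Sigma[1][0]
assert all(Sigma[j][j] > 0 for j in range(s))

# a positive determinant here means no absolute correlation between the
# two components, so the inverse of Sigma exists
det = Sigma[0][0] * Sigma[1][1] - Sigma[0][1] * Sigma[1][0]
assert det > 0
```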
3.3.5 Random Multisample, its PDF and CDF
Like in the univariate case, we can also define here a quantity η corresponding to the random sample ξ defined in section 3.1.1, as follows:

$$\eta = \begin{bmatrix} \xi^1 \\ \xi^2 \\ \vdots \\ \xi^s \end{bmatrix}, \qquad \xi^j \in R^{n_j}\,, \quad j = 1, 2, \ldots, s\,, \qquad (3.48)$$

which is a straightforward generalization of a random sample and will be called a random multisample. From the above definition, it is obvious that η has s components (constituents), ξ^j, each of which is a random sample on its own. The number of elements n_j in each component ξ^j may or may not be the same.
We can also define the definition set as well as the actual (experimental) PDF and CDF of a multisample in very much the same way as we have done for a random sample. Also, the distribution and cumulative distribution histograms and polygons can be used for two-dimensional multisamples. The development of these concepts, however, is left to the student.
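A multisample as in (3.48) can be represented concretely as a list of samples of possibly different lengths; the values below are invented for illustration, and each component's actual mean is computed exactly as for a univariate sample.

```python
# A multisample with s = 2 components; the element counts n_j differ.
eta = [
    [2.1, 2.3, 2.2],            # xi^1, n_1 = 3
    [5.0, 5.4, 5.2, 5.4],       # xi^2, n_2 = 4
]

# actual (experimental) mean of each component, one univariate sample at a time
means = [sum(xi) / len(xi) for xi in eta]
```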