
6.4.4 Variance-Covariance Matrix of the Mean of a Multisample

We have seen in section 6.4.3 that the mean $\bar{\ell}$ of a sample L also has a standard deviation $S_{\bar{\ell}}$ associated with it. This standard deviation is $\sqrt{n}$-times smaller than the standard deviation $S$ of the sample itself and can be interpreted as a measure of the confidence we have in the correctness of the mean $\bar{\ell}$. Evidently, our confidence increases with the number of observations.

We can now ask ourselves the following question: does the mean $\bar{L}$ of a multisample L also have a variance-covariance matrix associated with it? The answer is: there is nothing to prevent us from defining it by generalizing the discovery from the last section. We get

$$\Sigma_{\bar{L}} = \left[ S_{\bar{\ell}_i \bar{\ell}_j} \right],$$

where

$$S^2_{\bar{\ell}_i} = \frac{1}{n_i}\, S^2_{\ell_i}$$

and

$$S_{\bar{\ell}_i \bar{\ell}_j} = \frac{1}{n_i}\, S_{\ell_i \ell_j}\,.$$

Here we have to require again that $n_i = n_j$, i.e. that both components of the multisample have the same number of elements (see section 3.3.5). Obviously, if this requirement is satisfied for all the pairs of components, we have

$$n_1 = n_2 = \ldots = n_s = n$$

and

$$\Sigma_{\bar{L}} = \frac{1}{n}\, \Sigma_L\,. \qquad (6.61)$$

By analogy, the variance-covariance matrix obtained via the covariance law (see section 6.3.1) from the variance-covariance matrix of the mean of the multisample is associated with the mean of the derived multisample, or statistical estimate $\hat{X}$. We say that

$$\Sigma_{\hat{X}} = B\, \Sigma_{\bar{L}}\, B^T \qquad (6.63)$$

is the variance-covariance matrix of the statistical estimate $\hat{X}$, i.e. of the solution of the uniquely determined mathematical model

$$X = F(L)\,.$$

Similar statements can be made for other laws of propagation of errors. The development of these is left to the student, who should also compare the results of this section with the solution of Example 6.14.
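To make the propagation concrete, here is a minimal numerical sketch of (6.61) and (6.63). The variances and the Jacobian B below are invented for illustration; they are not the data of the examples in this book.

```python
import numpy as np

# Assumed (illustrative) variance-covariance matrix of an uncorrelated
# two-component multisample L, and an assumed Jacobian B of X = F(L).
Sigma_L = np.diag([0.004, 0.0056])     # cm^2
n = 5                                  # elements in each component
B = np.array([[1.0, 1.0],
              [2.0, 3.0]])             # hypothetical Jacobian

# Eq. (6.61): variance-covariance matrix of the mean of the multisample
Sigma_L_bar = Sigma_L / n

# Eq. (6.63): covariance law applied to the mean
Sigma_X_hat = B @ Sigma_L_bar @ B.T

print(Sigma_X_hat)                     # matrix of the estimate X-hat
print(np.sqrt(np.diag(Sigma_X_hat)))   # standard deviations of the estimates
```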

Example 6.15:

Let us take again the experiment described in Examples 6.1, 6.3 and 6.4. This time we shall be interested in deriving the variance-covariance matrix of the solution vector $\hat{X}$.

Solution: First we evaluate $\Sigma_{\bar{L}}$ from eq. (6.61). We obtain

$$S^2_{\bar{a}} = \frac{1}{5}\, S^2_a = \frac{0.004}{5} = 0.0008\ \mathrm{cm}^2,$$

$$S^2_{\bar{b}} = \frac{1}{5}\, S^2_b = \frac{0.0056}{5} = 0.0011\ \mathrm{cm}^2.$$

Since $S_{ab} = 0$ we get

$$\Sigma_{\bar{L}} = \begin{bmatrix} 0.0008 & 0 \\ 0 & 0.0011 \end{bmatrix}\ \mathrm{cm}^2 = \frac{1}{5}\, \Sigma_L\,.$$

Now $\Sigma_{\hat{X}}$ can be evaluated from equation (6.63) and we have

$$\Sigma_{\hat{X}} = B \left( \tfrac{1}{5}\, \Sigma_L \right) B^T = \tfrac{1}{5}\, B\, \Sigma_L\, B^T = \tfrac{1}{5}\, \Sigma_X\,,$$

or

$$\Sigma_{\hat{X}} = \begin{bmatrix} 0.00081\ \mathrm{cm}^2 & 0.01079\ \mathrm{cm}^3 \\ 0.01079\ \mathrm{cm}^3 & 21.51254\ \mathrm{cm}^4 \end{bmatrix}.$$

Thus the standard deviations of the estimates $\hat{d}$ and $\hat{P}$ are given by

$$S_{\hat{d}} = \sqrt{0.00081\ \mathrm{cm}^2} = 0.028\ \mathrm{cm}\,,$$

$$S_{\hat{P}} = \sqrt{21.51254\ \mathrm{cm}^4} = 4.64\ \mathrm{cm}^2\,.$$
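As a quick check (a sketch only; the symbol $\hat{P}$ for the second estimate is as reconstructed above), the standard deviations are simply the square roots of the diagonal of $\Sigma_{\hat{X}}$:

```python
import numpy as np

# Sigma_X_hat as given in Example 6.15
Sigma_X_hat = np.array([[0.00081, 0.01079],
                        [0.01079, 21.51254]])

# Standard deviations of the two estimates: roots of the diagonal elements
S_d, S_P = np.sqrt(np.diag(Sigma_X_hat))
print(round(S_d, 3))   # 0.028 (cm)
print(round(S_P, 2))   # 4.64 (cm^2)
```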

6.4.5 The Method of Least-Squares, Weight Matrix

The least-squares principle, as applied to the trivial identity transformation, i.e. the sample mean, can be generalized to other mathematical models. Taking the general formulation of the problem of adjustment as described in section 6.4.1, i.e.

F(L + V, X) = 0,

we can again ask for such X that would make the value of the quadratic form of the weighted discrepancies, $V^T P V$, a minimum, i.e.

$$\min_{X \in R} V^T P V\,. \qquad (6.64)$$

The condition (6.64) is, for the majority of mathematical models, enough to specify such $X = \hat{X}$ uniquely. The approach to adjustment using this condition became known as the method of least-squares.
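For the one model treated so far, the sample mean, condition (6.64) can be checked directly. The sketch below uses made-up observations and weights; the weighted mean minimizes $V^T P V$:

```python
import numpy as np

# Identity model L + V = x * 1 (the sample mean); illustrative numbers.
L = np.array([10.02, 9.98, 10.05])
P = np.diag([1.0, 1.0, 4.0])          # assumed weights

def quad_form(x):
    V = x * np.ones(3) - L            # discrepancies for candidate x
    return V @ P @ V

w = np.diag(P)
x_hat = (w @ L) / w.sum()             # weighted mean

# Nearby candidates give a strictly larger quadratic form:
assert quad_form(x_hat) < quad_form(x_hat - 0.01)
assert quad_form(x_hat) < quad_form(x_hat + 0.01)
print(x_hat)
```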

The question remains here as how to choose the matrix P. In the case of the sample mean we have used

$$P = \mathrm{diag}\,\bigl(K/S^2_{\bar{\ell}_1},\; K/S^2_{\bar{\ell}_2},\; \ldots,\; K/S^2_{\bar{\ell}_m}\bigr),$$

that is

$$P = K\ \mathrm{diag}\,\bigl(1/S^2_{\bar{\ell}_1},\; 1/S^2_{\bar{\ell}_2},\; \ldots,\; 1/S^2_{\bar{\ell}_m}\bigr)\,.$$

Using the notation developed for the multisample, this can be rewritten as:

$$P = K\, \Sigma_{\bar{L}}^{-1}\,, \qquad (6.65)$$

which indicates that the matrix P is obtained by multiplying the constant K by the inverse of the variance-covariance matrix of the means of the observations. This is in our case a diagonal matrix, as we have postulated the sample L to be uncorrelated.
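A minimal sketch of (6.65), with illustrative variances of the means:

```python
import numpy as np

# Weight matrix from eq. (6.65): P = K * inverse of the variance-covariance
# matrix of the means.  Diagonal here, since L is postulated uncorrelated.
K = 1.0                                   # arbitrary scale constant
Sigma_L_bar = np.diag([0.0008, 0.0011])   # assumed variances of the means
P = K * np.linalg.inv(Sigma_L_bar)
print(P)                                  # weights are reciprocal variances
```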

We again notice that, mathematically, there is not much difference between a sample and a multisample; hence they can be treated in much the same way. Thus, there is no basic difference between the apparently trivial adjustment of the sample mean and the general problem of adjustment. The only difference is that in the first case X is a vector of one component, while generally it may have many components.

This gives rise to the question of what the role of K (K having been a scalar equal to $S^2_{\bar{x}}$ in the adjustment of the mean of a sample) would be in the least-squares method, where X has several constituents. Let us just say at this time that we usually compute the weight matrix P, as it is called in the method of least-squares, as

$$P = k\, \Sigma_{\bar{L}}^{-1}\,, \qquad (6.66)$$

where k is an arbitrarily chosen constant, the meaning of which will be shown later. This can be done because, as will also be shown later, the solution $\hat{X}$ is independent of k, since k does not change the ratios between the weights, or variances, of the individual observations.
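The independence of the solution from k can be illustrated with the weighted mean, the one least-squares solution already available at this point (all numbers are made up):

```python
import numpy as np

# Scaling the weight matrix by an arbitrary k (eq. 6.66) leaves the
# least-squares solution unchanged: the weighted mean depends only on
# the ratios of the weights, which k does not alter.
L = np.array([10.02, 9.98, 10.05])
Sigma_L_bar = np.diag([0.004, 0.004, 0.001])

for k in (1.0, 5.0, 0.3):
    P = k * np.linalg.inv(Sigma_L_bar)   # P = k * Sigma_inverse
    w = np.diag(P)
    print(k, (w @ L) / w.sum())          # same solution for every k
```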

In this course we shall be dealing with only two particular mathematical models which are the most frequently encountered in practice. In these models, we shall use the following notation:

n for the number of constituents of the primary, or original, multisample L;

u for the number of constituents of the derived, or unknown (to be derived), multisample X;

r for the number of independent equations (relationships) that can be formulated between the constituents of L and X.

Moreover, we shall consider these models to be linear. The first model is

A X = L , (6.67)

in which A is an n by u matrix, X is a u by 1 vector and L is an n by 1 vector (n = r > u). The adjustment of this model is usually called parametric adjustment, adjustment of observation equations, or adjustment of indirect observations, etc.
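A hedged sketch of adjusting model (6.67): the normal-equation solution $\hat{X} = (A^T P A)^{-1} A^T P L$ used below is the standard least-squares result for this model, not derived in this section; A, L and the weights are illustrative values only.

```python
import numpy as np

# Parametric adjustment of the linear model A X = L (eq. 6.67).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])              # n = 3 observations, u = 2 unknowns
L = np.array([2.01, 2.99, 5.02])        # observed values
P = np.diag([1.0, 1.0, 2.0])            # assumed weight matrix

N = A.T @ P @ A                          # normal matrix
X_hat = np.linalg.solve(N, A.T @ P @ L)  # least-squares estimate
V = A @ X_hat - L                        # discrepancies
print(X_hat, V @ P @ V)                  # estimate and minimized V'PV
```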

The second model is

$$B L = C\,, \qquad (6.68)$$

in which B is an r by n matrix, L is an n by 1 vector and C is an r by 1 vector (r < n). The adjustment of this model is known as conditional adjustment, adjustment of condition equations, etc.
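For model (6.68) one seeks corrections V minimizing $V^T P V$ subject to $B(L + V) = C$. The Lagrange-multiplier solution used below is the standard one and is not derived in this section; the data (three angles of a triangle that must sum to 180 degrees) are invented for illustration.

```python
import numpy as np

# Conditional adjustment of the model B L = C (eq. 6.68).
B = np.array([[1.0, 1.0, 1.0]])          # r = 1 condition, n = 3 observations
L = np.array([59.95, 60.05, 60.06])      # measured angles (degrees)
C = np.array([180.0])                    # the condition: angles sum to 180
P = np.eye(3)                            # equal weights assumed

Pinv = np.linalg.inv(P)
M = B @ Pinv @ B.T                       # r-by-r matrix of the normals
k = np.linalg.solve(M, C - B @ L)        # Lagrange multipliers (correlates)
V = Pinv @ B.T @ k                       # corrections to the observations
print(L + V, B @ (L + V))                # adjusted angles satisfy the condition
```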

The two mathematical models are evidently quite special, since they are both linear. Fortunately, many problems met in practice, although non-linear by nature, can be linearized. This is the reason why the two models treated here are important.