
3.3.4 Covariance and Variance-Covariance Matrix

Before we start describing the variance-covariance matrix, let us define another statistical quantity needed for this matrix. This quantity is called the covariance, and it is defined for any two components $x^j$ and $x^k$ of a multivariate $X$ as

$$\sigma_{jk} = \int_{R^s} (x^j - \mu_j)(x^k - \mu_k)\,\phi(X)\,dX = E^*\big((x^j - \mu_j)(x^k - \mu_k)\big) = \sigma_{kj} \in R; \quad j, k = 1, 2, \ldots, s. \qquad (3\text{-}42)$$

We note three things in equation (3-42). First, if j = к we see that the expressions for the covariances become identical with those for the variances, namely:

$$\sigma_{jk} = \sigma_{kj} = \sigma_j^2, \quad \text{for } j = k.$$
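As a hands-on illustration (not part of the original text), the defining expression in (3-42) can be estimated from a finite sample; the data and variable names below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bivariate data: x_k depends linearly on x_j,
# so the two components are correlated.
n = 100_000
x_j = rng.normal(loc=2.0, scale=1.0, size=n)
x_k = 0.5 * x_j + rng.normal(loc=0.0, scale=0.5, size=n)

mu_j, mu_k = x_j.mean(), x_k.mean()

# Sample analogue of sigma_jk = E*((x^j - mu_j)(x^k - mu_k)).
sigma_jk = np.mean((x_j - mu_j) * (x_k - mu_k))

# For j = k the same expression reduces to the variance sigma_j^2.
sigma_jj = np.mean((x_j - mu_j) ** 2)
print(sigma_jk)             # close to 0.5, the true covariance here
print(sigma_jj, x_j.var())  # the two agree exactly
```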

Secondly, if the components of the multivariate are statistically independent, the covariances $\sigma_{jk}$ ($j \neq k$) are all equal to zero. To show this, let us write

$$\begin{aligned}
\sigma_{jk} &= \int_{R^s} (x^j - \mu_j)(x^k - \mu_k) \prod_{\ell=1}^{s} \phi_\ell(x^\ell)\, dx^\ell \\
&= \int_R (x^j - \mu_j)\,\phi_j(x^j)\,dx^j \int_R (x^k - \mu_k)\,\phi_k(x^k)\,dx^k \\
&= \left[\int_R x^j \phi_j(x^j)\,dx^j - \mu_j\right]\left[\int_R x^k \phi_k(x^k)\,dx^k - \mu_k\right] \\
&= [\mu_j - \mu_j]\,[\mu_k - \mu_k] = 0.
\end{aligned}$$

Finally, noting that for a pair of components of a statistically independent multivariate we have

$$\sigma_{jk} = E^*\big((x^j - \mu_j)(x^k - \mu_k)\big) = 0, \qquad (3\text{-}43)$$

we can write:

$$\begin{aligned}
\sigma_{jk} &= E^*\big(x^j x^k - x^j \mu_k - \mu_j x^k + \mu_j \mu_k\big) \\
&= E^*(x^j x^k) - \mu_k E^*(x^j) - \mu_j E^*(x^k) + \mu_j \mu_k \\
&= E^*(x^j x^k) - \mu_k \mu_j - \mu_j \mu_k + \mu_j \mu_k \\
&= E^*(x^j x^k) - \mu_j \mu_k = E^*(x^j x^k) - E^*(x^j)\,E^*(x^k) = 0.
\end{aligned}$$

Hence, for statistically independent components $x^j$ and $x^k$, we get

$$E^*(x^j x^k) = E^*(x^j)\, E^*(x^k), \qquad (3\text{-}44)$$

or more generally, for r independent components we get

$$E^*\Big(\prod_{\ell=1}^{r} x^\ell\Big) = \prod_{\ell=1}^{r} E^*(x^\ell). \qquad (3\text{-}45)$$

Equation (3-45) completes the list of properties of the E* operator stated in section 3.2.3.
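A quick numerical check of property (3-44) may help fix the idea; this is a sketch under the assumption that the two components are drawn independently, with hypothetical distributions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Statistically independent components, cf. equations (3-43) to (3-45).
x_j = rng.uniform(0.0, 1.0, size=n)
x_k = rng.normal(3.0, 2.0, size=n)

lhs = np.mean(x_j * x_k)       # E*(x^j x^k)
rhs = x_j.mean() * x_k.mean()  # E*(x^j) E*(x^k)
print(lhs, rhs)                # agree up to sampling noise, as in (3-44)
```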

As we stated in section 3.3.3, the variance (σ²) of a multivariate is not enough to fully characterize the statistical properties of the multivariate on the level of second moments. To get the same amount of statistical information as the variance alone gives in the univariate case, we also have to take the covariances into account. The variances and covariances can be assembled into one matrix, called the variance-covariance matrix or just the covariance matrix.

The variance-covariance matrix of a multivariate $X$ is usually denoted by $\Sigma^*_X$ and looks as follows:

$$\Sigma^*_X = \begin{bmatrix} \sigma_1^2 & \sigma_{12} & \sigma_{13} & \cdots & \sigma_{1s} \\ \sigma_{21} & \sigma_2^2 & \sigma_{23} & \cdots & \sigma_{2s} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \sigma_{s1} & \sigma_{s2} & \sigma_{s3} & \cdots & \sigma_s^2 \end{bmatrix} \qquad (3\text{-}46)$$

It is not difficult to see that the variance-covariance matrix can also be written in terms of the mathematical expectation as follows:

$$\Sigma^*_X = E^*\big[\,(X - E^*(X))\,(X - E^*(X))^T\,\big] \qquad (3\text{-}47)$$

which is the expectation of the dyadic product of two vectors. Note that the superscript T in the above formula stands for the matrix transpose. The proof of equation (3-47) is left to the student.

Note that the variance-covariance matrix is always symmetrical: the diagonal elements are the variances of the components, and the off-diagonal elements are the covariances between the different pairs of components. The necessary and sufficient condition for the variance-covariance matrix to be diagonal, i.e. for all the covariances to be zero, is the statistical independence of the multivariate. The variance-covariance matrix is one of the most fundamental quantities used in adjustment calculus. It is positive-definite (with diagonal elements always positive), and its inverse exists if and only if there is no absolute correlation between components.
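The following sketch (hypothetical data, numpy-based) assembles the matrix directly from the dyadic-product form (3-47) and checks the properties just listed; it is an illustration, not a prescribed algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical multivariate sample: n observations of s = 3 components,
# mixed through a matrix A so that the components are correlated.
n, s = 50_000, 3
A = rng.normal(size=(s, s))
X = rng.normal(size=(n, s)) @ A.T

# Equation (3-47): Sigma = E*[(X - E*(X)) (X - E*(X))^T],
# with E* estimated by the sample average of the dyadic products.
mu = X.mean(axis=0)
D = X - mu
Sigma = (D.T @ D) / n               # s x s variance-covariance matrix

print(np.allclose(Sigma, Sigma.T))  # True: always symmetric
print(np.diag(Sigma))               # diagonal = variances, all positive
np.linalg.cholesky(Sigma)           # succeeds only if positive-definite
```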

3.3.5 Random Multisample, its PDF and CDF

Like in the univariate case, we can also define here a quantity $\eta$ corresponding to the random sample $\xi$ defined in section 3.1.1, as follows:

$$\eta = \begin{bmatrix} \xi^1 \\ \xi^2 \\ \vdots \\ \xi^s \end{bmatrix}, \qquad \xi^\ell \in R^{n_\ell}, \quad \ell = 1, 2, \ldots, s, \qquad (3\text{-}48)$$

which is a straightforward generalization of a random sample, and will be called a random multisample. From the above definition, it is obvious that $\eta$ has s components (constituents), $\xi^j$, each of which is a random sample on its own. The number of elements $n_j$ in each component $\xi^j$ may or may not be the same.

We can also define the definition set as well as the actual (experimental) PDF and CDF of a multisample in very much the same way as we have done for a random sample. Also, the distribution and cumulative distribution histograms and polygons can be used for two-dimensional multisamples. The development of these concepts, however, is left to the student.
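As one possible concrete reading of definition (3-48), a multisample can be held as a list of unequal-length samples, and an experimental PDF and CDF estimated per component; the sizes and distributions below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# A random multisample eta with s = 2 components of different sizes
# n_1 and n_2; each component is a random sample on its own.
eta = [
    rng.normal(0.0, 1.0, size=500),   # xi^1, n_1 = 500 elements
    rng.uniform(0.0, 4.0, size=800),  # xi^2, n_2 = 800 elements
]

# Experimental (actual) PDF of each component as a normalized histogram,
# and the experimental CDF as its running sum over the bins.
for j, xi in enumerate(eta, start=1):
    pdf, edges = np.histogram(xi, bins=20, density=True)
    cdf = np.cumsum(pdf * np.diff(edges))
    print(f"xi^{j}: n = {xi.size}, total probability = {cdf[-1]:.3f}")
```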