

1 From matrix algebra we know that if A is a symmetric matrix and X is a vector we get:

\(\frac{\partial}{\partial X}(AX) = A\)  and  \(\frac{\partial}{\partial X}(X^T A X) = 2X^T A\).
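The second identity can be checked numerically. The sketch below (my own illustration, not from the notes, with a made-up symmetric A and vector X) compares the analytic gradient \(2X^T A\) of the quadratic form against a central finite-difference approximation:

```python
import numpy as np

# Hypothetical small symmetric matrix A and vector X
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
X = np.array([1.0, -2.0])

def quad(x):
    # quadratic form x^T A x
    return x @ A @ x

# Analytic gradient from the identity d(X^T A X)/dX = 2 X^T A
grad_analytic = 2.0 * X @ A

# Central finite differences as an independent check
h = 1e-6
grad_fd = np.array([
    (quad(X + h * e) - quad(X - h * e)) / (2 * h)
    for e in np.eye(2)
])

print(np.allclose(grad_analytic, grad_fd, atol=1e-5))  # → True
```

Note that the identity \(2X^T A\) (rather than \(X^T A + X^T A^T\)) relies on A being symmetric, which is exactly the situation for the normal-equations matrix.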

† Note that the normal equations can be obtained directly from the mathematical model by pre-multiplying it by \(A^T P\).
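As a small numerical illustration (my own sketch, with made-up A, P and L), pre-multiplying the parametric model by \(A^T P\) gives the normal equations \(A^T P A\, X = A^T P L\), whose solution agrees with the weighted least-squares estimate computed directly:

```python
import numpy as np

# Hypothetical over-determined model A X ≈ L with weight matrix P
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
L = np.array([1.1, 1.9, 3.2])
P = np.diag([1.0, 2.0, 1.0])      # observation weights

# Pre-multiplying the model by A^T P yields the normal equations N X = U
N = A.T @ P @ A
U = A.T @ P @ L
X_hat = np.linalg.solve(N, U)

# Same estimate from weighted least squares solved directly
X_check = np.linalg.lstsq(np.sqrt(P) @ A, np.sqrt(P) @ L, rcond=None)[0]
print(np.allclose(X_hat, X_check))  # → True
```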

2 A matrix, say N, is positive definite if the value of the quadratic form \(Y^T N Y\) is positive for any non-zero vector Y (of the appropriate dimension).
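The definition can be checked in practice (my own sketch, with a made-up N): spot-check the quadratic form \(Y^T N Y\) on random non-zero vectors, or equivalently attempt a Cholesky factorization, which succeeds exactly when the symmetric matrix is positive definite:

```python
import numpy as np

# Hypothetical symmetric normal-equations matrix
N = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# Spot-check the quadratic form Y^T N Y on random non-zero vectors
rng = np.random.default_rng(0)
forms = [y @ N @ y for y in rng.normal(size=(1000, 2))]
print(min(forms) > 0)  # → True

# Equivalent test: Cholesky factorization exists only for
# positive definite matrices (raises LinAlgError otherwise)
np.linalg.cholesky(N)
```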

3 Here, the vector V is the vector of residuals from the least squares adjustment.

The elements of this matrix are called (Hansen's) weight coefficients.


Note that X is called uncorrelated when its covariance matrix is diagonal, i.e. when N is diagonal. In such a case, we can solve the normal equations separately for each component of X, which agrees with our intuition. The correlation of X is only remotely related to the correlation of L.
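The remark about separate solution can be seen in a small sketch (my own numbers): when N is diagonal, each component of the solution is simply \(U_i / N_{ii}\), so the componentwise solution coincides with the full solve:

```python
import numpy as np

# Hypothetical diagonal normal-equations matrix N and right-hand side U
N = np.diag([4.0, 2.0, 5.0])
U = np.array([8.0, 3.0, 10.0])

# Full solution of the normal equations N X = U
X_full = np.linalg.solve(N, U)

# Component-by-component solution, possible only because N is diagonal
X_comp = U / np.diag(N)

print(np.allclose(X_full, X_comp))  # → True
```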

4 If we have a non-linear model F(L) = 0, it can again be linearized by Taylor's series expansion, yielding:

\(F(L) = F(L^\circ) + \left.\frac{\partial F}{\partial L}\right|_{L^\circ}(L - L^\circ) + \ldots\),

in which we again neglect the higher order terms. Putting V = (L − L°), B for \(\partial F / \partial L\) and W = F(L°), we end up with the linearized condition equations of the form BV + W = 0, which is the same as (6.103).
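A numeric sketch of this linearization (my own example, with a made-up condition \(F(L) = L_1^2 + L_2 - 5\)): evaluate B = ∂F/∂L and W = F(L°) at an approximate point L°, and check that BV + W reproduces F(L) up to higher-order terms:

```python
import numpy as np

def F(L):
    # hypothetical non-linear condition equation F(L) = 0
    return L[0] ** 2 + L[1] - 5.0

L0 = np.array([2.0, 1.0])       # approximate (expansion) point L°
L = np.array([2.01, 0.98])      # point near L°
V = L - L0                      # V = (L - L°)

B = np.array([2.0 * L0[0], 1.0])  # B = dF/dL evaluated at L°
W = F(L0)                         # W = F(L°)

linear = B @ V + W                # linearized condition B V + W
exact = F(L)

print(abs(linear - exact) < 1e-3)  # → True (difference is O(|V|^2))
```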

5 This is why the conditional adjustment is sometimes called adjustment by correlates.

6 It can be shown that, similarly, \(\Sigma_V = \hat{\sigma}_0^2\, Q_V\), where \(Q_V\) is the weight coefficient matrix of the residuals V.
