
Appendix

A.1 Kronecker Product

Kronecker products, also known as direct products or tensor products, are used frequently in this book, especially when working with 2D arrays.

Given two matrices $A = [a_{ij}] \in \mathbb{C}^{m \times n}$ and $B = [b_{ij}] \in \mathbb{C}^{p \times q}$, the Kronecker product of A and B is defined as the partitioned matrix:

$$
A \otimes B =
\begin{bmatrix}
a_{11}B & a_{12}B & \cdots & a_{1n}B \\
a_{21}B & a_{22}B & \cdots & a_{2n}B \\
\vdots   & \vdots   & \ddots & \vdots \\
a_{m1}B & a_{m2}B & \cdots & a_{mn}B
\end{bmatrix} \in \mathbb{C}^{mp \times nq}
\tag{A.1}
$$
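As a quick numerical illustration of (A.1), here is a minimal NumPy sketch; the matrices A and B are arbitrary values chosen only for demonstration, and np.kron computes exactly this partitioned-block construction:

```python
import numpy as np

# Arbitrary example matrices (values chosen only for illustration).
A = np.array([[1, 2],
              [3, 4]])        # m x n, here m = n = 2
B = np.array([[0, 5],
              [6, 7]])        # p x q, here p = q = 2

K = np.kron(A, B)             # (mp) x (nq) = 4 x 4

# Each p x q block of K equals a_ij * B, matching the partitioned form in (A.1).
assert np.array_equal(K[0:2, 0:2], A[0, 0] * B)
assert np.array_equal(K[0:2, 2:4], A[0, 1] * B)
assert np.array_equal(K[2:4, 0:2], A[1, 0] * B)
print(K)
```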

Very often, Kronecker products are used with the vec{ } operator. Here, vec{A} denotes a vector-valued function that maps an m × n matrix A into an mn-dimensional column vector by stacking the columns of the matrix A. Given $\Upsilon_1 = [y_{1ij}] \in \mathbb{C}^{y_1 \times y_2}$, $\Upsilon_2 = [y_{2ij}] \in \mathbb{C}^{y_2 \times y_3}$, and $\Upsilon_3 = [y_{3ij}] \in \mathbb{C}^{y_3 \times y_4}$, the following important identity relates the vec{ } operator with the Kronecker product:

$$
\operatorname{vec}\{\Upsilon_1 \Upsilon_2 \Upsilon_3\}
= \left(\Upsilon_3^{T} \otimes \Upsilon_1\right) \operatorname{vec}\{\Upsilon_2\}
\tag{A.2}
$$
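Identity (A.2) is easy to verify numerically. The sketch below uses arbitrary random matrices of compatible sizes and implements vec{ } with column-major (order="F") reshaping:

```python
import numpy as np

def vec(M):
    # Stack the columns of M into a single column vector (column-major order).
    return M.reshape(-1, 1, order="F")

rng = np.random.default_rng(0)
Y1 = rng.standard_normal((3, 4))   # y1 x y2
Y2 = rng.standard_normal((4, 2))   # y2 x y3
Y3 = rng.standard_normal((2, 5))   # y3 x y4

lhs = vec(Y1 @ Y2 @ Y3)
rhs = np.kron(Y3.T, Y1) @ vec(Y2)
assert np.allclose(lhs, rhs)       # identity (A.2) holds numerically
```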


A.2 Special Vectors and Matrix Notations

This section gives a brief summary of the notation used in this book. For any positive integer p, $I_p$ denotes the p × p identity matrix, and $\Pi_p$ denotes the p × p exchange matrix, with ones on its antidiagonal and zeros elsewhere:

$$
\Pi_p =
\begin{bmatrix}
0 & 0 & \cdots & 0 & 1 \\
0 & 0 & \cdots & 1 & 0 \\
\vdots & \vdots & \iddots & \vdots & \vdots \\
0 & 1 & \cdots & 0 & 0 \\
1 & 0 & \cdots & 0 & 0
\end{bmatrix} \in \mathbb{R}^{p \times p}
\tag{A.3}
$$

$\Pi_p$ is a symmetric matrix and has the property that $\Pi_p^2 = I_p$. It is not difficult to show that the premultiplication of a matrix by $\Pi_p$ will reverse the order of its rows, whereas the postmultiplication of a matrix by $\Pi_p$ will reverse the order of its columns.
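Both properties are easy to check numerically. In the sketch below, $\Pi_p$ is built with np.fliplr(np.eye(p)), one convenient way to place ones on the antidiagonal:

```python
import numpy as np

p = 4
Pi = np.fliplr(np.eye(p))     # exchange matrix: ones on the antidiagonal

M = np.arange(p * p).reshape(p, p)

assert np.array_equal(Pi @ Pi, np.eye(p))   # Pi_p^2 = I_p
assert np.array_equal(Pi @ M, M[::-1, :])   # Pi @ M reverses the rows of M
assert np.array_equal(M @ Pi, M[:, ::-1])   # M @ Pi reverses the columns of M
```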

A diagonal matrix $\Phi_d$ with the diagonal elements $\varphi_1, \varphi_2, \ldots, \varphi_d$ is denoted as

$$
\Phi_d =
\begin{bmatrix}
\varphi_1 & 0 & \cdots & 0 & 0 \\
0 & \varphi_2 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & \varphi_{d-1} & 0 \\
0 & 0 & \cdots & 0 & \varphi_d
\end{bmatrix} \in \mathbb{C}^{d \times d} \text{ or } \mathbb{R}^{d \times d}
\tag{A.4}
$$
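In NumPy terms, $\Phi_d$ corresponds to np.diag applied to the vector of diagonal elements; the values below are hypothetical placeholders:

```python
import numpy as np

phi = np.array([0.5, 1.0, 2.0])   # hypothetical diagonal elements phi_1..phi_d
Phi = np.diag(phi)                # d x d diagonal matrix, as in (A.4)

x = np.ones(3)
# Multiplying by a diagonal matrix scales each component of x by phi_i.
assert np.allclose(Phi @ x, phi * x)
```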

A.3 FLOPS

The easiest way to measure computational effort is to count the number of floating-point operations (FLOPS) in an algorithm. A FLOP is an addition, subtraction, multiplication, or division. As a rough model of computational expenditure, each FLOP is assumed to take the same amount of computational time; thus, algorithms with higher FLOP counts usually take longer to run than algorithms with lower FLOP counts.


 

 

For example, consider the inner product of two column vectors u and v with five elements each:

$$
\sigma = u^T v = u_1 v_1 + u_2 v_2 + u_3 v_3 + u_4 v_4 + u_5 v_5
\tag{A.5}
$$

The computation of $u^T v$ appears to take five multiplications and four additions, or nine FLOPS in total. One more FLOP is hidden when the inner product is written in the preceding mathematical form (the sum is accumulated term by term into an initialized total). Hence, if u and v are of length n, an inner product of u and v takes 2n FLOPS. The product of an m × r matrix and an r × n matrix involves mn inner products of length r, for a total of 2mnr FLOPS. If the two matrices are square (i.e., m = n = r), then the matrix-matrix product takes $2n^3$, or $O(n^3)$, FLOPS. In discussing the number of FLOPS required by matrix operations, one is usually concerned with only a single significant digit in the work estimate. Greater precision is not useful, because the actual execution time can be greatly influenced by implementation details in the software and the design of the computer hardware.
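These counting rules translate directly into a small sketch; the helper names below are illustrative, not from the book:

```python
# Rough FLOP-count model from this section.

def inner_product_flops(n: int) -> int:
    # n multiplications plus (roughly) n additions.
    return 2 * n

def matmul_flops(m: int, r: int, n: int) -> int:
    # mn inner products, each of length r.
    return 2 * m * n * r

print(inner_product_flops(5))        # 10, for the length-5 example in (A.5)
print(matmul_flops(100, 100, 100))   # 2,000,000 -- O(n^3) for square matrices
```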
