
Vandermonde LS and QR

array of sensors [7] used in beamforming and DOA (Direction Of Arrival) signal processing applications. The targets of these applications are usually to improve the signal-to-noise ratio of the received signals, to determine the number of signal sources, to estimate some of their parameters, to track the movement of the sources, etc. For instance, the Generalized Sidelobe Canceller (GSC) is an efficient implementation of the Linearly Constrained Minimum Variance algorithm for optimum beamforming, where a Vandermonde system must be solved as part of the total solution [8]. We can find these kinds of applications in fields such as seismology, biomedicine, astronomy, radar, sonar, etc., where this structured data appears.

Several algorithms have been proposed in the literature that solve Vandermonde systems or compute their QR factorization with lower computational complexity, but with less accurate results, than algorithms for systems with unstructured matrices (see [1] and [3]). Nevertheless, these algorithms can be suitable in real-time applications where execution time may be more important than an extremely precise result, due either to the characteristics of the application or to a lack of high computing power in the system (low-performance hardware, energy-efficient devices such as mobile devices, etc.).

1.1 State of the Art

Since the early nineties there have been no significant contributions on algorithms that efficiently solve Vandermonde systems or compute their QR factorization by taking advantage of the structure of this kind of matrices. In [1], a fast algorithm for computing the QR decomposition of a complex column Vandermonde matrix is shown, with quadratic complexity but with poor precision in the results, particularly in the orthogonality of the Q matrix. In [2], [3] and [4], some other algorithms are presented; they are based on discrete least squares approximation by trigonometric polynomials of a real-valued function given at arbitrary distinct nodes in [0, 2π), which better fits some of the signal processing problems presented before. The Stieltjes procedure for Szegő polynomials is used to obtain an intermediate solution, and it is compared with a better technique based on solving an inverse eigenvalue problem for a Hessenberg matrix with real positive subdiagonal elements. Both methods share the way the final solution is obtained from the intermediate one. Here, the precision of the results is highly dependent on how the nodes are distributed along the interval [0, 2π): the results are optimal when the nodes are equispaced in this interval, worse when they are randomly distributed in it, and even worse when they are concentrated and equispaced in a narrower subinterval.

1.2 Objectives and paper organization

This paper is motivated by the need to solve least squares problems with complex Vandermonde matrices, using the QR decomposition, in beamforming and DOA analysis problems. The objective is to provide efficient algorithms that can be executed on devices with low


computational power, such as mobile devices. In this paper we present an analysis of existing algorithms that solve the aforementioned problems. Some of these algorithms have been extended to compute the QR decomposition explicitly. New algorithms have been developed starting from the existing ones, obtaining the QR decomposition in a form that makes it easy to solve the least squares problem and to update the QR decomposition when a new column is added to the Vandermonde matrix. An incremental method for the QR decomposition is developed starting from the ideas of this updating algorithm.

The rest of the paper is organized as follows. Section 2 is devoted to the description of the QR decomposition algorithm, its updating, and the least squares problem. In Section 3, an experimental precision analysis of the algorithms is carried out. Finally, Section 4 presents the conclusions.

2 Algorithms

 

 

 

 

Let {z_1, z_2, . . . , z_m} be a set of m distinct nodes and let {w_1^2, w_2^2, . . . , w_m^2} be a set of positive weights. For functions g and h defined at the nodes z_k (with g = (g(z_1), g(z_2), . . . , g(z_m))^T), let ⟨·, ·⟩ denote the inner product on the unit circle:

⟨g, h⟩ = \sum_{k=1}^{m} g(z_k) \overline{h(z_k)} w_k^2.

Nodes with a complex exponential formulation z_l = e^{iθ_l}, θ_l ∈ [0, 2π), 1 ≤ l ≤ m, are especially interesting in signal processing applications such as beamforming or DOA.
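For illustration, a minimal NumPy sketch of such nodes (the names and sizes are illustrative, not taken from the paper; in a DOA setting the angles θ_l would be derived from the sensor array geometry):

```python
import numpy as np

# Nodes z_l = e^{i*theta_l} on the unit circle (illustrative sizes only).
m = 8
theta = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
z = np.exp(1j * theta)
```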

 

 

 

 

The first algorithm shown in [3], known as the Stieltjes procedure for Szegő polynomials, solves the system

DAc = Dg,    (1)

where D = diag(w_1, w_2, . . . , w_m), A ∈ C^{m×n} is the transposed Vandermonde matrix

A = \begin{pmatrix} 1 & z_1 & z_1^2 & \cdots & z_1^{n-1} \\ 1 & z_2 & z_2^2 & \cdots & z_2^{n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & z_m & z_m^2 & \cdots & z_m^{n-1} \end{pmatrix},    (2)

and c = (c_0, c_1, . . . , c_{n−1})^T is the vector of coefficients of the polynomial

p(z) = \sum_{j=0}^{n-1} c_j z^j

such that the discrete least squares error ⟨g − p, g − p⟩ is minimized.
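As a point of reference, the following NumPy sketch builds the weighted problem (1)–(2) and solves it with a general dense least squares routine; the sizes, weights and right-hand side are illustrative assumptions, and numpy.linalg.lstsq is used only as an unstructured baseline, not as the structured method discussed here.

```python
import numpy as np

# Baseline sketch of problem (1): min || D(Ac - g) ||_2 (illustrative data).
rng = np.random.default_rng(0)
m, n = 12, 5
z = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, m))      # distinct nodes on the unit circle
w = rng.uniform(0.5, 1.5, m)                            # positive weights w_k
D = np.diag(w)
A = np.vander(z, n, increasing=True)                    # transposed Vandermonde, eq. (2)
g = rng.standard_normal(m) + 1j * rng.standard_normal(m)

# Solve the weighted least squares problem with a general dense routine.
c = np.linalg.lstsq(D @ A, D @ g, rcond=None)[0]
print(np.linalg.norm(D @ (A @ c - g)))                  # residual of the fit
```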


If we compute the QR decomposition A = QR, with Q ∈ C^{m×m} a unitary matrix and R ∈ C^{m×n} an upper triangular matrix, then the solution of (1) can be expressed as

c = R^{-1} ĉ,  with  ĉ = Q^H D g,

where Q^H denotes the conjugate transpose of the matrix Q.
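A minimal sketch of this QR-based solution, assuming for simplicity unit weights (so D = I); NumPy's economy-size QR is used here in place of the structured factorization described below, only to illustrate the formula c = R^{-1}ĉ.

```python
import numpy as np

# Sketch: solve (1) through a QR factorization, assuming unit weights (D = I).
rng = np.random.default_rng(1)
m, n = 12, 5
z = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, m))
A = np.vander(z, n, increasing=True)
g = rng.standard_normal(m) + 1j * rng.standard_normal(m)

Q, R = np.linalg.qr(A)                     # economy-size QR (Q: m x n, R: n x n)
c_hat = Q.conj().T @ g                     # c_hat = Q^H D g with D = I
c = np.linalg.solve(R, c_hat)              # c = R^{-1} c_hat (triangular n x n system)
print(np.linalg.norm(A @ c - g))           # least squares residual
```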

The computation and application of the matrices Q and R can be done efficiently by exploiting the relation of the Szegő polynomials with this problem [6]. Let {φ_j}_{j=0}^{m−1} denote the family of Szegő polynomials, orthogonal with respect to the inner product defined above. The degree of φ_j is j and its leading coefficient is positive. The matrix R can be deduced from the relation [3]:

 

z^{k-1} = \sum_{j=1}^{k} r_{jk} φ_{j-1}(z),  1 ≤ k ≤ n.

Hence, the jth column of R^{-1} is the vector containing the coefficients of the polynomial φ_{j−1}(z). The matrix Q is determined by

q_{kj} = φ_{j−1}(z_k) w_k.

Algorithms that compute the QR factorization of a matrix without exploiting any structure need O(mn^2) flops [5]. This method needs only O(mn) floating point operations to obtain the result. Once ĉ is obtained, the final solution c requires only O(n^2) additional flops (see Algorithm 4.1 of [3]).

The second method is numerically more accurate than the previous one. It is based on an algorithm that constructs a unitary upper Hessenberg matrix from spectral data using elementary unitary similarity transformations [4], [3] (an inverse eigenvalue problem), obtaining ĉ = Q^H D g in O(mn) arithmetic operations.

Let Λ = diag(z_1, z_2, . . . , z_m). Then we can compute a unitary upper Hessenberg matrix H with real positive subdiagonal elements,

U^H Λ U = H,    (3)

provided the first column of U is prescribed. This first column is

u_1 = U e_1 = σ_0^{-1} (w_1, w_2, . . . , w_m)^T,    (4)

where σ_0 = \left( \sum_{k=1}^{m} w_k^2 \right)^{1/2}. The matrix H in (3) can be obtained using unitary reflectors computed to introduce zeros sequentially in certain positions. The matrix Q is then given by the first n columns of U.

The final solution c = R^{-1} ĉ is obtained in O(n^2) flops, as in the previous method.
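The sketch below only verifies relations (3) and (4) with dense, off-the-shelf tools (a Householder reflector to fix the first column of U, followed by SciPy's generic Hessenberg reduction); it is a check of the construction under illustrative data, not the O(mn) algorithm of [4].

```python
import numpy as np
from scipy.linalg import hessenberg

# Verify U^H Λ U = H with U e_1 = u_1 as in (3)-(4); illustrative data only.
rng = np.random.default_rng(2)
m = 6
z = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, m))
w = rng.uniform(0.5, 1.5, m)                         # positive weights
Lam = np.diag(z)
sigma0 = np.linalg.norm(w)
u1 = w / sigma0                                      # prescribed first column, eq. (4)

# Real Householder reflector U0 with U0 e_1 = u1 (valid since u1 != e_1 here).
v = u1.copy()
v[0] -= 1.0
U0 = np.eye(m) - 2.0 * np.outer(v, v) / (v @ v)

# Generic Hessenberg reduction of U0^H Λ U0; the accumulated transform P
# satisfies P e_1 = e_1, so U = U0 P keeps the prescribed first column.
H, P = hessenberg(U0.conj().T @ Lam @ U0, calc_q=True)
U = U0 @ P
print(np.allclose(U.conj().T @ Lam @ U, H))          # relation (3)
print(np.allclose(U[:, 0], u1))                      # relation (4)
# Note: this H need not have real positive subdiagonal entries; a unitary
# diagonal rescaling of U would enforce that normalization if required.
```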


2.1 Gram-Schmidt and modified Gram-Schmidt methods for obtaining Q

The second algorithm presented in the previous subsection makes it possible to construct an Arnoldi-like procedure [5] that easily computes the factors Q and R of A. Similar ideas are used to obtain the QR decomposition of a matrix with the Gram-Schmidt method.

Given the first column of U as in (4), and writing (3) as ΛU = UH, we obtain

Λ u_k = h_{1k} u_1 + h_{2k} u_2 + · · · + h_{kk} u_k + h_{k+1,k} u_{k+1},  k = 1, 2, . . . , n − 1.

Hence, since U is a unitary matrix:

h_{jk} = u_j^H Λ u_k,  j = 1, 2, . . . , k,

v = (Λ − h_{kk} I) u_k − \sum_{j=1}^{k−1} h_{jk} u_j,

h_{k+1,k} = ||v||_2,    u_{k+1} = v / h_{k+1,k}.
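A direct NumPy transcription of this recurrence, under the simplifying assumption of unit weights (so u_1 = (1, . . . , 1)^T/√m and D = I); the sizes are illustrative, and R is recovered afterwards as U^H A in the spirit of Section 2.2.

```python
import numpy as np

# Arnoldi-like recurrence on Λ = diag(z) with classical Gram-Schmidt
# orthogonalization (unit weights assumed; illustrative sizes).
m, n = 8, 4
z = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, m, endpoint=False))
A = np.vander(z, n, increasing=True)
Lam = np.diag(z)

U = np.zeros((m, n + 1), dtype=complex)
H = np.zeros((n + 1, n), dtype=complex)
U[:, 0] = np.ones(m) / np.sqrt(m)               # u_1, eq. (4) with unit weights
for k in range(n):
    t = Lam @ U[:, k]
    H[:k + 1, k] = U[:, :k + 1].conj().T @ t    # h_jk = u_j^H Λ u_k, j = 1..k
    v = t - U[:, :k + 1] @ H[:k + 1, k]         # v = (Λ - h_kk I)u_k - Σ_{j<k} h_jk u_j
    H[k + 1, k] = np.linalg.norm(v)
    U[:, k + 1] = v / H[k + 1, k]

Q = U[:, :n]
R = Q.conj().T @ A                              # R = U^H A (cf. Section 2.2)
print(np.linalg.norm(A - Q @ R, 2) / np.linalg.norm(A, 2))
print(np.linalg.norm(np.eye(n) - Q.conj().T @ Q, 2))
```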

Algorithms based on Gram-Schmidt techniques are usually numerically unstable due to catastrophic cancellation caused by premature subtractions. That is the reason why they are usually modified in order to avoid performing a large number of subtractions before the orthogonalizations. This technique is used in the next algorithm. In this case, we first compute all the coefficients of h_k (column k of H) that can be computed. Recalling that h_{jk} = u_j^H Λ u_k, we can then find the value of h_{kk} from the expression

q ≡ Λ u_k − \sum_{j=1}^{k−1} h_{jk} u_j = h_{kk} u_k + h_{k+1,k} u_{k+1},

hence

h_{kk} = u_k^H q.

Now, we obtain

u_{k+1} = (q − h_{kk} u_k) / h_{k+1,k},

with

h_{k+1,k} = ||q − h_{kk} u_k||_2.
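The same recurrence reorganized as the text describes (compute h_{1k}, . . . , h_{k−1,k} first, deflate, and only then extract h_{kk} from the deflated vector); again, unit weights and illustrative sizes are assumed.

```python
import numpy as np

def arnoldi_modified(z, n):
    """Modified variant of the recurrence above (unit weights assumed)."""
    m = len(z)
    Lam = np.diag(z)
    U = np.zeros((m, n + 1), dtype=complex)
    H = np.zeros((n + 1, n), dtype=complex)
    U[:, 0] = np.ones(m) / np.sqrt(m)
    for k in range(n):
        t = Lam @ U[:, k]
        H[:k, k] = U[:, :k].conj().T @ t        # h_jk = u_j^H Λ u_k, j < k
        q = t - U[:, :k] @ H[:k, k]             # q = Λ u_k - Σ_{j<k} h_jk u_j
        H[k, k] = U[:, k].conj() @ q            # h_kk = u_k^H q
        q = q - H[k, k] * U[:, k]
        H[k + 1, k] = np.linalg.norm(q)         # = ||q - h_kk u_k||_2
        U[:, k + 1] = q / H[k + 1, k]
    return U, H

# toy usage
z = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False))
U, H = arnoldi_modified(z, 4)
print(np.linalg.norm(np.eye(5) - U.conj().T @ U, 2))
```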


2.2 Updating

Let us suppose we know the QR decomposition of A ∈ C^{m×n}, m > n, and we want to compute the QR decomposition of a new matrix A_1 = Q_1 R_1, where A_1 ∈ C^{m×(n+1)}, m ≥ n + 1, is the original matrix A with an additional column. Obviously, we can take advantage of this structure to obtain a simple algorithm that computes Q_1 and R_1. The matrix Q_1 is made up of the first n + 1 columns of U in expression (3), so it is only necessary to compute one additional column of U to obtain U_1. Besides, R_1 = U_1^H A_1, so the first n columns of R_1 match those of R, padded with a zero to complete the n + 1 components, and

R_1(:, n + 1) = U_1^H A_1(:, n + 1).
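A generic column-update sketch in the same spirit; it orthogonalizes the new column against the current Q instead of taking the extra column of U from the Hessenberg recurrence, so it illustrates the update formulas rather than reproducing the paper's exact procedure, and the helper name qr_append_column is hypothetical.

```python
import numpy as np

def qr_append_column(Q, R, a_new):
    """Append one column to A = Q R (reduced QR, Q with orthonormal columns)."""
    r = Q.conj().T @ a_new                    # projection on the current basis
    q = a_new - Q @ r                         # part of a_new orthogonal to range(Q)
    rho = np.linalg.norm(q)
    n = R.shape[1]
    R1 = np.zeros((n + 1, n + 1), dtype=complex)
    R1[:n, :n] = R                            # previous R, padded with a zero row
    R1[:n, n] = r                             # last column: R1(:, n+1) = Q1^H a_new
    R1[n, n] = rho
    return np.hstack([Q, (q / rho)[:, None]]), R1

# toy usage: add the column z^n to an m x n transposed Vandermonde matrix
m, n = 8, 4
z = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, m, endpoint=False))
A = np.vander(z, n, increasing=True)
Q, R = np.linalg.qr(A)
Q1, R1 = qr_append_column(Q, R, z ** n)
A1 = np.vander(z, n + 1, increasing=True)
print(np.linalg.norm(A1 - Q1 @ R1, 2) / np.linalg.norm(A1, 2))
```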

2.3 An incremental algorithm based on the updating technique

From the point of view of the applications, it is interesting to provide algorithms that receive information gradually and process it in the same way. An efficient algorithm can be designed, starting from the previous updating algorithm, to compute the QR decomposition of a matrix whose number of columns grows incrementally.

3 Experimental results

Experimental precision results have been obtained using double precision arithmetic, with a constant number of rows of the transposed Vandermonde matrix (m = 500) and a number of columns (n) varying up to m. The resulting Vandermonde system has been solved with several methods and the results compared. Each experiment has been repeated ten times and the results averaged. The experiments consider two kinds of nodes: nodes equispaced in a given subinterval, and nodes uniformly distributed at random in a given subinterval. All the experiments have been repeated using single precision arithmetic, obtaining qualitatively the same conclusions.
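A sketch of how the two node families used in the experiments can be generated; the choice of subinterval and the random generator are illustrative assumptions.

```python
import numpy as np

# Two node families: equispaced in a subinterval [0, b), and uniform random in [0, b).
rng = np.random.default_rng()
m, b = 500, 2.0 * np.pi                      # the experiments also use narrower subintervals
theta_eq = np.linspace(0.0, b, m, endpoint=False)
theta_rand = rng.uniform(0.0, b, m)
z_eq, z_rand = np.exp(1j * theta_eq), np.exp(1j * theta_rand)
```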

3.1 Matrix conditioning and result precision

Figure 1(a) shows the relative error obtained using the Lapack GELS subroutine to solve Vandermonde least squares problems generated with different node distributions, and Figure 1(b) shows the reciprocal 2-norm condition number of each matrix.

The worst behavior is obtained when the nodes are equispaced in a subinterval narrower than [0, 2π) (the narrower the subinterval, the worse the results). The best results are obtained when the nodes are equispaced in the whole interval [0, 2π). When the nodes are distributed randomly in [0, 2π), the precision of the results lies in between, worsening as the matrix approaches a square shape (n ≈ m) and following the trend of the condition number.
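A sketch of the two quantities plotted in Figure 1 for a single (n, node distribution) pair; the right-hand side is generated from a known coefficient vector, unit weights are assumed, and numpy.linalg.lstsq (an SVD-based LAPACK driver, not GELS itself) stands in for the dense least squares solver.

```python
import numpy as np

# Relative error of a dense LS solve and reciprocal 2-norm condition number
# for one node distribution (equispaced in [0, 3π/2) as an example).
rng = np.random.default_rng(3)
m, n = 500, 200
theta = np.linspace(0.0, 1.5 * np.pi, m, endpoint=False)
A = np.vander(np.exp(1j * theta), n, increasing=True)
c_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
g = A @ c_true

c_sol = np.linalg.lstsq(A, g, rcond=None)[0]
rel_err = np.linalg.norm(c_sol - c_true) / np.linalg.norm(c_true)
rcond2 = 1.0 / np.linalg.cond(A, 2)                 # reciprocal 2-norm condition number
print(rel_err, rcond2)
```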


 

 

[Figure 1: Error in the Lapack GELS solution and reciprocal condition number. (a) Relative error ||c_sol − c||_2 / ||c||_2 of the Lapack GELS solution versus n; (b) reciprocal 2-norm condition number 1/κ_2(V^T) versus n. Curves: Equisp. [0..3π/2), Equisp. [0..π), Equisp. [0..2π), Random [0..2π).]

Figures 2(a) and 2(b) show a comparison of the relative error in the solution between the Reichel method and the Lapack GELS routine, using matrices generated with equispaced nodes and with random nodes, respectively, both in the [0, 2π) interval; the Lapack routine obtains better results.

3.2 Orthogonality results

Figures 3(a) and 3(b) show a comparison of the orthogonality error, ||I − Q^H Q||_2, among the Reichel, Lapack, Gram-Schmidt and modified Gram-Schmidt methods, using matrices generated with equispaced nodes and with random nodes, respectively, both in the [0, 2π) interval. For the equispaced case, the worst results correspond to the Reichel method, while the rest of the methods behave similarly (with slightly better results for the Gram-Schmidt methods). For the random case, there is a crossover point: before it, all the methods achieve a similar orthogonality precision, with a slight advantage for the Gram-Schmidt methods; after it, the Gram-Schmidt methods lose this precision.
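The orthogonality measure itself is straightforward to evaluate; a sketch for NumPy's Householder-based QR on a random-node matrix (illustrative only, since the paper compares the Reichel, LAPACK GEQRF and both Gram-Schmidt variants):

```python
import numpy as np

# Orthogonality error ||I - Q^H Q||_2 for a Householder QR of a Vandermonde
# matrix with random nodes in [0, 2π) (illustrative sizes).
rng = np.random.default_rng(4)
m, n = 500, 400
z = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, m))
Q, _ = np.linalg.qr(np.vander(z, n, increasing=True))
print(np.linalg.norm(np.eye(n) - Q.conj().T @ Q, 2))
```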


 

 

 

[Figure 2: Solution relative error ||c_sol − c||_2 / ||c||_2 versus n for the Reichel and Lapack GELS methods. (a) Equispaced nodes in [0, 2π); (b) random nodes in [0, 2π).]

3.3 Incremental QR algorithm results

Figures 4(a) and 4(b) show the orthogonality error, ||I − Q^H Q||_2, and the decomposition relative error, ||A − QR||_2 / ||A||_2, respectively, when the incremental QR algorithm is used. The performance is comparable to that of the non-incremental counterpart algorithm.

4 Conclusions

Vandermonde matrices are difficult to work with due to their numerical properties. In this paper we have analyzed the behavior of several algorithms that solve the least squares problem and obtain the QR decomposition of a Vandermonde matrix. The obtained performance is as expected: it depends strongly on the condition number and gets worse as the matrix becomes square. Our contributions are an algorithm for updating the QR decomposition of a Vandermonde matrix and an incremental QR algorithm suitable for real-time signal processing applications.


[Figure 3: Orthogonality error ||I − Q^H Q||_2 versus n for the Reichel, Lapack GEQRF, Gram-Schmidt and modified Gram-Schmidt methods. (a) Equispaced nodes in [0, 2π); (b) random nodes in [0, 2π).]

Acknowledgements

This work was financially supported by the Spanish Ministerio de Ciencia e Innovación projects TEC2009-13741 and TIN2010-14971, the Vicerrectorado de Investigación de la UPV through Programa de Apoyo a la Investigación y Desarrollo (PAID-05-11-2733), and Generalitat Valenciana through projects PROMETEO/2009/013 and ACOMP/2012/076.

References

[1] C. J. Demeure, Fast QR Factorization of Vandermonde Matrices, Linear Algebra and its Applications 124 (1989) 165–194.

[2] L. Reichel, Fast QR Decomposition of Vandermonde-like Matrices and Polynomial Least Squares Approximation, SIAM J. Matrix Anal. Appl. 12(3) (1991) 552–564.

[3] L. Reichel, G. S. Ammar, and W. B. Gragg, Discrete Least Squares Approximation by Trigonometric Polynomials, Mathematics of Computation 57(195) (1991) 273–289.

[4] G. S. Ammar, W. B. Gragg and L. Reichel, Constructing a Unitary Hessenberg Matrix from Spectral Data, in: Numerical Linear Algebra, Digital Signal Processing and Parallel Algorithms, G. H. Golub and P. Van Dooren, eds., NATO ASI Series F70 (1991) 385–395.


 

 

[Figure 4: Error in the incremental QR algorithm. (a) Orthogonality error ||I − Q^H Q||_2 versus n; (b) decomposition error ||A − QR||_2 / ||A||_2 versus n. Curves: Equisp. [0..3π/2], Equisp. [0..π], Equisp. [0..2π], Random [0..2π].]

[5] G. H. Golub and C. F. Van Loan, Matrix Computations (3rd ed.), Johns Hopkins University Press, 1996.

[6] U. Grenander and G. Szegő, Toeplitz Forms and Their Applications, Chelsea, New York, 1984.

[7] H. Krim and M. Viberg, Two Decades of Array Signal Processing Research: The Parametric Approach, IEEE Signal Processing Magazine 13 (1996) 67–94.

[8] W. Liu and S. Weiss, Wideband Beamforming: Concepts and Techniques, Wiley, 2010.


Proceedings of the 12th International Conference on Computational and Mathematical Methods in Science and Engineering, CMMSE 2012, July 2–5, 2012.

Collaborative work in mathematics with a wiki

Pedro Alonso and Rafael Gallego

Departamento de Matemáticas, Universidad de Oviedo
emails: palonso@uniovi.es, rgallego@uniovi.es

Abstract

In this work we present an educational activity consisting of problem solving tasks in mathematics, carried out on a group work basis with a wiki. To this end, we use the software Mediawiki, which is open source and natively allows mathematical formulae to be introduced using LaTeX syntax. In addition, it has a sophisticated system to watch pages, and its functionality can be extended by means of extensions developed by a large user community.

Key words: wikis, mediawiki, problem solving, mathematics, collaborative work, e-learning

1 Introduction

A wiki is a suitable platform for preparing collaborative works, as an alternative to more traditional ways of presenting work on paper or in an electronic file (.doc, .pdf, etc.). A wiki also helps to develop cooperative learning habits. By means of the history associated with every page of the wiki, a teacher can know in detail the contribution of each member of the group to the proposed tasks.

The e-learning platform Moodle allows wikis to be set up, both for a single user and for groups of users. However, it shows some deficiencies when introducing mathematical formulae, which is of paramount importance in problem solving in mathematics. Similar problems arise with the well-known web site wikispaces [1], which allows small wikis to be set up for free.

The developer of the first wiki software was Ward Cunningham [2], who in 1994 created WikiWikiWeb [3], originally described as “the simplest online database that could possibly work”. The best-known wiki is undoubtedly Wikipedia [5], based initially on the software

