Effective Algorithm for Determination of the Initial Vector for the Definite Quadratic Pencil
Kostic, Aleksandra; Aljicevic, Zajim; Sikalo, Sefko et al.
1. Introduction
In engineering practice the eigenvalue problem is very important
because it is related to mechanical systems and their vibration
properties. Vibration is a property of every mechanical system; it also
appears in electrical systems in the form of oscillating circuits.
Neglecting vibration terms can lead to resonance. Resonance can have
catastrophic consequences (for example, the collapse of a bridge), but
on the other hand it can also be used positively. The mathematical
description of vibratory behavior leads to a differential equation or a
system of differential equations, and this problem is further
transformed into an eigenvalue problem. This is the motivation for
considering eigenvalue problems, which are divided into linear and
nonlinear eigenvalue problems. The theoretical basics for eigenvalue
problems are given in [7]. In practice, nonlinear eigenproblems commonly
arise in the dynamic and stability analysis of structures, in fluid
mechanics, in the electronic behavior of semiconductor heterostructures,
in vibrations of fluid-solid structures, in the regularization of total
least squares problems, and in the stability of delay differential
equations.
In practice, the most important case is the quadratic eigenvalue problem Q(λ)x = 0, where
Q(λ) = λ^2 A + λB + C, A, B, C ∈ C^(n×n), A ≠ 0. (1)
More information on the practical application can be found in [12].
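As a minimal illustration, the pencil (1) can be represented and evaluated directly. The following sketch uses Python with NumPy; the function name and matrices are hypothetical stand-ins, not taken from this paper.

```python
import numpy as np

def eval_Q(lam, A, B, C):
    """Evaluate the quadratic matrix polynomial Q(lambda) = lambda^2 A + lambda B + C."""
    return lam**2 * A + lam * B + C

# Hypothetical Hermitian 2x2 matrices
A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[5.0, 1.0], [1.0, 4.0]])
C = np.array([[1.0, 0.0], [0.0, 1.0]])

Q0 = eval_Q(0.0, A, B, C)   # Q(0) = C
```

Evaluating Q at trial values of λ in this way is the basic building block of the definiteness tests discussed in the rest of the paper.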
Since in this paper we deal with improving the algorithm that
determines whether a quadratic eigenvalue problem is definite or not,
let us briefly summarize what has been done in the literature up to this
point.
For quadratic hyperbolic pencils, Higham, Tisseur and Van Dooren
[5] proposed a method for testing hyperbolicity and for constructing a
definite linearization. Another method for detecting whether a Hermitian
quadratic matrix polynomial is hyperbolic is based on cyclic reduction;
it was first introduced by Guo and Lancaster [3] and later accelerated
by Guo, Higham and Tisseur [2].
Niendorf and Voss [10] take advantage of the fact that all
eigenvalues of a definite matrix polynomial can be characterized as
minmax values of the appropriate Rayleigh functional, and that the
extreme eigenvalues in each of the intervals (-∞, ξ), (ξ, μ) and
(μ, +∞) are the limits of monotonically and quadratically convergent
sequences. If a given Hermitian quadratic matrix polynomial is definite,
Niendorf and Voss concurrently determine parameters ξ and μ such that
Q(ξ) < 0 and Q(μ) > 0, which allows for the computation of a definite
linearization.
Kostic and Voss [9] applied Sylvester's law of inertia to
definite quadratic eigenvalue problems.
In [8] Kostic and Sikalo improved the method of Niendorf and Voss
[10] for detecting whether a quadratic pencil Q(λ) is definite. They
used some properties of the matrices A, B and C, namely their ranks and
the eigenvectors that correspond to the eigenvalue zero. Kostic and
Sikalo [8] obtained good results when the matrix B is singular. If B is
not singular, they would have to determine the eigenspace of the matrix
A for the eigenvalue zero, and likewise the eigenspace of the matrix C
for the eigenvalue zero. This reduces the efficiency of their idea.
This paper gives an effective algorithm for the determination of the
initial vector x_0. The algorithm is based on the results of [8] and on
the use of the eigenvector q_1 corresponding to the smallest eigenvalue
λ_1A of the matrix A, and of the eigenvector f_n corresponding to the
largest eigenvalue λ_nB of the matrix B.
Our paper is organized as follows. In Section 2 we provide the
definition of the definite eigenvalue problem and the results of
previous papers on this topic. Section 3 gives a new effective algorithm
for determining the initial vector x_0, together with the new results
that lead to it. A critical review of the proposed algorithm and the
plan for future research are given in Section 4.
2. Definite quadratic pencils
In this section we give the theoretical basics for the definite
quadratic pencil. Since the definite quadratic polynomial is a
generalization of the hyperbolic quadratic polynomial, let us briefly
consider the hyperbolic case first.
A quadratic matrix polynomial
Q(λ) := λ^2 A + λB + C, A = A^H > 0, B = B^H, C = C^H (2)
is hyperbolic if for every x ∈ C^n, x ≠ 0 the quadratic polynomial
f(λ; x) := λ^2 x^H A x + λ x^H B x + x^H C x (3)
has two distinct real roots
p_±(x) = (-x^H B x ± √((x^H B x)^2 - 4 (x^H A x)(x^H C x))) / (2 x^H A x). (4)
The functionals p_+ and p_- in (4) are the Rayleigh functionals of
the quadratic matrix polynomial (2); they are a generalization of the
Rayleigh quotient.
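For a given x, the two roots follow from the quadratic formula applied to the three scalars x^H A x, x^H B x and x^H C x. The following Python/NumPy sketch illustrates this; the function name and example matrices are ours, not from the paper.

```python
import numpy as np

def rayleigh_roots(x, A, B, C):
    """Return the two real roots p_-(x) <= p_+(x) of
    f(lambda; x) = lambda^2 x^H A x + lambda x^H B x + x^H C x,
    assuming the pencil is hyperbolic (A > 0, so the discriminant d(x) > 0)."""
    a = np.vdot(x, A @ x).real   # x^H A x (vdot conjugates its first argument)
    b = np.vdot(x, B @ x).real   # x^H B x
    c = np.vdot(x, C @ x).real   # x^H C x
    d = b * b - 4.0 * a * c      # discriminant d(x), positive for hyperbolic Q
    s = np.sqrt(d)
    return (-b - s) / (2.0 * a), (-b + s) / (2.0 * a)

# Example: A = I, B = 3I, C = I gives f(lambda; x) = (lambda^2 + 3*lambda + 1) x^H x
A = np.eye(2); B = 3.0 * np.eye(2); C = np.eye(2)
x = np.array([1.0, 0.0])
p_minus, p_plus = rayleigh_roots(x, A, B, C)   # roots of lambda^2 + 3*lambda + 1
```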
The hyperbolic problem is very important because all of its
eigenvalues admit a variational characterization. More information is
contained in [1].
Higham, Mackey and Tisseur [6] generalized the concept of the
hyperbolic quadratic polynomial by waiving the positive definiteness of
the leading matrix A.
Definition 1
A quadratic pencil (1) is definite if A = A^H, B = B^H, C = C^H are
Hermitian, there exists μ ∈ R ∪ {∞} such that Q(μ) is positive
definite, and for every fixed x ∈ C^n, x ≠ 0 the quadratic polynomial
(3) has two distinct roots in R ∪ {∞}.
Free vibrations of fluid-solid structures are governed by a
non-symmetric eigenvalue problem [4,8,10,11]. This problem can be
transformed into a definite quadratic eigenvalue problem, which
motivates the further consideration of the definite quadratic pencil and
of eigenvalue problems for such pencils.
The following Theorem has been proved by Duffin [1].
Theorem 1. A Hermitian matrix polynomial Q(λ) is definite if
and only if any two (and hence all) of the following properties hold:
* d(x) := (x^H B x)^2 - 4 (x^H A x)(x^H C x) > 0 for every x ∈ C^n, x ≠ 0,
* Q(μ) > 0 for some μ ∈ R ∪ {∞},
* Q(ξ) < 0 for some ξ ∈ R ∪ {∞}.
Without loss of generality, we take ξ < μ. Then
f(μ; x) = x^H Q(μ) x and f(ξ; x) = x^H Q(ξ) x,
where Q(ξ) < 0 < Q(μ), ξ < μ.
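The second and third properties of Theorem 1 can be tested numerically with a Cholesky factorization, which succeeds exactly for Hermitian positive definite matrices. A sketch in Python/NumPy; the function names and example matrices are ours.

```python
import numpy as np

def is_positive_definite(M):
    """A Hermitian matrix is positive definite iff its Cholesky factorization exists."""
    try:
        np.linalg.cholesky(M)
        return True
    except np.linalg.LinAlgError:
        return False

def has_definiteness_witnesses(A, B, C, xi, mu):
    """Check Q(xi) < 0 and Q(mu) > 0; by Theorem 1 (any two properties
    suffice), this certifies that the pencil is definite."""
    Q = lambda lam: lam**2 * A + lam * B + C
    return is_positive_definite(-Q(xi)) and is_positive_definite(Q(mu))

# Hyperbolic example: Q(0) = C = I > 0 and Q(-1) = A - B + C = -I < 0
A = np.eye(2); B = 3.0 * np.eye(2); C = np.eye(2)
result = has_definiteness_witnesses(A, B, C, xi=-1.0, mu=0.0)   # True
```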
The Rayleigh functional is explicitly given by
p(x) = (-x^H B x + √(d(x))) / (2 x^H A x),
the unique root of f(·; x) in (ξ, μ).
The quadratic eigenvalue problem Q(λ)x = 0 has exactly n
eigenvalues λ_1 ≤ ... ≤ λ_n in (ξ, μ), which satisfy a minmax
characterization with respect to p, and the safeguarded iterations
aiming at λ_1 and λ_n converge globally and monotonically, decreasing
and increasing respectively. Niendorf and Voss [10] gave the following
algorithm (Fig. 1) for detecting a definite quadratic matrix polynomial.
Fig. 1. Algorithm 1.
Require: initial vector x_0 ≠ 0
if d(x_0) := (x_0^H B x_0)^2 - 4 (x_0^H A x_0)(x_0^H C x_0) < 0 then
STOP: Q(λ) is not definite
end if
determine σ_0 = p(x_0)
for k = 1, 2, ... until convergence do
determine eigenvector x_k of Q(σ_{k-1}) corresponding to its largest eigenvalue
if d(x_k) := (x_k^H B x_k)^2 - 4 (x_k^H A x_k)(x_k^H C x_k) < 0 then
STOP: Q(λ) is not definite
end if
determine σ_k = p(x_k)
if σ_k ≥ σ_{k-1} then
STOP: Q(λ) is not definite
end if
if Q(2σ_k - σ_{k-1}) is negative definite then
STOP: Q(λ) is definite
end if
end for
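One possible transcription of the downward sweep of Algorithm 1 into Python/NumPy follows. It is a sketch, not the authors' implementation: the single allowed monotonicity violation mentioned in the text is included, and using the smaller root p_- of f(λ; x) as the Rayleigh functional for this sweep is our assumption.

```python
import numpy as np

def algorithm1_sweep(A, B, C, x0, maxit=50):
    """Sketch of the sweep of Algorithm 1 aiming at a parameter xi with
    Q(xi) negative definite.  Returns ('definite', xi),
    ('not definite', None) or ('inconclusive', None)."""

    def evalQ(lam):
        return lam**2 * A + lam * B + C

    def p_minus(x):
        a = np.vdot(x, A @ x).real
        b = np.vdot(x, B @ x).real
        c = np.vdot(x, C @ x).real
        d = b * b - 4.0 * a * c
        if d < 0.0:
            return None                      # d(x) < 0: Q cannot be definite
        return (-b - np.sqrt(d)) / (2.0 * a)

    def is_negative_definite(M):
        try:
            np.linalg.cholesky(-M)           # succeeds iff -M > 0, i.e. M < 0
            return True
        except np.linalg.LinAlgError:
            return False

    violations_left = 1                      # one allowed monotonicity violation
    sigma = p_minus(x0)
    if sigma is None:
        return 'not definite', None
    for _ in range(maxit):
        w, V = np.linalg.eigh(evalQ(sigma))
        x = V[:, -1]                         # eigenvector of the largest eigenvalue
        sigma_new = p_minus(x)
        if sigma_new is None:
            return 'not definite', None
        if sigma_new >= sigma:
            if violations_left == 0:
                return 'not definite', None
            violations_left -= 1             # allow one jump between intervals
        xi = 2.0 * sigma_new - sigma         # extrapolated test point
        if is_negative_definite(evalQ(xi)):
            return 'definite', xi
        sigma = sigma_new
    return 'inconclusive', None

# Diagonal (hence hand-checkable) definite example
A = np.eye(2); B = np.diag([4.0, 10.0]); C = np.eye(2)
status, xi = algorithm1_sweep(A, B, C, np.array([1.0, 1.0]) / np.sqrt(2.0))
# status == 'definite'; Q(xi) is negative definite
```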
Now we briefly explain Algorithm 1 (Fig. 1). If Q(λ) is
definite, Algorithm 1 determines a parameter ξ such that Q(ξ) is
negative definite. One violation of the monotonicity requirement has to
be allowed, to incorporate the possible jump of the iterates from one
unbounded interval to the other. A second, similar sweep aiming at λ_n
finds a parameter μ such that Q(μ) > 0. Once parameters ξ and μ are
found such that Q(ξ) < 0 < Q(μ), the definite linearization can be
determined.
Let us analyze the advantages and disadvantages of Algorithm 1. Its
advantages include the clarity of its operation and the ease of
programming. Its shortcomings are the cost of the algorithm, i.e. a
large number of flops in every iteration, and the random selection of
the initial vector x_0. Let us look first at the cost.
The k-th step of Algorithm 1 requires n^3/3 operations for one
Cholesky factorization, 4n^2 operations for evaluating
Q(2σ_k - σ_{k-1}), 3 matrix-vector products (6n^2 operations), 3 scalar
products (6n operations), and the determination of the largest
eigenvalue and the corresponding eigenvector of the matrix Q(σ_{k-1}),
which seems to be the most expensive part.
Let the quadratic matrix polynomial
Q(λ) = λ^2 A + λB + C, A = A^H, B = B^H, C = C^H (5)
be definite.
Kostic and Sikalo [8] developed a strategy for targeting the
initial vector x_0. This leads to a reduction in the number of
iterations, and the algorithm becomes cheaper. The basic theorems on
which their strategy rests follow; their proofs are given in [8].
Lemma 1. The eigenvalue problem corresponding to the matrix
polynomial (5) has 0 as an eigenvalue if and only if C is singular.
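Lemma 1 follows from the observation Q(0) = C: the pencil has the eigenvalue 0 exactly when Cx = 0 has a nonzero solution. A quick numerical illustration (the matrices are ours):

```python
import numpy as np

# Q(0) = C, so 0 is an eigenvalue of the pencil iff C is singular.
C_singular = np.diag([0.0, 2.0])   # rank 1: 0 is an eigenvalue of the pencil
C_regular = np.diag([1.0, 2.0])    # rank 2: 0 is not an eigenvalue
rank_singular = np.linalg.matrix_rank(C_singular)   # 1
rank_regular = np.linalg.matrix_rank(C_regular)     # 2
```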
Without loss of generality, Kostic and Sikalo [8] took ξ < μ.
They considered the ranks of the matrices A, B and C.
Let Rank(A) = n - p, Rank(B) = n - q and Rank(C) = n - r, where
0 < p < n, 0 ≤ q ≤ n and 0 < r < n, and let {y_1, y_2, ..., y_n},
{z_1, z_2, ..., z_n} and {w_1, w_2, ..., w_n} be orthonormal bases of
the vector space C^n consisting of eigenvectors of the matrices A, B and
C, respectively. Without loss of generality, Kostic and Sikalo [8] took
[mathematical expression not reproducible]. (6)
Theorem 2. Let Q(ξ) < 0 < Q(μ) and ξ < μ. Then
μ > a_1 > c_1 > ξ, where
[mathematical expression not reproducible].
Theorem 3. Let Q(ξ) < 0 < Q(μ) and ξ < μ. Then
μ > a_2 and c_2 > ξ, where
[mathematical expression not reproducible]
and
[mathematical expression not reproducible].
Theorem 4. Let Q(ξ) < 0 < Q(μ), ξ < μ, a :=
max(a_1, a_2) and c := min(c_1, c_2). Then μ > a and c > ξ, where
a_1, a_2, c_1 and c_2 are defined as in Theorem 2 and Theorem 3.
If we now apply Algorithm 1 to determine the parameter μ, then for
the initial vector x_0 we take the vector through which a is specified.
Analogously, if we apply Algorithm 1 (Fig. 1) to determine the parameter
ξ, then for the initial vector x_0 we take the vector through which c
has been determined.
It remains to consider what happens in the case of a singular
matrix B.
Theorem 5. Let Q(ξ) < 0 < Q(μ), ξ < μ, Rank(B) = n - q
and q > 0. Then μ > 0 or ξ < 0.
Remark: From the proof of Theorem 5 in [8] it is clear that
z^H A z has the same sign for all z ∈ {z_1, z_2, ..., z_q}. If
z^H A z > 0 we can multiply problem (5) by -1 and get a new definite
eigenvalue problem
Q(λ) = λ^2 A_1 + λB_1 + C_1, A_1 = -A, B_1 = -B, C_1 = -C. (7)
Problem (7) has the same eigenvalues as problem (5), but with
z^H A_1 z < 0 and ξ < 0. Therefore, in the case of a singular matrix B
we can assume without loss of generality that z^H A z < 0 and ξ < 0. In
that case, to reduce the cost of determining the initial vector x_0,
one takes x_0 = z_1.
3. The effective algorithm for determining the initial vector
Kostic and Sikalo [8] used some properties of the matrices A, B
and C for the determination of the initial vector x_0, for example the
ranks of the matrices and the eigenvectors of A, B and C belonging to
the eigenvalue zero. For the implementation of this strategy they would
have to find the eigenspaces of the matrices A, B and C associated with
the eigenvalue zero.
The basic idea of this paper is to also use nonzero eigenvalues of
the matrices A and B, to obtain better location parameters and a better
initial vector. The acceleration of the algorithm is obtained by the
improvement of the initial vector x_0. Our aim is a high-quality
algorithm for the determination of the initial vector, obtained by
combining the results of [8] with the new results of this paper.
In order to accelerate Algorithm 1 we consider a better
localization of the parameters μ and ξ, which satisfy Q(μ) > 0 > Q(ξ)
and μ > ξ. From this localization we obtain the initial vector x_0.
In [8] it is noticed that the matrices A and C must be singular.
The matrix B, however, may or may not be singular. If B is singular
then, following the Remark above, we take the initial vector x_0 = z_1.
If B is not singular, Theorems 2, 3 and 4 give us the possibility of
choosing the initial vector x_0. However, this means that we must
determine all eigenvectors of the matrix A that belong to the eigenvalue
zero and all eigenvectors of the matrix C that belong to the eigenvalue
zero. This procedure is also burdened with a large number of
matrix-vector and scalar products, which brings additional cost (a large
number of flops and a long computing time). To avoid this in the case
when the matrix B is not singular, we consider a new strategy.
We assume that the matrix A has at least one positive and at least
one negative eigenvalue. We denote the smallest eigenvalue of the matrix
A by λ_1A and the corresponding eigenvector by q_1; it is clear that
λ_1A < 0. We denote the largest eigenvalue of the matrix B by λ_nB and
the corresponding eigenvector by f_n.
For every fixed x ∈ C^n, x ≠ 0 we have
μ^2 x^H A x + μ x^H B x + x^H C x > 0 (8)
ξ^2 x^H A x + ξ x^H B x + x^H C x < 0 (9)
Subtracting inequality (9) from inequality (8) yields
(μ^2 - ξ^2) x^H A x + (μ - ξ) x^H B x > 0. (10)
Dividing by μ - ξ > 0, we further get
(μ + ξ) x^H A x + x^H B x > 0. (11)
From (11) and x^H B x ≤ λ_nB x^H x we get, for every fixed
x ∈ C^n, x ≠ 0,
(μ + ξ) x^H A x / x^H x > -λ_nB. (12)
In particular, for x = q_1 we get
(μ + ξ) q_1^H A q_1 / q_1^H q_1 > -λ_nB. (13)
Since q_1 is an eigenvector for the eigenvalue λ_1A < 0, from (13)
we obtain
(μ + ξ) λ_1A > -λ_nB. (14)
From (14) and λ_1A < 0 we get
μ + ξ < -λ_nB / λ_1A. (15)
From (15) and μ - ξ > 0 (so that 2ξ < μ + ξ) we get
ξ < -λ_nB / (2 λ_1A). (16)
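The bound (16) is cheap to evaluate once λ_1A and λ_nB are known. A small worked instance with hypothetical eigenvalues:

```python
# Upper bound (16): xi < -lambda_nB / (2 * lambda_1A), valid when B is
# nonsingular and lambda_1A < 0.  Hypothetical values:
lam_1A = -2.0   # smallest eigenvalue of A (negative by assumption)
lam_nB = 6.0    # largest eigenvalue of B
xi_bound = -lam_nB / (2.0 * lam_1A)   # -6 / (-4) = 1.5, so xi < 1.5
```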
This gives an upper bound for the parameter ξ in the case where B
is not singular. In this case, because of the great significance of the
matrix B for the definite eigenvalue problem, we take
x_0 = 0.9 q_1 + 0.1 f_n as the initial vector. Now we give an effective
algorithm for the determination of the initial vector x_0.
Fig. 2. Algorithm 2.
Examine the singularity of matrix B
if matrix B is singular then
x_0 = z_1
else
determine the smallest eigenvalue λ_1A of matrix A and the corresponding eigenvector q_1
determine the largest eigenvalue λ_nB of matrix B and the corresponding eigenvector f_n
x_0 = 0.9 q_1 + 0.1 f_n
end if
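Algorithm 2 maps directly onto one Hermitian eigendecomposition of B (and, if B turns out to be regular, one of A). A sketch in Python/NumPy; the singularity tolerance is our assumption.

```python
import numpy as np

def initial_vector(A, B, tol=1e-12):
    """Sketch of Algorithm 2: choose the initial vector x0.
    numpy.linalg.eigh returns eigenvalues in ascending order."""
    wB, VB = np.linalg.eigh(B)                 # one decomposition of B suffices
    scale = np.max(np.abs(wB))
    if np.min(np.abs(wB)) <= tol * scale:      # B singular (up to tolerance)
        k = np.argmin(np.abs(wB))
        return VB[:, k]                        # x0 = z_1, eigenvector for eigenvalue 0
    wA, VA = np.linalg.eigh(A)
    q1 = VA[:, 0]                              # eigenvector for smallest eigenvalue lambda_1A
    fn = VB[:, -1]                             # eigenvector for largest eigenvalue lambda_nB
    return 0.9 * q1 + 0.1 * fn

# Singular B: x0 is the null-space eigenvector z_1
x0 = initial_vector(np.diag([-1.0, 1.0]), np.diag([0.0, 2.0]))
# Regular B: x0 = 0.9 q1 + 0.1 fn
x1 = initial_vector(np.diag([-1.0, 1.0]), np.diag([1.0, 2.0]))
```

Note that the matrix C is not needed for the choice of x_0, in line with Fig. 2.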
Fig. 2 gives the effective algorithm for determining the initial
vector x_0. The algorithm is based on the structure of the definite
quadratic eigenvalue problem. Finally, we would like to point out the
following: to determine whether the matrix B is singular we have to
decompose it, and from this decomposition we can read off all
eigenvalues and the corresponding eigenvectors of B. Therefore, if the
matrix B turns out not to be singular, we already have the eigenvector
belonging to its largest eigenvalue at no additional cost.
The following two examples numerically show the justification for
applying Algorithm 2.
Example 1
In the quadratic matrix polynomial Q(λ) := λ^2 A + λB + C we take
[mathematical expression not reproducible].
We tested the speed of determining whether the pencil is definite
or not. First, we used Algorithm 1 alone. The numerical tests were
performed in MATLAB, with the initial vector determined by the function
randn. We ran 100 tests and obtained an average of 4.62 expensive
iterations to get the answer. When we use Algorithm 1 together with
Algorithm 2, we get the answer immediately, in the preparatory phase of
Algorithm 1, that is, without any expensive iterations.
Example 2
In the quadratic matrix polynomial Q(λ) := λ^2 A + λB + C we take
[mathematical expression not reproducible].
Once again we performed the numerical experiment in MATLAB, with
the initial vector determined by the function randn. We ran 100 tests
and obtained an average of 4.17 expensive iterations to get the answer.
When we used Algorithm 1 together with Algorithm 2, we obtained the
answer after a single expensive iteration.
Example 1 and Example 2 show the importance of using Algorithm 2
for the determination of the initial vector. This is especially
important when the matrix B is singular, or when there is a great
difference between the absolute value of the largest and the absolute
value of the smallest eigenvalue, as in Example 2.
4. Conclusion
This paper gives an effective algorithm for the determination of
the initial vector x_0. If the initial vector x_0 is determined better,
the overall algorithm is accelerated. In previous papers there were
attempts to obtain a better initial vector x_0 based on the exploitation
of some properties of the matrices A, B and C; the properties used were
the ranks of the matrices and their eigenvectors belonging to the
eigenvalue zero. In this paper we additionally exploit the eigenvector
q_1 corresponding to the smallest eigenvalue λ_1A of the matrix A and
the eigenvector f_n corresponding to the largest eigenvalue λ_nB of the
matrix B. In Example 1 and Example 2 we have concretely shown the
justification for applying Algorithm 2.
Therefore, in this paper we combine the results of previous papers
with the results of this paper into a high-quality algorithm for
determining the initial vector x_0. This algorithm enables a significant
acceleration of Algorithm 1. Further research will go in the direction
of using features of the matrices A, B and C that determine the
quadratic pencil, in order to improve the localization of the parameters
ξ and μ.
DOI: 10.2507/27th.daaam.proceedings.002
References
[1] Duffin, R. J. (1955). A minimax theory for overdamped networks,
J. Rat. Mech. Anal., 4 (1955) 221-233
[2] Guo, C.-H.; Higham, N.J. & Tisseur, F. (2007). Detecting
and solving hyperbolic quadratic eigenvalue problems, Technical Report
117, The Manchester Institute of Mathematical Sciences, The University
of Manchester, (2007)
[3] Guo, C.-H. & Lancaster, P. (2005). Algorithms for hyperbolic
quadratic eigenvalue problems, Math. Comp., 74 (2005) 1777-1791, ISSN
0025-5718
[4] Harris, C. & Piersol, A. (2002). Harris Shock and Vibration
Handbook, 5th ed., McGraw-Hill, New York, 2002
[5] Higham, N. J.; Tisseur, F. & Van Dooren, P. M. (2002).
Detecting a definite Hermitian pair and a hyperbolic or elliptic
quadratic eigenvalue problem, and associated nearness problems, Linear
Algebra Appl., 351-352 (2002) 455-474, ISSN 0024-3795
[6] Higham, N.; Mackey, D. & Tisseur, F. (2009). Definite
matrix polynomials and their linearization by definite pencils, SIAM J.
Matrix Anal. Appl. 33 (2009) 478-502, ISSN 0895-4798
[7] Kostic, A. (2016). Eigenvalue Problems In: Applied linear
algebra in action, Katsikis, N. (Ed.),57-83, InTech ISBN
978-953-51-2419-1, Rijeka
[8] Kostic, A. & Sikalo, S. (2015). Definite Quadratic
Eigenvalue Problems, Procedia Engineering, 100 (2015) 56-63, ISSN
1877-7058
[9] Kostic, A. & Voss, H. (2013). On Sylvester's law of
inertia for nonlinear eigenvalue problems, Electr. Trans. Numer. Anal.,
40 (2013) 82-93, ISSN 1068-9613
[10] Niendorf, V. & Voss, H. (2010). Detecting hyperbolic and
definite matrix polynomials, Linear Algebra Appl., 432 (2010) 1017-1035,
ISSN 0024-3795
[11] Stammberger, M. & Voss, H. (2010). On an unsymmetric
eigenvalue problem governing free vibrations of fluid-solid structures,
Electr. Trans. Numer. Anal., 36 (2010) 113-125, ISSN 1068-9613
[12] Tisseur, F. & Meerbergen, K. (2001). The quadratic
eigenvalue problem. SIAM Review, 43 (2001) 235-286, ISSN 0036-1445
This Publication has to be referred as: Kostic, A[leksandra];
Aljicevic, Z[ajim]; Sikalo, S[efko] & Kustura, M[elisa] (2016).
Effective Algorithm for Determination of the Initial Vector for the
Definite Quadratic Pencil, Proceedings of the 27th DAAAM International
Symposium, pp.0010-0016, B. Katalinic (Ed.), Published by DAAAM
International, ISBN 978-3-902734-08-2, ISSN 1726-9679, Vienna, Austria
COPYRIGHT 2017 DAAAM International Vienna