
Hindawi Publishing Corporation

Advances in Operations Research


Volume 2009, Article ID 909753, 22 pages
doi:10.1155/2009/909753

Research Article
A Trust-Region-Based BFGS Method
with Line Search Technique for Symmetric
Nonlinear Equations

Gonglin Yuan,1 Shide Meng,2 and Zengxin Wei1


1 College of Mathematics and Information Science, Guangxi University, Nanning, Guangxi 530004, China
2 Department of Mathematics and Computer Science, Yulin Teacher’s College, Yulin, Guangxi 537000, China

Correspondence should be addressed to Shide Meng, [email protected]

Received 29 April 2009; Revised 19 August 2009; Accepted 28 October 2009

Recommended by Khosrow Moshirvaziri

A trust-region-based BFGS method is proposed for solving symmetric nonlinear equations. In the
given algorithm, if the trial step is unsuccessful, a line search technique is used instead of
repeatedly solving the subproblem of the normal trust-region method. We establish the global and
superlinear convergence of the method under suitable conditions. Numerical results show that the
given method is competitive with the normal trust-region method.

Copyright © 2009 Gonglin Yuan et al. This is an open access article distributed under the Creative
Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.

1. Introduction
Consider the following system of nonlinear equations:

g(x) = 0,  x ∈ R^n,  (1.1)

where g : R^n → R^n is continuously differentiable and the Jacobian ∇g(x) of g is symmetric
for all x ∈ R^n. Let ϑ be the norm function defined by ϑ(x) = (1/2)‖g(x)‖². Then the system of
nonlinear equations (1.1) is equivalent to the following global optimization problem:

min ϑ(x),  x ∈ R^n.  (1.2)

There are two main numerical approaches for nonlinear equations. One is the line
search method and the other is the trust region method. For the line search method, the
9374, 2009, 1, Downloaded from https://onlinelibrary.wiley.com/doi/10.1155/2009/909753 by Iraq Hinari NPL, Wiley Online Library on [11/07/2024]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License

following iterative formula is often used to solve (1.1):

xk+1 = xk + αk dk,  (1.3)

where xk is the kth iterate, αk is a steplength, and dk is a search direction. To begin, we
briefly review some methods for (1.1) based on line search techniques. First, we give some
techniques for αk. Brown and Saad [1] proposed the following line search method to obtain the
stepsize αk:

ϑ(xk + αk dk) − ϑ(xk) ≤ σ αk ∇ϑ(xk)^T dk,  (1.4)

where σ ∈ (0, 1). Based on this technique, Zhu [2] gave the nonmonotone line search technique:

ϑ(xk + αk dk) − ϑ(x_{l(k)}) ≤ σ αk ∇ϑ(xk)^T dk,  (1.5)

where ϑ(x_{l(k)}) = max_{0 ≤ j ≤ m(k)} ϑ(x_{k−j}), m(0) = 0 and m(k) = min{m(k−1) + 1, M}, k ≥ 1,
and M is a nonnegative integer. From these two techniques (1.4) and (1.5), it is easy to see
that the Jacobian matrix ∇gk must be computed at every iteration, which increases the workload,
especially for large-scale problems or when this matrix is expensive to calculate. Considering
these points, we [3] presented a new backtracking inexact technique to obtain the stepsize αk:

‖g(xk + αk dk)‖² ≤ ‖g(xk)‖² + δ αk² gk^T dk,  (1.6)

where δ ∈ (0, 1), gk = g(xk), and dk is a solution of the system of linear equations (1.15). We
established the global convergence and the superlinear convergence of this method. The numerical
results showed that the new line search technique is more effective than the normal methods.
Li and Fukushima [4] proposed an approximate monotone line search technique to obtain the
stepsize αk satisfying

ϑ(xk + αk dk) − ϑ(xk) ≤ −δ1 ‖αk dk‖² − δ2 ‖αk gk‖² + εk ‖g(xk)‖²,  (1.7)

where δ1 > 0 and δ2 > 0 are positive constants, αk = r^{ik}, r ∈ (0, 1), ik is the smallest
nonnegative integer i such that (1.7) holds, and εk satisfies

Σ_{k=0}^∞ εk < ∞.  (1.8)

Combining the line search (1.7) with one special BFGS update formula, they obtained some better
results (see [4]). Inspired by their idea, Wei [5] and Yuan [6–8] presented several approximate
methods. Further work can be found in [9].
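As an illustration of this family of conditions, a backtracking search enforcing a test of the
form (1.7) can be sketched in Python as follows (a minimal sketch: the function name, the default
parameter values, and the fixed choice of εk are our own illustrative assumptions, not from the
paper):

```python
import numpy as np

def approx_monotone_linesearch(g, x, d, r=0.5, delta1=1e-4, delta2=1e-4,
                               eps_k=1e-3, max_backtracks=50):
    """Find alpha = r**i for the smallest i such that a condition of the
    form (1.7) holds, with theta(x) = 0.5*||g(x)||^2:
    theta(x+a*d) - theta(x) <= -delta1*||a*d||^2 - delta2*||a*g||^2
                               + eps_k*||g(x)||^2."""
    gx = g(x)
    theta_x = 0.5 * np.dot(gx, gx)
    alpha = 1.0
    for _ in range(max_backtracks):
        gt = g(x + alpha * d)
        lhs = 0.5 * np.dot(gt, gt) - theta_x
        rhs = (-delta1 * alpha**2 * np.dot(d, d)
               - delta2 * alpha**2 * np.dot(gx, gx)
               + eps_k * np.dot(gx, gx))
        if lhs <= rhs:
            break
        alpha *= r
    return alpha
```

Because the εk term is positive and summable over k by (1.8), the test can accept steps that
mildly increase ϑ, which is what makes the search only "approximately" monotone.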
Second, we present some techniques for dk. One of the most effective methods is the
Newton method. It normally requires the fewest function evaluations, and it is very
good at handling ill-conditioning. However, its efficiency largely depends on the possibility

of efficiently solving the linear system which arises when computing the search direction dk at
each iteration:

∇g(xk) dk = −g(xk).  (1.9)

Moreover, the exact solution of the system (1.9) could be too burdensome, or it is not
necessary when xk is far from a solution [10]. Inexact Newton methods [2, 3, 10] represent the
basic approach underlying most of the Newton-type large-scale algorithms. At each iteration,
the current estimate of the solution is updated by approximately solving the linear system
(1.9) using an iterative algorithm. The inner iteration is typically “truncated” before the
solution to the linear system is obtained. Griewank [11] first proposed Broyden’s rank-one
method for nonlinear equations and obtained its global convergence. At present, a lot of
algorithms have been proposed for solving these two problems (1.1) and (1.2) (see [12–22],
etc.).
The trust region method is an important and efficient class of methods in the area of
nonlinear optimization. It can be traced back to the works of Levenberg [17] and
Marquardt [18] on nonlinear least-squares problems and the work of Goldfeld et al. [23]
for unconstrained optimization. Powell [24] was the first to establish the convergence result
of the trust region method for unconstrained optimization. Fletcher [25, 26] first proposed
trust region algorithms for linearly constrained optimization problems and nonsmooth
optimization problems, respectively. This method has been studied by many authors [15, 27–31]
and has been applied to equality constrained problems [32–34]. Byrd et al. [35], Fan
[36], Powell and Yuan [37], Vardi [38], Yuan [39, 40], Yuan et al. [41], and Zhang and Zhu
[42] proposed various trust region algorithms for constrained optimization problems and
established their convergence. Fan [36], Yuan [39], and Zhang [43] presented trust region
algorithms for nonlinear equations and obtained some results.
The normal trust-region subproblem for nonlinear equations is to find the trial step dk
such that

min  qk*(d) = d^T ∇g(xk) g(xk) + (1/2) d^T ∇g(xk)^T ∇g(xk) d,  (1.10)
s.t.  ‖d‖ ≤ Δk,

where Δk > 0 is a scalar called the trust region radius. Define the predicted descent of the
objective function g(x) at the kth iteration by

Pred_k* = qk*(0) − qk*(dk),  (1.11)

the actual descent of g(x) by

Ared_k* = ϑ(xk) − ϑ(xk + dk),  (1.12)

and the ratio of actual descent to predicted descent:

rk* = Ared_k* / Pred_k*.  (1.13)

For the normal trust region algorithm, if rk* ≥ ρ (ρ ∈ (0, 1)), the iteration is called
successful, the next iterate is xk+1 = xk + dk, and the method goes to the next step; otherwise
it reduces the trust region radius Δk and solves the subproblem (1.10) again. Sometimes this
work must be done many times, computing the Jacobian matrix ∇g(xk) and ∇g(xk)^T ∇g(xk) each
time, which obviously increases the computing time and workload, especially for large-scale
problems. Even more detrimental, the trust region subproblem is not very easy to solve for
most practical problems (see [36, 39], etc.).
In order to alleviate this situation, in which traditional algorithms have to compute the
Jacobian matrix ∇g(xk) and ∇g(xk)^T ∇g(xk) at each and every iteration while repeatedly
resolving the trust region subproblem, in this paper we rewrite the trust-region subproblem as

min  qk(d) = g(xk)^T d + (1/2) d^T Bk d,  (1.14)
s.t.  ‖d‖ ≤ Δk,

where the matrix Bk is an approximation to the Jacobian matrix of g(x) at xk. Due to the
boundedness of the region {d | ‖d‖ ≤ Δk}, (1.14) has a solution regardless of the definiteness
of Bk (see [43]). This implies that it is valid to adopt a BFGS update formula to generate Bk
for trust region methods; the BFGS update is given as follows:

Bk+1 = Bk + (yk yk^T)/(sk^T yk) − (Bk sk sk^T Bk)/(sk^T Bk sk),  (1.15)

where yk = gk+1 − gk and sk = xk+1 − xk. Define the predicted descent of the objective function
g(x) at the kth iteration by

Predk = qk(0) − qk(dk),  (1.16)

the actual descent of g(x) by


Aredk = ‖g(xk)‖² − ‖g(xk + dk)‖²,  (1.17)

and the ratio of actual descent to predicted descent:

rk = (‖g(xk)‖² − ‖g(xk + dk)‖²) / (qk(0) − qk(dk)).  (1.18)

If rk ≥ ρ (ρ ∈ (0, 1)), the iteration is called successful and the next iterate is
xk+1 = xk + dk. Otherwise, we use a line search technique to obtain the steplength λk and let
the next iterate be xk+1 = xk + λk dk. Motivated by the idea of the paper [4], we propose the
following line search technique to obtain λk:

‖g(xk + λk dk)‖² − ‖gk‖² ≤ −σ1 ‖λk gk‖² − σ2 ‖λk dk‖² + σ3 λk dk^T gk,  (1.19)

where σ1, σ2, and σ3 are positive constants. In Section 3, we will show that (1.19) is well
defined. Here and throughout this paper, ‖·‖ denotes the Euclidean norm of vectors or its
induced matrix norm, and g(xk) is abbreviated as gk.
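The backtracking loop that enforces (1.19) can be sketched in Python as follows (the parameter
defaults mirror the experimental choices reported in Section 5; the function name is ours):

```python
import numpy as np

def linesearch_1_19(g, xk, dk, r=0.1, sigma1=1e-5, sigma2=1e-5, sigma3=0.9,
                    max_backtracks=30):
    """Return lambda_k = r**i for the smallest i such that (1.19) holds:
    ||g(x+l*d)||^2 - ||g_k||^2 <= -sigma1*||l*g_k||^2 - sigma2*||l*d_k||^2
                                  + sigma3*l*(d_k^T g_k)."""
    gk = g(xk)
    ng2 = np.dot(gk, gk)
    lam = 1.0
    for _ in range(max_backtracks):
        gt = g(xk + lam * dk)
        lhs = np.dot(gt, gt) - ng2
        rhs = (-sigma1 * lam**2 * ng2
               - sigma2 * lam**2 * np.dot(dk, dk)
               + sigma3 * lam * np.dot(dk, gk))
        if lhs <= rhs:
            break
        lam *= r
    return lam
```

Since dk is a descent direction for ϑ (Lemma 3.2 below), the σ3 term on the right-hand side is
negative, so the condition forces a genuine decrease of ‖g‖².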
In the next section, the proposed algorithm for solving (1.1) is given. The global and
superlinear convergence of the presented algorithm are stated in Sections 3 and 4,
respectively. The numerical results of the method are reported in Section 5.

2. Algorithms
Algorithm 2.1.

Initial: choose ρ, r ∈ (0, 1), 0 < τ1 < τ2 < 1 < τ3, σ1, σ2, σ3 > 0, Δmin > 0, and x0 ∈ R^n.
Let k := 0;
Step 1: Let Δk = Δmin;
Step 2: If ‖gk‖ = 0, stop. Otherwise go to Step 3;
Step 3: Solve the subproblem (1.14) with Δ = Δk to get dk;
Step 4: If

rk = (‖g(xk)‖² − ‖g(xk + dk)‖²) / (qk(0) − qk(dk)) < ρ,  (2.1)

go to Step 5; otherwise let xk+1 = xk + dk, Δk+1 ∈ [‖dk‖, τ3 ‖dk‖], and go to Step 6;

Step 5: Let ik be the smallest nonnegative integer i such that (1.19) holds for λ = r^i.
Let λk = r^{ik} and xk+1 = xk + λk dk, Δk+1 ∈ [τ1 ‖dk‖, τ2 ‖dk‖];
Step 6: Update Bk to get Bk+1 by (1.15). Let k := k + 1. Go to Step 2.
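A compact Python sketch of Algorithm 2.1 is given below. The subproblem (1.14) is solved by a
standard dogleg routine (the solver named in Section 5), assuming Bk stays positive definite;
the concrete radius choices inside the allowed intervals and the safeguard that skips the BFGS
update when sk^T yk ≤ 0 are our own illustrative assumptions.

```python
import numpy as np

def dogleg(gk, B, Delta):
    """Approximate solution of (1.14) along the dogleg path
    (assumes B is positive definite)."""
    dN = np.linalg.solve(B, -gk)                 # quasi-Newton step
    if np.linalg.norm(dN) <= Delta:
        return dN
    dC = -(gk @ gk) / (gk @ B @ gk) * gk         # unconstrained Cauchy step
    if np.linalg.norm(dC) >= Delta:
        return -(Delta / np.linalg.norm(gk)) * gk
    w = dN - dC                                  # dogleg segment to the boundary
    a, b, c = w @ w, 2.0 * (dC @ w), dC @ dC - Delta**2
    t = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return dC + t * w

def algorithm_2_1(g, x0, tol=1e-6, rho=0.25, r=0.1, tau2=0.9, tau3=3.0,
                  s1=1e-5, s2=1e-5, s3=0.9, max_iter=500):
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)                           # B_0 = I
    gk = g(x)
    Delta = np.linalg.norm(gk)                   # Delta_min = ||g_0||
    for _ in range(max_iter):
        if np.linalg.norm(gk) <= tol:            # Step 2
            break
        d = dogleg(gk, B, Delta)                 # Step 3
        q = gk @ d + 0.5 * (d @ B @ d)           # q_k(d); note q_k(0) = 0
        gt = g(x + d)
        rk = (gk @ gk - gt @ gt) / (-q)          # ratio (1.18)
        if rk >= rho:                            # successful trial step
            x_new, g_new = x + d, gt
            Delta = tau3 * np.linalg.norm(d)
        else:                                    # Step 5: line search (1.19)
            lam, ng2 = 1.0, gk @ gk
            while lam > 1e-12:
                gl = g(x + lam * d)
                if (gl @ gl - ng2 <= -s1 * lam**2 * ng2
                        - s2 * lam**2 * (d @ d) + s3 * lam * (d @ gk)):
                    break
                lam *= r
            x_new, g_new = x + lam * d, g(x + lam * d)
            Delta = tau2 * np.linalg.norm(d)
        s, y = x_new - x, g_new - gk             # Step 6: BFGS update (1.15)
        if s @ y > 1e-12:
            Bs = B @ s
            B = B + np.outer(y, y) / (s @ y) - np.outer(Bs, Bs) / (s @ Bs)
        x, gk = x_new, g_new
    return x
```

On the simple symmetric test system g(x) = 2x − b, this sketch reaches the root b/2 after a
handful of iterations.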

Here we also give a normal trust-region method for (1.1) and call it Algorithm 2.2.

Algorithm 2.2 (the normal trust-region algorithm [44]).

Initial: Given a starting point x0 ∈ R^n, an initial trust region radius Δ0 > 0, and an
upper bound Δ̄ of the trust region radius with 0 < Δ0 ≤ Δ̄. Set 0 < μ < 1, 0 < η1 < η2 < 1 <
η3, k := 0.
Step 1: If ‖gk‖ = 0, stop. Otherwise, go to Step 2.
Step 2: Solve the trust-region subproblem (1.10) to obtain dk.
Step 3: Let

rk = (ϑ(xk) − ϑ(xk + dk)) / (qk*(0) − qk*(dk)),  (2.2)

if rk < η1, set Δk+1 = η1 Δk; if rk > η2 and ‖dk‖ = Δk, let Δk+1 = min{η3 Δk, Δ̄};
otherwise, let Δk+1 = Δk.

Step 4: If rk > μ, let xk+1 = xk + dk and go to Step 5; otherwise, let xk+1 = xk and go to
Step 2.
Step 5: Set k := k + 1. Go to Step 1.
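The radius rule in Step 3 can be isolated as a small helper (Python sketch; the η values and the
upper bound default are illustrative, not taken from the paper):

```python
def update_radius(rk, d_norm, Delta, eta1=0.25, eta2=0.75, eta3=2.0,
                  Delta_bar=1e3):
    """Step 3 of Algorithm 2.2: shrink the radius on poor model agreement;
    expand it, capped by Delta_bar, when agreement is good and the step
    hit the trust-region boundary."""
    if rk < eta1:
        return eta1 * Delta
    if rk > eta2 and d_norm == Delta:
        return min(eta3 * Delta, Delta_bar)
    return Delta
```

Note the contrast with Algorithm 2.1: here an unsuccessful iteration (rk ≤ μ) repeats the
subproblem solve with a smaller radius, whereas Algorithm 2.1 falls back on the line search
(1.19) instead.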

Remark 2.3. By yk = gk+1 − gk, we have the following approximate relation:

yk = gk+1 − gk ≈ ∇gk+1 sk.  (2.3)

Since Bk+1 satisfies the secant equation Bk+1 sk = yk and ∇gk+1 is symmetric, we have
approximately

Bk+1 sk ≈ ∇gk+1 sk = ∇gk+1^T sk.  (2.4)

This means that Bk+1 approximates ∇gk+1 along the direction sk.
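The secant property invoked in this remark is easy to verify numerically: for any sk, yk with
sk^T yk ≠ 0, the update (1.15) maps sk exactly to yk (Python sketch; names are ours):

```python
import numpy as np

def bfgs_update(B, s, y):
    """BFGS update (1.15); by construction the new matrix satisfies the
    secant equation B_new @ s = y and remains symmetric."""
    Bs = B @ s
    return B + np.outer(y, y) / (s @ y) - np.outer(Bs, Bs) / (s @ Bs)
```

The identity holds by construction: applying (1.15) to sk gives
Bk sk + yk (sk^T yk)/(sk^T yk) − Bk sk (sk^T Bk sk)/(sk^T Bk sk) = yk.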

3. The Global Convergence


In this section, we will establish the global convergence of Algorithm 2.1. Let Ω be the level
set defined by

Ω = {x | ‖g(x)‖ ≤ ‖g(x0)‖},  (3.1)

which is bounded.

Assumption 1. (A) g is continuously differentiable on an open convex set Ω1 containing Ω.

(B) The Jacobian of g is symmetric and bounded on Ω1, and there exists a positive
constant M such that

‖∇g(x)‖ ≤ M,  ∀x ∈ Ω1.  (3.2)

(C) ∇g is positive definite on Ω1; that is, there is a constant m > 0 such that

m‖d‖² ≤ d^T ∇g(x) d,  ∀x ∈ Ω1, d ∈ R^n.  (3.3)

(D) ϑ(x) is differentiable and its gradient satisfies

‖∇ϑ(x) − ∇ϑ(y)‖ ≤ L‖x − y‖,  ∀x, y ∈ Ω1,  (3.4)

where L is the Lipschitz constant. By Assumptions 1(A) and 1(B), it is not difficult to get the
following inequality:

‖yk‖ ≤ M‖sk‖.  (3.5)

According to Assumptions 1(A) and 1(C), we have

sk^T yk = sk^T ∇g(ξ) sk ≥ m‖sk‖²,  (3.6)

where ξ = xk + θ0(xk+1 − xk), θ0 ∈ (0, 1), which means that the update matrix Bk is always
positive definite. By (3.5) and (3.6), we have

sk^T yk / ‖sk‖² ≥ m,  ‖yk‖² / (sk^T yk) ≤ M²/m.  (3.7)
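The two ratios bounded in (3.7) are cheap to monitor in an implementation. For example, for
g(x) = 2x, where m = M = 2, both ratios equal 2 exactly (Python sketch; the helper name is
ours):

```python
import numpy as np

def curvature_ratios(g, x0, x1):
    """Return (s^T y / ||s||^2, ||y||^2 / s^T y) for s = x1 - x0 and
    y = g(x1) - g(x0); by (3.7) the first is at least m and the second
    is at most M^2/m."""
    s = x1 - x0
    y = g(x1) - g(x0)
    return (s @ y) / (s @ s), (y @ y) / (s @ y)
```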

Lemma 3.1 (see Theorem 2.1 in [45]). Suppose that Assumption 1 holds. Let Bk be updated by the
BFGS formula (1.15) and let B0 be symmetric and positive definite. For any k ≥ 0, sk and yk
satisfy (3.7). Then there exist positive constants β1, β2, and β3 such that, for any positive
integer k,

β1‖dj‖² ≤ dj^T Bj dj ≤ β2‖dj‖²,  β1‖dj‖ ≤ ‖Bj dj‖ ≤ β3‖dj‖  (3.8)

hold for at least ⌈k/2⌉ values of j ∈ {1, 2, . . . , k}.

Considering the subproblem (1.14), and similar to [2], the following assumption is needed.

Assumption 2. Bk is a good approximation to ∇gk; that is,

‖(∇gk − Bk) dk‖ ≤ ε0 ‖gk‖,  (3.9)

and dk satisfies

‖gk + Bk dk‖ ≤ ε1 ‖gk‖,  (3.10)

where ε0 ∈ (0, 1) is a small quantity, ε1 > 0, and ε0 + ε1 ∈ (0, 1).

Lemma 3.2. Let Assumption 2 hold. Then dk is a descent direction for ϑ(x) at xk; that is,

∇ϑ(xk)^T dk < 0.  (3.11)

Proof. Let rk be the residual associated with dk, so that gk + Bk dk = rk:

∇ϑ(xk)^T dk = g(xk)^T ∇g(xk) dk
            = g(xk)^T [(∇g(xk) − Bk) dk + rk − g(xk)]  (3.12)
            = g(xk)^T (∇g(xk) − Bk) dk + g(xk)^T rk − g(xk)^T g(xk).

So we have

∇ϑ(xk)^T dk + ‖g(xk)‖² = g(xk)^T (∇g(xk) − Bk) dk + g(xk)^T rk.  (3.13)

Therefore, taking norms on the right-hand side of the above equality, we obtain from
Assumption 2 that

∇ϑ(xk)^T dk ≤ ‖g(xk)‖ ‖(∇g(xk) − Bk) dk‖ + ‖g(xk)‖ ‖rk‖ − ‖g(xk)‖²
            ≤ −(1 − ε0 − ε1) ‖g(xk)‖².  (3.14)

Hence, for ε0 + ε1 ∈ (0, 1), the lemma holds.

According to the above lemma, it is easy to deduce that the norm function ϑ(x) is
decreasing, which means that ‖gk+1‖ ≤ ‖gk‖ holds.

Lemma 3.3. Let {xk} be generated by Algorithm 2.1 and suppose that Assumption 2 holds. Then
{xk} ⊂ Ω. Moreover, {‖gk‖} converges.

Proof. By Lemma 3.2, we have ‖gk+1‖ ≤ ‖gk‖. Then we conclude from Lemma 3.3 in [46] that
{‖gk‖} converges. Moreover, we have for all k

‖gk+1‖ ≤ ‖gk‖ ≤ ‖gk−1‖ ≤ · · · ≤ ‖g(x0)‖.  (3.15)

This implies that {xk} ⊂ Ω.

Lemma 3.4. Let Assumption 1 hold. Then the following inequalities hold:

gk^T dk ≤ −β1‖dk‖²,  ‖gk‖² ≥ β1²‖dk‖²,  (3.16)

−gk^T dk ≤ (1/β1)‖gk‖².  (3.17)

Proof. Since the update matrix Bk is positive definite, problem (1.14) has a unique solution
dk, which together with some multiplier αk ≥ 0 satisfies the following equations:

Bk dk + αk dk = −gk,
αk (‖dk‖ − Δk) = 0.  (3.18)

From (3.18), we can obtain

dk^T Bk dk + gk^T dk = −αk‖dk‖² ≤ 0,  (3.19)

αk = (−gk^T dk − dk^T Bk dk) / ‖dk‖².  (3.20)

By (3.19) and (3.8), we get (3.16), which also implies that inequality (3.17) holds.

The next lemma shows that (1.19) is reasonable, and hence Algorithm 2.1 is well defined.

Lemma 3.5. Let Assumptions 1(D) and 2 hold. Then there exists a stepsize λk satisfying (1.19)
in a finite number of backtracking steps.

Proof. From Lemma 3.8 in [1] we have that, in a finite number of backtracking steps, λk must
satisfy

‖g(xk + λk dk)‖² − ‖g(xk)‖² ≤ δ λk g(xk)^T ∇g(xk) dk,  δ ∈ (0, 1).  (3.21)

By (3.12) and (3.14), letting β0 = 1 − ε0 − ε1, we have

g(xk)^T ∇g(xk) dk ≤ −β0‖gk‖² = −(β0/3)‖gk‖² − (β0/3)‖gk‖² − (β0/3)‖gk‖²
                  ≤ −(β0/3)‖gk‖² − (β0/3)β1²‖dk‖² + (β0/3)β1 gk^T dk,  (3.22)

where the last inequality follows from (3.16) and (3.17). Since λk ≤ 1, letting
σ1 ∈ (0, (β0/3)δ), σ2 ∈ (0, (β0/3)β1² δ), and σ3 ∈ (0, (β0/3)β1 δ), we obtain (1.19). The
proof is complete.

Lemma 3.6. Let {xk} be generated by Algorithm 2.1 and suppose that Assumptions 1 and 2 hold.
Then one has

Σ_{k=0}^∞ (−gk^T dk) < ∞,  Σ_{k=0}^∞ dk^T Bk dk < ∞.  (3.23)

In particular, one has

lim_{k→∞} (−gk^T dk) = 0,  lim_{k→∞} dk^T Bk dk = 0.  (3.24)

Proof. By (3.8) and (3.19), we have

qk(dk) = gk^T dk + (1/2) dk^T Bk dk ≤ (1/2) gk^T dk ≤ −(1/2) dk^T Bk dk.  (3.25)

From Step 4 of Algorithm 2.1, if rk ≥ ρ is true, we get

‖g(xk+1)‖² − ‖g(xk)‖² ≤ qk(dk) ≤ (1/2) gk^T dk ≤ −(1/2) dk^T Bk dk;  (3.26)

otherwise, if rk < ρ is true, by Step 5 of Algorithm 2.1, (3.8), and (3.26), we can obtain

‖g(xk+1)‖² − ‖g(xk)‖² ≤ −σ1‖λk gk‖² − σ2‖λk dk‖² + σ3 λk dk^T gk
                      ≤ σ3 λk dk^T gk ≤ −σ3 λk dk^T Bk dk.  (3.27)

By Lemma 3.5, we know that (1.19) can be satisfied in a finite number of backtracking steps,
which means that there exists a constant λ* ∈ (0, 1) satisfying λ* ≤ λk for all k. By (3.26)
and (3.27), we have

‖g(xk+1)‖² − ‖g(xk)‖² ≤ ρ1 gk^T dk ≤ −ρ1 dk^T Bk dk ≤ −ρ1 β1‖dk‖² < 0,  (3.28)

where ρ1 = min{1/2, σ3 λ*}. According to (3.28), we get

Σ_{k=0}^∞ dk^T Bk dk ≤ Σ_{k=0}^∞ (−gk^T dk) ≤ (1/ρ1) Σ_{k=0}^∞ (‖g(xk)‖² − ‖g(xk+1)‖²)
                     = (1/ρ1) lim_{N→∞} Σ_{k=0}^N (‖g(xk)‖² − ‖g(xk+1)‖²)  (3.29)
                     = (1/ρ1) lim_{N→∞} (‖g(x0)‖² − ‖g(xN+1)‖²),

and by Lemma 3.3 we know that {‖gk‖} is convergent. Therefore, we deduce that (3.23)
holds. According to (3.23), it is easy to deduce (3.24). The proof is complete.

Lemma 3.7. Suppose that Assumptions 1 and 2 hold. Then there are positive constants b1 ≤ b2
and b3 such that, for any k with ‖dk‖ ≠ Δmin, the following inequalities hold:

b1‖gk‖ ≤ ‖dk‖ ≤ b2‖gk‖,  αk ≤ b3.  (3.30)

Proof. We will prove this lemma in the following two cases.

Case 1 (‖dk‖ < Δk). By (3.18), we have αk = 0 and Bk dk = −gk. Together with (3.8) and (3.19),
we get

β1‖dk‖² ≤ dk^T Bk dk = −dk^T gk ≤ ‖dk‖‖gk‖,
‖gk‖ = ‖−gk‖ = ‖Bk dk‖ ≤ β3‖dk‖.  (3.31)

Then (3.30) holds with b1 = 1/β3 ≤ b2 = 1/β1 and b3 = 0.

Case 2 (‖dk‖ = Δk). From (3.19) and (3.8), we have

β1‖dk‖² ≤ dk^T Bk dk ≤ −gk^T dk ≤ ‖gk‖‖dk‖.  (3.32)

Then we get ‖dk‖ ≤ (1/β1)‖gk‖. By (3.10) and (3.8), it is easy to deduce that

(1 − ε1)‖gk‖ ≤ ‖Bk dk‖ ≤ β3‖dk‖.  (3.33)

So we obtain ‖dk‖ ≥ ((1 − ε1)/β3)‖gk‖. Using (3.20), we have

αk = (−gk^T dk − dk^T Bk dk) / ‖dk‖² ≤ ‖gk‖/‖dk‖ ≤ β3/(1 − ε1).  (3.34)

Therefore, (3.30) holds. The proof is complete.

In the next theorem, we establish the global convergence of Algorithm 2.1.

Theorem 3.8. Let {xk} be generated by Algorithm 2.1 and let the conditions in Assumptions 1
and 2 hold. Then one has

lim_{k→∞} ‖gk‖ = 0.  (3.35)

Proof. By Lemma 3.6, we have

lim_{k→∞} (−gk^T dk) = lim_{k→∞} dk^T Bk dk = 0.  (3.36)

Combining (3.8) and (3.36), we get

lim_{k→∞} ‖dk‖ = 0.  (3.37)

Together with (3.30), we obtain (3.35). The proof is complete.

4. The Superlinear Convergence Analysis


In this section, we will present the superlinear convergence of Algorithm 2.1.

Assumption 3. ∇g is Hölder continuous at x*; that is, for every x in a neighborhood of x*,
there are positive constants M1 and γ such that

‖∇g(x) − ∇g(x*)‖ ≤ M1 ‖x − x*‖^γ,  (4.1)

where x* stands for the unique solution of (1.1) in Ω1.

Lemma 4.1. Let {xk} be generated by Algorithm 2.1 and let the conditions in Assumptions 1 and 2
hold. Then, for any fixed γ > 0, one has

Σ_{k=0}^∞ ‖xk − x*‖^γ < ∞.  (4.2)

Moreover, one has

Σ_{k=0}^∞ χk^γ < ∞,  (4.3)

where χk^γ = max{‖xk − x*‖^γ, ‖xk+1 − x*‖^γ}.

Proof. Using Assumption 1, we can obtain the following inequality:

m‖x − x*‖ ≤ ‖g(x)‖ = ‖g(x) − g(x*)‖ ≤ M‖x − x*‖,  x ∈ Ω1.  (4.4)

By (3.8) and (3.30), we have

−β2‖dk‖² ≤ −dk^T Bk dk ≤ −β1‖dk‖²,
−b2²‖gk‖² ≤ −‖dk‖² ≤ −b1²‖gk‖².  (4.5)

Together with (3.28), we get

‖gk+1‖² − ‖gk‖² ≤ ρ1 gk^T dk ≤ −ρ1 dk^T Bk dk ≤ −ρ1 β1‖dk‖² ≤ −ρ1 β1 b1²‖gk‖²,  (4.6)

and let ρ0 = min{ρ1 β1 b1², ρ} ∈ (0, 1). Suppose that there exists a positive integer k0 such
that (3.8) holds for k ≥ k0. Then we obtain

‖gk+1‖² ≤ ‖gk‖² − ρ0‖gk‖² = (1 − ρ0)‖gk‖² ≤ · · · ≤ (1 − ρ0)^{k−k0+1}‖gk0‖² = c0 c1^k,  (4.7)

where c0 = (1 − ρ0)^{1−k0}‖gk0‖² and c1 = 1 − ρ0 ∈ (0, 1). This together with (4.4) shows that
‖xk+1 − x*‖² ≤ m^{−2} c0 c1^k holds for all k large enough. Therefore, for any γ, we have
(4.2). Noticing that χk^γ ≤ ‖xk − x*‖^γ + ‖xk+1 − x*‖^γ, from (4.2) we can get (4.3).

Lemma 4.2. Let Assumptions 1, 2, and 3 hold. Then, for all k sufficiently large, there exists a
positive constant M2 such that

‖yk − ∇g(x*) sk‖ ≤ M2 χk ‖sk‖,  (4.8)

where χk = max{‖xk − x*‖^γ, ‖xk+1 − x*‖^γ}.

Proof. From Theorem 3.8 and (4.4), it is not difficult to get xk → x*. Then (4.1) holds for all
k large enough. Using the mean value theorem, for all k sufficiently large, we have

‖yk − ∇g(x*) sk‖ = ‖∇g(xk + t0(xk+1 − xk)) sk − ∇g(x*) sk‖
                 ≤ ‖∇g(xk + t0(xk+1 − xk)) − ∇g(x*)‖ ‖sk‖
                 ≤ M1 ‖xk + t0(xk+1 − xk) − x*‖^γ ‖sk‖  (4.9)
                 ≤ M2 χk ‖sk‖,

where M2 = M1(2t0 + 1) and t0 ∈ (0, 1). Therefore, the inequality (4.8) holds.

Lemma 4.3. Let Assumptions 1, 2, and 3 hold and let {xk} be generated by Algorithm 2.1. Denote
Q = ∇g(x*)^{−1/2} and Hk = Bk^{−1}. Then, for all large k, there are positive constants ei,
i = 1, 2, 3, 4, and η ∈ (0, 1) such that

‖Bk+1 − ∇g(x*)‖_{Q,F} ≤ (1 + e1 χk)‖Bk − ∇g(x*)‖_{Q,F} + e2 χk,  (4.10)

‖Hk+1 − ∇g(x*)^{−1}‖_{Q^{−1},F} ≤ (1 − η θk² + e3 χk)‖Hk − ∇g(x*)^{−1}‖_{Q^{−1},F} + e4 χk,  (4.11)

where ‖A‖_{Q,F} = ‖Q^T A Q‖_F, ‖·‖_F is the Frobenius norm of a matrix, and θk is defined as
follows:

θk = ‖Q^{−1}(Hk − ∇g(x*)^{−1}) yk‖ / (‖Hk − ∇g(x*)^{−1}‖_{Q^{−1},F} ‖Q yk‖).  (4.12)

In particular, {‖Bk‖_F} and {‖Hk‖_F} are bounded.

Proof. From (1.15), we have

‖Bk+1 − ∇g(x*)‖_{Q,F} = ‖Bk − ∇g(x*) + yk yk^T/(sk^T yk) − Bk sk sk^T Bk/(sk^T Bk sk)‖_{Q,F}
                      ≤ (1 + e1 χk)‖Bk − ∇g(x*)‖_{Q,F} + e2 χk,  (4.13)

where the last inequality follows from inequality (49) of [47]. Hence, (4.10) holds. By (4.8),
in a way similar to that of [46], we can prove that (4.11) holds and that {‖Bk‖} and {‖Hk‖}
are bounded. The proof is complete.

Lemma 4.4. Let {xk} be generated by Algorithm 2.1 and let the conditions in Assumptions 1, 2,
and 3 hold. Then

lim_{k→∞} ‖(Bk − ∇g(x*)) sk‖ / ‖sk‖ = 0,  (4.14)

where sk = xk+1 − xk.

Proof. In a way similar to [46], it is not difficult to obtain

lim_{k→∞} ‖Q^{−1}(Hk − ∇g(x*)^{−1}) yk‖ / ‖Q yk‖ = 0.  (4.15)

On the other hand, we have

‖Q^{−1}(Hk − ∇g(x*)^{−1}) yk‖ = ‖Q^{−1} Hk (∇g(x*) − Bk) ∇g(x*)^{−1} yk‖
  ≥ ‖Q^{−1} Hk (∇g(x*) − Bk) sk‖
    − ‖Q^{−1} Hk (∇g(x*) − Bk)(sk − ∇g(x*)^{−1} yk)‖
  ≥ ‖Q^{−1} Hk (∇g(x*) − Bk) sk‖
    − ‖Q^{−1}‖ ‖Hk‖ (‖∇g(x*)‖ + ‖Bk‖) ‖∇g(x*)^{−1}‖ ‖yk − ∇g(x*) sk‖
  ≥ ‖Q^{−1} Hk (∇g(x*) − Bk) sk‖
    − M2 χk ‖Q^{−1}‖ ‖Hk‖ (‖∇g(x*)‖ + ‖Bk‖) ‖∇g(x*)^{−1}‖ ‖sk‖
  = ‖Q^{−1} Hk (∇g(x*) − Bk) sk‖ − o(‖sk‖),  (4.16)

where the last inequality follows from (4.8). We know that {‖Bk‖} and {‖Hk‖} are bounded
and that Hk is positive definite. By (3.5), we get

‖Q yk‖ ≤ M ‖Q‖ ‖sk‖.  (4.17)

Combining (4.15) and (4.17), we conclude that (4.14) holds. The proof is complete.

Theorem 4.5. Let the conditions in Assumptions 1, 2, and 3 hold, and let ε1 → 0 in (3.10).
Then the sequence {xk} generated by Algorithm 2.1 converges to x* superlinearly when λk = 1.

Proof. For all xk ∈ Ω1, we get

‖gk+1‖/‖dk‖ = ‖gk + Bk dk + (∇gk − Bk) dk + O(‖dk‖²)‖ / ‖dk‖
            ≤ ‖gk + Bk dk‖/‖dk‖ + ‖(∇gk − Bk) dk‖/‖dk‖ + O(‖dk‖)  (4.18)
            ≤ ε1 ‖gk‖/‖dk‖ + ‖(∇gk − Bk) dk‖/‖dk‖ + O(‖dk‖),

where the last inequality follows from (3.10). By (3.5), we have

‖gk‖ ≤ ‖gk+1 − gk‖ + ‖gk+1‖ ≤ M‖dk‖ + ‖gk+1‖.  (4.19)

Dividing both sides by ‖dk‖, we get

‖gk‖/‖dk‖ ≤ M + ‖gk+1‖/‖dk‖.  (4.20)

Substituting this into (4.18), we can obtain

‖gk+1‖/‖dk‖ ≤ ε1 (M + ‖gk+1‖/‖dk‖) + ‖(∇gk − Bk) dk‖/‖dk‖ + O(‖dk‖),  (4.21)

which means that

‖gk+1‖/‖dk‖ ≤ [M ε1 + ‖(∇gk − Bk) dk‖/‖dk‖ + O(‖dk‖)] / (1 − ε1).  (4.22)

Since ε1 → 0 and ‖dk‖ → 0 as k → ∞, by (4.14) and (3.10) we have

lim_{k→∞} ‖gk+1‖/‖dk‖ = 0.  (4.23)

Using (3.16), we get

lim_{k→∞} ‖gk+1‖/‖gk‖ = 0.  (4.24)

Considering (4.4), we have

lim_{k→∞} ‖xk + dk − x*‖ / ‖xk − x*‖ = 0.  (4.25)

Therefore, we get the superlinear convergence result.

5. Numerical Results
In this section, we test the proposed trust-region-based BFGS method on symmetric nonlinear
equations and compare it with Algorithm 2.2. The following problems with various sizes
will be solved.

Problem 1. The discretized two-point boundary value problem, like the problem in [48], is

g(x) = Ax + (1/(n + 1)²) F(x) = 0,  (5.1)

where A is the n × n tridiagonal matrix given by

     ⎡ 8 −1            ⎤
     ⎢−1  8 −1         ⎥
A =  ⎢   −1  8 −1      ⎥,  (5.2)
     ⎢      ⋱  ⋱  ⋱    ⎥
     ⎢         ⋱  ⋱ −1 ⎥
     ⎣           −1  8 ⎦

and F(x) = (F1(x), F2(x), . . . , Fn(x))^T with Fi(x) = sin xi − 1, i = 1, 2, . . . , n.
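The residual of Problem 1 can be coded directly (Python sketch; the dense construction of A is
for clarity only, a practical implementation would store A as a sparse tridiagonal matrix):

```python
import numpy as np

def problem1_g(x):
    """g(x) = A x + F(x)/(n+1)^2 with A = tridiag(-1, 8, -1) and
    F_i(x) = sin(x_i) - 1, as in (5.1)-(5.2)."""
    n = x.size
    A = 8.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    return A @ x + (np.sin(x) - 1.0) / (n + 1) ** 2
```

Note that the Jacobian A + diag(cos(xi))/(n+1)² is symmetric, so the problem fits the setting
of (1.1).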

Problem 2. The unconstrained optimization problem is

min f(x),  x ∈ R^n,  (5.3)

with the Engval function [49] f : R^n → R defined by

f(x) = Σ_{i=2}^n [(x_{i−1}² + xi²)² − 4x_{i−1} + 3].  (5.4)

The related symmetric nonlinear equation is

g(x) = (1/4) ∇f(x) = 0,  (5.5)

where g(x) = (g1(x), g2(x), . . . , gn(x))^T with

g1(x) = x1(x1² + x2²) − 1,
gi(x) = xi(x_{i−1}² + 2xi² + x_{i+1}²) − 1,  i = 2, 3, . . . , n − 1,  (5.6)
gn(x) = xn(x_{n−1}² + xn²).

In the experiments, the parameters in Algorithm 2.1 were chosen as τ1 = 0.5, τ2 = 0.9, τ3 = 3,
r = 0.1, Δmin = ‖g0‖, B0 = I, ρ = 0.25, σ1 = σ2 = 10^{−5}, and σ3 = 0.9. We obtain dk from
subproblem (1.14) by the well-known dogleg method. The parameters in Algorithm 2.2 were

Table 1: Test results for Problem 1.

(a) Small-scale test results for Algorithm 2.1 (NI/NG/EG).

Dim      x0=(1,...,1)          (60,...,60)           (600,...,600)         (−1,...,−1)           (−60,...,−60)         (−600,...,−600)
n = 10   13/24/4.593832e-07    14/25/2.406624e-07    17/30/2.272840e-07    13/24/3.104130e-07    14/25/2.449361e-07    17/30/2.398188e-07
n = 50   48/101/2.120250e-07   49/102/2.189696e-07   50/103/4.009098e-07   48/101/2.147571e-07   49/102/2.181267e-07   50/103/4.008911e-07
n = 99   82/171/6.794811e-07   89/188/6.345939e-07   91/190/7.804790e-07   82/171/8.358725e-07   89/188/6.367964e-07   91/190/7.801889e-07

Dim      x0=(1,0,1,0,...)      (60,0,60,0,...)       (600,0,600,0,...)     (−1,0,−1,0,...)       (−60,0,−60,0,...)     (−600,0,−600,0,...)
n = 10   21/42/4.895404e-07    22/43/7.364467e-07    22/45/3.922363e-07    21/44/4.894966e-07    22/43/3.463471e-08    22/45/3.860638e-07
n = 50   72/153/8.003368e-07   86/181/9.350290e-07   88/185/4.420131e-07   70/151/7.620218e-07   86/181/6.776281e-07   49/83/4.420083e-07
n = 99   73/156/6.856897e-07   88/185/9.013346e-07   88/191/7.631881e-07   74/161/6.856481e-07   88/185/9.918464e-07   88/191/7.368909e-07

(b) Large-scale test results for Algorithm 2.1 (NI/NG/EG).

Dim        x0=(1,...,1)          (60,...,60)           (600,...,600)         (−1,...,−1)           (−60,...,−60)         (−600,...,−600)
n = 200    83/178/8.779599e-7    106/225/9.096568e-7   117/250/9.483206e-7   85/180/8.796828e-7    106/225/7.376219e-7   117/250/9.263058e-7
n = 500    85/180/9.755827e-7    103/218/8.830573e-7   115/244/9.825658e-7   83/178/9.765194e-7    103/218/7.659650e-7   115/244/9.796118e-7
n = 1000   76/165/8.611337e-7    96/207/8.301215e-7    105/224/9.957816e-7   76/165/8.587066e-7    96/207/8.291876e-7    105/224/9.925005e-7

Dim        x0=(1,0,1,0,...)      (60,0,60,0,...)       (600,0,600,0,...)     (−1,0,−1,0,...)       (−60,0,−60,0,...)     (−600,0,−600,0,...)
n = 200    68/149/9.559911e-7    91/194/8.780047e-7    101/216/7.484521e-7   69/150/9.790557e-7    91/194/9.770900e-7    101/216/7.275693e-7
n = 500    72/155/8.921008e-7    96/205/9.797645e-07   106/225/9.993161e-7   72/155/8.916405e-7    97/206/9.886969e-7    106/225/7.492841e-7
n = 1000   69/152/8.123102e-7    93/200/9.919863e-7    106/227/6.930976e-7   69/152/8.119328e-7    93/200/9.948500e-7    106/227/6.946308e-7

(c) Small-scale test results for Algorithm 2.2.

x0        (1,...,1)             (60,...,60)           (600,...,600)         (-1,...,-1)           (-60,...,-60)         (-600,...,-600)
Dim       NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG
n = 10    54/107/8.167469e-07   67/133/8.039519e-07   74/147/7.624248e-07   54/107/8.167466e-07   67/133/8.061366e-07   74/147/7.624560e-07
n = 50    58/115/8.876870e-07   72/143/9.602663e-07   79/157/7.684310e-07   58/115/8.876868e-07   72/143/9.603892e-07   79/157/7.684327e-07
n = 99    60/119/7.614838e-07   73/145/8.350445e-07   80/159/9.679851e-07   60/119/7.615091e-07   73/145/8.350450e-07   80/159/9.679851e-07

x0        (1,0,1,0,...)         (60,0,60,0,...)       (600,0,600,0,...)     (-1,0,-1,0,...)       (-60,0,-60,0,...)     (-600,0,-600,0,...)
Dim       NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG
n = 10    52/103/7.732660e-07   64/127/7.605486e-07   72/143/9.929883e-07   52/103/7.732628e-07   64/127/7.646868e-07   72/143/9.930747e-07
n = 50    56/111/8.223488e-07   69/137/8.896898e-07   77/153/9.690007e-07   56/111/8.223484e-07   69/137/8.899175e-07   77/153/9.690048e-07
n = 99    57/113/8.965852e-07   71/141/9.598124e-07   78/155/7.734909e-07   57/113/8.965851e-07   71/141/9.598763e-07   78/155/7.734918e-07
9374, 2009, 1, Downloaded from https://onlinelibrary.wiley.com/doi/10.1155/2009/909753 by Iraq Hinari NPL, Wiley Online Library on [11/07/2024]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
18 Advances in Operations Research
(d) Large-scale test results for Algorithm 2.2.

x0        (1,...,1)             (60,...,60)           (600,...,600)         (-1,...,-1)           (-60,...,-60)         (-600,...,-600)
Dim       NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG
n = 200   61/121/7.610549e-07   74/147/8.110467e-07   82/163/8.917908e-07   61/121/7.610549e-07   74/147/8.110534e-07   82/163/8.917909e-07
n = 500   62/123/8.958279e-07   76/151/9.526492e-07   83/165/7.712044e-07   62/123/8.958279e-07   76/151/9.526504e-07   83/165/7.712044e-07
n = 1000  63/125/9.938699e-07   77/153/8.049274e-07   84/167/9.351920e-07   63/125/9.938703e-07   77/153/8.049274e-07   84/167/9.351920e-07

x0        (1,0,1,0,...)         (60,0,60,0,...)       (600,0,600,0,...)     (-1,0,-1,0,...)       (-60,0,-60,0,...)     (-600,0,-600,0,...)
Dim       NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG
n = 200   59/117/9.576414e-07   72/143/7.503172e-07   79/157/8.249912e-07   59/117/9.576414e-07   72/143/7.503296e-07   79/157/8.249914e-07
n = 500   60/119/8.285552e-07   73/145/8.811245e-07   81/161/9.701366e-07   60/119/8.285552e-07   73/145/8.811269e-07   81/161/9.701367e-07
n = 1000  61/121/8.649128e-07   75/149/9.191890e-07   82/163/7.444393e-07   61/121/8.649128e-07   75/149/9.191896e-07   82/163/7.444393e-07

chosen as Δ = Δ0 = ‖g0‖, η1 = 0.25, η2 = 0.75, μ = 0.01, and η3 = 2. Since the matrix ∇g(xk)^T ∇g(xk) may be singular, we solve (1.10) by extreme minimization over a two-dimensional subspace to obtain dk. The program was coded in MATLAB 6.5.1. We stopped the iteration when the condition ‖g(x)‖ ≤ 10^-6 was satisfied. If the number of iterations exceeds one thousand, we also stop the program and regard the method as having failed. For Algorithm 2.1, Tables 1(a)-1(b) and Tables 2(a)-2(b) show the performance of the proposed method on Problems 1 and 2, respectively. For Algorithm 2.2, Tables 1(c)-1(d) and Tables 2(c)-2(d) show the performance of the normal trust region method on Problems 1 and 2, respectively. The columns of the tables have the following meaning:

Dim: the dimension of the problem,
NI: the total number of iterations,
NG: the number of function evaluations,
EG: the norm of the function value at the final iteration.
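The thresholds η1 = 0.25, η2 = 0.75, and η3 = 2 listed above are the parameters of the standard trust-region ratio test. A minimal sketch of such a radius update, written as a generic rule rather than the authors' exact one, might look like:

```python
def update_radius(rho, delta, eta1=0.25, eta2=0.75, eta3=2.0):
    # rho: ratio of actual to predicted reduction at the trial step.
    # This is the standard ratio test with the parameter values used
    # in the experiments; the paper's precise rule may differ in detail.
    if rho < eta1:        # poor model agreement: shrink the region
        return delta / eta3
    elif rho > eta2:      # very good agreement: enlarge the region
        return eta3 * delta
    return delta          # acceptable agreement: keep the radius
```

With these values, a trial step whose ratio falls below 0.25 halves the radius, while a ratio above 0.75 doubles it.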

From Tables 1(a)-2(d), it is not difficult to see that the proposed method performs better than the normal method. Furthermore, the performance of Algorithm 2.1 hardly changes as the dimension increases. Overall, the given method is competitive with the normal trust region method.

6. Discussion

In this paper we have given a trust-region-based BFGS method and established its convergence results. The numerical results show that the method is promising. In fact, problem (1.1) can arise from an unconstrained optimization problem or an equality constrained optimization problem (for details, see [4]). Some other practical problems, such as the saddle point problem, the discretized two-point boundary value problem, and the discretized elliptic boundary value problem, also take the form of (1.1) with a symmetric Jacobian (see, e.g., Chapter 1 in [50]). The presented method can also be extended to solve general nonlinear equations.
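As an illustration of such systems, a discretized two-point boundary value problem yields a residual whose Jacobian is symmetric. The sketch below is illustrative only: the nonlinearity sin(x) and the mesh scaling are assumed for the example, not taken from this paper's test problems.

```python
import numpy as np

def residual(x):
    """Residual g(x) = A x + h^2 sin(x) of a discretized two-point
    boundary value problem, with A = tridiag(-1, 2, -1)."""
    n = x.size
    h = 1.0 / (n + 1)
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    return A @ x + h**2 * np.sin(x)

def jacobian(x):
    # The Jacobian A + h^2 diag(cos(x)) is symmetric, so g fits
    # the framework of (1.1) with a symmetric Jacobian.
    n = x.size
    h = 1.0 / (n + 1)
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    return A + h**2 * np.diag(np.cos(x))
```

Because the tridiagonal matrix A and the diagonal correction are both symmetric, the symmetry assumption required by the method holds at every iterate.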

Table 2: Test results for Problem 2.

(a) Small-scale test results for Algorithm 2.1.

x0        (0.5,...,0.5)         (1,...,1)             (3,...,3)             (-0.75,...,-0.75)     (-2,...,-2)           (-3,...,-3)
Dim       NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG
n = 10    25/44/6.348374e-08    21/32/9.720971e-07    92/103/4.889567e-07   21/32/2.475812e-08    46/59/8.691255e-07    86/105/5.860956e-07
n = 50    37/56/9.404211e-07    39/56/9.950345e-07    113/139/8.776379e-07  40/63/9.587026e-07    69/96/6.984106e-07    103/125/9.523480e-07
n = 99    42/59/9.725361e-07    41/60/7.374460e-07    113/135/7.909796e-07  40/55/8.380367e-07    117/489/9.805302e-07  97/129/7.975248e-07

x0        (0.5,0,0.5,0,...)     (1,0,1,0,...)         (3,0,3,0,...)         (-0.75,0,-0.75,0,...) (-2,0,-2,0,...)       (-3,0,-3,0,...)
Dim       NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG
n = 10    24/35/7.623619e-07    21/30/4.711749e-07    44/65/3.147507e-07    27/48/3.529113e-07    39/76/4.004367e-07    29/42/8.503415e-07
n = 50    36/57/9.752703e-07    36/57/8.776354e-07    54/77/8.287552e-07    41/64/8.491652e-07    42/69/9.492805e-07    58/77/9.029472e-07
n = 99    36/61/8.307004e-07    37/56/8.265146e-07    60/93/9.507706e-07    42/73/5.373087e-07    50/79/8.247653e-07    62/88/9.217390e-07

(b) Large-scale test results for Algorithm 2.1.

x0        (0.5,...,0.5)         (1,...,1)             (3,...,3)             (-0.75,...,-0.75)
Dim       NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG
n = 200   40/57/7.785598e-07    41/58/7.464372e-07    112/130/4.921097e-07  40/65/6.229759e-07
n = 500   36/57/9.785814e-07    41/60/7.887407e-07    113/135/3.538433e-07  38/69/8.871522e-07
n = 1000  42/65/7.382939e-07    40/59/7.463210e-07    120/146/6.044161e-07  40/69/4.563405e-07

x0        (0.5,0,0.5,0,...)     (1,0,1,0,...)         (3,0,3,0,...)         (-0.75,0,-0.75,0,...)
Dim       NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG
n = 200   41/64/9.737674e-07    36/61/6.671246e-07    63/94/9.977774e-07    39/64/8.153527e-07
n = 500   42/67/6.328648e-07    37/58/9.154342e-07    49/74/8.340650e-07    37/60/8.277585e-07
n = 1000  43/62/8.430165e-07    40/61/7.874632e-07    55/76/8.997602e-07    41/68/8.830280e-07

(c) Small-scale test results for Algorithm 2.2.

x0        (0.5,...,0.5)         (1,...,1)             (3,...,3)             (-0.75,...,-0.75)     (-2,...,-2)           (-3,...,-3)
Dim       NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG
n = 10    NI > 1000             NI > 1000             NI > 1000             NI > 1000             NI > 1000             NI > 1000
n = 50    NI > 1000             NI > 1000             NI > 1000             NI > 1000             NI > 1000             NI > 1000
n = 99    NI > 1000             NI > 1000             NI > 1000             NI > 1000             NI > 1000             NI > 1000

x0        (0.5,0,0.5,0,...)     (1,0,1,0,...)         (3,0,3,0,...)         (-0.75,0,-0.75,0,...) (-2,0,-2,0,...)       (-3,0,-3,0,...)
Dim       NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG
n = 10    196/391/9.528159e-07  204/407/9.939962e-07  213/425/9.975163e-07  NI > 1000             NI > 1000             NI > 1000
n = 50    199/397/9.791882e-07  208/415/9.592104e-07  217/433/9.634499e-07  NI > 1000             NI > 1000             NI > 1000
n = 99    NI > 1000             NI > 1000             NI > 1000             NI > 1000             NI > 1000             NI > 1000
(d) Large-scale test results for Algorithm 2.2.

x0        (0.5,...,0.5)         (1,...,1)             (3,...,3)             (-0.75,...,-0.75)
Dim       NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG
n = 200   NI > 1000             NI > 1000             NI > 1000             NI > 1000
n = 500   NI > 1000             NI > 1000             NI > 1000             NI > 1000
n = 1000  NI > 1000             NI > 1000             NI > 1000             NI > 1000

x0        (0.5,0,0.5,0,...)     (1,0,1,0,...)         (3,0,3,0,...)         (-0.75,0,-0.75,0,...)
Dim       NI/NG/EG              NI/NG/EG              NI/NG/EG              NI/NG/EG
n = 200   200/399/9.537272e-07  208/415/9.804205e-07  217/433/9.775482e-07  NI > 1000
n = 500   201/401/9.425775e-07  209/417/9.430954e-07  217/433/9.908579e-07  NI > 1000
n = 1000  202/403/9.503816e-07  209/417/9.824140e-07  218/435/9.470469e-07  NI > 1000

Acknowledgments

The authors are very grateful to the anonymous referees and the editors for their valuable suggestions and comments, which improved this paper greatly. This work is supported by China NSF Grant 10761001 and the Scientific Research Foundation of Guangxi University (Grant no. X081082).

References

[1] P. N. Brown and Y. Saad, “Convergence theory of nonlinear Newton-Krylov algorithms,” SIAM Journal on Optimization, vol. 4, no. 2, pp. 297–330, 1994.
[2] D. Zhu, “Nonmonotone backtracking inexact quasi-Newton algorithms for solving smooth nonlinear equations,” Applied Mathematics and Computation, vol. 161, no. 3, pp. 875–895, 2005.
[3] G. Yuan and X. Lu, “A new backtracking inexact BFGS method for symmetric nonlinear equations,” Computers & Mathematics with Applications, vol. 55, no. 1, pp. 116–129, 2008.
[4] D. Li and M. Fukushima, “A globally and superlinearly convergent Gauss-Newton-based BFGS method for symmetric nonlinear equations,” SIAM Journal on Numerical Analysis, vol. 37, no. 1, pp. 152–172, 1999.
[5] Z. Wei, G. Yuan, and Z. Lian, “An approximate Gauss-Newton-based BFGS method for solving symmetric nonlinear equations,” Guangxi Sciences, vol. 11, no. 2, pp. 91–99, 2004.
[6] G. Yuan and X. Li, “An approximate Gauss-Newton-based BFGS method with descent directions for solving symmetric nonlinear equations,” OR Transactions, vol. 8, no. 4, pp. 10–26, 2004.
[7] G. Yuan and X. Li, “A new nonmonotone conjugate gradient method for symmetric nonlinear equations,” Guangxi Sciences, vol. 16, no. 2, pp. 109–112, 2009 (Chinese).
[8] G. Yuan, Z. Wei, and X. Lu, “A modified Gauss-Newton-based BFGS method for symmetric nonlinear equations,” Guangxi Sciences, vol. 13, no. 4, pp. 288–292, 2006.
[9] D. Li, L. Qi, and S. Zhou, “Descent directions of quasi-Newton methods for symmetric nonlinear equations,” SIAM Journal on Numerical Analysis, vol. 40, no. 5, pp. 1763–1774, 2002.
[10] S. G. Nash, “A survey of truncated-Newton methods,” Journal of Computational and Applied Mathematics, vol. 124, no. 1-2, pp. 45–59, 2000.
[11] A. Griewank, “The ‘global’ convergence of Broyden-like methods with a suitable line search,” Journal of the Australian Mathematical Society, Series A, vol. 28, no. 1, pp. 75–92, 1986.
[12] D. P. Bertsekas, Nonlinear Programming, Athena Scientific, Belmont, Mass, USA, 1995.
[13] W. Cheng, Y. Xiao, and Q.-J. Hu, “A family of derivative-free conjugate gradient methods for large-scale nonlinear systems of equations,” Journal of Computational and Applied Mathematics, vol. 224, no. 1, pp. 11–19, 2009.
[14] A. R. Conn, N. I. M. Gould, and P. L. Toint, Trust-Region Methods, MPS/SIAM Series on Optimization, SIAM, Philadelphia, Pa, USA, 2000.
[15] J. E. Dennis Jr. and R. B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice Hall Series in Computational Mathematics, Prentice Hall, Englewood Cliffs, NJ, USA, 1983.
[16] R. Fletcher, Practical Methods of Optimization, A Wiley-Interscience Publication, John Wiley & Sons, New York, NY, USA, 2nd edition, 1987.
[17] K. Levenberg, “A method for the solution of certain non-linear problems in least squares,” Quarterly of Applied Mathematics, vol. 2, pp. 164–166, 1944.
[18] D. W. Marquardt, “An algorithm for least-squares estimation of nonlinear parameters,” SIAM Journal on Applied Mathematics, vol. 11, pp. 431–441, 1963.
[19] J. Nocedal and S. J. Wright, Numerical Optimization, Springer Series in Operations Research, Springer, New York, NY, USA, 1999.
[20] N. Yamashita and M. Fukushima, “On the rate of convergence of the Levenberg-Marquardt method,” Computing, vol. 15, pp. 239–249, 2001.
[21] G. Yuan, Z. Wang, and Z. Wei, “A rank-one fitting method with descent direction for solving symmetric nonlinear equations,” International Journal of Communications, Network and System Sciences, no. 6, pp. 555–561, 2009.
[22] Y. Yuan and W. Sun, Optimization Theory and Algorithm, Scientific Publisher House, Beijing, China, 1997.
[23] S. M. Goldfeld, R. E. Quandt, and H. F. Trotter, “Maximization by quadratic hill-climbing,” Econometrica, vol. 34, pp. 541–551, 1966.
[24] M. J. D. Powell, “Convergence properties of a class of minimization algorithms,” in Nonlinear Programming 2, O. L. Mangasarian, R. R. Meyer, and S. M. Robinson, Eds., pp. 1–27, Academic Press, New York, NY, USA, 1974.
[25] R. Fletcher, “An algorithm for solving linearly constrained optimization problems,” Mathematical Programming, vol. 2, pp. 133–165, 1972.
[26] R. Fletcher, “A model algorithm for composite nondifferentiable optimization problems,” Mathematical Programming Study, no. 17, pp. 67–76, 1982.
[27] P. T. Boggs, A. J. Kearsley, and J. W. Tolle, “A practical algorithm for general large scale nonlinear optimization problems,” SIAM Journal on Optimization, vol. 9, no. 3, pp. 755–778, 1999.
[28] J. Nocedal and Y. Yuan, “Combining trust region and line search techniques,” in Advances in Nonlinear Programming, vol. 14, pp. 153–175, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1998.
[29] T. Steihaug, “The conjugate gradient method and trust regions in large scale optimization,” SIAM Journal on Numerical Analysis, vol. 20, no. 3, pp. 626–637, 1983.
[30] Y. Yuan, “A review of trust region algorithms for optimization,” in Proceedings of the 4th International Congress on Industrial & Applied Mathematics (ICIAM ’99), pp. 271–282, Oxford University Press, Oxford, UK, 2000.
[31] Y. Yuan, “On the truncated conjugate gradient method,” Mathematical Programming, vol. 87, no. 3, pp. 561–573, 2000.
[32] M. R. Celis, J. E. Dennis, and R. A. Tapia, “A trust region strategy for nonlinear equality constrained optimization,” in Numerical Optimization 1984, P. R. Boggs, R. H. Byrd, and R. B. Schnabel, Eds., pp. 71–82, SIAM, Philadelphia, Pa, USA, 1985.
[33] X. Liu and Y. Yuan, “A globally convergent, locally superlinearly convergent algorithm for equality constrained optimization,” Research Report ICM-97-84, Institute of Computational Mathematics and Scientific/Engineering Computing, Chinese Academy of Sciences, Beijing, China.
[34] A. Vardi, “A trust region algorithm for equality constrained minimization: convergence properties and implementation,” SIAM Journal on Numerical Analysis, vol. 22, no. 3, pp. 575–579, 1985.
[35] R. H. Byrd, R. B. Schnabel, and G. A. Shultz, “A trust region algorithm for nonlinearly constrained optimization,” SIAM Journal on Numerical Analysis, vol. 24, no. 5, pp. 1152–1170, 1987.
[36] J.-Y. Fan, “A modified Levenberg-Marquardt algorithm for singular system of nonlinear equations,” Journal of Computational Mathematics, vol. 21, no. 5, pp. 625–636, 2003.
[37] M. J. D. Powell and Y. Yuan, “A trust region algorithm for equality constrained optimization,” Mathematical Programming, vol. 49, no. 2, pp. 189–211, 1990.
[38] A. Vardi, “A trust region algorithm for equality constrained minimization: convergence properties and implementation,” SIAM Journal on Numerical Analysis, vol. 22, no. 3, pp. 575–591, 1985.
[39] Y. Yuan, “Trust region algorithm for nonlinear equations,” Information, vol. 1, pp. 7–21, 1998.
[40] Y. Yuan, “On a subproblem of trust region algorithms for constrained optimization,” Mathematical Programming, vol. 47, no. 1, pp. 53–63, 1990.
[41] G. Yuan, X. Lu, and Z. Wei, “BFGS trust-region method for symmetric nonlinear equations,” Journal of Computational and Applied Mathematics, vol. 230, no. 1, pp. 44–58, 2009.
[42] J. Z. Zhang and D. T. Zhu, “Projected quasi-Newton algorithm with trust region for constrained optimization,” Journal of Optimization Theory and Applications, vol. 67, no. 2, pp. 369–393, 1990.
[43] J. Zhang and Y. Wang, “A new trust region method for nonlinear equations,” Mathematical Methods of Operations Research, vol. 58, no. 2, pp. 283–298, 2003.
[44] Y. J. Wang and N. H. Xiu, Theory and Algorithms for Nonlinear Programming, Shanxi Science and Technology Press, Xi’an, China, 2004.
[45] R. H. Byrd and J. Nocedal, “A tool for the analysis of quasi-Newton methods with application to unconstrained minimization,” SIAM Journal on Numerical Analysis, vol. 26, no. 3, pp. 727–739, 1989.
[46] J. E. Dennis Jr. and J. J. Moré, “A characterization of superlinear convergence and its application to quasi-Newton methods,” Mathematics of Computation, vol. 28, pp. 549–560, 1974.
[47] A. Griewank and Ph. L. Toint, “Local convergence analysis for partitioned quasi-Newton updates,” Numerische Mathematik, vol. 39, no. 3, pp. 429–448, 1982.
[48] J. J. Moré, B. S. Garbow, and K. E. Hillstrom, “Testing unconstrained optimization software,” ACM Transactions on Mathematical Software, vol. 7, no. 1, pp. 17–41, 1981.
[49] E. Yamakawa and M. Fukushima, “Testing parallel variable transformation,” Computational Optimization and Applications, vol. 13, no. 1–3, pp. 253–274, 1999.
[50] J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, NY, USA, 1970.
