A New Type Method for the Structured Variational Inequalities Problem

In this paper, we present an algorithm for solving the structured variational inequality problem and prove the global convergence of the new method without carrying out any line search technique; the global R-linear convergence rate is also established under suitable conditions.

Keywords—structured variational inequality problem; algorithm; globally convergent; R-linearly convergent


I. INTRODUCTION
Let mappings f : R^n → R^n and g : R^m → R^m be given, and let X ⊆ R^n and Y ⊆ R^m be nonempty closed convex sets. The structured variational inequality problem with linear constraints is to find a vector u* = (x*, y*) such that

(x − x*)^T f(x*) ≥ 0,  (y − y*)^T g(y*) ≥ 0,  (1.1)

for all (x, y) ∈ X × Y satisfying Ax + By = b. This problem has important applications in many fields, such as network economics, traffic assignment, and game-theoretic problems. For example, Nagurney et al. [1] developed a variational-inequality-based supply chain network equilibrium model consisting of three tiers of decision-makers in the network. They established governing equilibrium conditions based on the optimality conditions of the decision-makers together with the market equilibrium conditions. In recent years, many methods have been proposed to solve the VI [2]-[8]. The alternating direction method (ADM), originally proposed by Gabay and Mercier [5] and Gabay [6], is a powerful method for solving the structured problem (1.2), since it decomposes the original problem into a series of lower-dimensional subproblems. Ye and Yuan [7] proposed a new descent method for VI by adding an additional projection step to the above ADM. Han [8] proposed a modified alternating direction method for variational inequalities with linear constraints; at each iteration, the method only requires an orthogonal projection onto a simple set and some function evaluations. Motivated by [7,8], we present a new algorithm for the structured variational inequality problem and prove the global convergence of the new method without carrying out any line search technique. Furthermore, we show that this method is globally R-linearly convergent under suitable conditions. Some notation used in this paper is as follows. All vectors considered in this paper are taken in the Euclidean space R^n equipped with the standard inner product.

II. PRELIMINARIES
In this section, we first give the following definition of the projection operator and some related properties ([9]). For a nonempty closed convex set Ω ⊆ R^n and any vector x ∈ R^n, the orthogonal projection of x onto Ω is denoted by P_Ω(x), i.e., P_Ω(x) = argmin{‖y − x‖ : y ∈ Ω}. A basic property of the projection operator is that it is nonexpansive: ‖P_Ω(x) − P_Ω(y)‖ ≤ ‖x − y‖ for all x, y ∈ R^n.
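A minimal numerical illustration of the projection operator and its nonexpansiveness (using an assumed box set Ω = [0, 1]^3 for concreteness, not the paper's feasible set):

```python
import numpy as np

def proj_box(x, lo, hi):
    # Orthogonal projection P_Omega(x) onto the box Omega = [lo, hi]^n,
    # i.e. the unique minimizer of ||y - x|| over y in Omega.
    return np.clip(x, lo, hi)

x = np.array([2.5, -1.0, 0.3])
y = np.array([0.7, 4.0, -2.0])
px, py = proj_box(x, 0.0, 1.0), proj_box(y, 0.0, 1.0)

# Nonexpansiveness: ||P(x) - P(y)|| <= ||x - y||.
print(np.linalg.norm(px - py) <= np.linalg.norm(x - y))  # True
```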
The following conclusion provides the relationship between the solution set of (1.2) and the projection-type residual function r(u) := u − P_W[u − F(u)] ([10]).
Lemma 2.2. u* is a solution of (1.2) if and only if r(u*) = 0.
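As a concrete illustration of Lemma 2.2 (with an assumed one-dimensional example, not the paper's problem data): take W = [0, ∞) and F(u) = u − 2, whose unique solution is u* = 2.

```python
import numpy as np

def proj(u):
    # Projection onto W = [0, inf).
    return np.maximum(u, 0.0)

def residual(u, F):
    # Projection-type residual r(u) = u - P_W(u - F(u)).
    return u - proj(u - F(u))

F = lambda u: u - 2.0          # illustrative mapping; solution u* = 2
print(residual(2.0, F))        # 0.0  (r vanishes exactly at the solution)
print(residual(5.0, F))        # 3.0  (nonzero away from the solution)
```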
To establish the theoretical analysis of the following algorithm, we also need the following definition.

Definition 2.1. The mapping f : R^n → R^n is said to be co-coercive with modulus μ > 0 if (f(x) − f(y))^T (x − y) ≥ μ‖f(x) − f(y)‖^2 for all x, y ∈ R^n; f is said to be strongly monotone if there is a constant σ > 0 such that (f(x) − f(y))^T (x − y) ≥ σ‖x − y‖^2 for all x, y ∈ R^n.

Obviously, if f is strongly monotone with positive constant σ and is Lipschitz continuous with positive constant L > 0, then f is co-coercive with modulus σ/L^2.
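The implication just stated follows from a standard one-line chain of inequalities (with σ the strong-monotonicity constant and L the Lipschitz constant):

```latex
(f(x)-f(y))^{T}(x-y)\;\ge\;\sigma\,\|x-y\|^{2}
\;\ge\;\sigma\cdot\frac{\|f(x)-f(y)\|^{2}}{L^{2}},
```

where the last step uses ‖f(x) − f(y)‖ ≤ L‖x − y‖; hence f is co-coercive with modulus σ/L^2.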

III. ALGORITHM AND CONVERGENCE
In the following, we formally state our algorithm.

Algorithm 3.1
Step 1. Take parameters β, γ > 0 as defined in the following Theorem 3.1, and take an initial point u^0. If r(u^k) = 0, then u^k is a solution of (1.2), and so also of (1.1). In the following theoretical analysis, we assume that Algorithm 3.1 generates an infinite sequence.
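Since the precise update of Algorithm 3.1 is specified via Theorem 3.1, the following is only a minimal sketch of the generic projection-type iteration on which such methods are built, u^{k+1} = P_W(u^k − βF(u^k)); the mapping F, the set W = [0, ∞)^n, and the step size β below are illustrative assumptions, not the paper's choices:

```python
import numpy as np

def project(u):
    # Orthogonal projection onto W = [0, inf)^n (an illustrative simple set).
    return np.maximum(u, 0.0)

def residual(u, F):
    # Projection-type residual r(u) = u - P_W(u - F(u)); zero iff u solves the VI.
    return u - project(u - F(u))

def projection_method(F, u0, beta=0.1, tol=1e-10, max_iter=10_000):
    # Classical projection iteration u^{k+1} = P_W(u^k - beta * F(u^k)),
    # stopped when the residual norm falls below tol.
    u = u0.copy()
    for _ in range(max_iter):
        if np.linalg.norm(residual(u, F)) <= tol:
            break
        u = project(u - beta * F(u))
    return u

# Illustrative affine VI: F(u) = M u + q with M positive definite,
# so F is strongly monotone and Lipschitz (hence co-coercive).
M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, 2.0])
F = lambda u: M @ u + q
u_star = projection_method(F, np.zeros(2))  # converges to (0.25, 0)
```

For this data the unique solution is u* = (0.25, 0): there F(u*) = (0, 2.25) with u*_1 > 0 paired with a zero component of F and u*_2 = 0 paired with a positive one, so the complementarity conditions over the nonnegative orthant hold.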
Lemma 3.1. Suppose that the matrix M is positive semidefinite and that f and g are co-coercive with positive constants μ_1 and μ_2, where the second inequality follows from the fact that f and g are co-coercive with positive constants μ_1 and μ_2 and from Lemma 3.1; the fourth inequality is obtained analogously; and the ninth inequality follows from the Cauchy-Schwarz inequality. Thus, we have the desired conclusion.
By (3.4), we conclude that the nonnegative sequence {‖u^k − u*‖} is strictly decreasing, and hence convergent. Thus, r(u^k) converges to 0 by (3.5). Let {u^{k_j}} be a subsequence of {u^k} converging to ū; since r(·) is continuous, we have r(ū) = 0, i.e., ū is a solution of (1.1).
On the other hand, suppose that û is also an accumulation point of {u^k}, and let {u^{k_j}} be a subsequence of {u^k} converging to û. For any k_j there exists i such that k_i ≤ k_j; applying (3.4) again, we obtain û = ū. Thus, the sequence {u^k} converges globally to a solution of (1.1).
To establish the R-linear convergence rate of Algorithm 3.1, we also need the following conclusion, which is crucial to the convergence-rate analysis.
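For reference, we recall the standard notion used here: a sequence {u^k} converges R-linearly to u* if its errors are dominated by a geometrically decreasing sequence,

```latex
\|u^{k}-u^{*}\|\;\le\;C\,q^{k}\quad\text{for some }C>0,\;q\in(0,1),
\qquad\text{equivalently}\qquad
\limsup_{k\to\infty}\|u^{k}-u^{*}\|^{1/k}<1.
```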
Thus, we can conclude that (3.6) holds. Theorem 3.2. Suppose that the hypotheses of Lemma 3.2 hold, and that the parameter satisfies the condition

IV. CONCLUSIONS
In this paper, we proposed a new iterative method for solving the structured variational inequality problem (VI) and proved its global convergence without carrying out any line search technique. Furthermore, an error bound estimate for VI is established under suitable conditions; based on this, we proved that the method has a global R-linear convergence rate. Under milder conditions, one may establish global error bounds for VI and use such error bound estimates to derive faster convergence rates for methods solving the VI. This is a topic for future research.

Let W denote the feasible set, and assume that it is nonempty throughout this paper. By attaching a Lagrange multiplier vector λ ∈ R^r to the linear constraint Ax + By = b, (1.1) can be equivalently transformed into the following compact form, denoted by VI, whose solution set is always assumed to be nonempty.
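One standard way to write this compact form (following the usual convention in the alternating direction literature, with u collecting the primal variables and the multiplier; the notation here is ours) is: find u* ∈ W such that (u − u*)^T F(u*) ≥ 0 for all u ∈ W, where

```latex
u=\begin{pmatrix}x\\ y\\ \lambda\end{pmatrix},\qquad
F(u)=\begin{pmatrix}f(x)-A^{T}\lambda\\ g(y)-B^{T}\lambda\\ Ax+By-b\end{pmatrix},\qquad
W=X\times Y\times \mathbb{R}^{r}.
```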

We let ‖·‖ and ‖·‖_1 respectively denote the usual Euclidean 2-norm and 1-norm of vectors in R^n. The transpose of a matrix M (vector x) is denoted by M^T (x^T).

Lemma 3.2
Suppose that f and g are strongly monotone with positive constants. Then {u^k} converges to a solution of (1.1) R-linearly. Proof: Combining (3.5) with (3.6), one has the desired estimate. Since the matrix M is positive semidefinite, there exists an orthogonal matrix P such that