An Algorithm for a Supply Chain Management Optimization Model

Abstract—In this paper, we consider the extended linear complementarity problem arising in a supply chain management optimization model. We first give a global error bound for the extended linear complementarity problem, and then propose a new algorithm based on the error bound estimation. Both global convergence and a quadratic rate of convergence are established. These conclusions can be viewed as extensions of previously known results.


I. INTRODUCTION
We consider a solution method for the extended linear complementarity problem arising in a supply chain management optimization model. Letting F(x) = Mx + p and G(x) = Nx + q, the extended linear complementarity problem, abbreviated as ELCP, is to find a vector x* in R^n such that

F(x*) >= 0, G(x*) >= 0, F(x*)^T G(x*) = 0, (1)

where M, N in R^{m x n} and p, q in R^m. The solution set of the ELCP is denoted by X*, which is assumed to be nonempty throughout this paper.
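To make the definition concrete, the following sketch measures how far a point is from satisfying (1); the instance (M, N, p, q) is a made-up 2x2 example of ours, since no problem data are specified in the paper.

```python
import numpy as np

# Hypothetical small instance; M and N need not be square in general.
M = np.array([[2.0, 0.0], [0.0, 1.0]]); p = np.array([0.0, 0.0])
N = np.array([[1.0, 0.0], [0.0, 3.0]]); q = np.array([0.0, 0.0])

def elcp_residual(x):
    """Total violation of F(x) >= 0, G(x) >= 0 and F(x)^T G(x) = 0."""
    F, G = M @ x + p, N @ x + q
    neg_F = np.linalg.norm(np.minimum(F, 0.0))   # how far F(x) is from >= 0
    neg_G = np.linalg.norm(np.minimum(G, 0.0))   # how far G(x) is from >= 0
    comp = abs(F @ G)                            # complementarity violation
    return neg_F + neg_G + comp

x_star = np.zeros(2)            # x* = 0 solves this instance since p = q = 0
print(elcp_residual(x_star))    # 0.0
```

A point that is feasible but not complementary, such as x = (1, 1) here, gets a strictly positive residual.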
As is well known, the extended linear complementarity problem (ELCP) is a special case of the extended nonlinear complementarity problem (ENCP), which plays a significant role in supply chain management. The topics of supply chain modeling, analysis, computation, and management are of great interest, both from practical and research perspectives. Research in this area is interdisciplinary by nature, since it involves manufacturing, transportation, logistics, and retailing/marketing.
A substantial literature has been devoted to this area; see [1,2,3] for recent surveys. Nagurney et al. ([4]) developed a variational inequality based supply chain network equilibrium model consisting of three tiers of decision-makers in the network. They established governing equilibrium conditions based on the optimality conditions of the decision-makers along with the market equilibrium conditions. Dong et al. ([5]) established a finite-dimensional variational inequality formulation for a supply chain network model consisting of manufacturers and retailers in which the demands associated with the retail outlets are random. Nagurney et al. ([6]) established a finite-dimensional variational inequality formulation for a supply chain network model in which both physical and electronic transactions are allowed, and in which supply-side risk as well as demand-side risk are included in the formulation. The model consists of three tiers of decision-makers: the manufacturers, the distributors, and the retailers, with the demands associated with the retail outlets being random.
In recent years, many efficient solution methods have been proposed for solving the ELCP ([7,8]). The basic idea of these methods is to reformulate the problem as an unconstrained or simply constrained optimization problem ([7,8]).
It is well known that nonsingularity of the Jacobian at a solution guarantees that the famous Levenberg-Marquardt (L-M) method for the ELCP has a quadratic rate of convergence ([8]). Recently, Yamashita and Fukushima showed that the L-M method has a quadratic rate of convergence under a local error bound assumption, which is much weaker than nonsingularity of the Jacobian ([9]). This motivates us to consider the error bound estimation for the ELCP.
The paper is organized as follows. In Section 2, we recall the error bound for the ELCP. In Section 3, using the obtained error bound, the well-known L-M algorithm is employed to obtain a solution of the ELCP, and we establish its global and quadratic convergence based on the established error bound. Section 4 concludes the paper. Moreover, we do not require M and N to be square, and compared with the convergence results in [8], our conditions are weaker; these conclusions can be viewed as extensions of the results in [8]. Some notation used in this paper is in order. We use R^n_+ to denote the nonnegative orthant in R^n; x_+ and x_- denote the orthogonal projections of a vector x onto the nonnegative and nonpositive orthants, respectively; ||.|| denotes the Euclidean 2-norm, and the transpose of a matrix M is denoted by M^T. When no confusion arises, we write a nonnegative vector x in R^n_+ as x >= 0.

II. PRELIMINARY
In this section, we quote some known error bound results for the ELCP from [10]. First, we give the needed assumptions.

Assumption 1. Let A, M and N be the matrices given in (1).
(A1) The matrix M^T N is semi-definite (not necessarily symmetric);
(A2) The matrix A^T has full column rank.
The following result from [10] gives the error bound for the ELCP that will be applied in the convergence analysis of the algorithm in the next section.

Assumption 2. For system (2), there exists a point x̂ such that F(x̂) > 0 and G(x̂) > 0.

Theorem 1. Suppose that Assumption 1 (A1) and (A2) and Assumption 2 hold. Then there exists a constant c1 > 0 such that

dist(x, X*) <= c1 r(x) for all x in R^n,

where r(x) denotes the natural residual of (1), and dist(x, X*) is the Euclidean distance from x to the solution set X*.

III. ALGORITHM AND CONVERGENCE
In this section, we propose a new solution method for the ELCP based on the error bound results in Theorem 1, and we establish its global and quadratic rate of convergence. The method was first introduced by Wang ([8]) for the ENCP, but its convergence result was not given there.
We now formulate the ELCP as a system of equations via the Fischer function ([11])

phi(a, b) = sqrt(a^2 + b^2) - a - b,

which vanishes if and only if a >= 0, b >= 0 and ab = 0. Define the vector-valued residual

Phi(x) = (phi(F_1(x), G_1(x)), ..., phi(F_m(x), G_m(x)))^T (5)

and the merit function

f(x) = (1/2) ||Phi(x)||^2. (6)

The following result is then straightforward.

Theorem 2. x* is a solution of the ELCP if and only if Phi(x*) = 0.

The following result extends the error bound of Theorem 1 to the residual function Phi(x); it was shown by Tseng ([12]).

Theorem 3. Suppose that the conditions of Theorem 1 hold. Then there exists a constant c2 > 0 such that

dist(x, X*) <= c2 ||Phi(x)|| for all x in R^n,

where the second inequality in the proof follows from Lemma 1 with constant c1 > 0, and the third inequality follows from the equivalence between the Fischer residual and the natural residual. Clearly, this bound is an extension of Theorem 2.1 in Mangasarian and Ren ([13]), Lemma 1 in Pang ([14]), and Corollary 3.2 in Xiu and Zhang ([15]).
Next, we review some definitions and basic results which will be used in the sequel.

The function Phi(x) is not differentiable everywhere on R^n. However, it is locally Lipschitzian, and therefore has a nonempty generalized Jacobian ∂Phi(x) in the sense of Clarke ([16]). In the following, we recall some basic definitions about semi-smoothness and strong semi-smoothness for a locally Lipschitzian function.
A locally Lipschitz continuous vector-valued function Psi: R^n -> R^m is said to be semi-smooth at x if the limit

lim {V in ∂Psi(x + t h'), h' -> h, t -> 0+} V h'

exists for any h in R^n. It is well known that if Psi is semi-smooth at x, then the directional derivative Psi'(x; h) of Psi at x in the direction h exists for any h in R^n. The following properties of semi-smooth functions are due to Qi and Sun in [18].

Lemma 2. Suppose that Psi: R^n -> R^m is a locally Lipschitz function and semi-smooth at x. Then
(a) for any V in ∂Psi(x + h), h -> 0, it holds that V h - Psi'(x; h) = o(||h||).

Semi-smooth functions lie between Lipschitz functions and continuously differentiable functions, and both continuously differentiable functions and convex functions are semi-smooth. A stronger notion than semi-smoothness is strong semi-smoothness.
The function Psi: R^n -> R^m is said to be strongly semi-smooth at x if Psi is semi-smooth at x and for any V in ∂Psi(x + h), h -> 0, it holds that

V h - Psi'(x; h) = O(||h||^2).
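As a concrete one-dimensional illustration of this definition (a standard example, not taken from the paper), consider the absolute-value function:

```latex
\[
\psi(x) = |x|, \qquad \partial\psi(0) = [-1, 1].
\]
For $h \neq 0$, every $V \in \partial\psi(h)$ equals $\operatorname{sign}(h)$, so
\[
Vh = \operatorname{sign}(h)\,h = |h|,
\qquad
\psi'(0; h) = \lim_{t \downarrow 0} \frac{|th| - |0|}{t} = |h|,
\]
and therefore $Vh - \psi'(0; h) = 0 = O(\|h\|^2)$; that is, $\psi$ is strongly
semi-smooth at $0$.
```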
A favorable property of the function f(x) is that it is continuously differentiable on the whole space R^n, although Phi(x) is not in general. We summarize the differential properties of Phi and f defined by (5) and (6) in the following lemma ([19,20]).

Lemma 3. For the vector-valued function Phi and the real-valued function f defined by (5) and (6), the following statements hold.
(b) f is continuously differentiable, and its gradient at a point n xR  is given by ( ) ( ) , where V is an arbitrary element belonging to ( ).Vx   From Lemma 3 and discussion above, we can obtain the following result.
In the following, a method for solving the ELCP is outlined. It is similar to those in [8,9], but we consider the method for the ELCP with the Armijo step-size rule and discuss its global convergence.

Algorithm 1
Step 1: Choose a starting point x^0 in R^n, parameters rho, sigma, gamma in (0, 1), mu > 0 and a tolerance epsilon >= 0; set k := 0.
Step 2: If ||Phi(x^k)|| <= epsilon, stop.
Step 3: Choose an element V_k in ∂Phi(x^k). Let d^k in R^n be the solution of the linear system

(V_k^T V_k + mu_k I) d = -V_k^T Phi(x^k), mu_k = mu ||Phi(x^k)||^2.

If ||Phi(x^k + d^k)|| <= gamma ||Phi(x^k)||, set t_k := 1 and go to Step 5. Otherwise, go to Step 4.
Step 4: Let m_k be the smallest non-negative integer m such that

f(x^k + rho^m d^k) <= f(x^k) + sigma rho^m grad f(x^k)^T d^k,

and set t_k := rho^{m_k}.
Step 5: Set x^{k+1} := x^k + t_k d^k, k := k + 1, and go to Step 2.
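The steps above can be sketched as follows. This is a minimal implementation under our own reconstruction of the unspecified details (the L-M parameter mu ||Phi(x^k)||^2 follows [9]; the acceptance factor gamma, the tolerances, and the test instance are illustrative):

```python
import numpy as np

def phi_res(x, M, p, N, q):
    """Fischer residual Phi(x) of the ELCP."""
    F, G = M @ x + p, N @ x + q
    return np.sqrt(F**2 + G**2) - F - G

def solve_elcp_lm(M, p, N, q, x0, mu=1e-4, rho=0.5, sigma=1e-4,
                  gamma=0.9, tol=1e-10, max_iter=200):
    """Levenberg-Marquardt method with Armijo line search for the ELCP."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(max_iter):
        Phi = phi_res(x, M, p, N, q)
        if np.linalg.norm(Phi) <= tol:                 # Step 2: stop
            break
        F, G = M @ x + p, N @ x + q
        r = np.maximum(np.sqrt(F**2 + G**2), 1e-16)    # guard nonsmooth points
        V = (F / r - 1.0)[:, None] * M + (G / r - 1.0)[:, None] * N
        g = V.T @ Phi                                  # grad f(x) = V^T Phi(x)
        mu_k = mu * (Phi @ Phi)                        # L-M parameter
        d = np.linalg.solve(V.T @ V + mu_k * np.eye(n), -g)   # Step 3
        if np.linalg.norm(phi_res(x + d, M, p, N, q)) <= gamma * np.linalg.norm(Phi):
            t = 1.0                                    # accept the full step
        else:                                          # Step 4: Armijo search
            f0, t = 0.5 * (Phi @ Phi), 1.0
            while (0.5 * np.sum(phi_res(x + t * d, M, p, N, q)**2)
                   > f0 + sigma * t * (g @ d)) and t > 1e-12:
                t *= rho
        x = x + t * d                                  # Step 5: update
    return x

# A 2x2 LCP instance (M = I, N = 2I, q = -2e): the solution is x* = (1, 1).
x = solve_elcp_lm(np.eye(2), np.zeros(2), 2 * np.eye(2),
                  np.array([-2.0, -2.0]), np.array([0.5, 0.5]))
print(np.round(x, 6))
```

On this strictly monotone instance the iterates enter the region where the full L-M step is accepted, and the residual then shrinks rapidly, consistent with the quadratic rate discussed below.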
For the above Algorithm 1, we assume that it generates an infinite sequence {x^k}. By Theorem 3 and Theorem 4, combined with the proof of Theorem 3.1 in [9], we can obtain the following global convergence theorem.

Theorem 5. Let {x^k} be generated by Algorithm 1 for the ELCP with line search. Then any accumulation point of the sequence {x^k} is a stationary point of f. Moreover, if an accumulation point x* of the sequence {x^k} is a solution of Phi(x) = 0 in (5), then dist(x^k, X*) converges to 0 quadratically.
In Theorem 5, we showed that Algorithm 1 has a quadratic rate of convergence under the local error bound, which is much weaker than nonsingularity of the Jacobian. This extends the convergence conclusion of the algorithm in [8], and is a new result for the ELCP.

IV. CONCLUSION
In this paper, we considered an algorithm for the extended linear complementarity problem arising in a supply chain management optimization model. To this end, we first gave a global error bound for the ELCP, and then used the error bound estimation to establish the global and quadratic convergence of an algorithm for solving the ELCP.
Under milder conditions, one may establish a global error bound for the ELCP with a nonmonotone mapping, and may use the error bound estimation to establish a fast convergence rate for Newton-type methods for solving the ELCP in place of the nonsingularity assumption, just as was done for nonlinear equations in [9]. This is a topic for future research.