A Self-adaptive Algorithm for Solving Basis Pursuit Denoising Problem

In this paper, we further consider a method for solving the basis pursuit denoising problem (BPDP), which has received considerable attention in signal processing and statistical inference. To this end, a new self-adaptive algorithm is proposed, and its global convergence is established. Furthermore, we show that the method has a sublinear convergence rate of O(1/k). Finally, the effectiveness of the method is demonstrated via some numerical examples.

Keywords—Basis pursuit denoising problem; algorithm; global convergence; sublinear convergence rate; sparse signal recovery


I. INTRODUCTION
The basis pursuit denoising problem (BPDP) is an important problem encountered in the fields of signal processing and statistical inference. It seeks a sparse signal x ∈ R^n from the linear system z = Ax, and can be mathematically stated as

min_{x ∈ R^n} F(x) := f(x) + ρϕ(x),    (1)

where f(x) := (1/2)‖Ax − z‖_2^2, ϕ(x) = ‖x‖_1, A ∈ R^{m×n} (m ≪ n), ρ > 0 is a parameter, and the ℓ1-norm and ℓ2-norm of the vector x are defined by ‖x‖_1 = Σ_{i=1}^n |x_i| and ‖x‖_2 = (Σ_{i=1}^n x_i^2)^{1/2}, respectively. In addition, we denote the solution set of problem (1) by Ω*, and assume Ω* ≠ ∅.
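As a concrete illustration, the objective F in (1) can be evaluated as follows. This is a minimal sketch: the matrix A, vector z, test point x, and parameter ρ below are hypothetical and serve only to show the two terms of (1).

```python
import numpy as np

def bpdn_objective(A, z, x, rho):
    """Evaluate F(x) = (1/2)*||A x - z||_2^2 + rho * ||x||_1."""
    residual = A @ x - z
    return 0.5 * residual @ residual + rho * np.abs(x).sum()

# Tiny hypothetical instance with m < n, as in BPDP.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])
z = np.array([2.0, -1.0])
x = np.array([2.0, -1.0, 0.0])   # here A @ x = z, so only the l1 term remains
print(bpdn_objective(A, z, x, rho=0.5))  # 0.5 * (|2| + |-1| + |0|) = 1.5
```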
Clearly, (1) is an unconstrained convex optimization problem, and standard algorithms such as Newton-type methods or conjugate gradient methods can be applied to solve it. However, these methods are not suitable for large-scale instances of BPDP, and they may even fail as n increases. In recent years, many algorithms for solving BPDP have been developed. He and Cai et al. [1] introduced a splitting method (MPRSM) for solving the Dantzig selector problem, of which BPDP is a special case. Based on this theory, Sun and Liu et al. [2] further investigated MPRSM for BPDP, regularizing its first subproblem by proximal regularization. Yang and Zhang [3] investigated alternating direction methods for several ℓ1-norm minimization problems, including the basis pursuit problem, the basis pursuit denoising problem, and so on. Yu et al. [4] applied the primal Douglas-Rachford splitting method to an equivalent reformulation of BPDP. In [5], the authors proposed some efficient methods for ℓ1-norm minimization problems, which can be applied to BPDP. Zhang and Sun [6] presented a projection-type method for BPDP and established its global convergence. Moreover, BPDP can be transformed into a smooth optimization problem by an equivalent splitting technique, so that iterative algorithms for smooth optimization become applicable. Xiao and Zhu [7] transformed BPDP into a system of convex constrained monotone equations and presented a conjugate gradient method for the equivalent form. Sun and Tian [8] gave a derivative-free conjugate gradient projection algorithm for non-smooth equations with convex constraints. Sun et al. [9] reformulated BPDP as a variational inequality problem and proposed a novel inverse matrix-free proximal point algorithm. Based on the same transformation as [9], Feng and Wang [10] also proposed a projection-type algorithm.
Although there are many ways to solve BPDP, both the speed and the accuracy of existing solvers still need improvement. In this paper, we consider a new self-adaptive method for solving BPDP with a sublinear convergence rate; the motivation is better numerical performance as the dimension increases.
The rest of this paper is organized as follows. In Section 2, some related properties are given, which form the basis of our analysis. We present a new self-adaptive algorithm with an Armijo-like line search to solve BPDP, and show in detail that the method is globally convergent. Furthermore, the sublinear convergence rate of O(1/k) is established. In Section 3, we report some numerical experiments on BPDP for sparse signal recovery to show the effectiveness of the presented algorithm. Finally, some conclusions are given in Section 4.
At the end of this section, we give some notations used in this paper. We use R^n to denote the n-dimensional Euclidean space with the standard inner product. For vectors x, y ∈ R^n, we use ⟨x, y⟩ to denote the standard inner product. We denote the ℓ1-norm and ℓ2-norm by ‖·‖_1 and ‖·‖, respectively.

II. ALGORITHM AND CONVERGENCE
In this section, we present a new iterative algorithm with an Armijo-like line search to solve BPDP, and prove in detail the global convergence and sublinear convergence rate of the new algorithm. To this end, we first give some preliminaries which will be used in the sequel. For any L > 0 and y ∈ R^n, consider the following quadratic approximation of F(x) at y:

Q_L(x, y) := f(y) + ⟨∇f(y), x − y⟩ + (L/2)‖x − y‖^2 + ρϕ(x),    (2)

and, up to a constant term in x, (2) can be further written as

Q_L(x, y) = (L/2)‖x − (y − ∇f(y)/L)‖^2 + ρϕ(x) + const.    (3)

Next, we recall Lemma 2.1 below, which is a fundamental property of smooth functions in the class C^{1,1}. It will be crucial for the convergence analysis of our algorithm.
From Lemma 2.1, if L ≥ L_f, then for any x, y ∈ R^n one has

f(x) ≤ f(y) + ⟨∇f(y), x − y⟩ + (L/2)‖x − y‖^2.    (4)

Now, we formally state our algorithm for model (1) as follows.
Remark 2.1: The subdifferential of the absolute value function |t| is given by

∂|t| = {1} if t > 0,  [−1, 1] if t = 0,  {−1} if t < 0.

Combining this with (7), we obtain the following results.
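This case analysis on ∂|t| yields the well-known soft-thresholding (shrinkage) operator, which is the closed-form minimizer of the separable subproblem min_x (1/2)‖x − v‖^2 + τ‖x‖_1. A minimal sketch (the vector v and threshold τ below are hypothetical):

```python
import numpy as np

def soft_threshold(v, tau):
    """Componentwise shrinkage sign(v_i) * max(|v_i| - tau, 0),
    which solves min_x 0.5*||x - v||^2 + tau*||x||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

v = np.array([3.0, -0.5, 1.0, -2.0])
print(soft_threshold(v, 1.0))
```

Entries with magnitude below the threshold are set exactly to zero, which is what produces sparse iterates.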
By the above analysis, the desired results follow.

Remark 2.2: Combining (7) with Lemma 2.2, we know that condition (7) holds for some finite m. In addition, when m_k ≥ 1, L_k/η must violate (7), i.e., L_k/η < ‖A^⊤A‖, so that L_k < η‖A^⊤A‖. Using L_k = η^{m_k}β and η > 1, one has β ≤ L_k for every k ≥ 1. Hence, β ≤ L_k < η‖A^⊤A‖.
Next, we discuss the global convergence and sublinear convergence rate of the proposed method. To this end, we present some lemmas below.

Proof: For any k ≥ 1, we have the stated chain of inequalities, where the first inequality is obtained by using (4) with y = x^{k−1}, x = x^k and L = L_k, and the second inequality follows from (2) and (6). Thus, the desired result follows.
Theorem 2.1: Let x* be an arbitrary solution of (1), and let {x^k} be the sequence generated by Algorithm 2.1. Then the stated bound holds for any k ≥ 1.

Proof: Applying Lemma 2.3 with x = x*, k = m, one has (19). Since x* is a solution of (1), one has F(x*) − F(x^k) ≤ 0. Combining this with (10), we obtain (20), where the second inequality is by (19). By (20), we can deduce (21). Applying (12), we obtain (24), and by (24) we can deduce (25). Adding (21) and (25), the desired result follows.
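To illustrate the monotone descent behind Theorem 2.1, the sketch below runs a plain proximal-gradient iteration on hypothetical data with a fixed L ≥ ‖A^⊤A‖ (used here in place of the paper's self-adaptive rule), so that the descent inequality (4) applies and F(x^k) decreases at every step:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))
z = rng.standard_normal(20)
rho = 0.5
L = np.linalg.norm(A.T @ A, 2)  # L >= L_f, so the descent inequality (4) holds

def F(x):
    r = A @ x - z
    return 0.5 * r @ r + rho * np.abs(x).sum()

x = np.zeros(50)
vals = [F(x)]
for _ in range(200):
    v = x - A.T @ (A @ x - z) / L                           # gradient step on f
    x = np.sign(v) * np.maximum(np.abs(v) - rho / L, 0.0)   # shrinkage step
    vals.append(F(x))

print(vals[0], vals[-1])  # the objective value decreases monotonically
```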

Remark 2.3: Theorem 2.1 indicates that we can obtain an ε-optimal solution, denoted by x̄ (i.e., F(x̄) − F(x*) ≤ ε), in at most O(1/ε) iterations.
Theorem 2.2: Suppose that Ω* is bounded. Then the sequence {x^k} generated by Algorithm 2.1 converges globally to a solution of (1).
Proof: By (11) and the fact that F(x) ≥ 0, we know that {F(x^k)} is convergent. Combining this with (23), one has (28). Applying (19) and the fact that F(x*) − F(x^k) ≤ 0, we have (29). By (29), the nonnegative sequence {‖x^k − x*‖} is decreasing, so it converges. Since the solution set of (1) is bounded, {x^k} is bounded; let {x^{k_i}} be a subsequence of {x^k} converging to x̄. Combining this with (28), one has (30). From (8), one has 0 ∈ σ∂ϕ(x^{k_i}) + ∇f(x^{k_i−1}) + L_{k_i}(x^{k_i} − x^{k_i−1}). Combining this with (30) and (28), one has (31). Since the function F(x) is convex, combining this with (31), we conclude that x̄ is a solution of (1). As a result, x̄ can be used in place of x* in the discussion of Theorem 2.1 above. Thus, the sequence {‖x^k − x̄‖} also converges; combining this with lim_{i→∞} ‖x^{k_i} − x̄‖ = 0, we have lim_{k→∞} ‖x^k − x̄‖ = 0, i.e., {x^k} converges globally to x̄.
Remark 2.4: By (28), we know that the termination criterion in Step 2 of Algorithm 2.1 is reasonable.

III. NUMERICAL RESULTS
In this section, we present some numerical experiments on BPDP to show the effectiveness of Algorithm 2.1. The original sparse signal x̄ is generated by the MATLAB commands p = randperm(n); x(p(1:k)) = randn(k,1).

We set the stopping criterion as

|F_k − F_{k−1}| / |F_{k−1}| < ε,

where F_k = F(x^k). The relative error is calculated by

RelErr = ‖x̃ − x̄‖ / ‖x̄‖,

where the recovered signal is denoted by x̃.
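In code, a relative-change stopping test and the relative error could look like the following minimal sketch (the tolerance ε and the test vectors are hypothetical):

```python
import numpy as np

def should_stop(F_k, F_km1, eps=1e-5):
    """Stop when |F_k - F_{k-1}| / |F_{k-1}| < eps (tolerance hypothetical)."""
    return abs(F_k - F_km1) / max(abs(F_km1), np.finfo(float).tiny) < eps

def rel_err(x_rec, x_true):
    """RelErr = ||x_rec - x_true||_2 / ||x_true||_2."""
    return np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)

print(should_stop(1.000001, 1.0), rel_err(np.array([1.0, 0.0]), np.array([1.0, 1.0])))
```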

A. Test on Additive Gaussian White Noise
In this subsection, we apply Algorithm 2.1 to recover a simulated sparse signal whose observation data are corrupted by additive Gaussian white noise. We set n = 2^11, m = 2^9, k = 2^6.
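A Python analogue of this experimental setup could read as follows; the measurement-matrix scaling and the noise level below are hypothetical, chosen only to illustrate the construction:

```python
import numpy as np

n, m, k = 2**11, 2**9, 2**6
rng = np.random.default_rng(0)

# Sparse original signal: k standard-normal entries at random positions,
# mirroring the MATLAB commands p=randperm(n); x(p(1:k))=randn(k,1).
x_true = np.zeros(n)
x_true[rng.permutation(n)[:k]] = rng.standard_normal(k)

# Gaussian measurement matrix and observations corrupted by
# additive Gaussian white noise (noise level hypothetical).
A = rng.standard_normal((m, n)) / np.sqrt(m)
z = A @ x_true + 0.01 * rng.standard_normal(m)
print(A.shape, int(np.count_nonzero(x_true)))
```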
The original signal, the measurement, and the reconstructed signal (marked by red points) by Algorithm 2.1 are given in Fig. 1. From the first and third plots in Fig. 1, all elements of the original signal are circled by the red points, which indicates that Algorithm 2.1 can recover the original signal quite well.
On the other hand, we use the same technique as in [8] to create another type of matrix A. Using the parameters above, the original signal, the measurement, and the reconstructed signal (marked by red points) by Algorithm 2.1 are given in Fig. 2. It can be concluded that our algorithm can also reconstruct the original signal in [8]. The numerical results are listed in Table I. From the table, we can see that the CPU time of Algorithm 2.1 is clearly less than that of the other algorithms for different k-sparse signals, in both the noise-free and Gaussian-noise cases. In addition, our algorithm is not only faster but also more accurate than the other algorithms, which shows that Algorithm 2.1 is better than PPRSM and LAPM.

IV. CONCLUSION
In this paper, we consider a new self-adaptive method to solve the basis pursuit denoising problem (BPDP), which has received considerable attention in signal processing and statistical inference. The global convergence of the method is established in detail. Furthermore, the global sublinear convergence rate of the method is also shown. Finally, some numerical results illustrate that the algorithm is effective on the given sparse signal recovery tests.
This work has several possible extensions. First, the parameters of Algorithm 2.1 could be adjusted dynamically to further enhance the efficiency of the method. Second, we may establish an error bound for (1), just as was done for the GLCP in [14], [15], [16], [17], and use the error bound estimation to establish a faster convergence rate of the new algorithm for solving (1). These are topics for future research.