Deployment and Migration of Virtualized Services with Joint Optimization of Backhaul Bandwidth and Load Balancing in Mobile Edge-Cloud Environments

Mobile edge-cloud computing environments have emerged as a novel computing paradigm offering effective processing and storage solutions for delay-sensitive applications. Container-based virtualization is particularly attractive in this setting thanks to its lightweight nature and portability, as well as its small migration overhead, which enables seamless service migration and load balancing. However, with user mobility, the backhaul bandwidth demanded by users is a critical parameter that influences the delay constraints of the running applications. Accordingly, a Binary Integer Programming (BIP) optimization problem is formulated. It minimizes the users' perceived backhaul delays and improves the load-balancing degree, so as to leave more room to accept new requests across the network. In addition, by introducing bandwidth constraints, the user backhaul bandwidth available after placement is increased. The methodology adopted to design two heuristic algorithms, based on Ant Colony System (ACS) and Simulated Annealing (SA), is then presented. The proposed schemes are compared using different metrics, and the benefits of the ACS-based solution over the SA-based solution as well as a Genetic Algorithm (GA) based solution are demonstrated. In particular, the ACS algorithm achieves better values of the normalized cost and the total backhaul cost than the other solutions.

Keywords—Mobile edge-cloud computing; delay-sensitive services; container migration; container deployment; backhaul bandwidth; load balancing


I. INTRODUCTION
Mobile Edge Computing (MEC) is an emerging distributed computing paradigm that can deliver timely services to mobile users [1], [2]. These users generally rely on resource-limited smart mobile devices (SMDs) to run indispensable smart applications related to social networking, learning, business and entertainment. To reinforce privacy, reduce latency, preserve bandwidth and offer location awareness, MEC enables computation and storage at the edge of the network using a set of edge nodes (ENs). These nodes are resource-rich network cells or edge servers (ESs) deployed in close proximity to the end users, and they offer virtualized services that allow mobile applications' workloads to be offloaded [3]. The use of these applications raises new constraints related to mobility, limited energy, limited computational capacity and short latency.
The MEC model uses virtualization techniques to manage resource allocation for Virtual Services (VSs) [4]. These VSs are placed, migrated or replicated over the ENs according to the users' locations and resource availability, while considering constraints such as QoS, load balancing and energy. The new container-based lightweight virtualization approach is intended to decrease the communication network overhead and enhance service continuity and quality. However, given the intrinsic user mobility, which is frequent and largely unpredictable, and the limited coverage of the nodes, guaranteeing QoS for the deployed virtualized services is the most critical issue [5]. Indeed, when a user moves far from the edge server hosting the corresponding virtualized service, the service response time grows and can hamper the smooth running of the service. A service migration [6] process then becomes important to keep the service interactive and guarantee its continuity. But the migration decision is very delicate, because of the high cost of this process in terms of time and the consumption of the available network bandwidth and other resources. With non-negligible migration overhead, frequent migration following the user's movement cannot be tolerated under all network conditions, whereas too little migration leads to accumulating communication delays, which may degrade the QoS.
Service migration has recently emerged as a leading problem in MEC networks. It involves complex procedures to dynamically move running services from one edge node to another, and it is required by different edge management procedures such as handling service failures, load balancing and mobile workload offloading. To guarantee service-level agreements (SLAs) or seamless services, it must also meet many constraints related to the available network and computing resources, the order of magnitude of the latencies, and the users' mobility [7]. Migration decisions are taken while optimizing a general cost or profit function evaluated over a long-term or short-term horizon. Its formulation uses metrics such as the migration duration, the service downtime, network resource consumption, etc. However, a precise evaluation of these metrics remains a major obstacle to a good model of this problem: on the one hand, because of the great diversity and strong dynamicity of the parameters as well as the mobility of the users; on the other hand, because the limited resources involved in the migration tighten the constraints and reduce the number of feasible solutions.

II. RELATED WORKS
Considering the high-mobility characteristic of Vehicular Edge Networks, the authors of [8] considered a delay-based cost function involving wireless transmission, backhaul and computing delays. They examined the problem of joint service migration and mobility optimization with minimum migration cost and travel time, and proposed a multi-agent deep reinforcement learning algorithm to solve it. In [9], the authors use the migration frequency and migration time as the migration cost and suggest a QoS-aware solution that enhances handover operations by exchanging additional information in order to perform service migration.
Assuming user mobility awareness, and with QoS concerns for efficient service migration in MEC networks, many relevant works target service migration optimization. The work in [10] uses the follow-me edge concept to derive a service performance optimization problem, constrained by a long-term cost budget, that decides the service migration. The decision metrics include the computing and communication delays plus the migration cost. The long-term optimization problem is decomposed using Lyapunov optimization, then approximated based on Markov approximation to derive a near-optimal solution with a fast convergence rate. Building on the same concept to guarantee high availability and provide ultra-low latency, the authors of [11] studied four container-based migration strategies, considering both predefined and unknown path scenarios. The work in [12] considered a cost function combining three metrics: the topology cost, which depends on the network structure and routing mechanism; the user-perceived delay; and the risk of location privacy leakage. They modelled the migration procedure as a Markov Decision Process (MDP) and proposed a modified policy iteration algorithm to find the optimal decision. A distance-based MDP was also proposed in [13] to optimize the trade-off between the user-experienced delay and the migration cost, considering the distance separating the user and the service locations. The work in [14] considered a dynamic task migration problem accounting for delays, task deadlines and user mobility, where the objective was to maximize the number of tasks with guaranteed deadlines.
However, mobility information is usually unavailable in the real world due to privacy and inaccuracy issues. Under this consideration, several recent works tackled the optimization of service or container migration from various perspectives. The work in [15] studied container migration in edge networks using a joint load balancing and migration cost minimization model, where the migration cost encompasses two main metrics: the network transmission delay and the container migration downtime. They designed a migration solution based on a modified Ant Colony System algorithm. In [16], a live migration framework for container-based offloading services is presented; its basic optimization idea consists in sharing common storage layers across the edge hosts. In [17], the authors addressed the high network consumption problem arising when migrating virtual machines in cloud-edge fusion computing, and proposed heuristic algorithms to balance migration and communication costs.
The rest of this work is organized as follows. The system model is described in Section III. The resulting optimization problem is presented in Section IV, and its resolution approaches are summarized in Section V. Evaluation and results are presented in Section VI. Finally, Section VII concludes the paper.

III. USER'S BACKHAUL AND LOAD BALANCING AWARE MIGRATION AND DEPLOYMENT OF CONTAINERS (UBL-MDC)
This section shows the need to optimize a multiple-criteria decision-making problem in the proposed edge-cloud architecture. Then, the involved cost functions as well as the final overall objective function to optimize are formulated.

A. System Model
In this paper, the service deployment and migration problem is studied from the perspective of an edge-cloud service provider. As shown in Fig. 1, an edge-cloud network that uses a public/private cloud (PC) and a set of Edge Nodes (ENs) within a 2-D geographical local area is considered. Each EN is equipped with an Edge Server (ES) that is either hosted in a Base Station (BS), which offers access to the wireless communication network for all SMDs in its coverage, or simply deployed to offer the network more processing and storage capabilities. In the latter case, the server is called an independent edge server, denoted IS. For ease of use, the terms EN and ES are used interchangeably, and edge-cloud server i is denoted s_i. An ES within a BS serves the SMDs within the coverage area of the BS as well as remote ones, whereas an independent ES serves only remote SMDs. The PC is assumed to have unlimited capacity, whereas all ESs are assumed heterogeneous with limited resources. Each ES can provide a set of independent virtualized services (VSs) using container-based lightweight virtualization, where each running service uses one container instance and serves one SMD only. The set of all available edge-cloud servers is denoted S = {s_1, s_2, ..., s_{σ_s}}, where σ_s is the number of servers. Similarly, the set of all involved containers is denoted C = {c_1, c_2, ..., c_{σ_c}}, where σ_c is the number of containers.

1) UBL-MDC Variables:
To model the operations involved in the studied system, the decision variables are presented. The binary migration decision variable for container c_i from its current edge server s^c_i to EN j is denoted α_{i,j}, where α_{i,j} = 1 refers to the decision to migrate c_i from s^c_i to j, and α_{i,j} = 0 otherwise.
Additionally, when migrating container c_i from its edge server s^c_i to j, the selection of the migration path among the possible paths set P_{s^c_i,j} is represented by the binary variable β^k_{i,j}, where β^k_{i,j} = 1 refers to the decision to use the k-th path in P_{s^c_i,j} to migrate c_i from s^c_i to j, and β^k_{i,j} = 0 otherwise.
2) Paths and delays: The SMDs access the ESs via wireless channels, while nearby ENs are connected to each other in a wired manner using high-speed Ethernet cables or optical fibers. The MEC network topology is given by the set of nodes S and the set of links relying them, denoted L = {L_{j,j'} ; j ∈ S, j' ∈ S \ {j}}, where L_{j,j'} is the one-hop link between ESs s_j and s_j'. Also, P_{s^c_i,j} denotes the set of feasible¹ paths connecting ESs s^c_i and j that can serve to migrate container c_i, located in ES s^c_i, to server s_j. Without loss of generality, we assume that the set P_{s^c_i,j} is precalculated and given when deciding the containers' migration. Then, P denotes the set of all sufficient paths connecting all pairs of distinct nodes (j, j'), i.e. P = ∪_{j,j'} P_{j,j'}. Each path p_k in P_{s^c_i,j} is an ordered set of distinct links of length |p_k|, such that p_k = (L_1, L_2, ..., L_{|p_k|}). Here, the source node of link L_1 is s^c_i and the target node of the last link L_{|p_k|} is j. For the remaining links, the source node of link L_l is the target node of link L_{l-1}, and the target node of link L_l is the source node of link L_{l+1}. Fig. 2 shows a network topology example given by a set of five ESs S = {s_1, s_2, s_3, s_4, s_5} (σ_s = 5) and six wired links L = {L_{s1,s2}, L_{s1,s3}, L_{s2,s3}, L_{s1,s4}, L_{s1,s5}, L_{s4,s5}}. The dotted links show a migration path instance p_1 with its ordered link set given by p_1 = (L_{s2,s1}, L_{s1,s4}). The set of possible paths connecting s_2 and s_4 is then given by P_{2,4} = {p_1, p_2, p_3, p_4}, where p_1 = (L_{s2,s1}, L_{s1,s4}), p_2 = (L_{s2,s1}, L_{s1,s5}, L_{s5,s4}), p_3 = (L_{s2,s3}, L_{s3,s1}, L_{s1,s4}) and p_4 = (L_{s2,s3}, L_{s3,s1}, L_{s1,s5}, L_{s5,s4}). In the proposed model, each link l ∈ L is characterized by its total available bandwidth b(l).
In addition, given path p_k ∈ P_{s^c_i,j} and the set L of all σ_l links, the binary array δ^k_{i,j} of length σ_l indicates the membership of each link in p_k. Its binary indicators δ^{k,l}_{i,j} can be computed from the paths in P_{s^c_i,j}, such that δ^{k,l}_{i,j} takes the value 1 if link l ∈ L is crossed by path p_k ∈ P_{s^c_i,j}, and 0 otherwise. Thus, each available path p_k ∈ P_{j,j'} with hop count H^k_{j,j'} offers an allocatable bandwidth B^k_{j,j'} for multi-hop data transmission (see Ref. [15]). They are respectively expressed as:

H^k_{j,j'} = |p_k|,    B^k_{j,j'} = min_{l ∈ p_k} b(l).

Thus, when container c_i is transferred to node j, the backhaul bandwidth B_i between its target node s^t_i and node j is derived from the allocatable bandwidths of the paths in P_{s^t_i,j}, which, combined with the decision variables, gives the expression in (7).

3) Containers: Hereafter, for ease of notation, the variables i, j, k, l and m are reserved for containers, servers, paths, links and resources, respectively. From now on, each container c_i is characterized by its vector of operating parameters Ω_i. Here, s^c_i refers to the current server hosting container c_i, and s^t_i refers to the node hosting the communication access point connecting the user of the service associated with c_i. This node is considered the best candidate target node for deployment or migration, so that the best transfer paths for c_i are in P_{s^c_i,s^t_i}. Indeed, if such a path is a feasible solution, container c_i will be migrated to the direct proximity of the user with no communication overhead. Also, x_i indicates whether c_i is requested for a migration (x_i = 1) or for a new deployment procedure (x_i = 0).

¹ We assume that a restricted set of paths is sufficient to obtain the optimal solution, without the need to consider all possible paths.
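The path metrics above can be sketched in a few lines of code. This is an illustrative sketch, not the paper's implementation: it assumes the standard bottleneck definition of multi-hop bandwidth (the minimum residual bandwidth over the path's links), and the topology values are hypothetical numbers for the Fig. 2 example.

```python
# Illustrative sketch: allowable bandwidth and hop count of a migration path.
# A path is an ordered list of links; each link l has an available bandwidth b(l).
# Assumption: the bandwidth of a multi-hop path is limited by its bottleneck link.

def path_bandwidth(path, b):
    """Allowable bandwidth of a path = bandwidth of its bottleneck link."""
    return min(b[link] for link in path)

def hop_count(path):
    """Hop count of a path = number of links it traverses."""
    return len(path)

# Fig. 2 example paths between s_2 and s_4 (bandwidth values are made up):
b = {("s2", "s1"): 100.0, ("s1", "s4"): 40.0, ("s1", "s5"): 80.0, ("s5", "s4"): 60.0}
p1 = [("s2", "s1"), ("s1", "s4")]
p2 = [("s2", "s1"), ("s1", "s5"), ("s5", "s4")]

print(path_bandwidth(p1, b), hop_count(p1))  # bottleneck 40.0 over 2 hops
print(path_bandwidth(p2, b), hop_count(p2))  # bottleneck 60.0 over 3 hops
```

Note the trade-off that the cost model exploits: p2 offers more bandwidth but a longer route, which is exactly why the backhaul cost weighs both terms.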

4) Edge servers resources:
Each ES provides a set of resources of multiple types, including CPU, GPU, memory, storage, etc. The set of the σ_r possible resources is denoted R = {r_1, r_2, ..., r_{σ_r}}. Accordingly, every server j is characterized by its capacity set R^cap_j = {r^c_{j,1}, r^c_{j,2}, ..., r^c_{j,σ_r}}, where r^c_{j,m} represents the maximum quantity of resource r_m that s_j can furnish. Within server j, which runs a set of containers using the allocated resources, the deployment and migration result in hosting new containers and freeing others, conforming to the placement decisions. Thus, the utilization r^u_{j,m} of resource r_m on ES j after the migration process is calculated as follows:
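The resource bookkeeping described above can be sketched as follows. All names and the flat data layout are illustrative, not the paper's notation: arriving containers add their demands to the target server, and containers migrated away free the resources they held.

```python
# Hypothetical sketch of post-decision resource utilization r_u[j][m]:
# alpha[i][j] = 1 if container i is placed on server j; current_server[i] is
# its server before the decision; demand[i][m] is its demand for resource m;
# base_usage[j][m] is the usage of server j before the decision.

def utilization_after(alpha, current_server, demand, base_usage, j, m):
    used = base_usage[j][m]
    for i, targets in enumerate(alpha):
        for jj, placed in enumerate(targets):
            if not placed:
                continue
            if jj == j and current_server[i] != j:
                used += demand[i][m]          # container i arrives at j
            if jj != j and current_server[i] == j:
                used -= demand[i][m]          # container i leaves j
    return used

# Two containers, two servers, one resource type.
alpha = [[0, 1],   # container 0 migrates from server 0 to server 1
         [0, 1]]   # container 1 stays on server 1
current_server = [0, 1]
demand = [[2.0], [3.0]]
base_usage = [[2.0], [3.0]]  # usage before the decision

print(utilization_after(alpha, current_server, demand, base_usage, 0, 0))  # 0.0
print(utilization_after(alpha, current_server, demand, base_usage, 1, 0))  # 5.0
```

Server 0 is fully freed by the migration, while server 1 now carries both containers' demands; the capacity constraint of Section IV checks this value against r^c_{j,m}.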

5) Container deployment:
Deploying a service in this work refers to the transfer of the unstarted components of a container (program code, libraries, databases, etc.) from the storing node to a MEC server in order to make them available to serve a user. The provider's containers are stored in its PC or in a specific known EN, depending on the requested service. Thus, every new incoming service request from a user triggers a service deployment from the hosting node to an EN. Here, the same notation s^c_i is adopted to refer to the hosting node of the requested container c_i.

6) Container migration:
Migrating a container aims to achieve load balancing across the ESs and, when necessary, to increase the number of services that meet their execution latency constraints. This process involves the transfer of all runtime memory states as well as the related storage data, which must be synchronized on the target ES. Furthermore, migration traffic routing in MEC networks not only helps to significantly reduce service downtime and interruption by selecting the best routing paths, but also protects the network from route failures. Indeed, if some links are busy or fail completely, alternate paths can be selected to redirect and salvage the data flows. Accordingly, a container migration decision has to find the most expedient path to route the migration flows while avoiding the network's congested links.
The important notations used are summarized in Table I.

TABLE I. MAIN NOTATIONS
C — The set of containers
S — The set of edge-cloud servers
L — The set of links
σ_c, σ_s, σ_r — The total numbers of containers, servers and resources
P_{j,j'} — The set of sufficient paths connecting servers s_j and s_j'
B^k_{j,j'} — The bandwidth of path p_k ∈ P_{j,j'}
H^k_{j,j'} — The hop count of path p_k ∈ P_{j,j'}
B_i — The backhaul bandwidth associated with container c_i
Ω_i — The operating parameters of container c_i
R^dem_{i,m} — The c_i demand in terms of resource r_m
r^u_{j,m} — The s_j resource usage in terms of resource r_m

B. The Cost Models
As already alluded to above, the containers' deployment or migration has to be decided while optimizing a cost model, as this is the most suitable way to favour one possible migration solution over another. Thus, this section presents the costs involved in formulating the objective function of the optimization problem. Table II shows some important notations used to express these costs.

TABLE II. COST MODEL NOTATIONS
Cost^back_{i,j} — The container c_i backhaul cost when transferred to s_j
Cost^back — The overall user backhaul cost
Cost^proc_j — The processing load cost related to server s_j
Cost^proc — The overall processing load cost
Cost^netw — The overall network load cost
Cost — The cost (objective) function
φ_0, φ_{i,j} — The pheromone initial and current values
∆^l_φ, ∆^g_φ — The local and global pheromone evaporation rates
ε_1, ε_2 — The pheromone and heuristic information parameters
temp_0 — The initial temperature value

1) User backhaul cost: After placing container c_i at node j, the backhaul delay of its user depends on the characteristics of the path connecting node j and its communication access node s^t_i. The ideal situation is achieved when j = s^t_i. Accordingly, this cost is introduced to favour such migrations and bring the containers as close as possible to their end users: the smaller this cost, the more efficient the placement. To assess it, the available bandwidth between nodes j and s^t_i as well as the hop count between them are used, and the following weighted sum is adopted:

Here, i ∈ C, j ∈ S, and Cost^back_{i,j} ranges in [0, 1]. ∆_r and ∆_h are the weights associated with the available data rate (bandwidth) and hop count costs, respectively, such that ∆_r + ∆_h = 1. The max and min expressions in the fractions are used for normalization.
Therefore, with the decision vector α, the overall user backhaul cost can be obtained as:

2) Load balancing cost: To ensure service quality while taking the service delay into account, the model favours container migration away from over-loaded ENs, to release resources for future nearby user requests. Also, to avoid an unbalanced network load, e.g. some links being highly loaded while others are lightly loaded, a link traffic load metric is introduced. The main intuition behind balancing this load is to select the paths that best balance the traffic loads across the different links and keep critical links available for future traffic. Accordingly, the load balancing cost involves the processing (computation) load cost of the running containers on all ENs and the traffic load of all available links. The processing load ratios θ_{j,m} of resource r_m on ES j, and their mean value θ̄_m, are defined as follows:

Then, the processing load related to ES j with regard to all resource types is defined as:

which gives the following overall processing load:

On the other hand, the network load balancing cost shows the distribution ratio of the links' load, i.e. it indicates whether containers receive a fair share of the data transfer resources. Hence, the lack of capacity ϑ_l of link l, based on the allowable bandwidth B^k_{s^c_i,j}, is given by:

Accordingly, this lack of capacity is chosen as the network load unbalance cost related to link l, which gives the following overall network traffic load unbalance cost:

Moreover, since ϑ_l(α, β) ≤ b(l) · (σ_c − 1), the following normalized sum is introduced:

Finally, the following weighted sum is adopted to assess the overall load balancing cost, where ∆_p and ∆_n are the weights associated with the processing and network loads, respectively, such that ∆_p + ∆_n = 1. The denominators in this expression are used for normalization.
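The processing-load part of this cost can be sketched in code. This is a hedged stand-in for the paper's exact expressions: the load ratios θ_{j,m} = usage/capacity match the definition above, but the squared deviation from the mean used as the unbalance score is one common balancing measure, chosen here for illustration; all names are hypothetical.

```python
# Illustrative sketch of a processing-load balancing score.
# theta[j][m] = usage / capacity per server j and resource m; the score sums,
# per resource, the squared deviation of each server's ratio from the mean
# (a common unbalance measure, not necessarily the paper's exact formula).

def load_ratios(usage, capacity):
    return [[u / c for u, c in zip(us, cs)] for us, cs in zip(usage, capacity)]

def processing_unbalance(usage, capacity):
    theta = load_ratios(usage, capacity)
    n_srv, n_res = len(theta), len(theta[0])
    cost = 0.0
    for m in range(n_res):
        mean_m = sum(theta[j][m] for j in range(n_srv)) / n_srv
        cost += sum((theta[j][m] - mean_m) ** 2 for j in range(n_srv))
    return cost

usage = [[8.0], [2.0]]      # two servers, one resource type: skewed load
balanced = [[5.0], [5.0]]   # same total load, evenly spread
capacity = [[10.0], [10.0]]

print(processing_unbalance(usage, capacity))     # positive: unbalanced
print(processing_unbalance(balanced, capacity))  # 0.0: perfectly balanced
```

A placement that spreads containers evenly drives this term to zero, which is exactly the behaviour the overall cost rewards.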

IV. THE OPTIMIZATION PROBLEM

A. Multi-objective Cost Function
Now, to obtain the overall cost model, multi-criteria migration and deployment decisions are designed by considering the cost metrics above within the proposed edge computing system. The proposed multi-objective function is formulated as a weighted sum of the backhaul and load-balancing costs:

Cost(α, β) = ∆_b · Cost^back(α) + ∆_l · Cost^load(α, β)    (19)

Here, ∆_b and ∆_l are regulatory weight constants that balance this cost function. Their values range in [0, 1], such that ∆_b + ∆_l = 1; by tuning these weights, one can adjust the priority attributed to each metric. The decision variables are given by α (a two-dimensional binary array of size [σ_c × σ_s]) and β (a two-dimensional array [σ_c × σ_s] of vectors, where β_{i,j} is a vector of binaries of length |P_{i,j}|).
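The weighted aggregation can be illustrated with a minimal sketch, assuming both component costs are already normalized to [0, 1] as described above (function and parameter names are illustrative):

```python
# Minimal sketch of the overall objective: a weighted sum of the normalized
# backhaul cost and load-balancing cost, with Delta_b + Delta_l = 1.

def overall_cost(cost_backhaul, cost_load, delta_b=0.5, delta_l=0.5):
    assert abs(delta_b + delta_l - 1.0) < 1e-9, "weights must sum to 1"
    return delta_b * cost_backhaul + delta_l * cost_load

# Prioritizing backhaul delay (delta_b = 0.7) over load balancing:
print(overall_cost(0.4, 0.6, delta_b=0.7, delta_l=0.3))  # 0.46
```

Raising ∆_b pulls the optimizer towards placements near the users' access nodes, while raising ∆_l favours placements that even out server and link loads.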

B. Constraints
In the proposed model, the case where container i is not migrated is represented by setting α_{i,s^c_i} = 1 and α_{i,j} = 0 for j ∈ S \ {s^c_i}; if it is migrated, only one target server is selected. Accordingly, the migration decision of container i has to meet the following constraint:

Σ_{j ∈ S} α_{i,j} = 1,  ∀ i ∈ C.

By selecting a path p = (L_{s^c_i,s_1}, L_{s_1,s_2}, ..., L_{s_{|p|−1},j}) in P_{s^c_i,j} to serve the transfer flow of container c_i from node s^c_i to j, several constraints have to be satisfied. Given the start node s^c_i of path p, its last node j must be the placement node of container i, which is expressed as:

Σ_{p_k ∈ P_{s^c_i,j}} β^k_{i,j} = α_{i,j},  ∀ i ∈ C, ∀ j ∈ S.

Also, the resource capacity of EN j must satisfy the resource requirements of all the containers decided to be deployed in or migrated to ES j, for all resource types, which is formulated as:

r^u_{j,m} ≤ r^c_{j,m},  ∀ j ∈ S, ∀ m ∈ R.

Lastly, the serving bandwidth constraint after the placement of container c_i, using the maximal available bandwidth in (7), is formulated as:
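The placement, path-selection and capacity checks can be sketched as a feasibility test. This is an illustrative sketch with hypothetical names and a single resource type, not the paper's solver; the bandwidth constraint is omitted for brevity.

```python
# Hedged sketch of the feasibility checks: alpha[i][j] places container i on
# server j; beta[i][j] is the path-selection vector for that transfer;
# demand/capacity cover one resource type.

def feasible(alpha, beta, demand, capacity, current_server):
    n_c, n_s = len(alpha), len(alpha[0])
    for i in range(n_c):
        # (a) exactly one placement per container
        if sum(alpha[i]) != 1:
            return False
        for j in range(n_s):
            # (b) a path is selected iff container i is transferred to j
            needs_path = alpha[i][j] == 1 and j != current_server[i]
            if sum(beta[i][j]) != (1 if needs_path else 0):
                return False
    # (c) capacity constraint on every server
    for j in range(n_s):
        load = sum(demand[i] for i in range(n_c) if alpha[i][j])
        if load > capacity[j]:
            return False
    return True

alpha = [[0, 1], [1, 0]]                 # container 0 moves to server 1; 1 stays
beta = [[[], [1, 0]], [[], []]]          # container 0 uses path 0 towards server 1
print(feasible(alpha, beta, demand=[3, 4], capacity=[5, 5], current_server=[0, 0]))  # True
```

Swapping in an oversized demand makes check (c) fail, which is how infeasible candidates are discarded during the search.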

C. Formulation
In light of the above clarifications, the formulation of the proposed UBL-MDC framework, which aims to efficiently deploy and migrate the involved containers while considering their priorities, is presented. The joint deployment, migration and route selection decisions are made so as to minimize the objective consisting of the costs related to the resulting user backhaul bandwidth and the load-balancing degree. Finally, the following optimization problem P1 generates the minimal deployment and migration cost, with resource allocation and traffic routing, while a maximum number of priority containers is satisfied.

D. The UBL-MDC Problem Complexity
Since problem P1 is a binary integer programming problem, it is NP-hard. This is highlighted by its search space dimension, which is 2^{σ_c σ_s} · Σ_{i∈C} Σ_{j∈S} 2^{|P_{i,j}|}. For example, when σ_c = 20, σ_s = 5 and |P_{i,j}| = 10, the search space size is 2^100 × (100 × 2^10) ≈ 1.298 × 10^35. The search space thus grows exponentially with the problem's dimension, and the computational requirements to solve such a problem exactly are clearly excessive. Therefore, the following section presents the development of low-complexity heuristic schemes.
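The quoted count can be checked numerically with a one-line helper (names are illustrative):

```python
# Search-space size: 2^(sigma_c * sigma_s) placement combinations times the
# sum over all (container, server) pairs of 2^|P_{i,j}| path combinations.

def search_space_size(sigma_c, sigma_s, paths_per_pair):
    return 2 ** (sigma_c * sigma_s) * (sigma_c * sigma_s * 2 ** paths_per_pair)

size = search_space_size(20, 5, 10)
print(f"{size:.3e}")  # ≈ 1.298e+35, matching the figure above
```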

A. The BFS-PS Exact Solution
To obtain the optimal containers' migration and deployment decision given by problem P1, an exhaustive search is performed over all possible solutions using a Brute Force Search with Path Selection, denoted BFS-PS. It is presented in Algorithm 1 and is feasible only for limited settings. Indeed, when σ_c = 20, σ_s = 5 and |P_{i,j}| = 10, the iteration count is N ≈ 9.536 × 10^33, which is already not feasible.

Algorithm 1: BFS-PS
Require: C, S, P, Ω
Ensure: (α*, β*, Γ*)
for n = 1 to N do
  for each container i in C do
    for each node j in S do
      build α and β from the current candidate n;
    end for
  end for
  if the constraints of P1 are satisfied then
    X ← Cost(α, β) according to (19);
    if X < Cost* then
      (α*, β*, Γ*) ← (α, β, X);
    end if
  end if
end for
return (α*, β*, Γ*)

As input, Algorithm 1 requires the parameters' vector Ω as well as the information regarding containers, servers and paths. Its main loop iterates N times over an instruction block that tries to improve the best solution using the variables α and β built from the current iteration value.
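As a toy illustration of this exhaustive enumeration, the sketch below enumerates every placement of each container on each server and keeps the cheapest feasible one. The cost and capacity models are hypothetical stand-ins for problem P1, and path selection is omitted for brevity.

```python
import itertools

# Toy brute-force sketch in the spirit of BFS-PS: enumerate every candidate
# placement, discard infeasible ones, keep the assignment of minimum cost.

def brute_force(n_containers, n_servers, demand, capacity, cost_fn):
    best, best_cost = None, float("inf")
    for placement in itertools.product(range(n_servers), repeat=n_containers):
        load = [0.0] * n_servers
        for i, j in enumerate(placement):
            load[j] += demand[i]
        if any(load[j] > capacity[j] for j in range(n_servers)):
            continue                      # violates the capacity constraint
        c = cost_fn(placement)
        if c < best_cost:
            best, best_cost = placement, c
    return best, best_cost

# Stand-in cost: prefer server 1 (e.g. closer to the users), capacity permitting.
cost_fn = lambda placement: sum(0.0 if j == 1 else 1.0 for j in placement)
print(brute_force(3, 2, demand=[3, 3, 3], capacity=[9, 5], cost_fn=cost_fn))  # ((0, 0, 1), 2.0)
```

Even here the loop visits σ_s^σ_c candidates, which is why the exhaustive approach collapses beyond toy sizes.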

B. ACS-PS Approximate Algorithm
To obtain feasible containers' migration and deployment decisions, an efficient discrete ACS-based algorithm with Path Selection (ACS-PS) is designed hereafter, with two different migration strategies. To assess its performance, two other meta-heuristic algorithms, based on Simulated Annealing (SA) and Genetic Algorithms (GA), are used for comparison. The former is summarized in Algorithm 4, whereas the latter is based on the work in [15].
1) Algorithm description: ACS schemes adopt pheromone evaporation and sharing strategies to spread the learned experience among different groups of ants. They mimic the foraging process of ants to decide the containers' migration and deployment. The main steps of this algorithm are summarized as follows: • Ants are randomly placed on the containers to be transferred.
• Every ant A_a selects a mapping tuple <c_i; s_j> with probability p_{i,j}, referring to the transfer of container c_i to node s_j using path p_k, according to the pheromones φ_{i,j} and the heuristic information ψ_{i,j}. Then, c_i is placed into the tabu list Tabu_a of A_a.
• To build its migration plan, ant A_a moves on to the next container in the transfer containers set C, and repeats the previous process to complete the next migration allocation.
• One iteration is completed when all the ants have allocated all the transfer containers in C once.
• The algorithm terminates when the maximum iterations' number is reached.
2) Algorithm skeleton: In practice, ants use a chemical substance named pheromone to share information with each other [18]. Its initial value φ_0 is defined as follows:

Pheromone variation rules: When transferring the containers, the ACS algorithm stores the ants' search experience in the matrix [φ] of size σ_c × σ_s. Each element φ_{i,j} holds the pheromone amount that informs the ants about the tendency to choose the pair (c_i; s_j). The following equations are the rules serving to update the pheromone locally and globally, respectively:

Here, ∆^l_φ and ∆^g_φ are the local and global pheromone evaporation rates, respectively, and ∆^a_{i,j} is the additional pheromone increment, where Cost(X^+_a) is the cost value of the iteration's best solution found by ant A_a. When the mapping tuple <c_i; s_j> is chosen, the ant locally updates the pheromone value of this choice using Eq. (25). On the other hand, once the mapping tuples of all current solutions are completed, the best one w.r.t. Cost is chosen to perform the global pheromone update using Eq. (26), in order to maintain the experience of the global best solution.
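These two update rules can be sketched in the usual ACS form: the local rule evaporates a chosen entry towards the initial value φ_0, while the global rule reinforces only the pairs of the iteration-best solution with a deposit inversely proportional to its cost. The constants and the 1/cost deposit below follow standard ACS practice and are illustrative, not the paper's exact expressions.

```python
# Hedged sketch of ACS pheromone updates (local and global rules).

def local_update(phi, i, j, phi0, rho_local):
    """Local rule: evaporate the chosen entry towards the initial value phi0."""
    phi[i][j] = (1 - rho_local) * phi[i][j] + rho_local * phi0

def global_update(phi, best_solution_pairs, best_cost, rho_global):
    """Global rule: reinforce only the pairs of the iteration-best solution,
    with a deposit inversely proportional to its cost."""
    for i, j in best_solution_pairs:
        phi[i][j] = (1 - rho_global) * phi[i][j] + rho_global * (1.0 / best_cost)

phi = [[0.5, 0.5], [0.5, 0.5]]
local_update(phi, 0, 1, phi0=0.3, rho_local=0.1)          # entry decays towards 0.3
global_update(phi, best_solution_pairs=[(0, 1)], best_cost=2.0, rho_global=0.1)
print(phi[0][1])
```

The local decay discourages all ants from piling onto the same pair within one iteration, while the global deposit slowly concentrates the search around the best solution found so far.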
Heuristic information: The proposed model uses heuristic information ψ_{i,j}, obtained from the maximum allowable bandwidth to transfer container c_i to node s_j, which is expressed as:

Ants usually tend to choose the option with more pheromone and higher expectations. Nevertheless, this deterministic choice has the disadvantage of falling into local optima. Accordingly, the ACS algorithm reacts by using a pseudorandom rule, where ants probabilistically select the next mapping transfer tuple <c_i, s_j, p_k>. First, Eq. (29) defines the set ω_a(i) of possible target nodes j related to ant A_a, together with their leading routes k, that verify all the constraints in (31). Each element in this set represents a possible candidate placement node j with its associated possible leading routes, given by the set ω_{a,j}(i). The set of the candidate placement nodes alone is denoted ω̄_a(i).
The node selection: The next container-node pair is chosen based on the following equation:

where q is a uniformly distributed random number in [0, 1] and q_0 ∈ [0, 1] is a threshold parameter. ε_1 and ε_2 are the pheromone and heuristic information parameters, respectively. When q ≤ q_0, A_a chooses the pair (i, j) with the maximum value to transfer c_i to node j. Otherwise, the pair (i, j) is chosen with the Roulette Wheel procedure (see Algorithm 2) within the set ω_a(i), using the probabilities χ_{i,j} defined in Eq. (33).
The node-path pair selection: If container c_i is selected for transfer, the model selects the pair s_j − p_k, denoted (j, k), as the target node and the transfer path. The adopted path selection strategy has two versions: the first, denoted ACS-PS-1, selects the path with the maximum allowable bandwidth, while the second, denoted ACS-PS-2, adopts a random selection strategy. With ACS-PS-1, the following equation, which gives the maximum transfer bandwidth while choosing path p_k, is adopted:

Algorithm 2: Roulette Wheel Rule for Container c_i using ω_a(i)
Require: S, P, ω_a(i), Ω_i, ε_1 and ε_2
Ensure: the candidate node j_0
for each node j in S do
  if j ∈ ω_a(i) then
    calculate χ_{i,j} using Eq. (33)
  else
    χ_{i,j} ← 0
  end if
end for
q1 ← random(0, 1) × χ_total
p ← 0
for each node j in ω_a(i) do
  p ← p + χ_{i,j}
  if q1 ≤ p then
    j_0 ← j; break
  end if
end for
return j_0

3) Algorithm pseudo-code: The pseudo-code of the proposed algorithm is summarized in Algorithm 3, where a solution X_a is given by the variables' arrays (α, β) and X is the solution set of all ants. As input, Algorithm 3 requires the sets C, S and P; the parameters' vector Ω; the maximum iteration count n_max; the ants' count σ_a; the pheromone initial value φ_0; the threshold q_0; the local and global pheromone evaporation rates ∆^l_φ and ∆^g_φ; the pheromone and heuristic information parameters ε_1 and ε_2; and the path selection strategy s. In lines 1 to 3, the initial solution's vectors are built and the optimal cost F* associated with the optimal solution (α*, β*) is initialized. In line 4, the general for loop repeats the process for n_max iterations, where each iteration involves all the ants through the loop in line 6. At each ant's step, the probability matrix is updated (lines 7-11); the containers' placement decisions with path selection are performed using Eq. (32) and strategy s, which yields the vectors α and β (lines 12-35); and the local pheromone update is executed.
Then, the iteration solutions corresponding to all ants are examined, with a global pheromone update (lines 38-40) using the best solution and Eq. (26).
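The pseudorandom proportional rule at the heart of this selection can be sketched as follows: with probability q_0 the ant exploits the best pair according to the product φ^{ε_1}·ψ^{ε_2}; otherwise it explores with a roulette-wheel draw weighted by the same product, as in Algorithm 2. All names and values are illustrative.

```python
import random

# Hedged sketch of the pseudorandom proportional rule and roulette wheel.

def choose_server(candidates, phi, psi, eps1, eps2, q0, rng=random):
    weights = {j: (phi[j] ** eps1) * (psi[j] ** eps2) for j in candidates}
    if rng.random() <= q0:                       # exploitation branch
        return max(weights, key=weights.get)
    total = sum(weights.values())                # exploration: roulette wheel
    threshold = rng.random() * total
    acc = 0.0
    for j in candidates:
        acc += weights[j]
        if threshold <= acc:
            return j
    return candidates[-1]

# Server 0 has far more pheromone and heuristic appeal than server 1:
phi = {0: 0.9, 1: 0.1}
psi = {0: 0.8, 1: 0.2}
random.seed(0)
picks = [choose_server([0, 1], phi, psi, eps1=1.0, eps2=1.0, q0=0.5) for _ in range(1000)]
print(picks.count(0) > picks.count(1))  # True: server 0 dominates
```

The threshold q_0 tunes the exploitation/exploration balance: q_0 close to 1 makes the search greedy, while smaller values keep the roulette wheel active and help escape local optima.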

C. The SA-PS Approximate Algorithm
In this section, the proposed Simulated Annealing based heuristic solution with Path Selection (SA-PS) is described. This heuristic optimization technique is characterized by its simplicity and general applicability. In terms of speed, it is considered among the most efficient heuristics compared to other techniques.

Algorithm 3: ACS-PS (recovered lines)
1: Generate an initial solution (α, β)
2: Calculate F = Cost(α, β) according to (19)
3: (α*, β*, F*) ← (α, β, F)
4: for n = 1 to n_max do
5:   for a = 1 to σ_a do
7:     for each container i in C do
8:       for each node j in S do
9:         calculate χ_{i,j} using Eq. (33)
10:       end for
11:     end for
12:     for each container i in C do
13:       choose pair <s_{j0}; p_{k0}> from ω_a(i) using
14:       Eq. (32) and strategy s
15:       for each node j in S do
16:         if j = j_0 then
18:           for each path k in P_{s^c_i,j0} do
19:             if k = k_0 then
44: end for

Probabilistically, this algorithm accepts not only cost improvements but also cost degradations, in order to escape local minima. Inspired by the Very Fast Simulated Annealing variant [19], this algorithm uses the cost function Cost as the thermodynamic system's energy. During the probabilistic iteration over the solution space, a new state with lower energy than the previous one is always accepted; otherwise, the new state is accepted when the probability exp(−|F − F_new| / temp) is greater than a random float generated from the uniform distribution U[0, 1]. Moreover, as the temperature decreases, the chance for the system to accept such penalizing transitions decreases. The temperature schedule of this algorithm is given by:

where k is the current iteration number and temp_0 is the initial temperature parameter. The details of the solution are presented in Algorithm 4. As input, Algorithm 4 requires the sets C, S and P; the parameters' vector Ω; the maximum iteration count k_max; and the initial temperature value temp_0.
In lines 1 to 3, the initial solution's vectors are built and the optimal cost F* associated with the optimal solution (α*, β*) is initialized. A for loop (line 4) then repeats the annealing process over k_max iterations. At each step, the temperature value temp is updated (line 5); a neighboring state α_new of the current state α is then generated (line 6) and its corresponding path-selection vector is built (line 7). The new cost F_new is evaluated in line 8. The new state is accepted if it yields more profit; otherwise, it is accepted through a probabilistic test (lines 10 to 15). Here, random(0, 1) is a function call that uniformly generates a random number in [0, 1].
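The acceptance rule and cooling behavior described above can be sketched as follows. This is a minimal, generic illustration of the SA skeleton, not the paper's Algorithm 4: the function names `sa_accept` and `anneal`, the abstract `cost`/`neighbor` callables, and the simple `temp0 / k` cooling schedule are illustrative assumptions (the paper's VFSA-inspired schedule may differ).

```python
import math
import random

def sa_accept(F, F_new, temp):
    """Metropolis-style test: always accept an improving state;
    otherwise accept a worse state with probability
    exp(-|F - F_new| / temp), compared against U[0, 1]."""
    if F_new < F:
        return True
    return math.exp(-abs(F - F_new) / temp) > random.uniform(0.0, 1.0)

def anneal(cost, neighbor, init_state, temp0=100.0, k_max=1000):
    """Generic annealing loop: cool down over k_max iterations while
    tracking the best state encountered so far."""
    state = init_state
    F = cost(state)
    best_state, best_F = state, F
    for k in range(1, k_max + 1):
        temp = temp0 / k                # illustrative decreasing schedule
        cand = neighbor(state)          # draw a neighboring state
        F_new = cost(cand)
        if sa_accept(F, F_new, temp):   # accept gain, or loss with prob.
            state, F = cand, F_new
            if F < best_F:
                best_state, best_F = state, F
    return best_state, best_F
```

As the temperature shrinks, exp(−|F − F_new|/temp) tends to 0 for any strictly worse candidate, so penalizing transitions become rarer, which matches the behavior described above.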

VI. EVALUATION AND RESULTS
In this section, the experiments carried out to compare the proposed solutions are presented, based on the execution time and cost function metrics.

A. Simulation Setup
All developed simulation programs were run on a PC with a 2.4 GHz Intel Core i5 processor and a maximum of 8 GB of RAM. The basic parameters of the simulation experiments are listed in Table III. To investigate the feasibility and limitations of Algorithm 1, a first experiment is carried out in which the achieved costs are measured and the execution times of all five solutions are recorded. In fact, the performance of the optimal BFS-based solution is studied against the proposed heuristic solutions, where the ACS-PS algorithm is evaluated with both proposed strategies, denoted ACS-PS-1 and ACS-PS-2. Accordingly, the containers' count (σ_c) is varied between 2 and a maximum feasible experimentation value σ_c = 9, while the nodes' count is σ_s = 5 and |P_{i,j}| ∈ [3; 5]. The obtained results are depicted in Fig. 3.

C. Heuristic Solutions Comparison
The second experiment studies the performance of the heuristic solutions only. In this experiment, the containers' number (σ_c) is taken such that σ_c ∈ {10, 20, 30, 40, 50, 60, 70, 80, 90, 100}. With regard to the total number of containers, Fig. 4 shows the achieved normalized cost, obtained as the value of the objective function defined in Eq. (19). The results demonstrate the superior performance of the ACS-PS solution for both strategies. In particular, the ACS-PS-2 solution gives the best results compared to all other solutions for all values of σ_c. The costs of the GA-PS and SA-PS based solutions are slightly higher, in that order, than those of ACS-PS.

D. The Total Backhaul Cost
Next, the performance of all heuristic solutions in terms of the total backhaul cost is studied. The reported values are obtained using Eq. (10). First, the experiment is performed with σ_c ∈ {10, 20, 30, 40, 50, 60, 70, 80, 90, 100} and σ_s = 5, and the overall backhaul costs achieved by the ACS-PS, SA-PS and GA-PS heuristic methods are recorded. For each value of σ_c, the overall achieved backhaul cost is shown without normalization under different settings; thus, the variation shape of a single curve does not by itself provide information. Hence, Fig. 5 reports the total backhaul cost for different values of ∆_b: when the value of ∆_b increases, the total backhaul cost generally decreases, except for a few values where small, tolerable increases are observed. These increases can be explained by the probabilistic nature of the heuristic solutions and remain acceptable. The same figure further demonstrates the superior performance of the ACS-PS-2 solution: for this solution only, the total backhaul cost remains decreasing over the whole range of the experiment. Consequently, this experiment shows that the factor ∆_b, used as a regulating coefficient to balance the importance of the backhaul bandwidth cost among the other metrics, truly fulfills its role.
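The regulating role of ∆_b can be illustrated with a minimal sketch. The function name `total_cost` and the simple additive form below are illustrative assumptions, not the paper's actual objective in Eq. (19); the sketch only shows how a larger weight on the backhaul term steers the search toward low-backhaul placements.

```python
def total_cost(backhaul_cost, load_balance_cost, delta_b):
    """Illustrative weighted objective: delta_b scales the relative
    importance of the backhaul bandwidth term against the
    load-balancing term."""
    return delta_b * backhaul_cost + load_balance_cost

# Two candidate placements with equal load balance but different
# backhaul usage: as delta_b grows, the gap between them widens,
# so the low-backhaul placement is increasingly preferred.
low_bh = total_cost(backhaul_cost=2.0, load_balance_cost=5.0, delta_b=3.0)
high_bh = total_cost(backhaul_cost=6.0, load_balance_cost=5.0, delta_b=3.0)
```

With ∆_b = 3, the low-backhaul placement costs 11.0 versus 23.0 for the high-backhaul one; at ∆_b = 0 both would tie at 5.0, which is consistent with the observed trend that larger ∆_b drives the total backhaul cost down.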

VII. CONCLUSIONS AND PERSPECTIVES
In this paper, a containers' deployment and migration problem with resource considerations within a multi-server mobile edge-cloud system is studied. The model considers a set of containers to deploy and migrate to a set of edge-cloud nodes, where the transfer is subject to the users' backhaul bandwidth constraints. The formulated optimization problem minimizes a derived multi-objective function that jointly accounts for the end-users' perceived backhaul costs and the system's load-balancing degree. Accordingly, the optimal transfer decisions are established by solving the obtained optimization problem. To handle its high complexity, two moderate-complexity algorithms, based respectively on Ant Colony System and Simulated Annealing, are proposed. A set of simulation experiments is then performed to study their performance. The results reveal that the proposed BFS-based exact method is inefficient in large settings and is highly time consuming. Furthermore, ACS-PS is considerably efficient and gives good results with a more acceptable execution time, whereas the SA-based solution is very efficient in terms of execution time. Moreover, the balancing effect of the ∆_b factor, which serves to weight the importance degree of the backhaul cost, is well established. Finally, as future work, we plan to incorporate the transfer delays associated with the migration types in the studied edge-cloud system.