A Comparative Study of Eight Crossover Operators for the Maximum Scatter Travelling Salesman Problem

The maximum scatter travelling salesman problem (MSTSP), a variation of the famous travelling salesman problem (TSP), is considered in this study. The aim of the problem is to maximize the minimum edge cost in a salesman's tour that visits each city in a network exactly once. The problem is proved to be NP-hard and is considered very difficult to solve. To solve such problems efficiently, one must use heuristic/metaheuristic algorithms, and the genetic algorithm (GA) is one of them. Of the three operators in GAs, crossover is the most important. So, we consider eight crossover operators in GAs for solving the MSTSP. These operators were originally designed for the TSP but can also be applied to the MSTSP after some modifications. The crossover operators are first illustrated manually through an example and then executed on some well-known TSPLIB instances of different types and sizes. The resulting comparative study clearly demonstrates the usefulness of the sequential constructive crossover operator for the MSTSP. Finally, a relative ranking of the crossover operators is reported.

Keywords—Traveling salesman problem; maximum scatter; genetic algorithms; crossover operators; sequential constructive crossover


I. INTRODUCTION
The travelling salesman problem (TSP) is a famous problem that aims to find the shortest tour of a salesman who starts his journey from a depot node and visits all the remaining nodes (cities) such that each node is visited exactly once before he returns to the depot. It is an NP-hard problem [1] that is very easy to define but difficult to solve. Much research has been done on the problem, and consequently numerous good algorithms have been reported in the literature. However, some circumstances impose different restrictions on the acceptability of a tour as a solution. One such restriction is to maximize the minimum cost edge in the salesman's tour, which gives the maximum scatter TSP (MSTSP). In the MSTSP, given a weighted graph, the aim is to find a Hamiltonian circuit in which the minimum cost edge is maximized. That is, the aim is to keep each point far away from (scattered from) its previous and next points in the circuit. It is also called the max-min 1-neighbour TSP. In the max-min m-neighbour TSP, the aim is to maximize the minimum cost between any city and all of its m neighbours in the Hamiltonian circuit. These problems are close to the bottleneck TSP (BTSP) [2].
The MSTSP, defined first in [3], has applications in operations involving the heating of workpieces, where it is equally important to keep each point away from its immediate predecessor and successor, along with its m neighbours, so as to allow a cooling period between operations. It also has applications in manufacturing processes that attach metal sheets together. After the required alignment, the topmost sheet has some prespecified points where riveting operations are applied to attach the sheets together. To avoid nonuniform deformation of the sheets, the riveting process must be arranged such that the distance between any rivet and the next is very large; that is, the riveting operations must be scattered. The problem also arises in some kinds of medical imaging. When imaging physical functions by the Dynamic Spatial Reconstructor, radiation sources are positioned on the upper half of a circular ring and sensors are positioned directly opposite on the lower half. The 'firing sequence' decides the sequence of radiation sources along with their associated sensors, generally periodically. The sensors gather the energy intensity that passes through the patient positioned in the middle of the ring. It is required that if the i-th source is activated, then its neighbouring sources (for example, the (i-1)-th, (i+1)-th, (i+2)-th, etc.) must not be activated, and hence some amount of scattering occurs [1]. The problem can even be applied to the case of someone falsely accused of a crime and sentenced to death, who tries to escape from the police by visiting different safe places across the country to avoid capture. Throughout his journey, he looks for a tour in which the smallest distance between consecutive places is as large as possible [4].
The MSTSP can be formally defined as follows: a network with a set of n nodes, with node 1 as the depot node, and a travel cost (time or distance, etc.) matrix C = [c_ij] of order n associated with the ordered pairs (i, j) of nodes is given. Let (1 = α_0, α_1, α_2, ..., α_(n-1), α_n = 1) ≡ {1 → α_1 → α_2 → ... → α_(n-1) → 1} be a tour. The tour cost is defined as min{ c(α_i, α_(i+1)) : i = 0, 1, 2, ..., n-1 }. The aim is to find a tour that has maximum tour cost. The problem can be transformed into a BTSP by the transformation d_ij = L - c_ij, where D = [d_ij] of order n×n is the equivalent BTSP's cost (or distance) matrix and L is a very large number [5].
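The tour-cost definition and the BTSP transformation d_ij = L - c_ij can be illustrated with a short sketch; the 4-node cost matrix and tour below are invented for illustration and are not taken from the paper:

```python
# A minimal sketch of the objective just defined: the cost of a closed tour
# is its smallest edge cost, to be maximized, and d_ij = L - c_ij gives the
# equivalent BTSP matrix. The 4-node matrix and tour are illustrative only.

def mstsp_tour_cost(tour, C):
    """Minimum edge cost of the closed tour (0-indexed nodes into C)."""
    n = len(tour)
    return min(C[tour[i]][tour[(i + 1) % n]] for i in range(n))

def to_btsp_matrix(C, L):
    """Equivalent BTSP cost matrix via d_ij = L - c_ij."""
    return [[L - c for c in row] for row in C]

C = [[0, 5, 9, 4],
     [5, 0, 3, 8],
     [9, 3, 0, 6],
     [4, 8, 6, 0]]
tour = [0, 2, 1, 3]              # i.e. 1 -> 3 -> 2 -> 4 -> 1 in 1-based labels
print(mstsp_tour_cost(tour, C))  # 3: the edge 3 -> 2 has the smallest cost
```

Maximizing this minimum edge cost over all tours is exactly the MSTSP; minimizing the maximum d_ij edge in the transformed matrix solves the same instance as a BTSP.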
Since the problem is NP-hard, obtaining an optimal solution using an exact method is very hard, if not impossible. Moderate-sized TSP instances have been effectively solved by operations research methods, such as branch-and-bound [6], lexisearch [7], branch-and-cut [8] and local search [9]. As the problem size increases, obtaining an exact solution becomes very hard. To solve large instances, one must turn to heuristic algorithms, which, of course, do not promise an optimal solution of a problem instance; however, they give near-optimal solutions very quickly. Hence heuristic algorithms are used to solve such difficult problems. The most recent algorithms used to solve various difficult optimization problems are termed metaheuristics. There are metaheuristic algorithms based on simulated annealing [10], tabu search [11], insertion heuristics [12], ant colony optimization [13], genetic algorithms [14], variable neighbourhood search [15], etc. However, genetic algorithms (GAs) are among the most extensively applied modern metaheuristics, and hence we apply GAs to solve the MSTSP.
Genetic algorithms (GAs), first developed by John Holland in 1975, are based on imitating the Darwinian survival-of-the-fittest principle among species created by arbitrary changes in the structure of chromosomes in natural biology [14]. They are powerful and robust metaheuristic algorithms for solving large problem instances, and they have been fruitfully applied to numerous combinatorial optimization problems. Each feasible solution of a problem may be regarded as a chromosome whose fitness is measured by its objective function value [16].
In general, a simple GA begins with a randomly created set of chromosomes called the initial population, also termed the gene pool, and then applies, mainly, three genetic operators to produce new, and possibly better, populations in subsequent generations. The first operator is selection, which probabilistically copies some chromosomes of the present generation to the next generation and discards others. Crossover is the second operator; it randomly selects pairs of chromosomes and mates them to produce new chromosomes. The third operator is mutation, which randomly alters some position values (genes) of a chromosome. Crossover is a very powerful operator in the GA search, while mutation diversifies the GA search. Generally, the probability of applying the mutation operator is fixed very low compared to the crossover probability [14].
The crossover operators that have been developed for the usual TSP can also be applied to its variants after some modification. Since the MSTSP is a TSP variant, we consider eight crossover operators in simple GAs for solving the MSTSP. The crossover operators are first illustrated manually through an example and then executed on some well-known TSPLIB instances of different types and sizes. The resulting comparative study clearly demonstrates the usefulness of the sequential constructive crossover operator [16] for the MSTSP. Finally, a relative ranking of the crossover operators is reported.
This paper is organized as follows: a survey of the literature on the MSTSP is reported in Section II. Section III develops simple genetic algorithms using eight crossover operators for the problem, whereas Section IV reports computational experiments for the eight crossover operators. Finally, Section V presents conclusions and future work.

II. RELATED WORK
There is little literature on the MSTSP; the relevant papers are as follows. Arkin et al. [1] developed the first method for solving the problem. The problem was shown to be NP-hard and, unless P = NP, no constant-factor approximation algorithm can be designed in general. They developed a factor-2 approximation method (which is the best possible factor) under the triangle inequality for the max-min 1-neighbour TSP, for both the cycle and path versions. Further, the method was extended to obtain a factor-2 approximation solution for the max-min 2-neighbour TSP, for the cycle version as well as some cases of the path version. They also developed methods for the max-min 2-neighbour TSP with the triangle inequality, for both the path and cycle versions. The methods were also extended to obtain approximation solutions for the path version of the max-min m-neighbour TSP.
Chiang [17] developed approximation methods for the max-min 2-neighbour TSP under the triangle inequality. He developed approximation methods for the path and cycle versions by improving the methods in [1]; as mentioned there, both algorithms are much simpler. John [4] also studied many works on the MSTSP and its related models. Kabadi and Punnen [18] obtained an approximation method for the MSTSP under the triangle inequality, which is claimed to achieve the best bound for the case. Hoffmann et al. [19] extended the algorithm in [1], which produces optimal solutions for nodes on a line, to a regular m×n grid. As reported, in some particular cases, the algorithm takes linear computational time to find an optimal tour.
The MSTSP is close to the BTSP, where the aim is to minimize the maximum cost edge in a Hamiltonian circuit [20]. Exact algorithms based on the lexisearch approach have been developed ([21], [22]), and hybrid algorithms have also been proposed for solving the problem ([23], [24]). Another problem closely related to the MSTSP is the maximum TSP (MaxTSP), in which the aim is to maximize the total length of a Hamiltonian circuit [25]. A hybrid GA has been proposed for solving that problem [26].
Dong et al. [27] proposed the multiple-salesmen version of the MSTSP, the multiple MSTSP (MMSTSP). They developed three improved GAs, using greedy initialization, hill climbing and simulated annealing, for solving the MMSTSP. As claimed, the improved algorithms are efficient and can reveal several characteristics of the process of solving the problem.
A multi-start iterated local search approach is proposed in [28] for the MSTSP. Two local search algorithms based on insertion and modified 2-opt moves were developed as part of that approach. To investigate the effectiveness of the method, it was tested on TSPLIB instances and found to produce very good results.

III. SIMPLE GENETIC ALGORITHMS FOR THE MSTSP
Beginning with an initial population, a simple GA repeatedly applies three genetic operators, selection, crossover and mutation, until the stopping criterion is satisfied. Though the GA is among the best metaheuristic algorithms, its performance strongly depends on the initial chromosome population, the three operators and some parameters [14], which are discussed in this section.

A. Chromosome Representation and Initial Population
There are numerous ways to represent solutions as chromosomes for the TSP and its variants. Path representation is considered for the MSTSP; it lists the labels of the nodes such that no node is repeated in a chromosome. Suppose {1, 2, 3, 4, 5, 6, 7, 8} represents the node labels of an 8-node instance; then the tour {1→7→2→3→8→4→6→5→1} can be denoted by (1, 7, 2, 3, 8, 4, 6, 5). The objective function is defined as the minimum of the costs of the edges in the tour. Since the problem is a maximization problem, the fitness and objective functions are the same. Usually a simple GA begins with a pool of chromosomes called the initial population. Here a randomly created initial population is considered.
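The representation and random initialization described above can be sketched as follows; this is an illustration, not the authors' implementation, and it assumes the depot (node 1, stored as index 0) is fixed at the first gene:

```python
import random

# Sketch of path representation with a randomly created initial population.
# The depot (node 1, index 0) is assumed fixed at the first position.

def random_population(n_nodes, pop_size, seed=None):
    rng = random.Random(seed)
    population = []
    for _ in range(pop_size):
        rest = list(range(1, n_nodes))   # the remaining nodes 2..n
        rng.shuffle(rest)
        population.append([0] + rest)    # depot first; no node repeated
    return population

def fitness(tour, C):
    """For the MSTSP, fitness equals the objective: the minimum edge cost."""
    n = len(tour)
    return min(C[tour[i]][tour[(i + 1) % n]] for i in range(n))

pop = random_population(n_nodes=8, pop_size=5, seed=42)
assert all(chrom[0] == 0 and sorted(chrom) == list(range(8)) for chrom in pop)
```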

B. Selection Operator
In the selection process, strings/chromosomes are replicated to the mating pool of the next generation based on probabilities associated with their fitness values. By transferring a higher proportion of fitter chromosomes to the next generation, selection imitates the Darwinian survival of the fittest in natural biology. No new chromosome is formed here. Generally, proportionate selection is applied, in which a chromosome is chosen with a probability proportional to its fitness value; roulette wheel selection, tournament selection and stochastic remainder selection are some examples. We consider the stochastic remainder selection method [29] for our GAs.
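The stochastic remainder scheme can be sketched roughly as follows; this is an assumed textbook variant, and the exact procedure of [29] may differ in detail:

```python
import random

# A hedged sketch of stochastic remainder selection: each chromosome first
# receives floor(f_i / f_avg) copies deterministically, and the remaining
# mating-pool slots are filled by roulette over the fractional remainders.

def stochastic_remainder_selection(population, fitnesses, rng):
    avg = sum(fitnesses) / len(fitnesses)
    expected = [f / avg for f in fitnesses]
    pool, fractions = [], []
    for chrom, e in zip(population, expected):
        pool.extend([chrom] * int(e))      # deterministic integer part
        fractions.append(e - int(e))       # fractional remainder
    total = sum(fractions)
    while len(pool) < len(population):     # fill leftovers by roulette
        r = rng.random() * total
        acc = 0.0
        for chrom, frac in zip(population, fractions):
            acc += frac
            if acc >= r:
                pool.append(chrom)
                break
    return pool

rng = random.Random(1)
pop = [[0, 1, 2], [0, 2, 1], [1, 0, 2], [1, 2, 0]]
fits = [4.0, 3.0, 2.0, 1.0]
pool = stochastic_remainder_selection(pop, fits, rng)
assert len(pool) == 4
```

Fitter chromosomes are guaranteed their integer share of copies, while the fractional parts keep the expected number of copies proportional to fitness.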

C. Crossover Operators
A crossover operator selects two parent chromosomes and a crossover point along the length of the chromosomes, and exchanges their information after that point. It plays a very significant role in GAs. Several good crossover methods have been suggested for the TSP in the literature, which are expected to be good for the MSTSP as well. For example, the partially mapped crossover [30], ordered crossover [31], alternating edges crossover [32], cycle crossover [33], edge recombination crossover [34], generalized N crossover [35], greedy crossover [32] and sequential constructive crossover [16] are some of them. We investigate these eight crossover methods.
1) Partially mapped crossover operator. The partially mapped crossover (PMX) uses two crossover points and produces two offspring chromosomes [30]. It defines exchange mappings in the segment between the crossover points. It is the first crossover operator designed for the TSP in GAs. We illustrate the PMX through the 8-node example instance, whose cost matrix is given in Table I, and the parent chromosome pair P1: (1, 5, 4, 7, 8, 2, 3, 6) and P2: (1, 8, 3, 4, 5, 6, 2, 7) with costs 3 and 1 respectively. We start the journey (computation) from the first gene (the headquarters), node 1.

Let the arbitrarily assumed cut points be after the 3rd and 6th genes, marked with "|", as follows: P1: (1, 5, 4 | 7, 8, 2 | 3, 6) and P2: (1, 8, 3 | 4, 5, 6 | 2, 7). The mapping segments lie between these cut points, so the exchange mappings are 7↔4, 8↔5 and 2↔6. The mapping segment is copied to the first offspring: O1: (*, *, * | 7, 8, 2 | *, *). We now add the remaining genes from the alternative parent chromosome, P2, wherever they do not make the chromosome invalid; nodes 1 and 3 are copied directly. The node 8 should be in the place of the first remaining *, but, since it is already present in O1, after checking the mapping 8↔5, node 5 is placed there. The next * should be 2, but, since it is already present in O1, after checking the mapping 2↔6, node 6 is placed there. Finally, after checking the mapping 7↔4, node 4 is placed at the last *. So, the first complete offspring becomes O1: (1, 5, 3, 7, 8, 2, 6, 4).

2) Ordered crossover operator. To create offspring chromosomes, the ordered crossover (OX) copies a subsegment of a route from one parent chromosome and then preserves the relative order of the genes of the other [31]. We choose the same parent chromosomes and cut points, marked with "|": P1: (1, 5, 4 | 7, 8, 2 | 3, 6) and P2: (1, 8, 3 | 4, 5, 6 | 2, 7). We always fix the first gene as node 1. First, the offspring are created by simply copying the segments between the cuts into them. Then, starting from the 2nd cut of one parent chromosome, the genes not yet available in the offspring are copied from the other chromosome in the same sequence. The order of the genes in P2 from the 2nd cut is 2, 7, 1, 8, 3, 4, 5, 6, which fills the blanks of the first offspring; the second offspring is created similarly by exchanging the roles of the parents.

3) Alternating edges crossover operator. The alternating edges crossover (AEX) operator considers a chromosome as a collection of arcs [32] and creates only one offspring by choosing alternate arcs from the parents. If an inherited arc would make the offspring invalid, a random arc is chosen instead. We choose the same example chromosomes P1: (1, 5, 4, 7, 8, 2, 3, 6) and P2: (1, 8, 3, 4, 5, 6, 2, 7).
At the beginning, the arc (1, 5) from P1 and then the arc (5, 6) from P2 are added to the offspring. Next, the arc (6, 1) would be chosen from P1, as 6 is the last node there, but this arc closes a cycle prematurely. So, an arc leaving node 6 to an unvisited node is chosen randomly; suppose the arc (6, 2) is chosen. Next, the arcs (2, 7) from P2, (7, 8) from P1 and then (8, 3) from P2 are added to the current offspring. Finally, the following offspring may be created: O: (1, 5, 6, 2, 7, 8, 3, 4) with cost 1.
Almost all arcs present in the offspring (O) are inherited from one of the parents.
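The AEX procedure can be sketched as follows; the repair rule for arcs that would close a premature cycle is the random choice described above, and the code is an illustration rather than the authors' implementation:

```python
import random

# A sketch of AEX: successor arcs are taken alternately from the two parents;
# when an inherited arc would revisit a node, a random unvisited node is used
# instead (the repair rule assumed from the description).

def successor_map(parent):
    # arc list of a cyclic tour: node -> its successor (last wraps to first)
    return {parent[i]: parent[(i + 1) % len(parent)] for i in range(len(parent))}

def aex(p1, p2, rng):
    succ = [successor_map(p1), successor_map(p2)]
    offspring, visited, turn = [p1[0]], {p1[0]}, 0
    while len(offspring) < len(p1):
        nxt = succ[turn][offspring[-1]]
        if nxt in visited:                   # arc would close a cycle early
            nxt = rng.choice([v for v in p1 if v not in visited])
        offspring.append(nxt)
        visited.add(nxt)
        turn = 1 - turn                      # alternate between parents
    return offspring

rng = random.Random(0)
child = aex([1, 5, 4, 7, 8, 2, 3, 6], [1, 8, 3, 4, 5, 6, 2, 7], rng)
assert child[:2] == [1, 5] and sorted(child) == [1, 2, 3, 4, 5, 6, 7, 8]
```

The first two arcs, (1, 5) from P1 and (5, 6) from P2, are deterministic; only the repair arcs depend on the random choices.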
4) Cycle crossover operator. The cycle crossover (CX) creates offspring in which every gene, together with its position, comes from one of the parents [33]. The first gene is 1, and for the 2nd position we choose randomly either 5 (from P1) or 8 (from P2). Suppose we choose node 5; then the offspring chromosome becomes O1: (1, 5, *, *, *, *, *, *). Every gene in the offspring must come from one of the parents at the same location, so the next gene must be 8, which is located in P2 just below the present node 5. In P1, this node 8 is located at the 5th position; so the offspring chromosome becomes O1: (1, 5, *, *, 8, *, *, *). The next node to be selected would be 5, which is already available in O1; thus a cycle is completed, and the remaining blank locations are filled with the genes at those locations in P2. This way the offspring is built as O1: (1, 5, 3, 4, 8, 6, 2, 7).
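The cycle-following construction can be sketched in Python, assuming the first gene is fixed and the cycle starting at the second position is taken from P1, with the remaining positions copied from P2 (a sketch, not the authors' code):

```python
def cx(p1, p2):
    """One cycle-crossover offspring: node 1 stays first, the cycle through
    position 1 is copied from p1, remaining positions come from p2."""
    n = len(p1)
    pos_in_p1 = {v: i for i, v in enumerate(p1)}
    child = [None] * n
    child[0] = p1[0]                 # depot fixed, as in the paper's example
    i = 1
    while child[i] is None:          # follow the cycle of positions
        child[i] = p1[i]
        i = pos_in_p1[p2[i]]         # the gene below must keep its position
    for j in range(n):               # remaining positions come from p2
        if child[j] is None:
            child[j] = p2[j]
    return child

p1 = [1, 5, 4, 7, 8, 2, 3, 6]
p2 = [1, 8, 3, 4, 5, 6, 2, 7]
print(cx(p1, p2))   # [1, 5, 3, 4, 8, 6, 2, 7]
```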

5) Edge recombination crossover operator. The edge recombination crossover (ERX) is proposed in [34]. Most operators consider the position and the order of the nodes; this operator instead considers the links between the nodes. To apply this operator, we first construct the edge lists of the parents P1: (1, 5, 4, 7, 8, 2, 3, 6) and P2: (1, 8, 3, 4, 5, 6, 2, 7). Table II shows the edge list of every node for the given example. Since the 1st gene of the offspring is 1, the nodes 6, 5, 7 and 8 are the candidates for the next gene. The nodes 6, 7 and 8 each have three edges left: the initial four minus the present node 1. Similarly, the node 5 has only two edges. Having the fewest edges, node 5 is chosen, and the offspring becomes (1, 5).
Since this offspring is still incomplete, we fill the remaining positions randomly. So, the final offspring may be (1, 5, 3, 4, 8, 6, 2, 7) with cost 1; only four of its edges are inherited from the parents.

7) Greedy crossover operator. The greedy crossover (GX) selects the first node randomly [32]. Since the MSTSP is a maximization problem, some steps of the GX must be modified, so our modified GX for the problem is as follows. In each step, the four neighbour nodes of the present node in the two parents are considered, and the (unvisited) node having the largest cost is selected, because it is the best at present. If this best node, or every neighbour node, is already in the offspring, then another unvisited node is chosen randomly. GX produces only one offspring from the parents. We consider the same chromosomes P1: (1, 5, 4, 7, 8, 2, 3, 6) and P2: (1, 8, 3, 4, 5, 6, 2, 7).
We have the initial offspring (1). The nodes 5 and 8 are the neighbour nodes of node 1, with costs 66 and 2 respectively. Having the higher cost, node 5 is better, so it is added to the offspring: (1, 5).
The nodes 4, 1, 6 and 4 are the neighbour nodes of node 5, with costs 31, 60, 50 and 31 respectively. Though node 1 is the best, it is already in the offspring, so node 2 is chosen randomly and added to the offspring: (1, 5, 2).
Proceeding in the same way, node 6 is added next. The nodes 3, 2 and 5 are the neighbour nodes of node 6, with costs 20, 82 and 39 respectively. Though node 2 is the best, it is already in the offspring, so node 3 is chosen randomly and added: (1, 5, 2, 6, 3). Finally, the complete offspring may be (1, 5, 2, 6, 3, 4, 7, 8) with cost 13.

8) Sequential constructive crossover operator. The sequential constructive crossover (SCX) operator creates only one offspring by using better arcs available in the parents' structures ([16], [36]). Additionally, it sometimes uses better arcs that are not available in either parent's structure. It sequentially searches both parent chromosomes and selects the first legitimate (unvisited) node that appears after the present node. If no legitimate node is available in a parent, it searches sequentially from the beginning of that chromosome. It then compares the costs associated with the two candidate nodes to decide the next node of the offspring chromosome. This operator is found to be very effective for the TSP and some other problems ([37]-[40]). The SCX is slightly modified for the MSTSP as below:

Step 1: Start from node 1 (i.e., current node p = 1).
Step 2: Sequentially search both parent chromosomes and consider the first 'legitimate node' (a node that is not yet visited) appearing after node p in each parent. If no legitimate node after node p is present in a parent, search sequentially from the start of that parent and consider the first legitimate node. Then go to Step 3.
Step 3: Suppose node α and node β are the legitimate nodes found in the 1st and 2nd parent respectively; then go to Step 4 to select the next node.

Step 4: Since the MSTSP is a maximization problem, select node α as the next node of the offspring if c_pα ≥ c_pβ; otherwise select node β. Rename the selected node as node p and, if the offspring is not yet complete, go back to Step 2; otherwise stop.
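A sketch of the modified SCX in Python, assuming the cost comparison selects the candidate with the larger arc cost (the MSTSP being a maximization problem); the 5-node matrix and parents are illustrative, not from the paper:

```python
# A sketch of the modified SCX: from the current node, each parent proposes
# its first legitimate (unvisited) successor, and the candidate whose arc
# has the LARGER cost is chosen, since the MSTSP maximizes the minimum edge.

def first_legitimate(parent, p, visited):
    """First unvisited node after p in the parent, wrapping to a sequential
    search from the chromosome's start when nothing legitimate follows p."""
    i = parent.index(p)
    for node in parent[i + 1:] + parent[:i + 1]:
        if node not in visited:
            return node
    return None

def scx(p1, p2, C):
    """Build one offspring; nodes are labelled 1..n, C is 0-indexed."""
    child, visited = [p1[0]], {p1[0]}
    while len(child) < len(p1):
        p = child[-1]
        a = first_legitimate(p1, p, visited)
        b = first_legitimate(p2, p, visited)
        nxt = a if C[p - 1][a - 1] >= C[p - 1][b - 1] else b
        child.append(nxt)
        visited.add(nxt)
    return child

C = [[0, 4, 8, 9, 12],
     [4, 0, 6, 7, 10],
     [8, 6, 0, 5, 3],
     [9, 7, 5, 0, 11],
     [12, 10, 3, 11, 0]]
child = scx([1, 3, 5, 2, 4], [1, 4, 2, 5, 3], C)
assert child[0] == 1 and sorted(child) == [1, 2, 3, 4, 5]
```

Because the chosen arc may connect nodes that are not adjacent in either parent, SCX can introduce arcs absent from both parents, as noted above.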

D. Mutation Operator
The mutation operator increases variety in the population by applying random changes to chromosomes. Swap mutation, inversion mutation, insertion mutation and adaptive mutation [14] are some examples. We have implemented swap mutation for our simple GAs.
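Swap mutation can be sketched as follows, assuming gene-wise application with probability Pm and a depot fixed at the first position (an illustrative sketch, not the authors' code):

```python
import random

# Sketch of gene-wise swap mutation: each position may, with probability pm,
# be swapped with another randomly chosen position; index 0 (the depot) is
# assumed fixed, so the result stays a valid tour starting at node 1.

def swap_mutation(chrom, pm, rng):
    child = chrom[:]
    for i in range(1, len(child)):           # skip index 0: the depot
        if rng.random() < pm:
            j = rng.randrange(1, len(child))
            child[i], child[j] = child[j], child[i]
    return child

rng = random.Random(7)
tour = [1, 5, 4, 7, 8, 2, 3, 6]
mutated = swap_mutation(tour, pm=0.1, rng=rng)
assert sorted(mutated) == sorted(tour) and mutated[0] == 1
```

Since a swap only exchanges two genes, every mutated chromosome remains a permutation, so no repair step is needed.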

E. Control Parameters
Control parameters rule the genetic process to some extent. They are: the population size, which decides the number of chromosomes available during the process; the crossover probability, which fixes the probability of performing crossover between parents; the mutation probability, which fixes the probability of performing gene-wise mutation; and the stopping criterion, which fixes when to stop the genetic process [16]. A simple GA may be summarized as follows:

SimpleGA( )
{
    Initialize population randomly;
    Evaluate the population;
    Generation = 0;
    While stopping criterion is not satisfied
    {
        Generation = Generation + 1;
        Select better chromosomes by selection operator;
        Perform crossover using crossover probability (Pc);
        Perform mutation using mutation probability (Pm);
        Evaluate the population;
    }
}

IV. COMPUTATIONAL EXPERIENCES AND DISCUSSIONS
To perform a comparative study of the eight crossover operators, simple GAs using these operators have been encoded in Visual C++ on a laptop with an i7-1065G7 CPU @ 1.30 GHz and 8 GB RAM under MS Windows 10, and then run on twenty TSPLIB instances [41]. Of the twenty, nine instances, ftv33, ftv38, ftv44, ft53, ftv64, ft70, ftv70, kro124p and ftv170, are asymmetric, and the remaining eleven, dantzig42, eil51, st70, lin105, ch130, kroA150, si175, d198, pr226, a280 and lin318, are symmetric. We ran the GAs with different parameter settings; the selected parameters are listed in Table III. Fig. 1 presents the results for ftv170 (considering only 100 generations) for all GAs. Each curve is for one crossover and shows the improvement of the current solution over successive generations. The figure shows some variation for SCX and shows that SCX is the best. ERX also shows some variation and is placed in the second position. But GX and AEX show no variation, get trapped in a local maximum very quickly, and are shown to be the worst.
The comparative study among the eight simple GAs is summarized in two tables: Tables IV and VIII. These tables are prepared similarly: each row is for an instance, and each column is for one GA using a particular crossover operator. The reported results are the best solution cost, the average solution cost, the standard deviation (S.D.) of the solution costs, and the average convergence time (in seconds). The best result for an instance among all GAs is marked in bold face. From Table IV, it is seen that the crossovers OX, AEX, CX, ERX and GX could not obtain either the best solution or the best average cost for any asymmetric instance. The crossover PMX obtains the best average costs with the lowest S.D. for the instances ftv33, ftv38 and ftv44, whereas SCX obtains the best average costs with the lowest S.D. for the remaining six instances. So, SCX is shown to be the best. These results are also shown in Fig. 2, which again shows the usefulness of the crossover SCX. The crossovers ERX and GNX are competing, and GX is the worst. To confirm whether the SCX-based GA average is statistically significantly different from the averages found by the other crossover-based GAs, Student's t-test is performed; 50 runs were performed for each instance. The following t-test formula is used here [42]:

t = (x̄1 − x̄2) / sqrt(s1²/n1 + s2²/n2),

where x̄2 and s2 are found by the SCX-based GA, x̄1 and s1 are found by the other GAs, and n1 = n2 = 50 are the numbers of runs. The t-statistics are reported in Table V. The t-values may be positive or negative. Since the problem is a maximization problem, a negative value shows that SCX found better solutions than the competing crossover; in the positive case, the competing crossover found better solutions. The confidence interval at the 95% confidence level (t0.05 = 1.96) is used. When the absolute t-value is greater than 1.96, the difference between them is significant.
In that case, if the t-value is negative, then the SCX-based GA solution is better; otherwise the competing crossover-based GA solution is better. If the absolute t-value is less than 1.96, then there is no significant difference between the obtained values. The table also indicates which crossovers found significantly better solutions.
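The test just described can be sketched numerically; the sample means and standard deviations below are invented for illustration, not taken from the paper's tables:

```python
import math

# The t-statistic assumed from the description: with n1 = n2 = 50 runs,
# t = (mean1 - mean2) / sqrt(s1^2/n1 + s2^2/n2), where mean2 and s2 come
# from the SCX-based GA. On a maximization problem a negative t favours SCX,
# and |t| > 1.96 marks significance at the 95% confidence level.

def t_statistic(mean1, s1, n1, mean2, s2, n2):
    return (mean1 - mean2) / math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

# Illustrative numbers only:
t = t_statistic(mean1=95.0, s1=4.0, n1=50, mean2=100.0, s2=3.0, n2=50)
significant = abs(t) > 1.96
print(round(t, 2), significant)   # -7.07 True
```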
On four instances there is no statistically significant difference between SCX and PMX. On three instances SCX is found better than PMX, whereas PMX is found better than SCX on two instances. There is no significant difference between SCX and CX on two instances. On five instances SCX performed better than CX, whereas on two instances CX is better than SCX. Next, there is no significant difference between SCX and GNX on three instances. SCX is better than GNX on five instances, whereas GNX is better than SCX on only one instance. On all nine instances, SCX is found better than OX, AEX and ERX. From this study we can say that SCX is the best for asymmetric instances. To rank the other crossover operators, the t-values against PMX are calculated and reported in Table VI. There is no significant difference found between PMX and GNX on five instances; each of them performed better than the other on two instances. There is no significant difference found between PMX and CX on five instances. On three instances PMX is found better than CX, whereas CX is found better than PMX on one instance. This shows that PMX and GNX share the 2nd rank. We further carried out an adequate statistical analysis; the results of our hypothesis testing are summarized in Table VII. In the table, each row contains two columns: the first lists a crossover operator and the second lists the crossover operators inferior to it. Each crossover is ranked according to its number of inferior crossover operators. No significant difference is found between AEX and GX, and hence they share the worst rank.
From Table VIII, it is seen that the crossovers OX, AEX, CX and GX could not obtain either the best solution or the best average cost for any symmetric instance. The crossovers PMX and ERX obtain the best average costs with the lowest S.D. for the instances eil51 and dantzig42 respectively, whereas SCX obtains the best average costs with the lowest S.D. for the remaining nine instances. So, the crossover SCX is found to be the best.
The results are also shown in Fig. 3, which again shows the usefulness of SCX. The crossovers ERX, OX, CX and GNX are competing, and GX is the worst. Based on this study too, one can say that SCX is the best, GX is the worst and the others are competing. For these symmetric instances also, to confirm whether the SCX-based GA average solution is significantly different from the average solutions found by the other GAs, Student's t-test is performed, and the calculated t-values are reported in Table IX. On two instances there is no statistically significant difference between SCX and ERX. On eight instances SCX is better than ERX, whereas ERX is better than SCX on one instance only. On one instance, there is no significant difference between SCX and each of PMX, OX, CX and GNX. SCX performed better than PMX, CX and GNX on nine instances, whereas PMX, CX and GNX are better than SCX on only one instance. Next, on one instance there is no statistically significant difference between SCX and OX; on the remaining ten instances SCX is better than OX. From this study we can conclude that SCX is the best. However, to rank the other crossover operators, an adequate statistical analysis is carried out, and the results are summarized in Table X. The crossovers PMX, ERX, GNX, CX, OX, AEX and GX are placed in the 2nd, 3rd, 4th, 5th, 6th, 7th and worst ranks, respectively. On both kinds of problem instances, SCX takes the 1st rank, PMX the 2nd rank and GX the worst rank.

V. CONCLUSION AND FUTURE WORKS
Numerous crossover operators have been proposed for the TSP using GAs, and they can also be used for its variants. In this paper, eight simple GAs using eight different crossover operators, namely PMX, OX, AEX, CX, ERX, GNX, GX and SCX, have been developed for solving the MSTSP. We first applied each operator manually to two parent chromosomes to produce an offspring. We then ran the algorithms on TSPLIB instances of different types and sizes. We set the highest crossover probability to expose the exact nature of the crossover operators. We carried out a comparative study of the GAs on nine asymmetric and eleven symmetric TSPLIB instances. In terms of solution quality, our comparative study showed that the crossover operator SCX is the best, PMX is the second best and GX is the worst. Our observation is confirmed using Student's t-test at the 95% confidence level. Thus, SCX may be a good crossover operator for obtaining more accurate results, and researchers may apply it to other related combinatorial optimization problems. However, it is seen that PMX is better than SCX for small-sized instances.
In this study, our aim was to compare the solution quality obtained using different crossover operators, not to improve the solution quality or to develop the most competitive algorithm for the MSTSP. So, no local search technique is used to improve the solution quality, and no parallel version of the algorithms is developed; we have developed simple and pure GAs. Modified SCX operators ([43]-[45]) can be used instead of SCX, and good local search and immigration procedures [46] can then be incorporated to hybridize the algorithm so as to solve the instances more accurately, which is under our investigation.