Three on Three Optimizer: A New Metaheuristic with Three Guided Searches and Three Random Searches

Abstract—This paper presents a new swarm intelligence-based metaheuristic called the three on three optimizer (TOTO). The name reflects its novel mechanism of adopting multiple searches in a single metaheuristic: three guided searches and three random searches. The three guided searches are a search toward the global best solution, a search in which the global best solution moves away from the corresponding agent, and a search based on the interaction between the corresponding agent and a randomly selected agent. The three random searches are a local search around the corresponding agent, a local search around the global best solution, and a global search within the entire search space. TOTO is challenged to solve the 23 classic functions as a theoretical optimization problem and the portfolio optimization problem as a real-world optimization problem, in which a portfolio of 13 bank stocks from the Kompas 100 index is optimized. The results indicate that TOTO performs well in solving the 23 classic functions. TOTO finds the global optimal solution of eleven functions and is superior to five recent metaheuristics in solving 17 functions: the grey wolf optimizer (GWO), marine predator algorithm (MPA), mixed leader-based optimizer (MLBO), golden search optimizer (GSO), and guided pelican algorithm (GPA). TOTO is better than GWO, MPA, MLBO, GSO, and GPA in solving 22, 21, 21, 19, and 15 functions, respectively. This means TOTO is powerful in solving high-dimension unimodal, high-dimension multimodal, and fixed-dimension multimodal problems. TOTO performs as the second-best metaheuristic in solving the portfolio optimization problem.


I. INTRODUCTION
Many real-world problems can be seen as optimization problems. This arises from the nature of human behavior: people try to achieve their objectives as efficiently as possible while facing certain limitations or constraints. This situation mirrors the structure of an optimization problem, which in general is constructed from two elements: an objective and constraints. In an optimization problem, many solutions can be chosen within the solution space, but some solutions are better than others. The best among them is called the global optimal solution.
The objective of optimization can be minimization or maximization. In minimization, the global optimal solution is the solution with the lowest value. Parameters commonly minimized include delay [1], total order completion time [2], idle time [2], tardiness and maintenance cost [3], project duration [4], energy consumption [5], transmission losses [6], and so on. In maximization, the global optimal solution is the solution with the highest value. Parameters commonly maximized include profit [7], voltage stability [6], revenue [8], service level [9], and so on.
Two approaches can be chosen to solve an optimization problem: mathematical methods and metaheuristics [10]. A mathematical method is robust in solving a simple optimization problem and guarantees finding the global optimal solution. However, mathematical or deterministic methods often fail to solve complex optimization problems, such as non-convex or multimodal problems. Moreover, mathematical methods are less flexible in facing various real-world optimization problems [11].
On the other hand, metaheuristics are widely used in many optimization problems because of several advantages. First, a metaheuristic is flexible enough to be implemented in various problems because it focuses only on the objectives and constraints [10]. Second, it can be implemented in an environment with limited computational resources because of its approximate approach, in which not all possible solutions are traced [12]. This approximate approach comes with the consequence that a metaheuristic does not guarantee finding the global optimal solution.
Many early metaheuristics depend on a single strategy to find the optimal solution. Unfortunately, as stated by the no-free-lunch theorem, no metaheuristic can solve all optimization problems [34]. A search may be excellent for some optimization problems [34], while its performance may degrade on others. This circumstance gives strong motivation to develop multi-search-based metaheuristics, and many recent metaheuristics were built by accommodating multiple searches in every iteration. Moreover, this multiple-search strategy has proven better in solving optimization problems. However, adding more searches to a single metaheuristic increases the algorithm complexity and, in the end, the computational cost.
Many existing swarm-based metaheuristics depend more on guided search than on random search. Some metaheuristics, like KMA [16] or COA [15], implement segregation of roles in the population, meaning that different agents perform different searches. Many swarm-based metaheuristics with multiple searches deploy mostly guided searches; some deploy only a neighborhood search or do not implement any random search at all. Meanwhile, random search is essential in tackling the local optimum entrapment often found in multimodal problems.
Based on this problem and motivated by the no-free-lunch theorem, this paper proposes a new swarm-based metaheuristic with multiple searches in which there is a balance between guided search and random search. This proposed metaheuristic is named the three on three optimizer (TOTO). The name represents the six searches adopted in this metaheuristic: three guided searches and three random searches.
The scientific contributions of this paper are listed as follows.
1) This paper proposes a new swarm-based metaheuristic with multiple searches within this metaheuristic.
2) This paper proposes a balanced proportion between the guided search and random search in every iteration.
3) The evaluation is performed by benchmarking the performance of the proposed TOTO against five recent metaheuristics: GWO, MPA, MLBO, GSO, and GPA.
4) The benchmark test evaluating the proposed metaheuristic on the 23 classic functions, together with the comparison against these recent metaheuristics, is carried out as a proof of concept regarding the performance of TOTO.
5) The stock portfolio optimization problem is chosen to evaluate the proposed TOTO in solving a real-world optimization problem in the financial sector and to compare it with the other metaheuristics.
The remainder of this paper is arranged as follows. Several recent metaheuristics are investigated in Section II; this investigation covers the mechanics adopted in these metaheuristics and the position of this work, to make its novelty and contribution clear. A detailed presentation of the proposed metaheuristic, including its concept, algorithm, mathematical model, and algorithm complexity, is given in Section III. The tests carried out to evaluate the proposed metaheuristic are presented in Section IV. The results and findings are analyzed in Section V. Finally, the conclusion and opportunities for future work are summarized in Section VI.

II. RELATED WORKS
Metaheuristics have been evolving for decades. Early metaheuristics are simple, with a single strategy: in general, they deploy a neighborhood search with some strategy to avoid the local optimum. Simulated annealing (SA) and tabu search (TS) are examples of metaheuristics that adopt a simple neighborhood search with distinct approaches to tackling the local optimum problem. In simulated annealing, a worse solution may still be accepted based on a probabilistic calculation, and this acceptance becomes less likely as the iteration progresses [35]. Meanwhile, tabu search uses a tabu list to prevent recently visited solutions from being revisited for a certain period [36].
The evolution continued with the development of population-based metaheuristics. The genetic algorithm (GA) and the invasive weed optimizer (IWO) are clear examples. Population-based metaheuristics give two main advantages. First, they improve the solution faster than single-solution-based metaheuristics. Second, they trace the search space more broadly. GA improves solutions simply by implementing a crossover strategy between two solutions and a mutation for exploration [37]. In IWO, multiple new solutions are generated around every existing solution based on a normal distribution; then all solutions are ranked, and the worse ones are eliminated [38].
The evolution continued further with the introduction of swarm-based metaheuristics. The particle swarm optimizer (PSO) is an example of an early swarm-based metaheuristic. In a swarm-based metaheuristic, each solution can be seen as an autonomous agent without centralized coordination; however, specific interaction and collective intelligence help the improvement. As a member of the swarm, each agent conducts a guided search, meaning that it moves in a specific direction toward some references. As an early swarm-based metaheuristic, PSO deploys a simple strategy: each agent moves toward its references at a certain speed, where the references are a combination of the local best solution and the global best solution [39].
In recent decades, the development of swarm-based metaheuristics has been more extensive. Various aspects can be used as a baseline for developing new swarm-based metaheuristics, including the reference, stochastic movement, acceptance-rejection, segregation of roles, and so on. Moreover, many recent metaheuristics deploy multiple strategies rather than a single strategy to achieve a better final solution or to achieve the objective faster. This multiple-strategy/search approach is carried out in a single phase or in multiple phases. Some metaheuristics are also equipped with a random search to avoid premature convergence and entrapment in the local optimum. A detailed review of recent swarm-based metaheuristics is presented in Table I. The mechanics of the proposed metaheuristic are presented in the last row to make the position and contribution of this work clear.

Table I indicates the limitations of the existing swarm-based metaheuristics. First, many swarm-based metaheuristics prioritize guided search over random search. Second, some swarm-based metaheuristics deploy only one random search, while others do not deploy any random search. Third, the searching methods implemented in each metaheuristic still number fewer than five. Fourth, these multiple searches are performed in a multiple-phase process. Based on this circumstance, there is room to develop a swarm-based metaheuristic that deploys multiple searches and balances guided search and random search, as in this work.

III. PROPOSED MODEL
TOTO is a swarm-based metaheuristic with multiple strategies in every iteration. The multiple-strategy approach aims to cover each strategy's disadvantages, because each strategy has its own strengths and weaknesses. This approach is realized by having each agent conduct six searches in every iteration. These searches are mandatory for every agent; in other words, TOTO does not implement segregation of roles. This differs from metaheuristics such as RDA, KMA, or SKA, where segregation of roles is implemented among the population. The six searches consist of three guided searches and three random searches. Each search generates a candidate, and the best among these six candidates is chosen to be compared with the corresponding agent. If this candidate is better than the agent's current solution, it replaces the current solution. This mechanism also differs from metaheuristics such as DTBO, GPA, and NGO, where the searches are conducted sequentially. The new solution is then compared with the current global best solution, and if it is better, the global best solution is updated.
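The candidate-selection mechanism described above can be sketched as follows. This is a minimal illustration of the tournament and the strict acceptance rule, not the authors' reference implementation; `f` stands for the fitness function, and minimization is assumed:

```python
def update_agent(agent, candidates, f):
    """Pick the best of the six candidates (the selected candidate),
    then accept it only if it strictly improves on the agent's
    current solution."""
    c_sel = min(candidates, key=f)
    return c_sel if f(c_sel) < f(agent) else agent

def update_global_best(a_best, agent, f):
    """The agent replaces the global best only when it is better."""
    return agent if f(agent) < f(a_best) else a_best
```

Because the acceptance is strict, an agent never moves to a worse solution, which matches the acceptance-rejection rationale given later in this section.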
The global best solution and a randomly selected agent become the references in the three guided searches. In the first guided search, a candidate is generated by moving the corresponding agent toward the global best solution. In the second guided search, a candidate is generated by moving the global best solution away from the corresponding agent. In the third guided search, a candidate is generated by moving the corresponding agent relative to a randomly selected agent: if the randomly selected agent is better than the corresponding agent, the candidate is generated by moving the corresponding agent toward the randomly selected agent; otherwise, the candidate is generated by moving the corresponding agent away from the randomly selected agent.
A local search space is used in the first and second random searches, while it is not needed in the third random search. The local search space width declines linearly as the iteration increases. In the first random search, a candidate is generated within the local search space of the corresponding agent. In the second random search, a candidate is generated within the local search space of the global best solution. In the third random search, a candidate is generated within the entire search space. These six searches are illustrated in Fig. 1.
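A sketch of the six searches for one agent is given below. The search directions follow the description above, but the specific random coefficients and the linear width schedule are plausible assumptions for illustration, not the paper's exact equations (3) to (9):

```python
import random

def toto_candidates(a, a_best, population, lo, hi, t, t_max, f):
    """Generate the six TOTO candidates for agent `a` (list of floats).
    `f` is the fitness function (minimization); `lo`/`hi` bound the
    search space. Coefficient choices here are illustrative."""
    d = len(a)
    a_sel = random.choice(population)  # randomly selected agent
    # local search space width declines linearly with the iteration
    width = [(h - l) * (1 - t / t_max) for l, h in zip(lo, hi)]
    # guided search 1: move toward the global best solution
    c1 = [a[i] + random.random() * (a_best[i] - a[i]) for i in range(d)]
    # guided search 2: the global best moves away from the agent
    c2 = [a_best[i] + random.random() * (a_best[i] - a[i]) for i in range(d)]
    # guided search 3: toward a better random agent, away from a worse one
    sign = 1 if f(a_sel) < f(a) else -1
    c3 = [a[i] + sign * random.random() * (a_sel[i] - a[i]) for i in range(d)]
    # random search 1: neighborhood of the agent
    c4 = [a[i] + random.uniform(-1, 1) * width[i] for i in range(d)]
    # random search 2: neighborhood of the global best
    c5 = [a_best[i] + random.uniform(-1, 1) * width[i] for i in range(d)]
    # random search 3: anywhere in the search space
    c6 = [random.uniform(l, h) for l, h in zip(lo, hi)]
    return [c1, c2, c3, c4, c5, c6]
```

Note that at t = t_max the local width shrinks to zero, so the two local random searches collapse onto the agent and the global best, respectively.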
Based on the previous explanation, the rationale of the proposed strategy is highlighted and summarized as follows.
• Multiple searches have proven better than a single search, as the multiple-search approach is adopted in various recent metaheuristics.
• The balance between the guided searches and the random searches is designed to balance the exploration capability and the exploitation capability.
• Multiple references are adopted to expand the searching capability, because the searching process cannot depend on only a single reference.
• The strict acceptance-rejection approach is adopted to prevent the searching process from moving to a worse solution or area.

The algorithm of TOTO is then constructed based on its main concept as the formalization of this metaheuristic. This algorithm is presented in Algorithm 1. Equations (1) to (11) formalize the related processes. The annotations used in this paper are as follows.
a — agent
A — set of agents
a_best — global best agent
a_sel — randomly selected agent
a_l — lower boundary
a_u — upper boundary
c_1 — first candidate
c_2 — second candidate
c_3 — third candidate
c_4 — fourth candidate
c_5 — fifth candidate
c_6 — sixth candidate
c_sel — selected candidate
f — fitness function
r_1 — real uniform random number between 0 and 1
r_2 — integer uniform random number between 1 and 2
r_3 — real uniform random number between -1 and 1
t — iteration
t_max — maximum iteration
u — uniform random number

Algorithm 1: Three on Three Optimizer (TOTO)
1  begin
2  for all a in A do
3      generate initial a using (1)
4      update a_best using (2)
5  end for
6  for t = 1 to t_max do
7      for all a in A do
8          first guided search using (3)
9          second guided search using (4)
10         third guided search using (5) and (6)
11         first random search using (7)
12         second random search using (8)
13         third random search using (9)
14         choose c_sel using (10)
15         update a using (11)
16         update a_best using (2)
17     end for
18 end for

The explanation of (1) to (11) is as follows. Equation (1) states that the initial solution is randomized within the search space. Equation (2) states that the global best solution is updated by comparing its current value with the new value of the corresponding agent: if the corresponding agent is better, it replaces the current global best solution. Equation (3) states that the candidate of the first guided search is generated based on the movement of the corresponding agent toward the global best solution. Equation (4) states that the candidate of the second guided search is generated based on the movement of the global best solution away from the corresponding agent. Equation (5) states that an agent is randomly selected from the population. Equation (6) states that the candidate of the third guided search is generated based on the movement of the corresponding agent relative to the randomly selected agent. Equation (7) states that the candidate of the first random search is generated based on a neighborhood search around the corresponding agent.
Equation (8) states that the candidate of the second random search is generated based on a neighborhood search around the global best solution. Equation (9) states that the candidate of the third random search is generated based on a random search within the entire search space. Equation (10) states that the best candidate among the six candidates is selected to be compared with the corresponding solution. Equation (11) states that this candidate replaces the corresponding solution if it is better.
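Equations (1) to (11) are referenced above but do not survive in this text. For the reader's convenience, one plausible formalization consistent with the prose and the notation list is sketched below; the exact placement of the random coefficients r_1, r_2, r_3, and u is an assumption, not the paper's original formulation (minimization is assumed):

```latex
\begin{aligned}
a &= a_l + u\,(a_u - a_l) && (1)\\
a_{best} &\leftarrow \begin{cases} a, & f(a) < f(a_{best})\\ a_{best}, & \text{otherwise}\end{cases} && (2)\\
c_1 &= a + r_1\,(a_{best} - r_2\,a) && (3)\\
c_2 &= a_{best} + r_1\,(a_{best} - a) && (4)\\
a_{sel} &\sim U(A) && (5)\\
c_3 &= \begin{cases} a + r_1\,(a_{sel} - a), & f(a_{sel}) < f(a)\\ a + r_1\,(a - a_{sel}), & \text{otherwise}\end{cases} && (6)\\
c_4 &= a + r_3\Bigl(1 - \tfrac{t}{t_{max}}\Bigr)(a_u - a_l) && (7)\\
c_5 &= a_{best} + r_3\Bigl(1 - \tfrac{t}{t_{max}}\Bigr)(a_u - a_l) && (8)\\
c_6 &= a_l + u\,(a_u - a_l) && (9)\\
c_{sel} &= \operatorname*{arg\,min}_{c \in \{c_1, \ldots, c_6\}} f(c) && (10)\\
a &\leftarrow \begin{cases} c_{sel}, & f(c_{sel}) < f(a)\\ a, & \text{otherwise}\end{cases} && (11)
\end{aligned}
```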
The algorithm complexity of TOTO is O(6 · t_max · n(A)), where t_max is the maximum iteration and n(A) is the population size. The complexity is thus affected by two parameters, the maximum iteration and the population size, each of which contributes linearly.

IV. SIMULATION AND RESULT
This section discusses the performance analysis of the proposed metaheuristic. Several optimization tests are carried out to provide the performance data. The first test evaluates the performance of TOTO in solving the 23 classic functions, which are standard benchmark tests in many works proposing new metaheuristics. These functions consist of seven high-dimension unimodal functions (Sphere, Schwefel 2.22, Schwefel 1.2, Schwefel 2.21, Rosenbrock, Step, and Quartic), six high-dimension multimodal functions (Schwefel, Rastrigin, Ackley, Griewank, Penalized, and Penalized 2), and ten fixed-dimension multimodal functions (Shekel Foxholes, Kowalik, Six Hump Camel, Branin, Goldstein-Price, Hartman 3, Hartman 6, Shekel 5, Shekel 7, and Shekel 10). The second and third tests analyze the hyperparameters of TOTO. The fourth test evaluates the performance of TOTO in solving a real-world optimization problem, for which the portfolio optimization problem is chosen as the use case.
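For reference, two of these benchmarks in their standard forms — Sphere (high-dimension unimodal) and Rastrigin (high-dimension multimodal) — both have a global minimum of 0 at the origin:

```python
import math

def sphere(x):
    """Sphere function: smooth and unimodal; tests exploitation."""
    return sum(v * v for v in x)

def rastrigin(x):
    """Rastrigin function: highly multimodal; tests exploration."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)
```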
The first test compares TOTO with five recent swarm-based metaheuristics: GWO, MPA, MLBO, GSO, and GPA. GWO and MPA are older than the others but are widely used in various optimization works. On the other hand, MLBO, GSO, and GPA are newer, and their use is still rare.
Several parameters of the first test are set as follows. The population size is 10, the maximum iteration is 50, and the problem dimension is 50. The fishing aggregate devices (FADs) parameter for MPA is 0.5, and the number of candidates for GPA is 5. The result is presented in Table II; average fitness scores below 10^-4 are rounded to the nearest 10^-4, and the best score for every function is presented in bold font. The cluster-based comparison is presented in Table III.

In the second test, the performance of TOTO is evaluated as the maximum iteration increases, with three values of maximum iteration: 100, 150, and 200. The result is presented in Table IV, which indicates that the increase in maximum iteration does not improve the performance of TOTO in most functions. There are two possible reasons for this circumstance: either the global optimal solution has already been found while the maximum iteration is still low, or TOTO fails to improve although the global optimal solution has yet to be found. In some functions, the increase in maximum iteration improves the performance of TOTO, but less significantly. Fortunately, when the maximum iteration increases, TOTO can find the global optimal solution of Schwefel 1.2, Griewank, and Shekel Foxholes.
In the third test, the performance of TOTO is evaluated as the population size increases, with three values of population size: 20, 30, and 40. The result is presented in Table V, which indicates two circumstances regarding the increase in population size. First, in many functions the performance of TOTO does not improve with a larger population size; as in the second test, this happens because the global optimal solution has already been found or because TOTO fails to improve although the global optimal solution has yet to be found. Second, as in the second test, the population size increase also makes TOTO find the global optimal solution of three more functions: Schwefel 1.2, Griewank, and Shekel Foxholes. Meanwhile, in some other functions, TOTO can improve its performance.
In the fourth test, TOTO is challenged to solve the portfolio optimization problem as a real-world problem. In this work, the optimization determines the stocks the investor should buy. The selected stocks come from the banking sector in Indonesia and are listed in the Kompas 100 index, a list of preferred stocks in Indonesia; 13 banking-sector stocks are listed in Kompas 100. Detailed information regarding these stocks is presented in Table VI, where the second column gives the stock index, the third column the stock price taken on 11 November 2022, and the fourth column the six-month capital gain of the stock. The price and capital gain are presented in rupiah per share. The objective of this portfolio optimization is to maximize the total capital gain, calculated by accumulating the capital gain of the purchased stocks. Two constraints limit this optimization. First, the investor can purchase from 100 to 1,000 lots of each stock, where one lot is 100 shares. Second, the maximum investment is 4,000,000,000 rupiah. TOTO is benchmarked against GPA, GSO, MLBO, MPA, and GWO, with a population size of ten and a maximum iteration of 50. The result is presented in Table VII.
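The portfolio model above can be encoded as a constrained integer optimization. The sketch below uses the constraints stated in the text (100 to 1,000 lots per stock, one lot = 100 shares, a 4,000,000,000-rupiah budget); the penalty-based constraint handling and the prices and gains used for illustration are assumptions, since the paper does not state its constraint-handling method and Table VI is not reproduced here:

```python
def portfolio_fitness(lots, price, gain, budget=4_000_000_000):
    """Negative total capital gain (minimization form) of a lot
    allocation; infeasible allocations receive an infinite penalty.
    `price` and `gain` are per-share values in rupiah."""
    if any(not (100 <= x <= 1000) for x in lots):
        return float("inf")           # lot-count constraint violated
    shares = [100 * x for x in lots]  # one lot is 100 shares
    cost = sum(s * p for s, p in zip(shares, price))
    if cost > budget:
        return float("inf")           # budget constraint violated
    return -sum(s * g for s, g in zip(shares, gain))
```

Expressed this way, maximizing the total capital gain becomes a minimization problem, so the same acceptance rules used on the benchmark functions apply unchanged.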

V. DISCUSSION
This section provides a deeper analysis of the results and findings. The discussion consists of four parts: the performance of TOTO in solving the 23 classic functions, the hyperparameters, the performance of TOTO in solving the portfolio optimization problem, and the limitations of this work, especially of the proposed metaheuristic.
The first discussion concerns the performance of TOTO. TOTO performs well in solving the 23 classic functions and in the benchmark against five other metaheuristics. As mentioned previously, TOTO performs as the best metaheuristic in solving 17 functions under the low population size and low maximum iteration circumstances, and it finds the global optimal solution of eight functions under these circumstances. Moreover, TOTO finds the global optimal solution of three additional functions under the high maximum iteration or high population size circumstances. Compared with the other metaheuristics, TOTO is better than GWO, MPA, MLBO, and GSO in almost all functions, and better than GPA in 15 functions.
In general, the superiority of TOTO occurs in all three groups of the 23 functions, which means that TOTO can tackle the various problems represented by these functions. Considering that multimodal functions are used to test the exploration capability while unimodal functions are used to test the exploitation capability [40], TOTO is shown to have superior exploration and exploitation capabilities. Meanwhile, the inferiority of TOTO to GPA on some fixed-dimension multimodal functions reflects that GPA performs very well in solving fixed-dimension multimodal problems. However, the superiority of GPA in these functions comes at the cost of a higher complexity than TOTO, because GPA generates several candidates in each of its searches, whether the guided search toward the global best solution or the local search [21]. On the other hand, the tournament of six searches adopted in TOTO proves better overall than the multiple-candidate strategy adopted in GPA [21]. The result also suggests that the sorting mechanism at the beginning of the iteration, as adopted in GWO [41], is not essential.
The second discussion is related to the hyperparameter analysis. Theoretically, the population size and maximum iteration positively affect the performance of metaheuristics. Meanwhile, the results in Table IV and Table V indicate that, beyond some level, increasing the maximum iteration or the population size no longer improves the result. With a low maximum iteration or population size, increasing one of these two parameters may improve the result; with a high maximum iteration and population size, it does not. In almost all functions, an acceptable solution, whether globally optimal or sub-optimal, has already been found with a low maximum iteration and population size.
The third discussion is related to the performance of TOTO in solving the portfolio optimization problem. The result indicates that TOTO performs well by producing the second-best total capital gain, with GPA the best. Fortunately, the performance gap between GPA and TOTO is very narrow. This test also indicates that real-world optimization problems should be used to test any metaheuristic: the performance gap in the portfolio optimization problem is narrow compared with that in the 23 classic functions, which indicates that gaining a significant improvement in a real-world optimization problem is much more complicated than in a theoretical optimization problem, primarily when the solution is based on integer numbers.

The fourth discussion concerns the limitations of this work and of the proposed metaheuristic. This paper has shown that metaheuristics with multiple strategies generally perform better than metaheuristics with one or two strategies. TOTO adopts only six strategies (three guided searches and three random searches), while many other guided and random searches could be chosen, which means TOTO could be improved by embedding more searches. However, there are also limitations in embedding new searches. First, only a limited number of searches can be accommodated in a single metaheuristic. Second, accommodating too many searches in a single metaheuristic is also unwise, because some may be less effective than others. The open questions are which searches are better than others, and under which conditions.
The limitation discussed above means that developing new metaheuristics is still possible. Although the references used in guided searches generally converge to the best solution or a randomly selected solution, the way these references are selected still varies. For example, many metaheuristics choose the best solution so far, like GPA [21]; the best solution in the current iteration, like COA [15]; or several best solutions as the leader. In many metaheuristics, for example RLSBO [27], one or several randomly selected solutions are chosen uniformly from the population. On the other hand, like SO [10], some metaheuristics randomly choose solutions from fixed-size groups to reduce the dependency on the best solution, which may cause premature convergence.
The second opportunity comes from the motive to minimize the maximum iteration or population size. This work has demonstrated that TOTO performs well with a low population size and a low maximum iteration. Theoretically, the performance of any metaheuristic can be improved by increasing the population size or maximum iteration to a very high number. Meanwhile, this work has demonstrated that choosing an appropriate strategy can be a better option than just scaling up the maximum iteration or population size, which is closer to a greedy approach.
The third opportunity comes from the scalability aspect. Large-scale problems with very high dimensions can easily be found in many real-world settings, such as optimizing the purchase orders of a supermarket with hundreds of stock-keeping units or optimizing an investor's portfolio with hundreds of stocks. Large-scale optimization problems consume more computational resources, whether through a larger maximum iteration or a larger population size, and often only a sub-optimal solution can be found. It is still challenging to develop a new metaheuristic that handles large-scale problems without excessive computational resources.

VI. CONCLUSION
A new swarm-based metaheuristic, namely TOTO, has been presented in this paper, together with its novel mechanism of adopting three guided searches and three random searches in a competition to find a better solution. The test results indicate that TOTO, and precisely the strategy adopted in it, performs better not just in beating some previous metaheuristics but also in finding acceptable solutions with a low maximum iteration and a low population size. TOTO finds the global optimal solution of eleven functions and is better than GWO, MPA, MLBO, GSO, and GPA in solving 22, 21, 21, 19, and 15 functions, respectively, which means TOTO can tackle the problems in all three groups of functions. In the portfolio optimization test, TOTO is better than GWO, MPA, MLBO, and GSO by producing a higher total capital gain.
Given the limitations of this work, this work, and especially the proposed metaheuristic, can become a baseline for future studies, whether by improving TOTO or by implementing TOTO to solve various real-world optimization problems.