Adaptive Gray Wolf Optimization Algorithm based on Gompertz Inertia Weight Strategy

—To address the insufficient convergence speed and solution accuracy of the Gray Wolf Optimizer (GWO), this paper proposes an adaptive Gray Wolf Optimizer based on a Gompertz inertia weight strategy (GGWO). GGWO exploits the shape of the Gompertz function to adjust the inertia weight nonlinearly, better balancing the speed of GWO's global search against the accuracy of its local search. The Gompertz function is also used to adapt each gray wolf's position update to its fitness value, so that wolves with different fitness values update their positions differently. Six classic test functions are used to compare the optimization performance of GGWO against 10 other classic or improved swarm intelligence algorithms. The results show that GGWO achieves better solution accuracy, better stability, and faster convergence than all 10 competing algorithms.


I. INTRODUCTION
The world is full of unknowns, and much uncertain information must be represented and processed; many optimization problems arise from this need, and many optimization algorithms have emerged to meet it. Applied mathematics and engineering contain a large number of optimization problems whose solutions lie in complex solution spaces, so finding new optimization methods with fast computation and strong convergence ability is of great practical significance. With the development of digitization and informatization, more and more meta-heuristic algorithms are being applied in science and engineering. Meta-heuristic algorithms are characterized by self-organization, compatibility, parallelism, holism, and coordination. Their working principle is to initialize a set of random solutions and then iterate with repeated feedback to approach the expected goal. Under this search mechanism, an algorithm only needs the objective function and the search range, and can obtain the target solution regardless of whether the search space is continuous or differentiable. Meta-heuristic algorithms are mainly divided into three types, inspired by biological evolution, natural phenomena, and species' living habits. Swarm intelligence algorithms are optimization models that generally imitate the living patterns and behavioral habits of natural organisms. Their iterative process is typically driven by feedback among individuals within the population: in particle swarm optimization, all individuals move with reference to the position of the globally optimal individual; the cuckoo search algorithm simulates the behavioral habits of cuckoo chicks and updates positions based on Levy flights; and the grey wolf optimization algorithm is based on the hunting behavior of the grey wolf pack, where every individual's position update follows the three globally best wolves. In exploring performance improvements for swarm intelligence algorithms, researchers have proposed many solutions and drawn many important conclusions and achievements.
In the expansive domain of optimization algorithms, methodologies that diligently seek optimal or near-optimal solutions have demonstrated their indispensable value across a spectrum of disciplines and fields. These algorithms are not merely instrumental in enhancing the efficiency and performance of systems, designs, or models but also play a pivotal role in decision-making, resource allocation, and quality improvement. In domains such as machine learning, engineering, economics, logistics, biology, and computer science, optimization algorithms facilitate optimal parameter adjustment, design, resource distribution, and experimental design, thereby crafting a robust platform for interdisciplinary communication and collaboration. While these instances merely skim the surface of the application of optimization algorithms across disciplines, their profound impact and extensive connections undeniably propel the continuous progression of scientific technology. Transitioning from the general landscape of optimization algorithms, the Gray Wolf Optimizer (GWO) warrants specific attention, presenting its unique methodology in the rich field of optimization.
Researchers have been inspired by long-term observations of the social interactions, lifestyles, and biological behaviors of organisms such as fish, ants, elephants, wolves, and bees in nature, and have developed a series of related optimization algorithms to solve practical problems such as engineering optimization and power dispatch. The GWO algorithm exploits the intelligence of gray wolves and their collective hunting characteristics. It simulates a pack of wolves following a specific hierarchical pattern, with different categories of wolves (named alpha, beta, delta, and omega) playing different roles in the hunting mechanism to achieve search and hunting purposes.

II. LITERATURE REVIEW
GWO is a swarm intelligence optimization algorithm proposed by Mirjalili. The algorithm is derived from the hunting mechanics and leadership hierarchy of gray wolves in the natural world. Mirjalili [1] used GWO to train multi-layer perceptrons (MLP) and found that GWO achieves higher accuracy and is competitive in avoiding local optima. Mohanty [2] completed the maximum power point tracking design of photovoltaic (PV) systems with GWO. Mirjalili [3] proposed the multi-objective gray wolf optimizer (MOGWO) in 2016, which is very effective in solving multi-objective optimization problems. Emary [4] proposed a binary version called bGWO for better feature selection. Heidari [5] integrated Levy flight and greedy algorithms to obtain a new LGWO algorithm with improved optimization performance. Faris [6] discussed the different versions of GWO in detail, divided them into modified, hybrid, and parallel versions, and analyzed their role in the main application fields. Kohli [7] introduced chaos theory into GWO, which greatly improved the global convergence speed. Gupta [8] proposed an improved algorithm called RWGWO, rooted in random walks, to solve real-life optimization problems. Nadimi [9] proposed an improved gray wolf optimizer (I-GWO) based on the dimension learning-based hunting (DLH) search strategy. Al [10] combined GWO with PSO in a new binary BPSOGWO algorithm to find the best feature subset.
GWO transcends its foundational application in mathematical function optimization, demonstrating utility across diverse domains. Altan [11] used GWO to optimize intrinsic mode function outputs and efficiently utilize wind energy. Zhao [12] used GWO to search feature sets to improve diagnostic accuracy for patients with paraquat poisoning. Jayabarathi [13] used GWO to solve economic dispatch problems. Sulaiman [14] applied GWO to the optimal reactive power dispatch (ORPD) problem in power systems. Shariati [15] created a model combining a hybrid extreme learning machine with GWO to forecast the strength of concrete in which cement is partially replaced. Jino [16] applied optimization algorithms like GWO in advanced image processing. Ramakrishnan [17] used MRG-GWO for segmentation in the estimation of CT brain tumor images. Because the GWO algorithm can help find optimal or suboptimal scheduling solutions, Jiang [18] used it to solve Job Shop and Flexible Job Shop scheduling cases. Wei [19] improved SVM using GWO and applied it to predicting students' second majors. Yang [20] used grouped GWO to optimize the parameters of wind turbines, improving the maximum power and obstacle-breaking capability.
GWO has further application scenarios, including machine learning, image processing, and engineering design.
Within the machine learning sphere, the potential of SVMs (Support Vector Machines) is often bound by the intricacies of parameter optimization. Zhou's research [21] in 2021 elucidated this challenge by introducing two models. While both models aimed at earthquake forecasting, the latter, harnessing the capabilities of GWO, showcased commendable performance. This reinforces GWO's capability for effective exploration and exploitation in parameter optimization.
In the domain of image processing, particularly image segmentation, achieving a synergy of quality and efficiency remains pivotal. Khairuzzaman's work [22] in 2017 offers a paradigm in this regard. By leveraging GWO for multilevel thresholding, the research underscored GWO's adaptability in delivering quality segmentation with computational expediency.
Transitioning to engineering design, particularly wind energy optimization, the nuances of turbine efficiency stand paramount. Yang's 2017 study [20] on optimizing the maximum power points of wind turbines, specifically those operating on doubly-fed induction generators, provides a testament to GWO's efficacy. The introduction of grouped GWO in this context exemplifies the optimizer's finesse in adaptive parameter adjustment for enhanced energy outputs.
Beyond its established domains, GWO has shown remarkable adaptability in addressing various complex optimization problems, encompassing both classical combinatorial issues and cutting-edge applications.
A quintessential example is the Traveling Salesman Problem (TSP). Given the complexity of determining the shortest possible route that visits each city exactly once and returns to the origin, Panwar's contribution is notable. In 2021, Panwar [23] employed a discrete GWO approach, paving the way for efficient solutions to symmetric TSP instances. This application not only accentuates GWO's versatility but also underscores its potential in combinatorial optimization.
Furthermore, in the domain of software engineering, predicting software defects from metrics data is a crucial task, and GWO's application in Software Defect Prediction (SDP) focuses on optimizing both feature selection and classifier parameters. In a different direction, an intriguing application by Kermadi [24] highlighted GWO's efficacy in designing an efficient photovoltaic-array hybrid maximum power point tracker, tailored specifically for intricate local shading conditions.
Multi-objective optimization problems (MOP) present another challenging arena, where the goal is to find non-inferior solutions across multiple objectives. Wu's research in 2020 [25] exemplifies this by integrating GWO with other objective optimizations for wind speed forecasting. This innovative approach, leveraging GWO's capabilities, orchestrates harmony among the various objectives, generating superior solutions.
Novel applications of GWO also appear in electrical and control systems. Lakum [26] employed GWO for optimally placing and sizing active power filters in radial systems, especially amidst nonlinear distributed generation. Arora's work [27] ventured into algorithmic hybridization, combining GWO with the Crow Search Algorithm (CSA) for enhanced function optimization and feature selection. Sun's research [28] utilized GWO for the intricate task of feedback control optimization in PM hub motors. Moreover, Eltamaly's study [29] stands out in harnessing GWO-FLC to track dynamic maximum power points (MPP) for PV systems under variable shading conditions. Jamal employed the Improved Grey Wolf Optimization (IGWO) algorithm to optimize overcurrent relay coordination, demonstrating enhanced efficiency and reliability over conventional methods [30]. Telugu utilized the Chaos-enhanced Grey Wolf Optimization Algorithm (CGWO) to design a two-stage CMOS differential amplifier, achieving significant reductions in circuit size and power dissipation compared to traditional optimization methods [31].
The diverse applications of GWO demonstrate its adaptability and robustness across various research arenas, addressing distinct challenges and expanding its applicability in optimization landscapes. Its multifaceted use across sectors showcases its versatility in delivering optimal solutions. However, current improvement ideas often blend multiple algorithms and face issues such as low accuracy, slow convergence, and poor stability. This article proposes a streamlined and efficient enhancement based solely on the GWO algorithm, aiming to tackle these issues.
Section III introduces the background and basic principles of the GWO algorithm; Section IV introduces the inertia weight and adaptive weight strategies and uses them for the GWO position update, thereby obtaining an adaptive algorithm based on the Gompertz inertia weight (GGWO); Section V uses simulation experiments to compare and analyze the convergence performance, convergence speed, stability, and time complexity of 11 algorithms, including GWO and GGWO, across three dimensions and six test functions; finally, Section VI summarizes the full text and looks forward to the diverse future application scenarios of GGWO.

III. GRAY WOLF OPTIMIZATION ALGORITHM
GWO simulates the hunting mechanism and leadership hierarchy of gray wolves by dividing the pack into four layers. The first layer is the α layer: the leaders of the population, responsible for leading the remaining wolves to hunt prey, interpreted as the optimal solution in the algorithm. The second layer is the β layer, which assists the α wolves and represents the sub-optimal solution in GWO. The third layer is the δ layer, which obeys the orders and decisions of the α and β layers and carries out duties such as scouting. The grading mechanism of the pack is not static: α and β wolves with poor fitness degenerate to δ. The fourth layer, the ω layer, updates its position according to the α, β, and δ wolves. All four layers cooperate in the hunt, which has three main stages, simulated as surrounding the prey, chasing the prey, and attacking the prey.
In the first stage, surrounding the prey, a gray wolf updates its position based on the prey's position and gradually encircles it, as shown in Eq. (1):

X(t+1) = X_p(t) − A · D    (1)

where X is the position vector of the gray wolf, t is the iteration number, and X_p is the position vector of the prey. The coefficient A and the distance D are given in Eqs. (2) and (3):

A = 2a · r_1 − a    (2)
D = |C · X_p(t) − X(t)|    (3)

Among them, a decreases linearly from 2 to 0 over the iterations and r_1 is a random vector on [0, 1]. The parameter C is given in Eq. (4):

C = 2 · r_2    (4)

where r_2 is a random vector on [0, 1].
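As a minimal sketch, the encircling equations above can be written in Python with NumPy (vector names follow the equations; the function name is illustrative):

```python
import numpy as np

def encircle(X, X_p, a, rng):
    """One GWO encircling update:
    A = 2*a*r1 - a, C = 2*r2, D = |C*X_p - X|, X(t+1) = X_p - A*D."""
    r1 = rng.random(X.shape)
    r2 = rng.random(X.shape)
    A = 2.0 * a * r1 - a        # shrinks as a decays from 2 to 0
    C = 2.0 * r2                # random weighting of the prey position
    D = np.abs(C * X_p - X)     # distance between wolf and prey
    return X_p - A * D
```

Note that when a has decayed to 0, A vanishes and the wolf lands exactly on the prey position, which is the attacking (exploitation) limit of the update.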
While hunting prey, all the gray wolves are guided by the α, β, and δ wolves, and these three layers may also cooperate with the ω wolves. To better simulate the hunting strategy of gray wolves, we assume that α, β, and δ have better knowledge of the potential position of the prey. By sorting the fitness values of all the wolves, the best three are chosen as α, β, and δ respectively. The position update proceeds as follows. First, the corresponding distances D_α, D_β, and D_δ are computed as in Eq. (5):

D_α = |C_1 · X_α − X|,  D_β = |C_2 · X_β − X|,  D_δ = |C_3 · X_δ − X|    (5)

Then the position vector X(t+1) of the current gray wolf in the next iteration is obtained from Eqs. (6) and (7):

X_1 = X_α − A_1 · D_α,  X_2 = X_β − A_2 · D_β,  X_3 = X_δ − A_3 · D_δ    (6)
X(t+1) = (X_1 + X_2 + X_3) / 3    (7)

In summary, this algorithm outlines a structured process for GWO, iterating through defined steps to ascertain optimal solutions by emulating the hierarchical and behavioral dynamics of gray wolves.
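The leader-guided hunting step of Eqs. (5)–(7) can be sketched in the same style (function and variable names are illustrative):

```python
import numpy as np

def leader_guided_update(X, X_alpha, X_beta, X_delta, a, rng):
    """GWO hunting step: move the wolf at X toward the average of
    three leader-guided candidate positions X1, X2, X3."""
    candidates = []
    for X_l in (X_alpha, X_beta, X_delta):
        r1 = rng.random(X.shape)
        r2 = rng.random(X.shape)
        A = 2.0 * a * r1 - a
        C = 2.0 * r2
        D = np.abs(C * X_l - X)          # Eq. (5): distance to this leader
        candidates.append(X_l - A * D)   # Eq. (6): candidate position
    return sum(candidates) / 3.0         # Eq. (7): average of the three
```

In the a = 0 limit each candidate collapses onto its leader, so the wolf moves to the centroid of the α, β, and δ positions.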

IV. ADAPTIVE GRAY WOLF OPTIMIZER BASED ON GOMPERTZ INERTIA WEIGHT STRATEGY

A. Gompertz Inertia Weight Strategy
The Gompertz function [32] is monotonic; its standard expression is shown in Eq. (8):

f(x) = a · e^(−b · e^(−c·x))    (8)

where a, b, and c are constants. The graph of the Gompertz function is drawn in Fig. 1. As Fig. 1 shows, the curve grows slowly in the initial and final stages and rapidly in the middle section. The weight curve derived from the Gompertz function decreases as the abscissa increases, which matches the iterative process of swarm intelligence algorithms. In the early stages of iteration, the population is prone to falling into local optima, so a larger step size is needed initially: a larger inertia weight increases the step size of individual gray wolf movements, helping the population conduct a global search. As the algorithm iterates, the population's needs shift gradually from global to local optimization; in the later stages, individuals need to explore thoroughly within a small range, so a smaller step size helps local refinement. The Gompertz-shaped weight has exactly this property, its value decreasing as the abscissa increases, which helps the algorithm balance global and local search. This article uses it to improve the inertia weight of GWO. The Gompertz inertia weight ω used in this article is given in Eq. (9), and the portion of the curve to the right of the y-axis, selected as the inertia weight ω of the GWO algorithm, is shown in Fig. 2. As shown in Fig. 2, the Gompertz inertia weight remains large in the early stages of iteration, which maintains a large search range and makes the algorithm less likely to fall into a local optimum; as the iterations proceed, the middle section of the curve decreases rapidly and eventually stabilizes at a smaller value, giving the algorithm a small inertia weight in the later stages and helping it search the local area more thoroughly for the optimal solution.
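A decreasing Gompertz-shaped schedule of this kind can be sketched as follows; the constants b, c, w_max, and w_min are illustrative assumptions, not the exact values of Eq. (9):

```python
import math

def gompertz_weight(t, t_max, w_max=0.9, w_min=0.1, b=5.0, c=8.0):
    """Hypothetical Gompertz-shaped inertia weight: stays near w_max
    early (wide global search), drops quickly through the middle
    iterations, and settles near w_min late (fine local search).
    The constants and the exact form of Eq. (9) are assumptions."""
    x = t / t_max                          # normalized iteration in [0, 1]
    g = math.exp(-b * math.exp(-c * x))    # standard Gompertz curve, 0 -> 1
    return w_max - (w_max - w_min) * g     # flipped so the weight decreases
```

Because the Gompertz curve rises monotonically from 0 toward 1, the flipped weight decreases monotonically from about w_max to about w_min, reproducing the shape described for Fig. 2.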

B. Gompertz Adaptive Position Update Strategy
The Gompertz function is also used to construct the adaptive weight ω_i strategy shown in Eq. (10), for i = 1, 2, 3, where f_1, f_2, and f_3 represent the fitness of the α, β, and δ wolves respectively and the average fitness f_avg is given in Eq. (11). The Gompertz adaptive weight ω_i is plotted in Fig. 3. As shown in Fig. 3, the Gompertz adaptive weight is close to 0 when the corresponding wolf's fitness value is relatively small, indicating that the wolf is close to the prey; the step size is then kept extremely small, which helps find the optimal value more thoroughly in the local area. As the corresponding wolf's fitness ratio increases, indicating that the wolf is far from the prey, the Gompertz adaptive weight increases rapidly and the step size grows, which prevents falling into a local optimum and favors searching for the optimum on a global scale.
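The shape described above, near 0 for small fitness ratios and rising rapidly toward 1 as the ratio grows, can be sketched with an increasing Gompertz curve; the constants and the exact form of Eq. (10) are illustrative assumptions:

```python
import math

def adaptive_weight(f_i, f_avg, b=4.0, c=3.0):
    """Hypothetical Gompertz-shaped adaptive weight omega_i: near 0 when
    a leader wolf's fitness f_i is small relative to the average f_avg
    (wolf close to the prey, tiny step), rising quickly toward 1 as the
    ratio grows (wolf far from the prey, larger step). The constants b, c
    and the exact form of Eq. (10) are assumptions."""
    ratio = f_i / f_avg
    return math.exp(-b * math.exp(-c * ratio))
```

Unlike the inertia weight, which depends only on the iteration count, this weight differs per leader wolf through its fitness ratio.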

C. Adaptive Gray Wolf Optimization Algorithm based on Gompertz Inertia Weight Strategy
Based on the Gompertz inertia weight strategy, the position update formula is modified as Eq. (12). Based on the Gompertz adaptive weight strategy, the new gray wolf position X is obtained as shown in Eqs. (7) and (13). The steps of the GGWO algorithm are as follows:
1) Initialize the algorithm parameters according to Eqs. (2), (3), and (4);
2) Define the α, β, and δ wolves;
3) Initialize the positions of the population;
4) Calculate the fitness values according to Eq. (5), sort them, and select the top three distances D_α, D_β, and D_δ corresponding to the α, β, and δ wolves respectively;
5) Update the positions of the α, β, and δ wolves according to Eq. (12) by adding the inertia weight ω;
6) Update the positions according to Eqs. (7) and (13) by adding the adaptive weight ω_i, and perform a boundary check;
7) If the maximum number of iterations is reached, stop and output the optimal value; otherwise, return to step 4.
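The steps above can be sketched as a compact loop. This is a hedged sketch, not the authors' implementation: the `inertia` argument stands in for the Gompertz inertia weight of Eq. (9), and the per-leader adaptive weight of Eqs. (10) and (13) is omitted for brevity, so the update otherwise follows the standard GWO scheme:

```python
import numpy as np

def ggwo(f, dim, n=20, iters=150, lb=-10.0, ub=10.0, seed=0,
         inertia=lambda t, T: 0.9 - 0.8 * t / T):
    """Sketch of the GGWO loop under the assumptions stated above."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n, dim))          # step 3: init positions
    for t in range(iters):
        fit = np.array([f(x) for x in X])           # step 4: fitness
        leaders = X[np.argsort(fit)[:3]].copy()     # alpha, beta, delta
        a = 2.0 * (1.0 - t / iters)                 # a decays 2 -> 0
        w = inertia(t, iters)                       # step 5: inertia weight
        for i in range(n):
            candidates = []
            for X_l in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2.0 * a * r1 - a, 2.0 * r2
                D = np.abs(C * X_l - X[i])
                candidates.append(w * X_l - A * D)  # inertia-weighted pull
            X[i] = np.clip(sum(candidates) / 3.0, lb, ub)  # step 6: bounds
    fit = np.array([f(x) for x in X])
    return X[np.argmin(fit)], float(fit.min())      # step 7: best found
```

On a simple convex objective such as the sphere function, this loop converges to a point near the origin within a few hundred iterations.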
The flow chart of GGWO is shown in Fig. 4. As shown in Fig. 4, the basic parameters of the GGWO algorithm are set, the wolves are initialized, and iteration begins. During the iterations, the positions of the wolves are updated through the inertia weight, the adaptive weight, and a bounds check. Finally, after the maximum number of iterations is reached, the optimal value is output. The algorithm uses the Gompertz inertia weight strategy and the adaptive position update strategy to balance the global and local search of the gray wolf population, making effective use of all information during the iteration process. The Gompertz inertia weight strategy depends on the iteration count and gives the same inertia weight to all gray wolf individuals, balancing global and local search performance from a global perspective. The adaptive position update strategy, at a fixed iteration count, gives different position update schemes to different gray wolf individuals, reflecting the principle that individuals with different fitness values should receive different weights. Combining the two strategies gives every individual in the population a targeted, non-duplicative inertia weight and position update adjustment.

V. SIMULATION EXPERIMENT AND RESULT ANALYSIS
Simulation environment: macOS, memory 256 GB, machine frequency 3.49 GHz, MATLAB R2022a.

A. Test Function and Parameter Settings
The six test functions used in the simulation experiments are shown in Table I, which lists their expressions, upper and lower limits, and optimal values.
As shown in Table I, f1(x) is a simple sum-of-squares function, smooth and convex, typically used to assess the basic performance of an optimization algorithm. f2(x) combines a linear sum component and a multiplicative component, introducing both global structure and local minima; it tests an algorithm's ability to handle non-separable and multimodal functions. f3(x) is a nested sum of squares, adding complexity by testing performance on hierarchical problems where optimization at one level depends on optimization at another. f4(x) is a max-type function that tests the algorithm's ability to find the largest element in a vector, useful for problems that require selection from a set of alternatives. f5(x) is reminiscent of the Rastrigin function, which introduces a large number of local minima, making it a challenge to find the global minimum. f6(x) resembles a modified Schwefel function with a sinusoidal component, which is very challenging due to its complex landscape with many local optima.
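For concreteness, two of the benchmark shapes referenced above can be written out; these are the standard textbook forms (Table I's exact expressions for f1 and f5 are not reproduced here, so treat these as representative stand-ins):

```python
import math

def sphere(x):
    """f1-style sum-of-squares benchmark; smooth, convex,
    global minimum 0 at the origin."""
    return sum(v * v for v in x)

def rastrigin(x):
    """Standard Rastrigin benchmark (the paper describes f5 as
    reminiscent of it); a dense grid of local minima,
    global minimum 0 at the origin."""
    return 10.0 * len(x) + sum(v * v - 10.0 * math.cos(2.0 * math.pi * v)
                               for v in x)
```

The sphere function rewards pure convergence speed, while Rastrigin's cosine ripples punish algorithms that cannot escape local minima, which is why the two appear together in benchmark suites.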

B. Experimental Results Analysis
1) Comparison of average convergence curves of 11 algorithms
Eight classic swarm intelligence optimization algorithms [33][34][35][36][37][38][39][40] are selected, together with the three algorithms improved in this article: the gray wolf optimization algorithm with inertia weight (GIGWO), the gray wolf optimization algorithm with self-adaptive weights (GSGWO), and the adaptive gray wolf optimization algorithm based on the Gompertz inertia weight strategy (GGWO), for a total of 11 optimization methods. The average convergence curves of these 11 algorithms in three dimensions on the 6 test functions are shown in Figs. 5 to 10, where the abscissa is the current iteration number and the ordinate is the logarithm of the fitness value. In both the low dimension (D=30) and the high dimensions (D=200, 300), as the iterations proceed, the eight algorithms other than GIGWO, GSGWO, and GGWO converge slowly and easily fall into local optima; the evolution curve of GGWO shows the most obvious decline, the highest solution accuracy, and the fastest convergence speed, and it does not fall into local optima.
Only in the initial phase of the F2 test function in high dimensions do the other eight swarm optimization algorithms come close to the improved GIGWO, GSGWO, and GGWO; as the number of iterations grows they easily fall into local optima, and their convergence speed and accuracy remain far inferior. In addition, the standard GWO algorithm underlying GIGWO, GSGWO, and GGWO has slightly lower accuracy and convergence speed than WDO on, for example, the high-dimensional F6 test function. After the improvement, however, the convergence speed of all three algorithms GIGWO, GSGWO, and GGWO is much higher than that of WDO, which shows that the optimization strategy in this article is quite effective.
The convergence speed and solution accuracy of GIGWO and GSGWO are far better than those of GWO. On the F1-F4 test functions, the convergence speed of GGWO is much higher than that of GIGWO and GSGWO, indicating that both the Gompertz inertia weight strategy and the adaptive strategy are effective. On F4 and F5, the convergence speed of GGWO is far better than that of GIGWO and slightly better than that of GSGWO. The solution accuracy of the three algorithms is close and far better than that of the other eight algorithms, and the convergence results of GGWO are better in more cases, showing that superimposing the two optimization strategies remains effective.
As can be seen from Figs. 5 to 10, GGWO has the fastest rate of decline and the smallest final fitness value. In the high-dimensional case of F4, although GIGWO does not become permanently stuck in a local optimum, its curve is flat at first, indicating that it is trapped in a local optimum early in the iteration and cannot search for the global optimum as quickly as possible. Similarly, although GSGWO does not become completely stuck on the high-dimensional F2, its curve gradually flattens as the iterations proceed, indicating that it has fallen into a local optimum, which also hinders the search for the global optimum.
The optimization performance and stability of the 11 algorithms in the low dimension (D=30) and high dimensions (D=200, 300) are shown in Tables II to IV, where they are reflected by the calculated mean and standard deviation. The bold entries mark the minimum mean or standard deviation among the 11 algorithms.

2) Comparison of global optimal values of 11 algorithms
Whether in high or low dimensions, GGWO has the smallest mean and standard deviation, which shows that it has extremely strong optimization performance and stability. On the two test functions F5 and F6, GIGWO, GSGWO, and GGWO found the global optimal solution 0 in every experiment. The performance of GWO is lower than that of WDO, but after adding the Gompertz inertia weight or the adaptive weight, the performance and stability become much higher than those of WDO, which shows that the optimization strategy for GWO is quite effective. In conclusion, the stability and convergence performance of GGWO are the best among the 11 algorithms.

C. GGWO Time Complexity Analysis
The time complexity of GWO is O(n · m · D), where n is the total number of gray wolves in the population, m is the maximum number of iterations, and D is the dimension of the optimization problem. GWO has one of the smallest time complexities among the algorithms in this article, because the time complexity of algorithms such as FA and GSA is as high as O(n² · m · D). GGWO uses the Gompertz inertia weight and the adaptive weight to update positions, which is essentially equivalent to multiplying by a scalar in the update formula during each iteration of the standard gray wolf optimization algorithm. Therefore, GGWO does not increase the time complexity of the original GWO algorithm, meaning that the time complexity of the improved GGWO is also the smallest, O(n · m · D).
GGWO greatly improves the algorithm's convergence speed, its ability to jump out of local optima, and its stability without any additional increase in time complexity. The comparison with the other 10 swarm intelligence optimization algorithms clearly shows that its optimization performance far exceeds that of the others.

VI. CONCLUSION
An adaptive gray wolf optimization algorithm based on the Gompertz inertia weight strategy is proposed, which uses the Gompertz function to improve the inertia weight and the position update formula. In the simulation comparison, 11 different swarm intelligence algorithms were run on 6 test functions to draw average convergence curves, with the average of 10 runs taken as the final result. The experimental results show that GGWO has the smallest variance on the test functions, proving that it has the best stability. In addition, the average convergence curve of GGWO decreases the fastest, indicating that it has the fastest convergence speed. Moreover, the time complexity of GGWO is O(n · m · D), giving it considerable application potential. From the comparative analysis of standard deviation, convergence curves, time complexity, and other angles, the results show that the improved GGWO algorithm has good stability, fast convergence speed, and high solution accuracy.
Although GGWO has made certain improvements in solution accuracy, speed, and stability, some areas remain for future work. GGWO's improvements mainly focus on adjusting inertia weights; other position update formulas can be considered in the future. In addition, the population initialization of GGWO is purely random, so Latin hypercube sampling will be considered for initializing the population. Besides, while GGWO improved sharply on the simple or unimodal test functions, the effect on some complex test functions or data may not be greatly improved. Across various industrial applications, GGWO can serve as an efficient optimization tool for many practical optimization problems. In the future, GGWO will be applied to practical problems such as medical image recognition, fault detection, UAV path planning, and quantum neural network optimization.