A novel differential evolution algorithm with multi-population and elites regeneration (2024)

PLoS One. 2024; 19(4): e0302207.

Published online 2024 Apr 25. doi:10.1371/journal.pone.0302207

PMCID: PMC11045134

PMID: 38662721

Yang Cao, Conceptualization, Funding acquisition, Supervision, Writing – review & editing1,2,3 and Jingzheng Luan, Data curation, Resources, Writing – original draft1,2,3,*

Mahamed G. H. Omran, Editor


Abstract

Differential Evolution (DE) is widely recognized as a highly effective evolutionary algorithm for global optimization, and it has proven its efficacy on diverse problems across various fields and real-world applications. DE offers several advantages, such as ease of implementation, reliability, speed, and adaptability. However, it also has limitations, such as suboptimal solution exploitation and challenging parameter tuning. To address these challenges, this paper introduces a novel algorithm called Enhanced Binary JADE (EBJADE), which combines differential evolution with multi-population and elites regeneration. The primary innovation is a mutation strategy with enhanced exploitation capabilities, which sorts three vectors from the current generation and uses them to perturb the target vector; the resulting directional differences guide the search towards improved solutions. Additionally, this study adopts a multi-population method with a reward subpopulation to dynamically adjust the allocation of two different mutation strategies. Finally, the paper incorporates the elite-sampling concept of the Estimation of Distribution Algorithm (EDA) to regenerate new solutions after the selection process of DE. Experimental results on the CEC2014 benchmark suite demonstrate the strong competitiveness and superior performance of the proposed algorithm.

Introduction

Differential Evolution (DE) is a powerful search method proposed by Storn and Price [1,2]. This simple yet efficient algorithm has been widely applied and has shown remarkable effectiveness in solving optimization problems across various fields and real-world applications [3–7]. Like other evolutionary algorithms (EAs), DE gradually approaches the global optimal solution through mutation, crossover, and selection operations in each generation. DE is among the most effective EAs currently in use [8]. Moreover, DE offers flexibility and adaptability, allowing researchers and practitioners to customize its parameters and strategies according to specific problem characteristics; some researchers have therefore combined DE with other metaheuristic algorithms to leverage the strengths of each [9]. This adaptability makes DE well-suited for a wide range of optimization problems, spanning continuous to discrete domains.

In DE, the individuals within the population are referred to as target vectors. Through mutation, a mutant vector is created by perturbing a target vector using the difference vectors of other individuals within the population. Subsequently, the crossover operation combines the parameters of the mutant vector with those of a parent vector chosen from the population, generating a trial vector. The next generation is then determined by evaluating the fitness of the trial vectors against their corresponding parent vectors in a one-to-one selection competition [10,11]. The efficacy of DE heavily relies on the chosen mutation strategy and crossover operator. In addition, the intrinsic control parameters, namely the population size (NP), scaling factor (F), and crossover rate (Cr), are crucial in balancing population diversity against convergence speed.

DE possesses numerous benefits, including ease of implementation, dependability, speed, and adaptability [3]. In general, DE is known for its good global exploration capability and its ability to approach the global optimum. However, its exploitation rate, i.e., the ability to efficiently exploit local search regions, is relatively slow compared to other algorithms [12]. Furthermore, the parameters of DE are problem-dependent, making them difficult to adapt across different problems. Additionally, the performance of DE tends to worsen as the dimension of the search space increases [13].

Estimation of distribution algorithms (EDAs) [14–16] are a recently emerged branch of EAs. EDAs are stochastic optimization methods that sample new solutions from a probability distribution estimated over each generation's promising solutions. An important feature distinguishing EDAs from other EAs is the construction of a probabilistic model based on elite individuals sampled from the population. This model-based optimization approach enables EDAs to solve many complex problems [17]. According to the literature [18–20], modifications and improvements to DE mainly focus on developing new mutation rules, on adjusting control parameters in an adaptive or self-adaptive manner, and on the exploitation of elite solutions.

The performance of an optimization algorithm depends on its ability to adjust dynamically, during different stages of evolution, between avoiding local optima and taking larger steps. A well-designed search strategy should promote initial exploration and enable individuals to leverage the information obtained from the search space to refine and optimize the solution.

The integration of JADE [21] and the newly proposed mutation strategy results in a new DE variant called EBJADE, which enhances the search ability on difficult and complex optimization problems. This paper therefore has three main innovative aspects:

  1. A new DE algorithm is introduced with a mutation strategy for enhancing exploitation capabilities, known as DE/current-to-ord/1 (abbreviated as ord). To perform mutation, this approach sorts the population by fitness: for each target vector, it chooses one vector from the top p best vectors, one vector from the p vectors around the median rank as the median vector, and one vector from the bottom p worst vectors as the worst vector.

  2. The entire population is divided into two indicator subpopulations and one reward subpopulation. Both variants of DE are combined, and the reward subpopulation is dynamically allocated based on the historical performance of each variant, favoring the one with better historical performance.

  3. The concept of sampling around elite solutions, borrowed from EDAs, is utilized. After the selection process of DE, new individuals are sampled from a probability model centred on the neighborhood of each elite solution, enabling further exploitation of the promising regions discovered so far and potentially finding better solutions. This leads to more competitive solutions, which guide evolution and improve the chances of DE escaping local optima.

The remaining sections of this paper are organized as follows: Section 2 introduces the original DE algorithm, including its typical mutation, crossover, and selection operators. Section 3 reviews related work. Section 4 proposes the multi-population strategy, the mutation strategies, and the elite-solution adjustments. Section 5 reports and discusses the effectiveness of the proposed algorithm based on computational results and comparisons with other advanced algorithms on the CEC2014 benchmark; the experimental results demonstrate that the proposed algorithm is strongly competitive in terms of robustness, stability, and quality of the obtained solutions. Finally, Section 6 draws conclusions from the findings.

Original differential evolution algorithm

In this section, this paper describes the fundamental process of Differential Evolution (DE) and introduces the symbols and terminology needed to explain the algorithm proposed subsequently.

The original DE algorithm is an evolutionary algorithm for continuous optimization problems. It applies three main steps (mutation, crossover, and selection) iteratively until a predefined termination criterion is satisfied. The search starts by forming an initial population P comprising NP randomly generated individuals. Every individual is represented as a vector Xi = {x1,i, x2,i, x3,i, …, xD,i}, with i ranging from 1 to NP, where NP is the total number of individuals and D is the dimensionality of the solution space. Since the population evolves over time, a generation counter G is introduced, with values from 0 to Gmax, where Gmax is the maximum number of iterations. The ith individual of the Gth generation is written Xi,G = {x1,i,G, x2,i,G, …, xD,i,G}. The lower and upper boundaries of the search space are Xl = {x1,l, x2,l, …, xD,l} and Xu = {x1,u, x2,u, …, xD,u}, respectively. The population P0 is initialized by randomly generating individuals within these bounds using a uniform distribution. Subsequently, the DE operators mutation and crossover are used to evolve these individuals and generate trial vectors, which are then compared with their corresponding parents to determine which vectors should be retained for the next generation. The overall steps of DE are described in detail as follows:

Initialization

To initiate the optimization process, this paper requires the generation of an initial population, denoted as P0. Usually, the value of each component (indexed by j) of the ith individual in the population is generated using the following formula:

x_{j,i} = x_{j,l} + rand × (x_{j,u} − x_{j,l})

(1)

In this formula, rand denotes a random number drawn from a uniform distribution over the range [0, 1].
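Eq (1) can be sketched in Python as follows. This is a minimal illustration (the function name and defaults are hypothetical, not the authors' implementation):

```python
import numpy as np

def initialize_population(pop_size, dim, x_lower, x_upper, seed=0):
    """Eq (1): x_{j,i} = x_{j,l} + rand * (x_{j,u} - x_{j,l})."""
    rng = np.random.default_rng(seed)
    r = rng.random((pop_size, dim))          # uniform samples in [0, 1)
    return x_lower + r * (x_upper - x_lower)

# Illustrative 50-individual, 10-D population in [-100, 100]^10
pop = initialize_population(50, 10, np.full(10, -100.0), np.full(10, 100.0))
```

Each row of `pop` is one individual Xi; each column corresponds to one dimension j.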

Mutation

In generation G, for each target vector Xi,G, a mutation vector Vi,G is generated using the following process:

V_{i,G} = X_{r1,G} + F × (X_{r2,G} − X_{r3,G}),  r1 ≠ r2 ≠ r3 ≠ i

(2)

Three indices r1, r2, and r3 are randomly selected from the set {1, 2, …, NP}, where NP represents the population size. The real number F controls the amplification of the difference vector (X_{r2,G} − X_{r3,G}). If any component of the mutation vector exceeds the boundaries of the search space, a repair equation generates a replacement value for that component. This approach is known as DE/rand/1/bin, a commonly used mutation strategy. Other commonly used mutation strategies include:

DE/best/1/bin

V_{i,G} = X_{best,G} + F × (X_{r1,G} − X_{r2,G})

(3)

DE/current-to-best/1/bin

V_{i,G} = X_{i,G} + F × (X_{best,G} − X_{r1,G}) + F × (X_{r2,G} − X_{r3,G})

(4)

Within the DE/rand/1/bin mutation strategy, these indices are selected from the set {1, 2, …, NP}, with NP denoting the population size, and must differ from the index i of the current target vector. The scaling factor F amplifies the impact of the difference vector on the mutation process; its specific value determines the extent to which the difference vector influences the mutant vector. By adjusting F, the algorithm can trade off exploration against exploitation within the search space.
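The mutation strategies of Eqs (2) and (4) can be sketched as follows (hypothetical helper names; a sketch, not the authors' code — minimization and index uniqueness are assumed as described above):

```python
import numpy as np

def mutate_rand_1(pop, i, F, rng):
    """DE/rand/1 (Eq 2): V_i = X_r1 + F * (X_r2 - X_r3), r1 != r2 != r3 != i."""
    idx = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = rng.choice(idx, size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def mutate_current_to_best_1(pop, i, best, F, rng):
    """DE/current-to-best/1 (Eq 4): move towards the best individual."""
    idx = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = rng.choice(idx, size=3, replace=False)
    return pop[i] + F * (pop[best] - pop[r1]) + F * (pop[r2] - pop[r3])
```

Both functions return a mutant vector of the same dimensionality as the target vector.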

Crossover

In DE, there are two main types of crossover: binomial and exponential. Here, this paper provides a detailed explanation of binomial crossover, in which the target vector Xi,G is combined with the mutation vector Vi,G to generate the trial vector Ui,G. This crossover is achieved using the following scheme:

u_{i,j,G+1} = { v_{i,j,G}  if rand_j ≤ Cr or j = j_rand
             { x_{i,j,G}  otherwise

(5)

where rand_j is a uniformly distributed random number in the range [0, 1]; Cr ∈ [0, 1] is the crossover rate, controlling how many components are inherited from the mutation vector; and j_rand is a uniformly distributed random integer in the range [1, D], ensuring that at least one component of the trial vector is inherited from the mutation vector.
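Binomial crossover as defined in Eq (5) can be sketched as follows (hypothetical function name; a minimal sketch under the assumptions just stated):

```python
import numpy as np

def binomial_crossover(target, mutant, cr, rng):
    """Eq (5): take the mutant component where rand_j <= Cr or j == j_rand."""
    d = len(target)
    mask = rng.random(d) <= cr
    mask[rng.integers(d)] = True     # j_rand: force at least one mutant gene
    return np.where(mask, mutant, target)
```

With cr close to 1, almost all components come from the mutant vector; with cr close to 0, the trial vector differs from the target in only the forced j_rand position (and any components that pass the random test).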

Selection

In the selection operation, the target vector Xi,G is compared with the trial vector Ui,G, and the vector with the better fitness value enters the next generation.

X_{i,G+1} = { U_{i,G}  if f(U_{i,G}) ≤ f(X_{i,G})
            { X_{i,G}  otherwise

(6)
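The three operators above can be tied together in a compact DE/rand/1/bin loop. The sketch below is illustrative only (hypothetical names and default parameters; it is the classical baseline, not the EBJADE algorithm this paper proposes), assuming a minimization objective:

```python
import numpy as np

def de_rand_1_bin(f, bounds, pop_size=30, F=0.5, cr=0.9, gmax=200, seed=0):
    """Classical DE/rand/1/bin: mutation (Eq 2), binomial crossover (Eq 5),
    one-to-one selection (Eq 6). Minimizes f over box constraints."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)     # Eq (1)
    fit = np.array([f(x) for x in pop])
    for _ in range(gmax):
        for i in range(pop_size):
            idx = [j for j in range(pop_size) if j != i]
            r1, r2, r3 = rng.choice(idx, size=3, replace=False)
            v = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            mask = rng.random(dim) <= cr
            mask[rng.integers(dim)] = True                 # j_rand
            u = np.where(mask, v, pop[i])
            fu = f(u)
            if fu <= fit[i]:                               # Eq (6)
                pop[i], fit[i] = u, fu
    best = fit.argmin()
    return pop[best], fit[best]

# Minimize the 5-D sphere function as a smoke test
best_x, best_f = de_rand_1_bin(lambda x: float(np.sum(x * x)),
                               (np.full(5, -5.0), np.full(5, 5.0)))
```

On this simple unimodal function the loop converges rapidly; the more elaborate strategies discussed later address harder, multimodal landscapes.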

Related work

In fact, the performance of the Differential Evolution (DE) algorithm primarily depends on the selected mutation/crossover strategy and the associated control parameters. Due to the limitations of DE mentioned earlier in the introduction, many researchers have been devoted to improving DE. Consequently, several new techniques have been proposed by researchers to overcome its shortcomings and enhance its performance. This section will provide a brief overview of these techniques.

First, improving adaptive parameters and multi-mutation strategies has attracted many researchers. Qin and Suganthan [22] and Qin et al. [23] introduced a modification of DE called self-adaptive differential evolution (SaDE). In SaDE, various strategies are incorporated, and their influence on the search procedure is adjusted according to their past success rates. Brest et al. [24] suggested an adaptive approach called jDE, in which the control parameters F and Cr are encoded within individuals and dynamically adjusted throughout the run of DE. Zhang and Sanderson [21] introduced a novel DE algorithm known as JADE, which enhances optimization performance by integrating an optional external archive, employing a new mutation strategy called DE/current-to-pbest, and adaptively updating control parameters. Simulation results demonstrated that JADE surpasses or is on par with other classical or adaptive DE algorithms in terms of convergence performance. Gong et al. [25] implemented a simple strategy adaptation mechanism (SaM) to adaptively select more suitable strategies for specific problems and further improve the performance of DE; combining SaM with JADE [21,26] yields the SaJADE algorithm. Tanabe and f*ckunaga [27] presented an enhanced version of JADE called Success-History based DE (SHADE). Unlike JADE, SHADE does not sample F and Cr values from a gradually adapting probability distribution; instead, it utilizes historical memory archives, MCr and MF, which store collections of Cr and F values that have exhibited good performance in recent iterations. Tanabe and f*ckunaga [28] then applied Linear Population Size Reduction (L-SHADE) to improve the search performance of SHADE, winning the CEC2014 benchmark competition. Ali W. et al. [12] introduced two novel mutation strategies; this enhancement boosted both the global and local search capabilities while improving convergence speed. Meng et al. [29] proposed a mutation strategy based on the historical population, known as the DE variant Hip-DE. This strategy incorporates information from the historical population, essentially a collection of the Xi discarded during the selection phase, with a size of 5*15*D. Xia et al. [30] proposed an algorithm called NFDDE, which introduces novelty as a driving force to address the limitations of fitness-based driving forces. The average Euclidean distance between similar individuals is used to calculate individual novelty, and the weight between the two driving forces is adjusted using a proportion factor p; the adaptive adjustment of F and Cr follows the history-based approach of SHADE.

On the flip side, the multi-population strategy is widely acknowledged as an effective technique to foster exploration without compromising population diversity. Wang et al. [31] proposed a new adaptive multi-population DE algorithm (AMPDE). It uses multiple subpopulations, each assigned a crossover operator and perturbed accordingly. Subpopulation sizes are adjusted based on the relative contribution of each subpopulation to the external archive, and a two-stage local search helps improve the diversity of the external archive and the quality of solutions. Wu et al. [32] proposed EDEV, which is not merely a collection of mutation strategies but a high-level ensemble of multiple DE variants, including JADE [22], EPSDE [33], and CoDE [34]. The population of EDEV is divided dynamically into several subpopulations: three indicator subpopulations and one reward subpopulation. Each smaller indicator subpopulation is associated with a specific DE variant, whereas the larger reward subpopulation is allocated as an additional reward to the variant that is currently performing best. In this way, computational resources are dynamically allocated among the variants, and the optimal variant is expected to receive the most resources. Mallipeddi et al. [33] developed a DE algorithm with an ensemble of mutation strategies and parameter values (EPSDE). In EPSDE, a pool of different mutation strategies, together with their respective control parameter values, coexists and competes to produce offspring throughout the evolution process. A composite approach named Composite Differential Evolution (CoDE) was introduced by Wang et al. [34]; this method employs three distinct mutation strategies and three sets of control parameters, which are randomly combined to generate trial vectors. Li et al. [35] proposed a new grouping method to replace the one in MPEDE; the new method uses strategy sorting to allocate computing resources to different strategies.

In the two decades since the basic DE algorithm was first proposed, numerous variants have been developed. Nonetheless, limited research has been dedicated to exploring the neighborhood surrounding elite solutions. This unexplored territory holds great promise, as it aligns with the principle of optimality. Deng et al. [36] introduced the Elites-Regeneration (ERG) framework, which defines the elite population as a collection of individuals with high fitness values. Through the selection process of DE, these elites are regenerated: a new individual is created in the search space surrounding each elite individual by sampling from probability models such as Gaussian or Cauchy distributions. Inspired by this, the EBJADE proposed in this paper combines the multi-population, multi-strategy mechanism with the elites-regeneration mechanism. Tian et al. [37] proposed a novel numerical optimization algorithm based on DE, which balances exploration and exploitation by leveraging neighborhood information. To efficiently address the search needs of each individual, the algorithm constructs a set of elite individuals using a ring topology and then adaptively selects appropriate elite individuals based on their performance within their neighborhoods; this selection guides the search. Based on the ideas of DE/EDA, Dong et al. [38] proposed another method that combines estimation of distribution algorithms with differential evolution for global optimization. A hybrid differential evolution and estimation of distribution algorithm based on neighborhood search was proposed in [39], combining the advantages of both algorithms; two mutation operators are used to enhance the search ability, and a chaotic strategy is introduced to update the parameters of DE. Fan et al. [40] used alternative probability models in sampling to improve population diversity, then combined the algorithm with DE and adopted an adaptive strategy to improve convergence speed. To fully exploit the powerful exploitation of DE and the powerful exploration of EDA, an improved differential evolution algorithm hybridized with EDA, IDE-EDA, was proposed [41].

Indeed, the main changes and advancements in Differential Evolution (DE) have primarily revolved around adaptive control parameter adjustment. Nevertheless, there have been additional enhancements made by modifying the structure and mechanisms of the core DE algorithm, as well as introducing new mutation rules. These modifications aim to enhance DE’s ability to perform local and global searches and address challenges such as stagnation or premature convergence. Research on elite solution neighborhoods has demonstrated the potential of these neighborhoods in generating superior solutions.

Proposed algorithm

This section is divided into three parts. The first part introduces two mutation strategies, namely the novel mutation strategy proposed in this paper and DE/current-to-pbest/1 with external archive, together with the parameter adaptation mechanism. The second part introduces the multi-population strategy. Finally, the sampling method used in elites regeneration is applied to sample candidate individuals around the elite solutions.

Two search strategies

The two search strategies have different characteristics: one has strong exploration ability and the other strong exploitation ability. Each subpopulation uses a different search strategy, providing the ability to avoid falling into local optima and to take larger steps.

  1. DE/current-to-ord/1 (abbreviated as ord)

In order to overcome the limitations of the JADE strategy, which exhibits fast but less reliable convergence performance, a more refined search strategy has been introduced. This strategy is more greedy and less exploratory, with a clearer directionality that moves closer to the direction of the best solution in the current population and away from the direction of the worst solution in the current population.

V_{i,G+1} = X_{i,G} + F_i × (X_{ptbest,G} − X_{i,G}) + F_i × (X_{ptmedian,G} − X_{ptworst,G})

(7)

where X_{i,G} represents the individual in the Gth generation, X_{ptbest,G} is chosen from the top pt% elite individuals of the population in the Gth generation, X_{ptmedian,G} from the middle pt% of individuals in terms of fitness, and X_{ptworst,G} from the bottom pt% of individuals in terms of fitness.

From the mutation equation, it can be observed that by incorporating fitness-based sorting into this mutation strategy, the target vector is not consistently pulled towards the identical optimal position identified by the entire population; when local optima are present, this increases the possibility of escaping from them. Moreover, the second perturbation part of the mutation is a difference vector directed towards a randomly selected vector from the pre-sorted top P vectors with better fitness values. Hence, the directed perturbation in the proposed mutation can be analogized to a gradient, since the difference vector points from the inferior vector towards the superior vectors [42]. In contrast to the DE/current-to-pbest/1 strategy, this approach consistently adopts the direction of the best vector for all vectors, while also moving away from the direction of the worst vector. This facilitates exploring various sub-regions around the target vector within the search space during optimization and has the potential to achieve enhanced performance on specific problem instances.

The search strategy is shown in Fig 1, with a star representing the current individual and a circle the actual optimal solution, while the other three individuals are shown as squares labeled S1 to S3. S1, S2, and S3 are sorted as ptworst, ptmedian, and ptbest, respectively. L1 and L2 represent the search directions from the current individual to S3 and from S1 to S2, respectively.
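Eq (7) can be sketched as follows. The function name and the way ties are broken in the median slice are assumptions for illustration; minimization (lower fitness is better) is assumed:

```python
import numpy as np

def mutate_current_to_ord(pop, fit, i, F, pt, rng):
    """Sketch of DE/current-to-ord/1 (Eq 7): pick one vector from the top,
    middle, and bottom pt fraction of the fitness-sorted population."""
    order = np.argsort(fit)                    # best (lowest fitness) first
    k = max(1, int(round(pt * len(pop))))
    best = rng.choice(order[:k])               # from the top pt%
    mid0 = (len(pop) - k) // 2
    median = rng.choice(order[mid0:mid0 + k])  # from the median ranks
    worst = rng.choice(order[-k:])             # from the bottom pt%
    return pop[i] + F * (pop[best] - pop[i]) + F * (pop[median] - pop[worst])
```

The second difference term (median minus worst) supplies the directed, gradient-like perturbation described above.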


Fig 1

Search strategy for DE/current to ord/1 utilized in this paper.

  2. DE/current-to-pbest/1 with external archive

This mutation strategy is used in the JADE algorithm. The DE/current-to-pbest/1 with external archive algorithm possesses excellent global search capability and is able to find relatively optimal solutions in the parameter space. It generates new solution vectors by utilizing the differences between the current solution and the historical solutions, aiding in the exploration of the entire search space.

V_{i,G+1} = X_{i,G} + F_i × (X_{pbest,G} − X_{i,G}) + F_i × (X_{r1,G} − X̃_{r2,G})

(8)

where X_{i,G} represents individual i in the population at generation G, and X_{pbest,G} is chosen from the top p% elite individuals of the population at generation G. The index r1 is chosen from the individuals in the subpopulation popi, and the index r2 from the union of the subpopulation popi and the external archive A (yielding X̃_{r2,G}), with r1 ≠ r2 ≠ pbest. popi denotes the subpopulation.
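Eq (8) can be sketched as follows (hypothetical names; a sketch of the JADE-style operator under the definitions above, with minimization assumed and the archive passed as an array that may be empty):

```python
import numpy as np

def mutate_current_to_pbest(pop, fit, archive, i, F, p, rng):
    """Sketch of Eq (8): pbest from the top p%, r1 from the subpopulation,
    r2 from the union of the subpopulation and the external archive A."""
    order = np.argsort(fit)                    # best (lowest fitness) first
    k = max(1, int(round(p * len(pop))))
    pbest = rng.choice(order[:k])
    r1 = rng.choice([j for j in range(len(pop)) if j != i])
    union = np.vstack([pop, archive]) if len(archive) else pop
    r2 = rng.integers(len(union))
    return pop[i] + F * (pop[pbest] - pop[i]) + F * (pop[r1] - union[r2])
```

Including archived (discarded) solutions in the r2 pool injects additional diversity into the difference vector.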

The DE algorithm's success heavily relies on the choice of the scaling factor F and crossover rate Cr. These parameters strongly impact the algorithm's effectiveness, efficiency, and robustness. Moreover, determining their optimal values across problems with differing characteristics, and at various stages of evolution, can be challenging; the current population therefore informs the best control parameters for the future. Hence, to achieve good performance, the proposed algorithm adopts the mutation and parameter adaptation methods used in JADE.

For each individual Xi in every generation G, the crossover probability Cri is generated independently using a normal distribution with an average value of μCr and a standard deviation of 0.1.

Cr_i = randn_i(μ_Cr, 0.1)

(9)

Additionally, the crossover probabilities Cri in generation G are truncated to the range [0, 1]. The set SCr collects all successful crossover probabilities. The mean μCr is initialized to 0.5 and updated at the end of each generation as

μ_Cr = (1 − c) × μ_Cr + c × mean_A(S_Cr)

(10)

where c is a positive constant between 0 and 1 and meanA(·) is the usual arithmetic mean.

Similarly, in every generation G, the mutation factor Fi of each individual Xi is independently generated from a Cauchy distribution with location parameter μF and scale parameter 0.1:

F_i = randc_i(μ_F, 0.1)

(11)

The generated mutation factors are truncated to 1 if Fi ≥ 1 and regenerated if Fi ≤ 0. Let SF be the set of all successful mutation factors in generation G. The location parameter μF of the Cauchy distribution is initialized to 0.5 and updated at the end of each generation as

μ_F = (1 − c) × μ_F + c × mean_L(S_F)

(12)

where meanL(·) is the Lehmer mean

mean_L(S_F) = ( Σ_{F∈S_F} F² ) / ( Σ_{F∈S_F} F )

(13)
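The parameter-adaptation updates of Eqs (10), (12), and (13) can be sketched as follows (hypothetical function names; the update is skipped when no successes were recorded, a common convention assumed here):

```python
def lehmer_mean(s):
    """Eq (13): sum(F^2) / sum(F) over the successful mutation factors."""
    return sum(f * f for f in s) / sum(s)

def update_means(mu_cr, mu_f, s_cr, s_f, c=0.1):
    """Eqs (10) and (12): exponentially weighted updates of mu_Cr and mu_F.
    s_cr / s_f are the successful Cr and F values of the last generation."""
    if s_cr:
        mu_cr = (1 - c) * mu_cr + c * (sum(s_cr) / len(s_cr))  # arithmetic mean
    if s_f:
        mu_f = (1 - c) * mu_f + c * lehmer_mean(s_f)           # Lehmer mean
    return mu_cr, mu_f
```

The Lehmer mean weights larger F values more heavily than the arithmetic mean, which counteracts the bias towards small mutation factors.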

Multi-population strategy

The multi-population strategy is considered an effective technique to promote exploration without reducing population diversity. Taking this into account, this paper adopts a multi-population strategy. The detailed process is as follows:

Firstly, this paper divides the entire population into two indicator subpopulations and one reward subpopulation. They are randomly allocated during each generation, with the two indicator subpopulations labeled as pop1 and pop2, and the reward subpopulation labeled as pop3. The indicator subpopulations have the same size, but they are much smaller compared to the reward subpopulation. Let pop represent the total population, as shown below.

pop = ⋃_{i=1,2,3} pop_i

(14)

where NP represents the size of pop, NPi the size of popi, and δi the ratio of popi to pop. Therefore,

NPi=δi×NP

(15)

Σ_{i=1,2,3} δ_i = 1

(16)

In this paper, δ1 = δ2. Firstly, the two indicator subpopulations are randomly assigned the two different DE strategies, and the reward subpopulation is also randomly assigned to one of them; this allocation process is repeated in every generation. As the algorithm proceeds, after every ng recent generations, the most effective DE search strategy (best) over the previous interval is determined based on the ratio between cumulative fitness improvement and consumed function evaluations.

best = arg max_{i=1,2} (ns_i / Δfes_i)

(17)

where nsi is the number of successful offspring vectors (those achieving improvement) generated over the most recent ng generations by the ith constituent DE strategy, and Δfesi is the number of function evaluations consumed.

In the following ng generations, the reward subpopulation is allocated to the DE search strategy demonstrating the best performance. The best strategy and the allocation of the reward subpopulation are determined periodically, with ng indicating the interval. This guarantees that the optimal mutation strategy receives the maximum available computational resources, as intended in this paper.
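The periodic strategy selection of Eq (17) can be sketched as follows (hypothetical function name; ns and dfes are the per-strategy success counts and evaluation counts accumulated over the last ng generations):

```python
def best_strategy(ns, dfes):
    """Eq (17): return the 1-based index of the strategy with the largest
    improvement-per-evaluation ratio over the last ng generations."""
    ratios = [n / d if d else 0.0 for n, d in zip(ns, dfes)]
    return ratios.index(max(ratios)) + 1
```

Normalizing by consumed evaluations (rather than comparing raw success counts) prevents the strategy with the larger subpopulation from winning merely because it generates more offspring.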

Elites regeneration

According to the proximate optimality principle (POP) [43], good solutions possess similar structures; this assumption is employed by nearly all heuristic algorithms.

The sampling method used in elites regeneration primarily stems from EDA, as explicitly employing probabilistic models in EDA offers significant advantages over other types of EAs. Therefore, this paper strives to leverage the benefits of sampling solutions to maintain a high level of population diversity. As a result, the adoption of elites regeneration includes the process of sampling alternative individuals near elite solutions using a probabilistic model following the selection operation.

It is suggested in [17] that, due to the finite variance of the Gaussian distribution, generating candidate solutions solely through a Gaussian model restricts population diversity. Unlike the Gaussian distribution, the Cauchy distribution approaches the axes so slowly that its expectation does not converge; its variance is infinite, resulting in stronger diversity among the generated individuals. Therefore, Gaussian or Cauchy probability models are used to regenerate new individuals around the elite individuals.

Within the domain of probability theory, the most frequently utilized continuous probability distribution is the Gaussian distribution, scaled by the standard deviation σ and shifted by the mean μ. Its probability density is described as follows:

$\mathrm{Gaussian}(\mu,\sigma^2)=\frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}$ (18)

The expected value and variance of the Cauchy distribution are undefined. This distribution is characterized by two parameters: the location parameter x0, which determines the position of the peak of the distribution, and the scale parameter γ, which specifies the half-width at half-maximum. Its probability density function is described as follows:

$\mathrm{Cauchy}(x_0,\gamma)=\frac{1}{\pi}\,\frac{\gamma}{(x-x_0)^2+\gamma^2}$ (19)
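The contrast between the two densities can be checked empirically. The sketch below (stdlib only; the scale 0.005 follows the paper's 30-D setting, all other names are illustrative) draws from both distributions at the same scale and measures how often a sample lands far from the elite position, showing the Cauchy model's heavy tails:

```python
import math
import random

random.seed(0)

def sample_cauchy(x0, gamma):
    # Inverse-CDF sampling: x0 + gamma * tan(pi * (u - 0.5))
    return x0 + gamma * math.tan(math.pi * (random.random() - 0.5))

n = 100_000
sigma = gamma = 0.005  # scale used for elites regeneration in the 30-D setting
gauss = [random.gauss(0.0, sigma) for _ in range(n)]
cauchy = [sample_cauchy(0.0, gamma) for _ in range(n)]

# Fraction of samples farther than 5 scale units from the elite position
far = lambda xs: sum(abs(x) > 5 * sigma for x in xs) / n
print(f"Gaussian beyond 5*sigma: {far(gauss):.4%}")  # essentially 0 (thin tails)
print(f"Cauchy  beyond 5*gamma:  {far(cauchy):.4%}")  # ~12.6% (heavy tails)
```

For the Cauchy distribution, P(|X − x0| > 5γ) = 1 − (2/π)·arctan(5) ≈ 0.126, while the corresponding Gaussian probability is below 10^−6, which is why Cauchy sampling explores a wider neighborhood around each elite.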

After performing selection operations, the fitness values of offspring individuals are sorted, and solutions with higher fitness are referred to as high-quality elites.

According to the assumption of the aforementioned population-based optimization process, elite individuals are utilized to re-explore new individuals. Previous studies have indicated that the likelihood of generating superior solutions is lower in the later stages than in the early stages. Therefore, the size of the elite population needs to be dynamically adjusted, and a direct deterministic linear reduction technique is introduced. Its size is calculated as follows:

$EP=\mathrm{round}\!\left(EP_{\max}-\frac{EP_{\max}-EP_{\min}}{\mathrm{maxFES}}\times\mathrm{FES}\right)$ (20)

Where EP represents the scale of the elite population. EPmax is the maximum value for the elite group, specified as NP divided by 10. EPmin is the minimum value, set to three individuals. FES denotes the number of function evaluations, and maxFES represents the maximum function evaluations for the problem dimension. With an increase in the number of FES, EP gradually decreases from EPmax to EPmin. To ensure that the size of the elite population is an integer, the round function is applied to round the value.
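The linear reduction of Eq (20) can be sketched as follows (the function name is illustrative; EP_max = NP/10 and EP_min = 3 as specified above):

```python
def elite_pop_size(fes, max_fes, np_size):
    """Linear reduction of the elite population size (Eq 20):
    EP shrinks from EP_max = NP/10 down to EP_min = 3 as the
    evaluation budget is consumed."""
    ep_max = np_size / 10
    ep_min = 3
    return round(ep_max - (ep_max - ep_min) / max_fes * fes)

# With NP = 100 and the 30-D budget of 10000 * D function evaluations:
max_fes = 10_000 * 30
print(elite_pop_size(0, max_fes, 100))        # 10 at the start
print(elite_pop_size(max_fes, max_fes, 100))  # 3 at the end
```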

Assign the mean parameter μ of the Gaussian distribution or the location parameter x0 of the Cauchy distribution to each elite individual. In addition, based on the parameter sensitivity analysis, setting the standard deviation σ of the Gaussian distribution and the scale parameter γ of the Cauchy distribution to 0.005 is reasonable for the 30-D problems. As a result, during each generation, EP individuals are created using an explicit probabilistic model, either Gaussian or Cauchy. These newly generated individuals are similar to their elite parental solutions, with each individual corresponding to a specific elite parent. The process of creating these individuals is defined as follows:

$N_i=\begin{cases}\mathrm{Gaussian}(E_i,0.005), & \text{if }\mathrm{rand}(0,1)<0.5\\ \mathrm{Cauchy}(E_i,0.005), & \text{otherwise}\end{cases}$ (21)

Where Ei represents the ith elite vector and Ni represents the ith newly generated individual.

Next, the newly sampled individuals are compared to their elite parent individuals. If the newly sampled individuals exhibit better fitness values, they replace their parents and become part of the next generation. This mechanism is designed to regenerate individuals near the best solutions found in the search space, thereby exploring promising areas in each generation. The goal is to enhance the convergence speed of the algorithm, guided by the elite individuals, while maintaining population diversity during the sampling process. The pseudo code for EBJADE is shown in Algorithm 1.
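One regeneration pass described above (sampling via Eq 21 followed by the greedy one-to-one replacement) can be sketched as below; names and the toy sphere objective are illustrative, and the paper applies this step to the best EP solutions after DE selection:

```python
import math
import random

random.seed(1)

def regenerate_elites(elites, fitness, scale=0.005):
    """One elites-regeneration pass: sample a new point near each elite
    (Gaussian or Cauchy, each with probability 0.5) and keep it only if
    it improves the fitness (minimization)."""
    for i, elite in enumerate(elites):
        if random.random() < 0.5:
            child = [random.gauss(x, scale) for x in elite]
        else:
            # Cauchy sampling via the inverse CDF
            child = [x + scale * math.tan(math.pi * (random.random() - 0.5))
                     for x in elite]
        if fitness(child) < fitness(elite):  # greedy one-to-one replacement
            elites[i] = child
    return elites

# Toy usage: sphere function, three elites near the optimum
sphere = lambda v: sum(x * x for x in v)
elites = [[0.01, -0.02], [0.03, 0.01], [-0.02, 0.02]]
before = [sphere(e) for e in elites]
after = [sphere(e) for e in regenerate_elites(elites, sphere)]
assert all(a <= b for a, b in zip(after, before))  # never worse than the parent
```

Because each child competes only against its own elite parent, the pass can only improve (or preserve) every elite, which is what drives the convergence-speed gain while the stochastic sampling preserves diversity.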

In EBJADE, the individual sorting process and elite solution regeneration incur some additional computation. To compare the time complexity of the original DE algorithm with that of EBJADE, this computational cost must be evaluated. Since the subpopulations are formed by random partitioning, that step costs O(NP) per generation. Sorting the individuals of the population by fitness has a time complexity of O(NP×log(NP)). Applying the mutation strategies in the subpopulations costs O(popi×D), for a total of O(NP×D) per generation. Regenerating the best-ranked individuals in the elite solution regeneration process costs O(EP×D). Therefore, the total time complexity of EBJADE is O(Gmax×(NP+NP×log(NP)+NP×D+EP×D)), which simplifies to O(Gmax×NP×D), where Gmax is the maximal number of generations. This is comparable to the original DE, so the overall time complexity is not increased.

Algorithm 1 Pseudo code of EBJADE

1: Initialize parameters including ng, NP, δi, MaxFes;
2: Initialize the pop randomly distributed in the solution space;
3: Set NPi = δi×NP; μCR = 0.5; μF = 0.5; A = ∅; nsi = 0 and Δfesi for i = 1, 2, 3;
4: Randomly divide pop into pop1, pop2 and pop3 with respect to their sizes;
5: Randomly select a subpopulation popi (i = 1, 2) and combine it with pop3. Let popi = popi∪pop3 and NPi = NPi+NP3;
6: Set fes = 0;
7: while fes ≤ MaxFes do
8:  Sort fitness values f(X1,G), f(X2,G), …, f(XNP,G) in ascending order
9:  for i = 1 to NP1 do
10:   Generate CRi = randni(μCR1, 0.1), Fi = randci(μF1, 0.1)
11:   Generate a mutant vector Vi,G using Eq (8)
12:   Generate a trial vector Ui,G using Eq (5)
13:   Evaluate the trial vector Ui,G; fes++;
14:  end for
15:  for i = 1 to NP2 do
16:   Generate CRi = randni(μCR2, 0.1), Fi = randci(μF2, 0.1)
17:   Generate a mutant vector Vi,G using Eq (7)
18:   Generate a trial vector Ui,G using Eq (5)
19:   Evaluate the trial vector Ui,G; fes++;
20:  end for
21:  if mod(g, ng) == 0 then
22:   k = arg max_{i=1,2}(nsi/Δfesi);
23:  end if
24:  Randomly partition pop into pop1, pop2 and pop3
25:  Let popk = popk∪pop3 (k = 1, 2)
26:  Sort fitness values f(X1,G), f(X2,G), …, f(XNP,G) in ascending order
27:  Calculate the size of the elite population using Eq (20)
28:  Select the best EP solutions from the whole population
29:  for i = 1 to EP do
30:   if rand ≤ 0.5 then
31:    Sample a new individual Ni,G around the ith elite individual Ei,G from the Gaussian distribution
32:   else
33:    Sample a new individual Ni,G around the ith elite individual Ei,G from the Cauchy distribution
34:   end if
35:   Evaluate the new individual Ni,G; fes++;
36:  end for
37:  Randomly remove solutions from A so that |A| ≤ NP
38:  Update μCR1, μCR2 using Eq (10)
39:  Update μF1, μF2 using Eq (12)
40: end while

Numerical experiments and comparisons

In this section, the computational results of the proposed algorithm will be discussed and compared with other state-of-the-art algorithms.

Extensive experiments were conducted using 30 benchmark functions from CEC 2014 to evaluate the effectiveness of our proposed algorithm. These functions can be classified into four groups: unimodal functions (F1–F3), simple multimodal functions (F4–F16), hybrid functions (F17–F22), and composition functions (F23–F30). Detailed descriptions of the functions can be found in [44].

In the proposed algorithm, this paper follows the settings from JADE, where the population size is set to 100, 200, and 400 for 30-D, 50-D, and 100-D, respectively. The recommended values for the standard deviation σ of the Gaussian distribution and the scale parameter γ of the Cauchy distribution are used in the ERG framework; these values were chosen because they produced the best optimization results. The experimental results are divided into three parts to systematically evaluate the optimization performance of the algorithm while emphasizing its unique components. First, the numerical results and statistical comparisons of the proposed algorithm are described. Next, the impact of parameter sensitivity on algorithm performance is discussed. Lastly, EBJADE is compared with other state-of-the-art algorithms, including DE variants and the covariance matrix adaptation evolution strategy (CMA-ES) [45], as well as the CEC2014 winner ICMAES-ILS, a hybrid CMA-ES method that combines IPOP-CMA-ES with an iterated local search [46].

Experiments setup

To evaluate the algorithm’s performance, this paper uses the solution error metric. Best error values and standard deviations below 10^−8 are treated as zero [44]. For each CEC2014 benchmark function, the maximum number of function evaluations (FEs) is 10000 × D, where D represents the dimensionality of the problem. This termination criterion ensures a consistent evaluation across the benchmark functions. To ensure the reliability of the results, each experiment for every function and algorithm is repeated independently 50 times; this repetition captures the variation in the algorithm’s performance and provides a more robust evaluation.

To assess the performance of the algorithms [47], two non-parametric statistical hypothesis tests are employed. The first test is the Friedman test, which converts the performance of each algorithm on functions into final rankings. The null hypothesis for this test states that "there is no performance difference among all algorithms," while the alternative hypothesis suggests that "there is a performance difference among all algorithms."

The second test is the Wilcoxon signed-rank test for multiple comparisons, which examines the differences among all algorithms across all functions; it is employed with a significance level of 0.05 [40]. In this test, R+ represents the sum of ranks for functions where the first algorithm outperforms the second algorithm, and R− represents the sum of ranks for functions where the second algorithm outperforms the first; a higher rank signifies a greater performance difference. For each comparison, the "better," "equal," and "worse" counts indicate the number of functions on which the first algorithm outperforms, equals, or underperforms the second. Based on the test results, a comparative symbol (+, =, or −) is assigned: the plus sign (+) signifies that the proposed algorithm significantly outperforms the other algorithm, the equal sign (=) indicates no significant difference between the two algorithms, and the minus sign (−) indicates that the proposed algorithm significantly underperforms the other algorithm. The null hypothesis is that there is no notable difference in the mean outcomes of the two samples; the alternative hypothesis is that there is a significant difference. The p-value obtained from the test is compared to a predetermined significance level, and the null hypothesis is rejected if the p-value is less than or equal to that level. The p-values below the significance level are emphasized in bold.
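The rank sums R+ and R− and the Friedman statistic described above can be computed with a short stdlib-only sketch. The error values below are illustrative placeholders, not data from the paper's tables; smaller error means better, so d = b − a > 0 means the first algorithm wins on that function:

```python
def avg_ranks(values):
    """Rank a list (1 = smallest), averaging ranks over tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        r = (i + j) / 2 + 1          # average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = r
        i = j + 1
    return ranks

def wilcoxon_rank_sums(errs_a, errs_b):
    """R+ (first algorithm better) and R- (second better) for paired errors."""
    diffs = [b - a for a, b in zip(errs_a, errs_b) if a != b]
    ranks = avg_ranks([abs(d) for d in diffs])
    r_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    r_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return r_plus, r_minus

def friedman_statistic(*algorithms):
    """Friedman chi-square statistic over per-function rank sums."""
    n = len(algorithms[0])           # number of test functions
    k = len(algorithms)              # number of algorithms
    rank_sums = [0.0] * k
    for f in range(n):
        for a, r in enumerate(avg_ranks([alg[f] for alg in algorithms])):
            rank_sums[a] += r
    return (12 / (n * k * (k + 1))) * sum(R * R for R in rank_sums) - 3 * n * (k + 1)

# Hypothetical mean errors of three algorithms on six functions
alg_a = [1.2, 0.8, 3.1, 0.5, 2.2, 1.9]
alg_b = [1.5, 1.1, 3.0, 0.9, 2.8, 2.4]
alg_c = [2.0, 1.4, 3.6, 1.0, 3.1, 2.5]

r_plus, r_minus = wilcoxon_rank_sums(alg_a, alg_b)
print(f"R+ = {r_plus}, R- = {r_minus}")          # R+ = 20.0, R- = 1.0
print(f"Friedman chi2 = {friedman_statistic(alg_a, alg_b, alg_c):.2f}")
```

The p-values reported in the tables are then obtained from these statistics (chi-square distribution for Friedman, the signed-rank null distribution for Wilcoxon), for example via scipy.stats in practice.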

Numerical results and statistical comparisons of the proposed algorithm

In this section, this paper will discuss the statistical analysis outcomes of applying the Friedman and Multi-Problem Wilcoxon tests to compare the proposed algorithms. The comprehensive statistical results of other algorithms and the proposed algorithms on the CEC2014 benchmark functions can be found in Table 1.

Table 1

Average ranks for all algorithms across all problems and all dimensions using CEC2014.

Algorithm | 30D | 50D | 100D | Mean Ranking | Rank
EBJADE | 1.67 | 1.63 | 1.70 | 1.67 | 1
EBJADEwithoutERG | 2.73 | 2.53 | 2.38 | 2.55 | 2
JADE | 2.65 | 2.70 | 2.52 | 2.62 | 3
ord | 2.95 | 3.13 | 3.40 | 3.16 | 4
Friedman-p-value | 0.000 | 0.000 | 0.000 | |

Table 1 presents the average rankings of the proposed algorithms after conducting the Friedman test using CEC2014, including JADE as the baseline algorithm. The best rankings are highlighted in bold, and the second best rankings are indicated by underlining. From the table, it can be observed that the p-values computed by the Friedman test are less than 0.05 in all dimensions. Based on this, it can be concluded that there are significant differences in the performance of the algorithms.

Besides, it is evident that the proposed algorithm outperforms both the single-strategy algorithms and the algorithm without elite solution regeneration in all dimensions, demonstrating the effectiveness of multi-strategy adaptation and elite solution regeneration. EBJADE achieved the top ranking in the 30, 50, and 100 dimensions. EBJADEwithoutERG obtained second place in the 50 and 100 dimensions, demonstrating the effectiveness of the dual-strategy mechanism, while JADE secured second place in the 30 dimensions, with no significant difference from EBJADEwithoutERG. In terms of average ranking, EBJADE took first place, followed by EBJADEwithoutERG and JADE. This observation confirms the positive impact of the proposed mutation strategy on the JADE algorithm.

The statistical analysis in Table 2, obtained through Wilcoxon tests between EBJADE and the other algorithms, supports the following conclusion: on CEC2014, EBJADE shows significant advantages over the single-strategy algorithms and the algorithm without elite solution regeneration in all dimensions.

Table 2

Wilcoxon’s test results for EBJADE algorithms and other algorithms using CEC2014 functions for D = 30, 50 and 100.

Dimension | Algorithms | R+ | R− | p-value | Dec.
30-D | EBJADE vs EBJADEwithoutERG | 283 | 17 | 0.000 | +
30-D | EBJADE vs JADE | 207 | 69 | 0.036 | +
30-D | EBJADE vs ord | 264 | 61 | 0.006 | +
50-D | EBJADE vs EBJADEwithoutERG | 289 | 62 | 0.004 | +
50-D | EBJADE vs JADE | 316.5 | 34.5 | 0.000 | +
50-D | EBJADE vs ord | 287 | 91 | 0.019 | +
100-D | EBJADE vs EBJADEwithoutERG | 299 | 107 | 0.029 | +
100-D | EBJADE vs JADE | 290 | 116 | 0.048 | +
100-D | EBJADE vs ord | 379 | 27 | 0.000 | +


Parameter analysis

An analysis was conducted to investigate the impact of five adjustable parameters in EBJADE: the population size NP; the parameter pt of the DE/current-to-ord/1 mutation strategy; the ratio δ1 (with δ1 = δ2) between the indicator population and the overall population; the generation gap ng, which is used to periodically determine the mutation strategy based on recent best performance; and the scale parameters in elite solution regeneration. The performance of EBJADE was evaluated by varying each parameter in turn. The analysis aims to understand how these parameters affect the performance of EBJADE and to find the optimal values for achieving the desired results.

In the parameter sensitivity analysis, the candidate values for δ1 are 0.15, 0.2, 0.25, and 0.3; the candidate values for ng are 10, 30, 50, and 80; and the candidate values for the sort-vector parameter pt are 0.05, 0.1, 0.15, 0.2, 0.25, and 0.3. When analyzing one parameter, the other parameters are set to their default values. EBJADE with the default parameter values is referred to as the standard EBJADE. A Wilcoxon rank-sum test with a significance level of 0.05 was conducted between the standard EBJADE and the EBJADE versions with different parameter values. The symbols "+", "=", and "−" represent better, similar, and worse performance of the standard EBJADE compared to the respective versions. The sensitivity results for parameters δ1 and ng are recorded in Table 3, and those for the sort-vector parameter pt in Table 4.

Table 3

The computation results of EBJADE with different settings of δ1 and ng for 30 benchmark functions with 50 experimental runs.

D = 30 | δ1 = 0.15 | δ1 = 0.2 | δ1 = 0.25 | δ1 = 0.3 | ng = 10 | ng = 30 | ng = 50 | ng = 80 | EBJADE STD
F1 | 9.75e+02 | 4.02e+02 | 5.06e+02 | 6.25e+02 | 7.04e+02 | 9.41e+02 | 8.82e+02 | 7.24e+02 | 7.66e+02
F2 | 0.00e+00 | 0.00e+00 | 0.00e+00 | 0.00e+00 | 1.70e-11 | 0.00e+00 | 0.00e+00 | 0.00e+00 | 0.00e+00
F3 | 2.89e-01 | 9.34e+00 | 5.59e+00 | 9.19e+00 | 4.51e+00 | 6.71e+00 | 4.83e+00 | 4.69e+00 | 7.16e-01
F4 | 9.60e-29 | 1.27e+00 | 6.11e-29 | 9.36e-29 | 3.81e-28 | 1.84e-28 | 1.27e+00 | 1.17e-28 | 1.55e-28
F5 | 2.03e+01 | 2.00e+01 | 2.00e+01 | 2.00e+01 | 2.00e+01 | 2.00e+01 | 2.00e+01 | 2.00e+01 | 2.00e+01
F6 | 1.02e+01 | 9.95e+00 | 1.04e+01 | 1.03e+01 | 9.97e+00 | 1.04e+01 | 9.92e+00 | 1.02e+01 | 9.58e+00
F7 | 0.00e+00 | 1.48e-04 | 0.00e+00 | 0.00e+00 | 1.53e-09 | 4.43e-04 | 5.66e-14 | 1.34e-09 | 0.00e+00
F8 | 0.00e+00 | 0.00e+00 | 0.00e+00 | 0.00e+00 | 0.00e+00 | 0.00e+00 | 0.00e+00 | 0.00e+00 | 0.00e+00
F9 | 2.36e+01 | 2.24e+01 | 2.27e+01 | 2.21e+01 | 2.19e+01 | 2.20e+01 | 2.26e+01 | 2.22e+01 | 2.15e+01
F10 | 2.50e-03 | 6.25e-03 | 3.75e-03 | 4.16e-03 | 8.33e-03 | 8.33e-03 | 9.99e-03 | 8.33e-03 | 4.58e-03
F11 | 1.65e+03 | 1.51e+03 | 1.52e+03 | 1.53e+03 | 1.54e+03 | 1.52e+03 | 1.54e+03 | 1.52e+03 | 1.52e+03
F12 | 2.68e-01 | 1.65e-01 | 1.88e-01 | 1.74e-01 | 1.73e-01 | 1.73e-01 | 1.68e-01 | 1.63e-01 | 1.64e-01
F13 | 2.07e-01 | 2.10e-01 | 2.06e-01 | 2.09e-01 | 1.96e-01 | 2.06e-01 | 2.06e-01 | 1.94e-01 | 1.99e-01
F14 | 2.45e-01 | 2.28e-01 | 2.24e-01 | 2.26e-01 | 2.34e-01 | 2.34e-01 | 2.33e-01 | 2.34e-01 | 2.28e-01
F15 | 2.99e+00 | 2.28e+00 | 2.36e+00 | 2.42e+00 | 2.31e+00 | 2.34e+00 | 2.32e+00 | 2.35e+00 | 2.37e+00
F16 | 9.51e+00 | 9.53e+00 | 9.48e+00 | 9.46e+00 | 9.37e+00 | 9.33e+00 | 9.34e+00 | 9.40e+00 | 9.28e+00
F17 | 1.14e+03 | 8.77e+03 | 1.18e+03 | 1.23e+03 | 1.16e+03 | 4.42e+03 | 9.44e+03 | 1.80e+04 | 1.22e+03
F18 | 1.10e+02 | 8.01e+01 | 7.21e+01 | 6.48e+01 | 8.37e+01 | 7.70e+01 | 8.31e+01 | 1.61e+02 | 8.49e+01
F19 | 4.70e+00 | 4.80e+00 | 4.88e+00 | 4.58e+00 | 4.88e+00 | 4.66e+00 | 4.72e+00 | 4.82e+00 | 4.80e+00
F20 | 1.38e+03 | 1.54e+03 | 1.35e+03 | 2.33e+03 | 1.86e+03 | 1.73e+03 | 1.16e+03 | 1.62e+03 | 1.17e+03
F21 | 5.23e+02 | 1.09e+03 | 9.25e+02 | 2.06e+03 | 3.26e+02 | 7.78e+02 | 6.45e+03 | 3.67e+03 | 2.90e+02
F22 | 1.48e+02 | 1.15e+02 | 1.16e+02 | 1.29e+02 | 1.20e+02 | 1.09e+02 | 1.31e+02 | 1.17e+02 | 1.27e+02
F23 | 2.90e+02 | 2.90e+02 | 2.90e+02 | 2.90e+02 | 2.90e+02 | 2.90e+02 | 2.90e+02 | 2.90e+02 | 2.90e+02
F24 | 2.01e+02 | 2.01e+02 | 2.01e+02 | 2.01e+02 | 2.01e+02 | 2.01e+02 | 2.01e+02 | 2.01e+02 | 2.01e+02
F25 | 2.09e+02 | 2.09e+02 | 2.08e+02 | 2.08e+02 | 2.09e+02 | 2.09e+02 | 2.09e+02 | 2.09e+02 | 2.08e+02
F26 | 1.00e+02 | 1.00e+02 | 1.00e+02 | 1.00e+02 | 1.00e+02 | 1.00e+02 | 1.00e+02 | 1.00e+02 | 1.00e+02
F27 | 3.75e+02 | 3.75e+02 | 3.81e+02 | 3.83e+02 | 3.76e+02 | 3.79e+02 | 3.81e+02 | 3.78e+02 | 3.73e+02
F28 | 4.23e+02 | 4.23e+02 | 4.19e+02 | 4.19e+02 | 4.21e+02 | 4.26e+02 | 4.24e+02 | 4.25e+02 | 4.22e+02
F29 | 1.09e+07 | 1.13e+07 | 1.06e+07 | 1.03e+07 | 9.51e+06 | 9.93e+06 | 1.05e+07 | 9.46e+06 | 1.04e+07
F30 | 8.03e+02 | 7.31e+02 | 7.90e+02 | 7.20e+02 | 8.25e+02 | 8.22e+02 | 8.54e+02 | 7.63e+02 | 7.55e+02
+/=/− | 9/21/0 | 8/21/1 | 3/26/1 | 5/22/3 | 5/24/1 | 6/23/1 | 7/23/0 | 7/22/1 | −/−/−


Table 4

The computation results of EBJADE with different settings of pt for 30 benchmark functions with 50 experimental runs.

D = 30 | pt = 0.05 | pt = 0.1 | pt = 0.15 | pt = 0.2 | pt = 0.25 | EBJADE STD
F1 | 1.91E+03 | 1.28E+03 | 5.06E+02 | 1.12E+03 | 5.63E+02 | 7.66E+02
F2 | 1.77E-08 | 0.00E+00 | 0.00E+00 | 1.40E-14 | 0.00E+00 | 0.00E+00
F3 | 1.04E+01 | 1.94E+01 | 5.59E+00 | 1.00E+01 | 8.40E+00 | 7.16E-01
F4 | 6.12E-28 | 1.13E-28 | 6.11E-29 | 1.27E+00 | 1.06E-28 | 1.55E-28
F5 | 2.00E+01 | 2.00E+01 | 2.00E+01 | 2.00E+01 | 2.00E+01 | 2.00E+01
F6 | 9.24E+00 | 9.01E+00 | 1.04E+01 | 9.29E+00 | 9.36E+00 | 9.58E+00
F7 | 5.45E-04 | 2.46E-04 | 0.00E+00 | 1.16E-10 | 2.41E-13 | 0.00E+00
F8 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
F9 | 2.40E+01 | 2.30E+01 | 2.27E+01 | 2.24E+01 | 2.15E+01 | 2.15E+01
F10 | 1.08E-02 | 1.04E-02 | 3.75E-03 | 1.04E-02 | 4.16E-03 | 4.58E-03
F11 | 1.51E+03 | 1.50E+03 | 1.52E+03 | 1.57E+03 | 1.52E+03 | 1.52E+03
F12 | 1.66E-01 | 1.62E-01 | 1.88E-01 | 1.58E-01 | 1.73E-01 | 1.64E-01
F13 | 2.09E-01 | 2.04E-01 | 2.06E-01 | 2.07E-01 | 2.01E-01 | 1.99E-01
F14 | 2.52E-01 | 2.45E-01 | 2.24E-01 | 2.29E-01 | 2.39E-01 | 2.28E-01
F15 | 2.43E+00 | 2.34E+00 | 2.36E+00 | 2.35E+00 | 2.34E+00 | 2.37E+00
F16 | 9.31E+00 | 9.38E+00 | 9.48E+00 | 9.31E+00 | 9.34E+00 | 9.28E+00
F17 | 1.28E+03 | 1.36E+03 | 1.18E+03 | 1.29E+03 | 3.39E+04 | 1.22E+03
F18 | 1.20E+02 | 1.21E+02 | 7.21E+01 | 9.75E+01 | 9.32E+01 | 8.49E+01
F19 | 4.96E+00 | 4.95E+00 | 4.88E+00 | 4.74E+00 | 4.79E+00 | 4.80E+00
F20 | 1.53E+03 | 1.98E+03 | 1.35E+03 | 1.84E+03 | 1.08E+03 | 1.17E+03
F21 | 5.83E+03 | 3.73E+03 | 9.25E+02 | 3.16E+03 | 1.13E+03 | 2.90E+02
F22 | 1.46E+02 | 1.07E+02 | 1.16E+02 | 1.06E+02 | 1.18E+02 | 1.27E+02
F23 | 2.90E+02 | 2.90E+02 | 2.90E+02 | 2.90E+02 | 2.90E+02 | 2.90E+02
F24 | 2.01E+02 | 2.01E+02 | 2.01E+02 | 2.01E+02 | 2.01E+02 | 2.01E+02
F25 | 2.08E+02 | 2.09E+02 | 2.08E+02 | 2.09E+02 | 2.09E+02 | 2.08E+02
F26 | 1.00E+02 | 1.00E+02 | 1.00E+02 | 1.00E+02 | 1.00E+02 | 1.00E+02
F27 | 3.74E+02 | 3.66E+02 | 3.81E+02 | 3.75E+02 | 3.81E+02 | 3.73E+02
F28 | 4.22E+02 | 4.20E+02 | 4.19E+02 | 4.22E+02 | 4.23E+02 | 4.22E+02
F29 | 1.20E+07 | 1.17E+07 | 1.06E+07 | 1.05E+07 | 1.08E+07 | 1.04E+07
F30 | 9.16E+02 | 8.97E+02 | 7.90E+02 | 8.18E+02 | 8.06E+02 | 7.55E+02
+/=/− | 21/7/2 | 16/10/4 | 12/11/7 | 19/6/5 | 14/12/4 | −/−/−


The data provided in Table 3 reveals that EBJADE is not sensitive to parameters δ1 and ng for many benchmark functions, including F1, F8, F13, and F18-F25. Additionally, EBJADE versions with different parameter values rarely outperform the standard EBJADE, indicating that the parameter settings of the standard EBJADE are reasonable. However, it is worth noting that for function F18, EBJADE with δ1 = 0.3 surpasses the standard EBJADE.

The data in Table 4 indicates that the standard parameter value pt = 0.3 yields better results than the other candidate values, and that EBJADE is sensitive to the parameter pt. In addition, EBJADE versions with other parameter values are rarely superior to the standard EBJADE, indicating that the parameter setting of the standard EBJADE is reasonable.

In the parameter sensitivity analysis, the candidate values for NP are 50, 100, 150, 200, 300, and 400, and the candidate values for the scale parameters σ2 and γ are 0.001, 0.005, 0.01, and 0.05. The Friedman test converts the performance of the algorithm with different parameter values on each function into a final ranking. The sensitivity results for NP are recorded in Table 5, and those for the scale parameters in Table 6. Tables 5 and 6 show the rank values of all variants, with the most competitive result shown in boldface.

Table 5

The computation results of EBJADE with different settings of NP.

Dimension\NP | 50 | 100 | 150 | 200 | 300 | 400 | Friedman-p-value
30-D | 3.93 | 3.02 | 3.20 | 3.33 | 3.55 | 3.97 | 0.163
50-D | 4.17 | 3.45 | 3.45 | 3.20 | 3.42 | 3.32 | 0.326
100-D | 4.58 | 3.73 | 3.18 | 3.32 | 3.23 | 2.95 | 0.007


Table 6

The computation results of EBJADE with different settings of scale parameters.

Dimension\scale parameters | 0.001 | 0.005 | 0.01 | 0.05 | Friedman-p-value
30-D | 2.38 | 1.88 | 3.05 | 2.68 | 0.001
50-D | 2.82 | 2.38 | 1.88 | 2.92 | 0.002
100-D | 2.57 | 2.70 | 2.55 | 2.18 | 0.409


The number of function variables is a key parameter determining the difficulty of identifying the global optimum, so high-dimensional functions are more difficult to solve than low-dimensional ones. To determine the optimal standard deviation and scale parameter values for each dimension, different parameter values were used to evaluate the algorithm on problems of three different dimensions.

The data provided in Table 5 indicates that an NP that is either too large or too small noticeably degrades the results. It is reasonable to set NP to 100, 200, and 400 for 30-D, 50-D, and 100-D, respectively. Table 6 summarizes the rank values measured at dimensions 30, 50, and 100, using 50 independent runs for each function. The optimal scale values for the 30-D, 50-D, and 100-D groups are 0.005, 0.01, and 0.05, respectively. These results indicate that increasing dimensionality requires a relatively large sampling range to achieve optimal performance.

Comparison against state-of-the-art DE variants

All parameter settings of these DE variants are listed in Table 7. Table 8 presents the average rankings of the compared algorithms after conducting the Friedman test on dimensions D = 30, 50, and 100 using CEC2014. It is evident from the table that the calculated p-values from the Friedman test are less than 0.05 for all dimensions. Therefore, this paper can conclude that there are significant differences in performance among the algorithms.

Table 7

Parameter settings.

Algorithms | Initial parameter settings
EBJADE | NP = 100, 200, 400; μCR = μF = 0.5; c = 0.1; pt = 0.3; δi = 0.1; γ = σ2 = 0.005, 0.01, 0.05
CoDE | NP = 30; [F = 1.0, CR = 0.1], [F = 1.0, CR = 0.9], [F = 0.8, CR = 0.2]
jDE | NP = 100; τ1 = τ2 = 0.1; Fl = 0.1; Fu = 0.9
SaJADE | NP = 100, 200, 400; μF = μCR = μs = 0.5; c = 0.1; p = 0.05
EPSDE | NP = 50; F ∈ [0.4, 0.9] and CR ∈ [0.1, 0.9] with stepsize = 0.1
SHADE | NP = 100; μCR = μF = 0.5; H = 100; c = 0.1; p = 0.2
L-SHADE | NP = 18×D; μCR = μF = 0.5; H = 100; c = 0.1; p = 0.2


It is interesting to note that EBJADE ranks third in 30 dimensions but is second only to L-SHADE in the other dimensions. Although EBJADE (ranked second overall) trails the first-ranked L-SHADE slightly in average ranking, the difference is not significant. This indicates that EBJADE consistently performs well across multiple dimensions and is competitive with excellent DE variants.

Table 8

Average ranks for all algorithms across all problems for D = 30, 50, and 100 using CEC2014.

Algorithm | 30D | 50D | 100D | Mean Ranking | Rank
L-SHADE | 2.40 | 2.68 | 2.40 | 2.49 | 1
EBJADE | 3.57 | 2.83 | 2.57 | 2.99 | 2
SHADE | 3.20 | 2.98 | 3.18 | 3.12 | 3
SaJADE | 4.13 | 3.45 | 3.53 | 3.70 | 4
jDE | 4.87 | 5.27 | 4.50 | 4.88 | 5
EPSDE | 3.98 | 5.13 | 5.32 | 4.81 | 6
coDE | 5.17 | 5.65 | 6.50 | 5.77 | 7
Friedman-p-value | 0.000 | 0.000 | 0.000 | |


According to the Wilcoxon test shown in Table 9, EBJADE outperforms EPSDE significantly in all dimensions. In the 30-dimensional case, only EPSDE and EBJADE exhibit a significant difference. As the dimension increases, jDE and CoDE show significant differences from EBJADE in the 50 and 100 dimensions, and in the 100-dimensional case SaJADE also demonstrates a significant difference. However, SHADE and L-SHADE do not exhibit any significant differences across all dimensions.

Table 9

Wilcoxon’s test results for EBJADE algorithms and other state-of-the-art DE variants using CEC2014 functions.

Dimension | Algorithms | R+ | R− | p-value | Dec.
30-D | EBJADE vs jDE | 212 | 166 | 0.581 | =
30-D | EBJADE vs SaJADE | 227 | 98 | 0.083 | =
30-D | EBJADE vs coDE | 250.5 | 127.5 | 0.140 | =
30-D | EBJADE vs EPSDE | 254 | 97 | 0.046 | +
30-D | EBJADE vs SHADE | 83.5 | 192.5 | 0.097 | =
30-D | EBJADE vs L-SHADE | 99 | 252 | 0.052 | =
50-D | EBJADE vs jDE | 289 | 62 | 0.004 | +
50-D | EBJADE vs SaJADE | 254 | 124 | 0.118 | =
50-D | EBJADE vs coDE | 361 | 45 | 0.000 | +
50-D | EBJADE vs EPSDE | 328 | 50 | 0.001 | +
50-D | EBJADE vs SHADE | 247 | 104 | 0.069 | =
50-D | EBJADE vs L-SHADE | 132.5 | 218.5 | 0.275 | =
100-D | EBJADE vs jDE | 353 | 82 | 0.002 | +
100-D | EBJADE vs SaJADE | 298 | 108 | 0.031 | +
100-D | EBJADE vs coDE | 413 | 22 | 0.000 | +
100-D | EBJADE vs EPSDE | 378 | 28 | 0.000 | +
100-D | EBJADE vs SHADE | 275 | 121 | 0.062 | =
100-D | EBJADE vs L-SHADE | 139 | 267 | 0.154 | =


S10–S12 Tables present the mean and standard error results of advanced DE algorithm variants on the 30-D, 50-D, and 100-D CEC 2014 benchmark functions, respectively. The results indicate that the EBJADE algorithm improves exploratory capability and overall performance: it either significantly outperforms the other advanced algorithms or performs comparably. In the 30-D tests, more than half of the functions show better performance than the jDE, CoDE, and EPSDE algorithms, while five to six individual functions perform worse. Our proposed algorithm outperforms or is comparable to the SaJADE and SHADE algorithms on two-thirds of the functions. Compared to L-SHADE, one-third of the functions are better, with comparable performance on the rest. Notably, it surpasses all other advanced algorithms on the unimodal function F2 and the multimodal functions F11, F12, F15, and F16.

In the 50-D and 100-D tests, the effectiveness of the EBJADE algorithm is further enhanced. In the 50-D test, it outperforms the jDE, CoDE, and EPSDE algorithms on more than 20 functions: 24 functions compared to CoDE and 23 compared to EPSDE. While the improvement over the SaJADE algorithm is less pronounced, it is still larger than in the 30-D test, and the margin over the SHADE algorithm grows by 6 functions. Compared with L-SHADE, the proposed algorithm also shows improved performance. In the 100-D test, the advantage grows by a further 2, 3, and 3 functions over jDE, CoDE, and EPSDE, respectively, by one function over SaJADE, and by two functions over SHADE, while performance against L-SHADE is comparable to the 50-D case. The results demonstrate that the EBJADE algorithm performs better than the other algorithms on many multimodal functions, and its advantage becomes more pronounced as the dimension increases.

In Figs 2 and 3, EBJADE has the fastest convergence speed on 11 of the 30 functions. Its convergence speed exceeds that of all other algorithms on functions F1, F5, F7, F9, F15, and F17. On functions F9, F10, F11, F12, F15, F16, and F22, the convergence speeds of EBJADE and SHADE are very close, ranking first and slightly faster than the other algorithms. On functions F2, F8, and F18, the convergence rates of EBJADE and SaJADE are not significantly different and are also superior to the other algorithms. EBJADE ranks second on two functions: on F4 and F24, only SaJADE converges slightly faster. EBJADE performs slightly worse on functions F3 and F6, while all algorithms have similar convergence rates on functions F13, F14, F19, F23–F28, and F30. SHADE and SaJADE also perform relatively well, with SHADE first or tied for first on 10 functions and SaJADE first or tied for first on 8 functions.


Fig 2

Convergence graphs of CEC2014 F1–F15 benchmark functions with 50 variables.


Fig 3

Convergence graphs of CEC2014 F16–F30 benchmark functions with 50 variables.

Comparison against CMA-ES and variants

In this section, we compare EBJADE with CMA-ES and its variant ICMAES-ILS, which combines IPOP-CMA-ES with an iterated local search method, using CEC2014 for dimensions D = 50 and 100. For ICMAES-ILS, the initial step size is set to 120. The Wilcoxon’s test results for EBJADE and these state-of-the-art CMA-ES algorithms are shown in Table 10.

Table 10

Wilcoxon’s test results for EBJADE algorithms and other state-of-the-art CMA-ES algorithms using CEC2014 functions.

Dimension | Algorithms | R+ | R− | p-value | Dec.
50-D | EBJADE vs CMAES | 312 | 153 | 0.102 | =
50-D | EBJADE vs ICMAESILS | 138 | 213 | 0.341 | =
100-D | EBJADE vs CMAES | 263 | 202 | 0.530 | =
100-D | EBJADE vs ICMAESILS | 187 | 238 | 0.510 | =


S13 and S14 Tables present the mean and standard error results of CMA-ES and its variant on the 50-D and 100-D CEC 2014 benchmark functions, respectively. EBJADE is competitive with these comparison algorithms. In the 50-D tests, nearly half of the functions show better or comparable performance relative to the CMA-ES and ICMAES-ILS algorithms: the proposed algorithm has better results than CMA-ES on two-thirds of the functions and better results than ICMAES-ILS on one-third. In the 100-D tests, it shows similar competitiveness. Notably, it surpasses all the other advanced algorithms on functions F8, F10, F16, F18, F21, and F28.

As can be seen from the comparison in Table 10, none of the statistics are significant, meaning that EBJADE does not significantly outperform the comparison algorithms. A larger R+ relative to R− indicates that the first algorithm performs better than the second on more functions. Against CMA-ES, R+ exceeds R− in all dimensions, while against ICMAES-ILS, R− is larger; this indicates that EBJADE’s performance on CEC2014 is slightly better than CMA-ES but slightly inferior to ICMAES-ILS, more noticeably at 50-D. Overall, however, the optimization performance on the CEC 2014 benchmark is similar. The numerical routine of covariance matrix decomposition usually requires O(n^3) computation in an n-dimensional search space [48]. S15 Table presents the computation time comparison between CMA-ES and EBJADE on the 30 benchmark functions of CEC2014 at 50-D and 100-D. The comparison was run on Windows 10 with an Intel(R) Core(TM) i7-6700 2.60 GHz processor and 16 GB RAM, with the algorithms written in C++ using Code::Blocks. Therefore, from the perspective of time complexity and computation time, the EBJADE algorithm is competitive.

Conclusion

To improve the overall performance of the basic DE algorithm, this paper introduces a new mutation strategy to improve the JADE algorithm, a multi-population mechanism to allocate computing resources, and a regeneration mechanism for elite solutions. The combination pairs JADE’s strategy with the new strategy, one with strong exploration ability and the other with strong exploitation ability. The proposed mutation strategy enhances global and local search capabilities and improves convergence speed. According to the POP, the neighborhood of exceptional individuals represents a highly promising search space capable of generating superior solutions.

Numerous experiments on the CEC 2014 benchmark suite show that EBJADE outperforms several efficient and popular DE variants, and performs comparably to CMA-ES and its variants with no significant differences. Experimental analysis shows that different mutation strategies generally require different control parameters. Two new parameters are introduced in EBJADE: the ratio of the reward subpopulation to the overall population, and the generation gap used to periodically determine the recent best mutation strategy. Experiments show that EBJADE is not sensitive to these two new parameters on most benchmark functions.

Supporting information

S1 Table

Comparison of the effectiveness of the mechanisms using CEC2014 functions with 30 variables.

(PDF)

S2 Table

Comparison of the effectiveness of the mechanisms using CEC2014 functions with 50 variables.

(PDF)

S3 Table

Comparison of the effectiveness of the mechanisms using CEC2014 functions with 100 variables.

(PDF)

S4 Table

The computation results of EBJADE with different settings of scale parameters for CEC2014 functions with 30 variables.

(PDF)

S5 Table

The computation results of EBJADE with different settings of scale parameters for CEC2014 functions with 50 variables.

(PDF)

S6 Table

The computation results of EBJADE with different settings of scale parameters for CEC2014 functions with 100 variables.

(PDF)

S7 Table

The computation results of EBJADE with different settings of NP for CEC2014 functions with 30 variables.

(PDF)

S8 Table

The computation results of EBJADE with different settings of NP for CEC2014 functions with 50 variables.

(PDF)

S9 Table

The computation results of EBJADE with different settings of NP for CEC2014 functions with 100 variables.

(PDF)

S10 Table

Test results for EBJADE algorithms and other state-of-the-art DE variants using CEC2014 functions with 30 variables.

(PDF)

S11 Table

Test results for EBJADE algorithms and other state-of-the-art DE variants using CEC2014 functions with 50 variables.

(PDF)

S12 Table

Test results for EBJADE algorithms and other state-of-the-art DE variants using CEC2014 functions with 100 variables.

(PDF)

S13 Table

Test results for EBJADE algorithms and other state-of-the-art CMA-ES algorithms using CEC2014 functions with 50 variables.

(PDF)

S14 Table

Test results for EBJADE algorithms and other state-of-the-art CMA-ES algorithms using CEC2014 functions with 100 variables.

(PDF)

S15 Table

Computing time results for EBJADE and CMA-ES algorithms using CEC2014 functions with 50 and 100 variables.

(PDF)

Funding Statement

This work was supported by the Natural Science Foundation of Liaoning Province (2023-MS-222); the Scientific Research Fund of Liaoning Provincial Education Department (LJKMZ20220916, LJKZ0583); and the Department of Science and Technology of Liaoning Province (2022JH2/101300253). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. None of the authors received a salary from any funder.

Data Availability

All relevant data are within the manuscript and its Supporting Information files.

References

1. Storn R. and Price K., “Differential Evolution—A simple and efficient adaptive scheme for global optimization over continuous spaces,” International Computer Science Institute Technical Report, Tech. Rep. TR-95-012,1995. [Google Scholar]

2. Storn R., Price K., Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces, J. Global Optim. 1997;11 (4): 341–359. [Google Scholar]

3. Price K., Storn R., and Lampinen J., “Differential evolution: a practical approach to global optimization”, Heidelberg: Springer; 2005. [Google Scholar]

4. Jena C., Basu M., Panigrahi C.K., Differential evolution with Gaussian mutation for combined heat and power economic dispatch, Soft Comput. 2016; 20 (2): 681–688. [Google Scholar]

5. Mlakar U., Fister I., Brest J., Potočnik B., Multi-objective differential evolution for feature selection in facial expression recognition systems, Expert Syst. Appl. 2017; 89: 129–137. [Google Scholar]

6. Qi J., Xu B., Xue Y., Wang K., Sun Y., Knowledge based differential evolution for cloud computing service composition, J. Ambient Intell. Human. Comput. 2018; 9: 565–574. [Google Scholar]

7. Lin L., Zhu M., Efficient tracking of moving target based on an improved fast differential evolution algorithm, IEEE Access. 2018; 6: 1–1. [Google Scholar]

8. Das S., Mullick S.S., Suganthan P.N., Recent advances in differential evolution-an updated survey, Swarm Evolu t. Comput. 2016; 27: 1–30. [Google Scholar]

9. Zhong X., Duan M., Zhang X., Cheng P., A hybrid differential evolution based on gaining-sharing knowledge algorithm and Harris hawks optimization. PLoS One. 2021; 16(4): e0250951. [PMC free article] [PubMed] [Google Scholar]

10. Li X., Yin M., Modified differential evolution with self-adaptive parameters method, J. Combin. Optim. 2014; 31 (2): 546–576. [Google Scholar]

11. Wang Y., Li X H., Huang T., Li L., Differential evolution based on covariance matrix learning and bimodal distribution parameter setting, Appl. Soft Comput. 2014; 18: 232–247. [Google Scholar]

12. Mohamed AW, Hadi AA, Jambi KM. Novel mutation strategy for enhancing SHADE and LSHADE algorithms for global numerical optimization.Swarm and Evolutionary Computation. 2019; 50:100455. [Google Scholar]

13. Das S., Abraham A., Chakraborty U.K., Konar A., Differential evolution using a neighborhood based mutation operator, IEEE Trans. Evol. Comput. 2009; 13 (3): 526–553. [Google Scholar]

14. Larraanaga P., Lozano J.A., Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation, Kluwer Academic Publishers, 2001. [Google Scholar]

15. Liang X., Chen H., Lozano J.A., A Boltzmann-based estimation of distribution algorithm for a general resource scheduling model, IEEE Trans. Evol. Comput. 2015; 19 (6): 793–806. [Google Scholar]

16. Zhou A., Sun J., Zhang Q., An estimation of distribution algorithm with cheap and expensive local search methods, IEEE Trans. Evol. Comput. 2015; 19 (6): 807–822. [Google Scholar]

17. Hauschild M., Pelikan M., An introduction and survey of estimation of distribution algorithms, Swarm Evol. Comput. 2011; 1 (3): 111–128. [Google Scholar]

18. Das S., Suganthan P.N., Differential evolution: a survey of the state-of-the-art, IEEE Trans. Evol. Comput. 2011; 15 (1): 4–31. [Google Scholar]

19. Das S., Mullick S.S., Suganthan P.N., Recent advances in differential evolution-an updated survey, Swarm Evolut. Comput. 2016; 27: 1–30. [Google Scholar]

20. Al-Dabbagh R.D., Neri F., Idris N., Baba M.S., Algorithm design issues in adaptive differential evolution: review and taxonomy,Swarm Evolut. Comput. 2018; 43: 284–311. [Google Scholar]

21. Zhang J., Sanderson A.C., JADE: adaptive differential evolution with optional external archive, IEEE Trans. Evol. Comput. 2009; 13 (5): 945–958. [Google Scholar]

22. Qin A. K. and Suganthan P. N., “Self-adaptive differential evolution algorithm for numerical optimization,” in Proc. IEEE CEC, 2005,pp. 1785–1791. [Google Scholar]

23. Qin A. K., Huang V. L., and Suganthan P. N., “Differential evolution algorithm with strategy adaptation for global numerical optimization,” IEEE Trans. Evol. Comput., 2009; 13 (2): 398–417. [Google Scholar]

24. Brest J., Greiner S., Boskovic B., Mernik M., Zumer V., Self-adapting control parameters in differential evolution: a comparative study on numerical bench- 481 mark problems, IEEE Trans. Evol. Comput. 2006; 10: 646–657. [Google Scholar]

25. Gong W., Cai Z., Ling C. X. and Li H., "Enhanced Differential Evolution With Adaptive Strategies for Numerical Optimization," in IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics),2011; 41, (2): 397–413. doi: 10.1109/TSMCB.2010.2056367 [PubMed] [CrossRef] [Google Scholar]

26. Zhong W., Liu J., Xue M., and Jiao L., “A multiagent genetic algorithm for global numerical optimization,” IEEE Trans. Syst., Man, Cybern. B, Cybern., 2004. 34, (2): 1128–1141. doi: 10.1109/tsmcb.2003.821456 [PubMed] [CrossRef] [Google Scholar]

27. Tanabe R., Fukunaga A., Success-history based parameter adaptation for differential evolution, in: Proceedings of the IEEE Congress on Evolutionary Computation, 2013: 71–78. [Google Scholar]

28. Tanabe R., Fukunaga A., Improving the Search Performance of SHADE Using Linear Population Size Reduction, in Proc. IEEE CEC, 2014, pp. 1658–1665. [Google Scholar]

29. Meng Z., Yang C., Hip-DE: historical population based mutation strategy in differential evolution with parameter adaptive mechanism. Information Sciences. 2021; 562: 44–77. [Google Scholar]

30. Xxa C, Lei T B, Yza C, et al. NFDDE: A novelty-hybrid-fitness driving differential evolution algorithm. Information Sciences, 2021;579:33–54. [Google Scholar]

31. Wang Xianpeng, and Tang L. An adaptive multi-population differential evolution algorithm for continuous multi-objective optimization. Information Sciences, 2016:124–141. [Google Scholar]

32. Wu G, Shen X, Li H, Chen H, Lin A, Suganthan P. Ensemble of differential evolution variants. Information Sciences. 2018; 423:172–186 [Google Scholar]

33. Wang Y., Li X H., Huang T., Li L., Differential evolution based on covariance matrixlearning and bimodal distribution parameter setting, Appl. Soft Comput.2014; 18: 232–247. [Google Scholar]

34. Mallipeddi R., Suganthan P.N., Pan Q.K., Tasgetiren M.F., Differential evolution algorithm with ensemble of parameters and mutation strategies, Appl. Soft Comput. 2011; 11 (2): 1679–1696. [Google Scholar]

35. Li Xiaoyu, Wang Lei, Jiang Qiaoyong, Li Ning, Differential evolution algorithm with multi-population cooperation and multi-strategy integration, Neurocomputing, Volume 421, 2021, Pages 285–302. [Google Scholar]

36. Deng L B, Zhang L L, Fu N, et al. ERG-DE: An Elites Regeneration Framework for Differential Evolution.Information Sciences, 2020; 539:81–103. [Google Scholar]

37. Tian M, Gao X, Yan X. Performance-driven adaptive differential evolution with neighborhood topology for numerical optimization. Knowledge-Based Systems, 2019; 188:105008. [Google Scholar]

38. Bing Dong, Zhou A. and Guixu Zhang, "A hybrid estimation of distribution algorithm with differential evolution for global optimization," 2016 IEEE Symposium Series on Computational Intelligence (SSCI), Athens, Greece, 2016, pp. 1–7. [Google Scholar]

39. Zhao Fuqing, Shao Zhongshi, Wang Junbiao, and Zhang Chuck. ‘A Hybrid Differential Evolution and Estimation of Distribution Algorithm Based on Neighbourhood Search for Job Shop Scheduling Problems’. International Journal of Production Research 54, no. 4 (16February2016): 1039–60. [Google Scholar]

40. Fan Debin, Lee Jaewan. A Hybrid Estimation of Distribution Algorithm with Differential Evolution based on Self-adaptive Strategy. J. Internet Comput. Serv. 2021. [Google Scholar]

41. Li Yintong, Han Tong, Tang Shangqin, Huang Changqiang, Zhou Huan, Wang Yuan, An improved differential evolution by hybridizing with estimation-of-distribution algorithm, Information Sciences, Volume 619, 2023, Pages 439–456. [Google Scholar]

42. Feoktistov V., Differential Evolution: in Search of Solutions, Springer-verlag,Berlin,Germany, 2006. [Google Scholar]

43. Glover F., Laguna M., Tabu Search Background, Springer, US, 1997. [Google Scholar]

44. Liang J.J., Qu B.Y., Suganthan P.N., Problem definitions and evaluation criteria for the CEC2014. special session and competition on single objective realparameter numerical optimization. [Google Scholar]

45. Karmakar B. et al., CMA-ES with exponential based multiplicative covariance matrix adaptation for global optimization, Swarm and Evolutionary Computation. 2023; 79. [Google Scholar]

46. Awad Noor H., Ali Mostafa Z., Suganthan Ponnuthurai N., Ensemble of parameters in a sinusoidal differential evolution with niching-based population reduction, Swarm and Evolutionary Computation. 2018; 39. [Google Scholar]

47. García S., Molina D., Lozano M., Herrera F., A study on the use of non-parametrictests for analyzing the evolutionary algorithms’ behavior: a case study on theCEC’2005 special session on real parameter optimization, J. Heuristics2009; 15: 617–644. [Google Scholar]

48. Z. Li, Q. Zhang, An efficient rank-1 update for Cholesky CMA-ES using auxiliary evolution path, in: 2017 IEEE Congress on Evolutionary Computation, CEC, IEEE, 2017, pp. 913–920.

Articles from PLOS ONE are provided here courtesy of PLOS
