Evolutionary Algorithm

In artificial intelligence, an evolutionary algorithm (EA) is a subset of evolutionary computation, a generic population-based metaheuristic optimization algorithm. An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions (see also loss function). Evolution of the population then takes place through repeated application of the above operators. Evolutionary algorithms often perform well at approximating solutions to all types of problems because they ideally make no assumptions about the underlying fitness landscape. Techniques from evolutionary algorithms applied to the modeling of biological evolution are generally limited to explorations of microevolutionary processes and planning models based upon cellular processes. In most real applications of EAs, computational complexity is a prohibiting factor; this complexity is mostly due to fitness function evaluation, and fitness approximation is one way to overcome the difficulty. However, a seemingly simple EA can often solve highly complex problems, so there may be no direct link between algorithm complexity and problem complexity.
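The loop described above — evaluate the fitness of each candidate, select parents, recombine, mutate, and repeat — is compact enough to spell out. Below is a minimal illustrative sketch in Python; the bit-string encoding, the OneMax fitness, and all rates are arbitrary choices for demonstration, not taken from any of the papers listed on this page.

```python
import random

def evolve(fitness, n_bits=20, pop_size=30, generations=50,
           cx_prob=0.7, mut_prob=0.02, tournament_k=3):
    """Minimal generational EA over fixed-length bit strings."""
    # Initial population of random candidate solutions.
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]
        new_pop = []
        while len(new_pop) < pop_size:
            # Selection: two tournaments of size k, each keeping the fitter contender.
            parents = []
            for _ in range(2):
                contenders = random.sample(range(pop_size), tournament_k)
                parents.append(pop[max(contenders, key=lambda i: scores[i])])
            # Recombination: one-point crossover with probability cx_prob.
            child = parents[0][:]
            if random.random() < cx_prob:
                cut = random.randrange(1, n_bits)
                child = parents[0][:cut] + parents[1][cut:]
            # Mutation: independent bit flips.
            child = [1 - g if random.random() < mut_prob else g for g in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Toy usage: maximize the number of ones in the string ("OneMax").
best = evolve(fitness=sum)
print(best, sum(best))
```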
National Conference on Artificial Intelligence, (2020)
We propose an anytime algorithm EAMC for the problem of maximizing monotone functions with monotone cost constraints
Cited by 0
Esteban Real, Alok Aggarwal, Yanping Huang, Quoc V. Le
National Conference on Artificial Intelligence, (2019)
A variant of tournament selection by which genotypes die according to their age, favoring the young
Cited by 676
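The age-based tournament selection summarized in the entry above (the mechanism behind what Real et al. call regularized or aging evolution) can be sketched as follows: the population is a queue, parents are chosen by tournament, and each cycle the oldest individual is discarded instead of the worst one. The `random_genotype`, `mutate`, and `fitness` callables below are placeholders, not the authors' architecture-search code.

```python
import random
from collections import deque

def aging_evolution(random_genotype, mutate, fitness,
                    pop_size=50, sample_size=10, cycles=1000):
    """Tournament selection in which the oldest genotype dies each cycle."""
    population = deque()  # left = oldest, right = youngest
    while len(population) < pop_size:
        g = random_genotype()
        population.append((g, fitness(g)))
    best = max(population, key=lambda p: p[1])
    for _ in range(cycles):
        # Tournament: sample a few genotypes, the fittest becomes the parent.
        sample = random.sample(list(population), sample_size)
        parent = max(sample, key=lambda p: p[1])
        child_genotype = mutate(parent[0])
        child = (child_genotype, fitness(child_genotype))
        population.append(child)   # the youngest joins on the right
        population.popleft()       # the oldest dies, regardless of fitness
        if child[1] > best[1]:
            best = child
    return best

# Toy usage on bit strings: the mutation flips a single random bit.
new_genotype = lambda: [random.randint(0, 1) for _ in range(20)]
def flip_one(g):
    g = g[:]
    i = random.randrange(len(g))
    g[i] = 1 - g[i]
    return g
print(aging_evolution(new_genotype, flip_one, sum)[1])
```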
Journal of Machine Learning Research, no. 55 (2019)
Neural Architecture Search can be seen as a subfield of AutoML and has significant overlap with hyperparameter optimization and meta-learning
Cited by 346
IEEE Transactions on Evolutionary Computation, no. 6 (2019): 921-934
We present an evolutionary generative adversarial network (GAN) framework for training deep generative models
Cited by 63
Shauharda Khadka, Somdeb Majumdar, Santiago Miret, Evren Tumer, Tarek Nassar, Zach Dwiel, Yinyin Liu, Kagan Tumer
arXiv: Learning, (2019)
Experiments in continuous control demonstrate that Collaborative Evolutionary Reinforcement Learning’s emergent learner can outperform its composite learners while remaining overall sample-efficient compared to traditional approaches
Cited by 14
Tran Ba Trung, Le Tien Thanh, Ly Trung Hieu, Pham Dinh Thanh, Huynh Thi Thanh Binh
Proceedings of the Tenth International Symposium on Information and Communication Technology, pp.170-177, (2019)
This paper proposes a multitask optimization algorithm within the general Multifactorial Evolutionary Algorithm framework to solve multiple instances of the CluMRCT problem together
Cited by 0
Zhichao Lu, Ian Whalen, Vishnu Boddeti, Yashesh D. Dhebar, Kalyanmoy Deb, Erik D. Goodman, Wolfgang Banzhaf
arXiv: Computer Vision and Pattern Recognition, (2018)
NSGA-Net affords a number of practical benefits: the design of neural network architectures that can effectively optimize and trade off multiple, possibly competing, objectives; advantages afforded by population-based methods being more effective than optimizing weighted linear c...
Cited by 33
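The multi-objective trade-off that the NSGA-Net entry above contrasts with weighted linear combinations rests on Pareto dominance: one solution beats another only if it is no worse in every objective and strictly better in at least one. The following is a generic, minimal illustration (all objectives assumed to be minimized), not the NSGA-Net implementation.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_front(points):
    """Return the objective vectors that no other point dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Toy usage with (error, model size) pairs: the first two trade off, the third is dominated.
print(non_dominated_front([(0.10, 5.0), (0.08, 9.0), (0.12, 6.0)]))
```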
NeurIPS, (2018)
The stochastic gradient descent (SGD) step can be interpreted as a coevolution mechanism in which individuals under distinct optimizers evolve independently and interact with each other in the evolution step to hopefully create promising candidate solutions for the next generation
Cited by 15
arXiv: Neural and Evolutionary Computing, (2018)
We present an evolutionary algorithm to find better deep convolutional neural network topologies
Cited by 6
Yukang Chen, Qian Zhang, Chang Huang, Lisen Mu, Gaofeng Meng, Xinggang Wang
arXiv: Neural and Evolutionary Computing, (2018)
We have proposed a method for neural architecture search that integrates an evolutionary algorithm and reinforcement learning into a unified framework
Cited by 1
Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Quoc V. Le, Alex Kurakin
ICML, (2017)
In this paper we have shown that neuro-evolution is capable of constructing large, accurate networks for two challenging and popular image classification benchmarks; neuro-evolution can do this starting from trivial initial conditions while searching a very large space; the proce...
Cited by 637
Risto Miikkulainen, Jason Zhi Liang, Elliot Meyerson, Aditya Rawal, Dan Fink, Olivier Francon, Bala Raju, Hormoz Shahrzad, Arshak Navruzyan, Nigel Duffy, Babak Hodjat
arXiv: Neural and Evolutionary Computing, (2017)
The results in this paper show that the evolutionary approach to optimizing deep neural networks is feasible: the results are comparable to hand-designed architectures on benchmark tasks, and it is possible to build real-world applications based on the approach
Cited by 406
Omer Berat Sezer, Murat Ozbayoglu, Erdogan Dogdu
Procedia Computer Science, (2017): 473-480
In this study we propose a model that combines genetic algorithms and neural networks in a stock trading system in such a way that the features provided to the neural network are the optimized technical analysis buy-sell trigger points
Cited by 4
International Conference on Machine Learning, (2015)
We evaluated a Long Short-Term Memory (LSTM) without input gates, an LSTM without output gates, and an LSTM without forget gates
Cited by 1167
IEEE Transactions on Evolutionary Computation, no. 2 (2015): 167-187
A similar method was presented by Nadi and Khader, with the difference that the sampling process is merged with crossover: two parents create two offspring; alleles that are common to both parents are maintained while the rest are set according to a probability vector derived from ...
Cited by 261
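The operator paraphrased in the entry above — crossover merged with probability-vector sampling — can be sketched for binary strings: positions where the two parents agree are copied unchanged, and the remaining positions are sampled from a probability vector. How that vector is derived is truncated in the summary, so the per-gene frequency of ones used below is only an assumption for illustration.

```python
import random

def merged_crossover(parent_a, parent_b, prob_vector):
    """Two offspring: shared alleles are kept, disagreeing positions are sampled from prob_vector."""
    offspring = []
    for _ in range(2):
        child = [a if a == b else int(random.random() < p)
                 for a, b, p in zip(parent_a, parent_b, prob_vector)]
        offspring.append(child)
    return offspring

def frequency_vector(population):
    """Assumed probability vector: frequency of allele 1 at each position in the population."""
    n = len(population)
    return [sum(ind[i] for ind in population) / n for i in range(len(population[0]))]
```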
IEEE Transactions on Evolutionary Computation, no. 99 (2014): 1
The adaptive strategy attempts to maintain a proper ratio of the identified knee points to all non-dominated solutions in each front by adjusting the size of the neighborhood of each solution in which the solution having the maximum distance to the hyperplane is identified as the...
Cited by 192
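The knee-point criterion mentioned in the entry above can be illustrated for two objectives: take the line through the two extreme solutions of a non-dominated front and mark the solution farthest from that line as the knee. This is only a simplified sketch of the distance-to-hyperplane idea; the paper's adaptive neighborhood sizing is omitted.

```python
def knee_point(front):
    """Point of a 2-objective front (minimization) farthest from the line through its extremes."""
    e1 = min(front, key=lambda p: p[0])   # best in the first objective
    e2 = min(front, key=lambda p: p[1])   # best in the second objective
    # Line a*x + b*y + c = 0 through the two extreme points.
    a = e2[1] - e1[1]
    b = e1[0] - e2[0]
    c = -(a * e1[0] + b * e1[1])
    distance = lambda p: abs(a * p[0] + b * p[1] + c) / (a * a + b * b) ** 0.5
    return max(front, key=distance)

# Toy front: the middle point bulges toward the origin and is identified as the knee.
print(knee_point([(0.0, 1.0), (0.2, 0.3), (1.0, 0.0)]))
```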
IEEE Transactions on Evolutionary Computation, no. 3 (2014): 445-460
Our experimental results are compared with the reported results of DBEA-Eps and those of NSGA-III and the Multi-objective Evolutionary Algorithm Based on Decomposition-PBI for the DTLZ1-DTLZ4 problems with 3, 5, 8, 10 and 15 objectives, and further compared with the results obtained from...
Cited by 173
IJDMMM, no. 3 (2013): 261-276
We propose a new global approach to model tree learning and compare it with classical top-down inducers
Cited by 13
Evolutionary Computation, no. 1 (2012): 91-133
All multi-objective diversity treatments based on behavioral distances dominate the genotypic diversity treatments, the control experiment, and NEAT
Cited by 200
GECCO (Companion), (2012)
This paper has described a novel framework named DEAP, which combines the flexibility and power of the Python programming language with a clean and lean core of transparent Evolutionary Computation components that facilitate rapid prototyping of new Evolutionary Algorithms, and pro...
Cited by 48
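For readers unfamiliar with DEAP, the framework described in the entry above, a minimal run looks roughly like the following OneMax example, adapted from the pattern in DEAP's documentation; the parameter values are arbitrary.

```python
import random
from deap import algorithms, base, creator, tools

# Maximize a single fitness value; individuals are lists of bits.
creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Individual", list, fitness=creator.FitnessMax)

toolbox = base.Toolbox()
toolbox.register("attr_bit", random.randint, 0, 1)
toolbox.register("individual", tools.initRepeat, creator.Individual, toolbox.attr_bit, 50)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("evaluate", lambda ind: (sum(ind),))   # OneMax: count the ones
toolbox.register("mate", tools.cxTwoPoint)
toolbox.register("mutate", tools.mutFlipBit, indpb=0.05)
toolbox.register("select", tools.selTournament, tournsize=3)

pop = toolbox.population(n=100)
pop, _ = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2, ngen=30, verbose=False)
print(max(sum(ind) for ind in pop))
```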
Keywords
Evolutionary Algorithm, Evolutionary Computation, Evolutionary Algorithms, Genetic Algorithms, Genetic Algorithm, Multiobjective Optimization, Optimization Problem, Search Space, Convergence, Fitness Function
Authors
Kalyanmoy Deb (5 papers)
Lothar Thiele (4 papers)
Zixing Cai (3 papers)
Yong Wang (3 papers)
Qingfu Zhang (2 papers)
Quoc V. Le (2 papers)
Yuren Zhou (2 papers)
Tapabrata Ray (2 papers)
Ben Paechter (2 papers)
Marcin Czajkowski (2 papers)