From the introduction of a manuscript that I recently submitted for review
The practice of machine learning research can be characterized as the effective, semi-principled reduction of learning problems to problems for which robust and efficient solution techniques exist – ideally ones with provable bounds on their use of time and space. In a recent paper, Bennett and Parrado-Hernández (2006) describe the synergistic relationship between the fields of machine learning (ML) and mathematical programming (MP). They remark:
“Optimization lies at the heart of machine learning. Most machine learning problems reduce to optimization problems. Consider the machine learning analyst in action solving a problem for some set of data. The modeler formulates the problem by selecting an appropriate family of models and massages the data into a format amenable to modeling. Then the model is typically trained by solving a core optimization problem that …” Continue reading “Optimization, Adaptation, Machine Learning and Evolutionary Computation” →
Adaptation in selecto-recombinative genetic systems is widely believed to occur by the recombination of pre-adapted genetic material. This belief is at the core of the paradigm under which most GA and all EDA research currently occurs. It underlies the construction of several new varieties of genetic algorithms that purportedly work by combining pre-adapted genetic material in some sophisticated way (e.g. cohort GAs, messy GAs, LLGA, ECGA, BOA, hBOA, etc.).
In this paradigm, each post-selection population is thought to harbor “good” genetic material. Recombination operators, and estimation of distribution procedures, are thought to drive adaptation by composing this material to produce good or better individuals in the next generation. When adaptation stalls, it is thought to be because “good” genetic material is unavailable, or because recombination of this material was not performed effectively.
Let us call this general set of beliefs the Compositional Paradigm. This paradigm draws its support from Holland’s Building Block Hypothesis. Its widespread acceptance in the GA community signals Continue reading “Critique of the Compositional Paradigm” →
VectorGA is a vectorized implementation of a genetic algorithm in the Matlab programming language.
Matlab is optimized for performing operations on arrays. Loops, especially nested loops, tend to run slowly in Matlab. It is possible to significantly improve the performance of Matlab programs by converting loops into array operations. This process is called vectorization. Matlab provides a rich set of functions and many expressive indexing schemes that make it possible to vectorize code. Such code not only runs faster, it is also shorter, and simpler to understand and change (provided that you know a little about Matlab of course).
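To make the idea concrete, here is a small sketch (not code from VectorGA itself; the population size and bit count are arbitrary) that evaluates the one-max fitness of a binary population first with nested loops, then with a single vectorized call:

```matlab
% A 50-by-100 logical matrix: 50 individuals, 100 bits each.
pop = rand(50, 100) < 0.5;

% Loop version: count the ones in each row, one bit at a time.
fitnessLoop = zeros(50, 1);
for i = 1:50
    for j = 1:100
        fitnessLoop(i) = fitnessLoop(i) + pop(i, j);
    end
end

% Vectorized version: sum each row along the second dimension.
fitnessVec = sum(pop, 2);
```

Both versions produce identical fitness vectors, but the vectorized one replaces 5000 loop iterations with one array operation.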
Genetic algorithms that are implemented in C/C++ or Java typically have multiple nested loops. Direct ports of such implementations to Matlab will therefore run very slowly. Many of the nested loops found in a typical GA implementation have been eliminated from VectorGA. The resulting code is short, fast, and simple. It is a neat coincidence when the constructs of a programming language match a programming task so well that a program can be written this succinctly.
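As an illustration of how such loops can be eliminated, here is a sketch (again, not taken verbatim from VectorGA; the parameter values are assumptions) of bit-flip mutation applied to an entire population at once using a logical mask instead of a per-bit loop:

```matlab
% Vectorized bit-flip mutation: flip each bit independently with
% probability pm, across the whole population in one operation.
popSize = 50; bits = 100; pm = 0.01;
pop  = rand(popSize, bits) < 0.5;   % random binary population
mask = rand(popSize, bits) < pm;    % true where a bit should flip
pop  = xor(pop, mask);              % flip all masked bits at once
```

The same pattern (build a logical mask, then apply one array operation) covers selection and crossover as well, which is why a vectorized GA needs so few explicit loops.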
VectorGA is proof that Matlab is a useful language for the rapid prototyping of genetic algorithms. This, in addition to Matlab’s extensive data visualization capabilities, makes Matlab an extremely useful platform for the experimental analysis of GAs.
VectorGA has been created and tested under Matlab 7 (R14). Out of the box it evolves a population against the one-max fitness function. The royal-roads fitness function has also been included but is not currently being called. If you find VectorGA useful, or find any bugs, please let me know.
Original vectorGA release site
The latest version of VectorGA is SpeedyGA