In many domains, biological systems have adaptive capacities that far surpass those of man-made systems. My research is based on the hunch that there is a small number of core computational problems that nature has discovered how to tackle remarkably scalably, and that most of the more striking forms of adaptation in the natural world reduce, ultimately, to one or more of these core efficiencies. I’m interested in identifying such efficiencies, and in harnessing them to tackle some of the pressing challenges that computer scientists face when engineering adaptive systems.
Over the past decade I’ve focused on the genetic algorithm. I find it striking that this simple, biologically plausible model of evolution routinely produces good, often excellent, solutions to a wide range of optimization problems within relatively short timeframes. By imitating natural evolution, genetic algorithms seem to harness the core computational efficiency underlying its remarkable adaptive capacity.
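For readers unfamiliar with the genetic algorithm, the following is a minimal sketch of the canonical version discussed here: a population of bitstrings evolved by fitness-based selection, crossover, and mutation. The function names, parameter settings, and the OneMax benchmark (maximize the number of 1-bits) are my own illustrative choices, not drawn from the text.

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=60, generations=60,
                      crossover_rate=0.9, mutation_rate=0.01, seed=0):
    """A bare-bones generational GA over fixed-length bitstrings."""
    rng = random.Random(seed)
    # Initialize a random population.
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament(k=2):
        # Selection: return the fitter of k randomly chosen individuals.
        return max(rng.sample(pop, k), key=fitness)

    for _ in range(generations):
        next_pop = []
        while len(next_pop) < pop_size:
            a, b = tournament(), tournament()
            if rng.random() < crossover_rate:
                # Uniform crossover: each bit comes from either parent.
                child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            else:
                child = a[:]
            # Bit-flip mutation at each locus with small probability.
            child = [bit ^ 1 if rng.random() < mutation_rate else bit
                     for bit in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# OneMax: the fitness of a bitstring is simply its number of 1-bits.
best = genetic_algorithm(sum, n_bits=30)
```

Even this stripped-down version reliably pushes the population toward high-fitness strings in a few dozen generations, which is the behavior whose underlying cause the hypotheses below attempt to explain.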
The generative fixation hypothesis is a new account of the nature of this core efficiency. It is based on evidence that genetic algorithms can implement a stochastic non-local search heuristic, called hyperclimbing, extraordinarily scalably. The generative fixation hypothesis departs fundamentally from the reigning hypothesis in genetic algorithmics, the building block hypothesis. The potential impact of this new hypothesis extends beyond genetic algorithmics to fields such as evolutionary computation, optimization, machine learning, theoretical computer science, and evolutionary biology.