Yesterday, I presented my research on genetic algorithms at the University of Washington.
More Debate, Please!
“… there are many issues in computing that inspire differing
opinions. We would be better off highlighting the differences
rather than pretending they do not exist”
–Moshe Y. Vardi
In an article entitled “More Debate, Please!” in the January 2010 issue of Communications of the ACM, Moshe Y. Vardi, editor-in-chief of Communications, writes:
‘Vigorous debate, I believe, exposes all sides of an issue—their strengths and weaknesses. It helps us reach more knowledgeable conclusions. To quote Benjamin Franklin: “When Truth and Error have fair play, the former is always an overmatch for the latter.”’[1]
Vardi goes on to say that as he solicited ideas for the 2008 relaunch of Communications, he was frequently told to keep controversial topics front and center. “Let blood spill over the pages of Communications,” a member of a focus group colorfully urged [1].
When attempting to publish my doctoral research in evolutionary computation journals, I found the sentiments expressed by Vardi to be in short supply. The reviewers seemed much more invested in not rocking the boat than in fostering a climate in which prevailing assumptions can be challenged, and alternate ideas expressed transparently. They seemed, in short, to be inured to the poverty of the field’s foundations, and, for the most part, had little tolerance for someone with a bone to pick with the status quo. “Fall in line, or have your work be rejected,” was the overarching message.
One way this unfortunate state of affairs may be addressed is through the institution of a forum like the Point/Counterpoint section introduced to Communications by Vardi in 2008—a forum where the various controversies that mark our field are periodically featured, and the different sides of each controversy given, as Benjamin Franklin put it, “fair play”. There are several contentious topics in evolutionary computation (EC). Tapped correctly, many of these topics can be powerful vehicles for learning—not just about the workings of evolutionary algorithms, but also about the workings of a vibrant intellectual community. Right now, instead of vigorous, open, ongoing debates in the EC literature, uneasy truces prevail. The community, by and large, steps around the really big points of contention. Researchers talk past each other to niche audiences. And, if my experience is anything to go by, new lines of criticism and new modes of analysis are hastily dismissed.
In the absence of a written record of ongoing controversies, new entrants to the field will not have access to the various positions involved. Pressed for time, and confronting the reality of “publish or perish”, most will fall back on the opinions and practices of their advisors. It doesn’t take much to see that in an environment like this, opportunities for learning and advancement will frequently be missed.
A forum for open, ongoing, collegial debate would bring awareness and transparency to the controversies in our field. It would also (one hopes) inculcate a more welcoming attitude toward alternate approaches, conclusions, and critiques.
Two topics for debate:
EC Theory and First Hitting Time: Is it problematic that so much contemporary theoretical work in EC focuses on “first hitting time”, i.e., the number of fitness evaluations required to find a global optimum? Do we look at first hitting time only because there currently isn’t a well-developed, generally accepted theoretical framework for examining adaptation (the generation of fitter points over time)? If so, isn’t the study of first hitting time a lot like the proverbial search for one’s house keys under the light of a street lamp just because it happens to be dark in one’s house?
The Building Block Hypothesis: Can the building block hypothesis be reconciled with the widely reported utility of uniform crossover? If yes, how? If no, can we—more to the point, should we—be comfortable with this knowledge given the considerable influence of the building block hypothesis on contemporary evolutionary computation research?
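For context on the second question: uniform crossover decides the parentage of each locus independently, with no regard for where loci sit on the chromosome. Here is a minimal Python sketch of the operator; the function name and the conventional 0.5 per-locus swap probability are shown purely for illustration.

    import random

    def uniform_crossover(parent1, parent2):
        # Each locus is inherited independently of every other locus.
        child1, child2 = [], []
        for a, b in zip(parent1, parent2):
            if random.random() < 0.5:   # swap the alleles at this locus with probability 0.5
                a, b = b, a
            child1.append(a)
            child2.append(b)
        return child1, child2

Because no contiguous group of loci is preserved preferentially, uniform crossover is maximally disruptive of the short, tightly linked schemata that the building block hypothesis treats as the raw material of adaptation, which is precisely what makes its reported utility awkward for the hypothesis.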
What other topics have been under-addressed in the evolutionary computation literature? Leave a comment with your opinion, or a link to your own blog post.
[1] Moshe Y. Vardi. More debate, please! Communications of the ACM, 53(1):5, January 2010.
Hello Production Environment
My first day at Amazon.com. Hello production environment. It’s been a while :-)
Hyperclimbing and Decimation
In recent years, probabilistic inference algorithms such as survey propagation and belief propagation have been shown to be remarkably effective at tackling large, random instances of SAT and other combinatorial optimization problems that lie beyond the reach of previous approaches. These inference algorithms belong to a class of techniques called decimation strategies. Decimation strategies monotonically reduce the size of a problem instance by iteratively fixing partial solutions (partial variable assignments in the case of SAT).
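To make the idea concrete, here is a minimal sketch of a generic decimation loop in Python. The helpers pick_variable_and_value and simplify are hypothetical stand-ins: the former for whatever inference machinery drives the choice (e.g., marginals computed by survey or belief propagation), the latter for formula simplification. Neither is part of any particular library.

    def decimate(formula, variables, pick_variable_and_value, simplify):
        # Hypothetical helpers supplied by the caller:
        #   pick_variable_and_value(formula) -> (var, value)  chooses the next variable to fix
        #   simplify(formula, var, value)    -> formula        with that assignment applied
        assignment = {}
        remaining = set(variables)
        while remaining:
            var, value = pick_variable_and_value(formula)
            assignment[var] = value                   # fix one more piece of the partial solution
            formula = simplify(formula, var, value)   # the instance shrinks monotonically
            remaining.discard(var)
        return assignment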
The generative fixation hypothesis essentially states that genetic algorithms work by efficiently implementing a decimation strategy called hyperclimbing.
Hyperclimbing, Genetic Algorithms, and Machine Learning
I’ve identified a promising stochastic search heuristic, called hyperclimbing, for large-scale optimization over massive attribute product spaces (e.g., the set of all binary strings of some length N, where N is very large) with rugged fitness functions. Hyperclimbing works by progressively limiting sampling to a series of nested subsets with increasing expected fitness. At any given step, this heuristic sifts through vast numbers of coarse partitions of the subset it “inhabits”, and identifies ones that partition this set into subsets whose expected fitness values are significantly variegated. Because hyperclimbing is sensitive, not to the local features of a search space, but to certain more global statistics, it is not susceptible to the kinds of issues that waylay local search heuristics.
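The Python sketch below conveys the basic shape of the heuristic over fixed-length binary strings. It is a deliberately simplified illustration of my own: the parameter names and defaults are arbitrary, and where the heuristic proper sifts through vast numbers of coarse partitions and checks that the fitness differences it finds are statistically significant, this sketch examines just one randomly chosen group of loci per round.

    import itertools, random

    def hyperclimb(fitness, length, order=2, samples=50, rounds=10):
        # Illustrative sketch only (assumes length >= order * rounds): greedily fix
        # small groups of loci whose settings most raise the expected fitness of samples.
        fixed = {}                  # locus -> value; defines the nested subset currently inhabited
        free = list(range(length))
        for _ in range(rounds):
            loci = random.sample(free, order)          # one coarse partition of the current subset
            best_setting, best_mean = None, float("-inf")
            for setting in itertools.product([0, 1], repeat=order):
                total = 0.0
                for _ in range(samples):               # estimate the expected fitness of this sub-subset
                    x = [random.randint(0, 1) for _ in range(length)]
                    for locus, value in list(fixed.items()) + list(zip(loci, setting)):
                        x[locus] = value
                    total += fitness(x)
                if total / samples > best_mean:
                    best_setting, best_mean = setting, total / samples
            for locus, value in zip(loci, best_setting):   # restrict future sampling to the fitter subset
                fixed[locus] = value
                free.remove(locus)
        return fixed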
The chief barrier to the wide and enthusiastic use of hyperclimbing is that it seems to scale very poorly with the number of attributes. If one takes the seemingly high cost of applying hyperclimbing to large search spaces at face value, this heuristic quickly loses its shine. A key conclusion of my doctoral work is that this seemingly high cost is illusory. I have uncovered evidence that strongly suggests that genetic algorithms can implement hyperclimbing extraordinarily efficiently.
As readers of this blog probably know, genetic algorithms are search algorithms that mimic natural evolution. These algorithms have been used in a wide range of engineering and scientific fields to quickly procure useful solutions to poorly understood (i.e. black-box) optimization problems. Unfortunately, despite the routine use of genetic algorithms for over three decades, their adaptive capacity has not been adequately accounted for. Given the evidence that genetic algorithms can implement efficient hyperclimbing, I’ve proposed a new explanation for the adaptive capacity of these algorithms. This new account—the generative fixation hypothesis—promises to spark significant advances in the fields of genetic algorithmics and discrete optimization.
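For readers who have not worked with them, here is a bare-bones recombinative genetic algorithm in Python. The population size, mutation rate, one-point crossover, and fitness-proportionate selection used here are generic textbook choices for illustration, not the settings used in my experiments.

    import random

    def genetic_algorithm(fitness, length, pop_size=100, generations=100, mutation_rate=0.005):
        # Start from a random population of bitstrings (pop_size is assumed to be even).
        population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
        for _ in range(generations):
            scores = [fitness(x) for x in population]
            # Fitness-proportionate selection, shifted so every weight is positive.
            low = min(scores)
            weights = [s - low + 1e-9 for s in scores]
            parents = random.choices(population, weights=weights, k=pop_size)
            # One-point crossover followed by bitwise mutation.
            population = []
            for i in range(0, pop_size, 2):
                p1, p2 = parents[i], parents[i + 1]
                point = random.randrange(1, length)
                for child in (p1[:point] + p2[point:], p2[:point] + p1[point:]):
                    population.append([bit ^ (random.random() < mutation_rate) for bit in child])
        return max(population, key=fitness)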
The discovery that hyperclimbing is efficiently implementable also promises to have a non-negligible impact on the ecology of machine learning research. Optimization and machine learning are, after all, intimately related. Setting aside a few exceptions, the practice of machine learning research can be characterized as the effective reduction of difficult learning problems to optimization problems for which efficient algorithms exist. In other words, the machine learning problems that can effectively be tackled are in large part those that can in practice be reduced to optimization problems that can be tackled efficiently. Currently, this largely limits the class of tractable machine learning problems to the class of learning problems that can in practice be reduced to convex optimization problems [1]. The identification of general-purpose non-convex optimization heuristics with efficient implementations (e.g. hyperclimbing) thus has the potential to significantly extend the reach of machine learning.
For a description of hyperclimbing, and evidence that genetic algorithms can implement this heuristic efficiently, please see my dissertation.
[1] Kristin P. Bennett and Emilio Parrado-Hernandez. The interplay of optimization and machine learning research. Journal of Machine Learning Research, 7:1265–1281, 2006.
Dissertation Deposition
I deposited my dissertation today.
Click here to see the final version (single spaced for easy reading).
Back to the Future: A Science of Genetic Algorithms
From the preface to my dissertation:
The foundations of most computer engineering disciplines are almost entirely mathematical. There is, for instance, almost no question about the soundness of the foundations of such engineering disciplines as graphics, machine learning, programming languages, and databases. An exception to this general rule is the field of genetic algorithmics, whose foundation includes a significant scientific component.
The existence of a science at the heart of this computer engineering discipline is regarded with nervousness. Science traffics in provisional truth; it requires one to adopt a form of skepticism that is more nuanced, and hence more difficult to master, than the radical sort of skepticism that suffices in mathematics and theoretical computer science. Many, therefore, would be happy to see science excised from the foundations of genetic algorithmics. Indeed, over the past decade and a half, much effort seems to have been devoted to turning genetic algorithmics into just another field of computer engineering, one with an entirely mathematical foundation.
Broadening one’s perspective beyond computer engineering, however, one cannot help wondering if much of this effort is not a little misplaced.
Red Dots, Blue Dots
In this blog entry I’d like to showcase just one of a number of remarkable findings that form the basis of the generative fixation hypothesis—a new explanation for the adaptive capacity of recombinative genetic algorithms.
Consider the following stochastic function, which takes a bitstring of some fixed length as input and returns a real value as output.
fitness(bitstring)
    accum = 0
    for i = 1 to 4
        accum = accum + bitstring[pivotalLoci[i]]
    end
    if accum is odd
        return a random value from normal distribution N(+0.25, 1)
    else
        return a random value from normal distribution N(-0.25, 1)
    end
The variable pivotalLoci is an array of four distinct integers between 1 and the length of the bitstring, which specifies the locations of four loci of an input bitstring—let’s call them A, B, C, D—that matter in the determination of the bitstring’s fitness. These four loci are said to be pivotal.
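For readers who want to experiment, here is a runnable Python rendering of the function above. The specific positions in pivotalLoci and the implied string length are arbitrary choices made only for illustration (and note that Python indexes from 0, whereas the description above counts positions from 1).

    import random

    pivotalLoci = [3, 17, 42, 61]    # four arbitrary, distinct positions chosen for illustration

    def fitness(bitstring):
        # The parity of the bits at the four pivotal loci decides which distribution we draw from.
        accum = sum(bitstring[locus] for locus in pivotalLoci)
        if accum % 2 == 1:
            return random.gauss(+0.25, 1)    # odd parity: mean +0.25, standard deviation 1
        else:
            return random.gauss(-0.25, 1)    # even parity: mean -0.25, standard deviation 1

Evaluating, say, fitness([random.randint(0, 1) for _ in range(100)]) returns a draw from one of the two distributions, depending on the parity of the bits at loci 3, 17, 42, and 61.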
Thanks!
Thanks everyone for your good wishes. My defense went smoothly, and I got some excellent suggestions from my committee.
Dissertation Defense
My dissertation defense is scheduled for Friday June 19, 2009. I’m currently working on my presentation. Wish me luck :-)