Mutation Rate Evolution
Mutation rates in nature have traditionally been thought of as the product of two opposing evolutionary forces: adaptability and adaptedness. As an organism becomes more adapted to its environment, the room it has left to adapt further diminishes, a fact that is manifest genetically as a dwindling repertoire of potentially beneficial mutations. As an organism adapts to an unchanging environment, therefore, mutation becomes almost exclusively detrimental, and natural selection should thus be expected to favor a decrease in mutation rate. In fact, Liberman and Feldman showed mathematically that, were it physiologically possible, the mutation rate would evolve to zero in an unchanging environment, a finding they called the "general reduction principle". In an environment that changes over time, on the other hand, there is interminable room for improvement, manifest genetically as a continuously renewed and hence unending supply of potentially beneficial mutations. In a changing environment, therefore, it is easy to see how a certain amount of mutation might be beneficial or even vital. Under the simplest possible regime of environmental change and genetic response, Leigh calculated that the optimal mutation rate was, not surprisingly, equal to the rate of environmental change -- a result that was later shown to be the evolutionarily stable strategy as well.
We tested the "general reduction principle" by measuring mutation rates of E. coli as it evolved in an unchanging laboratory environment. What we found did not corroborate the principle; in fact, not only did the mutation rate fail to decrease during evolution in this constant environment, but in three of twelve independent E. coli populations it spontaneously increased by roughly two orders of magnitude! So we decided that a fresh look at the underlying theory was in order. My colleague Paul Sniegowski first proposed the alternative theory that would explain our findings and has since been shown to be correct: variants with elevated mutation rates ("mutators") may acquire a particular beneficial mutation -- however rare it may be -- before the rest of the population, and because the beneficial mutation remains linked to the elevated mutation rate, the fixation of that beneficial mutation drives the elevated mutation rate to fixation along with it. In this "mutator hitchhiking hypothesis" (P. D. Sniegowski, P. J. Gerrish, R. E. Lenski, Nature 387 (1997)), the usual causality is reversed: instead of mutation rates evolving to adjust adaptation, it is adaptation that indirectly and haphazardly adjusts the mutation rate. Put differently, mutation rate evolution is a byproduct of adaptation at other loci, an evolutionary afterthought so to speak, that has little or nothing to do with any benefit or detriment the evolved mutation rate may subsequently incur.
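The hitchhiking dynamic can be illustrated with a toy Wright-Fisher simulation (a sketch with illustrative parameter values, not those of the actual experiment): the mutator allele is selectively neutral on its own, but because it raises the rate at which a beneficial mutation arises in its background, it can be dragged to high frequency when that mutation sweeps.

```python
import random
from collections import Counter

def simulate_hitchhiking(pop_size=1000, generations=500,
                         mutator_frac=0.01,
                         u_wild=1e-6, u_mutator=1e-3,
                         s=0.1, seed=42):
    """Toy Wright-Fisher sketch of mutator hitchhiking.

    A genotype is (has_mutator, has_beneficial). The mutator allele is
    selectively neutral by itself; it only raises the per-generation
    probability of acquiring the beneficial mutation, which multiplies
    fitness by (1 + s). Returns the final frequency of the mutator allele.
    """
    rng = random.Random(seed)
    n_mut = int(pop_size * mutator_frac)
    counts = {(0, 0): pop_size - n_mut, (1, 0): n_mut,
              (0, 1): 0, (1, 1): 0}
    genotypes = list(counts)
    for _ in range(generations):
        # Selection + drift: resample the next generation with
        # fitness-weighted probabilities (Wright-Fisher sampling).
        weights = [counts[g] * (1 + s if g[1] else 1.0) for g in genotypes]
        tally = Counter(rng.choices(genotypes, weights=weights, k=pop_size))
        counts = {g: tally.get(g, 0) for g in genotypes}
        # Mutation: individuals lacking the beneficial mutation acquire
        # it at a rate set by their mutator status.
        for m in (0, 1):
            u = u_mutator if m else u_wild
            hits = sum(1 for _ in range(counts[(m, 0)]) if rng.random() < u)
            counts[(m, 0)] -= hits
            counts[(m, 1)] += hits
    return (counts[(1, 0)] + counts[(1, 1)]) / pop_size

mutator_freq = simulate_hitchhiking()
```

Depending on the random seed, the rare mutator lineage is typically either lost by drift before any beneficial mutation arises in its background or carried to high frequency by one that does -- a stochastic, all-or-nothing outcome consistent in spirit with only some of the twelve populations evolving elevated mutation rates.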
The mutator hitchhiking hypothesis made a lot of sense and explained our findings quite nicely. What it didn't explain, however, was why only increases in mutation rate were observed; after all, a variant with a decreased mutation rate (an "anti-mutator") should have an evolutionary advantage because it produces fewer mutations, and most mutations are detrimental. The answer to this question is two-pronged. First, from a purely genetic standpoint, mutator mutations are much easier to come by than anti-mutator mutations, because mutator mutations are generally loss-of-function mutations that hinder or knock out replication, proofreading, or repair genes, whereas anti-mutator mutations are generally gain-of-function or compensatory mutations that improve or add functionality in these genes. There is thus a potentially strong mutational bias favoring increased mutation rate. Second, there is a population-dynamical bias favoring increased mutation rate: when a mutator acquires a new beneficial mutation, its evolutionary advantage is immediate; a new lineage with a decreased mutation rate, on the other hand, has an advantage only in the long term, because it sheds its deleterious load slowly. Natural selection, because it is a short-sighted process, is much more likely to respond to the immediate benefit of hitchhiking mutators than to the long-term benefit of anti-mutators, and the result is an inescapable evolutionary bias toward increasing mutation rates.
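The timescale asymmetry behind the second prong can be made concrete with a small deterministic calculation, using the standard mutation-selection balance recursion (illustrative parameter values, not a model from our study): after a lineage lowers its genomic deleterious mutation rate from U_old to U_new, its mean deleterious load decays only gradually toward the new equilibrium U_new/s_d, so the anti-mutator's fitness benefit accrues over many generations rather than immediately.

```python
def load_trajectory(U_old, U_new, s_d, generations):
    """Mean deleterious load per genome after the genomic deleterious
    mutation rate drops from U_old to U_new.

    Uses the classic recursion for multiplicative deleterious effects of
    size s_d: each generation selection trims the excess load by a factor
    (1 - s_d) and mutation adds U_new new deleterious mutations on
    average. The equilibrium load is U / s_d.
    """
    load = U_old / s_d                    # start at the old equilibrium
    trajectory = []
    for _ in range(generations):
        load = load * (1 - s_d) + U_new   # selection, then mutation
        trajectory.append(load)
    return trajectory

# An anti-mutator halving U from 0.2 to 0.1 (with s_d = 0.02) starts at a
# load of 10 and approaches the new equilibrium of 5 only gradually:
traj = load_trajectory(U_old=0.2, U_new=0.1, s_d=0.02, generations=50)
```

After 50 generations the load in this sketch is still around 6.8, well above the new equilibrium of 5: most of the anti-mutator's ultimate advantage remains unrealized, whereas a hitchhiking mutator's beneficial mutation pays off in full from the first generation.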
Theory (André & Godelle, Genetics...) and experiment (P. D. Sniegowski, P. J. Gerrish, R. E. Lenski, Nature 387 (1997)) corroborated the above logic, showing that mutation rates in asexual populations are indeed unstable and prone to ratchet-like increase. Yet a troubling question remained: when would this trend of net mutation rate increase stop? Would it ever stop, or would it instead elevate the mutation rate to intolerable levels? If the latter, we are led to a rather absurd conclusion: the same adaptive process that allows a population to thrive would be the process that drives the mutation rate through the roof and thereby drives the population extinct.