This is the fourth paper critical of the ENCODE hype. The first was Sean Eddy's paper in Current Biology (Eddy, 2012), the second was a paper by Niu and Jiang (2012), and the third was a paper by Graur et al. (2013). In my experience this is unusual, because the critiques are all directed at how the ENCODE Consortium interpreted their data and how they misled the scientific community (and the general public) by exaggerating their results. Those kinds of criticisms are common in journal clubs and, certainly, in the blogosphere, but scientific journals generally don't publish them. It's okay to refute the data (as in the arsenic affair), but ideas usually get a free pass no matter how stupid they are.
In this case, the ENCODE Consortium did such a bad job of describing their data that journals had to pay attention. (It helps that much of the criticism is directed at Nature and Science because the other journals want to take down the leaders!)
Ford Doolittle makes some of the same points as the other papers. For example, he points out that the ENCODE definition of "function" is not helpful. However, Ford also does a good job of explaining why the arguments in favor of junk DNA are still valid. He says ...
My aim here is to remind readers of the structure of some earlier arguments in defense of the junk concept (10) that remain compelling, despite the obvious success of ENCODE in mapping the subtle and complex human genomic landscape.

The emphasis is on the "C-Value Paradox," by which he means the tons of work on variation in genome size. The conclusion from all that effort, dating back to the 1960s, was that large genomes contain a huge amount of non-functional DNA, or junk DNA. This DNA is still junk despite the fact that it may bind transcription factors and contain regions of modified chromatin. These sites are called "functional elements" (FEs) in the ENCODE papers even though they probably don't have a "function" in any meaningful sense of the word.
Ford proposes a thought experiment based on our understanding of genome sizes. He notes that lungfish have a huge genome (130,000 Mb) while pufferfish (Takifugu) have a much smaller genome (400 Mb) than we do. (Our genome is 3,200 Mb.) He also notes that there are many closely related species of amphibians, plants, and protists that differ significantly in genome size.
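Those C-values are easier to appreciate as ratios. Here's a quick back-of-the-envelope calculation, a sketch in Python using the rounded genome sizes quoted above (not precise assembly sizes):

```python
# Approximate haploid genome sizes quoted above, in megabase pairs (Mb).
genome_sizes_mb = {
    "lungfish": 130_000,
    "human": 3_200,
    "pufferfish (Takifugu)": 400,
}

human_mb = genome_sizes_mb["human"]
for species, size_mb in genome_sizes_mb.items():
    # Express each genome as a multiple of the human genome.
    print(f"{species}: {size_mb:,} Mb = {size_mb / human_mb:.3g}x human")
```

The pufferfish ratio (0.125) is the "one eighth" that comes up in the first outcome below, and the lungfish genome is roughly forty times the size of ours.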
Here's the thought experiment ...
Suppose that there had been (and probably, some day, there will be) ENCODE projects aimed at enumerating, by transcriptional and chromatin mapping, factor footprinting, and so forth, all of the FEs in the genomes of Takifugu and a lungfish, some small and large genomed amphibians (including several species of Plethodon), plants, and various protists. There are, I think, two possible general outcomes of this thought experiment, neither of which would give us clear license to abandon junk.

The first outcome is that all genomes, regardless of size, will have approximately the same number of FEs as the human genome. Since all these species have about the same number of genes, this means that each gene requires a constant number of FEs and it's just lucky that the human genome is almost entirely "functional." There would still have to be lots of junk in species with larger genomes. This result would make it difficult to explain why pufferfish survive with only one eighth as much DNA as humans.
The second outcome of the thought experiment would be that FEs correlate with C-value regardless of complexity. In other words, very similar species will have very different numbers of FEs in spite of the fact that they have the same numbers of genes and the same level of complexity. This is the expected result if FEs are mostly spurious binding sites that have no function but it would be difficult to explain if FEs are really doing something important in regulating gene expression.
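The logic behind the second outcome is just probability: a short binding motif will match a big genome many times by chance alone, in direct proportion to genome size. Here's a minimal sketch in Python (the 8-bp motif is an arbitrary, hypothetical example, and treating the genome as random sequence is a simplification; real motifs are degenerate, which makes spurious matches even more common):

```python
import random

MOTIF = "TGACGTCA"           # an arbitrary, hypothetical 8-bp binding site
P_HIT = 1 / 4 ** len(MOTIF)  # chance that a random 8-mer matches it exactly

# Expected chance occurrences of this one motif scale linearly with C-value.
for species, size_mb in [("pufferfish", 400), ("human", 3_200), ("lungfish", 130_000)]:
    bp = size_mb * 1_000_000
    print(f"{species:>10} ({size_mb:>7,} Mb): ~{bp * P_HIT:,.0f} spurious matches expected")

# Sanity check: count matches in a 10 Mb simulated random sequence.
rng = random.Random(42)
seq = "".join(rng.choices("ACGT", k=10_000_000))
print(f"simulated 10 Mb: {seq.count(MOTIF)} matches "
      f"(~{10_000_000 * P_HIT:.0f} expected)")
```

Multiply that by hundreds of different transcription factors and an ENCODE-style census of "functional elements" in a lungfish genome would dwarf the count in Takifugu, exactly as the second outcome predicts.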
The Doolittle thought experiment is similar to Ryan Gregory's Onion Test [see Genome Size, Complexity, and the C-Value Paradox]. In both cases, those who proclaim the death of junk DNA are challenged to explain how their hypothesis is consistent with what we know about variation in genome size. I think it's pretty obvious that the ENCODE leaders haven't thought about the evidence for junk DNA.
There are several other interesting points in Ford Doolittle's paper, most of which I agree with. I want to quote some of them because he says it so much better than I do. Here's his critique of the definition of function used by the scientists in the ENCODE Consortium.
A third, and the least reliable, method to infer function is mere existence. The presence of a structure or the occurrence of a process or detectable interaction, especially if complex, is taken as adequate evidence for its being under selection, even when ablation is infeasible and the possibly selectable effect of presence remains unknown. Because our genomes have introns, Alu elements, and endogenous retroviruses, these things must be doing us some good. Because a region is transcribed, its transcript must have some fitness benefit, however remote. Because residue N of protein P is leucine in species A and isoleucine in species B, there must be some selection-based explanation. This approach enshrines "panadaptationism," which was forcefully and effectively debunked by Gould and Lewontin (34) in 1979 but still informs much of molecular and evolutionary genetics, including genomics. As Lynch (39) argues in his essay "The Frailty of Adaptive Hypotheses for the Origins of Adaptive Complexity,"

This narrow view of evolution has become untenable in light of recent observations from genomic sequencing and population genetic theory. Numerous aspects of genomic architecture, gene structure, and developmental pathways are difficult to explain without invoking the nonadaptive forces of genetic drift and mutation. In addition, emergent biological features such as complexity, modularity, and evolvability, all of which are current targets of considerable speculation, may be nothing more than indirect by-products of processes operating at lower levels of organization.

Functional attribution under ENCODE is of this third sort (mere existence) in the main.

Ford is correct. The ENCODE leaders don't seem to be particularly knowledgeable about modern evolutionary theory.
Ford Doolittle makes another point that also came up during my critique of Jonathan Wells' book The Myth of Junk DNA. Wells knows that SOME transposons have secondarily acquired a function so he assumes that ALL transposons must have a function. There are many scientists who fall into the same trap—they find a function for one or two particular features of the genome and then leap to the conclusion that all such features are functional.
Doolittle uses the example of lncRNAs (long non-coding RNAs). A very small number of them have been shown to have a function but that does not mean that all of them do. Ford's point is that, given the way science operates, it is guaranteed that someone will find a function for at least one lncRNA, because biology is messy.
Regulation, defined in this loose way, is, for instance, the assumed function of many or most lncRNAs, at least for some authors (6,7,44,45). However, the transcriptional machinery will inevitably make errors: accuracy is expensive, and the selective cost of discriminating against all false promoters will be too great to bear. There will be lncRNAs with promoters that have arisen through drift and exist only as noise (46). Similarly, binding to proteins and other RNAs is something that RNAs do. It is inevitable that some such interactions, initially fortuitous, will come to be genuinely regulatory, either through positive selection or the neutral process described below as constructive neutral evolution (CNE). However, there is no evolutionary force requiring that all or even most do. At another (sociology of science) level, it is inevitable that molecular biologists will search for and discover some of those possibly quite few instances in which function has evolved and argue that the function of lncRNAs as a class of elements has, at last, been discovered. The positivist, verificationist bias of contemporary science and the politics of its funding ensure this outcome.

What he didn't say, probably because he's too kind, is that these same pressures (pressure to publish and pressure to get funding) probably lead to incorrect claims of function.
It should be obvious to all of you that Ford Doolittle expects that the outcome of the thought experiment will be that "functional elements" (i.e. binding sites etc.) will correlate with genome size. This means that FEs aren't really functional at all; they are part of the junk. Does it make sense that our genome is full of nonfunctional bits of DNA? Yes, it does, especially if the spurious binding sites are neutral. It even makes sense if they are slightly deleterious, because Doolittle understands Michael Lynch's arguments for the evolution of nonadaptive complexity.
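Lynch's argument can be made concrete with Kimura's classic formula for the probability that a new mutation eventually fixes. A minimal sketch in Python (the selection coefficient and population sizes below are illustrative assumptions; the formula assumes genic selection and effective size equal to census size):

```python
import math

def fixation_prob(s: float, ne: float) -> float:
    """Kimura's fixation probability for a new semidominant mutation with
    selection coefficient s in a diploid population of effective size ne
    (initial frequency 1/(2*ne))."""
    if s == 0:
        return 1 / (2 * ne)  # neutral case
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * ne * s))

s = -1e-5  # a slightly deleterious variant, e.g. a weak spurious binding site
for ne in (1e4, 1e5, 1e6):
    relative = fixation_prob(s, ne) / (1 / (2 * ne))  # rate relative to neutral
    print(f"Ne = {ne:>9,.0f}: fixes at {relative:.1%} of the neutral rate")
```

In a species with a small effective population size, like ours, such a mildly deleterious variant fixes at about 80% of the neutral rate; in a population a hundred times larger it essentially never fixes. That is the nonadaptive route by which spurious "functional elements" can accumulate in our genome.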
Assuming these predictions are borne out, what might we make of it? Lynch (39) suggests that much of the genomic- and systems-level complexity of eukaryotes vis à vis prokaryotes is maladaptive, reflecting the inability of selection to block fixation of incrementally but mildly deleterious mutations in the smaller populations of the former.

It's clear that the ENCODE leaders don't think like this. So, what motivates them to propose that our genome is full of regulatory sites and molecules when there seems to be a more obvious explanation of their data? Doolittle has a suggestion ...
[A fourth misconception] may be a seldom-articulated or questioned notion that cellular complexity is adaptive, the product of positive selection at the organismal level. Our disappointment that humans do not have many more genes than fruit flies or nematodes has been assuaged by evidence that regulatory mechanisms that mediate those genes’ phenotypic expressions are more various, subtle, and sophisticated (57), evidence of the sort that ENCODE seems to vastly augment. Yet there are nonselective mechanisms, such as [constructive neutral evolution], that could result in the scaling of FEs as ENCODE defines them to C-value nonadaptively or might be seen as selective at some level higher or lower than the level of individual organisms. Splits within the discipline between panadaptationists/neutralists and those researchers accepting or doubting the importance of multilevel selection fuel this controversy and others in biology.

I agree. Part of the problem is adaptationism and the fact that many biochemists and molecular biologists don't understand modern concepts in evolution. And part of the problem is The Deflated Ego Problem.
It's a mistake to think that this debate is simply about how you define function. That seems to be the excuse that the ENCODE leaders are making in light of these attacks. Here's how Ford Doolittle explains it ...
In the end, of course, there is no experimentally ascertainable truth of these definitional matters other than the truth that many of the most heated arguments in biology are not about facts at all but rather about the words that we use to describe what we think the facts might be. However, that the debate is in the end about the meaning of words does not mean that there are not crucial differences in our understanding of the evolutionary process hidden beneath the rhetoric.

This reminds me of something that Stephen Jay Gould said in Darwinism and the Expansion of Evolutionary Theory (Gould, 1982).
The world is not inhabited exclusively by fools and when a subject arouses intense interest and debate, as this one has, something other than semantics is usually at stake.
Doolittle, W.F. (2013) Is junk DNA bunk? A critique of ENCODE. Proc. Natl. Acad. Sci. (USA) published online March 11, 2013. [PubMed] [doi: 10.1073/pnas.1221376110]

Eddy, S.R. (2012) The C-value paradox, junk DNA and ENCODE. Current Biology 22:R898. [preprint PDF]

Gould, S.J. (1982) Darwinism and the expansion of evolutionary theory. Science 216:380-387.

Graur, D., Zheng, Y., Price, N., Azevedo, R.B., Zufall, R.A., and Elhaik, E. (2013) On the immortality of television sets: "function" in the human genome according to the evolution-free gospel of ENCODE. Genome Biology and Evolution published online February 20, 2013. [doi: 10.1093/gbe/evt028]

Niu, D.K., and Jiang, L. (2012) Can ENCODE tell us how much junk DNA we carry in our genome? Biochemical and Biophysical Research Communications 430:1340-1343. [doi: 10.1016/j.bbrc.2012.12.074]