Tuesday, March 13, 2018

Making Sense of Genes by Kostas Kampourakis

Kostas Kampourakis is a specialist in science education at the University of Geneva, Geneva (Switzerland). Most of his book is an argument against genetic determinism in the style of Richard Lewontin. You should read this book if you are interested in that argument. The best way to describe the main thesis is to quote from the last chapter.

Here is the take-home message of this book: Genes were initially conceived as immaterial factors with heuristic values for research, but along the way they acquired a parallel identity as DNA segments. The two identities never converged completely, and therefore the best we can do so far is to think of genes as DNA segments that encode functional products. There are neither 'genes for' characters nor 'genes for' diseases. Genes do nothing on their own, but are important resources for our self-regulated organism. If we insist in asking what genes do, we can accept that they are implicated in the development of characters and disease, and that they account for variation in characters in particular populations. Beyond that, we should remember that genes are part of an interactive genome that we have just begun to understand, the study of which has various limitations. Genes are not our essences, they do not determine who we are, and they are not the explanation of who we are and what we do. Therefore we are not the prisoners of any genetic fate. This is what the present book has aimed to explain.

Thursday, September 06, 2012

The ENCODE Data Dump and the Responsibility of Science Journalists

ENCODE (ENcyclopedia Of DNA Elements) is a massive consortium of scientists dedicated to finding out what's in the human genome.

They published the results of a pilot study back in June 2007 (ENCODE, 2007) in which they analyzed a specific 1% of the human genome. That result suggested that much of our genome is transcribed at some time or another or in some cell type (pervasive transcription). The consortium also showed that the genome was littered with DNA binding sites that were frequently occupied by DNA binding proteins.

All of this suggested strongly that most of our genome has a function. However, in the actual paper the group was careful not to draw any firm conclusions.
... we also uncovered some surprises that challenge the current dogma on biological mechanisms. The generation of numerous intercalated transcripts spanning the majority of the genome has been repeatedly suggested, but this phenomenon has been met with mixed opinions about the biological importance of these transcripts. Our analyses of numerous orthogonal data sets firmly establish the presence of these transcripts, and thus the simple view of the genome as having a defined set of isolated loci transcribed independently does not seem to be accurate. Perhaps the genome encodes a network of transcripts, many of which are linked to protein-coding transcripts and to the majority of which we cannot (yet) assign a biological role. Our perspective of transcription and genes may have to evolve and also poses some interesting mechanistic questions. For example, how are splicing signals coordinated and used when there are so many overlapping primary transcripts? Similarly, to what extent does this reflect neutral turnover of reproducible transcripts with no biological role?
This didn't stop the hype. The results were widely interpreted as proof that most of our genome has a function and the result featured prominently in the creationist literature.

Thursday, August 07, 2014

The Function Wars: Part IV

The world is not inhabited exclusively by fools and when a subject arouses intense interest and debate, as this one has, something other than semantics is usually at stake.
Stephen Jay Gould (1982)
This is my fourth post on the function wars.

The first post in this series covered the various definitions of "function" [Quibbling about the meaning of the word "function"]. In the second post I tried to create a working definition of "function" and I discussed whether active transposons count as functional regions of the genome or junk [The Function Wars: Part II]. I claim that junk DNA is DNA that is nonfunctional and it can be deleted from the genome of an organism without affecting its survival, or the survival of its descendants.

In the third post I discussed a paper by Rands et al. (2014) presenting evidence that about 8% of the human genome is conserved [The Function Wars: Part III]. This is important since many workers equate sequence conservation with function. It suggests that only 8% of our genome is functional and the rest is junk. The paper is confusing and I'm still not sure what they did in spite of the fact that the lead author (Chris Rands) helped us out in the comments. I don't know what level of sequence similarity they counted as "constrained." (Was it something like 35% identity over 100 bp?)
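To make the window-and-cutoff idea concrete, here's a cartoon version (my sketch, not their method; Rands et al.'s actual approach is more sophisticated and, as noted above, not entirely clear to me). It assumes a gapless pairwise alignment and the hypothetical 35%-identity-over-100-bp criterion:

    def percent_identity(a, b):
        # Percent of aligned positions that match, in a gapless alignment.
        return 100 * sum(x == y for x, y in zip(a, b)) / len(a)

    def constrained_windows(human, other, window=100, cutoff=35.0):
        # Flag non-overlapping windows whose identity clears the cutoff.
        # Unrelated DNA aligns at roughly 25% identity by chance, so any
        # meaningful cutoff has to sit above that baseline.
        return [start for start in range(0, len(human) - window + 1, window)
                if percent_identity(human[start:start + window],
                                    other[start:start + window]) >= cutoff]

Whatever criterion Rands et al. actually used, the logic of the estimate is the same: count the windows that clear the bar and ask what fraction of the genome they cover.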

My position is that there's no simple definition of function but sequence conservation is a good proxy. It's theoretically possible to have selection for functional bulk DNA that doesn't depend on sequence but, so far, there are no believable hypotheses that make the case. It is wrong to arbitrarily DEFINE function in terms of selection (for sequence) because that rules out all bulk DNA hypotheses by fiat, and that's not a good way to do science.

So, if the Rands et al. results hold up, it looks like more than 90% of our genome is junk.

Let's see how a typical science writer deals with these issues. The article I'm selecting is from Nature. It was published online yesterday (Aug. 6, 2014) (Woolston, 2014). The author is Chris Woolston, a freelance writer with a biology background. Keep in mind that it was Nature that started the modern function wars by falling hook, line, and sinker for the ENCODE publicity hype. As far as I know, the senior editors have not admitted that they, and their reviewers, were duped.

Sunday, September 09, 2012

Ed Yong Updates His Post on the ENCODE Papers

For decades we've known that less than 2% of the human genome consists of exons and that protein-encoding genes represent more than 20% of the genome. (Introns account for the difference between exons and genes.) [What's in Your Genome?]. There are about 20,500 protein-encoding genes in our genome and about 4,000 genes that encode functional RNAs for a total of about 25,000 genes [Humans Have Only 20,500 Protein-Encoding Genes]. That's a little less than the number predicted by knowledgeable scientists over four decades ago [False History and the Number of Genes]. The definition of "gene" is somewhat open-ended but, at the very least, a gene has to have a function [Must a Gene Have a Function?].

We've known about all kinds of noncoding DNA that's functional, including origins of replication, centromeres, genes for functional RNAs, telomeres, and regulatory DNA. Together these functional parts of the genome make up almost 10% of the total. (Most of the DNA giving rise to introns is junk in the sense that it is not serving any function.) The idea that all noncoding DNA is junk is a myth propagated by scientists (and journalists) who don't know their history.

We've known about the genetic load argument since 1968 and we've known about the C-Value "Paradox" and its consequences since the early 1970s. We've known about pseudogenes and we've known that almost 50% of our genome is littered with dead transposons and bits of transposons. We've known that about 3% of our genome consists of highly repetitive DNA that is not transcribed or expressed in any way. Most of this DNA is not functional and a lot of it is not included in the sequenced human genome [How Much of Our Genome Is Sequenced?]. All of this evidence indicates that most of our genome is junk. This conclusion is consistent with what we know about evolution and it's consistent with what we know about genome sizes and the C-Value "Paradox." It also helps us understand why there's no correlation between genome size and complexity.

Tuesday, June 19, 2007

What is a gene, post-ENCODE?

Back in January we had a discussion about the definition of a gene [What is a gene?]. At that time I presented my personal preference for the best definition of a gene.
A gene is a DNA sequence that is transcribed to produce a functional product.
This is a definition that's widely shared among biochemists and molecular biologists but there are competing definitions.

Now, there's a new kid on the block. The recent publication of a slew of papers from the ENCODE project has prompted many of the people involved to proclaim that a revolution is under way. Part of the revolution includes redefining a gene. I'd like to discuss the paper by Mark Gerstein et al. (2007) [What is a gene, post-ENCODE? History and updated definition] to see what this revolution is all about.

The ENCODE project is a large scale attempt to analyze and annotate the human genome. The first results focus on about 1% of the genome spread out over 44 segments. These results have been summarized in an extraordinarily complex Nature paper with massive amounts of supplementary material (The Encode Project Consortium, 2007). The Nature paper is supported by dozens of other papers in various journals. Ryan Gregory has a list of blog references to these papers at ENCODE links.

I haven't yet digested the published results. I suspect that, like most bloggers, I find there's just too much there to comment on without investing a great deal of time and effort. I'm going to give it a try but it will require a lot of introductory material, beginning with the concept of alternative splicing, which is this week's theme.

The most widely publicized result is that most of the human genome is transcribed. It might be more correct to say that the ENCODE Project detected RNAs that are either complementary to much of the human genome or lead to the inference that much of it is transcribed.

This is not news. We've known about this kind of data for 15 years and it's one of the reasons why many scientists over-estimated the number of human genes in the decade leading up to the publication of the human genome sequence. The importance of the ENCODE project is that a significant fraction of the human genome (1%) has been analyzed in detail and that the group made some serious attempts to find out whether the transcripts really represent functional RNAs.

My initial impression is that they have failed to demonstrate that the rare transcripts of junk DNA are anything other than artifacts or accidents. It's still an open question as far as I'm concerned.

It's not an open question as far as the members of the ENCODE Project are concerned and that brings us to the new definition of a gene. Here's how Gerstein et al. (2007) define the problem.
The ENCODE consortium recently completed its characterization of 1% of the human genome by various high-throughput experimental and computational techniques designed to characterize functional elements (The ENCODE Project Consortium 2007). This project represents a major milestone in the characterization of the human genome, and the current findings show a striking picture of complex molecular activity. While the landmark human genome sequencing surprised many with the small number (relative to simpler organisms) of protein-coding genes that sequence annotators could identify (~21,000, according to the latest estimate [see www.ensembl.org]), ENCODE highlighted the number and complexity of the RNA transcripts that the genome produces. In this regard, ENCODE has changed our view of "what is a gene" considerably more than the sequencing of the Haemophilus influenza and human genomes did (Fleischmann et al. 1995; Lander et al. 2001; Venter et al. 2001). The discrepancy between our previous protein-centric view of the gene and one that is revealed by the extensive transcriptional activity of the genome prompts us to reconsider now what a gene is.
Keep in mind that I personally reject the premise and I don't think I'm alone. As far as I'm concerned, the "extensive transcriptional activity" could be artifact and I haven't had a "protein-centric" view of a gene since I learned about tRNA and ribosomal RNA genes as an undergraduate in 1967. Even if the ENCODE results are correct my preferred definition of a gene is not threatened. So, what's the fuss all about?

Regulatory Sequences
Gerstein et al. are worried because many definitions of a gene include regulatory sequences. Their results suggest that many genes have multiple large regions that control transcription and these may be located at some distance from the transcription start site. This isn't a problem if regulatory sequences are not part of the gene, as in the definition quoted above (a gene is a transcribed region). As a matter of fact, the fuzziness of control regions is one reason why most modern definitions of a gene don't include them.
Overlapping Genes
According to Gerstein et al.
As genes, mRNAs, and eventually complete genomes were sequenced, the simple operon model turned out to be applicable only to genes of prokaryotes and their phages. Eukaryotes were different in many respects, including genetic organization and information flow. The model of genes as hereditary units that are nonoverlapping and continuous was shown to be incorrect by the precise mapping of the coding sequences of genes. In fact, some genes have been found to overlap one another, sharing the same DNA sequence in a different reading frame or on the opposite strand. The discontinuous structure of genes potentially allows one gene to be completely contained inside another one’s intron, or one gene to overlap with another on the same strand without sharing any exons or regulatory elements.
We've known about overlapping genes ever since the sequences of the first bacterial operons and the first phage genomes were published. We've known about all the other problems for 20 years. There's nothing new here. No definition of a gene is perfect—all of them have exceptions that are difficult to squeeze into a one-size-fits-all definition of a gene. The problem with the ENCODE data is not that they've just discovered overlapping genes, it's that their data suggests that overlapping genes in the human genome are more the rule than the exception. We need more information before accepting this conclusion and redefining the concept of a gene based on analysis of the human genome.
Splicing
Splicing was discovered in 1977 (Berget et al. 1977; Chow et al. 1977; Gelinas and Roberts 1977). It soon became clear that the gene was not a simple unit of heredity or function, but rather a series of exons, coding for, in some cases, discrete protein domains, and separated by long noncoding stretches called introns. With alternative splicing, one genetic locus could code for multiple different mRNA transcripts. This discovery complicated the concept of the gene radically.
Perhaps back in 1978 the discovery of splicing prompted a re-evaluation of the concept of a gene. That was almost 30 years ago and we've moved on. Now, many of us think of a gene as a region of DNA that's transcribed and this includes exons and introns. In fact, the modern definition doesn't have anything to do with proteins.

Alternative splicing does present a problem if you want a rigorous definition with no fuzziness. But biology isn't like that. It's messy and you can't get rid of fuzziness. I think of a gene as the region of DNA that includes the longest transcript. Genes can produce multiple protein products by alternative splicing. (The fact that the definition above says "a" functional product shouldn't mislead anyone. That was not meant to exclude multiple products.)

The real problem here is that the ENCODE project predicts that alternative splicing is abundant and complex. They claim to have discovered many examples of splice variants that include exons from adjacent genes, as shown in the figure from their paper. Each of the lines below the genome represents a different kind of transcript. You can see that there are many transcripts that include exons from "gene 1" and "gene 2" and another that includes exons from "gene 1" and "gene 4." The combinations and permutations are extraordinarily complex.

If this represents the true picture of gene expression in the human genome, then it would require a radical rethinking of what we know about molecular biology and evolution. On the other hand, if it's mostly artifact then there's no revolution under way. The issue has been fought out in the scientific literature over the past 20 years and it hasn't been resolved to anyone's satisfaction. As far as I'm concerned the data overwhelmingly suggests that very little of that complexity is real. Alternative splicing exists but not the kind of alternative splicing shown in the figure. In my opinion, that kind of complexity is mostly an artifact due to spurious transcription and splicing errors.
Trans-splicing
Trans-splicing refers to a phenomenon where the transcript from one part of the genome is attached to the transcript from another part of the genome. The phenomenon has been known for over 20 years—it's especially common in C. elegans. It's another exception to the rule. No simple definition of a gene can handle it.
Parasitic and mobile genes
This refers mostly to transposons. Gerstein et al. say, "Transposons have altered our view of the gene by demonstrating that a gene is not fixed in its location." This isn't true. Nobody has claimed that the location of genes is fixed.
The large amount of "junk DNA" under selection
If a large amount of what we now think of as junk DNA turns out to be transcribed to produce functional RNA (or proteins) then that will be a genuine surprise to some of us. It won't change the definition of a gene as far as I can see.
The paper goes on for many more pages but the essential points are covered above. What's the bottom line? The new definition of an ENCODE gene is:
There are three aspects to the definition that we will list below, before providing the succinct definition:
  1. A gene is a genomic sequence (DNA or RNA) directly encoding functional product molecules, either RNA or protein.
  2. In the case that there are several functional products sharing overlapping regions, one takes the union of all overlapping genomic sequences coding for them.
  3. This union must be coherent—i.e., done separately for final protein and RNA products—but does not require that all products necessarily share a common subsequence.
This can be concisely summarized as:
The gene is a union of genomic sequences encoding a coherent set of potentially overlapping functional products.
On the surface this doesn't seem to be much different from the definition of a gene as a transcribed region but there are subtle differences. The authors describe how their new definition works using a hypothetical example.

How the proposed definition of the gene can be applied to a sample case. A genomic region produces three primary transcripts. After alternative splicing, products of two of these encode five protein products, while the third encodes for a noncoding RNA (ncRNA) product. The protein products are encoded by three clusters of DNA sequence segments (A, B, and C; D; and E). In the case of the three-segment cluster (A, B, C), each DNA sequence segment is shared by at least two of the products. Two primary transcripts share a 5' untranslated region, but their translated regions D and E do not overlap. There is also one noncoding RNA product, and because its sequence is of RNA, not protein, the fact that it shares its genomic sequences (X and Y) with the protein-coding genomic segments A and E does not make it a co-product of these protein-coding genes. In summary, there are four genes in this region, and they are the sets of sequences shown inside the orange dashed lines: Gene 1 consists of the sequence segments A, B, and C; gene 2 consists of D; gene 3 of E; and gene 4 of X and Y. In the diagram, for clarity, the exonic and protein sequences A and E have been lined up vertically, so the dashed lines for the spliced transcripts and functional products indicate connectivity between the proteins sequences (ovals) and RNA sequences (boxes). (Solid boxes on transcripts) Untranslated sequences, (open boxes) translated sequences.
This isn't much different from my preferred definition except that I would have called the region containing exons D and E a single gene with two different protein products. Gerstein et al. (2007) split it into two different genes.
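To make the bookkeeping concrete, here's a minimal sketch (mine, not code from Gerstein et al.) of how their union rule sorts products into genes, using the hypothetical segment labels from the figure legend:

    def call_genes(products):
        # Gerstein et al.'s rule, roughly: within each product type (their
        # 'coherence' requirement), merge products whose genomic segments
        # overlap (here, modeled as sharing a segment label) and take the
        # union of the merged segments.
        genes = []
        for kind in sorted({ptype for ptype, _ in products}):
            merged = []
            for ptype, segs in products:
                if ptype != kind:
                    continue
                segs = set(segs)
                for group in [g for g in merged if g & segs]:
                    merged.remove(group)
                    segs |= group
                merged.append(segs)
            genes.extend(merged)
        return genes

    # Five protein products from segment clusters (A,B,C), (D), and (E),
    # plus one ncRNA from (X,Y). Protein and RNA products are never merged,
    # even where they share genomic sequence.
    products = [
        ("protein", ["A", "B"]), ("protein", ["B", "C"]), ("protein", ["A", "C"]),
        ("protein", ["D"]), ("protein", ["E"]),
        ("RNA", ["X", "Y"]),
    ]
    for gene in sorted(call_genes(products), key=sorted):
        print(sorted(gene))
    # -> ['A', 'B', 'C'], ['D'], ['E'], ['X', 'Y']: the four genes of the caption

Strip away the set notation and this is mostly the old "transcribed region that produces a functional product" with tie-breaking rules for overlaps.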

The bottom line is that in spite of all the rhetoric the "new" definition of a gene isn't much different from the old one that some of us have been using for a couple of decades. It's different from some old definitions that other scientists still prefer but this isn't revolutionary. That discussion has already been going on since 1980.

Let me close by making one further point. The "data" produced by the ENCODE consortium is intriguing but it would be a big mistake to conclude that everything they say is a proven fact. Skepticism about the relevance of those extra transcripts is quite justified as is skepticism about the frequency of alternative splicing.


Gerstein, M.B., Bruce, C., Rozowsky, J.S., Zheng, D., Du, J., Korbel, J.O., Emanuelsson, O., Zhang, Z.D., Weissman, S. and Snyder, M. (2007) What is a gene, post-ENCODE? History and updated definition. Genome Res. 17:669-681.

The ENCODE Project Consortium (2007) Identification and analysis of functional elements in 1% of the human genome by the ENCODE pilot project. Nature 447:799-816. [PDF]

[Hat Tip: Michael White at Adaptive Complexity]

Thursday, September 06, 2012

What in the World Is Michael Eisen Talking About?

I've been trying to keep up with the ENCODE PR fiasco so I immediately clicked on a link to Michael Eisen's blog with the provocative title it is NOT junk. The article is: This 100,000 word post on the ENCODE media bonanza will cure cancer.

Michael Eisen is an evolutionary biologist at the University of California at Berkeley. He's best known, to me, as the brother of Jonathan Eisen.

Michael, like me and hundreds of other scientists, is upset by the ENCODE press releases. One of them is: Fast forward for biomedical research: ENCODE scraps the junk.
The hundreds of researchers working on the ENCODE project have revealed that much of what has been called 'junk DNA' in the human genome is actually a massive control panel with millions of switches regulating the activity of our genes. Without these switches, genes would not work – and mutations in these regions might lead to human disease. The new information delivered by ENCODE is so comprehensive and complex that it has given rise to a new publishing model in which electronic documents and datasets are interconnected.
Here's the interesting thing. Many of us are upset about the press releases and the PR because we don't think the ENCODE data disproves junk DNA. Michael Eisen's perspective is entirely different. He's upset because, according to him, junk DNA was discredited years ago.
The problems start before the first line ends. As the authors undoubtedly know, nobody actually thinks that non-coding DNA is ‘junk’ any more. It’s an idea that pretty much only appears in the popular press, and then only when someone announces that they have debunked it. Which is fairly often. And has been for at least the past decade. So it is more than just intellectually lazy to start the story of ENCODE this way. It is dishonest – nobody can credibly claim this to be a finding of ENCODE. Indeed it was a clear sense of the importance of non-coding DNA that led to the ENCODE project in the first place. And yet, each of the dozens of news stories I read on this topic parroted this absurd talking point – falsely crediting ENCODE with overturning an idea that didn’t need to be overturned.
Eisen is wrong: junk DNA is alive and well. In fact, almost 90% of our genome is junk.

This is what makes science so much fun.


Tuesday, April 29, 2014

Creationists admit that junk DNA may not be a "myth" after all

Creationists in general, and Intelligent Design Creationists in particular, feel very threatened by the idea that most of our genome is junk. We know why they feel threatened: it's because a genome full of junk doesn't seem like something gods would design on purpose. It's pretty hard to reconcile junk DNA with gods that spend so much effort designing bacterial flagella.

The creationists get very excited whenever a group of scientists publish evidence for function in junk DNA and they could hardly contain themselves when the ENCODE preliminary results were published in 2007 because the ENCODE Consortium said that most of the human genome was functional. You will recall that the creationists fell hook, line, and sinker for the ENCODE publicity hype in September 2012 when the ENCODE leaders came right out and said that their analysis of the entire genome shows there is almost no junk in the human genome.

The creationists, just like the ENCODE leaders, were very resistant to all of the scientific evidence for junk DNA. Both groups showed a remarkable ignorance of four decades of work leading to the conclusion that our genomes are full of junk DNA [see ENCODE, Junk DNA, and Intelligent Design Creationism]. Creationists, and even some scientific opponents of junk DNA, quote Jonathan Wells' book The Myth of Junk DNA as an authority on the issue.

Now, I wrote a pretty extensive review of The Myth of Junk DNA showing where mistakes were made and why the evidence still favored lots of junk DNA in our genome [The Myth of Junk DNA by Jonathan Wells]. That was in 2011. Here's how Jonathan Wells responded ... [Jonathan Wells Sends His Regrets].
Oh, one last thing: “paulmc” referred to an online review of my book by University of Toronto professor Larry Moran—a review that “paulmc” called both extensive and thorough. Well, saturation bombing is extensive and thorough, too. Although “paulmc” admitted to not having read more than the Preface to The Myth of Junk DNA, I have read Mr. Moran’s review, which is so driven by confused thinking and malicious misrepresentations of my work—not to mention personal insults—that addressing it would be like trying to reason with a lynch mob.
The ENCODE Consortium has decided that it had better backtrack a little on the subject of junk DNA. Their recent PNAS article (Kellis et al., 2014) pretends that the publicity hype of September 2012 never existed and, even if it did, they may have been right to conclude that 80% of our genome is functional. It all depends on how you define function. Apparently they have just discovered that lots of scientists define it in a way that the ENCODE Consortium overlooked in September 2012.

Now they just want to make sure that everyone knows they have done their homework and they acknowledge that there's a wee bit of a controversy—but they weren't wrong! They just have a different way of defining function.

This puts some of the creationists in a difficult position. Some of them are actually willing to concede that there's a lot of junk DNA in our genome while others are only willing to concede that the case for function may not be quite as rock solid as they thought.

Here's how an anonymous creationist explains the backtracking of the ENCODE Consortium on Evolution News & Views (sic): Defining "Functional": The Latest from ENCODE.

He/she starts off with the obligatory snipe at "Darwinists" and the obligatory misrepresentation of the case for junk DNA. He/she is referring to the Kellis et al. paper ...
First, the paper is a remarkably restrained and balanced response to some of the rather intemperate criticisms of ENCODE from hard-core Darwinists who insist that (a) ONLY an evolutionary approach yields valid information about functionality, (b) evolutionary theory necessarily implies that most of our DNA is junk, and (c) junk DNA provides evidence that Darwinian evolution is a fact. In other words this paper is a model of rational and civil scientific discourse, in contrast to what we have come to expect from some hard-core Darwinists.
(See the quote above from Jonathan Wells for an example of "a model of rational and civil scientific discourse.")

The Evolution News & Views post concludes with ...
The authors conclude that all three approaches must be taken into account, though a simple intersection of the three (which would include only DNA sequences that meet the test of functionality for all three approaches) would be far too restrictive. Unfortunately, the authors do not specify exactly how the three approaches could be integrated to yield a single reliable estimate of the percentage of functional DNA.

So the debate continues.
Believe it or not, that last sentence ("So the debate continues") is pretty remarkable considering that the creationists have steadfastly refused to admit that there is a scientific debate. Over the past decade, they have consistently claimed that the evidence is in and it shows that most of our genome is functional after all.

Maybe I'm being overly optimistic but it looks to me like some creationists are actually disagreeing with Jonathan Wells. Stay tuned.


Kellis, M. et al. (2014) Defining functional DNA elements in the human genome. Proc. Natl. Acad. Sci. (USA) April 24, 2014 published online [doi:10.1073/pnas.1318948111]

Wednesday, June 25, 2014

The Function Wars: Part I

This is Part I of the "Function Wars" posts. The second one is on The ENCODE legacy.1

Quibbling about the meaning of the word "function"

The world is not inhabited exclusively by fools and when a subject arouses intense interest and debate, as this one has, something other than semantics is usually at stake.
Stephen Jay Gould (1982)
The ENCODE Consortium tried to redefine the word "function" to include any biological activity that they could detect using their genome-wide assays. This was not helpful since it included a huge number of sites and sequences that result from spurious (nonfunctional) binding of transcription factors or accidental transcription of random DNA sequences to make junk RNA [see What did the ENCODE Consortium say in 2012?].

I believe that this strange way of redefining biological function was a deliberate attempt to discredit junk DNA. It was quite successful since much of the popular press interpreted the ENCODE results as refuting or disproving junk DNA. I believe that the leaders of the ENCODE Consortium knew what they were doing when they decided to hype their results by announcing that 80% of the human genome is functional [see The Story of You: Encode and the human genome – video, Science Writes Eulogy for Junk DNA].

The ENCODE Project, today, announces that most of what was previously considered as 'junk DNA' in the human genome is actually functional. The ENCODE Project has found that 80 per cent of the human genome sequence is linked to biological function.

[Google Earth of Biomedical Research]

Friday, January 04, 2013

Science Magazine Chooses ENCODE Results as One of the Top Ten Breakthroughs in 2012

Science magazine (published by AAAS) was one of the major news sources that fell hook, line and sinker for the ENCODE/Nature publicity campaign last September [Science Writes Eulogy for Junk DNA]. It even published a laudatory three page profile of Ewan Birney, the man responsible for misrepresenting the ENCODE results as evidence that most of our genome is functional [Ewan Birney: Genomics' Big Talker].

I was somewhat apprehensive when I saw that the editors of Science had picked the ENCODE results as one of the top ten breakthroughs [Genomics Beyond Genes]. Would the editors continue to promote the idea that most of the human genome is functional?

Tuesday, September 11, 2012

ENCODE/Junk DNA Fiasco: The IDiots Don't Like Me

Casey Luskin has devoted an entire post to discussing my views on junk DNA. I'm flattered. Read it at: What an Evolution Advocate's Response to the ENCODE Project Tells Us about the Evolution Debate.

Let's look at how the IDiots are responding to this publicity fiasco. Casey Luskin begins with ...
University of Toronto biochemistry professor Larry Moran is not happy with the results of the ENCODE project, which report evidence of "biochemical functions for 80% of the genome." Other evolution-defenders are trying to dismiss this paper as mere "hype".

Yes that's right -- we're supposed to ignore the intentionally unambiguous abstract of an 18-page Nature paper, the lead out of 30 other simultaneous papers from this project, co-authored by literally hundreds of leading scientists worldwide, because it's "hype." (Read the last two or so pages of the main Nature paper to see the uncommonly long list of international scientists who were involved with this project, and co-authored this paper.) Larry Moran and other vocal Internet evolution-activists are welcome to disagree and protest these conclusions, but it's clear that the consensus of molecular biologists -- people who actually study how the genome works -- now believe that the idea of "junk DNA" is essentially wrong.

Tuesday, December 04, 2012

Sean Eddy on Junk DNA and ENCODE

Sean Eddy is a bioinformatics expert who runs a lab at the Howard Hughes Medical Institute (HHMI) Janelia Farm Research Campus in Virginia (USA).1 Sean was one of the many scientists who spoke out against the ENCODE misinterpretation of their own results [ENCODE says what?].

Most people now know that ENCODE did not disprove junk DNA (with the possible exception of creationists and a few kooks).

Sean has written a wonderful article for Current Biology where he explains in simple terms why there is abundant evidence for junk (i.e. nonfunctional) DNA [The C-value paradox, junk DNA and ENCODE] [preprint].

Here's a quotation from the article to pique your interest.
Recently, the ENCODE project has concluded that 80% of the human genome is reproducibly transcribed, bound to proteins, or has its chromatin specifically modified. In widespread publicity around the project, some ENCODE leaders claimed that this biochemical activity disproves junk DNA. If there is an alternative hypothesis, it must provide an alternative explanation for the data: for the C-value paradox, for mutational load, and for how a large fraction of eukaryotic genomes is composed of neutrally drifting transposon-derived sequence. ENCODE hasn’t done this, and most of ENCODE’s data don’t bear directly on the question. Transposon‑derived sequence is generally expected to be biochemically active by ENCODE’s definitions — lots of transposon sequences are inserted into transcribed genic regions, mobile transposons are transcribed and regulated, and genomic suppression of transposon activity requires DNA‑binding and chromatin modification.

The question that the ‘junk DNA’ concept addresses is not whether these sequences are biochemically ‘active’, but whether they’re there primarily because they’re useful for the organism.


1. More importantly, he's an alumnus of Talk.origins.

Sunday, July 19, 2015

The fuzzy thinking of John Parrington: pervasive transcription

Opponents of junk DNA usually emphasize the point that they were surprised when the draft human genome sequence was published in 2001. They expected about 100,000 genes but the initial results suggested fewer than 30,000 (the final number is about 25,000).1 The reason they were surprised is that they had not kept up with the literature on the subject and they had not been paying attention when the sequence of chromosome 22 was published in 1999 [see Facts and Myths Concerning the Historical Estimates of the Number of Genes in the Human Genome].

The experts were expecting about 30,000 genes and that's what the genome sequence showed. Normally this wouldn't be such a big deal. Those who were expecting a large number of genes would just admit that they were wrong and they hadn't kept up with the literature over the past 30 years. They should have realized that discoveries in other species and advances in developmental biology had reinforced the idea that mammals only needed about the same number of genes as other multicellular organisms. Most of the differences are due to regulation. There was no good reason to expect that humans would need a huge number of extra genes.

That's not what happened. Instead, opponents of junk DNA insist that the complexity of the human genome cannot be explained by such a low number of genes. There must be some other explanation to account for the missing genes. This sets the stage for at least seven different hypotheses that might resolve The Deflated Ego Problem. One of them is the idea that the human genome contains thousands and thousands of nonconserved genes for various regulatory RNAs. These are the missing genes and they account for a lot of the "dark matter" of the genome—sequences that were thought to be junk.

Here's how John Parrington describes it on page 91 of his book.
The study [ENCODE] also found that 80 per cent of the genome was generating RNA transcripts having importance, many were found only in specific cellular compartments, indicating that they have fixed addresses where they operate. Surely there could hardly be a greater divergence from Crick's central dogma than this demonstration that RNAs were produced in far greater numbers across the genome than could be expected if they were simply intermediates between DNA and protein. Indeed, some ENCODE researchers argued that the basic unit of transcription should now be considered as the transcript. So Stamatoyannopoulos claimed that 'the project has played an important role in changing our concept of the gene.'
This passage illustrates my difficulty in coming to grips with Parrington's logic in The Deeper Genome. Just about every page contains statements that are either wrong or misleading and when he strings them together they lead to a fundamentally flawed conclusion. In order to critique the main point, you have to correct each of the so-called "facts" that he gets wrong. This is very tedious.

I've already explained why Parrington is wrong about the Central Dogma of Molecular Biology [John Avise doesn't understand the Central Dogma of Molecular Biology]. His readers don't know that he's wrong so they think that the discovery of noncoding RNAs is a revolution in our understanding of biochemistry—a revolution led by the likes of John A. Stamatoyannopoulos in 2012.

The reference in the book to the statement by Stamatoyannopoulos is from the infamous Elizabeth Pennisi article on ENCODE Project Writes Eulogy for Junk DNA (Pennisi, 2012). Here's what she said in that article ...
As a result of ENCODE, Gingeras and others argue that the fundamental unit of the genome and the basic unit of heredity should be the transcript—the piece of RNA decoded from DNA—and not the gene. “The project has played an important role in changing our concept of the gene,” Stamatoyannopoulos says.
I'm not sure what concept of a gene these people had before 2012. It appears that John Parrington is under the impression that genes are units that encode proteins and maybe that's what Pennisi and Stamatoyannopoulos thought as well.

If so, then perhaps the publicity surrounding ENCODE really did change their concept of a gene but all that proves is that they were remarkably uninformed before 2012. Intelligent biochemists have known for decades that the best definition of a gene is "a DNA sequence that is transcribed to produce a functional product."2 In other words, we have been defining a gene in terms of transcripts for 45 years [What Is a Gene?].

This is just another example of wrong and misleading statements that will confuse readers. If I were writing a book I would say, "The human genome sequence confirmed the predictions of the experts that there would be no more than 30,000 genes. There's nothing in the genome sequence or the ENCODE results that has any bearing on the correct understanding of the Central Dogma and there's nothing that changes the correct definition of a gene."

You can see where John Parrington's thinking is headed. Apparently, Parrington is one of those scientists who were completely unaware of the fact that genes could specify functional RNAs and completely unaware of the fact that Crick knew this back in 1970 when he tried to correct people like Parrington. Thus, Parrington and his colleagues were shocked to learn that the human genome had only 25,000 genes and that many of them didn't encode proteins. Instead of realizing that his view was wrong, he thinks that the ENCODE results overthrew those old definitions and changed the way we think about genes. He tries to convince his readers that there was a revolution in 2012.

Parrington seems to be vaguely aware of the idea that most pervasive transcription is due to noise or junk RNA. However, he gives his readers no explanation of the reasoning behind such a claim. Spurious transcription is predicted because we understand the basic concept of transcription initiation. We know that promoter sequences and transcription factor binding sites are short sequences and we know that they HAVE to occur at high frequency in large genomes just by chance. This is not just speculation. [see The "duon" delusion and why transcription factors MUST bind non-functionally to exon sequences and How RNA Polymerase Binds to DNA]

If our understanding of transcription initiation is correct then all you need is an activator transcription factor binding site near something that's compatible with a promoter sequence. Any given cell type will contain a number of such factors and they must bind to a large number of nonfunctional sites in a large genome. Many of these will cause occasional transcription giving rise to low abundance junk RNA. (Most of the ENCODE transcripts are present at less than one copy per cell.)
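The numbers behind this prediction are easy to check. Here's a back-of-the-envelope calculation (my illustration, assuming a random genome with equal base frequencies, which is crude but makes the point):

    # Expected chance matches to a short recognition sequence in a 3.2 Gb
    # genome: a specific site of length k occurs with probability (1/4)**k
    # at each position if the four bases are equally frequent.
    GENOME_BP = 3.2e9
    for k in range(6, 13):
        expected = GENOME_BP * 0.25 ** k
        print(f"{k:2d} bp site: ~{expected:11,.0f} matches by chance")

A 6 bp site is expected roughly 780,000 times and even a 10 bp site about 3,000 times. Multiply that by the dozens of transcription factors active in any given cell and a large background of nonfunctional binding, and hence of spurious transcripts, is unavoidable.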

Different tissues will have different transcription factors. Thus, the low abundance junk RNAs must exhibit tissue specificity if our prediction is correct. Parrington and the ENCODE workers seem to think that the cell specificity of these low abundance transcripts is evidence of function. It isn't—it's exactly what you expect of spurious transcription. Parrington and the ENCODE leaders don't understand the scientific literature on transcription initiation and transcription factor binding sites.

It takes me an entire blog post to explain the flaws in just one paragraph of Parrington's book. The whole book is like this. The only thing it has going for it is that it's better than Nessa Carey's book [Nessa Carey doesn't understand junk DNA].


1. There are about 20,000 protein-encoding genes and an unknown number of genes specifying functional RNAs. I'm estimating that there are about 5,000 but some people think there are many more.

2. No definition is perfect. My point is that defining a gene as a DNA sequence that encodes a protein is something that should have been purged from textbooks decades ago. Any biochemist who ever thought seriously enough about the definition to bring it up in a scientific paper should be embarrassed to admit that they ever believed such a ridiculous definition.

Pennisi, E. (2012) "ENCODE Project Writes Eulogy for Junk DNA." Science 337:1159-1161. [doi:10.1126/science.337.6099.1159]

Sunday, February 12, 2017

ENCODE workshop discusses function in 2015

A reader directed me to a 2015 ENCODE workshop with online videos of all the presentations [From Genome Function to Biomedical Insight: ENCODE and Beyond]. The workshop was sponsored by the National Human Genome Research Institute in Bethesda, Md (USA). The purpose of the workshop was ...

  1. Discuss the scientific questions and opportunities for better understanding genome function and applying that knowledge to basic biological questions and disease studies through large-scale genomics studies.
  2. Consider options for future NHGRI projects that would address these questions and opportunities.
The main controversy concerning the human genome is how much of it is junk DNA with no function. Since the purpose of ENCODE is to understand genome function, I expected a lively discussion about how to distinguish between functional elements and spurious nonfunctional elements.

Tuesday, September 11, 2012

ENCODE/Junk DNA Fiasco: John Timmer Gets It Right!

John Timmer is the science editor at Ars Technica. Yesterday he published the best analysis of the ENCODE/junk DNA fiasco that any science writer has published so far [Most of what you read was wrong: how press releases rewrote scientific history].

How did he manage to pull this off? It's not much of a secret. He knew what he was writing about and that gives him an unfair advantage over most other science journalists.

Let me show you what I mean. Here's John Timmer's profile on the Ars Technica website.
John is Ars Technica's science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. John has done over a decade's worth of research in genetics and developmental biology at places like Cornell Medical College and the Memorial Sloan-Kettering Cancer Center. He's been a speaker at the annual meeting of the National Association of Science Writers and the Science Online meetings, and he's one of the organizers of the Science Online NYC discussion series. In addition to being Ars' science content wrangler, John still teaches at Cornell and does freelance writing, editing, and programming.
See what I mean? He has a degree in biochemistry and another one in molecular biology. People like that shouldn't be allowed to write about the ENCODE results because they might embarrass the scientists.

Friday, September 07, 2012

More Expert Opinion on Junk DNA from Scientists

The Nature issue containing the latest ENCODE Consortium papers also has a News & Views article called "Genomics: ENCODE explained" (Ecker et al., 2012). Some of these scientists comment on junk DNA.

For example, here's what Joseph Ecker says,
One of the more remarkable findings described in the consortium's 'entrée' paper is that 80% of the genome contains elements linked to biochemical functions, dispatching the widely held view that the human genome is mostly 'junk DNA'. The authors report that the space between genes is filled with enhancers (regulatory DNA elements), promoters (the sites at which DNA's transcription into RNA is initiated) and numerous previously overlooked regions that encode RNA transcripts that are not translated into proteins but might have regulatory roles.
And here's what Inês Barroso says,
The vast majority of the human genome does not code for proteins and, until now, did not seem to contain defined gene-regulatory elements. Why evolution would maintain large amounts of 'useless' DNA had remained a mystery, and seemed wasteful. It turns out, however, that there are good reasons to keep this DNA. Results from the ENCODE project show that most of these stretches of DNA harbour regions that bind proteins and RNA molecules, bringing these into positions from which they cooperate with each other to regulate the function and level of expression of protein-coding genes. In addition, it seems that widespread transcription from non-coding DNA potentially acts as a reservoir for the creation of new functional molecules, such as regulatory RNAs.
If this were an undergraduate course I would ask for a show of hands in response to the question, "How many of you thought that there did not seem to be "defined gene-regulatory elements" in noncoding DNA?"

I would also ask, "How many of you have no idea how evolution could retain "useless" DNA in our genome?" Undergraduates who don't understand evolution should not graduate in a biological science program. It's too bad we don't have similar restrictions on senior scientists who write News & Views articles for Nature.

Jonathan Pritchard and Yoav Gilad write,
One of the great challenges in evolutionary biology is to understand how differences in DNA sequence between species determine differences in their phenotypes. Evolutionary change may occur both through changes in protein-coding sequences and through sequence changes that alter gene regulation.

There is growing recognition of the importance of this regulatory evolution, on the basis of numerous specific examples as well as on theoretical grounds. It has been argued that potentially adaptive changes to protein-coding sequences may often be prevented by natural selection because, even if they are beneficial in one cell type or tissue, they may be detrimental elsewhere in the organism. By contrast, because gene-regulatory sequences are frequently associated with temporally and spatially specific gene-expression patterns, changes in these regions may modify the function of only certain cell types at specific times, making it more likely that they will confer an evolutionary advantage.

However, until now there has been little information about which genomic regions have regulatory activity. The ENCODE project has provided a first draft of a 'parts list' of these regulatory elements, in a wide range of cell types, and moves us considerably closer to one of the key goals of genomics: understanding the functional roles (if any) of every position in the human genome.
The problem here is the hype. While it's true that the ENCODE project has produced massive amounts of data on transcription factor binding sites etc., it's a bit of an exaggeration to say that "until now there has been little information about which genomic regions have regulatory activity." Twenty-five years ago, my lab published some pretty precise information about the parts of the genome regulating activity of a mouse hsp70 gene. There have been thousands of other papers on the subject of gene regulatory sequences since then. I think we actually have a pretty good understanding of gene regulation in eukaryotes. It's a model that seems to work well for most genes.

The real challenge from the ENCODE Consortium is that they question that understanding. They are proposing that huge amounts of the genome are devoted to fine-tuning the expression of most genes in a vast network of binding sites and small RNAs. That's not the picture we have developed over the past four decades. If true, it would not only mean that a lot less DNA is junk but it would also mean that the regulation of gene expression is fundamentally different than it is in E. coli.



[Image Credit: ScienceDaily: In Massive Genome Analysis ENCODE Data Suggests 'Gene' Redefinition.]

Ecker, J.R., Bickmore, W.A., Barroso, I., Pritchard, J.K., Gilad, Y., and Segal, E. (2012) Genomics: ENCODE explained. Nature 489:52-55. [doi:10.1038/489052a]

Saturday, November 19, 2022

How many enhancers in the human genome?

In spite of what you might have read, the human genome does not contain one million functional enhancers.

The Sept. 15, 2022 issue of Nature contains a news article on "Gene regulation" [Two-layer design protects genes from mutations in their enhancers]. It begins with the following sentence.

The human genome contains only about 20,000 protein-coding genes, yet gene expression is controlled by around one million regulatory DNA elements called enhancers.

Sandwalk readers won't need to be told the reference for such an outlandish claim because you all know that it's the ENCODE Consortium summary paper from 2012—the one that kicked off their publicity campaign to convince everyone of the death of junk DNA (ENCODE, 2012). ENCODE identified several hundred thousand transcription factor (TF) binding sites and in 2012 they estimated that the total number of base pairs involved in regulating gene expression could account for 20% of the genome.

How many of those transcription factor binding sites are functional and how many are due to spurious binding to sites that have nothing to do with gene regulation? We don't know the answer to that question but we do know that there will be a huge number of spurious binding sites in a genome of more than three billion base pairs [Are most transcription factor binding sites functional?].
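Here's a tiny simulation (mine; the 8 bp motif is arbitrary and chosen only for illustration) showing how fast spurious sites accumulate for even one DNA binding protein:

    import random

    # Count matches to one hypothetical 8 bp recognition site in random DNA,
    # then scale from 1 Mb up to a 3.2 Gb genome.
    random.seed(1)
    MOTIF = "TGACGTCA"  # arbitrary illustrative 8-mer
    seq = "".join(random.choice("ACGT") for _ in range(1_000_000))
    hits = sum(seq.startswith(MOTIF, i)
               for i in range(len(seq) - len(MOTIF) + 1))
    print(f"{hits} matches per Mb -> ~{hits * 3_200:,} expected in 3.2 Gb")

One arbitrary 8-mer already shows up tens of thousands of times genome-wide by chance alone, and the nucleus contains on the order of a thousand different DNA binding proteins, many with shorter or more degenerate recognition sequences.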

The scientists in the ENCODE Consortium didn't know the answer either but what's surprising is that they didn't even know there was a question. It never occurred to them that some of those transcription factor binding sites have nothing to do with regulation.

Fast forward ten years to 2022. Dozens of papers have been published criticizing the ENCODE Consortium for their lack of knowledge of the basic biochemical properties of DNA binding proteins. Surely nobody who is interested in this topic believes that there are one million functional regulatory elements (enhancers) in the human genome?

Wrong! The authors of this Nature article, Ran Elkon at Tel Aviv University (Israel) and Reuven Agami at the Netherlands Cancer Institute (Amsterdam, Netherlands), didn't get the message. They think it's quite plausible that the expression of every human protein-coding gene is controlled by an average of 50 regulatory sites even though there's not a single known example of any such gene.

Not only that, for some reason they think it's only important to mention protein-coding genes in spite of the fact that the reference they give for 20,000 protein-coding genes (Nurk et al., 2022) also claims there are an additional 40,000 noncoding genes. This is an incorrect claim since Nurk et al. have no proof that all those transcribed regions are actually genes, but let's play along and assume that there really are 60,000 genes in the human genome. That reduces the number to an average of "only" 17 enhancers per gene. I don't know of a single gene that has 17 or more proven enhancers, do you?
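For what it's worth, the arithmetic is trivial to check (the one-million figure is the article's; the two gene counts are the ones just discussed):

    # Enhancers per gene implied by the claim of one million enhancers.
    ENHANCERS = 1_000_000
    for label, genes in (("20,000 protein-coding genes", 20_000),
                         ("60,000 genes (Nurk et al.)", 60_000)):
        print(f"{label}: ~{ENHANCERS / genes:.0f} enhancers per gene")
    # -> ~50 and ~17 enhancers per gene, respectively

Either denominator leaves an implausibly high average for real, experimentally validated enhancers.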

Why would two researchers who study gene regulation say that the human genome contains one million enhancers when there's no evidence to support such a claim and it doesn't make any sense? Why would Nature publish this paper when surely the editors must be aware of all the criticism that arose out of the 2012 ENCODE publicity fiasco?

I can think of only two answers to the first question. Either Elkon and Agami don't know of any papers challenging the view that most TF binding sites are functional (see below) or they do know of those papers but choose to ignore them. Neither answer is acceptable.

I think that the most important question in human gene regulation is how much of the genome is devoted to regulation. How many potential regulatory sites (enhancers) are functional and how many are spurious non-functional sites? Any paper on regulation that does not mention this problem should not be published. All results have to be interpreted in light of conflicting claims about function.

Here are some examples of papers that raise the issue. The point is not to prove that these authors are correct - although they are correct - but to show that there's a controversy. You can't just state that there are one million regulatory sites as if it were a fact when you know that the results are being challenged.

"The observations in the ENCODE articles can be explained by the fact that biological systems are noisy: transcription factors can interact at many nonfunctional sites, and transcription initiation takes place at different positions corresponding to sequences similar to promoter sequences, simply because biological systems are not tightly controlled." (Morange, 2014)

"... ENCODE had not shown what fraction of these activities play any substantive role in gene regulation, nor was the project designed to show that. There are other well-studied explanations for reproducible biochemical activities besides crucial human gene regulation, including residual activities (pseudogenes), functions in the molecular features that infest eukaryotic genomes (transposons, viruses, and other mobile elements), and noise." (Eddy, 2013)

"Given that experiments performed in a diverse number of eukaryotic systems have found only a small correlation between TF-binding events and mRNA expression, it appears that in most cases only a fraction of TF-binding sites significantly impacts local gene expression." (Palazzo and Gregory, 2014)

One surprising finding from the early genome-wide ChIP studies was that TF binding is widespread, with thousands to tens of thousands of binding events for many TFs. These numbers do not fit with existing ideas of the regulatory network structure, in which TFs were generally expected to regulate a few hundred genes, at most. Binding is not necessarily equivalent to regulation, and it is likely that only a small fraction of all binding events will have an important impact on gene expression. (Slattery et al., 2014)

"Detailed maps of transcription factor (TF)-bound genomic regions are being produced by consortium-driven efforts such as ENCODE, yet the sequence features that distinguish functional cis-regulatory sites from the millions of spurious motif occurrences in large eukaryotic genomes are poorly understood." (White et al., 2013)

"One outstanding issue is the fraction of factor binding in the genome that is 'functional', which we define here to mean that disturbing the protein-DNA interaction leads to a measurable downstream effect on gene regulation." (Cusanovich et al., 2014)

"... we expect, for example, accidental transcription factor-DNA binding to go on at some rate, so assuming that transcription equals function is not good enough. The null hypothesis after all is that most transcription is spurious and alternative transcripts are a consequence of error-prone splicing." (Hurst, 2013)

"... as a chemist, let me say that I don't find the binding of DNA-binding proteins to random, non-functional stretches of DNA surprising at all. That hardly makes these stretches physiologically important. If evolution is messy, chemistry is equally messy. Molecules stick to many other molecules, and not every one of these interactions has to lead to a physiological event. DNA-binding proteins that are designed to bind to specific DNA sequences would be expected to have some affinity for non-specific sequences just by chance; a negatively charged group could interact with a positively charged one, an aromatic ring could insert between DNA base pairs and a greasy side chain might nestle into a pocket by displacing water molecules. It was a pity the authors of ENCODE decided to define biological functionality partly in terms of chemical interactions which may or may not be biologically relevant." (Jogalekar, 2012)


Nurk, S., Koren, S., Rhie, A., Rautiainen, M., Bzikadze, A. V., Mikheenko, A., et al. (2022) The complete sequence of a human genome. Science, 376:44-53. [doi: 10.1126/science.abj6987]

The ENCODE Project Consortium (2012) An integrated encyclopedia of DNA elements in the human genome. Nature, 489:57-74. [doi: 10.1038/nature11247]

Monday, August 05, 2019

Religion vs science (junk DNA): a blast from the past

I was checking out the science books in our local bookstore the other day and I came across Evolution 2.0 by Perry Marshall. It was published in 2015 but I don't recall seeing it before.

The author is an engineer (The Salem Conjecture) who's a big fan of Intelligent Design. The book is an attempt to prove that evolution is a fraud.

I checked to see if junk DNA was mentioned and came across the following passages on pages 273-275. It's interesting to read them in light of what's happened in the past four years. I think that the view represented in this book is still the standard view in the ID community in spite of the fact that it is factually incorrect and scientifically indefensible.

Sunday, January 01, 2023

The function wars are over

In order to have a productive discussion about junk DNA we needed to agree on how to define "function" and "junk." Disagreements over those definitions spawned the Function Wars, which became intense over the past decade. Those wars are over, and now it's time to move beyond nitpicking about terminology.

The idea that most of the human genome is composed of junk DNA arose gradually in the late 1960s and early 1970s. The concept was based on a lot of evidence dating back to the 1940s and it gained support with the discovery of massive amounts of repetitive DNA.

Various classes of functional DNA were known back then, including regulatory sequences, protein-coding genes, noncoding genes, centromeres, and origins of replication. Other categories have been added since then, but the total amount of functional DNA was not thought to be more than 10% of the genome. This was confirmed with the publication of the human genome sequence.

From the very beginning, the distinction between functional DNA and junk DNA was based on evolutionary principles. Functional DNA was the product of natural selection and junk DNA was not constrained by selection. The genetic load argument was a key feature of Susumu Ohno's conclusion that 90% of our genome is junk (Ohno, 1972a; Ohno, 1972b).
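To see why the genetic load argument puts an upper limit on the amount of functional DNA, here's a rough, illustrative calculation in the same spirit. The round numbers are my own choices for illustration, not Ohno's exact 1972 values.

```python
# A back-of-the-envelope version of the genetic load argument.
# All numbers below are rough, illustrative values, not Ohno's exact figures.

mutation_rate = 1.1e-8   # new mutations per bp per generation (approximate)
genome_size = 3.2e9      # haploid human genome size in bp

# New mutations per diploid zygote per generation: roughly 70.
new_mutations = 2 * genome_size * mutation_rate

# Roughly one new deleterious mutation per generation is about the most a
# population with human reproductive rates can purge by selection.
tolerable_load = 1.0

# Assume ~10% of mutations hitting functional DNA are actually deleterious;
# most are effectively neutral even within functional regions.
harmful_fraction = 0.1

max_functional = tolerable_load / (new_mutations * harmful_fraction)
print(f"{new_mutations:.0f} new mutations per zygote")
print(f"at most ~{max_functional:.0%} of the genome can be functional")
# Output: ~70 new mutations, and an upper limit of roughly 14% functional DNA,
# which is in the same ballpark as Ohno's conclusion that ~90% is junk.
```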

Sunday, September 16, 2012

Read What Mike White Has to Say About ENCODE and Junk DNA

One of the good things to come out of this ENCODE/junk DNA fiasco is that I've discovered a number of excellent scientists who aren't afraid to speak out on behalf of science. One of them is Mike White, a systems biologist at the Center for Genome Sciences and Systems Biology, Washington Univ. School of Medicine, St. Louis (USA). He blogs at The Finch & Pea.

Mike published an impressive article on the Huffington Post a few days ago. This is a must-read for anyone interested in the controversy over junk DNA: A Genome-Sized Media Failure. Here's part of what he says ...
If you read anything that emerged from the ENCODE media blitz, you were probably told some version of the "junk DNA is debunked" story. It goes like this: When scientists realized that classical, protein-encoding genes make up less than 2% of the human genome, they simply assumed, in a fit of hubris, that the rest of our DNA was useless junk. (You might have also heard this from your high school or college teacher. Your teacher was wrong.) Along came the ENCODE consortium, which found that, far from being useless, junk DNA is packed with functionality. And so everything scientists thought they knew about the genome was wrong, wrong wrong.

The Washington Post headline read, "'Junk DNA' concept debunked by new analysis of human genome." The New York Times wrote that "The human genome is packed with at least four million gene switches that reside in bits of DNA that once were dismissed as 'junk' but that turn out to play critical roles in controlling how cells, organs and other tissues behave." Influenced by misleading press releases and statements by scientists, story after story suggested that debunking junk DNA was the main result of the ENCODE studies. These stories failed us all in three major ways: they distorted the science done before ENCODE, they obscured the real significance of the ENCODE project, and most crucially, they misled the public on how science really works.

What you should really know about the concept of junk DNA is that, first, it was not based on what scientists didn't know, but rather on what they did know about the genome; and second, that concept has held up quite well, even in light of the ENCODE results.
Way to go, Mike!

In the past week, lots of scientists have demonstrated that they don't know what they're talking about when they make statements about junk DNA. I don't expect any of those scientists to apologize for misleading the public. After all, their statements were born of ignorance and that same ignorance prevents them from learning the truth, even now.

However, I do expect lots of science journalists to write follow-up articles correcting the misinformation that they have propagated. That's their job.


Friday, January 04, 2013

Intelligent Design Creationists Choose ENCODE Results as the #1 Evolution Story of 2012

The folks over at Evolution News & Views (sic) have selected the ENCODE papers as the Number 1 evolution-related story of 2012. Naturally, they fell for the hype of the ENCODE/Nature publicity campaign, as you can see from the blog title: Our Top 10 Evolution-Related Stories: #1, ENCODE Project Buries "Junk DNA".

Most of the article is just the reposting of an article by Casey Luskin but some anonymous editor has added ...
Editor's note: For the No. 1 slot among evolution-related news stories of 2012, this one was an easy pick. The publication of the ENCODE project results detonated what had been considered among the sturdiest defenses that Darwinian evolutionary theory could still fall back upon: "Junk DNA." Casey Luskin's initial reporting is featured below. See also our response to the ensuing controversy over ENCODE ("Why the Case for Junk DNA 2.0 Still Fails").
Normally I would make fun of the creationists for misunderstanding the real scientific results in the papers that were published last September but, in this case, there are lots of real scientists who fell into the same trap.

Even Science magazine selected the ENCODE results as a top-ten breakthrough and noted that 80% of the human genome now has a function [Science Magazine Chooses ENCODE Results as One of the Top Ten Breakthroughs in 2012]. Oh well, I guess I'll just have to be content to point out that many scientists are as stupid as many Intelligent Design Creationists!

I can still mock the creationists for claiming that "Darwinian evolutionary theory" supports junk DNA.