More Recent Comments

Friday, October 04, 2013

David Klinghoffer Wants Clear Definitions

A few days ago I mentioned that definitions were important and I asked my students to look at twenty definitions of "evolution" [The Many Definitions of Evolution]. I was surprised to discover that some of you don't think it's very important to agree on how we define important terms and concepts.

David Klinghoffer agrees with me. He recently posted an article on Evolution News & Views (sic) where he called for clear definitions [Terry Mattingly: In the Evolution Debate, Clear Definitions Are Among the Casualties]. Let's see what he has to say ...
The point cannot be hammered home too often: In media coverage of the evolution debate, a standard trick, the one that stands out the most for slipperiness, is the refusal to define common terms. What is "evolution," or "creationism," or "intelligent design"? Readers may think they know. The reporter may think he knows. Usually, the shades of meaning get blurred, with the suspiciously consistent effect of casting evolution skeptics into a bad light.
Oh dear. Klinghoffer thinks that we are guilty of using definitions that make creationists look bad. He quotes from an article by Terry Mattingly who says ....
[T]he committee that produces the Associated Press Stylebook needs to urge mainstream journalists to be more careful when using the words "evolution" and "creationism." Each of those terms has a half dozen or so finely tuned definitions, depending on who is using them at any given moment.

For example, a person who accepts a creation narrative with a "young earth" and a timeline with seven 24-hour days will certainly embrace the creationist label. But what about a person who believes that creation unfolded over billions of years, involved slow change over time, a common tree of descent for species and ages of micro-evolutionary change?
That's simple. Both are creationists [On Describing IDiots as Creationists] [Creationism Continuum] [What Is Creationism?] (The last two posts attempt to deal with some Sandwalk readers who think that their preferred definition is the only correct definition.)

Mattingly then tackles a more difficult definition ....
Similar things happen with the term evolution, which as the Blessed Pope John Paul II once observed, is best discussed in terms of different schools of evolutionary thought, some of which are compatible with Christian faith and some of which are not...

The word "evolutionist" certainly applies to someone who believes life emerged from a natural, materialistic, random process that was without design or purpose. But what about someone who accepts that theory on the biological front, but believes that there is scientific evidence that our universe was finely tuned to produce life? What about someone who says that creation contains evidence best thought of as the signature of its creator (Carl Sagan, for example). What about people who insist they are doctrinaire Darwinists, but still see cracks in the old neo-Darwinian creeds? Are "theistic evolutionists" really believers in "evolution" in the eyes of the truly secular academic powers that be? And so forth and so on.
This is definitely a problem. As we see, Mattingly is terribly confused about the meaning of "evolution" and the difference between it and "evolutionary theory." I agree that we need to be clear about what we mean and I've tried to do that [What Is Evolution?]. (BTW, "theistic evolutionist" is just a euphemism for a particular kind of "creationist.")

Mattingly doesn't give us an answer. I guess he was too busy complaining.

Let's see what Klinghoffer has to say since he's convinced that this is an important issue. How do the IDiots define "evolution" and "Darwinism" and what do they have to say about modern evolutionary theory? How do they define "creationist"?

Waiting .........


Christian de Duve (1917-2013)

Christian de Duve died last May. He was the man who discovered peroxisomes and he did important work on other cell compartments such as lysosomes. He was awarded the Nobel Prize in Physiology or Medicine in 1974 but, unfortunately, his name is not as widely recognized as it should be.

I met him a few times when he was working on his book "Blueprint for a Cell" and his second book, "Vital Dust." These books explore a unique perspective on the origin of life and they should be consulted by anyone who is interested in that topic.

Read the obituary by Fred Opperdoes in PLoS Biology: A Feeling for the Cell: Christian de Duve (1917–2013). You will get to know a scientist whose life is worth celebrating.

You'll also learn how a Belgian gentleman can behave in a way that we in North America cannot yet copy. Perhaps in a few years our countries will also become civilized.
Dr. Christian de Duve remained active until the very end of his life, as this photograph taken in his last year demonstrates. He finished his last book Sept vies en une: Mémoires d'un Prix Nobel only a few months before he passed away. When he felt that both his health and strength were rapidly subsiding, he decided to end his life at the age of 95. He chose to die by an act of euthanasia, while surrounded by his children.


Intelligent Design Creationists Make a Prediction: How Did It Work Out?

Intelligent Design Creationism is often criticized for not making testable predictions. But as it turns out the movement HAS made a number of predictions. For example, they predicted twenty years ago that "Darwinism" would be dead by now and everyone would believe in God.

Okay, so that one didn't work out very well. What about the other predictions? Barry Arrington, that well-known science expert, lets us know about a prediction that turned out to be correct according to his understanding of biology! [Let’s Put This One To Rest Please]
Elizabeth Liddle from a prior post: “Darwinian hypotheses make testable predictions and ID hypotheses (so far) don’t.”

This statement is breathtakingly false. Let us take just one example. For years Darwinists touted “junk DNA” as not just any evidence but powerful, practically irrefutable evidence for the Darwinian hypothesis. ID proponents disagreed and argued that the evidence would ultimately demonstrate function.

Not only did both hypotheses make testable predictions, the Darwinist prediction turned out to be false and the ID prediction turned out to be confirmed.

EL, you are entitled to your own private opinion. You are not entitled to your own private facts. And when you make it up as you go like this, be sure you will be called out.
Did you remember to turn off your irony meters?

Wanna see some examples of predictions that would falsify Intelligent Design Creationism? Go to: Predictions of Intelligent Design Creationism.


Wellcome Trust Sanger Institute Misleads Public About Junk DNA

Khurana et al. (2013) have just published a nice paper in Science where they analyzed 1,092 human genomes in order to detect variants that might be linked to certain diseases (especially cancer). They focused on noncoding regions since it is much harder to recognize mutations in regulatory regions, and these are leading candidates for cancer-causing mutations. What they did was identify conserved sequences and look for variants within those presumptive regulatory sequences.
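The screening strategy described above can be sketched as a simple interval lookup: keep only the variants that fall inside a conserved (presumptively regulatory) region. This is a minimal illustration with made-up coordinates, not code or data from the paper:

```python
def prioritize_variants(variants, conserved_regions):
    """Return the variants that fall inside a conserved region.

    variants: list of (chrom, pos) tuples.
    conserved_regions: list of (chrom, start, end) half-open intervals.
    """
    hits = []
    for chrom, pos in variants:
        for c, start, end in conserved_regions:
            if c == chrom and start <= pos < end:
                hits.append((chrom, pos))
                break  # one overlapping region is enough to flag the variant
    return hits

# Hypothetical conserved noncoding regions and observed variants.
regions = [("chr1", 1000, 1200), ("chr2", 500, 800)]
variants = [("chr1", 1100), ("chr1", 5000), ("chr2", 600)]
print(prioritize_variants(variants, regions))  # → [('chr1', 1100), ('chr2', 600)]
```

A real pipeline would use an interval tree or sorted intervals for speed, but the logic (filter variants by conservation) is the same.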

There's nothing in the paper about junk DNA and nothing about the overall organization of the human genome. Indeed, the tone of the paper is exactly what you would expect from a group of scientists who know that parts of noncoding DNA are involved in gene regulation.

But here's what the press release from the Wellcome Trust Sanger Institute says [New technique identifies novel class of cancer's drivers].
Sieving through 'junk' DNA reveals disease-causing genetic mutations

Researchers can now identify DNA regions within non-coding DNA, the major part of the genome that is not translated into a protein, where mutations can cause diseases such as cancer.

Their approach reveals many potential genetic variants within non-coding DNA that drive the development of a variety of different cancers. This approach has great potential to find other disease-causing variants.

Unlike the coding region of the genome where our 23,000 protein-coding genes lie, the non-coding region - which makes up 98% of our genome - is poorly understood. Recent studies have emphasised the biological value of the non-coding regions, previously considered 'junk' DNA, in the regulation of proteins. This new information provides a starting point for researchers to sieve through the non-coding regions and identify the most functionally important regions.
Here are a few facts that you need to keep in mind.
  1. In spite of what the press release says, we understand a great deal about the 98% of our genome that doesn't encode protein.
  2. We've known about regulatory regions for half a century. It's simply not correct to imply that our knowledge is "recent."
  3. No knowledgeable scientist ever said that all non-coding regions were junk in spite of what the press release says.
  4. The paper does not provide a "starting point" to identify functionally important regions. We already have a pretty good idea about which parts of the noncoding genome are functional and which parts aren't.
How are we going to put a stop to this kind of misleading press release? Should we blame the authors of the paper and hold them accountable for any misrepresentations of their data by the universities, institutes, and companies that employ them?

Note: It took the IDiots less than 24 hours to exploit the stupidity of the Wellcome Trust Sanger Institute. See: Helpful for non-Darwinists: Uses of junk DNA.


Khurana, E., Fu, Y., Colonna, V., Mu, X.J. et al. (2013) Integrative annotation of variants from 1,092 humans: application to cancer genomics. Science 2013 [doi: 10.1126/science.1235587]

Thursday, October 03, 2013

Science Doesn't Have All the Answers but Does It Have All the Questions?

Jerry Coyne has been following the debate between Steven Pinker and Leon Wieseltier on the topic of scientism [see The final round: Pinker vs. Wieseltier on scientism]. Jerry seems to agree with both Pinker and Wieseltier that there are "two magisteria" (science and humanities) ...
[Wieseltier] calls for a “two magisteria” solution, with science and humanities kept separate, but with “porous boundaries.” But that is exactly what Pinker called for, too! Wieseltier claims that Pinker and other advocates of scientism advocate “totalistic aspirations,” i.e., the complete takeover of humanities by the sciences (“unified field theories,” Wieseltier calls them), but Pinker explicitly said that he wasn’t calling for that.

...

As you can see above, Steve never argued that science is, or should be, supreme in all the contexts. Indeed, in his earlier piece he noted that art and literature, while they might be informed in some ways by science, nevertheless have benefits independent of science. To me, those benefits include affirming our common humanity, being moved by the plight of others, even if fictional, and luxuriating in the sheer beauty of music, words, or painting. (Note, though, that one day science might at least explain why we apprehend that beauty.)
I'm not sure how Pinker, Wieseltier, and Coyne are defining science but it's clear that they aren't using the same definition I use.

I think that science is a way of knowing based on evidence and logic and healthy skepticism. I think that all disciplines seeking knowledge use the scientific approach. This is the broad definition of science used by many philosophers and scientists.

Maarten Boudry discusses, and accepts, this definition in his chapter on "Loki's Wager and Laudan's Error" in Philosophy of Pseudoscience: Reconsidering the Demarcation Problem. Boudry says that the distinction between the ways of knowing used by biologists, philosophers, and historians is meaningless and that there's no easy way to distinguish them (territorial demarcation). On the other hand, there is a way to distinguish between good scientific reasoning and bad scientific reasoning like Holocaust denial.
I have expressed little confidence in the viability of the territorial demarcation problem, and even less interest in solving it. Not only is there no clear-cut way to disentangle epistemic domains like science and philosophy, but such a distinction carries little epistemic weight. The demarcation problem that deserves our attention is the one between science and pseudoscience (and the analogous ones between philosophy and pseudophilosophy and between history and pseudohistory).
Sven Ove Hansson is more specific because he actually defines "science in a broad sense" in a way that I have been using it for several decades. This is from his chapter on "Defining Pseudoscience and Science" in Philosophy of Pseudoscience: Reconsidering the Demarcation Problem.
Unfortunately neither "science" nor any other established term in the English language covers all the disciplines that are parts of this community of knowledge disciplines. For lack of a better term, I will call them "science(s) in the broad sense." (The German word "Wissenschaft," the closest translation of "science" into that language, has this wider meaning; that is, it includes all the academic specialties, including the humanities. So does the Latin "scientia.") Science in a broad sense seeks knowledge about nature (natural science), about ourselves (psychology and medicine), about our societies (social science and history), about our physical constructions (technological science), and about our thought construction (linguistics, literary studies, mathematics, and philosophy). (Philosophy, of course, is a science in this broad sense of the word.)
If this is what we mean by "science" then there's no difference between the ways we try to acquire knowledge in the humanities or the natural sciences, and the debate between Pinker and Wieseltier takes on an entirely different meaning.

There aren't "two magisteria" but only one. Unless, of course, someone is willing to propose a successful non-scientific way of knowing. I have asked repeatedly for examples of knowledge ("truth") that have been successfully acquired by any other way of knowing. So far, nobody has come up with an answer so we can tentatively conclude that science (in the broad sense) is the only valid way of acquiring true knowledge.

Clearly we don't have all the answers to everything so it's clear that neither science nor anything else has all the answers. What about the questions? Are there any knowledge questions that science (in the broad sense) can't address? I don't think there are. I think "science" covers all the questions even though it doesn't (yet) have all the answers.

If this is "scientism" then I'm guilty. What is the alternative? Is it revelation (revealed truth)? Or is there some other way of knowing that I haven't heard about?


What Do You Do When All the Reviews Are Bad?

Charles Marshall has reviewed Darwin's Doubt in last week's issue of Science. The title says it all: When Prior Belief Trumps Scholarship.

Here's a sample of what a bad review looks like.
... when it comes to explaining the Cambrian explosion, Darwin's Doubt is compromised by Meyer's lack of scientific knowledge, his "god of the gaps" approach, and selective scholarship that appears driven by his deep belief in an explicit role of an intelligent designer in the history of life.
Ouch!

So far the Intelligent Design Creationists have a perfect record. Every single review of Darwin's Doubt by a scientist has been negative. None of them like the book.

What do you do under those circumstances? Remember that the minions of the Discovery Institute aggressively hyped this book in the spring before it was published. It was supposed to be the book that destroyed Darwinism.1

Not to worry. The IDiots have an excuse ... in fact they have several.
  1. Ignore the main criticism and focus on details. This is what Stephen Meyer is doing in his response to Charles Marshall's review: When Theory Trumps Observation: Responding to Charles Marshall's Review of Darwin's Doubt.
  2. Most reviewers ignore the main arguments. This is the defense offered by David Klinghoffer, that well-known defender of Intelligent Design Creationism, and a non-scientist: A Taxonomy of Evasion: Reviewing the Reviewers of Darwin's Doubt.
  3. At least we got their attention. This is what makes David Klinghoffer proud, "Marshall's review stands out. It's important. Not only because Marshall is a distinguished paleontologist writing in one of the world's two most important science journals ..." [Stephen Meyer Answers Charles Marshall on Darwin's Doubt]. Casey Luskin uses the same excuse when he writes [Teamwork: New York Times and Science Magazine Seek to Rebut Darwin's Doubt]:
    It's now evident that, their previous denials notwithstanding, Darwin defenders have been unnerved by Darwin's Doubt. On the same day last week, both the world's top newspaper (the New York Times) and one of the world's top scientific journals (Science) turned their attention to the problem posed by Stephen Meyer.
  4. Publicize reviews by non-scientists. That's what Denyse O'Leary does in Astonishing innovation: Bethell’s review of Darwin’s Doubt defies tradition, tells you what is in the book. David Klinghoffer does it too: The American Spectator Warmly Welcomes Darwin's Doubt.
That's what you do if all the reviews are bad and you are an IDiot.


1. There were half-a-dozen earlier books that were also supposed to have destroyed Darwinism.

Stephen Meyer Says that "Homology" Is a Problem in Molecular Evolution

Stephen Meyer argues (in Darwin's Doubt) that the Cambrian explosion cannot be explained by evolution but it can be explained by Intelligent Design Creationism.

His main thesis is that all the animals appeared suddenly in the Cambrian and there's no evidence that they arose from ancestors living earlier in the Precambrian. Unfortunately for him, there IS plenty of evidence in the form of molecular evolution. By comparing genes and proteins we can show that all the animal groups are related to one another and that their common ancestors are spread out over a considerable period of time as shown in the phylogenetic tree below from a paper by Dunn et al. (2008).

This evidence is a serious problem for Meyer so he has to deal with it in his book. He tries to discredit the entire field of molecular evolution by challenging the basic assumptions [Stephen Meyer Says That Constant Mutation Rates Are a "Questionable Assumption"], by setting up a strawman [Stephen Meyer Says Molecular Data Must Be Wrong Because Different Genes Evolve at Different Rates], and by pointing out that molecular dating is not precise [Stephen Meyer Says Molecular Evidence Must Be Wrong Because Scientists Disagree About the Exact Dates]. His most ridiculous argument1 against molecular evolution is that the results must be wrong because there are no transitional fossils from before the Cambrian Explosion! [The Cambrian Conundrum: Stephen Meyer Says (Lack of) Fossils Trumps Genes]

None of those arguments stand up to close scrutiny but, as I warned you last week, there are actually five arguments against the validity of molecular evolution [Darwin's Doubt: The Genes Tell the Story?].

Are you ready for the final argument showing that molecular evidence must be discounted?

Tuesday, October 01, 2013

The Many Definitions of Evolution

I have a favorite definition of evolution [What Is Evolution?]. It's a definition based on population genetics and it helps us to decide on what counts as evolution and what doesn't. It's a minimal definition. There's more to evolution than that but you have to establish a boundary.

When you start discussing evolution you have to begin by establishing your definitions and declaring what version of evolutionary theory you support. This is especially important if you are debating extensions of evolutionary theory and it's even more important if you are debating a creationist. Creationists need to understand that evolution is both a Fact and a Theory, for example. If they don't understand that then they don't understand anything about evolution.

Creationists aren't the only problem. Even non-creationists get confused about evolution. Most don't know the difference between evolution and natural selection and that makes it difficult to talk about molecular evolution and a host of other topics. It's hard to explain junk DNA to an adaptationist and it's hard to show you why Michael Behe is wrong in The Edge of Evolution if you've never heard of Neutral Theory and random genetic drift.
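The population-genetics definition mentioned above treats evolution as change in allele frequencies in a population over generations, which is why random genetic drift counts as evolution even without natural selection. Here's a minimal sketch of drift under the standard Wright-Fisher model; the parameters are made up for illustration and nothing here comes from the post itself:

```python
import random

def wright_fisher_drift(p0=0.5, pop_size=100, generations=50, seed=42):
    """Simulate allele frequency change by random genetic drift alone.

    Wright-Fisher model: each generation's 2N gene copies are drawn at
    random from the previous generation's allele frequency. No selection.
    """
    random.seed(seed)
    p = p0
    freqs = [p]
    for _ in range(generations):
        copies = sum(random.random() < p for _ in range(2 * pop_size))
        p = copies / (2 * pop_size)
        freqs.append(p)
    return freqs

freqs = wright_fisher_drift()
# The allele frequency wanders away from 0.5 with no selection at all.
# By the population-genetics definition, this population is evolving.
print(freqs[0], freqs[-1])
```

Run it with a small `pop_size` and the frequency drifts faster, often all the way to fixation or loss, which is the intuition behind Neutral Theory: in finite populations, chance alone changes allele frequencies.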

Tomorrow in class we're going to talk about defining evolution and we're going to talk about the seriousness of misconceptions. If you have misconceptions about evolution then you can't really have a serious debate with a creationist—or with a fellow evolutionist.

I thought I'd point my students to a few of the definitions on the web. It's shocking to see how many different definitions of evolution there are and it's shocking to see how often websites ignore any mechanisms other than natural selection. You will be surprised at how many supposedly reputable sources get it wrong. It's no wonder everyone is confused.

Which ones do you like?
  1. What is Evolution?
  2. What Is Evolution?
  3. Another curious aspect of the theory of evolution is that everybody thinks he understands it. I mean philosophers, social scientists, and so on. While in fact very few people understand it, actually, as it stands, even as it stood when Darwin expressed it, and even less as we now may be able to understand it in biology. Jacques Monod (1974)
  4. What is evolution? Darwin's brilliant idea
  5. WHAT IS EVOLUTION?
  6. What Is Evolution Anyway?
  7. Get Answers: Evolution
  8. Understanding Science & Evolution
  9. The Teaching of Evolution
  10. What is evolution?
  11. What is Evolution?
  12. What Evolution Is and What It Isn’t
  13. An introduction to evolution
  14. What is Evolution?
  15. Evolution Is Change in the Inherited Traits of a Population through Successive Generations
  16. What is evolution?
  17. What is Evolution?
  18. Introduction: Evolution
  19. What is evolution?
  20. WHAT IS EVOLUTION?
  21. What Is Evolution?


Biologist Sean Carroll in Toronto

Biologist Sean Carroll of evo-devo fame was in Toronto last Friday as part of the 2013 Science Festival. I had a chance to talk to him in the afternoon at an event organized by the Centre for Inquiry. There were about a dozen people there.

Sean is the Vice President for Science Education at the Howard Hughes Medical Institute in Washington DC (USA). We talked a lot about science education. One of the issues he brought up was The Clergy Letter Project. According to Sean, this is an important example of co-operation between scientists and religious leaders to support the teaching of evolution in American public schools.

I understand the politics but I find it ironic that scientists seek out religious leaders to support the teaching of evolution in the public schools. These same scientists are the first ones to trot out the US Constitution whenever Christian fundamentalists try to teach creationism. You can't have it both ways. Either religious leaders have no say in what is taught in science classes or they do.

Sean Carroll gave a talk in the evening to an audience of about one thousand. His subject was Jacques Monod and the role of chance. Coincidentally, that's also the subject of his latest book, Brave Genius: A Scientist, a Philosopher, and Their Daring Adventures from the French Resistance to the Nobel Prize.

Sean emphasized the role of chance and accident in evolution, picking up on Monod's seminal work Chance and Necessity. From studying biology you reach the inescapable conclusion that life is the product of blind chance. It has no meaning or purpose. Monod was not the first person to recognize this but he's the one who made the best case back in 1971.

After his talk I reminded Sean that this conclusion puts science in conflict with almost all religious beliefs making it difficult for clergy to support the correct teaching of science and religion in the public schools.

"Hey, what can you do!" he said, just as someone took our picture.1

I have a signed copy of his book.


1. Not really. We were actually discussing something else.

Monday, September 30, 2013

The Problems With The Selfish Gene

Lots of people fail to understand that the "selfish gene" is a metaphor. They criticize Richard Dawkins for promoting the idea that genes can actually take on the characteristics of selfishness.

Andrew Brown and Mary Midgley are prominent examples of people with this kind of misunderstanding and Jerry Coyne has set them straight in Poor Richard’s Almanac: Andrew Brown and the Pope go after The Selfish Gene and “Selection pressures” are metaphors. So are the “laws of physics.”

However, there are two other problems with the metaphor. The first is rather trivial: it refers to the fact that it's actually alleles, or variants, of a gene that are "selfish." Dawkins knows this. He explains it in his book but I don't think he puts enough emphasis on the concept, and in most parts of the book he uses "gene" when he should be saying "allele." I grant that The Selfish Allele is not a catchy title.

Monday's Molecule #217

Last week's molecule was the 5′ cap structure on eukaryotic mRNA. Lots of people got it right. The winners were Mark Sturtevant and Jacob Toth [Monday's Molecule #216].

As you know, the general public is very gullible. Millions of people have been duped into taking various supplements on the grounds that these supplements will improve their health and/or correct for a deficiency in their diet. These people will freely donate millions of dollars to the quacks who prey on their stupidity. Today's molecule is one of these supplements. Give the common name and the specific name that identifies this particular variant.

Email your answers to me at: Monday's Molecule #217. I'll hold off posting your answers for at least 24 hours. The first one with the correct answer wins. I will only post the names of people with mostly correct answers to avoid embarrassment. The winner will be treated to a free lunch.

There could be two winners. If the first correct answer isn't from an undergraduate student then I'll select a second winner from those undergraduates who post the correct answer. You will need to identify yourself as an undergraduate in order to win. (Put "undergraduate" at the bottom of your email message.)

Friday, September 27, 2013

Dark Matter Is Real, Not Just Noise or Junk

UPDATE: The title is facetious. I don't believe for one second that most so-called "dark matter" has a function. In fact, there's no such thing as "dark matter." Most of our genome is junk. I mention this because one of the well-known junk DNA kooks is severely irony-impaired and thought that I had changed my mind.
A few hours ago I asked you to evaluate the conclusion of a paper by Venters and Pugh (2013) [Transcription Initiation Sites: Do You Think This Is Reasonable?].

Now I want you to look at the Press Release and tell me what you think [see Scientists Discover the Origins of Genomic "Dark Matter"].

It seems pretty clear to me that Pugh (and probably Venters) actually think they are on to something. Here's part of the press release quoting Franklin "Frank" Pugh, a Professor in the Department of Molecular Biology at Penn State.
The remaining 150,000 initiation machines -- those Pugh and Venters did not find right at genes -- remained somewhat mysterious. "These initiation machines that were not associated with genes were clearly active since they were making RNA and aligned with fragments of RNA discovered by other scientists," Pugh said. "In the early days, these fragments of RNA were generally dismissed as irrelevant since they did not code for proteins." Pugh added that it was easy to dismiss these fragments because they lacked a feature called polyadenylation -- a long string of genetic material, adenosine bases -- that protect the RNA from being destroyed. Pugh and Venters further validated their surprising findings by determining that these non-coding initiation machines recognized the same DNA sequences as the ones at coding genes, indicating that they have a specific origin and that their production is regulated, just like it is at coding genes.

"These non-coding RNAs have been called the 'dark matter' of the genome because, just like the dark matter of the universe, they are massive in terms of coverage -- making up over 95 percent of the human genome. However, they are difficult to detect and no one knows exactly what they all are doing or why they are there," Pugh said. "Now at least we know that they are real, and not just 'noise' or 'junk.' Of course, the next step is to answer the question, 'what, in fact, do they do?'"

Pugh added that the implications of this research could represent one step towards solving the problem of "missing heritability" -- a concept that describes how most traits, including many diseases, cannot be accounted for by individual genes and seem to have their origins in regions of the genome that do not code for proteins. "It is difficult to pin down the source of a disease when the mutation maps to a region of the genome with no known function," Pugh said. "However, if such regions produce RNA then we are one step closer to understanding that disease."
I'm puzzled by such statements. It's been one year since the ENCODE publicity fiasco and there have been all kinds of blogs and published papers pointing out the importance of junk DNA and the distinct possibility that most pervasive transcription is, in fact, noise.

It's possible that Pugh and his postdoc are not aware of the controversy. That would be shocking. It's also possible that they are aware of the controversy but decided to ignore it and not reference any of the papers that discuss alternate explanations of their data. That would be even more shocking (and unethical).

Are there any other possibilities that you can think of?

And while we're at it. What excuse can you imagine that lets the editors of Nature off the hook?

P.S. The IDiots at Evolution News & Views (sic) just love this stuff: As We Keep Saying, There's Treasure in "Junk DNA".


Venters, B.J. and Pugh, B.F. (2013) Genomic organization of human transcription initiation complexes. Nature Published online 18 September 2013 [doi: 10.1038/nature12535] [PubMed] [Nature]

The Extraordinary Human Epigenome

We learned a lot about genes and gene expression in the second half of the 20th century. We learned that genes are transcribed and we have a pretty good understanding of how transcription initiation complexes are formed and how transcription works.

We learned how transcription is regulated through promoter strength, activators, and repressors. Activators and repressors bind to DNA and those binding sites can lie at some distance from the promoter leading to formation of loops of DNA that bring the regulatory proteins into contact with the transcription complex. Much of our basic understanding of this process was derived from detailed studies of bacteriophage and bacterial genes.
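The logic of that kind of regulation can be caricatured with a simple equilibrium-occupancy calculation: the rate of transcription depends on how often the activator and repressor sites are bound. This is a toy textbook-style model with invented parameters, not anything from the post:

```python
def occupancy(conc, kd):
    """Fraction of time a binding site is occupied by a factor present at
    concentration conc, with dissociation constant kd (simple equilibrium)."""
    return conc / (conc + kd)

def transcription_rate(basal, act_conc, act_kd, rep_conc, rep_kd, act_fold=10.0):
    """Toy model: a bound activator multiplies the basal rate up to act_fold;
    a bound repressor shuts the promoter off in proportion to its occupancy."""
    p_act = occupancy(act_conc, act_kd)
    p_rep = occupancy(rep_conc, rep_kd)
    return basal * (1 + (act_fold - 1) * p_act) * (1 - p_rep)

# Abundant activator, no repressor: the rate approaches 10x the basal rate.
print(transcription_rate(1.0, act_conc=100.0, act_kd=1.0, rep_conc=0.0, rep_kd=1.0))
```

Promoter strength corresponds to `basal`, and the distance-independent action of enhancers (DNA looping) is why the model only needs occupancies, not positions.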

THEME:
Transcription

Later on we learned that eukaryotic gene expression was very similar and that regulation also required repressors and activators. We discovered that gene expression was associated with chromatin remodeling that opened up regions of the chromosome that were tightly bound to histones in 30 nm or higher-order structures.

Building on studies in prokaryotes, we learned about temporal gene regulation and differentiation. Much of the work was done in model organisms like Drosophila, yeast, C. elegans, and various mammalian cells in culture.

By the end of the century I was pretty confident that what I wrote in my textbook was a fair representation of the fundamental concepts in gene expression and regulation.

Turns out I was wrong, as I discovered this morning when I read the opening paragraph of a review by Rivera and Ren (2013). Here's what they say ...
More than a decade has passed since the human genome was completely sequenced, but how genomic information directs spatial- and temporal-specific gene expression programs remains to be elucidated (Lander, 2011). The answer to this question is not only essential for understanding the mechanisms of human development, but also key to studying the phenotypic variations among human populations and the etiology of many human diseases. However, a major challenge remains: each of the more than 200 different cell types in the human body contains an identical copy of the genome but expresses a distinct set of genes. How does a genome guide a limited set of genes to be expressed at different levels in distinct cell types?
Wow! The textbooks need to be rewritten! We didn't learn anything in the last century!

It took me the whole first paragraph of this paper to realize that the rest of it was probably going to be worthless unless you were interested in technical details about the field. That's because I'm not as smart as Dan Graur. He only read the title, "Mapping Human Epigenomes" and the abstract before concluding that the authors were speaking in newspeak1 [A “Leading Edge Review” Reminds Me of Orwell (and #ENCODE)].

The Rivera and Ren paper is a "Leading Edge" review in the prestigious journal Cell. It covers all the techniques used to study methylation, histone modification and binding, transcription factor binding, and nucleosome positioning at the genome level. According to the authors, people like me were fooled by studies on individual genes, purified factors, and in vitro binding assays. That didn't really tell us what was going on.

Apparently, the most effective way of learning about the regulation of gene expression in humans is to analyze the entire genome all at once and read off the data from microarrays and computer monitors. (After shoving it through a bunch of code.)
Overwhelming evidence now indicates that the epigenome serves to instruct the unique gene expression program in each cell type together with its genome. The word "epigenetics," coined half a century ago by combining "epigenesis" and "genetics," describes the mechanisms of cell fate commitment and lineage specification during animal development (Holliday, 1990; Waddington, 1959). Today, the "epigenome" is generally used to describe the global, comprehensive view of sequence-independent processes that modulate gene expression patterns in a cell and has been liberally applied in reference to the collection of DNA methylation state and covalent modification of histone proteins along the genome (Bernstein et al., 2007; Bonasio et al., 2010). The epigenome can differ from cell type to cell type, and in each cell it regulates gene expression in a number of ways—by organizing the nuclear architecture of the chromosomes, restricting or facilitating transcription factor access to DNA, and preserving a memory of past transcriptional activities. Thus, the epigenome represents a second dimension of the genomic sequence and is pivotal for maintaining cell-type-specific gene expression patterns.

Not long ago, there were many points of trepidation about the value and utility of mapping epigenomes in human cells (Madhani et al., 2008). At the time, it was suggested that histone modifications simply reflect activities of transcription factors (TFs), so cataloging their patterns would offer little new information. However, some investigators believed in the value of epigenome maps and advocated for concerted efforts to produce such resources (Feinberg, 2007; Henikoff et al., 2008; Jones and Martienssen, 2005). The last five years have shown that epigenome maps can greatly facilitate the identification of potential functional sequences and thereby annotation of the human genome. Now, we appreciate the utility of epigenomic maps in the delineation of thousands of lincRNA genes and hundreds of thousands of cis-regulatory elements (ENCODE Project Consortium et al., 2012; Ernst et al., 2011; Guttman et al., 2009; Heintzman et al., 2009; Xie et al., 2013b; Zhu et al., 2013), all of which were obtained without prior knowledge of cell-type-specific master transcriptional regulators. Interestingly, bioinformatic analysis of tissue-specific cis-regulatory elements has actually uncovered novel TFs regulating specific cellular states.
So, what are all these new discoveries that now elucidate what was previously unknown; namely, "how genomic information directs spatial- and temporal-specific gene expression programs"?

This is a very long review full of technical details so let's skip right to the conclusions.
Six decades ago, Watson and Crick put forward a model of DNA double helix structure to elucidate how genetic information is faithfully copied and propagated during cell division (Watson and Crick, 1953). Several years later, Crick famously proposed the "central dogma" to describe how information in the DNA sequence is relayed to other biomolecules such as RNA and proteins to sustain a cell's biological activities (Crick, 1970). Now, with the human genome completely mapped, we face the daunting task to decipher the information contained in this genetic blueprint. Twelve years ago, when the human genome was first sequenced, only 1.5% of the genome could be annotated as protein coding, whereas the rest of the genome was thought to be mostly "junk" (Lander et al., 2001; Venter et al., 2001). Now, with the help of many epigenome maps, nearly half of the genome is predicted to carry specific biochemical activities and potential regulatory functions (ENCODE Project Consortium, et al., 2012). It is conceivable that in the near future the human genome will be completely annotated, with the catalog of transcription units and their transcriptional regulatory sequences fully mapped.
I hope they hurry up. Not only do I have to re-write my description of the Central Dogma2 but I'm going to have to re-write everything I thought I knew about regulation of gene expression and the organization of information in the human genome. That's going to take time so I hope the epigeneticists will publish lots more whole genome studies in the near future so I can understand the new model of gene expression.

Keep in mind that this paper was published in Cell where it was rigorously reviewed by the leading experts in the field. It must be right.


[Image Credit: Moran, L.A., Horton, H.R., Scrimgeour, K.G., and Perry, M.D. (2012) Principles of Biochemistry 5th ed., Pearson Education Inc. page 647 [Pearson: Principles of Biochemistry 5/E] © 2012 Pearson Education Inc.]

1. Newspeak was first described in 1984, proving, once again, that George Orwell (Eric Arthur Blair) was a really smart and prescient guy. For another example see: What Is "Science" According to George Orwell?.

2. Apparently I didn't read the Crick (1970) paper as carefully as they did.

Rivera, C.M. and Ren, B. (2013) Mapping Human Epigenomes. Cell 155:39-55 [doi: 10.1016/j.cell.2013.09.011]

Transcription Initiation Sites: Do You Think This Is Reasonable?

I'm interested in how scientists read the scientific literature and in how they distinguish good science from bad science. I know that when I read a paper I usually make a pretty quick judgement based on my knowledge of the field and my model of how things work. In other words, I look at the conclusions first to see whether they conflict with or agree with my model.

Many of my colleagues do it differently. They focus on the actual experiments and reach a conclusion based on how they perceive the data. If the experiments look good and the data seem reliable, then they tentatively accept the conclusions even if they conflict with the model they have in their mind. They are much more likely to revamp their model than I am.

I'm about to give you the conclusions from a recently published paper in Nature. I'd like to hear from all graduate students, postdocs, and scientists on how you react to those conclusions. Do you think the conclusions are reasonable (as long as the experiments are valid) or do you think that the conclusions are unreasonable, indicating that there has to be something wrong somewhere?

The paper is Venters and Pugh (2013). Its title is "Genomic organization of human transcription initiation complexes." You don't need to read the paper unless you want to get into a more detailed debate. All I want to hear about is your initial reaction to their final two paragraphs.
Consolidated genomic view of initiation

...The discovery that transcription of the human genome is vastly more pervasive than what produces coding mRNA raises the question as to whether Pol II initiates transcription promiscuously through random collisions with chromatin as biological noise or whether it arises specifically from canonical Pol II initiation complexes in a regulated manner. Our discovery of ~150,000 non-coding promoter initiation complexes in human K562 cells and more in other cell lines suggests that pervasive non-coding transcription is promoter-specific, regulated, and not much different from coding transcription, except that it remains nuclear and non-polyadenylated. An important next question is the extent to which transcription factors regulate production of ncRNA.

We detected promoter transcription initiation complexes at 25% of all ~24,000 human coding genes, and found that there were 18-fold more non-coding complexes than coding. We therefore estimate that the human genome potentially contains as many as 500,000 promoter initiation complexes, corresponding to an average of about one every 3 kilobases (kb) in the non-repetitive portion of the human genome. This number may vary more or less depending on what constitutes a meaningful transcription initiation event. The finding that these initiation complexes are largely limited to locations having well-defined core promoters and measured TSSs indicates that they are functional and specific, but it remains to be determined to what end. Their massive numbers would seem to provide an origin for the so-called dark matter RNA of the genome, and could house a substantial portion of the missing heritability.
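The extrapolation in that last paragraph can be sanity-checked with some back-of-the-envelope arithmetic. Here's my own sketch (the ~1.5 Gb figure for the non-repetitive genome is my assumption for the density check, not a number stated in the excerpt):

```python
# Back-of-the-envelope check of the extrapolation in the quoted paragraph.
# Inputs from the excerpt: ~24,000 coding genes, promoter initiation
# complexes detected at 25% of them, and 18-fold more non-coding
# complexes than coding ones.
coding_genes = 24_000
detected_coding = int(coding_genes * 0.25)   # ~6,000 coding complexes detected
noncoding_fold = 18

# If every coding gene eventually yields an initiation complex, plus
# 18-fold more non-coding ones, the total lands near the authors'
# "as many as 500,000" estimate.
projected_total = coding_genes + coding_genes * noncoding_fold
print(projected_total)  # → 456000

# Density check: assuming ~1.5 Gb of non-repetitive genome (my rough
# figure), ~500,000 complexes works out to about one every 3 kb, as claimed.
nonrepetitive_bp = 1_500_000_000
print(nonrepetitive_bp // 500_000)  # → 3000
```

So the internal arithmetic is at least self-consistent; whether those ~450,000 non-coding initiation events are functional is, of course, the real question.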
Looking forward to hearing from you.

Keep in mind that this is a Nature paper that has been rigorously reviewed by leading experts in the field. Does that influence your opinion?


Venters, B.J. and Pugh, B.F. (2013) Genomic organization of human transcription initiation complexes. Nature Published online 18 September 2013 [doi: 10.1038/nature12535] [PubMed] [Nature]