Friday, July 24, 2015

John Parrington and the C-value paradox

We are discussing John Parrington's book The Deeper Genome: Why there is more to the human genome than meets the eye. This is the second of five posts on Five Things You Should Know if You Want to Participate in the Junk DNA Debate:

1. Genetic load
John Parrington and the genetic load argument
2. C-value paradox (this post)
John Parrington and the C-value paradox
3. Modern evolutionary theory
John Parrington and modern evolutionary theory
4. Pseudogenes and broken genes are junk
John Parrington discusses pseudogenes and broken genes
5. Most of the genome is not conserved
John Parrington discusses genome sequence conservation


2. C-value paradox

Parrington addresses this issue on page 63 by describing experiments from the late 1960s showing that there was a great deal of noncoding DNA in our genome and that only a few percent of the genome was devoted to encoding proteins. He also notes that the differences in genome sizes of similar species gave rise to the possibility that most of our genome was junk. A few pages later (page 69) he reports that scientists were surprised to find only 30,000 protein-coding genes when the sequence of the human genome was published—"... the other big surprise was how little of our genomes are devoted to protein-coding sequence."

Contradictory stuff like that makes it very hard to follow his argument. On the one hand, he recognizes that scientists have known for 50 years that only 2% of our genome encodes proteins but, on the other hand, they were "surprised" to find this confirmed when the human genome sequence was published.
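The 2% figure is easy to sanity-check with simple arithmetic. Here is a minimal back-of-envelope sketch in Python, using round textbook estimates for the gene count and average coding length (illustrative numbers, not figures from Parrington's book):

```python
# Back-of-envelope check of the "~2% protein-coding" claim.
# All numbers are round textbook estimates, used here only for illustration.
genome_size = 3_200_000_000  # haploid human genome, ~3.2 billion bp
num_genes = 20_000           # approximate number of protein-coding genes
avg_cds_length = 1_350       # average coding sequence per gene, ~1.35 kb

coding_fraction = num_genes * avg_cds_length / genome_size
print(f"protein-coding fraction = {coding_fraction:.1%}")  # ~0.8%
```

Even if you double the average coding length, the answer stays below 2%, which is why the estimate has barely moved since the late 1960s.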

He spends a great deal of Chapter 4 explaining the existence of introns and claims that "over 90 per cent of our genes are alternatively spliced" (page 66). This seems to be offered as an explanation for all the excess noncoding DNA but he isn't explicit.

In spite of the fact that genome comparisons are a very important part of this debate, Parrington doesn't return to this point until Chapter 10 ("Code, Non-code, Garbage, and Junk").

We know that the C-value paradox isn't really a paradox because most of the excess DNA in various genomes is junk. There isn't any other explanation that makes sense of the data. I don't think Parrington appreciates the significance of this explanation.

The examples quoted in Chapter 10 are the lungfish, with a huge genome, and the pufferfish (Fugu), with a genome much smaller than ours. This requires an explanation if you are going to argue that most of the human genome is functional. Here's Parrington's explanation ...
Yet, despite having a genome only one eighth the size of ours, Fugu possesses a similar number of genes. This disparity raises questions about the wisdom of assigning functionality to the vast majority of the human genome, since, by the same token, this could imply that lungfish are far more complex than us from a genomic perspective, while the smaller amount of non-protein-coding DNA in the Fugu genome suggests the loss of such DNA is perfectly compatible with life in a multicellular organism.

Not everyone is convinced about the value of these examples though, John Mattick, for instance, believes that organisms with a much greater amount of DNA than humans can be dismissed as exceptions because they are 'polyploid', that is, their cells have far more than the normal two copies of each gene, or their genomes contain an unusually high proportion of inactive transposons.
In other words, organisms with larger genomes seem to be perfectly happy carrying around a lot of junk DNA! What kind of an argument is that?
Mattick is also not convinced that Fugu provides a good example of a complex organism with no non-coding DNA. Instead, he points out that 89% of this pufferfish's DNA is still non-protein-coding, so the often-made claim that this is an example of a multicellular organism without such DNA is misleading.
[Mattick has been] a true visionary in his field; he has demonstrated an extraordinary degree of perseverance and ingenuity in gradually proving his hypothesis over the course of 18 years.

Hugo Award Committee
Seriously? That's the best argument he has? He and Mattick misrepresent what scientists say about the pufferfish genome—nobody claims that the entire genome encodes proteins—then they ignore the main point; namely, why do humans need so much more DNA? Is it because we are polyploid?
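To see what the Fugu comparison actually shows, here is a hedged sketch comparing coding and non-coding DNA in the two genomes. The genome sizes are approximate published figures and the coding fractions are illustrative: the 11% for Fugu just restates Mattick's "89% non-coding" (none of these numbers come from Parrington's book):

```python
# Approximate comparison of coding vs. non-coding DNA content.
# Sizes and fractions are illustrative round figures, not exact values.
genomes = {
    # name: (total size in Mb, approximate protein-coding fraction)
    "human": (3_200, 0.015),  # ~1.5% protein-coding
    "Fugu":  (390, 0.11),     # "89% non-coding" implies ~11% coding
}

for name, (size_mb, coding_frac) in genomes.items():
    coding_mb = size_mb * coding_frac
    print(f"{name}: {size_mb} Mb total, ~{coding_mb:.0f} Mb coding, "
          f"~{size_mb - coding_mb:.0f} Mb non-coding")
```

Both genomes carry roughly the same amount of coding DNA (about 40-50 Mb); what differs by almost an order of magnitude is everything else. That non-coding difference is exactly what a defender of a mostly functional human genome has to explain.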

It's safe to say that John Parrington doesn't understand the C-value argument. We already know that Mattick doesn't understand it and neither does Jonathan Wells, who also wrote a book on junk DNA [John Mattick vs. Jonathan Wells]. I suppose John Parrington prefers to quote Mattick instead of Jonathan Wells—even though they use the same arguments—because Mattick has received an award from the Human Genome Organization (HUGO) for his ideas and Wells hasn't [John Mattick Wins Chen Award for Distinguished Academic Achievement in Human Genetic and Genomic Research].

For further proof that Parrington has not done his homework, I note that the Onion Test [The Case for Junk DNA: The onion test] isn't mentioned anywhere in his book. When people dismiss or ignore the Onion Test, it usually means they don't understand it. (For a spectacular example of such misunderstanding, see: Why the "Onion Test" Fails as an Argument for "Junk DNA").


24 comments:

John Harshman said...

Are you implying that Mattick writes science fiction by using "Hugo Award Committee" in your little box, or was that a happy accident?

Faizal Ali said...

You know, as irritating as the IDiots and other creationists can be, these examples of misunderstandings by scientists and science writers who supposedly are not encumbered by superstitious beliefs are even more depressing. In the examples above, the errors of Mattick and Parrington are not due to ignorance, but stem from a simple inability to follow the logic of an argument. The lack of basic critical thinking skills, IOW. How is this possible in people so highly educated?

Mikkel Rumraket Rasmussen said...

I dunno why that posted twice and with errors. I'll try again:
Larry, you really should write a book about this. There are too many shitty books on junk DNA now by hyping know-nothings. Get together with some people who know their stuff (Graur, Lynch? I'm sure you know some people), who know how to accurately present all of the information, both for and against, and write one.

If you're not going to write it, who is? The molecular biology community seems to be in dire need of such a book. :)

Piotr Gąsiorowski said...

I understand thus particular HUGO stands for the HUman Genome Organisation, so it's a happy accident. There's many a truth revealed by coincidence.

Piotr Gąsiorowski said...

thus > this

Mikkel Rumraket Rasmussen said...

I agree. I'm positively flabbergasted by this. I suspect they are simply not aware of the force of these arguments: they are only superficially familiar with them and just haven't bothered thinking about them, instead following along with the hype, mindlessly thinking they are at the forefront of science.

I mean that has to be it, because the alternative is that they are either stupid or engaging in deliberate misinformation.

Piotr Gąsiorowski said...

Oops, silly me. I somehow didn't notice the OP explained it.

Larry Moran said...

I don't get it either. Why wouldn't you do a bit more research if you are going to write a book about the human genome?

Parrington runs a research lab. I wonder if he showed copies of the manuscript to his graduate students and colleagues? Didn't any of them pick up on the errors and misunderstandings?

I know who's going to be helping me with my book. It's people who disagree with my thesis and people who aren't afraid to tell me when I'm wrong.

Jmac said...

Can you leak it to me first Larry?

Anonymous said...

But, but, but -- The South African Lungfish with 3540% the DNA content of "mammals" is a diploid with 2n = 34. Not polyploid. Did he mean that it's a diploidized ancient polyploid? Or is this a mistake? Or is this the result of knowing that at least one lungfish is tetraploid (2n = 68) and one has 2n = 54, plus assuming this means that all the high-DNA-content lungfishes are polyploid? By the way, lungfish, or at least one lungfish species, have a LOT of old, mutated transposons.

(Polyploidy. One of the buttons I respond to.)

Dave Carlson said...

Okay, perhaps I don't understand the point that Parrington is trying to make, but how is acknowledging that large genomes often "contain an unusually high proportion of inactive transposons" a solid argument against the notion that much of the genome is non-functional? It seems to imply the opposite to me...

Alex SL said...

Thanks for this interesting if somewhat depressing series of posts.

Many of the arguments appear a bit confused to me, but because I am not myself a genome researcher I cannot judge beyond "given everything else I know about biology, this sounds more plausible to me than the opposite". However, even apart from the issues already pointed out in the post, the following really stands out to me because I have done cytology on polyploid plants and read a lot about the matter:

John Mattick, for instance, believes that organisms with a much greater amount of DNA than humans can be dismissed as exceptions because they are 'polyploid', that is, their cells have far more than the normal two copies of each gene

How is that supposed to work? When a polyploidy event happens, everything gets duplicated. That means that at that moment, the ratios between different genomic elements are precisely the same.

In most polyploid plants, the next step seems to be quite strong selective pressure to downsize the genome: tetraploids are often less than 2x as big as diploids, octoploids are often less than 2x as big as tetraploids, and so on. (There are exceptions.) It seems logical to assume that a lot of what gets thrown out was junk in the first place, but also, at least in the long run, everything but two functional copies of each gene, unless the plant really needs another 20k functional genes with specialised functions (an a priori extremely implausible suggestion).
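A toy illustration of that downsizing pattern (the retention factor here is invented purely for illustration; real values vary widely between lineages):

```python
# Toy model of post-polyploidy genome downsizing.
# retention is a made-up illustrative value, not a measured one.
size = 1.0        # diploid genome size, in arbitrary units
retention = 0.85  # fraction of the doubled genome kept after downsizing

for level in ("tetraploid", "octoploid", "16-ploid"):
    size *= 2 * retention
    print(f"{level}: {size:.2f}x the diploid size (naive doubling predicts more)")
```

With any retention below 1.0, each round of doubling falls further behind the naive 2x, 4x, 8x expectation, which matches the pattern seen in real polyploid series.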

Also, the logic of the argument seems to assume that the small, mean and lean genomes are the polyploid ones... but of course within each group of closely related species the big ones are polyploid.

There must be something here I don't get.

Georgi Marinov said...

Some massive genomes are polyploid, but that does not explain all of their DNA content.

Here is an example of the limited success we've had so far sequencing huge genomes

http://www.genomebiology.com/2014/15/3/R59

23.2 Gb, 82% repeats.

Note, of course, that ancient polyploidization and massive expansion of repetitive elements are not mutually exclusive; they can and do both occur.

The most interesting organisms with respect to that question are dinoflagellates: single-celled, but with massive genomes (up to 200 Gb), and they're not polyploid (they also have many, many extremely weird features, but let's just talk about genome size and content). Why? The working model, based on sequencing small pieces of DNA, has been that genes in dinoflagellates exist in tandem arrays of multiple copies of the same gene. But that's not really what they found when they partially sequenced one of the smallest genomes in the group. So lots of unknowns there.

Anyway, theory predicts that such genomes would arise if the effective population size is low. However, that is usually not the case for single-celled algae with huge absolute populations. And there have been very few studies of N_e in dinoflagellates (I am aware of one study that tried to estimate it, and the estimate was indeed very low). So if, when the 100 Gb dinoflagellate genomes are sequenced one day, they turn out to be full of apparent "junk", and it turns out that N_e is indeed very low for some weird reason, the proponents of the view that there is no junk DNA will have a lot of explaining to do: if all that complexity is what it takes to make a human, why do protists exhibit it too? That argument would apply even if it turned out that the more massive representatives of those genomes are indeed mostly tandem arrays of the same genes without a lot of TEs and intergenic DNA, because there is a lot of evidence, from multiple protists in fact, that epigenetic mechanisms, which are a big topic of the book we're discussing, are even more complex in some unicellular organisms than they are in humans.
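For readers unfamiliar with that prediction, here is the rough population-genetic intuition (a sketch of the drift-barrier argument associated with Michael Lynch; the numbers are illustrative, not dinoflagellate data):

```python
# Drift-barrier sketch: a weakly deleterious insertion behaves as
# effectively neutral when its cost s falls below ~1/(2*Ne), so
# excess DNA can accumulate in small populations. Illustrative only.
def effectively_neutral(s: float, ne: float) -> bool:
    """True if selection is too weak to purge an insertion of cost s."""
    return s < 1 / (2 * ne)

s_cost = 1e-7  # hypothetical cost of carrying one extra transposon copy
for ne in (1e4, 1e6, 1e9):
    verdict = "can accumulate" if effectively_neutral(s_cost, ne) else "is purged"
    print(f"Ne = {ne:.0e}: the insertion {verdict}")
```

So if N_e in dinoflagellates really is very low, huge junk-filled genomes are the expected outcome under standard theory, not an anomaly.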

Alex SL said...

Thanks for that extensive reply, but unfortunately my real problem is understanding Parrington's (really Mattick's) argument, not ignorance of the many repeats in large genomes.

Georgi Marinov said...

I wasn't replying to you directly. (Also, the formatting of your reply that I saw did not display the italics, so there was some confusion regarding that.)

There is no real logic behind what Mattick says -- I have yet to see him truly engage with the theoretical arguments against his position even once.

Faizal Ali said...

@Larry.

I know who's going to be helping me with my book.

Are you just speaking hypothetically, or do you really have a book in the works on this?

Faizal Ali said...

Maybe he's only concerned with the human genome? That we must be "special" in some way?

Dave Carlson said...

I suppose that's possible, but our genome has quite a lot of inactive transposons as well.

Anonymous said...

@ Larry,

I hope you do have a book in the works.

Anonymous said...

Just because DNA doesn't seem translationally or transcriptionally active does not mean it can't be active in terms of regulation, providing redundancy, providing a means of adaptive variation, having structural roles, roles in cellular differentiation, or in one case optical roles.

Sternberg points out:

"Why the elaborate repositioning of so much "junk" DNA in the rod cells of nocturnal mammals? The answer is optics. A central cluster of chromocenters surrounded by a layer of LINE-dense heterochromatin enables the nucleus to be a converging lens for photons, so that the latter can pass without hindrance to the rod outer segments that sense light. In other words, the genome regions with the highest refractive index -- undoubtedly enhanced by the proteins bound to the repetitive DNA -- are concentrated in the interior, followed by the sequences with the next highest level of refractivity, to prevent against the scattering of light. The nuclear genome is thus transformed into an optical device that is designed to assist in the capturing of photons. This chromatin-based convex (focusing) lens is so well constructed that it still works when lattices of rod cells are made to be disordered. Normal cell nuclei actually scatter light.

"So the next time someone tells you that it "strains credulity" to think that more than a few pieces of "junk DNA" could be functional in the cell -- that the data only point to the lack of design and suboptimality -- remind them of the rod cell nuclei of the humble mouse."

Given that DNA may have more than just a coding role and may even have been recruited for such unexpected roles as being a lens, the C-value paradox might be explainable by the fact that DNA might have unique applications in a variety of organisms beyond mere coding for proteins. Large or small amounts of DNA that have function may or may not contribute to the overall complexity of the organism, because the excess may not necessarily be for coding functions.

Junk in one organism doesn't imply junk in another. So ENCODE could be right about the human genome even if ferns have junk DNA...

There is the phenomenon of redundancy, whereby duplicates provide function in the way of redundancy, and so extra DNA not immediately used is not necessarily an indication that it is junk.

The central dogma does not necessarily imply all the information for life resides in the DNA, especially ontogenic information. Some candidates for other information storage mechanisms apart from DNA are the glycoprotein complexes that implement the "sugar code". Hence the information that creates the complexity of an organism might be outside the nuclear complex, and thus there is no necessity that DNA be larger for more complex organisms if most of the information to generate that complexity lies outside the nuclear complex.

Since the information processing of DNA is not restricted to the ACGTs alone but also involves methylation marks and, indirectly, histone modifications, large amounts of nuclear DNA might be the normal way an organism's sugar code and other cytoplasmic memories recruit DNA.

And as pointed out with the mouse eye, junk DNA might have function in unexpected ways in various organisms.

It is really too early to insist that the C-value paradox is solved merely by invoking junk arguments.

nmanning said...

Did you copy paste that whole piece of garbage by yourself, or did you get your mom to help?

judmarc said...

Just to take one example from that long screed, how exactly does DNA that is neither translationally nor transcriptionally active provide redundancy to DNA that is active?

In the same way a broken screwdriver provides redundancy for one that is in good working condition, I presume, i.e., not at all.

Jass said...

@liarsfordarwin,

"So the next time someone tells you that it "strains credulity" to think that more than a few pieces of "junk DNA" could be functional in the cell -- that the data only point to the lack of design and suboptimality -- remind them of the rod cell nuclei of the humble mouse."

So what you are trying to say is that if you knock out the so-called junk DNA in nocturnal mammals, they are going to be blind at night, which will decrease their fitness and make them less likely to survive. Right?

I wonder what would happen if you knocked out the so-called junk DNA from a human or a chimpanzee genome?

Faizal Ali said...

You're struggling with semantics, KevNick. If one says that no more than a few pieces of non-coding DNA serve a function, and therefore most of it is junk, it does not contradict this to demonstrate that a few pieces of non-coding DNA do serve a function.