More Recent Comments

Saturday, April 27, 2013

DNA: Nature Celebrates Ignorance

Some freelance science writer named Philip Ball has published an article in the April 25, 2013 issue of Nature: Celebrate the Unknowns.

The main premise of the article is revealed in the short blurb under the title: "On the 60th anniversary of the double helix, we should admit that we don't fully understand how evolution works at the molecular level, suggests Philip Ball."

What nonsense! We understand a great deal about how evolution works at the molecular level. Perhaps Philip Ball meant to say that we don't understand the historical details of how a particular genome evolved, but even that's misleading.

I've commented before on articles written by Philip Ball. In the past, he appeared to be in competition with Elizabeth Pennisi of Science for some kind of award for misunderstanding the human genome.

SEED and the Central Dogma of Molecular Biology - I Take Back My Praise
Shoddy But Not "Junk"?

Let's look at what the article says ...
The more complex picture now emerging raises difficult questions that this outsider knows he can barely discern. But I can tell that the usual tidy tale of how 'DNA makes RNA makes protein' is sanitized to the point of distortion. Instead of occasional, muted confessions from genomics boosters and popularizers of evolution that the story has turned out to be a little more complex, there should be a bolder admission — indeed a celebration — of the known unknowns.
That little tidy tale was discarded forty years ago by all knowledgeable scientists. We've known for at least that long that many functional DNA sequences aren't even transcribed (e.g. origins of replication, centromeres). We've known for even longer that many genes make functional RNAs instead of proteins. There was even a Nobel Prize awarded in 1989 for some of these RNAs [Sidney Altman] [Tom Cech]. We've known about regulatory sequences for more than four decades. It's in all the textbooks.

Of course there are still unknowns, but I think we have a pretty good understanding of genes and gene expression and a pretty good understanding of the composition of our genome [see What's in Your Genome?].
A student referring to textbook discussions of genetics and evolution could be forgiven for thinking that the 'central dogma' devised by Crick and others in the 1960s — in which information flows in a linear, traceable fashion from DNA sequence to messenger RNA to protein, to manifest finally as phenotype — remains the solid foundation of the genomic revolution. In fact, it is beginning to look more like a casualty of it.
For the real meaning of the Central Dogma see: The Central Dogma Dies Again! (not). The original Sequence Hypothesis (DNA --> RNA --> protein) only applies to protein-encoding genes. No molecular biologist believes that this linear pathway explains all there is to gene expression and genome function. They haven't thought that for at least 30 years—longer if they are as old as me.

Although it remains beyond serious doubt that Darwinian natural selection drives much, perhaps most, evolutionary change, it is often unclear at which phenotypic level selection operates, and particularly how it plays out at the molecular level.
Most genetic change is due to the fixation of neutral alleles by random genetic drift. We've known that since the 1960s. We have a good understanding of genome evolution due to population geneticists like Michael Lynch. Anyone who wants to understand (or write about) this issue should have read his book. Philip Ball mentions Michael Lynch below but at this point in his article he seems to have forgotten it.
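For anyone who has never watched drift do its work, here is a minimal sketch (a toy Wright-Fisher simulation in Python, offered only as an illustration, not anything from Lynch's book): a brand-new neutral allele in a diploid population of size N should reach fixation in roughly 1/(2N) of trials, with no selection anywhere in the model.

```python
import random

def neutral_fixation_rate(N, trials):
    """Fraction of trials in which a single new neutral allele
    (one copy among the 2N gene copies of a diploid population)
    drifts all the way to fixation under Wright-Fisher sampling."""
    fixed = 0
    for _ in range(trials):
        copies = 1
        while 0 < copies < 2 * N:
            # Each generation, all 2N copies are resampled from the
            # current allele frequency -- pure chance, no selection.
            freq = copies / (2 * N)
            copies = sum(random.random() < freq for _ in range(2 * N))
        if copies == 2 * N:
            fixed += 1
    return fixed / trials

N = 50
print("theory (1/2N):", 1 / (2 * N))                 # 0.01
print("simulated:    ", neutral_fixation_rate(N, 10000))
```

Most new neutral alleles are lost within a few generations; the rare ones that fix do so by luck alone, which is the point.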

You don't have to agree with Lynch's view on the nonadaptive evolution of complexity but you do have to be aware of it. It will keep you from making foolish statements in Nature.

Philip Ball next mentions the ENCODE project, pointing out that pervasive transcription ended up "challenging the old idea that much of the genome is junk." To his credit, he shows that he is aware of the controversy ...
Some geneticists and evolutionary biologists say that all this extra transcription may simply be noise, irrelevant to function and evolution. But, drawing on the fact that regulatory roles have been pinned to some of the non-coding RNA transcripts discovered in pilot projects, the ENCODE team argues that at least some of this transcription could provide a reservoir of molecules with regulatory functions — in other words, a pool of potentially 'useful' variation. ENCODE researchers even propose, to the consternation of some, that the transcript should be considered the basic unit of inheritance, with 'gene' denoting not a piece of DNA but a higher-order concept pertaining to all the transcripts that contribute to a given phenotypic trait [3].
Right. We know about regulatory RNA ... been in the textbooks since about 1980. No, we don't need to redefine a gene.
The ENCODE findings join several other discoveries in unsettling old assumptions. For example, epigenetic molecular alterations to DNA, such as the addition of a methyl group, can affect the activity of genes without altering their nucleotide sequences. Many of these regulatory chemical markers are inherited, including some that govern susceptibility to diabetes and cardiovascular disease. Genes can also be regulated by the spatial organization of the chromosomes, in turn affected by epigenetic markers. Although such effects have long been known, their prevalence may be much greater than previously thought.
Like the man says, nothing new here. And I don't think control of gene expression by chromatin remodeling is any more prevalent than we thought back in the 1970s when we assumed that almost every gene was controlled in this manner.
Another source of ambiguity in the genotype–phenotype relationship comes from the way in which many genes operate in complex networks. For example, many differently structured gene networks might result in the same trait or phenotype. Also, new phenotypes that are viable and potentially superior may be more likely to emerge through tweaks to regulatory networks than through more risky alterations to protein-coding sequences. In a sense this is still natural selection pulling out the best from a bunch of random mutations, but not at the level of the DNA sequence itself.
Is there any credible biochemist or molecular biologist who doesn't know that mutations in regulatory sequences can have big effects? I don't think so. Are there any science writers who think this is new stuff? Apparently there are. Why don't they buy a textbook?
Researchers are also still not agreed on whether natural selection is the dominant driver of genetic change at the molecular level. Evolutionary geneticist Michael Lynch of Indiana University Bloomington has shown through modelling that random genetic drift can play a major part in the evolution of genomic features, for example the scattering of non-coding sections, called introns, through protein-coding sequences. He has also shown that rather than enhancing fitness, natural selection can generate a redundant accumulation of molecular 'defences', such as systems that detect folding problems in proteins. At best, this is burdensome. At worst, it can be catastrophic.

In short, the current picture of how and where evolution operates, and how this shapes genomes, is something of a mess. That should not be a criticism, but rather a vote of confidence in the healthy, dynamic state of molecular and evolutionary biology.
Hmmm ... that's not bad, although I wish he would place more emphasis on what we do know and on the fact that many scientists, including ENCODE workers, are not familiar with modern evolutionary theory.
Barely a whisper of this vibrant debate reaches the public.
Hmmm ... isn't that interesting? Why have science writers been so negligent about informing the public? Isn't that their job?

The answer is that almost all science writers have been unaware of the "vibrant debate" and most of them are still in the dark. That explains why the public hasn't heard a whisper.
There may also be anxiety that admitting any uncertainty about the mechanisms of evolution will be exploited by those who seek to undermine it. Certainly, popular accounts of epigenetics and the ENCODE results have been much more coy about the evolutionary implications than the developmental ones. But we are grown-up enough to be told about the doubts, debates and discussions that are leaving the putative 'age of the genome' with more questions than answers. Tidying up the story bowdlerizes the science and creates straw men for its detractors. Simplistic portrayals of evolution encourage equally simplistic demolitions.
The mechanisms of evolution have been known for decades. There's very little uncertainty among informed evolutionary biologists. I agree that it's about time we inform the public and other scientists about what happened to evolutionary theory in the 1970s.

Science writers can help but they first have to educate themselves. They could start by learning about the Central Dogma of Molecular Biology and how modern scientists really thought about genes and gene expression in the 20th century.



49 comments:

Georgi Marinov said...

One small mistake - centromeres are transcribed.

http://www.ncbi.nlm.nih.gov/pubmed/23066104

It's true that they are transcribed in order to be heterochromatinized, but it's still transcription.

Robert Byers said...

For a creationist the point here is once again a criticism of science publications about errors.
Well then, if these scientists can be wrong about these ideas, they can be wrong about other ideas.
Yet evolutionists tell us Thou shalt not question science, press or people, or one is denying science etc etc.
What is being sold here is not just an error, as Mr. Moran sees it, but a whole right to easily dismiss conclusions or authority on subjects concerning origins.
Monkey see, monkey do.

Joe Felsenstein said...

Larry, I think the key word is "fully" understand. You see, by that standard physics is incomplete too. We do not fully understand, at a molecular level, everything that happens when a cake is baked. So physics must be wrong too, I guess.

Mikkel Rumraket Rasmussen said...

Byers read, Byers fail to comprehend.

... For the quintillionth time.

Larry Moran said...

I understand that point and I think it's important. Why would Nature publish an article suggesting that there's something wrong with evolutionary theory just because some molecular biologists don't understand why we have such a complicated genome?

Does that reflect the opinion of the editors? Does it explain why they were duped by ENCODE?

Robin said...

I am a little confused. I read the article, but I didn't read the part which said there was something wrong with evolutionary theory.

In fact he explicitly states quite the opposite.

As far as I can see he is criticising how the situation is presented by science writers, which is not the same thing as saying there is something wrong with evolutionary theory.

Whether he is right or wrong about that, surely you should not criticise him, or Nature, on the basis of something he didn't say.

SPARC said...

My impression is that Ball, like E. Birney, is trying to paddle back without having to concede that ENCODE was either playing tricks to gain as much hype as possible or was unknowingly completely wrong. Describing ENCODE as being coy about its evolutionary implications when ENCODE's main message was that there is no junk DNA in the human genome shows that Ball still doesn't get it.

BTW, if you want to reach a broader public you can post comments on Ball's article at Scientific American, which published a reprint.

Robin said...

From this end it looks as though they only paddled part of the way back and everybody paddled a little forward to meet them and then said that nothing had changed.

In 2009 we were confidently told that 95% of DNA was junk. Post ENCODE it has moved to 90% (or even 80% in some cases).

Is everyone so spooked by the ID people that they can't just say that they now think that a further 5% is functional?

Pedro A B Pereira said...

"I am a little confused. I read the article, but I didn't read the part which said there was something wrong with evolutionary theory.

In fact he explicitly states quite the opposite."

He didn't. The problem here is that he apparently doesn't understand the implications for evolutionary theory of having 80% of the genome be functional. *IF* 80% of the genome (or more) is indeed functional, then what we know about molecular evolution, genome evolution, and population genetics is wrong. So he is effectively saying there is something wrong with evolutionary theory; it's just that he doesn't understand this and thinks the two things are separate. In fact, most people involved in ENCODE don't seem to get it either, or are more interested in making a big marketing splash, so that funding and glory abound.


He also tries to minimize the claims of ENCODE:

"the ENCODE team argues that at least some of this transcription could provide a reservoir of molecules with regulatory functions"

This is not what the ENCODE leaders stated at the time the results were published. They actually stated that *at least* 80% of the genome was functional. Some even went as far as stating that maybe 100% was functional. What Philip Ball says in this article is not representative of what happened at all, and the ENCODE leaders have only changed their tune somewhat after all the criticism.

Needless to say, ENCODE didn't prove what was implied. Transcription by itself does not mean functionality; spurious transcription and protein/RNA attachment to DNA have been known for ages.

Mikkel Rumraket Rasmussen said...

"In 2009 we were confidently told that 95% of DNA was junk. Post ENCODE it has moved to 90% (or even 80% in some cases)."

Give references for these claims, please.

Georgi Marinov said...

"ENCODE's main message was that there is no junk DNA in the human genome"

That most definitely wasn't ENCODE's message.

Robin said...

Rumraket,

I didn't think it was in doubt, but OK. The 95% figure from 2009 and older can be found in Richard Dawkins' "The Greatest Show on Earth", where he says that 95% of the human genome may as well not be there at all.

This figure is also referenced on this site (albeit in a less absolutist phrasing) in a review of a New Scientist article: http://sandwalk.blogspot.com.au/2007/07/junk-dna-in-new-scientist.html where Professor Moran says "The New Scientist article acknowledges, correctly, that more than 95% of the genome could still be junk."

The 90% figure is also referenced by Professor Moran here: http://sandwalk.blogspot.com.au/2013/03/encode-junk-dna-and-intelligent-design.html where he talks of "...the 90% of the genome we think is junk"

The 80% figure comes from Dr Merlin Crossley: http://www.abc.net.au/radionational/programs/ockhamsrazor/dna/4644102

The 80% figure is mentioned right at the end.

For my part I don't think that 5% of our DNA is nearly enough to store instructions for building and maintaining an entire human being.

I would be surprised if even 10% is enough. My own prediction is that we will be hearing the figure that 80% is junk more often in the next ten years. You heard it here first :)

Georgi Marinov said...

1. Why is it that you think 5% of our DNA is not nearly enough to "store instructions for building and maintaining an entire human being"?

2. Why are people so obsessed with the precise percentage of "functional" DNA in the human genome? In all likelihood, there is no clear delineation between "functional" and "non-functional", which makes any such attempt to give a precise number futile. And ultimately, a huge waste of time - IMHO, people's energy would be much better spent trying to understand the precise mechanisms of gene regulation than arguing over whether 5% or 10% of the genome is functional.

nmanning said...

"For my part I don't think that 5% of our DNA is nearly enough to store instructions for building and maintaining an entire human being."

Why?

Wavefunction said...

Perhaps Ball should stick to writing books about chemistry. Some of them have been wonderful.

PNG said...

I'd like to hear Larry's take on Ptashne's takedown of a science writer, which just appeared in PNAS.

Epigenetics: Core misconcept - Mark Ptashne doi:10.1073/pnas.1305399110

Robin said...

Basically because it would imply that we are no more complex than a basic spreadsheet program or that nature has produced some miraculously efficient algorithm, neither of which seems particularly likely to me.

There are about 3 billion base pairs in the human genome, yes? 5% of that is 150 million base pairs. If we express that in the terms we normally use for computer storage, that equates to about 38 megabytes (please correct me if I have my maths wrong).
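(For what it's worth, the conversion checks out. A rough sketch in Python, assuming the usual 2 bits per base and no compression:)

```python
# Rough sketch: convert a base-pair count into familiar storage units,
# assuming 2 bits per base (4 possible bases) and no compression.
genome_bp = 3_000_000_000            # ~3 billion bp in the human genome
functional_bp = 0.05 * genome_bp     # the 5% figure under discussion

bits = 2 * functional_bp
megabytes = bits / 8 / 1_000_000     # decimal megabytes
mebibytes = bits / 8 / (1024 ** 2)   # binary mebibytes

print(f"{functional_bp:,.0f} bp -> {megabytes:.1f} MB ({mebibytes:.1f} MiB)")
# 150,000,000 bp -> 37.5 MB (35.8 MiB)
```

Either way you count megabytes, "about 38" is in the right ballpark.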

So now I think of a program of about that length containing the instructions to manufacture an entire human body: brain, nervous system, skeletal system, heart, circulatory system, lungs, skin, muscles, cartilage, immune system, etc., etc.

Now I think that I have gained a pretty good feel for what can be achieved with certain program lengths, even with extreme efficiency and purpose-built programming languages.

And I will lay a bet that you could not encode the instructions for building a human body in 38 megabytes.

As I am not out to prove anything I am happy to let the claim ride.

Anonymous said...

I get 37 Mbytes, but then for E. coli we would have, if the whole genome were a "program" and rounding the genome to 5 Mbp, 1.19 Mbytes. No software that I am aware of can build a bacterium. The one I use just to write programs takes 11 Mbytes of space. It certainly can't build an E. coli. Therefore E. coli's genome should be much bigger?

I think that the problem is with your metaphor. It equates software, which is circumscribed and limited to the environment where it works and where it was developed (computers, hardware), and which is limited in dimensions (programs are mostly "linear"), with biological systems, which have to do with physical-chemical stuff. Interactions between stuff might give you an idea. The total number of possible pairs of interactions for the genes in E. coli (rounded to 4000) would be (4000 - 1) * 4000/2 = 7,998,000. Think of the number of networks we could build with that. Anyway, of course, not all of these interactions happen, and different combinations are available in different conditions ...
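(The pairwise count above is just "n choose 2"; a quick check in Python, where the rounded 4,000-gene figure is the only assumption:)

```python
from math import comb

n_genes = 4000            # E. coli's gene count, rounded
pairs = comb(n_genes, 2)  # same as (4000 - 1) * 4000 / 2
print(pairs)              # 7998000

# Every pair can independently interact or not, so the number of
# conceivable interaction networks is 2**pairs -- a number with
# roughly 2.4 million digits.
```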

Anyway, to the issue at hand. Maybe it is 5%, maybe it is 10%, maybe more. I am inclined towards between 5% and 10%. I see no problem with this at all.

I enjoy your comments by the way.

Georgi Marinov said...

This is anthropocentric thinking at its worst.

And it does not take much to tear it apart.

Last week the zebrafish genome was officially published. It has 26,000 protein coding genes. Note, this is the number of protein coding genes alone.

I am pretty sure you view zebrafish, which aren't that big in size or particularly intelligent, as less complex than humans.

And that's not even the most extreme example, it's just a recent one. There are plants with almost double the number of protein coding genes compared to us. What do they need them for given how much less "complex" they are compared to us?

The argument "We are so marvelously complex, we must have a lot of DNA coding for that complexity" is not a good one. It does not account for the organisms that are apparently much less "complex" than we are but have a lot more DNA that we know for sure does something. It does not account for the organisms that are of similar complexity but have vastly different genome sizes. And it is based on a flawed understanding of what "complexity" really means in biology, and how it arises.

Anonymous said...

Sorry, I forgot to stress that, since we are talking about physical chemistry, interactions are not the whole story. Think of the properties of these chemicals, of how development happens. A human body is not built. It develops ... three dimensions, things present at the same time, growth, osmosis, such physical-chemical phenomena ... etc. It's not just Mbp, it's environment, physics, chemistry ...

Robin said...

Georgi Marinov wrote: "The argument "We are so marvelously complex, we must have a lot of DNA coding for that complexity" is not a good one"

It is a good thing I didn't make that argument then, isn't it?

Georgi Marinov said...

That's precisely the argument you're making. This is what you said:

"And I will lay a bet that you could not encode the instructions for building a human body in 38 megabytes."

Robin said...

I don't know how you got the one statement from the other.

I said what I said.

Robin said...

But Georgi, I am not sure what point you are making in reply.

How does your point about the zebrafish affect my point?

Georgi Marinov said...

Here is another quote from you:

"For my part I don't think that 5% of our DNA is nearly enough to store instructions for building and maintaining an entire human being."

Robin said...

Georgi, for example if I look at a particular system and I estimate that we would need an EPROM of a given size to hold the control program for it, I am not saying "that system is so marvellously complex that it is going to need a lot of bytes"; I am making a judgement based on knowledge and experience.

Georgi Marinov said...

"How does your point about the zebrafish affect my point?"

The zebrafish illustrates nicely the fallacy of trying to correlate the number of genes in a genome, and the amount of 'functional DNA', with 'organismal complexity'.

There is some correlation between the two, but only at a very broad level - there is no known prokaryote with 20,000 genes - yet within multicellular eukaryotes it breaks down completely.

And since the number of genes does not seem to correlate with 'complexity' (and note how I usually put the term in quotation marks), why would you think there is a causal relationship between the two?

Also, I would like to point out that I am using the number of genes as a proxy for the amount of functional DNA, as there isn't another convenient metric to use - the amount of regulatory DNA (and thus the total 'functional DNA') probably scales up linearly with it when genomes of roughly similar size and composition are compared.

Georgi Marinov said...

Robin wrote (Monday, April 29, 2013 10:00:00 PM):

Georgi, for example if I look at a particular system and I estimate that we would need an EPROM of a given size to hold the control program for it, I am not saying "that system is so marvellously complex that it is going to need a lot of bytes"; I am making a judgement based on knowledge and experience.


Biological complexity does not arise from the number of 'bytes' in DNA in the same way that complexity does in systems engineered by humans. It's a false analogy.

Robin said...

Negative Entropy,

The point is that physics and chemistry work the same for all organisms, but the specific configuration of an organism has got to be pretty much down to the DNA.

Georgi Marinov said...

There wasn't really a discussion of DNA methylation there. And there should have been given the argument he is trying to advance.

But there is a general point there that is correct - a lot of histone mark biology is really about chromatin and transcriptional dynamics and not about epigenetics, but nevertheless gets confused with it. And a lot of histone marks are thought to play a causal role in regulating transcription when this is in fact not the case and they get deposited for other reasons. H3K36me3 and its role in the transcriptional cycle is the example I always like to give. This paper, which found most of the HDACs on active genes, was another beautiful illustration of the same point:

Genome-wide mapping of HATs and HDACs reveals distinct functions in active and inactive genes.
Wang Z, Zang C, Cui K, Schones DE, Barski A, Peng W, Zhao K.
Cell. 2009 Sep 4;138(5):1019-31. doi: 10.1016/j.cell.2009.06.049. Epub 2009 Aug 20

Robin said...

Georgi, think of it this way.

Could the functional part of the genome for a whale be 25 base pairs long?

Of course not, that is ridiculous.

Now let's up that number - 2500 base pairs. Again - ridiculous.

We keep incrementing that number and eventually we will come to a number that is not ridiculous.

So, yes, there is a minimal number of base pairs that will be required in order for the attributes of an animal to be stored by it.

So the question is: what is the minimal number required for a particular organism?

I am saying, and I stick by it, that 150,000,000 base pairs is not enough in the case of a human.

That a less complex organism might have more functional DNA does not affect this.

Georgi Marinov said...

You have absolutely no way of defending the 150Mb number other than that that number makes you feel good. That's not a very good argument.

Robin said...

Did I say that anything about that number made me feel good?

Or any way at all?

SPARC said...

Then what conclusion do you think Ewan Birney wanted to put forward when he publicly claimed "The term junk DNA must now be junked"?

Robin said...

In the end I have two premises: firstly, that natural selection will not result in some brilliantly efficient process for doing this, and secondly, that we can reason from looking at the types of functionality and complexity that humans have been able to extract from certain information sizes.

The amount of functionality that nature can get out of a particular information size is not going to be orders of magnitude greater than what humans can achieve.

Robin said...

As I say, not selling anything, if I am wrong I am wrong.

But I think that my original point still stands - at least from the point of view of how this is communicated to the public at large.

The sizes for functional DNA being talked about now are larger than the sizes being talked about 5 or 6 years ago.

AllanMiller said...

Robin,

I think it's mistaken to look at DNA in terms of 'number of instructions'. There is great extensibility in a particular biochemical function. For example, tiny ribozymes a few bases long will catalyse certain reactions; the same reactions can be catalysed by proteins thousands of amino acids (and hence thrice that number of bases) in length. The latter will be much more specific than the former, but adds an enormous number of bits to the genome for little in the way of linear 'instruction count'.

Piotr Gąsiorowski said...

Anyway, our entire genome is just 2.5% of the size of the marbled lungfish's genome. I'm sure no marbled lungfish would believe that only a tiny fraction of its DNA has a function. It's obvious that at least some 20% must be functional, even if a less complex organism (a human, for example) requires fewer "instructions".

Anonymous said...

But Robin,

You seem to have missed the message. I agree that we can't think of natural selection as an amazing optimizer, but I showed you that your analogy/metaphor breaks easily if you look at something like E. coli, which is more "complex" than my text editor yet uses much less of a "hard drive." Therefore the analogy/metaphor is completely misguided. Program sizes cannot be compared to DNA sequences. It's that simple. (Don't mistake what I said for a claim about the amount of information we could analogously store in DNA. That is not at all the same as a biological process.) To try to have this hit home (hopefully): a few enzymes might be responsible for putting together lipids. They don't have to do anything else for these lipids to start forming micelles and structures similar to membranes. That's what lipids do. So the information about how to build a primitive membrane is not really in the DNA, and it is not in the enzymes. All the enzymes do is produce the darn molecules, and the molecules do what their properties make them do.

So, yes, it would be ridiculous to think that 1000 base pairs would be enough for a whale, but it is not that simple, and the analogy/metaphor is so wrong that I can't find the words to make the message clear enough to show you how it fails.

Yes, the public gets metaphors and analogies. That does not mean that these metaphors and analogies have to be taken that far.

Best.

Arlin said...

I think you could encode a whale in 25 bp. The problem with looking at "complexity" and "encoding" this way is that evolution leverages the complex propensities of physicochemical systems to build up bigger systems. When we count DNA bp we are not seeing that process. Evolution did not build up adenine, for instance, atom by atom. There is a physico-chemical propensity for it to form under certain conditions. Evolution did not build up the shape of a mushroom cell by cell -- the reason that mushrooms and mushroom clouds and jellyfish have similar shapes is a matter of fluid dynamics.

So, the way to "encode" a whale in 25 bp is to encode the "on" switch that lights the fuse of the spontaneous whale-assembly process.

Of course, it is incredibly unlikely that there exist conditions for spontaneous whale-assembly out there in the universe anywhere. The propensity for chemical systems to self-organize into whales is very low.

The nature of evolution is to leverage analog systems as well as digital ones. Nature has leveraged a kind of digital system in the form of DNA, which makes hereditary encoding very efficient. But nature also leverages analog systems, and some of the complexity resides in that, although it is not easy to count.

Anonymous said...

Arlin,

Beautifully explained. Thanks. That's what I was trying to say. (The parts about leveraging, except that instead of calling that the nature of evolution I would call it the nature of life, or of living systems.)

Pedro A B Pereira said...

Just one more example: when I do a PCR (polymerase chain reaction) to copy genes, I don't need any "program instructions". An enzyme will start copying after matching to a similar section on the DNA, it will copy up to a certain point, drop off, repeat, etc. None of this is coded as in a computer program; it's just chemistry. If I wanted to simulate this as computer code I'd have to instruct every single detail into the program to be able to simulate it. Even at the DNA level, three "letters" (a codon) specify an amino acid. But DNA doesn't need to codify what and how many atoms compose an amino acid. It just needs 3 letters. Also, if you look at how transcription is regulated, it's far less complex than what you would need if you were programming regulation as a set of code instructions.

My point is simply that a direct comparison of the amount of code needed for a computer program vs. the number of base pairs in DNA isn't valid. I fail to see what it is exactly that makes 5% functional DNA sound "little" but 10% sound good. As someone said, there is no actual argument being given here.

Schenck said...

"Why would Nature publish [this] article"

It's pretty clear that it's just for 'ratings', an attention getting article that can potentially boost sales and/or their profile.
This is what Nature has degenerated to. My impression is that most people regard Nature articles as generally garbage (or at least totally over-hyped), BUT everyone will also kill to have such a publication.

Piotr Gąsiorowski said...

Aha, so even garbage can have a function after all!

RBH said...

Has anyone ever done an analysis of the recipes in The French Chef, looking at the relationship between the number of characters in the printed recipe and the complexity of the finished product? Robin, can you help me there?

steve oberski said...

Yes, and those recipes are now much larger than they were 5 or 6 years ago.

My suspicion is that Le Cordon Bleu is up to some hanky-panky and those recipes are not nearly large enough to account for the complexity of the typical menu of a French restaurant.

Joe G said...

The known mechanisms of evolution don't appear to do much of anything - see Lenski's 50,000+ generations of E. coli. No new proteins and no new protein machinery - no sign that macroevolution can occur via microevolution.

Not only that, we don't know what makes an organism what it is. So, on a molecular level, evolutionism is crap.

As for mutations becoming fixed via random genetic drift - can you please reference some experiments that demonstrate this, or do we have to take your word for that?