Friday, November 06, 2015

The cost of a new gene

Let's think about the biochemical cost associated with adding some new piece of DNA to an existing genome. Michael Lynch has been thinking about this for a long time. He notes that there certainly IS a cost (burden) because the new bit of DNA has to be replicated. That means extra nucleotides have to be synthesized and polymerized every time a cell replicates.

This burden might seem prohibitive for strict adaptationists1 since everything that's detrimental should be lost by negative selection. Lynch, and others, argue that the cost is usually quite small, and if it's small enough the detrimental effect might be below the threshold that selection can detect. When this happens, new stretches of DNA become effectively neutral (nearly neutral) and they can be fixed in the genome by random genetic drift.

The key parameter is the size of the population since the power of selection increases as the population size increases. Populations with large numbers of individuals (e.g. more than one million) can respond to the small costs/burdens and eliminate excess DNA whereas populations with smaller numbers of individuals cannot.
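
To make the drift-barrier idea concrete, here's a minimal sketch (in Python) of the criterion that a cost is only visible to selection when it exceeds roughly 1/Ne. The cost value is an illustrative number of my own choosing, not one from the paper.

```python
# Minimal sketch of the drift-barrier criterion (illustrative numbers only).
# A deleterious cost s is "seen" by selection roughly when s > 1/Ne;
# otherwise it is effectively neutral and can be fixed by random genetic drift.

def visible_to_selection(s, ne):
    """Return True if a fitness cost s exceeds the power of drift (1/Ne)."""
    return s > 1.0 / ne

cost = 1e-8  # hypothetical energetic cost of a small DNA insertion

for ne in (1e9, 1e6, 1e4):  # bacterium-like, invertebrate-like, vertebrate-like Ne
    verdict = "eliminated by selection" if visible_to_selection(cost, ne) else "effectively neutral"
    print(f"Ne = {ne:.0e}: drift barrier = {1/ne:.1e} -> {verdict}")
```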

Michael Lynch and Georgi Marinov (Hi, Georgi!) have just published a paper where they attempt to calculate the cost of adding DNA as well as the cost associated with transcribing that DNA and translating it into protein (Lynch and Marinov, 2015). One of the goals of the paper is to work out the net selective advantage of a new gene: its product might confer a selective advantage, but there's also an energy cost, in ATP equivalents, associated with every new gene.

Here's how they put it ...
Based on its phenotypic manifestations, a gene may have a multiplicity of advantages, but the energetic cost of replication, maintenance, and expression represents a minimum burden that must be overcome to achieve a net selective advantage. If a genic variant or a novel gene is to be efficiently promoted by natural selection, the net selective advantage (beyond the energetic cost) must exceed the power of drift (defined as 1/Ne for a haploid organism, where Ne is the effective population size).
This is all standard stuff. The innovative parts of the paper are: (1) more specific calculations of costs based on experimental results in the literature, and (2) an analysis of whether the cost of a gene is related to cell size.

Why is this second goal significant? Because extreme complexity and multicellularity are only seen in eukaryotic cells, and eukaryotic cells are much larger than prokaryotic cells. Larger cells require much more protein (and membrane) than small cells, so each eukaryotic gene has to produce a lot more protein than the same gene in a small cell. The cost of adding a gene to the genome is small compared to the cost of making proteins. Cells could not grow larger and more complex as long as they were limited in their energy production.

Nick Lane and Bill Martin (Lane and Martin, 2010) have argued that eukaryotic cells overcame this limitation through the acquisition of mitochondria. Mitochondria allow the cell to produce much more energy, making the cost of a gene (mostly protein synthesis) less detrimental than it would be if cell size and complexity increased in prokaryotic cells.

Lynch and Marinov looked at the cost of replicating DNA, the cost of making mRNA, and the cost of making proteins in different cells. The results show that increased cell size does not impose a significantly increased energy burden so there's no need to speculate that mitochondria were necessary for expanded genomes and more complexity. Population genetics can account for the observations.
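
As a rough illustration of the kind of bookkeeping involved (a sketch based on my reading of the approach; all numbers are placeholders, not values from the paper), the burden seen by selection is a ratio: the ATP spent on a gene divided by the cell's total energy budget.

```python
# Sketch: turning an energetic cost into a selection coefficient.
# The burden is the fraction of the cell's total ATP budget consumed by the gene.
# All numbers are placeholders for illustration, not values from Lynch & Marinov.

def selective_cost(gene_cost_atp, cell_budget_atp):
    """Fractional energetic burden of a gene, interpretable as a selection coefficient."""
    return gene_cost_atp / cell_budget_atp

# Hypothetical small cell: modest expression, modest total budget.
print(selective_cost(gene_cost_atp=1e6, cell_budget_atp=1e10))   # 1e-4

# Hypothetical cell ~1000x larger: more protein made per gene, but a ~1000x larger budget.
print(selective_cost(gene_cost_atp=1e9, cell_budget_atp=1e13))   # still 1e-4
```

The point of the ratio is simply that absolute costs rise with cell size, but so does the total budget, so a bigger cell does not automatically mean a bigger relative burden.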
Taken together, our observations suggest that an energetic boost associated with the emergence of the mitochondrion was not a precondition for eukaryotic genome expansion.
This is probably a bit too much for most of you so I'll concentrate on the other conclusions. These are things you have to know if you want to understand genome evolution.
  1. Bacterial species (prokaryotes) typically have large population sizes. The cost/burden of adding even small amounts of DNA to the genome in these cells is sufficiently detrimental that it can be detected by natural selection. This is why bacterial genomes tend to be small and compact.
    The preceding results indicate that the energetic cost of replicating a DNA segment of even just a few nucleotides (even if nontranscribed) can be sufficient to be perceived by selection in a typical bacterial population with large Ne.
  2. Eukaryotic species, especially multicellular eukaryotic species, tend to have small population sizes. In this case, the detrimental cost of adding extra DNA cannot be detected by natural selection so there's no impediment to expanding the genome.
    In contrast, insertions of even many kilobases often impose a small enough energetic burden relative to the overall requirements of eukaryotic cells to be immune to selection.
  3. Transcription is also expensive so when a new segment of DNA is transcribed as well as replicated it imposes an even greater cost. However, the extra burden of transcription is still not sufficient to make the cost subject to natural selection in eukaryotic species. Junk RNA is not that harmful. Pervasive transcription is not that harmful.
    Although RNA-level costs are frequently greater than those at the DNA level, these are often still not large enough to overcome the power of random genetic drift in eukaryotic cells. This means that many nonfunctional DNAs that are inadvertently, even if specifically, transcribed in eukaryotes (especially in multicellular species) cannot be opposed by selection, a consideration relevant to the debate as to whether transcriptional activity is an indicator of functional significance.
  4. Protein synthesis is expensive. If a new gene has to produce a lot of protein then the cost/burden can be prohibitive, even in eukaryotic species with small populations. This cost has to be overcome by a greater selective advantage to making the proteins. This is why a multicellular eukaryote can have lots of junk DNA making lots of junk RNA but NOT lots of junk protein. It's also why accidental protein-coding gene duplications usually result in selection for turning off one of the copies. (A rough sketch comparing the three levels of cost follows this list.)
    However, with the cost at the protein level generally being much greater than that at the RNA level, segments of DNA that are translated can sometimes impose a large enough energetic cost to be susceptible to selection, even in multicellular species. This may explain why redundant duplicate genes commonly experience high rates of nonfunctionalization.
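
Here's a minimal sketch of how the three levels of cost stack up against the drift barrier. The per-unit ATP costs, expression levels, and cell budget below are rough guesses of mine for illustration only; they are not the measured values in Lynch and Marinov (2015).

```python
# Sketch comparing DNA-, RNA-, and protein-level costs of a gene to the drift barrier (1/Ne).
# All per-unit costs, copy numbers, and the cell budget are assumed placeholder values,
# NOT the numbers reported by Lynch and Marinov (2015).

ATP_PER_BP_REPLICATED = 100    # assumed cost to synthesize and polymerize one bp (both strands)
ATP_PER_NT_TRANSCRIBED = 50    # assumed cost per ribonucleotide of mRNA
ATP_PER_AA_TRANSLATED = 30     # assumed cost per amino acid, including peptide-bond formation
CELL_BUDGET_ATP = 1e13         # assumed total ATP budget per cell cycle (large eukaryotic cell)

def gene_burden(length_bp, mrna_copies, protein_copies):
    """Relative energetic burden of one gene per cell cycle (a selection coefficient)."""
    dna = length_bp * ATP_PER_BP_REPLICATED
    rna = mrna_copies * length_bp * ATP_PER_NT_TRANSCRIBED
    protein = protein_copies * (length_bp // 3) * ATP_PER_AA_TRANSLATED
    return (dna + rna + protein) / CELL_BUDGET_ATP

drift_barrier = 1.0 / 1e6      # Ne ~ 10^6, typical of many invertebrates

examples = [
    ("junk DNA, lightly transcribed", gene_burden(2000, mrna_copies=10, protein_copies=0)),
    ("highly expressed protein gene", gene_burden(2000, mrna_copies=1000, protein_copies=100_000)),
]
for label, s in examples:
    verdict = "visible to selection" if s > drift_barrier else "effectively neutral"
    print(f"{label}: s ~ {s:.1e} ({verdict})")
```

With these made-up numbers, the untranslated segment falls below the drift barrier while the highly expressed protein-coding gene does not, which is the qualitative pattern described in points 3 and 4 above.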

1. And for Intelligent Design Creationists who think that strict adaptationism (Darwinism) is the only scientific game in town.

Lane, N., and Martin, W. (2010) The energetics of genome complexity. Nature, 467(7318), 929-934. [doi: 10.1038/nature09486]

Lynch, M., and Marinov, G.K. (2015) The bioenergetic costs of a gene. Proc. Natl. Acad. Sci. (USA) published online Nov. 2, 2015 [doi: 10.1073/pnas.1514974112]

40 comments:

  1. In eukaryotes, the probable main evolutionary cost of a new gene (assuming it really is a new piece of DNA in the genome) is that, since the new gene will initially occur on only one of the parental homologs, it will be a small region of haploidy in an otherwise diploid genome, potentially causing pairing problems in meiosis and thereby decreasing fertility.

    1. Not a bad point that the costs early on would include factors other than the cost of additional nucleotides, etc. But I think from what we know about pairing between homologs that the small duplicated region will most likely just be 'ignored' while the homologs just pair & recombine around it. That is, it would work its way through that challenging time with infrequent problems. The loss of fertility would be small (especially for males), but this cost can be added into the equation.

  2. When we're talking about eukaryotic multicellular vs bacterial/single-cellular population sizes, what are the ranges looked at here? Though large multicellular eukaryote population sizes never reach the scales of bacteria, it still seems to me there are many species with population sizes in the millions. Particularly for plants. Yet many plant genomes are colossal.

    How about Jellyfish? They have ridiculous population sizes, how big are their genomes? Is energy abundant for them?

    1. How about Jellyfish? They have ridiculous population sizes, how big are their genomes? Is energy abundant for them?

      Not very big, it seems. Larger than in water-fleas, smaller than in sardines:

      The Animal Genome Size Database

      The C values are 0.33 and 0.73 for the two scyphozoan species in the database.

    2. Ridiculous population size for a metazoan, but still very small compared to bacteria. And of course, population size and effective population size can differ quite a bit.

      Also, if you look at the figure, the top scale shows 1/s (where s is the selection coefficient), and the vertical dotted lines show the likely range of N_e. For C. elegans that's in the 10^6 neighborhood, as it is for many invertebrates. So millions indeed.

      But for bacteria, N_e is on the order of 10^9, and up to one or two logs above that. It cannot get many orders of magnitude larger than that, because population size is not the same as effective population size, even though there are bacteria estimated to have absolute population sizes of 10^20 and larger...

  3. Quibble: Not 1 = Ne. 1/Ne. Did they really have the "=" there instead of "/" ?

    1. No. That's my fault. I cut and pasted from the PDF. It didn't copy correctly and I didn't proofread.

      Can you not see the paper?

    2. It's strange -- when I search for the phrase "1=Ne" in the pdf, it takes me to that place, where it displays "1/N_e".

    3. The display is a picture of the paper; the search finds the invisible text layer... which in this case contains that error.

  4. A similar calculation would show the same thing for insertion of a small piece of junk DNA (say, 4000 bases). We know processes, such as transposition, can do that. There might be some selective advantage to adding or deleting such a piece, but the power of selection in such a case would be inadequate to explain why the piece of junk DNA is inserted there, or that it is actively maintained there.

    A power calculation analogous to Lynch and Marinov's would be needed before credence could be given to any assertion that junk DNA is present because it has a protective effect or because it is needed as spacer.
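
    Here's a rough back-of-the-envelope version of such a power calculation (my own sketch with placeholder numbers, not values from the paper): the replication cost of a 4000 bp insertion compared with the drift barrier for several effective population sizes.

    ```python
    # Back-of-the-envelope power calculation for a 4000 bp junk insertion.
    # Placeholder numbers, not values from Lynch & Marinov (2015).
    ATP_PER_BP = 100          # assumed replication cost per base pair
    CELL_BUDGET = 1e13        # assumed total ATP budget per (eukaryotic) cell cycle

    s = 4000 * ATP_PER_BP / CELL_BUDGET   # ~4e-8
    for ne in (1e4, 1e6, 1e9):
        print(f"Ne = {ne:.0e}: {'selectable' if s > 1/ne else 'invisible to selection'}")
    ```

    With these assumed values the insertion is only "visible" to selection at bacterial-scale Ne, which matches the pattern described in the post.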

  5. Polymerization reduces the cost of active transport (where applicable) by lowering the intracellular concentration of monomers. This also needs to be included in the terms of the cost equation, as does stress-response-based catabolism. Nice to see some thinking around this!

  6. What does this say for whole genome duplication? Does that happen in prokaryotes?

    1. I'd like to learn something about this too. A whole genome duplication would involve tons of extra protein being produced (twice as much?)

    2. I guess this applies here too, for whole genome duplications?

      It's also why accidental protein-coding gene duplications usually result in selection for turning off one of the copies.

    3. Dazz, I tried to present some of my observations on genetics of polyploids, but there was a glitch in the reply function and they're further down the thread. Perhaps uninteresting, too, but that's a different issue.

  7. This means that many nonfunctional DNAs that are inadvertently, even if specifically, transcribed in eukaryotes (especially in multicellular species) cannot be opposed by selection, a consideration relevant to the debate as to whether transcriptional activity is an indicator of functional significance.

    A subtle hint at ENCODE?

    Anyway, this is fascinating stuff. I don't have access to the paper, but it would mostly go well above my head anyway.

    Congrats Georgi

    1. It's not very subtle! I removed the reference numbers when I posted the quotation from the paper. The very next line is, "(see the exchange between ref. 18-20)."

      The references are ....

      18. Doolittle, W. F. (2013). Is junk DNA bunk? A critique of ENCODE. Proceedings of the National Academy of Sciences, 110, 5294-5300. [doi: 10.1073/pnas.1221376110]

      19. Graur, D., Zheng, Y., Price, N., Azevedo, R. B., Zufall, R. A., and Elhaik, E. (2013). On the immortality of television sets: "function" in the human genome according to the evolution-free gospel of ENCODE. Genome Biology and Evolution, 5(3), 578-590. [doi: 10.1093/gbe/evt028]

      20. Kellis, M., Wold, B., Snyder, M. P., Bernstein, B. E., Kundaje, A., Marinov, G. K., Ward, L. D., Birney, E., Crawford, G. E., and Dekker, J. (2014). Defining functional DNA elements in the human genome. Proceedings of the National Academy of Sciences, 111(17), 6131-6138. [doi: 10.1073/pnas.1318948111]

      It looks like Georgi has left the dark side to enter into training to become a Jedi knight! Michael Lynch is his Yoda. (You'll realize how funny that is if you know Michael Lynch.)

    2. That's not really an accurate description - I'm still doing ENCODE-style functional genomics work but now it's in organisms with more interesting genomic biology than mammals.

    3. Also, my position on these issues should have been clear from my posts for a long time. In any case, my PhD thesis will become public in January (it is still visible only within the Caltech network) so everyone will be able to read what exactly I had to say about these things two years ago.

    4. Thanks Larry. Already read Graur's critique of ENCODE; I remember him pointing out that assuming function when there's transcription is far too loose a criterion for functionality. Off to read the rest of those references.

      @Georgi: So is the issue with ENCODE not so much about the data or their methodology, but their interpretation of the data?

    5. Just don't get your info from press releases, read papers. And blogs. That's it.

  8. I'm dubious about the primacy of population-size arguments across this boundary. There are so many internal and external mechanistic differences between prokaryotes and eukaryotes, and many of them must have some influence, potentially overriding.

    For example, multiple origins of replication, manipulation of chromosomes by cytoskeletal elements vs cell wall points of attachment and intervening growth, chromosome architecture, sexual reproduction and so on.

    Then there is the ecological case. Population size in prokaryotes is slippery anyway, but immediate competition with neighbours is likely to be more significant than a hypothetical LLN effect in a largely imaginary efficiently stirred group of relatives.

    When you add in multicellularity, dynamics change again. A prokaryote has little leisure time. Meanwhile a multicellular eukaryote has to replicate the extra bases into every somatic cell - but then, those somatic cells themselves are likely to be a substantial cost. They bring benefits beyond their cost, of course, which is not necessarily the case for a bit of junk. Still, in a population of well-fed eukaryotes, irrespective of its size, the nutritional cost of the bases is likely to be small, and the selective cost of a few extra is measured against conspecifics, not against a species in a different niche.

    It's not just a constant s value being evaluated by populations of varying size, but highly variable s values depending on a population's closeness to the limits of adequate nutrition, and circumstantial mechanistic effects of radically different life histories and cellular mechanism.

  9. Excellent points. In my evolution class I teach about the bloating of the eukaryote genome as a kind of 'ratchet' process, where elements are occasionally added to the genome by regional duplication, transposons, etc., and selection is not sufficient to remove them because their cost is too low for natural selection to detect.
    But the detail here, that selection will be more effective on protein coding genes since they cost more - that is really cool. It is going into my description of this. Thanks.

    1. But the detail here, that selection will be more effective on protein coding genes since they cost more - that is really cool.

      I think it also contributes to explaining the C-value paradox. The total genome size in macroscopic eukaryotes is free to vary by a few orders of magnitude, whereas the protein-coding part and the number of protein-coding genes are roughly constant (or at least not wildly different across taxa).

  10. I once did an isozyme study of cottonwoods, which are considered diploid (with two copies of every gene) but are actually diploidized ancient polyploids. (The study involved basic metabolic genes that virtually all eukaryotes have.) In one species I'd score the results for each protein as having two copies, two copies, two copies -- oops, there were clearly four copies of the next gene. In the next species, there'd be only two copies of that gene, but four of something else.

    It takes a long time to get rid of the extra copies. Most tetraploid plants (with four copies of each gene) that I studied had four copies of each one. It seems to me that very high polyploids are less stable. I can't say that from scoring isozyme gels for high polyploids -- impossibly confusing -- but from flow cytometry results. If you consider the amount of DNA per cell to be 1 in a diploid, closely related tetraploids will have 2, but closely related decaploids (with 10 copies of each gene) will have, say, 4.3 times as much DNA, not 5 times. Some has been lost.

    There are some restrictions on the DNA loss. Obviously, losing junk DNA isn't a problem (except for getting chromosomes to line up at meiosis, which is often less of a problem than one might think). But consider the case where protein A interacts with protein B. Losing the gene for A may leave a lot of protein B floating around the cell. Maybe no problem, but sometimes a problem.

    Also, one copy may mutate to produce a somewhat different version of the protein. Sometimes that second version does a different useful job or simply does the same job under different conditions. In nearly all cases I worked on where the plant was tetraploid and a gene had three or more alleles (variants), that gene could be scored as one copy being invariant and the second copy having two or more variants. Perhaps I'm being unclear; it could be scored as having all the variation confined to just one copy of the gene. I have interpreted this as meaning that one copy of the gene had to be just right. The other copy was free to vary, even fail.

    1. In nearly all cases I worked on where the plant was tetraploid and a gene had three or more alleles (variants), that gene could be scored as one copy being invariant and the second copy having two or more variants. Perhaps I'm being unclear; it could be scored as having all the variation confined to just one copy of the gene. I have interpreted this as meaning that one copy of the gene had to be just right. The other copy was free to vary, even fail.

      Do you mean that, in a polyploid, as long as one copy of the gene keeps its original function, it doesn't matter if another copy suffers a deleterious mutation -- the organism is (or can be) still viable?

      That would give polyploids a great selective advantage and explain why they're not selected out, if I got this right?

      Thanks for another super interesting post

    2. Yes, I think that that's often the case. Often, polyploidy allows a population to "explore" the consequences of many mutations as long as (at least) one copy of the critical genes retains its original function.

    3. There are some advantages of polyploidy itself. With more copies of each gene, the necessary amount of protein can be made faster. Therefore, polyploidy is very common in plants of cold arctic and alpine habitats, where chemical reactions are slower and the growing season is short.

      Also, polyploid cells are usually larger than diploid cells. If the polyploid and diploid plants are the same size, polyploids have fewer cells. And sometimes polyploids are bigger or have thicker leaves, etc.

      However, I think by far the most interesting difference is that polyploidization frees up genes to change in potentially important ways.

    4. I'm going to have to do some googling on this fascinating stuff. I found this paper:

      THE ADVANTAGES AND DISADVANTAGES OF BEING POLYPLOID (Luca Comai)

      If any of the pros feels like recommending some good read that's appreciated :)

      One thing (or two) that is not clear to me is: if in a diploid there's a recessive and a dominant allele, the dominant rules, but what happens in a tetraploid with 3 recessive alleles vs 1 dominant? I take it from what you said about polyploids making proteins faster that 2 copies of the gene would be active in a tetraploid, right?

      Dominant - Recessive -> Dominant
      Recessive - Recessive -> Recessive

      So I guess in this example there would be a dominant and a recessive allele both active. How can that be?

    5. Dominant and recessive are the ideas to start with, but the reality is more complicated. In many cases, recessive genes are "broken." They either don't make a protein at all, or the protein doesn't work properly.

      But consider hair color in cattle. Black is dominant to red, but shorthorn white is co-dominant to black and to red. An animal with the black and shorthorn white alleles has a mix of black and white hairs, and looks grayish. It's called a blue roan.

      In the tetraploids I worked with, nearly all the alleles were expressed -- all made functional proteins.

      In a few cases, one of the alleles was a "null." It didn't make a protein that worked, or at least worked under the testing conditions. It was recessive.

    6. I see. So connecting those ideas above, a tetraploid could potentially have four expressed copies of a gene, three of which can be (deleterious) "mutants" without hindering the unmodified copy (enough to be selected against) from performing its original function? That's really cool stuff

    7. Yes, that's one of the possible scenarios.

  11. I know it's common to talk about massive bacterial population sizes, but this is a gross oversimplification. Yes, some populations of bacterial species in nature are massive, but many bacterial species live in relatively small numbers (on the order of those of multicellular organisms); see any microbiome paper for examples.
    So my question is, based on the power of drift, do these bacteria found in small population sizes have more variation in their genome size?

    1. Can you give examples of free-living bacteria with very low effective population size? I am not aware of any (which, of course, does not mean there aren't).

      In any case, the theory is not that if N_e is low then the genome will grow big. The theory is that if N_e is low, then the genome is shaped to a greater extent by mutational processes and to a lesser extent by selection than it would be if N_e was large. Then what exactly happens depends on the direction of those mutational processes.

    2. The DNA-seq papers identify many 'new' bacteria in various environments that, at least by high-throughput sequencing, appear to be in low abundance. Will have to dig through some papers to find specific examples. Will be a couple of days though, as my lab is moving tomorrow and that's a whole lot of fun.

    3. Well yes, but those are low fractional abundances, not necessarily low effective population sizes.

    4. Fair enough, but I expect there are populations of bacterial species that are similar in size to those of multicellular organisms. I also understand that a low N_e does not necessitate an increased genome size, but the argument used in the post seems to link the two. Regardless, I would not be surprised if some bacterial species have larger genome sizes, at least up to the point where the energy cost of replication starts to make a difference. Not all bacteria are rapidly growing E. coli-like organisms.



  12. Taken together, our observations suggest that an energetic boost associated with the emergence of the mitochondrion was not a precondition for eukaryotic genome expansion.



    I have come to the same conclusion, based on other criteria. Specifically, I have concluded that mitochondria are not needed for eukaryotes. This is partly based on the finding that there is no need for extra membrane area in eukaryotes, especially not as compared to bacteria. My blog has other explanations, e.g. the Organelle Escape Theory, which explains the relations between bacteria and organelles at least as well as, and probably better than, Margulis's theory.
