Monday, July 11, 2016

A genetics professor who rejects junk DNA

Praveen Sethupathy is a genetics professor at the University of North Carolina in Chapel Hill, North Carolina, USA.

He explains why he is a Christian and why he is "more than his genes" in Am I more than my genes? Faith, identity, and DNA.

Here's the opening paragraph ...
The word “genome” suggests to many that our DNA is simply a collection of genes from end-to-end, like books on a bookshelf. But it turns out that large regions of our DNA do not encode genes. Some once called these regions “junk DNA.” But this was a mistake. More recently, they have been referred to as the “dark matter” of our genome. But what was once dark is slowly coming to light, and what was once junk is being revealed as treasure. The genome is filled with what we call “control elements” that act like switches or rheostats, dialing the activation of nearby genes up and down based on whatever is needed in a particular cell. An increasing number of devastating complex diseases, such as cancer, diabetes, and heart disease, can often be traced back, in part, to these rheostats not working properly.
Theme: Genomes & Junk DNA
Sethupathy works on regulatory RNAs so, in theory, he should be in a position to know whether the best available evidence supports a genome full of junk DNA or bristling with control elements.

Apparently he has decided that "junk DNA" was a mistake.

In my opinion, he is wrong. I believe the evidence shows unequivocally that 90% of our genome is junk. There are lots and lots of regulatory elements in our genome (among other things) but, along with exons in genes, they make up less than 10% of our genome.

Five things you need to know about junk DNA

Sethupathy is one of those confused researchers who think that all noncoding DNA used to be called junk [see Science journal blows it again]. Such a misconception doesn't necessarily mean that ALL noncoding DNA has to be functional, but these two misconceptions ('all noncoding DNA was junk' and 'most of our genome is functional') seem to go together.

I don't understand why you would write about something in your area of expertise (e.g. junk DNA and genomes) without reading up on the literature. If he had done his homework, the most he could say is that he chooses to reject all the evidence in favor of junk DNA and concentrate instead on speculation that most of the genome is functional, based on evidence that some small percentage of it is functional.


55 comments:

  1. I'm sincerely amazed that there seems to be a new generation of molecular biologists who appear to think the study of gene regulation started in 2005. It's baffling, yet you keep providing examples. As you always say, Larry, no one ever proposed that all intergenic DNA was junk. Identifying and demarcating novel gene regulatory sequences (which I predict happens daily in tens of labs around the world) will simply not detract significantly from the percentage of the intergenic (and intronic) genome that is indeed junk.

    ReplyDelete
    Replies
    1. Brian
      "I'm sincerely amazed that there seems to be a new generation of molecular biologists who appear to think the study of gene regulation started in 2005. It's baffling, yet you keep providing examples. As you always say, Larry, no one ever proposed that all intergenic DNA was junk. Identifying and demarcating novel gene regulatory sequences (which I predict happens daily in tens of labs around the world) will simply not detract significantly from the percentage of the intergenic (and intronic) genome that is indeed junk."

      If the length of an intron was relevant for gene timing, yet most of the sequence inside it was not relevant informationally, would you consider that segment junk?

      Delete
    2. From Dr. Moran's post you are commenting in:

      "There are lots and lots of regulatory elements in our genome (among other things) but, along with exons in genes, they make up less than 10% of our genome."

      Delete
    3. @Bill Cole

      If the length of an intron was an adaptation required for timing the production of an RNA then that intron would not be junk. I would expect that length to be conserved within the species and in related species.

      As far as I know, there are no examples of such an intron in spite of the fact that such speculation has been discussed in the literature for thirty years. The idea doesn't make a lot of sense since it only applies to the very first RNA product following gene activation and because there are much more efficient ways of regulating the timing of gene expression.

      Delete
  2. What surprises me is the uncritical advancement of the "If it can cause disease when something is wrong, it must be essential to things working right" reasoning. Guys can get breast cancer; it doesn't mean male nipples serve an essential function. The fact that alterations in "junk" can cause disease doesn't mean any of it is essential.

    ReplyDelete
    Replies
    1. Similarly, at the genomic level there is a genetic load of junk DNA -- tied to the mutation rate through the creation of new promoters, e.g. Lynch et al. 2010.

      Delete
  3. Just as bad: a publication by a scientist at a regular university in the Answers Research Journal:

    Big Gaps and Short Bridges: A Model for Solving the Discontinuity Problem
    Answers Research Journal 9 (2016):149–162.
    www.answersingenesis.org/arj/v9/discontinuity_problem.pdf
    Change Laura Tan, Division of Biological Sciences, 102 LeFevre Hall, University of Missouri, Columbia, MO 65211

    ReplyDelete
    Replies
    1. I'd never heard of her before, but she's an actual associate professor. Her publications seem split into two disjunct groups: nuts & bolts molecular biology, published in real journals, and weird creationist speculation, published in creationist "journals". And look at her co-author.

      Tan C. 2015. Using taxonomically restricted essential genes to determine whether two organisms can belong to the same family tree, Answers Research Journal. 413–435.

      Tan C and Tomkins J. 2015. Information processing differences between bacteria and eukarya - implications for the myth of eukaryogenesis, Answers Research Journal. 143–162.

      Tan C and Tomkins J. 2015. Information processing differences between archaea and eukaryotes - implications for the myth of homologs and eukaryogenesis. Answers Research Journal. 121–141.

      The basic message seems to be "these two species are different, therefore their ancestors couldn't have been more similar".

      Delete
    2. So Change Laura Tan is actually deep in it ..
      I wonder how her university copes, despite the disclaimer that what she says is not the University of Missouri position.

      Delete
    3. I'm assuming she is not called upon to teach evolutionary biology.

      Delete
  4. Mice do fine without "junk DNA":
    'Deleting non-coding regions from the genome has no apparent effect.'

    http://www.nature.com/news/2004/041018/full/news041018-7.html

    ReplyDelete
    Replies
    1. I think that's not a strong argument. It detects only effects that are strong enough to attract the attention of a laboratory scientist. A 0.0001 decline in fitness will be effective in nature, but it will not be noticed in Nature.

      Delete
    2. Are there any proven examples of such a small selection coefficient actually having an effect in nature, or is this just a theoretical calculation? What population size is required, for how long? Are you using this as an argument against junk DNA?

      Delete
    3. a) Yes. In analyses that compare genomes from related species selection of that strength is detectable, precisely because it has an effect.
      b) An N_e of 5000 should be sufficient here. That's not a crazy number for mice.
      c) No. Joe is using this as an argument against the method used in the study referred to by Rolf. The key thing to remember is that neutrality is the null hypothesis. Failure to reject the null is not the same as affirmation of the null, and lab sample sizes are in many cases far smaller than natural populations, which means that they will fail to reject the null even if the null could be rejected from a sample size equal to N_e. That's all pretty basic statistics.

      Delete
    4. What Simon said.

      I kind of like my new phrasing: that effects that can't be seen in Nature may nevertheless be effective in nature.

      Delete
    5. Simon said to remember that neutrality is the null hypothesis. That means the onus is on adaptationists to show that adaptation is the explanation. In the case of the mouse deletion experiments, the most reasonable explanation is that the parts of the chromosomes that were deleted are junk.

      Bringing up theoretical examples where megabases of DNA might have a tiny effect that would be undetectable in the laboratory sounds a lot like special pleading to me. Joe, that's why I asked you what you really think. Based on the available evidence, do you think those mouse sequences were junk or not?

      Simon, what do you think?

      Delete
    6. Joe, that's why I asked you what you really think. Based on the available evidence, do you think those mouse sequences were junk or not?

      Probably mostly junk. But that assessment is based on other evidence, such as The Onion Test, mutational load arguments, and lack of conservation in evolution. Arguments that you know well and will expound on in your book. Basing it on "I didn't see any difference in fitness because the mouse didn't die" is a very weak argument. The mouse could be 10% less fit and you wouldn't detect that.

      Delete
    7. Joe,

      I understand your point about the deletion experiments. However, it's worth keeping in mind that the authors of that study deleted 2Mb of DNA (2,000 kb) and the deletion strains "were indistinguishable from wild-type littermates with regard to morphology, reproductive fitness, growth, longevity, and a variety of parameters assaying general homeostasis."

      I don't think that's as weak an argument as you make out. The deleted segments contained 1,243 "conserved" noncoding sequences. The evidence suggests strongly that the sum total effect of all that sequence (2 Mb) makes no more than a small contribution to fitness that's only hypothetically detected under some imaginary conditions outside the lab.

      I'm going to cover this in my book but I'll describe it as strong evidence with some qualifications. It is, after all, exactly the sort of test one SHOULD do to see if there's junk DNA. The result could have been very different if ENCODE was right.

      Delete
    8. I don't think that's as weak an argument as you make out. The deleted segments contained 1,243 "conserved" noncoding sequences. The evidence suggests strongly that the sum total effect of all that sequence (2 Mb) makes no more than a small contribution to fitness that's only hypothetically detected under some imaginary conditions outside the lab.

      The point is that if there are 1,243 conserved sequences, each could have a small selection coefficient, small enough that these researchers would miss the effect, but large enough to keep the sequences conserved. Mice can have very large effective population sizes. And in computer simulations and theoretical predictions we can show that those do conserve the sequences.

      Let me turn the tables: if the whole stretch of DNA is neutral, where did the conservation of the 1,243 sequences come from? Magic? Or natural selection against deleterious mutations?

      Note that I am not arguing that the whole stretch of DNA is under selection, but you are arguing that the whole stretch is neutral.

      Delete
    9. The criterion for "conservation" is at least 70% sequence similarity over 100 bp. Given that the aligned human and mouse genomes have many gaps, I suspect that many of these regions of sequence similarity aren't actually "conserved."

      Delete
    10. That means the onus is on adaptationists to show that adaptation is the explanation.

      Not if you are making the assertive statement. There's a reason for the asymmetry of asking for p<.05 in statistical tests. If H0 is "neutral" and H1 is "significant selection", it is appropriate to ask for data that rejects H0, by showing that p(data|H0)<.05. But a failure to reject does not in fact confirm the null, and p(H0|data) may still be pretty low, depending on priors for the data and H0. If I fail to reject neutrality, that really is a different statement than accepting neutrality. Absence of evidence really is not evidence of absence.

      In the case of the mouse deletion experiments, the most reasonable explanation is that the parts of the chromosomes that were deleted are junk.

      Nope. The most reasonable interpretation of these results is that the parts of the genome that were deleted are under weaker selection than can be detected with the sample size. This would include junk, but can also include non-junk. You cannot make that call from the experiment. This is a classic "It cannot be determined from the information given" question.

      Bringing up theoretical examples where megabases of DNA might have a tiny effect that would be undetectable in the laboratory sounds a lot like special pleading to me.

      It isn't. Say we were debating whether a coin is fair. I toss the coin 10 times and get 4 heads. The probability of getting a result as deviant from the expected number or more under the null ("coin is fair") is ~.75. We fail to reject. But if the coin is tossed 10,000 times and we get 4,000 heads, we end up rejecting the null soundly. The smaller a population, the larger selection coefficients need to be to have a detectable effect. As a corollary, the smaller a lab population gets, the larger the percentage of the genome for which no selection is detected. If you have a population of 1, you end up with no detectable selection at all. It's not reasonable to then argue that the whole genome is junk.

      I don't think that's as weak an argument as you make out. The deleted segments contained 1,243 "conserved" noncoding sequences. The evidence suggests strongly that the sum total effect of all that sequence (2 Mb) makes no more than a small contribution to fitness that's only hypothetically detected under some imaginary conditions outside the lab.

      It isn't "hypothetically detected" if these sequences are conserved. Conservation is precisely the effect of selection, and if these sequences are conserved in the wild, then they are in fact detected in the actual conditions in the wild. At this point you are arguing away evidence. We did toss the coin 10,000 times, got 4,000 heads and concluded that it is not fair. Now you argue we should toss it 10 times and if we fail to reject the null, we should accept it. That's not "strong evidence", that's just willfully ignoring evidence to the contrary.

      Delete
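Simon's coin analogy can be made concrete with a few lines of code. A minimal sketch using only the Python standard library (exact binomial test for the small sample; normal approximation for the large one, where exact arithmetic is impractical; all function names here are mine):

```python
from math import comb, sqrt, erfc

def exact_two_sided_p(n, k):
    # Exact two-sided binomial p-value for a fair coin: probability of a
    # result at least as far from n/2 as k, summed over all such outcomes.
    dev = abs(k - n / 2)
    hits = sum(comb(n, i) for i in range(n + 1) if abs(i - n / 2) >= dev)
    return hits / 2**n

def normal_two_sided_p(n, k):
    # Normal approximation to the same p-value, for large n.
    z = abs(k - n / 2) / sqrt(n / 4)
    return erfc(z / sqrt(2))

print(exact_two_sided_p(10, 4))         # ~0.754: fail to reject "fair"
print(normal_two_sided_p(10000, 4000))  # ~5e-89: reject "fair" decisively
```

Failing to reject with 10 tosses says nothing about the coin; 10,000 tosses with the same 40% heads rejects fairness overwhelmingly, which is exactly the lab-versus-wild sample-size point.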
    11. Larry: I suspect that many of these regions of sequence similarity aren't actually "conserved."

      Earlier you used their conservation as something that would predict a substantial deleterious effect on fitness if they were deleted. OK, if most of them are not actually conserved then a modest amount of selection against deleterious mutants in the ones that are actually conserved goes further to explain why those are conserved.

      Delete
    12. Joe, let me ask the question backwards.

      Suppose, in the, say, 2 Mb of deleted DNA, there are N nucleotides that are functional, in the sense that deleting them or changing their identity affects fitness by a small amount.

      Let us assume each of the N nucleotides affects fitness by some s that *would* be detectable in a wild type population. e.g. s_nuc = 0.0001 for a population of 5000, following Simon.

      So just say s_nuc is the effect of mutating or deleting each and every nucleotide. (Yeah there are problems with this, but bear with me.)

      So then, given that there's no effect on fitness detectable in the lab, so the lab fitness w_lab >= 0.9 or something, and given that we know the stretch of DNA is 2 Mb in length.... can't we compute the upper limit on N, the number of functional nucleotides for a given s_nuc, say s_nuc = 0.0001?

      In other words, treat s_nuc as known, a lower limit for the lab fitness w_lab as known, and N as the unknown. Solve for the upper limit on N. Possible?

      Now I know this is simplistic and unrealistic, and it can be tweaked. For example, we break the 2 Mb into, say, chunks of 15 nucleotides (just suppose we're looking for regulatory elements and say the average RE is 15 nucleotides long). So then N_re would be the number of functional REs, not the number of nucleotides.

      Then you treat s_re as known, let's say 0.0001, so that deleting each RE has a known effect on fitness, and you treat the lower limit on the fitness of the mouse, w_lab, as known >= 0.9. But the number of REs, N_re, is an unknown. You could solve for an upper limit on N_re, as a function of given s_re, right?

      Do I smell collaboration?

      Delete
    13. @Diogenes: Sure. If n nucleotides are under selection against all mutations, and if we can detect a decrease in fitness of 0.1, then deleting the full stretch of DNA is equivalent to mutating all n, so if we don't detect that effect we must have (1-s)^n > 0.9 or s < 1 - 0.9^(1/n). If n is not known but s is, then n < ln(0.9)/ln(1-s).

      For s = 0.0001 I get n < 1053. But note that mouse effective population sizes are more likely to be in the millions, so s could be even smaller.

      Delete
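Joe's bound is easy to check numerically. A minimal sketch, assuming multiplicative fitness effects across sites (the names here are mine, not from the thread):

```python
from math import log

def max_selected_sites(w_min, s):
    # Largest n satisfying (1 - s)**n >= w_min: n sites, each under
    # selection coefficient s, whose combined multiplicative fitness cost
    # could still escape detection if only drops below w_min are noticed.
    return log(w_min) / log(1 - s)

print(int(max_selected_sites(0.9, 0.0001)))  # 1053, matching n < 1053 above
```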
    14. Well I thought it would be harder than that, but I see you use a simple multiplicative model for epistasis. Now do the same thing for our hypothetical regulatory elements of length 15 nucleotides. The total number of functionally constrained nucleotides would be

      N_bp < 1053*15 = 15,795 bp

      So then the fraction of functional DNA is
      < 15,795 bp/ 2Mbp = 0.79%

      Or 99.21% junk.

      Now, correct me if I'm wrong, if the effective population size N_e = 1,000,000, then an

      s >= 1/N_e = 0.000001

      would be detectable by NS in the wild. So to redo the math.

      n < ln(0.9)/ln(1-s) = ln(0.9)/ln(0.999999)

      n < 105,360

      Then the fraction of functional bps, assuming each RE is 15 bps long, is

      15*105,360 /2Mbp = 79%

      So if effective population size in the wild were 1 million, and deleting each RE were detectable by NS in the wild, there could be 105,000 REs and their collective deletion would be undetectable in the lab. Then the lower limit on the fraction of Junk DNA would be 21%. Right?

      Delete
    15. Or to redo the math yet again...

      Suppose we limit ourselves to REs that have s >= 0.00000158. Note that this is only 58% higher than the s I considered last, which was 10^-6. The deletion of one such RE would be detectable by NS in a wild population of N_e = 633,000.

      In this case, the number of REs that could be deleted without detecting them in the lab would be N_re = 66,693.

      So then the fraction of functional bps would be

      N_re*15 bps/2Mbps = 0.50

      And the megabase deletion regions would have to be 50% or more junk.

      So, I think the scientists should do just a few more megabase deletion experiments on mice, as this would firm up the limits, and could eliminate the possibility of mouse DNA being more than 50% functional, with "function" defined by the criteria of affecting fitness at a level detectable by NS in the wild with natural population sizes.

      They're right on the cusp of proving it's mostly junk.

      Delete
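The arithmetic in the last two comments can be bundled into a single function. A sketch under the same assumptions (2 Mb deleted, hypothetical 15-bp regulatory elements, multiplicative fitness effects; the constant and function names are mine):

```python
from math import log

DELETED_BP = 2_000_000   # size of the deleted region in the mouse experiment
RE_LEN = 15              # assumed length of one regulatory element, in bp

def max_functional_fraction(s_re, w_lab=0.9):
    # Largest fraction of the deleted region that could consist of REs,
    # each with selection coefficient s_re, without the combined fitness
    # drop (to below w_lab) being detectable in the lab.
    n_re = log(w_lab) / log(1 - s_re)
    return n_re * RE_LEN / DELETED_BP

print(round(max_functional_fraction(1e-4), 4))     # 0.0079 -> >=99.2% junk
print(round(max_functional_fraction(1e-6), 2))     # 0.79   -> >=21% junk
print(round(max_functional_fraction(1.58e-6), 2))  # 0.5    -> >=50% junk
```

The smaller the per-element selection coefficient one is willing to entertain, the more "function" can hide from a lab assay, which is why the argument turns on effective population size in the wild.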
  5. Let's grant that your claim is true, and 90% is junk. What conclusion do you draw in regard to the quest for origins?

    ReplyDelete
    Replies
    1. What is "the quest of origins"?

      If it's speculation about the origin of life, then the fact that 90% of our genome is junk is irrelevant.

      Does knowing that our genome is full of junk help you prove that your gods exist?

      Delete
    2. Grasso, you know bacteria pretty much have no Junk DNA, right? So Junk DNA doesn't clarify the origin of all life, right?

      Delete
  6. The fact to be explained is why biochemical systems host noncoding sections of DNA that have important biological functions, rather than being evolutionary junk. It would be obtuse to argue that ID predictions were not met because only a part of junk DNA has been found to have function. Whether the percentage of junk DNA is 90% or 10% is not relevant. To quote Dr. Mae-Wan Ho again: "Lurking within junk DNA are armies of transposons (mobile genetic elements) that play an indispensable role in 'natural genetic engineering' the genome. They make up nearly half of the human genome, and serve as 'recombination hotspots' for cutting and splicing, and hence reshuffling the genome. They are also a source of ready-to-use motifs for gene expression, as well as new protein-coding sequences. The most abundant SINEs are Alu elements, of which 1.4 million copies exist, comprising 10% of the human genome, and are apparently only found in primates. Alu itself cannot move, but depends on enzymes encoded by LINE1 in order to insert itself. New research is suggesting that Alu elements may help create new proteins from existing ones. The reason the human genome contains so few genes – the latest count is just under 25,000 – is that more than half the genes are interrupted and subject to alternative splicing. It appears that about 5% of alternatively spliced internal exons in the human genome originate in an Alu sequence. It suggests that Alu elements can actually jump into genes and, instead of destroying that gene, actually contribute a new coding sequence to it."

    Gil Ast, head of a group in Tel Aviv University, Israel, which has made some of the most significant discoveries about Alu, is understandably pleased. "We believe that Alus allowed the shuffling of genetic information that may have led to the evolution of primates," said Ast.

    So the part of junk DNA that has function adds to the several other code systems in the cell that explain multicellularity and biological complexity. Junk DNA doesn't help your case, Larry. Get over it.

    ReplyDelete
    Replies
    1. Oh boy, we're off into speculation land. "New research is suggesting that Alu elements may help create new proteins from existing ones." Divide all such observations (if any) by 1.4 million. That's the total number of Alu copies, according to your source. What's the percentage of Alu's that do that?

      And you're conceding that evolutionary natural processes can create new proteins. ID creationists have argued for two decades that evolution can't make new proteins, and in "Darwin's Doubt", Stephen Meyer bizarrely claimed that the Cambrian explosion was driven by the Grand Ol' Designer creating legions of new proteins from scratch, de novo, because it can't happen any other way besides supernatural intervention.

      Now you're conceding that new proteins can evolve de novo.

      Delete
    2. Grasso's source: "The reason the human genome contains so few genes – the latest count is just under 25,000 – is that more than half the genes are interrupted and subject to alternative splicing." Uggh. Total speculation, not backed up by evidence.

      Of all alternatively spliced genes, what total number of genes have been shown to have functionally relevant alternative splices? I don't mean alternative splices detected from expressed sequence tags or tiny bits of RNA at a copy level of one RNA per cell. I mean how many alternative splices have been shown to be functional, experimentally? Total number. In the literature. Observed.

      Now take that total number and divide by 25,000. What's the percentage of functionally relevant alternative splices?

      Larry has made this point over and over and over and over.

      Delete
    3. Grasso's other source: "We believe that Alus allowed the shuffling of genetic information that may have led to the evolution of primates," said Ast.

      How many Alu's have been shown to play a role in functionally relevant alternative splices? Total. Observed. In the literature.

      Really, you act like an expert, and you say this shoots down Junk DNA as an argument for evolution. So this is the question. How many Alu's have been shown to play a role in functionally relevant alternative splices?

      I'd guess, off the top of my head, five, ten... fifteen?

      Now divide that by 1.4 million, the total number of Alu's in the human genome, according to your source. What percentage is that?


      Delete
    4. Grasso, you are all numerator with no denominator.

      Next time don't challenge junk DNA unless you compute percentages with some kind of denominator counted from the genome.

      Delete
  7. You might want to know that Praveen retracted his use of the term "junk DNA" and posted a long addendum to the original article. I approached him about the confusion and he agreed that his usage was confusing and a "mistake." That is an encouraging response.

    http://www.veritas.org/faith-identity-dna/

    ReplyDelete
    Replies
    1. Unfortunately, the addendum perpetuates the mistaken equation of "junk" and "non-coding". You and Praveen need to understand that it's always been known that some non-coding DNA has regulatory function and is not junk. But this regulatory DNA is only a few percent of the human genome. We're still at 2% genes, around 8% functional non-coding DNA, and 90% junk, not because its function isn't known, but because it evolves in a way that functional DNA should not. Perhaps an addendum to the addendum?

      Delete
    2. Of course we know that. "Junk DNA" isn't even a precise scientific term. And please do not group me with him. I never reviewed that article before publication, and have agreed with your critique from the beginning. I have zero scientific disagreement with you here. Let's not make an argument when none is called for.

      Delete
    3. My apologies. But if you think the addendum fixed anything, you also are confused. "Junk" is in fact a fairly precise term. It refers to DNA that has no biological function. There are a few arguments about whether DNA whose sequence isn't relevant, only its length, should be called junk, and about whether active transposons should be called junk too. But these are minor.

      Delete
    4. I agree the addendum could be clearer. But "junk DNA" isn't precise because "function" isn't a precise term. We could say that intergenic regions are "spacers" between genes, and that is their "function." Or, as Larry shows, we could say that they have no impact on selection, so they have no "function." IMHO the terminology here is subjective because "junk" is just an imperfect metaphor. It gets more problematic when we deal with DNA that is "doing stuff" that has no selective benefit. I had a maddeningly annoying debate about the proportion of lncRNA that is likely to be functional in an important way. To my opponents, the fact that it was transcribed "meant" it was functional in all cases (they were true pan-functionalists).

      The way that has worked best with confused audiences, for me, is to say that intergenic regions are "sparser" in function than coding regions. It is not that they do not have any function (Nobel prizes were granted for discovering how they regulate genes) but the function is much sparser and tolerant to mutation than coding regions.

      As for Praveen's addendum, I am glad he was willing to publicly change his article. It is not how I would have written it, but I am pretty sure he won't be using the term "junk" any more. That is about 90% of the problem anyway. I think if you talked to him you would find near-complete agreement on the science, but moldable differences in communication.

      Delete
    5. I disagree with most of that. I don't think it's a lack of clarity that's the problem. It's the massive fallacy of equating "non-coding" with "junk", which neither Ohno nor anyone else who used the term ever did. And it's also the equating of biologically irrelevant activity (transcription, transcription factor binding, etc. without consideration of its effect) with function. No, the term "junk" is not the problem, and the problem can't be fixed by ceasing to use the term.

      Delete
    6. Scientifically speaking, I agree with you. You are right. What exactly is our disagreement?

      Delete
    7. I think our disagreement is that you seemed to be claiming that 1) the main problem is using the term "junk DNA" and 2) the addendum improves or clarifies the article rather than making things even worse. If you weren't claiming either of these things, I apologize for misunderstanding.

      Delete
    8. No harm done. I think "junk DNA" is an imprecise term that means all sorts of things to different people. The main problem is not the word but how its meaning has been distorted in a manner that totally confuses the public. So using it without clearly explaining it is very problematic right now. This doesn't mean we never use it, but it is a term that usually confuses in the current moment. And of course, non-coding DNA is not synonymous with "junk," and never was. But this is what some people incorrectly think when we say "junk." This confusion enables some to falsely claim that "junk DNA" has been debunked. There is so much misinformation here that it is just best for most people to avoid the term entirely, lest they say something ill advised. Of course, ideally, many of us should be clearing up the confusion here. So I do think we should talk about it.

      As for the addendum, I'm not endorsing it as the clearest resolution. I would have written it differently. I do think, however, that it demonstrates some admirable humility. Would that it were true of all of us. Also, I think he won't be saying those problematic things in the future. That is good, right?

      Maybe you still disagree with me on this, but let's at least be glad we agree on the science here, and that we are working to communicate the correct story to the public.

      Delete
    9. The correct story is that about 90% of our genome is junk. It has no biological function. It is preserved because natural selection is not powerful enough to remove it. If that's the story you want to communicate then I'm with you.

      The other correct story is that there's no evidence for the existence of supernatural beings and it's anti-science to believe in something without evidence. If that's the correct story of science you want to communicate then I'm with you.

      Delete
    10. The word "evolution" has likewise been distorted in a way that confuses the public, but nobody suggests we abandon it. The term has a clear meaning, and that meaning just has to be made clear and taught to the public. It's the same with "junk DNA".

      Neither the article nor the addendum helps. Humility is a fine way to approach the universe, but humility alone won't do it. Instead, you need to make a careful distinction between what we know and what we don't. The addendum is being too humble about some things we do know. I don't share your confidence that he won't say problematic things in the future, because he doesn't seem to be aware of what the problems are. Science writing needs to be both clear and correct.

      Delete
    11. Well Larry, we are pretty close. I agree...The correct story is that about 90% of our genome is junk. It has no biological function. It is preserved because natural selection is not powerful enough to remove it. I also agree, there's no clear scientific evidence for the existence of supernatural beings. Along these lines, I think it is misguided to try and make scientific arguments for God.

      The place we disagree (I imagine) is my belief in God. In this I follow a long tradition of scientists, including Francis Collins. I don't think that science is intrinsically atheistic. I am in close agreement with Eugenie Scott (a non-theist) who writes,

      "Because creationists explain natural phenomena by saying "God performed a miracle," we tell them that they are not doing science. This is easy to understand. The flip side, though, is that if science is limited by methodological materialism because of our inability to control an omnipotent power's interference in nature, both "God did it" and "God didn't do it" fail as scientific statements. Properly understood, the principle of methodological materialism requires neutrality towards God; we cannot say, wearing our scientist hats, whether God does or does not act." https://ncse.com/religion/science-religion-methodology-humanism

      We might disagree on this, but our common ground should be enough. Right?

    12. And if you still have issues with his addendum, I understand that. I encourage you to contact Praveen directly and explain what would make things more clear. Maybe he will change what he wrote, or issue a new clarification. His email is easily found online. I can't say for sure how he will respond, but I was impressed by his response to me.

    13. If I were you, I'd encourage Praveen to read Larry's extensive list of past posts on the subject of junk DNA, including the history of the term. He would find them informative.

  8. Also, you might want to know that both Praveen and I are working with BioLogos to better help religious communities understand the overwhelming evidence for evolution. If we say things that are not opportune, please do contact us directly. The last thing we want to do is further confuse the public.

    1. The original article was confusing and the addendum is even worse. You have a lot of work to do if you really want to help religious communities understand science.

    2. I agree on many levels. First, I do not think most scientists are familiar with how creationists pounce on specific usage of words. Most scientists publishing inopportune statements about junk DNA think they are having a debate internal to science. They do not know how their statements are being interpreted outside of science. You do. I hope you actually contact authors directly in the future. Hopefully, with that information, they might adjust their language. That would do much to help all of our efforts.

      Beyond "junk" the bigger problem in religious communities is Doug Axe's new book and the Ark Encounter. I'm hoping in particular you can write some more about Axe.

      Peace.

    3. @ S Joshua Swamidass

      I'm curious as to what you think scientists can do to change the mind of someone who finds something like the Ark Encounter convincing?

    4. That is a really important question. I imagine you are accustomed to close-minded YECs who are impervious to evidence. For most of the public (the roughly 44% of Americans who are YEC), the situation is different. Most of them have, frankly, been lied to by people they trust. Moreover, they have been told that all scientists are irrationally bent on extinguishing their faith. How should we respond as scientists?

      I think we need to take very seriously our responsibility to engage the public with science. We need to do this respectfully, avoiding name-calling, etc. And we need to partner with religious leaders to do this, to bridge the gap in trust. Essentially, as scientists, we need to become effective "ambassadors" of science.

      Thankfully, this is starting to happen. I am an advisor for the www.scienceforseminaries.org program, which is partnering with seminaries to incorporate mainstream science into the curricula for religious leaders. Some of my colleagues just published a truly excellent book on Grand Canyon geology, targeted at YEC religious audiences. https://www.amazon.com/Grand-Canyon-Monument-Ancient-Earth/dp/0825444217 And one of my friends has an effective blog that does help YEC see science correctly. https://thenaturalhistorian.com/

      The reason why YEC is embraced, in the vast majority of cases, is ignorance, not stubbornness. We just need to find ways to build trust with those who do not know the science but are willing to learn. And trust will be just as important as the evidence.

    5. I wish you luck, but I'm not optimistic. The BioLogos project has very little success to show so far, for all its good intentions.

    6. Thanks for the well-wishes. We shall see. I have some hope, but only time will tell. At the very least, this is the good fight. It is worth trying.
