Creation and Evolution Blog

This blog has been superseded, and is only here for archive purposes. The latest blog posts, depending on topic, can be found at one of the blogs at the new location!

Discusses creation and evolution, mostly from a creation perspective.

Thursday, June 23, 2005

Evolution, Chance, and Design (T.O. CB940)

One of the more common criticisms that creationists level at evolutionists is that they believe all of this arose by chance. Evolutionists are always quick to say "natural selection is anything but chance." While I wholly agree with the evolutionists' statement, I want to take the time to point out why it doesn't answer the creationist's criticism at all.

This includes a rebuttal of Talk.Origins' response to CB940, though my inspiration for writing this came from elsewhere.

So instead of addressing them point by point, let me start with a more general discussion.

Is Evolution "Chance"?



In evolutionary theory, there are mechanisms which can be counted as evidence of evolution and mechanisms which cannot. For example, if a genome were designed to change itself in a specific way in response to a stimulus, or in any specific manner, this would not count as evidence for evolution, because the process for doing this is already there. It is "designed in," for lack of better terminology, whether you believe that design to be evolutive or creative. The evidence could easily be taken either way.

What evolutionists say, however, is that organisms often change in non-specific ways. There are many options for non-specific changes -- nearly an infinitude of them. In each organism, some number of non-specific changes occur; those changes which allow the organism to reproduce more are in fact reproduced more often, and those changes which inhibit the organism's reproduction are in fact reproduced less. If no changes were non-specific -- and really, if most of the larger changes weren't non-specific -- then it is no longer evolution that can be claimed, but a machine designed in a specific way.

So, you see, there are two sides to this -- a generative side and a selective side. The selective side is definitely non-random. HOWEVER, for something to count as evolution and not a "built-in," the generative aspect must be non-specific. When most people talk about randomness, they usually don't mean randomness in the strict mathematical sense, but simply that change occurs in a non-specific way or direction. As such, we can definitely say that while natural selection is surely non-random, the generative mechanism for evolution surely is -- otherwise it would have to be progressive creation rather than true evolution.

To see this in play see Mutations Are Random and Misconception: “Evolution means that life changed ‘by chance.’ ”. In fact, that second link has a great summation of the issue:


Random mutation is the ultimate source of genetic variation, however natural selection, the process by which some variants survive and others do not, is not random.


Can Random Change, When Selected Non-Randomly, Produce ALL the Diversity of Life?



Dawkins has attempted to explain how this is reasonable using computer simulations. The simulation he proposed in The Blind Watchmaker is about generating the phrase "methinks it is like a weasel" from random typing. He proposed typing in random characters, and then letting the result "breed" by duplicating itself with minor changes. Whatever phrase is closest to "methinks it is like a weasel" is kept, the rest are discarded, and the process starts over from there. However, this fails to address the main problem in two separate places: (1) the intermediate forms make no sense whatsoever. For life to evolve, the intermediate organisms have to make some sort of sense in order to propagate at all (this is another reason why I disbelieve evolution -- the idea that all intermediate forms are stable from molecules to man is just foolish). (2) the model of evolution is actually quite theistic, as there is a target that has to be met.
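For readers unfamiliar with it, the weasel procedure is simple enough to sketch in a few lines. This is my own illustrative Python reconstruction (the names and parameters are invented, not Dawkins's actual code): random copies of a parent phrase are bred, and only the copy closest to the fixed target survives each round.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(phrase):
    # Fitness is simply the number of characters matching the target.
    return sum(a == b for a, b in zip(phrase, TARGET))

def weasel(pop_size=100, mutation_rate=0.05, seed=0):
    rng = random.Random(seed)
    # Start from pure random typing.
    parent = "".join(rng.choice(ALPHABET) for _ in range(len(TARGET)))
    generations = 0
    while parent != TARGET:
        # Breed: copies of the parent with occasional random character changes.
        brood = [
            "".join(rng.choice(ALPHABET) if rng.random() < mutation_rate else c
                    for c in parent)
            for _ in range(pop_size)
        ]
        # Select: keep whichever phrase is closest to the target, discard the rest.
        parent = max(brood + [parent], key=score)
        generations += 1
    return generations

print("converged in", weasel(), "generations")
```

Note that the whole exercise depends on the target phrase being coded in up front, which is exactly the point made in (2) above.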

In Climbing Mount Improbable, Dawkins tries a different, and much better, approach. He has models of such things as spider webs. Each spider has a different set of techniques. Each one is slightly modified from generation to generation, and the ones that survive are the ones that catch the most flies. This is a much better model, showing natural selection in a much truer light. However, what Dawkins fails to realize, and what others who say that "evolution is not random" also fail to realize, is that Dawkins's model has pre-coded the available types of web-spinning, and that is what is being varied. By doing this, he only confirms what creationists already believe -- that there is quite a lot of variety in the world, and species can mix it up and produce different offspring. Natural selection only adds the not-quite-as-ingenious-as-they-think observation that "dead things don't reproduce."

So how can I say that Dawkins's model is good, but still think that evolution is wrong?

Why Dawkins's Changes Are Inadequate for Proving Evolution as the Primary Cause of Biodiversity



First of all, let me point out that creationists don't believe that all of evolution is wrong. For more information on that point, see another article of mine. What we believe is that new structures are not produced from nothing without the hand of an intelligent agent. Dawkins's simulation used only a shuffling of pre-existing parts, not the creation of new parts. In fact, that's all a program can do -- rearrange existing parts. So what would be required of Dawkins's program to show the kind of evolution that creationists don't believe is possible?

It would require the program to show something new happening -- something not contained within the original pattern of the program. For example, a spider deciding, whether after one or a million gradual generations, "you know what, I think instead of using this web to catch flies, I'm going to do something else entirely." Or likewise, taking the nearest non-spider relative of spiders and, without precoding the special spider information in the program, using any natural selection method you want, showing how a spider can be produced from something else. Show the generation of the web material itself and the idea of forming a web without any reference to such activities in the program. Also remember that evolution has to explain the existence of so many genetic codal variations (not the sequence, but the base->protein coding itself), and how such a change in code could occur without utterly destroying any existing organism.

Likewise, for the articles I've investigated in Talk.Origins' post, the same problems apply. For example, in the robotics one, they have the program select the best, most mobile arm (selected for movement) from a collection of arms randomly assembled from parts. That's all well and good, but evolution claims that the parts themselves were generated by the same process, without any reference to arms or movement in the code. Not just a shuffling of existing parts, but the creation of new parts for something not even intended in the code. Imagine, for example, if the computer selection for movement had discovered a new material when it was not even programmed to look for new materials, had then selected the best one and even known the appropriate way to place the new material within a newly created part it was not programmed to build, and had perhaps even found something that performed the task at hand better than the specific movement mechanism it was trying to achieve.

Under those circumstances, I would start taking notice -- but not at this simple shuffling of existing parts, which both creationists and evolutionists have agreed upon since time immemorial. Remember, breeding was done for thousands of years by creationists, and it was never thought to be inconsistent, because it was not variation, or even beneficial variation, that creationists objected to, but getting something from nothing.

Examples of How This Does and Doesn't Work



If this sort of mechanism really worked, for example, it would make my job as a computer programmer a whole lot easier. To enhance a program, I could simply write another program to make random changes to it, and then test each one to see if it was a better program. I could remove the need to ever be creative! In fact, it could probably make changes I never thought of. Of course, programs don't work like this. The only thing you'll get in this case is a broken program. Now, it is possible to write a program that makes changes to itself -- both randomly and in response to stimuli. However, the mechanisms for change and the categories by which it changes would be pre-coded in the program -- the result of design. Perhaps the exact combination would be a chance event, but it would be within the context of a designed mechanism, not outside of it as evolution claims.
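As a quick illustration of why random edits wreck ordinary programs, here is a small experiment one can actually run (Python; the toy function and the mutation scheme are my own illustration): apply single-character mutations to a working program's source and count how many mutants still function.

```python
import random

# A trivially small but working program.
SOURCE = (
    "def add(a, b):\n"
    "    return a + b\n"
)

def mutate(src, rng):
    # Replace one randomly chosen character with a random printable character.
    i = rng.randrange(len(src))
    return src[:i] + chr(rng.randrange(32, 127)) + src[i + 1:]

def still_works(src):
    # A mutant "survives" only if it still compiles AND still adds correctly.
    env = {}
    try:
        exec(src, env)
        return env["add"](2, 3) == 5
    except Exception:
        return False

rng = random.Random(0)
mutants = [mutate(SOURCE, rng) for _ in range(1000)]
survivors = sum(still_works(m) for m in mutants)
print(survivors, "of 1000 single-character mutants still work")
```

The overwhelming majority of mutants fail to even compile, and nearly all of the rare survivors are changes that happen to leave the text effectively unchanged (whitespace or identical replacements).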

So, am I claiming that all randomized mutations will kill the organism? No. In fact, it is the result of good design that our bodies are able to compensate for faults. However, to view the faults as a fundamental part of how the body came about would be foolish.

What About Beneficial Mutations?



So are all random mutations negative? I would say no. Many say, "well, that shows it -- evolution is at least possible, because you could always just imagine a sequence of beneficial mutations to lead you from A to B". However, that greatly exaggerates the character of such changes. First of all, as I mentioned earlier, it would require a sequence of stable intermediates. That such intermediates exist is only theoretical, and I think highly doubtful. However, let's look more precisely at what beneficial mutations look like.

For example, consider a temporary beneficial modification: if your backup system were disabled, that would save electrical energy and manpower for other things. It (a) wouldn't really be helpful in the long run, and (b) wouldn't go any further in explaining how the backup system arose, but it could be classified as a beneficial mutation. No matter how many of those stack up, you still won't be able to convert an image-processing program into a word processor through random changes, no matter what your selection process is (unless you use a non-evolutionary one, as with "methinks it is like a weasel").

Information Theory Requires an Intelligent Source for Information



This is always the problem that evolutionists will run into with DNA, because it is a code. As Arthur Wilder-Smith pointed out, the kind of diversity that exists in the world is the result of novel information implanted into life. Life + Time + Energy will not get you the kind of changes required for the types of fundamental changes proposed in evolutionary theory, with any evolutionary mechanism for selection. Time + Energy + Life + Information will, but true information is always imposed by an intelligent party. Dembski calls this the law of conservation of information.

Can code be rearranged, perhaps at random? Certainly, but notice that the rearrangement mechanism rearranges in such a way as to make sense to the organism -- in a designed way, according to designed, pre-built categories. Now, these mechanisms can lead to a vast amount of variation. The number of possible variations in offspring from just two parents, with no genomic modifications at all, is more than the number of electrons believed to be in the universe, with every member still being fundamentally human. Add in rearrangements, and you get even more. However, all of this variation occurs within the categories and specifications of the code. The genomic modularity hypothesis (which is enjoying more and more evidence all the time) says that the genome can even rearrange itself according to predefined specifications and categories. However, when the code undergoes a random mutation, the system becomes less stable. Like other codal systems, it cannot withstand too many unplanned changes before becoming unstable.

The Creationist Hypothesis -- Genomic Modularity



I've mentioned genomic modularity before on this site, but since we're talking about genomic change here, I think it's worth expanding upon. Genomic modularity is the creationist hypothesis that genomes are essentially modular in different areas: some genes are built for changing in specific ways in response to specific events. There is more and more evidence for this, as we are finding hotspots for change in many genomes, and also finding that some organisms can invoke as-yet-not-understood changes in response to environmental stress, like the SOS mechanism of bacteria.

Now, I'm sure that evolutionists will say that evolution can create such change mechanisms itself. Hopefully the previous argument shows that this just doesn't make sense. However, granting that, let's figure out whether there is any sort of adaptation that evolutionists might agree evolution cannot produce. I hate to be presumptuous, but I would say that evolution would not be compatible with full-fledged mechanisms for handling an environment that the organism had never experienced. For example, if there were a full-fledged mechanism within an organism for handling life in the vacuum of space, we would find it odd for that to have evolved in a population that had never had any contact with space. This would clearly indicate that there was a pre-coded design for doing so, and such a designer.

Some links about genomic modularity from www.nwcreation.net:



Does the Nylon Bug Prove that Evolution Works?



This has a lot of relevance to things such as the nylon bug, which evolutionists have pointed to as an example of evolution at work. The logic is essentially this: (a) nylon is new, (b) bacteria can't possibly have always eaten nylon, (c) therefore these nylon-eating bacteria must be the result of evolution, and (d) we even know which genes mutated and in what ways, and the changes even include a frame-shift.

That is all well and good, but without further experiment we cannot know whether this was the result of a designed mechanism or truly evolution at work. For example, it has been pointed out that there were many transposons at work here. Transposons form an integral part of the genomic modularity hypothesis, essentially as "work units" for change. Likewise, the idea of a frame-shift is not unusual for bacteria, as many bacteria read multiple enzymes from a single strand of DNA through frame-shifting. So the creationist hypothesis would be that the bacterium was built to modify its digestive proteins based on the abundance of available material. When nylon abounds but no normal food source exists, the bacterium alters its genome to eat whatever is available.
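For readers unfamiliar with frame-shifting, the mechanics are easy to demonstrate. Here is a toy Python sketch (the sequence and the hand-picked codon subset are invented for illustration, not data from the nylon bug): inserting a single base changes how every downstream triplet is read.

```python
# A hand-picked codon subset; enough for the illustration (not real gene data).
CODONS = {
    "ATG": "Met", "GCT": "Ala", "GGT": "Gly", "TCT": "Ser",
    "CGC": "Arg", "TGG": "Trp", "TTC": "Phe",
}

def translate(dna):
    # Read the strand three bases at a time, starting from position 0.
    return [CODONS.get(dna[i:i + 3], "?") for i in range(0, len(dna) - 2, 3)]

original = "ATGGCTGGTTCT"
shifted = "ATG" + "C" + original[3:]  # insert one base after the start codon

print(translate(original))  # ['Met', 'Ala', 'Gly', 'Ser']
print(translate(shifted))   # ['Met', 'Arg', 'Trp', 'Phe']
```

The same physical strand, read from a different offset, yields an entirely different protein -- which is how some bacteria can pack multiple readings into one sequence.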

Some facts in support of the creationist hypothesis:


  • The change happened quickly -- it is not widely known, but creationists usually propose faster mechanisms for biodiversity than evolutionists -- it's just that, as this essay points out, there are limitations and boundaries to that diversity.

  • The change utilized known transposons, which are one of the keys to genomic modularity.

  • The change affected two genes in concert. Evolutionary theory would consider this highly unlikely in such a short timeframe, but it makes perfect sense as a predefined mechanism for change in response to food supply.

  • The new genes were significantly changed -- not the gradual kind of change that evolution suggests. Remember, evolution agrees with creationism that completely random generation of genes is ludicrous; therefore, such a large change as would be needed to produce the enzyme in so short a time, without beneficial intermediates, is not indicative of evolution. Evolutionists claim that this was a simple frame-shift mutation, but in fact there were many more changes than that.



Two interesting links on this are:



So how would one test whether the creationist or evolutionist hypothesis is correct? By taking the non-modified form of the bacterium and adding it to nylon in several separated populations. If each population undergoes essentially the same change, then this is obviously a directed mechanism. If only some populations survive, and each one that does survives in a completely different way, utilizing completely different adaptations on completely different genes, then, while it does not completely invalidate the creationist hypothesis, it lends much more credence to the evolutionist one.

Also of interest is one of the concluding paragraphs of the AiG article mentioned above:


P. aeruginosa was first named by Schroeter in 1872. It still has the same features that identify it as such. So, in spite of being so ubiquitous, so prolific and so rapidly adaptable, this bacterium has not evolved into a different type of bacterium. Note that the number of bacterial generations possible in over 130 years is huge—equivalent to tens of millions of years of human generations, encompassing the origin of the putative common ancestor of ape and man, according to the evolutionary story, indeed perhaps even all primates. And yet the bacterium shows no evidence of directional change—stasis rules, not progressive evolution. This alone should cast doubt on the evolutionary paradigm. Flavobacterium was first named in 1889 and it likewise still has the same characteristics as originally described.


So, we have a fast-mutating species which, after millions of generations of reproduction, still retains the basic properties as originally described and is still identifiable as itself. You may disagree, but I find this strong evidence for the idea presented in this essay -- that information cannot arise from nothing, but can recombine in specific, preprogrammed ways for specific purposes, while remaining bound to those mechanisms and categories as they were originally designed.

Clearly the mechanisms of genomic change are long overdue for study, as evolutionary assumptions (namely, presuming the lack of predetermined adaptation mechanisms) have plagued the field. However, for the short time that studying them has been in vogue, the creationist hypothesis seems to be bearing fruit.

Comments:
A very interesting article. I observe re information that natural selection operates by either re-expressing, isolating (say, ecologically or geographically) or changing some information (say, via mutation). In the first case, information is conserved. In either of the latter two cases, information is lost. There is no apparent way to gain information, so extrapolating natural selection to assume it does so appears to be wishful thinking.
 
This is incorrect, mutations can involve very large duplication of existing genes. Once those genes are duplicated, they are free to mutate on their own. Furthermore, given classic information theory, any duplication increases information content.
Educated readers will know, however, that DNA is nothing at all like computer code - comparisons of DNA to software are bound to be empty of meaning.
 
"mutations can involve very large duplication of existing genes. Once those genes are duplicated, they are free to mutate on their own."

This is nice in theory, but it's just not true. The fact is that duplicate genes are still functional, and when they suffer from mutations they are removed from the gene pool by purifying selection.

Gene duplication does not merely result in a second, redundant copy. In fact, there is some evidence that gene duplication is part of a Lamarckian-type genetic mechanism.

Your statement relies on two premises: (a) that duplicated genes are redundant -- this is false, and (b) that duplicated genes therefore don't have the same selective pressures and are therefore free to mutate without restraint -- also false. For more information on this, see:

http://www.pnas.org/cgi/content/abstract/0503922102v1

(despite its impressive title, the contents agree with the above)

In addition, let's pretend for a moment that your assumptions are true. If natural selection is not acting on a gene, that means that the _only_ things affecting it are mutations, which are more or less random. Therefore, you have an even larger problem than before, because you really do have to mutate through the entire search space to find a useful gene. For information on why this is highly improbable, see this article from Protein Science on the ability to generate novel proteins from gene duplication:

http://www.proteinscience.org/cgi/content/abstract/ps.04802904v1

So, not only is the premise incorrect, but the conclusion is unlikely even given the premises.

"Furthermore, given classic information theory, any duplication increases information content."

That's true according to Shannon information theory, but that only measures information statistically, and ignores semantics. So, while that is useful, what is needed is an increase of semantic information in non-designed ways.
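To make the distinction concrete, here is a small Python sketch (the sequence is invented) of the statistical side: duplicating a string grows the raw symbol count, but the per-symbol Shannon entropy is unchanged, and a general-purpose compressor treats the second copy as almost pure redundancy.

```python
import math
import zlib
from collections import Counter

def per_symbol_entropy(s):
    # Shannon entropy in bits per symbol, from observed frequencies.
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

gene = "ATGGCTAGCTAGGATCCGATCGGATCCTAG" * 10  # invented sequence
doubled = gene + gene  # a perfect duplication

# The copy introduces no new statistics: per-symbol entropy is identical.
print(per_symbol_entropy(gene), per_symbol_entropy(doubled))

# And a compressor sees the second copy as almost pure redundancy.
print(len(zlib.compress(gene.encode())), len(zlib.compress(doubled.encode())))
```

In other words, the duplicate is "more information" only in the bookkeeping sense; nothing new has been said.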

Here's an AiG article about the semantic extension to Shannon Information theory:

http://www.answersingenesis.org/tj/v10/i2/information.asp

For some quantitative looks at the semantic information content of genes, see:

http://www.trueorigin.org/spetner1.asp

"Educated readers will know, however, that DNA is nothing at all like computer code - comparisons of DNA to software are bound to be empty of meaning."

Like all analogies, there are flaws, but I think to say that it is "nothing at all" similar would be an overstatement, as they basically deal with the same types of constraints, though the coding for function is quite different.
 
You cannot say that gene duplication always gets removed by further selection or whatever was your argument. It might be what happens most of the time, but that doesn't mean it happens always.

You also seem to ignore the scale of these random-walk searches. Yes, it is glacially slow, but it isn't in a hurry either.

If you just want to believe in god, go ahead. But twisting facts to your liking and planting lies upon lies is immoral.
 
You cannot say that gene duplication always gets removed by further selection or whatever was your argument

Perhaps you should actually look at what my argument was. It was that duplicated genes are still subject to natural selection. The gene duplication theory hinges on duplicated genes being functionless, and thus being able to freely walk the search space without being selected upon until they are beneficial, and then being selected favorably.

What is found instead is that gene duplications are very functional, and that mutations in duplicated genes are still subject to selection, and therefore are not free to walk the search space as hypothesized in the gene-duplication model.

And, as I also pointed out, it wouldn't matter if it could, because the chances of finding a useful mutation, even given freedom from selection, are minutely small. Plus, you have the cost of selection, which limits the number of mutational events (of any type, not just base pair) between man and ape's most recent ancestor to about 1,667, which is far too few. See Haldane's Dilemma
 
Hello,

I must take issue with this comment:

"If this sort of mechanism really worked, for example, it would make my job as a computer programmer a whole lot easier. To enhance a program, I could simply write another program to make random changes to it, and then test each one to see if it was a better program. I could remove the need to ever be creative! In fact, it could probably make changes I never thought of. Of course, programs don't work like this. The only thing you'll get in this case is a broken program."

Actually, there are programs that do work exactly like this, and produce highly complex, functional and efficient programs as an end product. They are usually called genetic algorithms, and they explicitly mimic evolution as a problem-solving technique: beginning with a population of daughter programs, they randomly mutate each one, evaluate the effects of these mutations to see if they are beneficial, then allow the beneficially mutated programs to "reproduce" and join the next generation. Genetic algorithms are already being used in industry to schedule flights at airports and assembly lines in factories, to evolve new antimicrobial compounds for use in cleansers, to increase the efficiency of engines and turbines, and more. There have even been instances where genetic algorithms, by making random changes to a program and then filtering those changes through a non-random process of selection, have produced results that outperform programs explicitly written by human beings to achieve the same goal. See:

http://www.talkorigins.org/faqs/genalg/genalg.html#examples

Your statement that "Of course, programs don't work like this" appears to be in conflict with the facts.
 
ebonmuse -

The problem is that genetic algorithms do not work like evolutionists claim that animal genetics work.

Specifically, they do not allow random changes to programs. Instead, they allow random changes to a small dataset. What makes genetic algorithms (and any other non-deterministic algorithm, for that matter) work is that there is a very large base of the program that is immutable. The "randomized changes" are only randomized within a specific subset of the program or data, and used to accomplish a specific, precoded purpose.

If genetic algorithms worked like evolutionists think that organism genetics works, they would break themselves far before they produced anything useful.

So, while the changes are non-deterministic, to call them random would be incorrect, as the changes can only occur within pre-coded bounds, and the program is in fact expecting there to be changes within certain parameters.
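To make that distinction concrete, here is a toy genetic algorithm in Python (all names and parameters are my own illustration). Notice that the only thing ever mutated is the genome data; the evolver loop and the fitness function are fixed, hand-written code with a precoded goal.

```python
import random

GENOME_LEN = 20

def fitness(genome):
    # Fixed, precoded goal: count of 1-bits (a stand-in for any objective).
    return sum(genome)

def mutate(genome, rng, rate=0.05):
    # Randomness is confined to flipping bits inside the genome list;
    # the evolver and fitness code themselves are never altered.
    return [b ^ 1 if rng.random() < rate else b for b in genome]

def evolve(generations=200, pop_size=30, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # selection (non-random)
        pop = parents + [mutate(rng.choice(parents), rng) for _ in parents]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "of", GENOME_LEN)
```

Randomize the evolver or the fitness function themselves and the program simply stops working -- which is the point.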

Remember -- genetic algorithms still take programmers to write. They have to be carefully crafted to work at all!

Anyway, I've written about this a little more extensively here:

http://crevobits.blogspot.com/2005/08/genetic-algorithms.html
 
"The problem is that genetic algorithms do not work like evolutionists claim that animal genetics work."

I don't agree. Allow me to address your comments one at a time.

"Specifically, they do not allow random changes to programs. Instead, they allow random changes to a small dataset. What makes genetic algorithms (and any other non-deterministic algorithm, for that matter) work is that there is a very large base of the program that is immutable. The 'randomized changes' are only randomized within a specific subset of the program or data, and used to accomplish a specific, precoded purpose."

I don't think this conveys an accurate impression of how a genetic algorithm works. A genetic algorithm, in general terms, has three components: a pool of candidate programs (the "organisms"); a piece of code that causes those candidate programs to reproduce, mutate, and swap segments of code (I don't know of a standardized term for this, but let's call it the "evolver"); and a piece of code that evaluates candidate programs to determine how effective they are at solving the specified task (the "fitness function"). I presume that the first of those three components, the population of organisms, is what you refer to as the "small dataset", and the latter two collectively make up what you describe as the "very large base" of the program.

First of all, I take issue with your characterization of the relative sizes of these components. Neither the fitness function nor the evolver need be very large or complex - very often they can be quite simple indeed. Conversely, the population pool need not be small; depending on the parameters of the simulation, it can contain thousands or millions of members, and the complexity of any given individual depends on the complexity of the task it represents a solution to. It is entirely possible that the amount of code that mutates and evolves in the course of a GA far exceeds the amount of code that does not.

Secondly, I must object to your implicit claim that the immutability of the other two components is an issue. Of course the evolver and the fitness function are not subject to change. That is just what we would expect, and I would argue that that is analogous to organic evolution. Together, these two components set the rules for the simulation; they are not themselves part of the population, but instead create the "world" in which the population exists. In this respect, these components are analogous to the laws of physics, which define the parameters of an organism's environment, and which do not change.

So when you say this:

"...the changes can only occur within pre-coded bounds, and the program is in fact expecting there to be changes within certain parameters."

- that is the same as evolution in the wild. In organic evolution, too, changes can only occur within certain boundaries - those boundaries being within the genomes of living things. Mutations cannot occur independently of any living thing, nor can they change the physical laws defining the world in which those living things exist. Similarly, the "program" - life, the species, the genome, what have you - is likewise "expecting" change to occur, in the sense that genomes are structured so as to make it possible for evolution to happen.

Finally, you said:

"Remember -- genetic algorithms still take programmers to write. They have to be carefully crafted to work at all!"

I hope you realize that that is not an argument against evolution. At best, it is an argument for theistic evolution. And please do note that genetic algorithms can produce outcomes incorporating information that their programmers did not have.

In any case, the fact still remains that genetic algorithms, by combining random changes with a non-random process of selection, very often produce impressively complex, tightly efficient, and highly functional results. That is, of course, the essence of how the evolutionary process works in living things as well. Genetic algorithms are not an exact simulation of how evolution works in the wild, and it would be absurd to expect them to be; but they do show clearly that the evolutionary process constantly derided by creationists as incapable of producing true novelty, in reality, can and does work.
 
"I presume that the first of those three components, the population of organisms, is what you refer to as the "small dataset", and the latter two collectively make up what you describe as the "very large base" of the program."

Not really. Perhaps if you were only referring to the genetic algorithm portion of a program that might be true to some extent, but then you wouldn't have a very useful program. Did you read the link I provided?

There are many problems with genetic algorithms within the context of modelling life.

First, the functions for reproduction are not contained within the genetic algorithm at all. In an organism, pretty much everything is in the genome -- instructions for the organism to grow, develop, copy itself, etc. In genetic algorithms, none of these portions are subject to mutational effects, but in evolutionary theory such mutations happen with the same regularity as everywhere else. Not only must a mutational change provide instructions for the adult, but it also has to be comprehensible to the zygote and the intermediate stages. The genome also contains all of the instructions for the machinery itself.

Second, genetic algorithms usually cannot go extinct. You get the full freedom of the search space even when nothing makes sense. Biologically, this would lead to extinction: if state Y required passing through state X, but state X was an organismal dead-end, the biological species would go extinct while the genetic algorithm would keep on chugging.

Third, the expressivity of the coding properties in genetic algorithms is (a) very low and (b) has an inordinately large set of stable possibilities. In computer programming, expressivity relates directly to how chaotic the underlying programmed system is. For example, of all the basic cellular automata studied by Stephen Wolfram, only one -- Rule 110 -- was Turing-equivalent, and Rule 110 is also the most chaotic of them. Genetic algorithm programmers usually use non-chaotic "genetic" elements, which means that their programs will have both (a) an unusually high proportion of comprehensible sequences and (b) an unusually limited set of possibilities. Life is even more chaotic than Turing machines -- IIRC only about 1 in 10^11 proteins within a search space are reactive AT ALL, much less in a functionally useful way (please check my numbers -- I forget where this was referenced and may be off).

Finally, organisms have multiple, relatively independent subsystems, while genetic algorithms do not.
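To make the first objection concrete, here is a minimal genetic-algorithm sketch (my own illustrative toy, with made-up parameters -- not any particular published GA). Notice that the reproduction, mutation, and selection machinery all live in the program itself, outside the "genome" (the bitstring), and so are never themselves subject to mutation:

```python
import random

GENOME_LEN = 20
POP_SIZE = 30
MUTATION_RATE = 0.05  # per-bit chance of flipping

def fitness(genome):
    # Toy fitness: count the 1s. Real organisms have no such
    # externally supplied scoring function.
    return sum(genome)

def mutate(genome):
    # Mutation touches only the data, never the machinery below.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

def step(population):
    # Truncation selection: keep the fitter half unchanged,
    # refill the population with mutated copies of survivors.
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[:POP_SIZE // 2]
    return survivors + [mutate(random.choice(survivors))
                        for _ in range(POP_SIZE - len(survivors))]

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP_SIZE)]
for _ in range(100):
    pop = step(pop)
print(max(fitness(g) for g in pop))
```

The `fitness` and `step` functions are exactly the parts that, in a real organism, would themselves have to be encoded in the genome and exposed to mutation.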

Anyway, I have to stop now, or I'll miss my bus. I'll try to finish up later, but I think most of my responses are simply going to be extensions of the above or a reference to the link I gave you earlier:

http://crevobits.blogspot.com/2005/08/genetic-algorithms.html
 
Wow... I'm still digesting all of this information. I'm very impressed by how you've collected all of this.

I'm not very "scientific" so this is almost too much for my finite mind to handle... =) But wow, this is very cool. Thanks!!!
 
You're confusing the example. You said, "Whatever phrase is closest to "methinks it is like a weasel" is kept and the rest are discarded, and it starts over from there. However, this does not get at the main problem in two separate places: (1) the intermediate forms make no sense whatsoever."

Actually, the intermediate forms do make sense as defined by the rules of the example. It's all about survival, my friend. You're operating with the assumption that evolution is working toward something (in this case an understandable phrase). This is a common misconception.

So, "Mathunks it as bike a veesel" makes more sense than "jhdfasjdgasjdgshd" because its construction allows it to survive and reproduce. It is more "fit" than a random jumble of letters. And as long as it is more fit -- able to survive and reproduce -- its existence makes "sense" (i.e., its construction is advantageous).
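For anyone who hasn't seen it, the weasel example being argued over can be sketched in a few lines (my own illustrative parameters, not Dawkins's exact ones). The fitness function simply counts characters matching the target, so an intermediate like "Mathunks it as bike a veesel" outscores random gibberish purely by the rules of the example:

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "
OFFSPRING = 100       # copies made each generation
MUTATION_RATE = 0.05  # per-character chance of a random replacement

def fitness(candidate):
    # "Survival" here just means closeness to the target phrase.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent):
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else ch
        for ch in parent)

random.seed(1)
parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 0
while parent != TARGET and generation < 10000:
    generation += 1
    # Keep the parent in the pool so fitness never decreases.
    parent = max([parent] + [mutate(parent) for _ in range(OFFSPRING)],
                 key=fitness)
print(generation, parent)
```

Whether this fitness function is a fair stand-in for biological survival is, of course, exactly what the two sides here disagree about.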
 
Excellent blog article. Found your blog through CRSnet.

The origin of information is a brick wall that the evolutionists keep crashing into.
 
Plus, you have the cost of selection, which limits the number of mutational events (of any type, not just base pair) between man and ape's most recent ancestor as being about 1,667, which is far too few.

Please explain why, if that number is correct, it is 'far too few'.
 
The number of mutational events that appear to have occurred between chimps and humans is in the millions. If you halve that (since we are going from a common ancestor), you are still _far_ above 1,667.
 
The number of mutational events that appear to have occurred between chimps and humans is in the millions. If you halve that (since we are going from a common ancestor), you are still _far_ above 1,667.

You are completely unaware of what "Haldane's dilemma" actually indicates, that much is obvious.
If you are going to refer to an issue like that, it seems reasonable that you should at least have an understanding of the basics.
Haldane's model dealt with fixed beneficial substitutions, not all substitutions.
As an example of how out of the ballpark your take on Haldane's dilemma is, consider this:

Any two humans differ by millions of substitutions. According to your take, no two humans can be related...
 
"Haldane's model dealt with fixed beneficial substitutions, not all substitutions."

That is correct, but do you honestly think that out of the millions of differences between chimps and humans, less than 1% of them are neutral?

I'm also pretty sure that the differences I am quoting from are fixed in the population, but I will have to double-check.

For more information, see:

http://www.nwcreation.net/wiki/index.php?title=CB121
 
That is correct, but do you honestly think that out of the millions of differences between chimps and humans, less than 1% of them are neutral?


Um, no - I think that most of them are neutral. Please slow down and re-think your statement.


I'm also pretty sure that the differences I am quoting from are fixed in the population, but I will have to double-check.

No, they are not. In fact, they cannot be, for as I mentioned, any two humans differ by several million nucleotides; therefore, it is impossible to determine how many substitutions, especially neutral ones, are in fact fixed.


For more information, see:

http://www.nwcreation.net/wiki/index.php?title=CB121


You may have noticed that it is on a creationist site. ReMine, the electrical engineer, has been misrepresenting the issue for years.

For example, he claims that 1667 fixed beneficial mutations are too few. But he does not know what traits the ancestor had! Therefore, simple logic dictates that he cannot know how many are too few. I know this because I have asked him on many occasions and he simply ignores the question.

Further, in his terrible book, he implies that it would take more than 500,000 such changes if evolution were true. This is, of course, quite stupid - the human genome only has 25,000-30,000 genes!

Further still, ReMine acknowledges that neutral mutations can also contribute to phenotypic differences and that they accumulate at a higher rate than do beneficial mutations, not to mention single nucleotide polymorphisms.
 
"Um, no - I think that most of them are neutral. Please slow down and re-think your statement."

There was a missing "not" in that statement. Thanks for pointing it out!

"In fact, they cannot be, for as I mentioned, any two humans differ by some several million nucleotides, therefore, it is impossible to determine how many substitutions, especially neutral ones, are in fact fixed."

No, it is not. It would be based on sequencing a wide range of humans: if the change exists in a wide sample of humans, it would be shown to be fixed. Most of the differences between chimps and humans are fixed. There are 35 million base substitution differences, as well as 5 million insertion/deletion events (totalling about 40 million nucleotides).

"You may have noticed that it is on a creationist site."

Yes. In fact, I wrote it. So what? Are creationists wrong by definition?

"ReMine, the electrical engineer, has been misrepresenting the issue for years."

Really? I guess that's why the editors of peer-reviewed journals have been telling him they won't publish because the issue is already well-known?

"For example, he claims that 1667 fixed beneficial mutations are too few. But he does not know what traits the ancestor had!"

This is silly, because we know the number of mutational events between chimps and humans. Likewise, I would imagine that there would need to be at least that many changes just for the move to obligate bipedalism.

"Further, in his terrible book, he implies that it would take more than 500,000 such changes if evolution were true. This is, of course, quite stupid - the human genome only has 25,000-30,000 genes!"

Do you think genomes change an entire gene at a time? You should read Behe's peer-reviewed article about how long it takes just to change three _amino acids_. In addition, the regulatory elements are not included in gene counts, and are just as important if not more so. In addition, a better count would be of proteins, not genes, which then goes into the hundreds of thousands.

"ReMine acknowledges that neutral mutations can also contribute to phenotypic differences and that they accumulate at a higher rate than do beneficial mutations, not to mention single nucleotide polymorphisms."

This is true, but the rate of change would simply have to be astronomical, unlike any changes we've ever seen.

The fact is, you are basically limited to 1,667 beneficial mutations. That's barely even enough to build a full gene, much less account for the beneficial differences between chimp and man.
 
"I guess that's why the editors of peer-reviewed journals have been telling him they won't publish because the issue is already well-known?"

This is quite incorrect. Editors told him they wouldn't publish on another issue because it was on material that had been published 25 years ago. It wasn't about Haldane's Dilemma.

"The fact is, you are basically limited to 1,667 beneficial mutations. That's barely even enough to build a full gene, much less account for the beneficial differences between chimp and man."

Also quite incorrect. Since these are beneficial mutations, they are all gene-modifying mutations. Walter has also acknowledged that in the same time, you could get 25,000 neutral expressed mutations. So, how many base pairs does this come out to be? I have gone a few rounds with Walter on this, using his own assumptions. Here is what Walter wrote:

"Evolutionists do not get to assign the 1,667 mutations any way they please, say, as "regulatory genes" or as "mutations with a large effect". Nature does not work that way. Rather, the preponderance of mutations will be of the ordinary kind, with a small effect. Let me illustrate the concept with crude figures: about 1500 mutations with an ordinary small effect, 100 more for re-positioning genes on chromosomes (inversions and so forth), 60 as gene duplications, and 7 mutations to regulatory genes that have a larger effect – for a total of 1,667."

The Human Genome Project estimates that the average gene consists of 3000 base pairs. Just taking Mr. ReMine’s assumed 60 gene duplications, this would provide for 180,000 new base pairs not present in the common ancestor, and this is just from a small fraction of the 1,667 beneficial mutations. It is obvious that 1,667 beneficial mutations and 25,000 neutral mutations could add up to a great deal of base pair differences.
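That arithmetic is easy to verify with the figures quoted above (the Human Genome Project's ~3,000 base pairs for an average gene, and ReMine's 60 assumed gene duplications):

```python
# Checking the figure above: 60 gene duplications at ~3,000 bp
# per average gene (figures as quoted in the discussion).
AVG_GENE_BP = 3000
GENE_DUPLICATIONS = 60

new_bp = GENE_DUPLICATIONS * AVG_GENE_BP
print(new_bp)  # 180000 base pairs, from the duplications alone
```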

The funny thing was that when I pointed this out to ReMine, he complained that I had misrepresented him. In dealing with him over the years, I have learned that anything he did not explicitly say - even if it is a given from his argument - amounts to misrepresentation if he doesn't like the outcome.

The Haldane argument is bunk, plain and simple. I have personally challenged ReMine to defend it, and all he does is dodge and weave. See: http://www.baptistboard.com/cgi-bin/ultimatebb.cgi/topic/36/111/4.html?
 
http://all-too-common-dissent.blogspot.com/2006/01/haldanes-dilemma-another-creationist.html
 
 