The Original Poster asks
Why aren't the IDists jumping on this thread to defend the EF?
So what does Sal have to say?
Maybe we have more fun things to do with our time..... I'm willing to respond to questions, but since we are talking about the EF, it would be good to know how many of you have Bill Dembski's book, Design Inference. If any of you are arguing against a book you haven't read, or don't have current access to, I find it difficult to justify spending time debating book reviews by people completely unfamiliar with the literature. Given some of the anti-ID comments I've read so far, it appears:

1. the book hasn't been read by those criticizing it
2. if it has been read, it seems there has been a miscomprehension

Independent of whether the EF is legitimate, it can't be discussed fairly until it is represented accurately. So far, I've seen little evidence that it's being accurately represented, much less legitimately criticized.
Salvador Cordova
OK. Most of the time I'll just post what Sal has to say; after all, he later claims there is an example of the application of the EF in there. In Sal's posts, not anybody else's.
Sal hits back at his critics, whose complaints have so far basically amounted to "wtf?". To come along and say ID is more like some sort of new-wave therapy that only works if you give it a chance is pathetic. The claims for the EF are quite simple. And they should be demonstrable.
The fact that some, like 2ndclass, demand things like this indicates that he is unwilling to give Bill's work a charitable reading, and that he ignores the facts I have repeatedly explained to him and which he doesn't accept.

Not looking good for Sal's post number 2. He then posts a quote from his Lord and Master Dembski:
The Explanatory Filter faithfully represents our ordinary practice of sorting through things we alternately attribute to law, chance, or design. In particular, the filter describes how copyright and patent offices identify theft of intellectual property…. Entire industries would be dead in the water without the Explanatory Filter. Much is riding on it. Using the filter, our courts have sent people to the electric chair.
Bill Dembski

It's his get-out-of-jail-free card. The first line is key. We'll see why later. Sal then posts
The Design Inference
The point of this passage is to show Bill Dembski is arguing that in the ordinary course of human affairs an Explanatory Filter is applied.

Predictably this is met by the crowd with "boo, cheat, we wuz conned". After all, an example was plainly requested.
Now, the typical Darwinist will argue there is not one court case that references Dembski's work. Such an argument is a deliberate misrepresentation by Darwinists of Dembski's writings. The sense of what Dembski is saying is that there are ordinary practices in detecting design every day. The label he applies is the Explanatory Filter; he is not claiming his work was used by the courts.
The math and the book are simply a formalization of this ordinary practice. It does not automatically imply an object is intelligently designed if it passes the EF. Passing the EF means that by ordinary practice we would label a thing as having the property of being designed. We say biological forms are designed in such and such a way. Even in the common language of Darwinists, the word "design" is hard to avoid. This is natural because they implicitly use the EF. Darwinists will then argue that such "designed" objects can be shown to be the product of mindlessly arranged physical processes.
When mindless Darwinists try to solve the OOL problem, they are trying to find a way an object, namely life, which has features that pass the EF (like its computer architecture), could arise via mindless means. They are trying to solve the problem of how mindless physical forces can create computers, and computers pass the EF.
What the formalized EF shows is that Darwinism is a square-circle type of theory. The EF is used to show the inherent self-contradictions in Darwinism's claims. No Free Lunch elaborated on this.
Thus, given these considerations, it is evident 2ndclass is unwilling to give Dembski's work a charitable reading. The others, given their attitude, I have about the same regard for. I invite such individuals to continue believing Darwinism and all its falsehoods. Men love darkness; that is quite evident from the commitment I've seen in many Darwinists. So, I have little intention of trying to persuade those who would rather think they are the product of mindless, purposeless forces.
With respect however to David Heddle, that is another story. I have never insisted anyone accept ID as science. What I do object to is the labeling of self-contradictory Darwinian metaphysics as science. Dembski's math makes a devastating case against Darwinian pseudo science. One need not accept ID to see the veracity of his arguments against mindless Darwinism. That's the power of the EF!
It appears brother Heddle has some negative views about Bill personally. I have seen brother Heddle liken Dembski to Dawkins. I would hope that would not impede David from seeing the fact that Bill has made a devastating critique of Dawkins' Blind Watchmaker. I have said many times, one may view ID proponents as absolute scoundrels, and even if true, that does not change the facts at the end of the day. Mindless Darwinism was not the mechanism by which the Intelligent Designer created life.
Next up from Sal:
[Apologies to Dave for misstating his situation with Dembski. Though I don't agree that Dave is being as charitable as he could be to Bill, I will defer to Dave to articulate his position on Dembski rather than my trying to articulate it for Dave.]
First the EF in ordinary human practice. Then we can explore it in the context of biological evolution.
We often infer design because we have seen humans building certain artifacts. The design inference is believable for practically everyone when dealing with artifacts we believe humans are capable of making. The design inference for biology is rejected by many because they have not seen:
1. God or space aliens
2. God or space aliens in the act of making a new life form
That is a respectable reservation, and if one will be hindered from accepting ID until direct observation of the designer is first made, then I respect that; such a one is under no obligation to accept ID, and perhaps nothing I or any ID proponent says will be convincing until they see with their own eyes the Intelligent Designer face-to-face. That position I have sympathy for, but it is a position I have no hope of changing. A miracle will have to be the cure for such situations.....
[However, I will say from a math standpoint, ID's criticism of mindless abiogenesis and Darwinian evolution is mathematically sound. The ultimate claim of ID is a separate issue, but its criticism of mindless abiogenesis and Darwinian evolution is quite sound. So if for nothing else, the study of ID will bear this out, even if the ultimate inference to intelligent design is rejected (i.e. Hubert Yockey, Jack Trevors, etc.).]
That said, let us first look at the ordinary practice of using the EF with man-made objects, since the EF applies to these cases. Applying the EF to man-made cases helps in understanding how to apply the EF in biology.
Consider a copyright infringement case (which Bill includes as a valid instance of the EF).
Let's say a particular work of literature is plagiarized without giving proper attribution to the original author. Let's say it's particularly egregious, where entire pages are copied verbatim. Let's say the plagiarism happens in a journal article. In such a case the journal article is illegal. [note: Francis Beckwith told me he was involved in uncovering a major case of this, and it was this incident that I'm thinking of.]
Yes, the illegal article is of human design simply because it is the product of humans, but can we detect design above and beyond the fact that journal articles are already designed? Yes. The EF helps us determine if the article has a plagiarized design.
The detection of plagiarism is the detection of a design. There are of course mathematical details as to how strong the inference of detecting a plagiarized design is, but at some point the weight of evidence will convince law enforcement that a plagiarized design has been detected. The detection of a plagiarized design is an application of an Explanatory Filter.
In brief, there is a very large space of possible English paragraphs, and an even larger space of possible pages written in English. Even allowing that two writers are writing on the same topic, the space of possible ways to write on the topic is enormous. The chances that two people would independently write several verbatim-identical pages are astronomically small. We don't have exact numbers, but estimates are sufficient for a court of law. The fact that a PHYSICAL ARTIFACT, namely the illegal article, conforms to an independent pattern (someone else's writings) allows us to infer a plagiarized design. Detecting a plagiarized design is detection of an intelligent design in the act of plagiarism.
We know that neither chance nor regularity are adequate causes; we thus infer design. Of course the inference is far more believable because the intelligent agencies are directly observable, but we can say, on the presumption that such agencies exist, design is detectable.
If one can accept that detection of plagiarized designs in literature is a valid instance of the EF, then I can start discussing the detection of ID in biology.
So if we can accept plagiarized designs do what with the what now?
And then we're going to start a discussion about detecting ID in biology? I thought we were going to have an example of the EF, not a discussion about the EF. Sal, as ever, never quite delivers.
I said, referring to the Explanatory Filter:

The detection of plagiarism (sic) is the detection of a design.
Steve H says it isn't. Oh well, Steve H, believe what you want then. The example I gave is an instance of detection according to ID literature. If you wish to close your eyes, fine. Assume there was no intelligent design in the act of plagiarism. I can see what a waste it is to talk to you.
Sal is full of comments like "it's a waste of time to talk to you". If it's such a waste of time, why bother even responding to the comment at all? You could be running the EF!
Somebody complains that Sal is all talk, no numbers, as predicted before he appeared.
Sal is doing a fine job persuading ID-sympathetic readers, don't ya think? If taunts don't legitimize Darwinism, does not demonstrating the EF legitimize ID instead?
Oh well, it seems I'm not going to convince you. I tried to open the discussion to help educate those who really want to learn. In your case, you're invited to keep your mind closed. I have no reason to cater to questions from the closed-minded. Believe in Darwinism all you want. Nothing is stopping you. Nothing I say will convince you.
That's fine. I'm here to encourage the readers sympathetic to ID. Your attitude, bottom feeder, helps persuade them that the naysaying by people on your side is just rooted in snide taunting, that you'll find any excuse to believe Darwinism. Fine. Do your best to persuade the ID-sympathetic readers with your taunts. Such taunts don't legitimize Darwinism.
And now, what appears (having read the rest of the thread so far, reader, I know what's coming!) to be the actual example of the workings of the Mystery of the Explanatory Filter.
First he links to Deniable Darwin
Then a Quote:
Linguists in the 1950's, most notably Noam Chomsky and George Miller, asked dramatically how many grammatical English sentences could be constructed with 100 letters. Approximately 10 to the 25th power, they answered. This is a very large number. But a sentence is one thing; a sequence, another. A sentence obeys the laws of English grammar; a sequence is lawless and comprises any concatenation of those 100 letters. If there are roughly (10^25) sentences at hand, the number of sequences 100 letters in length is, by way of contrast, 26 to the 100th power. This is an inconceivably greater number. The space of possibilities has blown up, the explosive process being one of combinatorial inflation.
Say we have 100 sentences to compare in a passage, and the sentences are 100 characters long on average. What would be the rough probability that two authors could independently arrive at the same 100-sentence passage, based on the parameters suggested by Chomsky? One in (10^25)^100?
basic CSI with respect to the plagiarism design is roughly

log2((10^25)^100) bits
One does not have to accept the UPB of 500 bits, but Dembski gives good reason why this is a decent benchmark. Many PKI encryption schemes in the 1990s were protected with a mere 64 bits.
If we saw evidence of this level of copying, we would not attribute the plagiarism to:
1. chance
2. law
Whether we decide the plagiarism was an act of ID is a separate issue, but the circumstantial evidence would be compelling enough in most courts of law.
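Since Sal never quite runs his own numbers, here's what they actually come to. A minimal Python sketch; the only inputs are Sal's (10^25)^100 passage space and Dembski's 500-bit UPB quoted above:

```python
import math

# Sal's search space: 100 sentences, each one of ~10^25 grammatical
# possibilities, so the passage space is (10^25)^100.
bits = 100 * math.log2(10 ** 25)  # log2((10^25)^100)

print(f"CSI of the verbatim match: {bits:.0f} bits")     # ~8305 bits
print(f"Margin over the 500-bit UPB: {bits - 500:.0f}")  # ~7805 bits
```

So roughly 8,300 bits, comfortably past the 500-bit bound. Which tells you the two texts didn't match by luck — something a plagiarism tribunal would have told you without any of this machinery.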
Ahh, there, didn't that feel good Sal? Finally after all these years, a demo of the EF!
Well, not quite. But Sal, having already got us to agree (well, it's the internet!) that plagiarism detection == Explanatory Filter in action, it seems he's onto a winner. Not everybody thinks so.
In response to
I didn't just say it isn't, I gave reasons why it isn't, which you have not addressed.

and similar items, Sal responds with
It's true I don't address things that I think are a waste of time or are no fun. However, if any ID sympathizers out there want a response I'll give it for their sake, not yours.

Sure, like the purpose of Sal's life (knowingly lying to people IMHO) is so great. I will enjoy it. Thanks Sal. Wallow away in not demonstrating the EF why don't cha.
In the meantime, you're thus invited to wallow in your resolve to believe in the non-design of life. Enjoy your pointless and purposeless existence, as that's what Darwinism says of your life.
He then quotes Dawkins. In his mind he's doing some sort of "that their words may be used against them" kinda thing, I'm sure. Sal thinks Sal is real funny.
the universe we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil and no good, nothing but pointless indifference
Richard Dawkins

So far Sal has not demonstrated otherwise. Somebody says something, and Sal quotes him, because I suspect Sal's been here before:
that of pure random chance. Your calculation says nothing about law, or a combination of law and chance
To clarify, the fact that there are numerous possibilities in what can be printed with grammatically correct English-language sentences precludes the opportunity for law alone being the cause. Physical artifacts with the capacity for bearing information are not reducible to law. Shannon alluded to this in his famous paper.

There is some reaction now, and people are saying that Sal will not only not demonstrate the EF, but that he will claim it has already been demonstrated.
The Explanatory Filter does not deal with the possible combinations of law and chance, thus your concern is valid, but it has been pondered as I shall describe.
No Free Lunch attempts to show why these combinations of "law and chance" must themselves be information rich.
The paper Information as a Measure of Variation attempts to quantify what happens when you go from simple distributions to ones that are more specific. Each distribution has an associated bit value for what it infuses into an artifact.
If it can be shown that these more complex distributions are information-rich, and that the a priori existence of such a distribution is more unlikely than random chance, then it is sufficient on average to only look at simple distributions. It doesn't negate the possibility of some combination of law and chance, but it puts a figure on the a priori likelihood of such paths existing.
For example, it is improbable that 500 coins on the floor of a room will all be heads. It is theoretically possible that there exists a robot governed by deterministic laws which can take the coins in a room and ensure any initial condition of coins in the room will eventually result in 500 coins being heads by the operation of the robot. However, the a priori probability of such a machine existing in the first place (via a stochastic process) is on average more remote than the chance of 500 coins being heads. A bit value can then be assigned to the a priori probability of the robot being the source of a new probability distribution.
Surely there are philosophical issues with Dembski's proof in this matter; however, Behe's Edge of Evolution goes the empirical route to argue the effectiveness of Natural Selection (a supposed combination of law and chance) in the wild. Behe has many closet sympathizers, not the least of which are neutralists of various colors.
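To be fair, the coin half of that example does check out as arithmetic. A throwaway Python sketch (note it says nothing about the a priori probability of Sal's deterministic robot, which he never quantifies):

```python
import math

# Chance of 500 independent fair coins all landing heads.
p = 0.5 ** 500
print(f"P(500 heads) = 2^-500 = {p:.3e}")  # ~3.055e-151
print(f"In bits: {-math.log2(p):.0f}")     # 500 bits, right at the UPB
```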
Sal selects another morsel to respond to, ignoring an almost solid consensus against him
How can a scientific theory be devastated by an argument that hinges on philosophy?
The theory can be devastated if it is shown (independent of any philosophy) to be self-defeating. If the kind of selection (fitness landscape) needed to create complexity is itself highly specific, then that shows Darwinism fails to justify the very claim it pretends to make. There is No Free Lunch.
The difficulty becomes readily apparent when taking a typical fish and trying to evolve it via selective breeding into something like a bird. Even granting millions of years, this seems like quite a stretch, not to mention the intermediates might have to be awfully strange....
You could of course argue the amazingness of a bird is simply a post-dictive surprisal. In that case, you don't need any scientific explanation at all except to say it happened. But such an answer would not seem very scientific.
He then follows up about 14 minutes later with a bit of Q+A with the message board, it seems. The good sort of Q+A, where you get to pick both the Q and the A.
When you talk about numerous possibilities, are you talking about what's possible given the antecedent conditions of this particular event, or what's possible under any conditions?
I'm talking about the space of all possible outcomes that are grammatically correct.
I wrote:
Physical artifacts with the capacity of bearing information are not reducible to law. Shannon alluded to this in his famous paper.
You asked:

Can you point me to the allusion? According to my understanding of classical info theory, messages can bear information even if they're generated deterministically. Shannon's probabilities reflect the prior ignorance of the receiver rather than actual indeterminacy at the sending end.
Let me clarify. I was referring to physical artifacts with the capacity for conveying information. In whatever way that information-bearing capacity is derived, it cannot be derived from a law-like property which precludes the possibility of uncertain outcomes.

Then, as if we were not dead on the floor and reeling already, he follows up with the, er, devastating comeback of quoting somebody...
See: A Mathematical Theory of Communication. Refer to Figure 7. As p approaches 0 or 1 (and thus has less uncertainty), the capacity for information decreases. When it is 0 or 1 (no uncertainty), there is no capacity for bearing information. It was Figure 7 I was referring to when I said Shannon alluded to this in his famous paper.
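For once Sal is pointing at something real: Figure 7 of Shannon's paper plots the entropy of a two-outcome source, H(p) = -p log2 p - (1-p) log2(1-p). A minimal sketch of the curve he means:

```python
import math

def binary_entropy(p):
    """Entropy in bits of a two-outcome source with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # certainty: zero uncertainty, zero capacity
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.0, 0.1, 0.5, 0.9, 1.0):
    print(f"p = {p:.1f} -> H(p) = {binary_entropy(p):.3f} bits")
# Peaks at 1 bit when p = 0.5 and falls to 0 as p -> 0 or 1.
```

That's all the figure shows; whether it licenses anything about biology is the part under dispute.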
2ndclass wrote:
I'm surprised that you're still claiming that it's sufficient to look at only simple distributions.
You may be surprised, but you did represent my position correctly. Consider the implication, though, if the distribution is complex (for whatever reason). To give a concrete example, say we have a pair of dice that on Tuesdays have a distribution that favors the number 7 through many rolls, and on Wednesdays the number 12. We find this to be the case every week. The complex distribution begins to raise questions in and of itself, especially when there is no a priori reason to expect its existence. That is the question raised for Darwinism: why did it favor certain complex designs in the past, many of which make no sense as the products of a blind watchmaker?
Furthermore, Darwinism and abiogenesis are trying to prove that UN-remarkable distributions (or physical phenomena) will inevitably lead to the complexity of life as we know it. If the distributions seem fine-tuned toward a goal, then this would tend to refute the Darwinian view.
The logical implications of such a position are pretty extreme. According to your a priori regress, anything that isn't characterized by a simple (I assume you mean near-uniform) distribution is unlikely.
No. Near-uniform is not the same as simple. A loaded die will have a simple distribution, but it is not near-uniform. Information as a Measure of Variation will tell you how many bits deviations from uniformity will create, and it's not much. Look at the difference in bits between the distribution of Seattle rains and Sahara rains, and it's only a piddly 6 or 7 bits.
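I don't have Dembski's Seattle/Sahara numbers in front of me, but the kind of calculation he's gesturing at is a relative entropy between a skewed distribution and a uniform baseline. A sketch with made-up rainy-day frequencies (the 0.45 and 0.01 are illustrative assumptions, not figures from any paper):

```python
import math

def kl_bits(p, q):
    """Relative entropy D(p||q) in bits: how far p deviates from q."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

uniform = [0.5, 0.5]    # baseline: rain / no rain equally likely
seattle = [0.45, 0.55]  # assumed daily rain frequency (illustrative)
sahara = [0.01, 0.99]   # assumed daily rain frequency (illustrative)

print(f"Seattle vs uniform: {kl_bits(seattle, uniform):.3f} bits")  # ~0.007
print(f"Sahara vs uniform:  {kl_bits(sahara, uniform):.3f} bits")   # ~0.919
```

Sub-bit numbers per observation, which at least matches the "piddly" flavor of Sal's claim.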
Your implicit assumption is that the natural state of existence is pure randomness, so we shouldn't expect anything to behave in a law-like fashion. And yet we observe many law-like phenomena, and we generally ascribe them to nature rather than design. How do you explain that?
The existence of natural law is a design argument. Maybe not the best one, but it is a design argument. If there is law in the universe, there is a Lawgiver, and the Laws seem fine-tuned for scientific discovery. That was the heart of Davies' Mind of God book, for which he was awarded the 1-million-dollar Templeton Prize. Davies, by the way, is a Darwinist.
What is in question in biology is whether, irrespective of whether the laws of physics are intelligently designed or not, there is another layer of design (recall the example of the OSI model of communication with its 7 layers of design).
The argument that biology is the product of natural law seems pretty indefensible in light of Shannon's paper.
The argument that it's the product of chance, almost no one will defend (not even most Darwinists).
The argument that it's the product of "natural" selection was refuted by the No Free Lunch Theorems.
Yeah, gotta love it: the fact that laws exist is proof of ID. Nice'n'easy, that. Sal thinks he's all done now. Just a few more quotes to pick off and "answer".
2ndclass wrote:
I don't see how the NFL Theorems can do any such thing without making completely unjustified assumptions
Then you may go on believing:
1. Natural Selection is a coherent scientific idea, despite very serious definitional complications as described by Lewontin in his 2003 Santa Fe paper: Four Complications in Understanding the Evolutionary Process. The notion itself is suspect since being lucky can arguably be a selective "trait".
2. Selection can generate large-scale biological novelty from random mutations (whatever "random mutations" really means), despite the fact that there is no direct experimental evidence in the affirmative.
The ones who are making unjustified assumptions appear to be the Darwinists, not the ID proponents. You don't have to accept ID to see their critique of Darwinism is quite sound. Further, even independent of No Free Lunch, there are serious population genetic issues such as:
1. Speed limits of evolution as set by Haldane's dilemma.
2. Speed limits of evolution as set by Nachman's U-Paradox.
There are numerous other problems, even with the generous assumption that a Free Lunch happens.
But to go back to the plagiarism and copying example: why do biological systems look like imitations of computers? Is that a post-dictive projection on our part, or do you think biological systems are actually instances of molecular computers?
Two peer-reviewed papers have been accepted that dispute the possibility of Free Lunches in the origin of life (OOL): one paper by Trevors and Abel, and another by Albert Voie. Darwinian evolution will fail as a solution to OOL. Solving the OOL problem means solving the appearance of "plagiarism" within the computers of life, or dare I say our computers appear to have plagiarized a design someone else had in mind since the beginning of time....
Did I blink, or was there a demo of the EF on this thread or not? It's kinda hazy. So much verbiage. So little math. Hang on, still got a few non-believers in the house?
Hang on Sal, you've gone off track there. We wanted a numeric example of the EF in action.
I gave one for the plagiarized text.

There is a comparable argument for the biological computer, and that is why I began with the notion of plagiarized text. There is the strong appearance of imitated design in human and biological computers.
Numbers? Well, 2ndclass is a computer scientist; perhaps he can give a specification for the minimum number of bits or parts required to implement a self-replicating Turing Machine. Every number I've seen so far yields astronomically remote probabilities. How about von Neumann's suggested number of 150,000 parts? If we look at just the connections alone we are talking on the order of 150,000 bits. This is not a mere computer, but one that is self-replicating.
By the way, in industry practice genetic algorithms are intelligently designed and purposeful. Citing the existence of intelligently designed genetic algorithms, whose salient features would not exist without intelligence, would not seem to support the idea that blind mindless forces can create and implement genetic algorithms from scratch.
But even granting that mindlessly originated genetic algorithms operate in nature, they've not been empirically shown to generate the kind of complexity in question.
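For completeness, here is the arithmetic behind the von Neumann riff. A sketch; the 150,000-part figure and the one-bit-per-connection conversion are both Sal's, and the log is only there because 2^-150000 underflows a float:

```python
import math

parts = 150_000
bits = parts  # Sal's framing: one bit per connection
log10_p = -bits * math.log10(2)  # log10 of 2^-150000
print(f"{bits} bits -> chance probability ~ 10^{log10_p:.0f}")  # ~10^-45154
```

A big number, granted — but the dispute on the thread is whether "all parts at once by pure chance" is the relevant null hypothesis at all.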
I gave one for the plagiarized text?
Has the world gone mad?
Sal is, in general, a bit incoherent, so bear with me
Where and how does it show

The argument that biology is the product of natural law seems pretty indefensible in light of Shannon's paper.

if not in light of Figure 7, the only mention of Shannon so far?
See Trevors and Abel's peer-reviewed paper in Cell Biology International, where they refer exactly to this:
No natural mechanism of nature reducible to law can explain the high information content of genomes. This is a mathematical truism, not a matter subject to overturning by future empirical data. The cause-and-effect necessity described by natural law manifests a probability approaching 1.0. Shannon uncertainty is a probability function (−log2 p). When the probability of natural law events approaches 1.0, the Shannon uncertainty content becomes miniscule (−log2 p = −log2 1.0 = 0 uncertainty). There is simply not enough Shannon uncertainty in cause-and-effect determinism and its reductionistic laws to retain instructions for life. Prescriptive information (instruction) can only be explained by algorithmic programming. Such DNA programming requires extraordinary bit measurements often extending into megabytes and even gigabytes. That kind of uncertainty reflects freedom from law-like constraints.
Trevors and Abel
Chance and necessity do not explain the origin of life
Cell Biology International, 2004
This also answers the question of the relevance to the Explanatory Filter. Life cannot be the sole result of law-like properties of nature; it must transcend them. The EF requires the object not be the result of:
1. Law
2. Chance
The issue of Shannon information shows that life is not the product of #1.

I like the way Sal declares things. I've answered your question, now buzz off.
Steve H wrote:
Natural laws are descriptions of regularities in nature. They don't need to be fine-tuned to allow discovery - we can discover them simply because they are regularities. Also, you can't choose to break natural laws.
Although your supposition is on the surface reasonable, it is mistaken. If the Big Bang parameters were not finely tuned, the approximately classical behavior of the universe would not be in evidence. It would be very difficult to make any inferences or detect regularities whatsoever.

Sal then brings in the big guns: another quote. Sal likes quoting people. People don't necessarily like Sal quoting them, however.
Most scientists have tacitly assumed that an approximately non-quantum (or "classical", to use the jargon) world would have emerged automatically from the big bang, even from a big bang in which quantum effects dominated. Recently, however, Hartle and Gell-Mann have challenged this assumption. They argue that the existence of an approximately classical world, in which well-defined material objects exist at distinct locations in space, and in which there is a well-defined concept of time, requires special cosmic initial conditions. Their calculations indicate that, for the majority of initial states, a generally classical world would not emerge. In that case the separability of the world into distinct objects occupying definite positions in a well-defined background space-time would not be possible. There would be no locality. It seems likely that in such a smeared-out world, one could know nothing without knowing everything.
Paul Davies
Mind of God

Odd post now; Sal is losing it.
And it's a biological computer because... it looks like a biological computer? You seem to be straying down the "it looks designed to me" path.
Doesn't make a difference. "It looks like a computer" is sufficient for it to be classified as a designed object according to ID literature. Whether you believe intelligence is required to create such correspondences between the appearance of design and actual design is your choice, but it can be shown that Darwinian processes will not even generate the appearance of design of such objects; ergo Darwinism cannot generate real designs; ergo, Darwinism is false.
Now, onto more cherry-picked objections
2ndclass wrote:
I don't know what you mean when you say that there are philosophical issues with his proof.
The philosophical issue is that Dembski's math shows on a priori grounds the unlikelihood of a particular set of distributions. If we were to hypothesize the infinite space of all possible distributions, then yes, his math shows that only a few of them would exist to create specified complexity with respect to the simpler set of distributions.

It's all so simple when Sal explains it. I'm clear now. Few questions left, though.
IF, and a big IF, that distribution existed in nature, would we still infer design? This is exactly the issue with the fine-tuning arguments. We can only compare fine-tuning with hypothetical math entities, not other observable universes where fine-tuning doesn't exist.
However the solution to the impasse exists in biology, thus making the philosophical issue moot. In biology, we can go the route Behe did in Edge of Evolution and empirically determine if we can observe such "magical" distributions existing in the wild. The empirical evidence strongly suggests such magical distributions do not exist in the wild, nor do we have sound theoretical or empirical reasons to believe they ever did.
The solution to the impasse in cosmology is a bit more subtle and maybe not worth going into here. Suffice to say, if Tipler or Setterfield are right, I would consider the case for cosmological ID a moot point, and there would be no need to try to see if NFL applies to the universe. Besides, NFL works most appropriately on "smaller" problems, like biology rather than cosmology.
RichardHughes said:
So the EF is right because ID literature says it is right?
Come on Sal, the only book that can do that is the Bible!
No, that is not my argument. But I can understand the confusion.
What I have stated is what the EF would classify as designed. It classifies computers (be they man-made or self-replicating biological computers) as designed objects.
Whether intelligence is required is a separate but related issue. The EF has simply classified the object as having the property of what even a Darwinist would colloquially be inclined to call designed. Biologists study the "design" of systems. Even in that colloquial sense the EF identifies designs.
Notice this comment on the definition of design:
The principal advantage of characterizing design as the complement of regularity and chance is that it avoids committing itself to a doctrine of intelligent agency. In practice, when we eliminate regularity and chance, we typically do end up with an intelligent agent. Thus, in practice, to infer design is typically to end up with a "designer" in the classical sense. Nevertheless, it is useful to separate design from theories of intelligence and intelligent agency.
Bill Dembski
Notice, Dembski uses the phrase "when we eliminate regularity and chance, we typically do end up with an intelligent agent"; he didn't insist a priori that you always end up identifying the action of an intelligent agency. One can offer that hypothesis as a falsifiable claim in the Popperian sense, and it is thus in the form of a scientific hypothesis. Both Bill and I suggest it is a reasonable and falsifiable hypothesis, enough, for practical purposes, to cause people like myself to accept it as fact. I don't recall either of us ever saying otherwise.
If you want to believe mindless forces can still create such objects, nothing is stopping you; however, Dembski showed that such mindless forces cannot be mathematically characterized by equations that:
1. describe lawlike regularity
2. describe chance type distributions
Either #1 or #2 would be needed to argue that naturalism will succeed as an explanation. But as Trevors and Abel suggest in their peer-reviewed paper, such a quest would be ill-fated, as it is doomed from the start. It would seek to overturn a mathematical truism not subject to being overturned by any possible future empirical discovery.

And that's where we are today!
The thread begins here
It may be done next year too. Discovery is like that. No one knows how long it will take. If we knew how long things would take to discover, we could prioritize our efforts rather nicely. Unfortunately it doesn't work that way. How long will it take to discover a cure for AIDS? How long will it take to develop practical fusion reactors to generate electricity? Nobody can answer these questions. Welcome to science and engineering.
Comment by DaveScot — September 14, 2006 @ 2:40 pm