This year scientist Fred C. Adams published on the physics preprint server a massive paper on the topic of cosmic fine-tuning (a topic I have often discussed on this blog). The paper of more than 200 pages (entitled "The Degree of Fine Tuning in our Universe -- and Others") describes many cases in which our existence depends on some number in nature having (against all odds) a value allowing the universe to be compatible with the existence of life. There are several important ways in which the paper goes wrong or creates inappropriate impressions. I list them below.

Problem #1: Using habitability as the criterion for judging fine-tuning, rather than “something as good as what we have.”

Part of the case for cosmic fine-tuning involves the existence of stars. It turns out that some improbable set of coincidences has to occur for a universe to have any stars at all, and a far more improbable set of coincidences has to occur for a universe to have very stable, bright, long-lived stars like our sun.

Adams repeatedly attempts to convince readers that the universe could have had fundamental constants different from what we have, and that the universe would still have been habitable because some type of star might have existed. When reasoning like this, he is using an inappropriate rule-of-thumb for judging fine-tuning. Below are two possible rules-of-thumb when judging fine-tuning in a universe:

Rule #1: Consider how unlikely it would be that a random universe would have conditions as suitable for the appearance and long-term survival of life as we have in our universe.

Rule #2: Consider only how unlikely it would be that a random universe would have conditions allowing some type of life.

Rule #1 is the appropriate rule to use when considering the issue of cosmic fine-tuning. But Adams seems to be operating under a rule-of-thumb such as Rule #2. For example, he tries to show that a universe significantly different from ours might have allowed red dwarf stars to exist. But such a possibility is irrelevant. The relevant consideration: how unlikely is it that we would have gotten a situation as fortunate as the physical situation that exists in our universe, in which stars like the sun (more suitable for supporting life than red dwarf stars) exist? Since "red dwarfs are far more variable and violent than their more stable, larger cousins" such as sun-like stars (according to this source), we should be considering the fine-tuning needed to get stars like our sun, not just any type of star such as a red dwarf.

I can give an analogy. Imagine I saw a log cabin in the woods. If I am judging whether this structure is the result of chance or design, I should be considering how unlikely it would be that something as good as this might arise by chance. You could make all kinds of arguments trying to show that a log structure much worse than the one observed would not be too improbable (such as arguments showing that it wouldn't be too hard for a few falling trees to make a primitive rain shelter). But such arguments are irrelevant. The relevant thing to consider is: how unlikely would it be that a structure as good as the one observed would appear by chance? Similarly, living in a universe that allows an opportunity for humans to continue living on this planet for billions of years with stable solar radiation, the relevant consideration is: how unlikely is it that a universe as physically fortunate as ours would exist by chance? Discussions about how microbes might exist in a very different universe (or intelligent creatures living precariously near unstable suns) are irrelevant, because such universes do not involve physical conditions as good as the ones we have.

Problem #2: Charts that create the wrong impression because of a “camera near the needle hole” and a logarithmic scale.

On page 29 of the paper Adams gives us a chart showing some fine-tuning needed for the ratio between what is called the fine-structure constant and the strong nuclear force. We see the following diagram, using a logarithmic scale that exaggerates the relative size of the shaded region. If the ratio had been outside the shaded region, stars could not have existed in our universe.


This doesn't seem like that lucky a coincidence, until you consider that creating a chart like this is like trying to make a needle hole look big by putting your camera right next to the needle hole. We know of no theoretical reason why the ratio described in this chart could not have been anywhere between .000000000000000000000000000000000000000000001 and 1,000,000,000,000,000,000,000,000,000,000,000. So by using such a narrow scale, the chart gives us the wrong idea. In a less misleading chart that used an overall scale vastly bigger, we would see this shaded region as merely a tiny point on the chart, occupying less than a millionth of the total area on the chart. Then we would realize that a fantastically improbable coincidence is required for nature to have threaded this needle hole.
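The "camera near the needle hole" effect can be made concrete with a little arithmetic. In this Python sketch, the endpoints of the life-permitting window are hypothetical stand-ins (the exact width of the shaded region is not given above); only the bounds of the possibility space come from the numbers quoted in this post.

```python
# Hypothetical life-permitting window for the ratio, centered near its
# actual value. These endpoints are illustrative assumptions, NOT figures
# taken from Adams' paper.
window_lo, window_hi = 1e-2, 1e2

# Possibility-space bounds quoted in the text above (roughly 10^-45 to 10^33).
space_lo, space_hi = 1e-45, 1e33

# On a linear chart spanning the whole possibility space, the shaded
# window occupies a vanishingly small fraction of the axis.
linear_fraction = (window_hi - window_lo) / (space_hi - space_lo)
print(f"{linear_fraction:.1e}")  # on the order of 1e-31
```

Even if the assumed window were made a million times wider, the fraction would still be invisibly small on a linear chart of the full possibility space, which is the point the logarithmic zoomed-in chart obscures.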


Needle holes can look big when your eye is right next to them

Problem #3: An under-estimation of the strong force sensitivity.

On page 30 of the paper, Adams argues that the strong nuclear force (the strong coupling constant) isn't terribly fine-tuned, and might vary by as much as a factor of 1000 without preventing life. He refuses to accept what quite a few scientists have pointed out: that it would be very damaging for the habitability of the universe if the strong nuclear force were only a few percent stronger. Quite a few scientists have pointed out that in such a case, the diproton (a nucleus consisting of two protons and no neutrons) would be stable, and that would drastically affect the nature of stars. Adams' attempt to dismiss such reasoning falls flat. He claims on page 31 that if the diproton existed, it would cause only a “modest decrease in the operating temperatures of stellar cores," but then tells us that this would cause a fifteen-fold change in such temperatures (from about 15 million degrees to about one million), which is hardly modest.

Adams ignores the fact that small changes in the strong nuclear force would probably rule out lucky carbon resonances that are necessary for large amounts of carbon to exist. Also, Adams ignores the consideration that if the strong nuclear force had been much stronger, the early universe's hydrogen would have been converted into helium, leaving no hydrogen to eventually allow the existence of water. In this paper, two physicists state, “we show that provided the increase in strong force coupling constant is less than about 50% substantial amounts of hydrogen remain.” What that suggests is that if the strong force had been more than 50% greater, the universe's habitability would have been greatly damaged, and life probably would have been impossible. That means the strong nuclear force is 1000 times more sensitive and fine-tuned than Adams has estimated.

Problem #4: An under-estimation of the sensitivity of the fine structure constant and of the quark masses.

On page 140 of the paper, Adams suggests that the fine structure constant (related to the strength of the electromagnetic force) isn't terribly fine-tuned, and might vary by as much as a factor of 10,000 without preventing life. His previous discussion of the sensitivity of the fine structure constant involved merely how a change in the constant would affect the origin of elements in the Big Bang. But there are other reasons for thinking that the fine structure constant is very fine-tuned, reasons Adams hasn't paid attention to. A stellar process called the triple-alpha process is necessary for large amounts of both carbon and oxygen to be formed in the universe. In their paper “Viability of Carbon-Based Life as a Function of the Light Quark Mass,” Epelbaum and others state that the “formation of carbon and oxygen in our Universe would survive a change” of about 2% in the quark mass or about 2% in the fine-structure constant, but that “beyond such relatively small changes, the anthropic principle appears necessary at this time to explain the observed reaction rate of the triple-alpha process.” This is a sensitivity more than 10,000 times greater than Adams estimates. It's a case that we can call very precise fine-tuning. On page 140, Adams gives estimates of the biological sensitivity of the quark masses, but they ignore the consideration just mentioned, and under-estimate the sensitivity of these parameters.

Problem #5: The misleading table that tries to make radio fine-tuning seem more precise than examples of cosmic fine-tuning.

On page 140 Adams gives us the table below:



The range listed in the third column represents what Adams thinks is the maximum multiplier that could be applied to these parameters without ruling out life in our universe. One problem is that some of the ranges listed are way too large, first because Adams is frequently being over-generous in estimating by how much such things could vary without worsening our universe's physical habitability (for reasons I previously discussed), and second because Adams is using the wrong rule for judging fine-tuning, considering “universes that allow life” when he should be considering “universes as habitable and physically favorable as ours.”

Another problem is that the arrangement of the table suggests that the parameters discussed are much less fine-tuned than a radio that is set to just the right radio station, but most of the fundamental constants in the table are actually far more fine-tuned than such a radio. To clarify this matter, we must consider the matter of possibility spaces in these cases. A possibility space is the range of possible values that a parameter might have. One example of a possibility space is the possible ages of humans, which is between 0 and about 120. For an AM radio the possibility space is between 535 and 1605 kilohertz.

What are the possibility spaces for the fundamental constants? For the constants involving one of the four fundamental forces (the gravitational constant, the fine-structure constant, the weak coupling constant and the strong coupling constant), we know that the four fundamental forces differ by about 40 orders of magnitude in their strength. The strong nuclear force is about 10,000,000,000,000,000,000,000,000,000,000,000,000,000 times stronger than the gravitational force. So a reasonable estimate of the possibility space for each of these constants is to assume that any one of them might have had a value up to 10^40 times larger or smaller than the actual value of the constant.

So the possibility space involving the four fundamental coupling constants is something like 1,000,000,000,000,000,000,000,000,000,000,000,000 times larger than the possibility space involving an AM radio. So, for example, even if the strong coupling constant could have varied by a factor of 1000 as Adams claims, and still have allowed for life, for it to have such a value would be a case of fine-tuning more than 1,000,000,000,000 times greater than an AM radio that is randomly set on just the right frequency. For the range of values between .001 and 1000 times the actual value of the strong nuclear force is just the tiniest fraction within a possibility space in which the strong nuclear force can vary by a factor of 10^40. It's the same situation for the gravitational constant and the fine-structure constant (involving the electromagnetic force). Even if we go by Adams' severe under-estimations of the biological sensitivity of these constants, and use the estimates he has made, it is still fine-tuning trillions of times more unlikely to occur by chance than a radio being set on just the right station by chance, because of the gigantic probability space in which fundamental forces might vary by 40 orders of magnitude.
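The radio comparison above can be sketched with rough arithmetic. This assumes a flat (linear-uniform) prior over each possibility space and treats a "station" as roughly 1 kHz wide; both are simplifying assumptions made for illustration (a logarithmic prior would give different numbers).

```python
# Fraction of the AM band (535-1605 kHz) occupied by one station,
# assuming a ~1 kHz station width (an assumption for illustration).
radio_fraction = 1.0 / (1605 - 535)

# Fraction of a possibility space spanning a factor of 10^40 (in units of
# the constant's actual value) occupied by Adams' claimed life-permitting
# window of 0.001x to 1000x the actual value.
constant_fraction = (1e3 - 1e-3) / 1e40

# How many times more finely tuned the coupling constant is than the
# randomly tuned radio, under these linear-uniform assumptions.
ratio = radio_fraction / constant_fraction
print(f"{ratio:.0e}")  # vastly more than a trillion
```

Under these assumptions the coupling constant's window is rarer than the radio's by a factor on the order of 10^33, which is where the "trillions of times more fine-tuned" comparison comes from.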

A similar situation exists in regard to what Adams calls on page 140 the vacuum energy scale. This refers to the density of energy in ordinary outer space such as interstellar space. This is believed to be extremely small but nonzero. Adams estimates that it could have been 10 orders of magnitude larger without preventing our universe's habitability. But physicists know of very strong reasons for thinking that this density should actually be 10^60 or even 10^120 times greater than it is (it has to do with all of the virtual particles that quantum field theory says a vacuum should be packed with). So for the value of the vacuum energy density to be as low as it is would seem to require a coincidence with a likelihood of less than 1 in 10^50. Similarly, if a random number generator is programmed to pick a random number between 1 and 10^60 with an equal probability of any number between those two numbers being the random number, there is only a microscopic chance of the number being between 1 and 10^50.

Adams' chart has been cleverly arranged to give us the impression that the fundamental constants are less fine-tuned than a radio set on the right station, but the opposite is true. The main fundamental constants are trillions of times more fine-tuned than a radio set on the right station. The reason is partially because the possibility space involving such constants is more than a billion quadrillion times larger than the small possibility space involving what station a radio might be tuned to.

Problem #6: Omitting the best cases from his summary table.

Another huge shortcoming of Adams' paper is that he has omitted some of the biggest cases of cosmic fine-tuning from his summary table on page 140. One of the biggest cases of fine-tuning involves the universe's initial expansion rate. Scientists say that at the very beginning of the Big Bang, the universe's expansion rate was fine-tuned to more than 1 part in 10^50, so that the universe's density was very precisely equal to what is called the critical density. If the expansion rate had not been so precisely fine-tuned, galaxies would never have formed. Adams admits this on pages 40-41 of his paper. There is an unproven theory designed to explain away this fine-tuning, in the sense of imagining some other circumstances that might have explained it. But regardless of that, such a case of fine-tuning should be included in any summary table listing the universe's fine-tuning (particularly since the theory designed to explain away the fine-tuning of the universe's expansion rate, called the cosmic inflation theory, is a theory that has many fine-tuning requirements of its own, and does not result in an actual reduction of the universe's overall fine-tuning even if the theory were true). So why do we not see this case in Adams' summary table entitled "Range of Parameter Values for Habitable Universe"?

Adams' summary table also makes no mention of the fine-tuning involving the Higgs mass or the Higgs boson, what is called "the hierarchy problem." This is a case of fine-tuning that so bothered particle physicists that many of them spent decades creating speculative theories such as supersymmetry designed to explain away this fine-tuning, which they sometimes said was so precise it was like a pencil balanced on its head.  Referring to this matter, this paper says, "in order to get the required low Higgs mass, the bare mass must be fine-tuned to dozens of significant places." This is clearly one of the biggest cases of cosmic fine-tuning, but Adams has conveniently omitted it from his summary table. 

Then there is the case of the universe's initial entropy, another case of very precise fine-tuning that Adams has also ignored in his summary table. Cosmologists such as Roger Penrose have stated that for the universe to have the relatively low entropy it now has, the entropy at the time of the Big Bang must have been fantastically small, completely at odds with what we would expect by chance. Only universes starting out in an incredibly low entropy state can end up forming galaxies and yielding life. As I discuss here, in a recent book Penrose suggested that the initial entropy conditions were so improbable that it would be more likely that the Earth and all of its organisms would have suddenly formed from a chance collision of particles from outer space. This gigantic case of very precise cosmic fine-tuning is not mentioned in Adams' summary table.

Then there is the case of the most precise fine-tuning known locally in nature, the precise equality of the absolute value of the proton charge and the absolute value of the electron charge. Each proton in the universe has a mass 1836 times greater than the mass of each electron. From this fact, you might guess that the electric charge on each proton is much greater than the electric charge on each electron. But instead the absolute value of the electric charge on each proton is very precisely the same as the absolute value of the electric charge on each electron (absolute value means the value not considering the sign, which is positive for protons and negative for electrons). A scientific experimental study determined that the absolute value of the proton charge differs by less than one part in 1,000,000,000,000,000,000 from the absolute value of the electron charge.

This is a coincidence we would expect to find in fewer than 1 in 1,000,000,000,000,000,000 random universes, and it is a case of extremely precise fine-tuning that is absolutely necessary for our existence.  Since the electromagnetic force (one of the four fundamental forces) is roughly 10 to the thirty-seventh power times stronger than the force of gravity that holds planets and stars together, a very slight difference between the absolute value of the proton charge and the absolute value of the electron charge would create an electrical imbalance that would prevent stars and planets from holding together by gravity (as discussed here).  Similarly, a slight difference between the absolute value of the proton charge and the absolute value of the electron charge would prevent organic chemistry in a hundred different ways. Why has Adams failed to mention in his summary table (or anywhere in his paper) so precise a case of biologically necessary fine-tuning? 
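A back-of-envelope calculation shows why so tiny an imbalance matters. The sketch below uses rounded textbook values for the physical constants (my assumptions, not figures from Adams' paper); for two protons the electric-to-gravity force ratio comes out near 10^36 (the "10 to the thirty-seventh power" figure above depends on which particles are compared). Because a net charge imbalance enters the force squared, the imbalance must be below roughly one part in 10^18 for gravity to dominate bulk matter.

```python
import math

k = 8.988e9      # Coulomb constant, N m^2 C^-2 (rounded)
G = 6.674e-11    # gravitational constant, N m^2 kg^-2 (rounded)
e = 1.602e-19    # elementary charge, C
m_p = 1.673e-27  # proton mass, kg

# Electric-to-gravitational force ratio for two protons; the separation
# distance cancels because both forces follow an inverse-square law.
em_over_grav = k * e**2 / (G * m_p**2)

# Fractional charge imbalance at which the net electrostatic repulsion
# between chunks of matter would rival their gravitational attraction;
# the imbalance enters squared, hence the square root.
epsilon = math.sqrt(1 / em_over_grav)

print(f"{em_over_grav:.1e}")  # about 1.2e36
print(f"{epsilon:.1e}")       # about 9e-19, i.e. roughly 1 part in 10^18
```

The threshold landing at about one part in 10^18 is what makes the experimental bound quoted above (less than one part in 1,000,000,000,000,000,000) so striking.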

Clearly, Adams has left out from his summary table most of the best cases of cosmic fine-tuning. His table is like some table entitled "Famous Yankee Hitters" designed to make us think that the New York Yankees haven't had very good hitters, a table that conveniently omits the cases of Babe Ruth, Lou Gehrig, Joe DiMaggio, and Mickey Mantle.

Below is a table that will serve as a corrective for Adams' misleading table.  I will list some fundamental constants or parameters in the first column. The second column gives a rough idea of the size of the possibility space regarding the particular item in the first column. The third column tells us whether the constant, parameter or situation is more unlikely than 1 chance in 10,000 to be in the right range, purely by chance. The fourth column tells us whether the constant, parameter or situation is more unlikely than 1 chance in a billion to be in the right range, purely by chance.   For the cosmic parameters "in the right range" means "as suitable for long-lasting intelligent life as the item is in our universe." The "Yes" answers follow from various sensitivity estimates in this post, in Adams' paper, and in the existing literature on this topic (which includes these items).  For simplicity I'll skip several items of cosmic fine tuning such as those involving quark masses and the electron/proton mass ratio. 


Parameter or Constant or Situation | Size of Possibility Space | More Unlikely Than 1 in 10,000 for Item to Be in Right Range, by Chance? | More Unlikely Than 1 in 1,000,000,000 for Item to Be in Right Range, by Chance?
Strong nuclear coupling constant | 10^40 (difference between weakest fundamental force and strongest one) | Yes | Yes
Gravitational coupling constant | 10^40 (difference between weakest fundamental force and strongest one) | Yes | Yes
Electromagnetic coupling constant | 10^40 (difference between weakest fundamental force and strongest one) | Yes | Yes
Ratio of absolute value of proton charge to absolute value of electron charge | .000000001 to 1,000,000,000 | Yes | Yes
Ratio of universe's initial density to critical density (related to initial expansion rate) | 1/10^40 to 10^40 | Yes | Yes
Initial cosmic entropy level | Between 0 and some incredibly large number | Yes | Yes
A recent article in The Atlantic is entitled "A Waste of 1000 Research Papers." The article is about how scientists wrote a thousand research papers trying to suggest that genes such as SLC6A4 were a partial cause of depression, one of the leading mental afflictions.  The article tells us, "But a new study—the biggest and most comprehensive of its kind yet—shows that this seemingly sturdy mountain of research is actually a house of cards, built on nonexistent foundations." 

Using data from large groups of volunteers -- between 62,000 and 443,000 people -- the scientific study attempted to find whether there was any evidence that any of the genes (such as SLC6A4 and 5-HTTLPR) linked to depression were more common in people who had depression. "We didn't find a smidge of evidence," says Matthew Keller, the scientist who led the study.  "How on Earth could we have spent 20 years and hundreds of millions of dollars studying pure noise?" asks Keller, suggesting that hundreds of millions of dollars had been spent trying to show a genetic correlation (between genes such as SLC6A4 and depression) that didn't actually exist. 

The article refers us to a blog post on the Keller study, one that comments on how scientists had tried to build up the gene 5-HTTLPR as a causal factor of depression. Below is a quote from the blog post by Scott Alexander:

"What bothers me isn’t just that people said 5-HTTLPR mattered and it didn’t. It’s that we built whole imaginary edifices, whole castles in the air on top of this idea of 5-HTTLPR mattering. We 'figured out' how 5-HTTLPR exerted its effects, what parts of the brain it was active in, what sorts of things it interacted with, how its effects were enhanced or suppressed by the effects of other imaginary depression genes. This isn’t just an explorer coming back from the Orient and claiming there are unicorns there. It’s the explorer describing the life cycle of unicorns, what unicorns eat, all the different subspecies of unicorn, which cuts of unicorn meat are tastiest, and a blow-by-blow account of a wrestling match between unicorns and Bigfoot."

How is it that so many scientists came up with an answer so wrong? One reason is that they used sample sizes that were far too small. The article in The Atlantic explains it like this:

"When geneticists finally gained the power to cost-efficiently analyze entire genomes, they realized that most disorders and diseases are influenced by thousands of genes, each of which has a tiny effect. To reliably detect these miniscule effects, you need to compare hundreds of thousands of volunteers. By contrast, the candidate-gene studies of the 2000s looked at an average of 345 people! They couldn’t possibly have found effects as large as they did, using samples as small as they had. Those results must have been flukes—mirages produced by a lack of statistical power. "

Is this type of problem limited to the study of genes? Not at all. The "lack of statistical power" problem (pretty much the same as the "too small sample sizes" problem) is rampant and epidemic in modern neuroscience. Today's neuroscientists very frequently produce studies with way-too-low statistical power, studies in which there is a very high chance of a false alarm, because too-small sample sizes were used. 

It is well known that at least 15 animals per study group should be used to get a moderately convincing result. But very often neuroscience studies will use only about 8 animals per study group. If you use only 8 animals per study group, there's a very high chance you'll get a false alarm, in which the result is due merely to chance variations rather than a real effect in nature.  In fact, in her post "Why Most Published Neuroscience Studies Are False," neuroscientist Kelly Zalocusky suggests that neuroscientists really should be using 31 animals per study group to get a not-very-strong statistical power of .5, and 60 animals per study group to get a fairly strong statistical power of .8.  
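Zalocusky's figures can be reproduced with a standard normal-approximation power formula. The sketch below assumes a "medium" true effect size of 0.5 standard deviations and a two-sided test at the 0.05 level; both are conventional assumptions of mine, not figures taken from her post.

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def approx_power(n_per_group, effect_size=0.5, alpha_z=1.96):
    """Approximate power of a two-sided two-sample comparison, using the
    normal approximation; effect size is in standard deviations, and
    alpha_z = 1.96 is the critical value for a two-sided 0.05 test."""
    z = effect_size * math.sqrt(n_per_group / 2.0) - alpha_z
    return normal_cdf(z)

for n in (8, 31, 60):
    print(n, round(approx_power(n), 2))
# 8 animals per group gives power near 0.17; 31 gives about 0.5;
# 60 gives about 0.78 -- close to the 0.5 and 0.8 figures cited above.
```

With only 8 animals per group, a real medium-sized effect would be detected less than a fifth of the time, which is why so many positive results from such studies are likely to be flukes.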

This is the same “too small sample size” problem (discussed here) that plagues very many or most neuroscience experiments involving animals. Neuroscientists have known about this problem for many years, but year after year they continue in their errant ways, foisting upon the public too-small-sample-size studies with low statistical power that don't prove anything because of a high chance of false alarms.  Such studies may claim to provide evidence that brains are producing thinking or storing memories, but the evidence is "junk science" stuff that does not stand up to critical scrutiny. 

So part of the explanation for why the "depression gene" scientists were so wrong is that they used too-small sample sizes, producing results with way-too-low statistical power. But there's another big reason why they were so wrong: their research activity was driven by wishful thinking.  Showing that genes drive behavior or mental states has always been one of the major items on the wish list of scientists who favor reductionist materialism.  

If you're an orthodox Darwinist, you're pretty much locked in to the idea that genes control everything.  Darwinists believe that a progression from ape-like ancestors to humans occurred solely because of a change in DNA caused by random mutations.  So if you think that some difference in DNA is the sole explanation for the difference between humans and apes,  you've pretty much boxed yourself into the silly idea that every difference between a human and an ape boils down to some gene difference or DNA difference.  I call the idea silly because the genes that make up DNA basically specify proteins, but no one has a coherent idea for how a protein could cause a mental state such as sadness, imagination,  spirituality or curiosity, nor a coherent idea of how a three-dimensional body plan could ever be stored in DNA (written in a "bare bones" language limiting it to only very low-level chemical information such as the amino acids that make up a polypeptide chain that is the beginning of a three-dimensional protein molecule). 

So the scientists wanted very much to believe there were "genes for depression," and genes for almost every other human mental characteristic; and they let their belief desires drive their research activities. 

Does this happen rarely in the world of science? No, it happens all the time. Very much of modern scientific activity is driven by wishful thinking. It is easy to come up with examples that remind us of this "depression genes" misadventure:

  • Countless “low statistical power” neuroscience papers have been written trying to suggest that LTP has something to do with memory, an idea which makes no sense given that LTP is an artificially induced effect produced by high-frequency stimulation, and given that LTP typically lasts only hours or days, in contrast to human memories that can persist for 50 years.
  • Countless “low statistical power” neuroscience papers have been written trying to suggest that synapses or dendritic spines are involved in memory storage, an idea which makes no sense given that synapses and dendritic spines are made up of proteins with lifetimes of only a few weeks or less, and that neither synapses nor dendritic spines last for even a tenth of the time that humans can remember things (50 years or more).
  • Countless “low statistical power” brain imaging studies have tried to show neural correlates of thinking or recall, but they typically show that such activities do not cause more than a 1% change in signal strength, consistent with random variations that we would see even if brains do not produce thinking or memory recall.
  • Having a desire to reconcile the mutually incompatible theories of quantum mechanics and general relativity, physicists wrote thousands of papers filled with ornate speculations about something called string theory, a theory for which no evidence has ever been produced.
  • Faced with biological organisms having functionally complex systems far more complex than any found in the most complex human inventions, and wishing fervently to avoid any hypothesis of design, scientists engaged in countless speculations about how such innovations might have been produced by random mutations of genomes, ignoring not merely the mathematical improbability of such "miracle of luck" events happening so many times, but also the fact that genomes do not specify body plans (contrary to the "DNA is a blueprint for your body" myth advanced to support such speculations).  
  • Faced with an undesired case of very strong fine-tuning involving the Higgs boson or Higgs field, scientists wrote more than 1000 papers speculating about a theory called supersymmetry which tries to explain away this fine-tuning; but the theory has failed all experimental tests at the Large Hadron Collider.  Many of the same scientists want many billions more for the construction of a new and more powerful particle collider, so that their fervent wishes on this matter can be fulfilled. 
  • Faced with an undesired result that the universe's expansion rate at the time of the Big Bang was apparently fine-tuned to more than 1 part in 1,000,000,000,000,000,000,000, scientists wrote more than a thousand speculative “cosmic inflation” cosmology papers trying to explain away this thing they didn't want to believe in, by imagining a never-observed instant in which the universe expanded at an exponential rate (with the speculations often veering into multiverse speculations about other unseen universes). 
  • According to this paper, scientists have written some 15,000 papers on the topic of "neural coding," something that scientists want to believe in because they want to believe the brain is like a computer (which uses coding).  But there is not any real evidence that the brain uses any type of coding other than the genetic code used by all cells, and no one has been able to discover any neural code used for either transmitting information in the brain or storing information in the brain.  Referring to spikes in brain electricity, this paper concludes "the view that spikes are messages is generally not tenable," contrary to the speculations of thousands of neuroscience papers. 
  • Scientists have had no luck in trying to create a living thing in experiments simulating the early Earth, and have failed to create even a single protein molecule in such experiments. But because scientists really, really wish to find extraterrestrial life somewhere (to prove to themselves that life arises easily by natural processes), scientists want billions of dollars for a long-shot ice-drilling space mission looking for life on Jupiter's moon Europa.
  • When the Large Hadron Collider produced some results that might have been preliminary signs of some new particle, physicists wrote more than 500 papers speculating about the interesting new particle they wanted to believe in, only to find the new particle was a false alarm. 
  • Scientists ran many very expensive experiments attempting to look for dark matter, something that has never been observed, but which scientists devoutly hoped to find, largely to prove to themselves that their knowledge of large-scale cosmic structure is not so small. 

Quite a few of these cases seem to be "imaginary edifices" in which the main controlling factor is what a scientist wants to believe or does not want to believe. 


NASA visual of a long-shot mission to look for life on Europa

The article in The Atlantic states the following, painting a portrait of a science world with very serious problems: 

“ 'We’re told that science self-corrects, but what the candidate gene literature demonstrates is that it often self-corrects very slowly, and very wastefully, even when the writing has been on the wall for a very long time,' Munafo adds. Many fields of science, from psychology to cancer biology, have been dealing with similar problems: Entire lines of research may be based on faulty results. The reasons for this so-called “reproducibility crisis” are manifold. Sometimes, researchers futz with their data until they get something interesting, or retrofit their questions to match their answers. Other times, they selectively publish positive results while sweeping negative ones under the rug, creating a false impression of building evidence."
In the world of science academia, bad ideas arise, and often hang around way too long, even after the evidence has turned against such ideas, because what is really driving things is what scientists want to believe or do not want to believe.  Wishful thinking is in the driver's seat, and the result of that is quite a few houses of cards, quite a few castles in the air, quite a few imaginary edifices that are often sold as "science knowledge."  
The claim that the human mind is produced by the human brain has always been a speech custom of scientists, rather than an idea that has been established by observations. No one has any idea of how neurons might produce human mental phenomena such as abstract thinking and imagination. Contrary to the predictions of the idea that brains make minds, there are a huge number of case histories showing that human minds suffer surprisingly little damage when massive brain injury or loss of brain tissue occurs. I have published three long posts (here, here, and here) citing many such cases, including cases of epilepsy patients who had little loss of intelligence or memory after they lost half of their brains in a hemispherectomy operation to stop seizures, and patients who had above-average or near-average intelligence despite loss of most of their brains. I will now cite some additional cases of minds little affected by huge brain damage, cases I have not mentioned before.

The cases I will discuss mainly involve what are called abscesses. An abscess is an area of the brain that has experienced necrosis (cell death) because of infection or injury. A medical source refers to an abscess as “an area of necrosis,” and another medical source defines necrosis as “the death of body tissue.” If you do a Google image search for “abscess,” you will see that a brain abscess generally appears as a dark patch in a brain scan. It is roughly correct to refer to an abscess as a brain hole, although the hole is a filled hole, filled mainly with pus, dead cells and fluid. An image of an abscess is below.


The two cases in the quoted paragraph below are reported on page 78 of the book From the Unconscious to the Conscious by physician Gustave Geley. You can read the book here. Astonishingly, Geley refers in the first sentence to a man who lived a year “without any mental disturbance” despite a great big brain abscess that left him with “a brain reduced to pulp”:

"M. Edmond Perrier brought before the French Academy of Sciences at the session of December 22nd, 1913, the case observed by Dr R. Robinson; of a man who lived a year, nearly without pain, and without any mental disturbance, with a brain reduced to pulp by a huge purulent abscess. In July, 1914, Dr Hallopeau reported to the Surgical Society an operation at the Necker Hospital, the patient being a young girl who had fallen out of a carriage on the Metropolitan Railway. After trephining, it was observed that a considerable portion of cerebral substance had been reduced literally to pulp. The wound was cleansed, drained, and closed, and the patient completely recovered."

The following report (quite contrary to current dogmas about brains) was made in a Paris newspaper of a session of the Academy of Sciences on March 24, 1917, and is quoted by Geley on page 79 of his book:

"He mentions that his first patient, the soldier Louis R——, to-day a gardener near Paris, in spite of the loss of a very large part of his left cerebral hemisphere (cortex, white substance, central nuclei, etc.), continues to develop intellectually as a normal subject, in despite of the lesions and the removal of convolutions considered as the seat of essential functions. From this typical case, and nine analogous cases by the same operator, known to the Academy, Dr Guepin says that it may now safely be concluded:
(1) That the partial amputation of the brain in man is possible, relatively easy, and saves certain wounded men whom received theory would regard as condemned to certain death, or to incurable infirmities.
(2) That these patients seem not in any way to feel the loss of such a cerebral region."

On page 80 of Geley's book we have the following astonishing case involving an abscess in the brain. We are told the boy had “full use of his intellectual faculties” despite a huge brain abscess and a detachment “which amounted to real decapitation”:

"The first case refers to a boy of 12 to 14 years of age, who died in full use of his intellectual faculties although the encephalic mass was completely detached from the bulb, in a condition which amounted to real decapitation. What must have been the stupefaction of the operators at the autopsy, when, on opening the cranial cavity, they found the meninges heavily charged with blood, and a large abscess involving nearly the whole cerebellum, part of the brain and the protuberance. Nevertheless the patient, shortly before, was known to have been actively thinking. They must necessarily have wondered how this could possibly have come about. The boy complained of violent headache, his temperature was not below 39°C. (102.2°F.); the only marked symptoms being dilatation of the pupils, intolerance of light, and great cutaneous hyperesthesia. Diagnosed as meningo-encephalitis."

On page 81 we learn of the following equally astonishing case involving a patient who “thought as do other men” despite having three large brain abscesses, each as large as a tangerine:

"A third case, coming from the same clinic, is that of a young agricultural labourer, 18 years of age. The post mortem revealed three communicating abscesses, each as large as a tangerine orange, occupying the posterior portion of both cerebral hemispheres, and part of the cerebellum. In spite of these the patient thought as do other men, so much so that one day he asked for leave to settle his private affairs. He died on re-entering the hospital."

These cases are quite consistent with more modern cases reported in recent decades, cases in which we also see very little loss of function despite massive brain damage. A 2015 scientific paper looked at 162 cases of surgery to treat brain abscess, in which parts of the brain undergo the cell death known as necrosis, often being replaced with a yellowish pus. The paper contains quite a few photos of brain holes of various sizes caused by the abscesses. The paper says that “complete resolution of abscess with complete recovery of preoperative neuro-deficit was seen in 80.86%” of the patients, and that only about 6% of the patients suffered a major functional deficit, even though 22% of the patients had multiple brain abscesses, and 30% of the abscesses occurred in the frontal lobe (claimed to be the center of higher thought). 

Interestingly, the long review article on 162 brain abscesses treated by brain surgery makes no mention at all of amnesia or any memory effects, other than telling us that “there was short-term memory loss in 5 cases.” If our memories really are stored in our brains, how is it that none of these 162 brain abscess cases seems to have shown any effect at all on permanent memories?

Similarly, a scientific paper on 100 brain abscess cases (in which one fourth of the patients had multiple brain abscesses) makes no mention of any specific memory effect or thinking effect. It tells us that most of the patients had “neurological focal deficits,” but that's a vague term that doesn't tell us whether intellect or memory was affected. (A wikipedia.org article says that such a term refers to "impairments of nerve, spinal cord, or brain function that affect a specific region of the body, e.g. weakness in the left arm, the right leg, paresis, or plegia.") The paper tells us that after treatment “80 (83.3%) were cured, eight (8.3%) died (five of them were in coma at admission), seven had a relapse of the abscess,” without mentioning any permanent loss of memory or mental function in anyone.

Another paper discusses thousands of cases of brain abscesses, without mentioning any specific thinking effects or memory effects.  Another paper refers to 49 brain abscess patients, and tells us that "the frontal lobe was the most common site," referring to the place that is claimed to be a "seat of thought" in the brain. But rather than mentioning any great intellectual damage caused by these brain holes, the paper says that 39 of the patients “recovered fully or had minimal incapacity,” and that five died.

In 1994 Simon Lewis was in his car when it was struck by a van driving at 75 miles per hour. The crash killed Lewis' wife, and “destroyed a third of his right hemisphere” according to this press account. Lewis remained in a coma for 31 days, and then awoke. Now, many years later, according to the press account, “he actually has an IQ as high as the one he had before the crash.” In 1997, according to the press account, Lewis had an IQ of 151, far above the average IQ of 100. How could someone be so smart with such heavy brain damage, if our brains are really the source of our minds?  

These cases are merely a small part of the evidence that large brain damage very often produces only very small effects on mind and memory. The three posts here and here and here give many other cases along the same lines, some suggesting even more dramatically that a large fraction of the brain (often as much as 50% and sometimes as much as 80%) can be lost or removed without causing much memory loss or preventing fairly normal mental function and memory function. The facts of neuroscience do not match the dogmas of neuroscientists, who make unwarranted “brains store memories” and “brains make minds” claims that are in conflict with facts such as medical case histories of high brain damage with little mind damage, the short lifetimes of the proteins that make up synapses, the low signal transmission reliability of noisy synapses, and the failure of scientists to detect any sign of encoded information (other than DNA gene information) in brains.
Apparitions have been reported by humans throughout history. Skeptics claim that such apparitions are just hallucinations. But there are two reasons for rejecting such a theory. The first reason is that there are very many apparition sightings in which a person sees an apparition of someone (typically someone the observer did not know was in danger), and then later finds out that this person died on the same day (or the same day and hour) the apparition was seen. We would expect such cases to be extremely rare or nonexistent if apparitions were mere hallucinations, since each such case would require a most unlikely coincidence. But the literature on apparitions shows that it is quite common for an apparition to appear to someone on the day (or day and hour) of the death of the person matching the apparition. See here and here and here and here for 100 such cases.

The second reason for rejecting claims that apparitions are mere hallucinations is the fact that an apparition is quite often seen or heard by more than one person at the same time. We should not expect any such cases under a theory of apparitions being hallucinations. In a previous post, I described 17 cases of an apparition seen by multiple observers. In this post I'll describe 17 additional such cases not mentioned in my previous post.  

First, I'll describe the appearance of an apparition without human form (almost all of the other cases will involve apparitions with a human form).  On page 28 of the fascinating book Experiences in Spiritualism with Mr. D. D. Home, which can be read here,  we read this account by the viscount Adare, who describes what he saw after experiencing "our beds vibrating strongly" and "raps on the furniture, doors, and all about the room":

We both saw in exactly the same spot, a perfectly white column, it can scarcely be called a figure, as the shape was indistinct. It moved from the dressing table toward Home's bed. Home said he saw a white object about the size of a child floating near the ceiling above his head. I saw it also, but it appeared to me to be in mid air, half way between the bed and the ceiling; it was floating about in a horizontal position; it was like a small white cloud without any well-defined shape...It then slowly floated to the foot of his bed and disappeared. 

Below are some cases from Volume 2 of the classic work on apparitions, “Phantasms of the Living,” which you can read here. I will use the case numbers given in the book. You can click on the links to go the exact page I refer to. 

Page 612, Case 659: A person and his sister were surprised to see at the person's bedside the person's brother, who was believed to be far away doing a job for a reigning prince.  Three weeks later word came that the brother "had died the same night, and the same hour." 

Page 613, Case 660: Two brothers were in bed when they both saw the apparition of a lady to whom their father was engaged. She died suddenly that same night. 

Page 615, Case 662: Between 6 and 7 o'clock, a woman, her brother and her mother all saw Ellen, the woman's sister. The woman "tried to catch hold of her, but seemed to catch nothing." The next day they found out that Ellen had drowned "a little before seven" on the same day they saw the apparition.

Page 616, Case 663: A woman saw her husband's mother. When she cried out, the husband looked up, and "the apparition vanished." On the same evening, two male children of the woman saw a strange silent female figure pacing back and forth in their bedroom. It was later found that the husband's mother had died on the same evening. 

Page 617-618, Case 665: Two brothers woke to see standing between their cots the figure of their father. When one of the brothers rose up, the figure vanished.  They later found that their father had died at the same hour. 

Page 622, Case 667: Four people were startled to see an apparition that disappeared just after a woman screamed, "He's dead."  Word later arrived that a person corresponding to the apparition had been murdered on the same night.  The account says, "his spirit appeared to his wife, his child, an elder sister, and myself." 

I will now discuss a case that appears on page 259 of "Phantasms of the Living," as case #357. But a much earlier source for the sighting is the fascinating and very long 1872 book "Footfalls on the Boundary of Another World" by Robert Dale Owen. On page 383 the book tells us the following narrative. On October 15, 1785, while in America, John Sherbroke saw a figure and brought the sight to the attention of Colonel Wynyard, who identified the figure as his brother, John Wynyard. "Both remained silently gazing on the figure," we are told by Owen. Owen says Sherbroke later received a letter "begging Sherbroke to break to his friend Wynyard the news of the death of his favorite brother, who had expired on the 15th of October, and at the same hour at which the friends saw the apparition." Two and a half years later John Sherbroke saw a man who he thought resembled the figure he had seen when the “apparition sighting” occurred. The man turned out to look so much like John Wynyard that he had previously been mistaken for him.

On page 377 of the same book by Owen, we have a case of an apparition seen by multiple witnesses. A British Lord traveled away from London, leaving his wife there. While in the Scottish Highlands, he saw an apparition of his wife. Ringing for his servant, the man asked the servant what he saw. The servant replied, "It's my lady!"  The wife had died in London that night.  About a year later, the man's daughter exclaimed that she had seen her deceased mother.  The daughter died on the same night from an illness.  Owen said he received the account in writing from a member of the man's family. 

On pages 387-391 of the same book by Owen, we have the account of an apparition seen by Baron de Guldenstubbe in 1854. The Baron reported seeing the apparition at length. Later he questioned people involved in maintaining the property, and found that two others had seen the same apparition, one of them several times. The account was told to Owen by the Baron. 

On pages 398 to 401 of the same book by Owen, we have an account of an apparition seen repeatedly by Madame Hauffe, and also seen once by a forest ranger named Boheim, who said he saw a gray cloud turn into a human form. 

On page 417 to 424 of the same book by Owen, we have a discussion of a dramatic case Owen investigated through interviews with the two witnesses.  A Miss S. saw an apparition of two strange figures several different times, who identified themselves as having a last name of Children. Later a Mrs. R. reported seeing the same two figures,  with the luminous letters "Dame Children" above the female figure.  It was later discovered that a family with the last name of Children had once lived in the house where the apparitions were seen. 

In Camille Flammarion's book "The Unknown," on page 178 (case CLXXVIII) we read that  Minnie Cox reported seeing in 1869 an apparition of her brother, who was far away in Hong Kong.  This occurred an hour after the brother's son (living with Minnie) had reported seeing his father. The next mail which came from China informed her of her brother's death, which had happened on the same day Minnie had seen the apparition. 

A very well-documented case of an apparition seen by multiple observers is the case of the Cheltenham ghost.  Between 1882 and 1886 several observers saw inside and around a particular house an apparition of a mysterious figure in black, who would often be seen with a handkerchief over her face. Rosina Despard saw the figure more than five times.  You can read here the lengthy written testimony of Rosina and the written testimony of several other witnesses who saw the apparition. Rosina testified to "the sudden and complete disappearance of the figure, while in full view" (page 321).  Describing a sighting of the same figure, Edith Morton said, "suddenly I felt a cold, icy shiver, and I saw the figure bend over me" (page 325).  W.H.C. Morton testified to seeing the figure three times. An F.M.K. also reported seeing the figure. An M.E. Brown reported that she and someone else saw a "dark, shadowy figure, dressed in black, and making no noise, glide past us along the passage and disappear round a corner" (page 327), and that she saw the mysterious figure twice more (page 328). 

Perhaps the most famous ghost in history was the mysterious phantom figure of Katie King.  As I describe in this post, the figure was seen by many people over the course of two years, such as the distinguished scientist William Crookes (discoverer of the element thallium).  In that post I quote four different books by four different contemporary authors containing eyewitness testimony of sightings of the mysterious Katie King.  In three of these cases, it's the author of the book who gives part of the eyewitness testimony citing observations of the mysterious figure. Similarly, as described here, the phantom figure of Bien Boa was seen by multiple distinguished witnesses, one a Nobel Prize winner. 

A much more recent case of an apparition seen by multiple observers is the case described by John G. Fuller in his book The Ghost of Flight 401. On pages 138 to 139 he describes three witnesses (a flight captain, a stewardess and a flight supervisor) seeing in the first-class section of the aircraft a silent figure dressed as an airline pilot, a figure who wouldn't respond to questions. "My God, it's Bob Loft," said the captain, referring to an airline pilot who had died not long before in a crash that had killed more than 100 passengers. Then, according to Fuller:

The captain in the first-class seat simply wasn't there. He was there one moment -- and not there the next. 

The aircraft was searched, but the figure was never found.



In the scientific paper here,  researchers found that 39 out of 84 apparition witnesses reported that someone else present shared their experience.  The authors of the paper attempted to track down how many people could corroborate such reports. Nine such people had either died or could not be tracked down, leaving 30 left. The authors stated, "In 21 instances out of the 30, the witnesses verified the respondent’s description of the case." 

Such cases sometimes occur in recent years. For example, on pages 4-6 of the 2018 book Already Here by Leo Galland M.D., Leo tells us that while waiting to hear the fate of his son "three hours away" after an accident, both he and his wife saw an apparition of that son. In the same hour of that day, Leo learned that attempts to revive his son had failed. 

Quite a few additional cases of apparitions seen by multiple observers might be found by doing a Google search for "Marian apparition," although such cases are quite different from those I have discussed.  
Imagine a realtor shows you an apartment for rent. The rent is way too high, and the apartment has a leaky roof, and lots of roaches running about. As you leave the apartment with the realtor, the realtor tries to get you to rent the apartment by saying, “Look, you may not like it, but you have a choice: either rent it, or get used to sleeping out on the cold street.” Such an argument commits the fallacy of a false dilemma, in which someone speaks as if there are only two choices, even though there are a variety of choices. Of course, a renter will almost always have a choice of several different apartments he can rent.

A similar false dilemma is very frequently presented by Darwinists who speak as if we have a choice in believing in orthodox Darwinism or believing in biblical creationism, the idea that all of the earth's species were created only a few thousand years ago. The range of choices is not at all so narrow. Instead of having just two choices in regard to what we can believe about the origin of species, you actually have quite a few belief options you can choose from. In this post I will describe thirteen such choices.

Background: The Fossil Record

The fossil record presents certain constraints on any theory of the origin of species that follows the principle of “Let's believe the past was as it appears to have been.” But as we will see, these constraints are not absolute, because under certain imaginative theories such a principle might actually be dispensed with.

A Darwinist will inevitably describe the fossil record as showing evidence of gradual evolution. But described in the most lean and objective way, that is not necessarily so. What the fossil record does seem to show is very strong evidence that species have  appeared at scattered times during the past 600 million years.

The animal kingdom contains about 36 major divisions called phyla. Contrary to what we might expect under Darwinian assumptions, we do not see most of these phyla appearing during the past 200 million years, in some kind of “cone of increasing diversity.” Instead, most of the animal phyla appear between 600 and 500 million years ago, with no animal phylum appearing after  about 500 million years ago. 

Option #1: Darwinism

There are three tenets of orthodox Darwinism. The first is the belief that all life has descended from a common primitive ancestor. The second is gradualism, the idea that species arise because one species slowly evolves into another species. The third tenet of Darwinism is the idea that new species appear mainly because of random changes or random mutations and natural selection.

There are problems with Darwinism. The first is that we have no direct observational evidence that any of its three main ideas is correct. While we have evidence that species appeared at different intervals during the past 600 million years, this does not prove that such species evolved from earlier ancestors. Such species might have been dropped off by visiting spaceships, or created by some divine creator (to mention two of quite a few possibilities I will discuss). While there is evidence that natural selection can produce microevolution (typically a kind of pruning effect getting rid of unfit members of a population), there is no good evidence that an accumulation of small changes produced by natural selection will result in a new species or a macroscopic biological innovation. What we see in organisms is gigantic amounts of organization, but natural selection is not actually a theory of organization; it's merely a theory of accumulation (that being the word that Darwin used again and again in describing his theory).

A very large objection against the idea of evolution by natural selection is that it cannot explain the early stages (or incipient stages) of any new biological organ, biological system, or biological innovation. Such early stages would almost always fail to produce any reward, so we would not expect that natural selection would cause them to proliferate because of “survival of the fittest.” An additional problem is that Darwinism offers no answer to the origin of life itself, something that cannot be explained by natural selection (which requires life to first exist). 

Option #2: “Third Way” Naturalistic Gradualism

The term “third way” has sometimes been used for the idea that species appear through blind gradual evolution, but that this does not occur mainly because of natural selection. A person following this approach may believe that gradual evolution requires some much more complicated explanation than the simplistic explanation of random mutations and natural selection. The person may appeal to some imagined natural principle of self-organization. Or the person may appeal to ideas such as DNA methylation, gene swapping, or epigenetics. There is a web site listing various scientists who take such an approach. You could put under this category a theory such as the neutral theory of evolution.  

Option #3: Biblical Creationism

Biblical creationism is the idea that species appeared all at once, after being created by a divine creator, as described by the Bible. When combined with fundamentalism, biblical creationism typically holds that all species are no older than a few thousand years. The principal problem with this option is that it conflicts with the fossil record, which suggests that species have appeared over a span of many millions of years.

I may note that it is an error to use the term "creationist" to refer to anyone who has not stated that he believes in the biblical account of creation,  because when you do a Google search for "creationism" the first definition you will get is one that specifically refers to the biblical account of creation.  This error of calling critics of Darwinism "creationists" is very often used in a dishonest way by Darwinism zealots, who will call any critic of Darwinism a creationist even when such a person has not identified himself as a believer in the biblical view of creation. Such critics are properly referred to as "Darwinism skeptics." 

Option #4: Intelligent Design

Intelligent design may be very generally described as the idea that species have appeared after design activity from some intelligent designer. The term is actually a broad umbrella that covers quite a few diverse possibilities. A person believing in intelligent design may or may not believe that the earth's species are descended from a common ancestor, and may or may not believe in gradualism, the idea that one species gradually evolves into another. Typically a person believing in intelligent design will believe that some divine agent is the intelligent designer, although some believers in intelligent design say that the nature of the designer cannot be known (perhaps leaving the door open to some extraterrestrial agent as the intelligent designer).

Option #5: Origin of Species by Extraterrestrial Actions

The term panspermia is used for a theory that life originated on Earth after it came here from space, possibly by comets. The term directed panspermia is sometimes used for the idea that life originated on Earth after some spaceship came here and dropped off microorganisms. There is no reason why speculations about extraterrestrial involvement in earthly biology should be confined to the origin of life. We can take things further, and speculate that all, most or some species now existing have originated because of extraterrestrial actions. The appearances of species we see in the fossil record could have been mostly caused by extraterrestrial spaceships that dropped off biological organisms on our planet. Or perhaps only some of the most hard-to-explain species have originated after extraterrestrial intervention, including our own species.

This Option #5 has often been suggested by the popular television show Ancient Aliens, which often suggests that the seemingly sudden appearance of human culture may have been the result of tinkering by extraterrestrials.

It's not just a theory, it's a TV series

The advantage of believing in Option 5 is that it can reduce some of the huge improbability problems associated with Darwinism. There are reasons for thinking that the chance of a species such as mankind ever appearing because of random Darwinian evolution is less than 1 in a billion. But let us imagine 100 billion habitable planets in our galaxy. The chance of an intelligent species arising on at least one of those planets might have been much better than 1 in a billion. And such an intelligent life form might then have caused intelligent life to appear on our planet. Under such a theory, the improbability is substantially reduced.
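The "many planets" arithmetic above can be made explicit. Below is a minimal sketch, assuming (purely for illustration) a per-planet chance p of 1 in a billion and N of 100 billion habitable planets; the chance that at least one planet produces an intelligent species is 1 - (1 - p)^N:

```python
# Illustrative numbers only: p and N are the assumptions from the paragraph above.
p = 1e-9                 # assumed chance of an intelligent species per habitable planet
N = 100_000_000_000      # assumed number of habitable planets in the galaxy

# Probability that at least one of N independent trials succeeds.
chance_at_least_one = 1 - (1 - p) ** N

print(chance_at_least_one)   # ≈ 1.0, a near-certainty
```

Even with a per-planet chance as tiny as 1 in a billion, the sheer number of planets makes at least one success a near-certainty, which is the reduction in improbability the argument appeals to.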

Option #6: Teleological Gradualism

The term gradualism refers to the idea that biological species have gradually evolved from other species. One can believe in such an idea without believing that natural selection or random mutations are sufficient to explain why one species would evolve into another species. An alternate idea is that there is some kind of cosmic impetus or life-force that drives species towards higher levels of organization and complexity. We might think of this in a rather mystical or vitalistic sense. Or we might think of such a thing as being a kind of cosmic programming.

It is interesting to note that a divine agent would not necessarily cause species to come into existence through some special intervention. Such an agent might create subtle laws and cosmic algorithms that might cause life to appear and start becoming more and more organized across the universe. Much of this “cosmic programming” might be undiscovered by us.

One can believe in such a thing while also maintaining that natural selection is utterly inadequate to explain biological complexity.

Option #7: Origins Agnosticism

The term agnosticism refers to taking no position on whether or not a deity exists. The term “origins agnosticism” can be used for the stance of taking no position as to how biological species arose. An origins agnostic does not maintain that species arose because of Darwinian evolution by natural selection, does not maintain that species arose through any gradual process of evolution, does not maintain that species arose because of some special creation, and does not maintain that species arose because of some form of intelligent design. The origins agnostic simply answers, “I don't understand such matters,” or “No one understands such matters” when asked about such topics.

Given the limits of human knowledge and understanding, a strong case can be made that origins agnosticism is actually the most scientific stance that can currently be taken on the issue of the origin of species.

Option #8: Philosophical Immaterialism

Most people take the fossil record and the geological record as something that forces us, without any flexibility, to believe that the physical universe existed for billions of years before man appeared on the scene. But under certain philosophical assumptions, such a thing is not necessarily so.

One interesting philosophical theory, surprisingly easy to defend, is known as idealism or immaterialism. This is the idea that all that exists are minds or mental experiences, and that matter exists only as something within the perceptions of mental agents, having no reality outside of the perceptions or experiences of minds. Under  such a theory, what is called the first four billion years of earth's history undergoes a kind of demotion, becoming something that exists purely as a perceptual detail or conceptual detail. An immaterialist may believe that the history of planet Earth really started when the first humans existed, and first had mental experiences.

To help get a handle on such an idea, consider a newly created video game. The initial level of play may involve acting as Detective Waterson in the detective's living room, with the calendar listing January 1, 1890 as the date. Inside that living room may be a scrapbook showing events from Detective Waterson's childhood, going back to 1860. But that scrapbook is merely what screenwriters call “back story.” The game doesn't really begin in 1860 – it begins in 1890, with the first day of the player's experience playing Detective Waterson. Similarly, what we call the Jurassic Era and the Triassic Era may be mere “back story.” The history of earth – which actually may be just a history of mental experiences – may have begun only when the first mental experiences occurred. Under such a theory, it may be denied that there ever really existed creatures such as dinosaurs, which may be merely part of a narrative “back story,” and which may be called never-existent because no real mind ever perceived them.

Under such a doctrine of immaterialism, we may need to postulate some non-human mental reality as being the source of human mental reality. Under an immaterialist perspective, all attempts to postulate physical or biological causes for the origin of humanity are mistaken. The thinking goes along the lines of: we are purely mental, and the cause of us must also be purely mental.

Option #9: The Idea Our Planet Is a Technological Simulation

The idea has been widely discussed that we may be part of a computer simulation created by extraterrestrials. Under such a theory, we remove biological evolution as the cause of humanity, and replace such an idea with the idea that extraterrestrial programmers are the cause of humanity.

This idea has many difficulties. One is that it kind of pushes off into the far horizon the question of how minds could ever originally appear. If we are the result of extraterrestrial programming, then any evidence claimed for evolution disappears, becoming just “part of the illusion” or “part of the simulation.” So we are then left with the question: how could these extraterrestrials that programmed our simulation ever have appeared? You can't cite evolution if your assumption has made earthly evolution just “part of the simulation."

Option #10: Hyper-Dimensional Migration of Species

Modern science fiction often talks about space-time wormholes, allowing an instantaneous passage from one part of space to another part of space. Such a concept is highly speculative. Engaging in similar speculations, we may postulate that reality may consist of numerous different dimensions or universes, and that there may be portals or wormholes that allow transit from one dimension or universe to another.

Such a possibility may suggest a strange theory about the origin of species. The theory is that many, most or all species have originated in some other universe or other dimension, and have somehow migrated to our planet, after some organisms passed through some wormhole or portal. There might be some reason why such a wormhole or portal might open up very rarely, perhaps only once every few million years.

A disadvantage of such a theory is that it kind of pushes off to the horizon the ultimate reason as to why species originated, since no explanation is provided as to how they might have originated in some other dimension or other universe.

Option #11: Cosmic Learning

A problem with Darwinism is that it asks us to believe in occurrences that seem all-too-unlikely to have occurred given only the non-creative and non-organizational factors of natural selection and random mutations. But let us imagine something that might increase the chance of natural evolution. We can imagine that the universe somehow has a mysterious ability to learn and remember when incredibly improbable fortunate things happen. So imagine billions of galaxies each containing billions of planets. There might be only 1 chance in 1,000,000,000,000,000,000,000,000 that a particular biological innovation would occur anywhere in the universe. But once that biological innovation had occurred, this might somehow be like some trick that the whole universe had learned. The odds against such an event might then suddenly shrink, going from 1 chance in 1,000,000,000,000,000,000,000,000 to something like 1 chance in 100. Then we might suddenly see that biological innovation occurring with great ease all over the universe.

I have no idea of how such “cosmic learning” might work, but conceivably some theorist might flesh out this vague suggestion. One possibility might be to imagine that the universe itself has a kind of mind, to some degree.

Option #12: Biological Innovations from Unknown Previous Earthly Civilizations

An interesting rarely considered possibility is that our civilization may not be the first high-technology civilization to arise on our planet. It could be that many millions of years ago some technical civilization arose on our planet. Almost all traces of such a civilization could have been lost because of geological activity. If an earlier civilization had existed on Earth, it might have engaged in genetic engineering, creating new forms of life. Some of the species that now exist may have been created by such a civilization.

Option #13: The Origin of Species by Less-Than-Divine Spiritual Entities

Yet another possibility is that one or more species (possibly mankind) were created not by the creator of the universe, but by some poorly understood spiritual entities of lesser power.  Possibilities include angels, demons or any other hypothetical spiritual entities such as mysterious disembodied spirits.  Appealing to such a possibility may have the advantage that it defeats all arguments along the lines that an omnipotent power would not have created creatures so flawed as humans are.  If humans were created by less-than-divine spiritual entities, we can draw no conclusions about whether the creations of such entities would be perfect.  

Conclusion

We have seen that there are many possible belief options regarding the origin of species. I have listed thirteen, and some more imaginative thinker could probably add five or ten more.

There is another factor to consider, which expands the possibility set further.  This is the fact that few of these possibilities are mutually exclusive.  It is possible that multiple causal factors were involved in the origin of species, and that some species appeared for one reason, and other species appeared for others. For example,  it could be that all earthly species arose for some particular reason, and that humanity (with so many unique characteristics) arose because of some very different reason.  When we consider the possibility that species may appear because of a combination of two or more of the 13 items I have listed here, then the total set of belief possibilities would seem to be in excess of 25. 
A very accomplished technologist and inventor, Ray Kurzweil has become famous for his prediction that there will before long be a “Singularity” in which machines become super-intelligent (a prediction made in his 2005 book The Singularity Is Near). In his 1999 book The Age of Spiritual Machines, Ray Kurzweil made some very specific predictions about specific years: the year 2009, the year 2019, the year 2029, and the year 2099. Let's look at how well his predictions for the year 2019 hold up to reality.

Prediction #1: “The computational ability of a $4,000 computing device (in 1999 dollars) is approximately equal to the computational capability of the human brain (20 million billion calculations per second).” (“2019” timeline prediction, page 203.)

Reality: A $4,000 computing device in 1999 dollars is equivalent to about a $6,000 computing device today. There is no $6,000 computing device that can compute anywhere near as fast as 20 million billion calculations per second. A computer such as the Apple iMac Pro (with a price of about $5,000) runs at a clock speed of 3.2 gigahertz, which corresponds to only a few billion operations per second per core. This article claims that when loaded with an 18-core processor (which raises the price above $10,000), the Apple iMac Pro can do 11 trillion floating point operations per second (11 teraflops). Even if we use that figure rather than the clock speed, we still have computing capability roughly 1000 times smaller than the computing capability predicted by Kurzweil for such a device in 2019.
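As a rough check of the gap, using the article's own two figures (both approximate):

```python
# Kurzweil's predicted capability vs. the article's cited iMac Pro figure.
predicted_ops = 20e15    # 20 million billion calculations per second (Kurzweil's figure)
imac_pro_flops = 11e12   # ~11 teraflops claimed for an 18-core iMac Pro (article's figure)

# How many times short of the prediction the real machine falls.
shortfall = predicted_ops / imac_pro_flops
print(round(shortfall))  # ~1800: roughly three orders of magnitude short
```

The exact ratio is about 1,800-fold, so "roughly 1000 times smaller" is, if anything, generous to the prediction.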

Prediction #2: “Computers are now largely invisible and are embedded everywhere – in walls, tables, chairs, desks, clothing, jewelry, and bodies.” (“2019” timeline prediction, page 278.)

Reality: Nothing like this has happened, and while computers are smaller and thinner, they are not at all "largely invisible."

Prediction #3: “Three-dimensional virtual reality displays, embedded in glasses and contact lenses, as well as auditory 'lenses,' are used routinely as primary interfaces for communication with other persons, computers, the Web, and virtual reality.” (“2019” timeline prediction, page 278.)

Reality: Such things are not at all used “routinely,” and I am aware of no cases in which they are used at all. There is very little communication through virtual reality displays, and when it is done it involves bulky apparatus like the Oculus Rift device, which resembles a scuba mask.

Prediction #4: “Most interaction with computing is through gestures and two-way natural language spoken communication.” (“2019” timeline prediction, page 278.)

Reality: Partially correct, if we consider smartphone swipes as a gesture.

Prediction #5: “High-resolution, three dimensional visual and auditory virtual reality and realistic all-encompassing tactile environments enable people to do virtually anything with anybody, regardless of physical proximity." (“2019” timeline prediction, page 278.)

Reality: This sounds like a prediction of some reality similar to the Holodeck depicted in the TV series Star Trek: The Next Generation, or a prediction that realistic virtual sex will be available by 2019. Alas, we have no such things.

Prediction #6: “Paper books or documents are rarely used and most learning is conducted through intelligent, simulated software-based teachers.” (“2019” timeline prediction, page 278.)

Reality: Paper books and documents are used much less commonly than in 1999, but it is not at all true that we have “intelligent, simulated software-based teachers.” For example, you cannot download a “history teacher” app that you can talk to about history the way you would talk to a flesh-and-blood history teacher.

Prediction #7: “The vast majority of transactions include a simulated person.” (“2019” timeline prediction, page 279.)

Reality: A large percentage of transactions are electronic, but very few of them involve a simulated person.

Prediction #8: “Automated driving systems are now installed on most roads.” (“2019” timeline prediction, page 279.)

Reality: Although there are a few self-driving cars on the road, 99% of traffic is old-fashioned traffic with human drivers.

Prediction #9: “People are beginning to have relationships with automated personalities and use them as companions, teachers, caretakers and lovers.” (“2019” timeline prediction, page 279.)

Reality: This isn't happening in 2019. You can have a little interaction with an on-screen figure in a video game, but it's a very limited type of thing (such as choosing one of 5 text responses you can make to the person). 

Prediction #10: “Most flying weapons are small -- some as small as insects -- with microscopic flying weapons being researched.” (“2019” timeline prediction, page 207.)

Reality: The public has not yet even heard of tiny flying weapons.

Prediction #11: "The expected lifespan...has now substantially increased again, to over one hundred."  (“2019” timeline prediction, page 208.)

Reality: The most recent lifespan figure is 78.9 years for US residents. A December 2018 article says the US lifespan has dropped for three years in a row. 

So Kurzweil's predictions for 2019 were very far off the mark. Are there any reasons to think that his predictions for 2029 and 2099 are unlikely to be correct? There certainly are. 

One reason is that Kurzweil never did much to prove his claim that there is a Law of Accelerating Returns causing the time interval between major events to grow shorter and shorter. On page 27 he tries to derive this law from evolution, claiming that natural evolution follows such a law.  But we don't see such a law being observed in the history of life.  Not counting the appearance of humans, by far the biggest leap in biological order occurred not fairly recently, but about 540 million years ago, when almost all of the existing animal phyla appeared rather suddenly during the Cambrian Explosion.  No animal phylum has appeared in the past 480 million years. So we do not at all see such a Law of Accelerating Returns in the history of life.  There has, in fact, been no major leap in biological innovation during the past 30,000 years. 

Kurzweil's logic on page 27 contains an obvious flaw. He states this:

The advance of technology is inherently an evolutionary process.  Indeed, it is a continuation of the same evolutionary process that gave rise to the technology-creating species. Therefore, in accordance with the Law of Accelerating Returns, the time interval between salient advances grows exponentially shorter as time passes.  

This is completely fallacious reasoning, both because the natural history of life has not actually followed a Law of Accelerating Returns, and also because the advance of technology is not a process like the evolutionary process postulated by Darwin.  The evolutionary process imagined by Darwin is blind, unguided, and natural, but the growth of technology is purposeful, guided and artificial.

On the same page, Kurzweil cites Moore's Law as justification for the Law of Accelerating Returns. For a long time, this rule-of-thumb held true: the number of transistors on a chip doubled roughly every two years. But in 2015 Moore himself said, "I see Moore's law dying here in the next decade or so." Machines smarter than humans would require stratospheric leaps forward in computer software, but computer software has never grown at anything like an exponential pace or an accelerating pace. Nothing like Moore's Law has ever existed in the world of software development. Kurzweil has occasionally attempted to suggest that evolutionary algorithms will produce some great leap that will speed up the rate of software development. But a 2018 review of evolutionary algorithms concludes that they have been of little use, and states: "Our analysis of relevant literature shows that no one has succeeded at evolving non-trivial software from scratch, in other words the Darwinian algorithm works in theory, but does not work in practice, when applied in the domain of software production."
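For scale, a doubling every two years compounds into three orders of magnitude over two decades; a minimal sketch of that arithmetic:

```python
# Compound doubling: ten two-year doubling periods spanning twenty years.
factor = 1.0
for _ in range(10):      # 20 years / 2 years per doubling = 10 doublings
    factor *= 2
print(factor)            # 1024.0, i.e. roughly a 1000x gain in 20 years
```

That compounding is what Moore's Law delivered for hardware; the article's point is that no comparable curve has ever been observed for software productivity.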

The biggest reason for doubting Kurzweil's predictions beyond 2019 is that they are based on assumptions about the brain and mind that are incorrect. Kurzweil is an uncritical consumer of neuroscientist dogmas about the brain and mind. He assumes that the mind must be a product of the brain, and that memories must be stored in the brain, because that is what neuroscientists typically claim. If he had made an adequate study of the topic, he would have found that the low-level facts collected by neuroscientists do not support the high-level claims that neuroscientists make about the brain, and frequently contradict such claims. To give a few examples:

  • There is no place in the brain suitable for storing memories that last for decades, and things like synapses and dendritic spines (alleged to be involved in memory storage) are unstable, "shifting sands" kind of things which do not last for years, and which consist of proteins that only last for weeks.
  • The synapses that transmit signals in the brain are very noisy and unreliable,  in contrast to humans who can recall very large amounts of memorized information without error.
  • Signal transmission in the brain must mainly be a snail's pace affair, because of very serious slowing factors such as synaptic delays and synaptic fatigue (wrongly ignored by those who write about the speed of brain signals), meaning brains are too slow to explain instantaneous human memory recall.
  • The brain seems to have no mechanism for reading memories.
  • The brain seems to have no mechanism for writing memories, nothing like the read-write heads found in computers.
  • The brain has nothing that might explain the instantaneous recall of long-ago-learned information that humans routinely display, and has nothing like the things that allow instant data retrieval in computers.
  • Brain tissue has been studied at the most minute resolution, and it shows no sign of storing any encoded information (such as memory information) other than the genetic information that is in almost every cell of the body.
  • There is no sign that the brain or the human genome has any of the vast genomic apparatus it would need to have to accomplish the gigantic task of converting learned conceptual knowledge and episodic memories into neural states or synapse states (the task would presumably require hundreds of specialized proteins, and there's no real sign that such memory-encoding proteins exist).
  • No neuroscientist has ever given a detailed explanation of how such a gigantic translation task of memory encoding could be accomplished (one that included precise, detailed examples).
  • Contrary to the claim that brains store memories and produce our thinking, case histories show that humans can lose half or more of their brains (due to disease or hemispherectomy operations), and suffer little damage to memory or intelligence (as discussed here). 

Had he made a study of paranormal phenomena, something he shows no signs of having studied, Kurzweil might have come to the same idea suggested by the neuroscience facts above: that the brain cannot be an explanation for the human mind and human memory, and that these things must be largely effects or aspects of some reality that is not neural, probably a spiritual facet of humanity.

Since he believes that our minds are merely the products of our brains, Kurzweil thinks that we will be able to make machines as intelligent as we are, and eventually far more intelligent than we are, by somehow leveraging some "mind from matter" principle used by the human brain. But no one has any credible account of what such a principle could be, and certainly Kurzweil does not (although he tried to create an impression of knowledge about this topic with his book How to Create a Mind).   We already know the details of the structure and physiology of the brain, and what is going on in the brain in terms of matter and energy movement.  Such details do nothing to clarify any "mind from matter" principle that might explain how a brain could generate a mind, or be leveraged to make super-intelligent machines. 


It's like a new religion

Holding mistaken beliefs about minds being made by brains, and memories being stored in brains,  Kurzweil derives from these beliefs various bizarre predictions for 2099 that will not prove accurate, such as the belief that in that year "there is no longer any clear distinction between humans and computers," and "most conscious entities do not have a permanent physical presence," and "the number of software-based humans vastly exceeds those still using native neuron-cell-based computation,"  and "life expectancy is no longer a viable term in relation to intelligent beings"  (page 280). There is a good reason to think that humans will indeed one day no longer be concerned by the prospect of death, but that will probably come from understanding that our minds never arose from our brains, but from a spirit or soul that is imperishable. 

Since he predicts so many fantastic technology advances coming from a Law of Accelerating Returns, there is a certain cosmic problem faced by Kurzweil: the fact that we do not observe any sign of extraterrestrial activity anywhere else in outer space. We would think that if a single civilization got the type of super-powers Kurzweil imagines, that they would have transformed our galaxy; but we can see no signs of such a thing.  Kurzweil suggests a solution to this, one that is quite wacky. What he suggests is that extraterrestrials are too tiny to be noticed. He says on page 257, "A computational-based superintelligence of the late twenty-first century here on Earth will be microscopic in size." From this most dubious assumption, he makes the batty conclusion on page 258 that extraterrestrial spaceships are "thus likely to be smaller than a grain of sand...Perhaps this is one reason we have not noticed them."   
Thomas Huxley was a nineteenth century writer who is remembered as “Darwin's bulldog.” Not an original thinker, Huxley excelled at debate. He was just the type of pushy pitchman that a rather retiring person like Darwin needed to sell his ideas. Nowadays if someone is asked to remember a quote by Huxley, he or she will be most likely to remember the quote below:

“Sit down before fact as a little child, be prepared to give up every preconceived notion, follow humbly wherever and to whatever abysses nature leads, or you shall learn nothing. I have only begun to learn content and peace of mind since I have resolved at all risks to do this.”

But it seems that it was hypocritical for Huxley to have made such a statement. For at a very important point in his life, when he was asked to join a committee investigating phenomena contrary to his preconceived notions, Huxley refused to participate. He acted like the church officials of Galileo's time who supposedly refused to look through the telescope of Galileo.

The story of the committee in question is well told in Chapter VIII of the book “Mysterious Psychic Forces” by the famous astronomer Camille Flammarion (a fascinating book that it is well worth reading in its entirety). The committee was formed by the Dialectical Society of London, founded in 1867. In January 1869 the society resolved, “That the Council be requested to appoint a Committee in conformity with Bye-law VII., to investigate the Phenomena alleged to be Spiritual Manifestations, and to report thereon.” The committee formed consisted of twenty-seven persons, most of whom were skeptical of paranormal claims.  

Thomas Huxley was asked to join the committee, but refused. At the time of his refusal, it was quite clear that there was a very substantial and important body of abnormal psychic phenomena worthy of investigation. In the preceding years countless observers in the US and England had participated in an activity called table turning. Very many reliable observers would report that when a small group of people put their hands on a table, the table would sometimes tilt dramatically or levitate. This phenomenon was scientifically investigated at length by Count Agenor de Gasparin, who had published in 1857 a scientific book describing countless paranormal effects observed under controlled conditions. (Gasparin's research is well-summarized in Chapter VI of Flammarion's book.)   Shockingly, it appeared that the phenomenon of table turning had stood up very well to rigorous scientific experiments.

Moreover, in the years leading up to 1869 there was very widespread discussion in the press of the astonishing events observed at the seances of the medium Daniel Dunglas Home (who had published an account of many such incidents in 1864). A very large number of distinguished witnesses had claimed to have seen all kinds of inexplicable paranormal events occurring around Home, often claiming they had seen him levitate. For example, only the previous year (1868) three very distinguished witnesses in London (the Earl of Dunraven, Lord Lindsay, and Captain C. Wynne) had claimed that on December 16, 1868 they had seen Home float out of a high window and float back into another window. A description of this event (along with many equally amazing testimonies) can be read here on pages 80 to 85 of an 1869 book by one of these people, Windham Thomas Wyndham (the fourth Earl of Dunraven).  Home's fame around 1868-1869 was so great that the next year one of England's foremost scientists (Crookes) investigated him at length, with results described here. Meanwhile, in many places in England and the United States, between 1847 and 1870 there were very many reports of an inexplicable rapping phenomenon, in which loud, inexplicable noises would be reported coming from tables and walls.

So when the Dialectical Society of London asked Thomas Huxley to participate in their investigation of anomalous phenomena in 1869, there was every reason to believe that there was something worthy of looking into – either to substantiate, or to debunk. But Huxley refused to participate, even though the committee was mainly composed of skeptics. Judging from the committee's report, his mind might have been changed if he had participated.

Excerpts of the report of the committee can be read here, and the entire report can be read here. Below are some quotes from the report (go to this link to see the place in the report from which I quote).

  1. "Thirteen witnesses state that they have seen heavy bodies – in some instances men – rise slowly in the air and remain there for some time without visible or tangible support.
  2. Fourteen witnesses testify to having seen hands or figures, not appertaining to any human being, but life-like in appearance and mobility, which they have sometimes touched or even grasped, and which they are therefore convinced were not the result of imposture or illusion.
  3. Five witnesses state that they have been touched, by some invisible agency, on various parts of the body, and often where requested, when the hands of all present were visible.
  4. Thirteen witnesses declare that they have heard musical pieces well played upon instruments not manipulated by an ascertainable agency.
  5. Five witnesses state that they have seen red-hot coals applied to the hands or heads of several persons without producing pain or scorching; and three witnesses state that they have had the same experiment made upon themselves with the like immunity.
  6. Eight witnesses state that they have received precise information through rappings, writings, and in other ways, the accuracy of which was unknown at the time to themselves or to any persons present, and which, on subsequent inquiry was found to be correct.
  7. One witness declares that he has received a precise and detailed statement which, nevertheless, proved to be entirely erroneous.
  8. Three witnesses state that they have been present when drawings, both in pencil and colours, were produced in so short a time, and under such conditions as to render human agency impossible.
  9. Six witnesses declare that they have received information of future events and that in some cases the hour and minute of their occurrence have been accurately foretold, days and even weeks before."

Flammarion states that the committee included "physicists, chemists, astronomers and naturalists, several of them members of the London Royal Society."  At this place in the report, the committee made the following general conclusions:

"These reports, hereto subjoined, substantially corroborate each other, and would appear to establish the following propositions:—
1. That sounds of a varied character, apparently proceeding from articles of furniture, the floor and walls of the room (the vibrations accompanying which sounds are often distinctly perceptible to the touch) occur, without being produced by muscular action or mechanical contrivance.
2. That movements of heavy bodies take place without mechanical contrivance of any kind or adequate exertion of muscular force by the persons present, and frequently without contact or connection with any person.
3. That these sounds and movements often occur at the times and in the manner asked for by persons present, and, by means of a simple code of signals, answer questions and spell out coherent communications.
4. That the answers and communications thus obtained are, for the most part, of a commonplace character; but facts are sometimes correctly given which are only known to one of the persons present."

Clearly given this astonishing report by the committee, there was something obviously worthy of investigation at the time Thomas Huxley refused to participate in the committee. And the whole issue of whether humans may have psychic powers or some kind of soul or spirit is one that is extremely relevant to the very thing on which Huxley claimed expertise. For if humans have spiritual abilities beyond current explanation, something that could never be explained by natural selection, that is something very relevant to claims about the explanatory power of natural selection. Indeed, the co-originator of the theory of evolution by natural selection (Alfred Russel Wallace) modified his views on the topic partially because of what he had observed relating to paranormal phenomena.

So it seems that London resident Thomas Huxley could have had no reasonable excuse for refusing to participate in the investigation of the Committee of the London Dialectical Society. What reason did Huxley give for not participating? Other than the perfunctory excuse of being too busy, the only excuse he offered was in saying, “I take no interest in the subject,” and that “supposing the phenomena to be genuine – they do not interest me.” How lame an excuse was that? It was made clear to Huxley that the phenomena to be investigated were things such as mysterious powers of levitation and paranormal powers of the mind. How could anyone claim that if such phenomena were genuine, that they would not interest him?

We should give a name to the type of intellectual cowardice and lack of candor which Huxley displayed in this matter. I propose the name below.

The Huxley Syndrome: a refusal to examine evidence that might conflict with one's assumptions about the way the world works.
It seems rather clear that the reason why Huxley refused to participate is that he wanted to avoid being exposed to evidence that would conflict with his assumptions about the way the world worked and his beliefs about the origin and nature of humans. Huxley's "Collected Essays" (which can be read here) is a book that is filled with negative pronouncements about the paranormal and the miraculous, but we see no sign in the book that Huxley ever did anything to investigate or read up about the paranormal, other than to read the ancient gospels. 

Sadly, the Huxley Syndrome is rampant in modern academia.  So, for example, we have a majority of scientists who fail to study any of the vast evidence for paranormal phenomena, and who give very lame excuses for such scholarly indolence -- for example, the claim that the evidence "is not worthy of attention" or "not suitable for study by scientists" or that phenomena so often observed and documented by so many reliable observers (including world-class scientists, as discussed here, here, and here) are "clearly impossible." 
On Thursday physicist Sabine Hossenfelder published a blog post pushing the dogma that all scientific theories have to be falsifiable. The post had this dogmatic title: “Yes, scientific theories have to be falsifiable. Why do we even have to talk about this?” Since Hossenfelder gives not a single reason for believing that all scientific theories must be falsifiable, we may wonder why she is confident about this claim, so confident that she asks, “Why do we even have to talk about this?” She claims that all hypotheses that are not “falsifiable through observation” are hypotheses that “belong into the realm of religion.” Rather than trying to present any reasons for believing this strange claim, she states, “That much is clear, and I doubt any scientist would disagree with that.” Such a claim is neither clear nor logical, and there are many scientists and philosophers who would disagree with it.

To disprove the idea that scientific theories have to be falsifiable, you need merely provide some examples of important scientific theories that could never be falsified. It is very easy to do this. Let's start with one of the most widely discussed and best-loved theories of modern times, the theory that life exists on some other planet. There is no way to falsify this theory, because the universe is too big.

The universe consists of billions of galaxies, and in each of these galaxies there are millions or billions of stars. Astronomers believe that planets are extremely common, and that a large fraction of all stars have planets. What would you need to do to falsify the belief that extraterrestrial life exists? You might think that you could do this in theory by launching some grand fleet of spaceships to search all planets. But such a task would be impossible. There would be too many planets to search, and it would take too long. The speed of light limits travel to other stars. So some grand fleet of spaceships would require billions of years to search all of the other planets in the universe.
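The scale of the problem can be sketched with a back-of-envelope calculation. The distance figure below is my own rough assumption, chosen only for illustration:

```python
# Rough sketch with an assumed round number: even a probe moving at the
# speed of light (the physical upper limit) would need billions of years
# to reach the most distant planets in the observable universe.

FARTHEST_GALAXIES_LY = 13e9  # assumed light-travel distance to the most distant galaxies, in light-years

# At speed c, a ship covers one light-year per year, so the one-way
# travel time in years equals the distance in light-years.
min_travel_years = FARTHEST_GALAXIES_LY

print(f"Minimum one-way travel time: {min_travel_years:.1e} years")
# -> about 1.3e10 years, comparable to the age of the universe itself
```

And this is only a one-way trip to a single destination, with no time budgeted for actually surveying any planet.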

Suppose that after, say, five billion years of exploration such a massive fleet of spaceships still reported no signs of extraterrestrial life. Would that falsify the theory that extraterrestrial life exists? It certainly would not. For in the billions of years that such an expedition had been operating, it would always be possible that extraterrestrial life had appeared in one of the places that had already been searched. So searching every single extraterrestrial planet in the universe for life (without finding any) would absolutely not falsify the claim that extraterrestrial life exists.


Galaxies (Credit: NASA)

Could we imagine, perhaps, that such an expedition might place cameras on every planet that it searched, to try to send back to Earth a signal allowing us to say that at this current moment none of them have life? That wouldn't work, because there is no signal that can travel faster than the speed of light. So, for example, if we got a signal from some camera that had been left on some planet 130,000,000 light-years away, and the camera showed no life, that would only prove that part of the planet did not have life 130,000,000 years ago. It would not prove that the planet does not now have life. Nor would it even prove that 130,000,000 years ago the planet had no life, for the planet might have life in some place that the camera was not observing. Besides the fact that you can't get live “showing it as it is right now” signals from planets many light-years away, there is the difficulty that you can't fill a planet with cameras, and there's all kinds of life that cannot be detected with cameras.

Very clearly, the important scientific theory that life exists on other planets is a theory that is not falsifiable. This simple fact destroys the credibility of the claim that a scientific theory has to be falsifiable. There is no logical need to write anything further on the topic. But just for the sake of overkill, I will give some further examples of scientific theories that are not falsifiable.

Another example of a very popular and widely discussed theory is the theory of natural abiogenesis, which is the theory that life naturally arose from non-life. It is theoretically quite possible to get a result supporting this theory. If a lab that was simulating early Earth conditions reported that life had spontaneously arisen from chemicals, that would be evidence supporting the theory of natural abiogenesis. But it is quite impossible to falsify the theory of natural abiogenesis. Even if you did a billion years of experiments trying to produce life from lab chemicals, and none of them were successful, that still would not prove that life had not arisen from chemicals because of some incredibly unlikely once-in-a-galaxy rare event.
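A toy calculation shows why a failed search cannot rule out a rare event. Every number below is an assumption chosen purely for illustration, not a measured value:

```python
# Toy sketch with assumed numbers: an event can be far too rare ever to
# appear in a lab experiment, yet still be expected somewhere in a galaxy
# that offers vastly more planets and vastly more time.

p_per_planet_year = 1e-20  # assumed chance of abiogenesis per planet per year
lab_years = 1e9            # a billion years of hypothetical lab experiments
planets = 1e11             # rough planet count in a large galaxy
galaxy_age_years = 1e10    # rough time available for the event to occur

expected_in_lab = p_per_planet_year * lab_years
expected_in_galaxy = p_per_planet_year * planets * galaxy_age_years

print(expected_in_lab)     # ~1e-11: effectively never observed in the lab
print(expected_in_galaxy)  # ~10: still expected somewhere in the galaxy
```

So a billion years of negative experiments would be entirely consistent with abiogenesis having happened as a once-in-a-galaxy event.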

A large class of scientific theories that cannot be falsified are those which describe realities on which our existence depends. Human biological existence has very many physical dependencies, so there ends up being quite a few theories describing realities on which our existence depends. To give one example, our existence depends on a strong nuclear force that binds together protons and neutrons in the nucleus of an atom. There is a theory corresponding to this reality, the theory that there exists a force that causes protons to be bound together in the nucleus, despite the very strong electromagnetic repulsion of their positive charges. Could we falsify this theory? No, we could not. We could only falsify it by observing that something on which our existence depends does not exist. But that could never happen.  Similarly, the theory of electromagnetism is a theory describing the basic electrical repulsion and attraction on which biological chemistry depends. We cannot falsify such a theory, since that would involve observing something that is a prerequisite for our existence.

Some other fundamental theories are the kinetic theory of matter (that gas consists of small particles in motion), the cellular theory of life (that cells are the fundamental building blocks of life), and the atomic theory of matter (that atoms are the basic building blocks of matter). None of these theories can be falsified. Having made countless observations showing such realities, there is no way that future observations will refute them. We cannot imagine any observations that would cause us to believe that gases do not consist of moving tiny particles, or any observations that would cause us to believe that living things are not made up of cells, or any observations that would cause us to believe that rocks are not made of atoms.

The common expression “you can't prove a negative” is largely correct in suggesting that very many theories cannot be disproved or falsified. For example, you can prove that someone masturbates, but cannot prove that he does not masturbate.

Instead of the principle “a scientific theory must be falsifiable,” a much better principle is “a scientific theory should be either verifiable or falsifiable.” In general, any theory that is either verifiable or falsifiable can be considered a scientific theory. The theory of extraterrestrial life is a scientific theory, because we can easily imagine some simple observational events that would verify it. The theory of natural abiogenesis is a scientific theory because we can easily imagine some simple observational events that could verify it. 

The silly idea that a scientific theory must be falsifiable is one that was advanced by philosopher of science Karl Popper, because of ideological considerations. People such as Popper wanted to stigmatize theories that they disliked. So they advanced the strange, illogical idea that a scientific theory must be falsifiable, hoping that it would help brand certain types of theories and experimental inquiry as being unscientific.

And so, while scientists such as Joseph Rhine were piling up strong experimental evidence for the existence of ESP,  as discussed here, thinkers such as Karl Popper were trying to sell the idea that a scientific theory must be falsifiable. This was ideologically convenient. It could now be claimed that the theory that ESP exists is not scientific, since there is no way that it could be falsified (even a million negative ESP tests would not prove that ESP does not sometimes occur). This kind of effort makes no sense, for the same sword invented to kill ESP kills just as effectively the theory of extraterrestrial life, the theory of natural abiogenesis, the kinetic theory of matter, and quite a few other things that ESP-loathing scientists may prefer to believe in.

I may note that a principle that Scientific American attributes to philosopher of science Karl Popper, the claim that “pseudo-science seeks confirmations and science seeks falsifications,” is bunk and fantasy. Mainstream scientists typically spend most of their time searching for confirmations, spend almost no time attempting to falsify the theories they most love, and are extremely bad about examining evidence that seems to contradict such theories. So, for example, neuroscientists spend endless hours trying to confirm their dogmas about brains storing memories and brains producing thoughts, but they pay almost no attention to the many facts that argue against such claims. Modern science academia is a conformity culture with cherished dogmas, and it is a culture that punishes and stigmatizes heretics and contrarian thinkers who attempt to discredit those dogmas. This is the exact opposite of an approach centered around falsification. The fact that scientists aren't very interested in falsifying things is shown by the fact that many science journals never publish negative experimental results.

Karl Popper's essay “Science as Falsification” is a very revealing one, because it makes quite clear that Popper's claim about falsification was not at all something that came from a study of the way scientists actually behave, but was instead a claim that was contrived for the sake of creating a weapon against a few theories Popper didn't like. Popper makes clear in the paper that he was bothered by the popularity of three theories: Freud's theory of psychoanalysis, Marx's theory of economics and history, and Adler's theory of psychology. Popper reveals in his essay that he invented his theory of falsification to combat such theories. It was actually a poor weapon against Marx's theory and Freud's theory, both of which actually are falsifiable. Freud's theory could be falsified if people became much crazier by going to Freudian psychotherapists, and Marx's theory could be falsified if capitalist countries all became blissfully happy and Marxist countries all became miserable failures.

The whole idea of creating a theory of how science works (“science as falsification”) based on a desire to create a weapon against theories you don't like is one that makes no sense at all. Just as Marx and Freud were dogmatic thinkers who got far more influence than they deserved, Popper was a thinker who got far more attention than he deserved. Popper was bothered that the Marxists and Freudians claimed to see confirmations that weren't really confirmations. If people are claiming confirming evidence that isn't really there, a good way to handle that is to advocate tighter standards for confirming evidence, and to show how particular claims of confirming evidence are unfounded – not to be making untrue claims suggesting science is all about falsification rather than confirmation. Were we to follow Popper's bizarre claims such as “confirming evidence should not count except when...it can be presented as a serious but unsuccessful attempt to falsify the theory,” we would have to throw out a large fraction of the scientific results in textbooks.

Obviously untrue, the claim that scientific theories are all falsifiable is simply a rhetorical device, one that is occasionally trotted out by people as an argumentative weapon to use against theories they don't want to believe in (sometimes theories for which there is a great deal of evidence). It's a very ineffective weapon, because it's so easy to find examples of respected scientific theories that are not falsifiable. 
Judging from the title of the recent book Understanding the Brain: From Cells to Behavior to Cognition by Harvard neuroscientist John E. Dowling, you might think the author does something to explain how cognition such as thinking and memory recall can be produced by cells.  But the author doesn't do anything to explain how such a thing could occur. 

Judging from its index, Dowling's 295-page book makes no mention of the topics of thinking, abstract thinking, ideas, concepts, recall, recognition or reasoning (none of which have an index entry).  The topic of thinking and consciousness doesn't really appear until the last chapter in the book, which starts out on page 253 with this very silly statement: "Human consciousness is just about the last surviving mystery."  The worlds of cosmology, physics, psychology, history and biology are actually filled with 1001 great mysteries that humans don't understand. 

Our biologists do not even know how the simplest prokaryotic cell could have originated, and they do not have a credible account of how eukaryotic cells originated (only a very unbelievable tall tale). Our biologists do not even know what causes a cell to split into two copies of itself; they do not know what causes polypeptide chains to fold into the three-dimensional shapes of proteins; and they do not know what causes a fertilized ovum to progress to become a human baby. Our biologists do not know how humans are able to remember things for 50 years (despite rapid protein turnover in synapses), and how humans can instantly remember things they learned or experienced decades ago.  Our biologists do not have credible explanations of the origin of humans, the origin of language,  or the origin of complex biological innovations (they merely have achievement legends about some of these things). So it is most ridiculous when biologists such as Dowling say things such as "Human consciousness is just about the last surviving mystery," thereby making the modern scientist sound as if he knows a thousand times more than he actually does.  

Equally ridiculous is Dowling's assertion on page 253 that the following mysteries "have been tamed": "the mystery of the origin of the universe, the mystery of life and reproduction, the mystery of the design to be found in nature, the mysteries of time, space and gravity."  No, these mysteries have not at all been "tamed." I will have a future post on why biologists do not actually understand 90% of sex and reproduction; and I may note that gravity is so little understood that it does not even have a place in the standard model of physics.  

Dowling asserts on page 260, "Clearly our rich mental life depends on higher cortical function," but he presents no good evidence to back up this claim. We actually know that crows are quite smart, despite having very small brains that lack a neocortex. We know also that damage to the prefrontal cortex has little effect on mental abilities,  as I documented quite thoroughly in this post, which includes references to many neuroscience papers. 

The assertion quoted above is followed by an appeal to experiments of the scientist Wilder Penfield in which people recalled things after having parts of their brains electrically stimulated. Dowling states on page 262:

Stimulating particular parts of the cortex evoked visual and auditory sensations along with emotions and feelings. Clearly these experiences were evoked from within. 

But it's actually not known whether visual images arising from such stimulation are memories,  hallucinations, or simply vivid imagination. A review of 80 years of experiments on electrical stimulation of the brain uses the word “reminiscences” for accounts that may or may not be memory retrievals. The review tells us, “This remains a rare phenomenon with from 0.3% to 0.59% EBS [electrical brain stimulation] inducing reminiscences.” The review states the following:

We observed a surprisingly large variety of reminiscences covering all aspects of declarative memory. However, most were poorly detailed and only a few were episodic. This result does not support theories of a highly stable and detailed memory, as initially postulated, and still widely believed as true by the general public....Overall, only one patient reported what appeared to be a clearly detailed episodic memory for which he spontaneously specified that he had never thought about it....Overall, these results do not support Penfield's idea of a highly stable memory that can be replayed randomly by EBS. Hence, results of EBS should not, at this stage, be taken as evidence for long-term episodic memories that can sometimes be retrieved.

So the actual experimental results don't support what Dowling has insinuated, and leave us with very great doubt as to whether such reports are of something retrieved "from within" a brain.  You may realize the fallacy of thinking that recalling something during brain stimulation proves something about memory location if you consider the following: when you go to a masseuse, and have your back massaged, you may recall various memories while lying on your stomach, but that doesn't show that your memories are stored in the muscles of your back that are being massaged. 

What is quite possible is that the brain is like some reducing valve or faucet, and that the brain reduces your memory and imagination,  just as a faucet can limit a flow of water to a mere trickle.  Such a reduction may make you more likely to focus on the crummy little details of daily living.  In such a case, electrically stimulating some part of the brain might limit that reduction effect or suppression effect, increasing recall and imagination that are not at all caused by the brain. Similarly, hitting a faucet with a hammer may increase its flow of water, but the water isn't coming from inside the little faucet. 

Dowling then attempts to insinuate on page 262 that brain studies tell us that "certain neurons in the prefrontal cortex become active during the time when the monkey is remembering" where a target is. But, to the contrary, this post (which includes links to many scientific studies) summarizes the evidence that brains show no real signs of looking different or working harder when humans are thinking or recalling things.  Since almost all neurons in the brain are continually active, we should never draw a conclusion based on the mere activity of some neurons while a brain was remembering. 

Dowling then confesses on page 263 that "we are just at the beginning of understanding how neural activity...might relate to consciousness." On the next page he states, "The neural basis of human consciousness seems beyond our experimental reach for the time being."  A few pages later the book ends. It has not provided anything remotely like an explanation for how cells could yield cognition. Dowling hasn't even taken a stab at explaining such a thing.  He has a chapter entitled "From Brain to Mind," but it deals with visual perception.  Of course, there's so much more to "the mind" than just visual perception: imagination, intelligence, abstract thinking, recall, self-hood, and so forth. 

On the topic of memory, Dowling tries to drop little bits of information here and there supporting the idea that the brain is a storage place for our memories. Like almost all neuroscientists writing books on the brain, he mentions the case of patient H.M., who had trouble forming memories after suffering damage to part of his brain called the hippocampus (although he could recall memories learned before that damage). The reliance of neuroscientists on this one case is not at all scientific.  You may establish a cause and effect hypothesis if you correlate many examples of a cause and effect (such as correlating many examples of people smoking with many examples of people having lung cancer).  But it is rather ridiculous to speak (as neuroscientists often do) as if we know some memory problem in someone was caused by some problem in part of his brain. You would need many examples of such a thing before having confidence that the two were causally related.  Similarly, it would be absurd to suggest that our memories are stored in the root canals of our teeth because of the historical case of one patient who had a serious memory problem after having a root canal procedure. 

As usual, the story of HM is wrongly told. Dowling tells us that patient HM "no longer could remember events or facts for more than a few minutes." But a 14-year follow-up study of patient HM (whose memory problems started in 1953) actually tells us that HM was able to form some new memories. The study says this on page 217:

In February 1968, when shown the head on a Kennedy half-dollar, he said, correctly, that the person portrayed on the coin was President Kennedy. When asked whether President Kennedy was dead or alive, he answered, without hesitation, that Kennedy had been assassinated...In a similar way, he recalled various other public events, such as the death of Pope John (soon after the event), and recognized the name of one of the astronauts, but his performance in these respects was quite variable. 

Our neuroscientists keep misinforming us about patient HM because it serves their dogmatic purposes to do so.  Even if you were to prove that destruction of the hippocampus prevents the formation of new memories, that would not at all prove that memories are stored in the brain. The hippocampus could simply be kind of a springboard that helps propel sensory experience into some unknown memory storage reality outside of the brain. 

On page 222, Dowling notes, "A well-established but curious observation is that older long-term memories are more persistent in many forms of brain disease than are more recent memories." This fact is actually inconsistent with claims that memories are stored in brains.  We know that the proteins that make up synapses have average lifetimes of only a few weeks, and that the synapses themselves are subject to spontaneous remodeling that makes them unstable. Given such realities, if memories were stored in brains, the ones that you would lose first are the oldest memories, since there would be so much more time for such memories to physically deteriorate. Similarly, if words were written on leaves, the older the writing was, the smaller the chance that it would survive. 
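The point about protein lifetimes can be made quantitative. The sketch below assumes, purely for illustration, that a memory trace would decay exponentially along with the turnover of its physical substrate, and assumes a lifetime figure of a few weeks:

```python
import math

# Illustrative sketch with assumed numbers: if a memory trace decayed
# along with its synaptic proteins (average lifetime assumed here to be
# about three weeks), essentially nothing of it would survive even one
# year, let alone fifty.

protein_lifetime_weeks = 3.0                   # assumed average synaptic protein lifetime
lifetime_years = protein_lifetime_weeks / 52   # same lifetime expressed in years

for age_years in (1, 10, 50):
    # exponential survival: fraction of the original trace remaining
    surviving_fraction = math.exp(-age_years / lifetime_years)
    print(age_years, surviving_fraction)
```

On this simple picture, the older the memory, the smaller the surviving fraction; yet the clinical observation Dowling cites is the reverse, with older memories proving more durable.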

Dowling says on page 222, "It would appear that long-term memories are not permanently stored in the hippocampus but transferred elsewhere, probably various regions of the cortex."  He gives no data backing up this claim,  and there is no evidence for such memory transfer, nor does anyone have any idea of how it could work.  Because there is so much signal noise in the brain, so much noise in synapses, and so much noise and unreliability in the synapses of the cortex in particular, it is not credible that accurate memory information could move from the hippocampus to the cortex.  A scientific paper says, "In the cortex, individual synapses seem to be extremely unreliable: the probability of transmitter release in response to a single action potential can be as low as 0.1 or lower."  Signal transmission in the cortex would require the traversal of many synapses, and in each of these traversals there would be a low likelihood of a successful transmission of the signal. This unreliability of signal transmission in the cortex would be equivalent to signal noise even greater than we see in the bottom diagram, making it impossible for precise memories to transfer from the hippocampus to the cortex (and also making it impossible to precisely recall a detailed memory from the cortex).  
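The arithmetic behind this reliability argument is simple. Taking the cited paper's figure of a 0.1 release probability per synapse, and assuming (for illustration) that a signal must cross several synapses in series:

```python
# Sketch of the series-reliability argument: with a per-synapse
# transmission probability of 0.1 (the figure quoted from the paper),
# the chance of a signal crossing n synapses intact is 0.1 ** n.

p_release = 0.1  # per-synapse transmission probability

for n_synapses in (1, 3, 5, 10):
    p_intact = p_release ** n_synapses
    print(n_synapses, p_intact)
# crossing 10 synapses in series succeeds roughly once in ten billion attempts
```

The number of synapses a cortical transfer would actually require is my assumption here, but the multiplicative collapse holds for any chain more than a few synapses long.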


We know that in hemispherectomy operations in which half of a brain is removed to stop epileptic seizures, there is little damage to memory, even though half of the cortex is surgically removed.  Another reason for rejecting the idea of memories transferring from the hippocampus to the cortex is the issue of what can be called signal drowning.  Signal drowning is what happens when there are so many signals from so many sources that a particular signal is effectively drowned out. Such signal drowning would occur in a malfunctioning television which showed the signal from every cable TV channel all at once, or a malfunctioning radio which played simultaneously the music and words from every AM station at the same time. It would seem that in the cortex there would have to be exactly such signal drowning, because each neuron emits a signal very frequently (about once per second or more), and each neuron is connected directly to more than a thousand other neurons. So we can't imagine how the cortex could receive a memory transferred from the hippocampus, as such a signal would get drowned out from all the random signals from other neurons emitting so frequently.  
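A crude signal-to-noise estimate, using the paragraph's own round numbers of about one spike per second and more than a thousand direct connections per neuron, illustrates the drowning effect:

```python
# Crude sketch using the round numbers in the text: a single "memory
# transfer" signal arriving at a cortical neuron competes with spontaneous
# input from ~1000 other neurons, each firing about once per second.

target_rate = 1.0        # spikes/s carried by the hypothetical transferred signal
n_inputs = 1000          # assumed direct connections per cortical neuron
background_rate = 1.0    # spontaneous spikes/s per connected neuron

noise_rate = n_inputs * background_rate
signal_to_noise = target_rate / noise_rate

print(signal_to_noise)   # 0.001: the target is a thousandth of the background
```

With the target signal at a thousandth of the background chatter, and no known addressing scheme to pick it out, the "drowning" metaphor seems apt.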

Facts such as the low cognitive impact of surgical removal of half of the brain, the huge amount of noise in the brain and synapses, the unreliable transmission of synapses, and the short lifetimes of the proteins that make up synapses are examples of very important neuroscience facts (with huge implications) that neuroscientists such as Dowling avoid mentioning in their books, for they do not wish to discuss observations contrary to their dogmatic claims.

Dowling has several pages discussing what is called LTP, trying to make it sound like this short-term effect (produced by artificial methods) has some relevance to memory. Despite all of the "busy work" time neuroscientists have spent on LTP, no scientist has shown that it has any relevance to memory. We know that despite its misleading name (LTP stands for long-term potentiation), LTP is actually a very short-lived affair, almost always lasting less than a few days.  So such a thing cannot account for memories that last for 50 years. 

Like some student who knows nothing about the origin of World War I saying, "I don't exactly understand that," Dowling confesses on page 222, "Exactly how memories are stored in neurons or in neuronal circuits remains a mystery." But why would that be, if memories were actually stored in neurons? We discovered how genetic information is stored in the nucleus of the cell around 1950. Is it really credible that 69 years later we would still have no understanding of how memories are stored in neurons, if they actually were stored in neurons? No, it isn't.  What is far more credible is that memories are not stored in brains, and that is exactly why no one has been able to read a memory from a bit of brain tissue in a lab, even though it is 69 years after laboratory scientists were able to read genetic information from inside cells. 
In 2017 Scientific American published a sharp critique of the theory of cosmic inflation originally advanced by Alan Guth (not to be confused with the more general Big Bang theory). The theory of cosmic inflation (which arose around 1980) is a kind of baroque add-on to the Big Bang theory that arose decades earlier. The Big Bang theory asserts the very general idea that the universe began suddenly in a state of incredible density, perhaps the infinite density called a singularity; and that the universe has been expanding ever since. The cosmic inflation theory makes a much more specific claim, a claim about less than one second of this expansion – that during only a small fraction of the first second of the expansion, there was a special super-fast type of expansion called exponential expansion. ("Cosmic inflation" is a very bad name for this theory, as it creates all kinds of confusion in which people confuse the verified idea of an expanding universe and the shaky idea of cosmic inflation. The term "cosmic inflation" refers not to cosmic expansion in general, but to the very specific idea that the universe's expansion was once a type of expansion -- exponential expansion -- radically faster and more dramatic than its current linear rate of expansion.) 

The article in Scientific American criticizing the theory of cosmic inflation was by three scientists (Anna Ijjas, Paul J. Steinhardt, Abraham Loeb), one a Harvard professor and another a Princeton professor. It was filled with very good points that should be read by anyone curious about the claims of the cosmic inflation theory.  You can read the article on a Harvard web site here. Or you can go to this site by the article's authors, summarizing their critique of the cosmic inflation theory.

Recently a very long scientific paper appeared on the arXiv physics paper server, a paper with the cute title “Cosmic Inflation: Trick or Treat?” In its very first words the paper's author (Jerome Martin) misinforms us, because he refers to cosmic inflation as something that was “discovered almost 40 years ago.” Discovery is a word that should be used only for observational results in science. Cosmic inflation (the speculation that the universe underwent an instant of exponential expansion) was never discovered or observed by scientists. In fact, it is impossible that this “cosmic inflation” or exponential expansion ever could be observed. During the first 300,000 years of the universe's history, the density of matter and energy was so great that all light particles were thoroughly scattered and shuffled a million times. It is therefore physically impossible that we ever will be able to observe any unscrambled light signals from the first 300,000 years of the universe's history. So we will never be able to get observations that might verify the claim of cosmic inflation theorists that the universe underwent an instant of exponential expansion.

At the end of the paper the author claims that the cosmic inflation theory has “all of the criterions that a good scientific theory should possess.” The author gives only two examples of such things: first, the claim that the cosmic inflation theory is falsifiable, and second that “inflation has been able to make predictions.” His claim that the theory is falsifiable is not very solid. He says that the cosmic inflation theory could be falsified if it were found that the universe did not have what is called a flat geometry, but then he refers us to a version of the cosmic inflation theory that predicted a universe without such a flat geometry. So cosmic inflation theory isn't really falsifiable at all. So many papers have been published speculating about different versions of cosmic inflation theory that the theory can be made to work with any future observations. Harvard astronomer Loeb says here the cosmic inflation theory "cannot be falsified." 

It is not at all true that the cosmic inflation theory has “all of the criterions that a good scientific theory should possess,” or even most of those characteristics. Below is a list of some of the characteristics that are desirable in a good scientific theory. You can have a good scientific theory without all of these characteristics, but the more of them a theory has, the more highly regarded it should be.

  1. The theory is potentially verifiable. While falsification has been widely discussed in connection with scientific theories, it should not be forgotten that the opposite of falsification (verification) is equally important. Every good scientific theory should be potentially verifiable, meaning that there should be some reasonable hypothetical set of observations that might verify the theory. In the case of the cosmic inflation theory, we can imagine no such observations. The only thing that could verify the theory would be to look back to the first instant of the universe and observe exponential expansion occurring. But, as previously mentioned, such an observation can never possibly occur, no matter how powerful future telescopes become. The reason is that the density of the very early universe was so great that all light signals from the first 300,000 years of its history were hopelessly shuffled, scrambled and scattered millions of times.
  2. The theory merely requires us to believe in something very simple. A very desirable characteristic of a scientific theory is that it only requires that we believe in something very simple. An example is the theory that the extinction of the dinosaurs was caused by an asteroid collision, which asks us only to believe that a big rock fell from space and hit our planet. Another example is the theory of global warming, which in its most basic form asks us merely to believe that humans are putting more greenhouse gases in the atmosphere, and that such gases raise temperatures (as we know they do inside a greenhouse). But the cosmic inflation theory (the theory of primordial exponential expansion) does not have this simplicity characteristic. All versions of the theory require complex special conditions in order for cosmic inflation (exponential expansion) to begin, to last for only an instant, and then to end in less than a second, so that the universe ends up with the type of expansion it now has (linear expansion, not exponential expansion). We need merely look at the papers of the cosmic inflation theorists (all filled with complex mathematical speculations) to see how far the theory falls short of this simplicity characteristic. In a recent post, the cosmic inflation pitchman Ethan Siegel tells us, "If you have an inflationary Universe that's governed by quantum physics, a Multiverse is unavoidable." What that means is that the cosmic inflation theory carries the near-infinite baggage of requiring belief in some vast collection of universes. Of course, this is the exact opposite of the simplicity that is desirable in a good theory.
  3. There is no evidence conflicting with the theory. A characteristic of a good scientific theory is that there is no evidence conflicting with the theory. The theory of electromagnetism and the theory of plate tectonics are very good theories, and there is no evidence against them. But there are quite a few observations conflicting with the cosmic inflation theory (the theory of exponential expansion in the universe's first instant). Such observations (sometimes called CMB anomalies) are discussed in this post. The observations are mainly cases in which the cosmic background radiation has some characteristic that we would not expect to see if the cosmic inflation theory were true. A scientific paper says, “These are therefore clearly surprising, highly statistically significant anomalies — unexpected in the standard inflationary theory and the accepted cosmological model.”
  4. The theory makes precise numerical predictions that have been exactly verified to several decimal places very many times. This characteristic is one that the best theories in physics have, theories such as general relativity, quantum mechanics, and electromagnetism. For example, a theory may predict that some unmeasured quantity will be 342.2304, and scientists will measure that quantity and find that it is exactly 342.2304. Or a theory may predict that some asteroid will hit the Moon at exactly 10:30 PM EST on May 23, 2026, and it will then be found (10 days later) that the asteroid did hit the Moon at exactly that time. The cosmic inflation theory does not have this characteristic; it makes no exact numerical predictions at all. Several hundred different versions of the cosmic inflation theory have been published, each a different scientific model, and each of those hundreds of models can predict 1000 different things, because the numerical parameters used with the equations can be varied. So the predictions of the cosmic inflation theory are pretty much all over the map, and it is impossible to point to any case in which it made a precise, successful prediction. When advocates of the theory talk about predictive success, they are talking about woolly predictions (like “the universe will be pretty flat”) rather than exact numerical predictions, and about one-shot affairs rather than cases in which predictions are repeatedly verified. Many a wrong theory can have an equal degree of predictive success. For example, a bad economic theory may vaguely predict correctly that the stock market will go up next year.
  5. We continue to get observational signs that the theory is correct. A desirable characteristic of a good scientific theory is that we continue to observe signs suggesting that theory is correct. The theory of plate tectonics has such a characteristic. Every time there is an earthquake in the “Ring of Fire” region that marks the boundaries of continental plates, that's an additional observational sign that the plate tectonics theory is correct. The theory of gravitation continues to send us observational signals every day that the theory is correct. But we do not get any observational signs from the universe that it once underwent an instant of exponential expansion, nor can we logically imagine how such signs could ever come or keep coming from such a primordial event.

So it is clear that Martin's claim that the theory of cosmic inflation has “all of the criterions that a good scientific theory should possess” is not at all true. Making a similar point, a New Scientist article puts it this way:

But no measurement will rule out inflation entirely, because it doesn’t make specific predictions. “There is a huge space of possible inflationary theories, which makes testing the basic idea very difficult,” says Peter Coles at Cardiff University, UK. “It’s like nailing jelly to the wall.”

The tall tale of cosmic inflation (exponential expansion at the beginning of the universe) is a modern case of a tribal folktale, told by a small tribe of a few thousand cosmologists. Below is the basic piece of folklore of the cosmic inflation theory:

"At the very beginning, the universe started out with just the right conditions for it to start expanding at a super-fast exponential rate. So for the tiniest fraction of a second, the universe did expand at this explosive exponential rate. Then, BOOM, the universe suddenly switched gears, did a dramatic change, and started expanding at the much slower, linear rate that we now observe."

Why would anyone believe such a story that can never be verified? The answer is: because they have a strong motivation. The arguments given for the cosmic inflation theory are examples of what is called motivated reasoning. Motivated reasoning is reasoning that people engage in not because they have premises or evidence that demand particular conclusions, but because they have a motivation for reaching the conclusion.

The motivation for the cosmic inflation theory was that people wanted to get rid of some apparent fine-tuning in the Big Bang. At about the time the cosmic inflation theory appeared, scientists were saying that the universe's initial expansion rate was just right, and that if it had differed by less than 1 part in 1,000,000,000,000,000,000,000,000,000,000,000,000,000, we would not have ended up with a universe that would have allowed life to exist in it. That type of extremely precise fine-tuning at the very beginning of Time bothers those who want to believe in a purposeless universe. 

Saying that the universe's initial expansion rate was fine-tuned is equivalent to saying that the density was fine-tuned, for the requirement is a very precise balancing involving an expansion rate that is just right for a particular density (or, to state the same idea, a density that is just right for a particular expansion rate).  In a recent very long cosmology paper, scientist Fred Adams notes on page 41 the requirement for a very precise fine-tuning of the universe's initial density (something like 1 in 10 to the sixtieth power, which is a trillionth of a trillionth of a trillionth of a trillionth of a trillionth).  On page 42 Adams states that, "The paradigm of inflation was developed to alleviate this issue of the sensitive fine-tuning of the density parameter."  That was the motivation of the cosmic inflation theory -- to sweep under the rug or get rid of a dramatic case of fine-tuning in nature. 
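The density fine-tuning Adams describes can be sketched with a standard textbook flatness-problem estimate (my illustration, not taken from Adams's paper). The Friedmann equation relates the density parameter to the expansion:

```latex
\Omega(t) - 1 = \frac{k}{a^2 H^2}
```

During radiation domination $a \propto t^{1/2}$, so $a^2 H^2 = \dot{a}^2 \propto t^{-1}$ and $|\Omega - 1|$ grows roughly in proportion to $t$. Running the observed near-flatness today ($|\Omega_0 - 1| \lesssim 10^{-2}$ at $t_0 \sim 10^{17}$ s) back toward the Planck time ($t \sim 10^{-43}$ s) gives

```latex
|\Omega(t_{\mathrm{Pl}}) - 1| \;\lesssim\; 10^{-2} \times \frac{10^{-43}}{10^{17}} \;\sim\; 10^{-62},
```

which is the order of magnitude of the 1-in-10-to-the-sixtieth fine-tuning that Adams cites.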

The folklore mongers who sell cosmic inflation stories may believe that they have got rid of this fine-tuning at the beginning. But they actually haven't. They've merely “robbed Peter to pay Paul,” by getting rid of fine-tuning in one place (in regard to the universe's initial expansion rate) at the price of requiring lots of fine-tuning in lots of other places. That's because all theories of cosmic inflation themselves require enormous amounts of fine-tuning. But with a cosmic inflation theory it may be rather less noticeable, because the required fine-tuning occurs in lots of different places rather than in one place.

Judging from a 2016 cosmology paper,  the cosmic inflation theory requires not just one type of fine-tuning, but three types of fine-tuning. The paper says, “Provided one permits a reasonable amount of fine tuning (precisely three fine tunings are needed), one can get a flat enough effective potential in the Einstein frame to grant inflation whose predictions are consistent with observations.” How on Earth does it represent progress to try to get rid of one case of fine-tuning by introducing a theory that requires three cases of fine-tuning? And the estimate of three fine-tunings in the paper is probably an underestimate, as other papers I have read suggest that 7 or more precise fine-tunings are needed.


This is not theoretical progress

We may compare the cosmic inflation pitchman to some person who wants to sell someone in Manhattan a car. “Think of all the money you'll save!” says the pitchman. “You won't have to pay $40 on subways each week.” But what the pitchman fails to tell you is that when you add up the cost of the monthly car payments, the cost of car insurance, and the cost of a garage parking space (because there are so few parking spaces in Manhattan), the total cost of the car is much more than the cost of the subway. Similarly, the pitchmen of the cosmic inflation theory tell us that the theory is great because it reduces fine-tuning in one place (in regard to the universe's initial expansion rate), and neglect to tell you that the total amount of fine-tuning (adding up all of the special requirements and fine-tuning needed for cosmic inflation to work) is probably far “worse” if you believe that cosmic inflation occurred.
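The arithmetic of the car-salesman analogy can be made concrete. Every dollar figure below (car payment, insurance, garage) is my own illustrative assumption, not a number from any source:

```python
# Back-of-the-envelope cost comparison; all dollar figures are assumed
# for illustration only.
subway_weekly = 40
subway_annual = subway_weekly * 52  # the "savings" the pitchman touts

car_payment_monthly = 350   # assumed monthly car payment
insurance_monthly = 200     # assumed Manhattan insurance premium
garage_monthly = 500        # assumed garage parking space
car_annual = 12 * (car_payment_monthly + insurance_monthly + garage_monthly)

print(f"Subway: ${subway_annual:,}/year")  # Subway: $2,080/year
print(f"Car:    ${car_annual:,}/year")     # Car:    $12,600/year
```

Summed over every line item, the "money-saving" car costs several times more than the subway, just as cosmic inflation's fine-tuning requirements, summed over every special condition, may exceed the single fine-tuning it was invented to remove.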

What has been going on with the cosmic inflation theory is very similar to what went on for decades with the supersymmetry theory, a theory which physicists have been fruitlessly laboring on for decades. Like the cosmic inflation theory, supersymmetry was motivated by a desire to sweep under the rug some fine-tuning. In the case of supersymmetry, the fine-tuning scientists wanted to get rid of was the apparent fact of the Higgs boson or Higgs field being fine-tuned very precisely ("like a pencil standing on its point" is an analogy sometimes given).  An article on the supersymmetry theory discusses the fine-tuning that motivated the theory:

One logical option is that nature has chosen the initial value of the Higgs boson mass to precisely offset these quantum fluctuations, to an accuracy of one in 10^16. However, that possibility seems remote at best, because the initial value and the quantum fluctuation have nothing to do with each other. It would be akin to dropping a sharp pencil onto a table and having it land exactly upright, balanced on its point. In physics terms, the configuration of the pencil is unnatural or fine-tuned.

Similarly, a paper on an MIT server entitled "Motivation for Supersymmetry" states the following (referring to the many new types of hypothetical particles called "supersymmetric partners" imagined by the supersymmetry theory):

Thus in order to get the required low Higgs mass, the bare mass must be fine-tuned to dozens of significant places in order to precisely cancel the very large interaction terms....However, if supersymmetric partners are included, this fine-tuning is not needed.
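The rough arithmetic behind these quotes is the standard hierarchy-problem estimate (my sketch, not the MIT paper's own derivation). The physical Higgs mass is the bare mass plus quantum corrections:

```latex
m_H^2 = m_{\mathrm{bare}}^2 + \delta m^2, \qquad \delta m^2 \sim \frac{\Lambda^2}{16\pi^2},
```

where $\Lambda$ is the energy scale up to which the Standard Model is assumed valid. If $\Lambda \sim M_{\mathrm{Planck}} \approx 10^{19}$ GeV, then $\delta m^2 \sim 10^{36}$ GeV$^2$, while the observed $m_H^2 \approx (125\ \mathrm{GeV})^2 \approx 1.6 \times 10^4$ GeV$^2$. The bare term must then cancel the correction to roughly one part in $10^{32}$, which is the cancellation "to dozens of significant places" that the paper describes.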

Physicists erected the ornate theory of supersymmetry, thinking that they were explaining away this very precise fine-tuning in nature, "to dozens of significant places." But they failed to see that they were just “robbing Peter to pay Paul”: the total amount of fine-tuning required by the supersymmetry theory (given all of the many different things that had to be just right) was as great as the fine-tuning it tried to explain away. So there was no net lessening of fine-tuning even if the supersymmetry theory were true.

The MIT paper above says "many thousands" of science papers have been written about supersymmetry. Most of them spun out webs of speculation as ornate and unsubstantiated as the gossamer speculations of the cosmic inflation theorists. Supersymmetry has failed all observational tests, and now many physicists are lamenting that they wasted so many years on it. Our cosmic inflation theorists have failed to heed the lesson of the supersymmetry fiasco: that trying to explain away fine-tuning in the universe is a waste of time.