

James Hansen (Photo adapted from “Global Justice Now,” by IMGP2630, Flickr Creative Commons)

In 1988, James Hansen confidently predicted that the world would be about 1 degree Celsius warmer today than it was then.

Actually, he offered three scenarios: A, “business as usual,” with rapidly rising carbon dioxide emissions, which would bring that 1 degree; B, “most plausible,” with emissions remaining constant at 1988 levels, which would make the world 0.7 degree warmer today; and C, with emissions rising from 1988 to 2000 and then stabilizing, which would make the world about 0.3 degree warmer today—an outcome he said was highly unlikely.

By and large, the United Nations Intergovernmental Panel on Climate Change has embraced Hansen’s scenarios.

So, how did Hansen, the high priest of global warming fears, and his acolytes do?

The world today is, on average, 0.3 degree warmer than when Hansen set forth his scenarios. In short, for temperature, his scenario C, which he called least likely, has occurred.

But it didn’t occur because CO2 emissions flattened in 2000. No, they kept right on rising. So the condition he said was necessary for the temperature part of scenario C didn’t occur; instead, what continued to occur was pretty much the condition he said would bring on the temperature part of scenario A.

And from that it follows that Hansen’s understanding of what drives global average temperature was—and it remains—wrong. He thinks carbon dioxide is the temperature control knob for the atmosphere.

That idea should, from the start, have struck anyone with a modicum of common sense as nonsense.

Carbon dioxide constitutes 4 hundredths of 1 percent of the atmosphere. That by itself makes it a pretty diminutive player. But there’s much more to the story.

The atmosphere constitutes only a small part of the global climate system, which includes the hydrosphere (oceans, seas, lakes, rivers, and streams), the cryosphere (continental ice sheets, glaciers, and sea ice), the upper lithosphere (the top few feet of land), and the biosphere (terrestrial and aquatic plants and animals)—not to mention energy from the Sun, cosmic rays that affect cloudiness, and volcanoes.

The oceans alone weigh nearly 273 times as much as the atmosphere. So atmospheric carbon dioxide, which Hansen insists is the temperature control knob, constitutes about 0.00015 percent (15 ten-millionths) of the combined atmosphere and oceans. At this point we can forget about the cryosphere, lithosphere, and biosphere. Atmospheric carbon dioxide is clearly a teeny tiny fraction of a teeny tiny bit player in Earth’s climate system. (The numbers here are ballpark figures, tainted in part by the mixing of weight, mass, and parts per million, but they adequately convey a sense of proportion.)
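The proportions above can be checked with rough arithmetic. The sketch below uses rounded reference values for the masses of the atmosphere and oceans and, as the article itself notes, mixes a volume fraction with mass figures, so it conveys only a sense of scale:

```python
# Ballpark arithmetic behind the proportions above. Mass figures are
# rounded reference values; CO2's 0.04 percent is a volume fraction,
# so this deliberately mixes units for a rough sense of scale.
m_atmosphere = 5.15e18   # kg, approximate total mass of the atmosphere
m_oceans = 1.4e21        # kg, approximate total mass of the oceans
co2_fraction = 0.0004    # CO2 as ~0.04 percent of the atmosphere

ratio = m_oceans / m_atmosphere
co2_share = co2_fraction * m_atmosphere / (m_atmosphere + m_oceans)

print(ratio)       # roughly 272: oceans vs. atmosphere
print(co2_share)   # roughly 1.5e-6, i.e. about 0.00015 percent
```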

To theorize that 15 ten-millionths of an overall system will control the temperature of that interrelated system, or of any large part of it, is prima facie nonsense. What should surprise us is not that the idea has turned out wrong, but that anyone embraced it in the first place and that it’s taking so long for so many people, including many highly intelligent scientists, to abandon it in the face of the clear empirical evidence of its falsehood.

Patrick Michaels and Ryan Maue dismantle Hansen further in their column in the June 22 Wall Street Journal. It’s worth a careful read. Cornwall Alliance authors developed the argument above more fully, and in the context of the Biblical worldview that a wise God designed the climate system, in the 2010 paper “A Renewed Call to Truth, Prudence, and Protection of the Poor: An Evangelical Examination of the Theology, Science, and Economics of Global Warming.”

[This article was first published in The Daily Caller.]

P.S. If you liked this article you might enjoy our Cornwall Alliance Email Newsletter! Sign up here to receive analysis on top issues of the day related to science, economics, and poverty development.

As a thank you for signing up, you will receive a link to watch Dr. Beisner’s 84-minute lecture (with PowerPoint slides), “Climate Change and the Christian: What’s True, What’s False, What’s Our Responsibility?” Free!


“Climate41” by Becker1999, Flickr Creative Commons

The “closest thing to a celebrity scientist in Seattle” is what the Seattle Times calls Cliff Mass, “a meteorologist who specializes in weather prediction and modeling,” according to his faculty bio at the University of Washington’s College of the Environment. Mass, a Professor of Atmospheric Sciences at UW, wrote The Weather of the Pacific Northwest—one of the best-selling books from the University of Washington Press. He firmly believes that Earth has been warming for about the last 130 years, that human activity is an important cause of the warming, and that changes in energy technology, along with efforts to remove carbon dioxide from the atmosphere, should be pursued to counteract it.

So it’s a pleasant surprise to find him coming to the defense of scientists (and others) who, because they question the alleged scientific consensus about climate change, get called “deniers.” In a May 28 blog post titled “Why One Should Never Use the Term ‘Climate Denier’,” Mass enumerates three reasons:

  1. “I[t] plays off the term ‘Holocaust denier’”—a pejorative applied to those who deny the reality of the Nazi Holocaust that killed nearly a third of all the world’s Jews. “The Holocaust is an historical fact,” Mass writes, but “climate change, and particularly anthropogenically forced climate change is another story: there are still major uncertainties regarding climate change, including the magnitude of the human-forced warming and the local impacts. Our models are very clear tha[t] increasing greenhouse gases will warm the planet, [but] how much and spatially varying impacts have a lot of uncertainty.” So “climate denier” is not only offensive and “painful to many in the Jewish community” but also misleading, since the vast majority of scientists who question the causes, magnitude, consequences, and appropriate responses to climate change nonetheless affirm that Earth has warmed and human action has contributed to the warming.
  2. “The terms ‘climate denier’ or ‘climate change denier’ is [sic] usually used for anyone who does not ‘believe’ that virtually all of the change in Earth’s climate over the past half-century was caused by human emission of greenhouse gases.” But, Mass writes, “Seems strange to call someone a climate change denier if they accept that there is climate change and mankind is contributing.” Further, “climate scientists can not [sic] show that humans are entirely to blame for what has happened during the past fifty years. We know that some modes of natural variability have had major impacts (like the Pacific Decadal Oscillation) and that the warming trend and sea level rise has been going on for over a hundred thirty years …—well before human emissions of greenhouse gases had a significant radiative [warming] effect.” Indeed, by the standard of those who hurl around the epithet “climate denier,” “many of my department, one of the leading research cente[rs] in atmospheric sciences [in] the country, should be considered climate change deniers. Go figure.”
  3. “Climate denier clearly is a pejorative, put-down term that does not win converts or friends. … To secure real action on human-forced climate change one needs to build a consensus of folks with varied political backgrounds. Calling names is not the way to do it.”

Mass goes on to pillory Bill Nye, “The Science Guy” (who isn’t a climate scientist), who “loves to call folks deniers, while he makes exaggerated claims,” and asks, “Why does such a poorly informed individual represent science?” He also takes Al Gore—and with him many other limousine liberals—to task for loudly proclaiming the need to fight anthropogenic global warming while maintaining lifestyles that pour far more carbon dioxide into the atmosphere than ordinary people’s.

The ideas that the “deniers” are stopping progress on climate change is [sic] just nonsense. Some of the most knowledgeable, progressive people I know have the worst carbon footprints.  Climate scientists are probably the worst of the bunch. Left-leaning politicians who enjoy traveling to unnecessary meetings (like a certain governor) are another. They know the truth, but they won’t sacrifice in their own lives. See all the big cars being driven around Seattle these days?…. [ellipsis original] those folks are not deniers.  Most are good, card-carrying progressives.

In fact, I have found a strong correlation between heavy use of the phrase climate denier and NOT knowing much about climate.

Well, Mass has made a friend of the Cornwall Alliance. We may disagree about climate change, but we can speak of each other with respect. We hope his thinking on the need for such respect is contagious!



Click here to sign the Open Letter.

The federal Environmental Protection Agency’s (EPA) proposed rule “Strengthening Transparency in Regulatory Science,” banning the agency’s use of “secret science” in formulating regulations, should be adopted. It is badly needed to assure American taxpayers that the EPA is truly acting in their best interests. Objections are groundless.

Strengthening Transparency in Regulatory Science (STRS) provides that “When promulgating significant regulatory actions, the Agency shall ensure that dose response data and models underlying pivotal regulatory science are publicly available in a manner sufficient for independent validation.” It codifies what was intended in the Secret Science Reform Act of 2015, and the Honest and Open New EPA Science Treatment Act of 2017 (HONEST Act), both of which passed the House but never came up for vote in the Senate.

The rule is important to the integrity of EPA regulations because of a widely recognized crisis: Scientific studies, including those subjected to peer review, frequently are found to be irreproducible or to have false findings because they fail to meet high standards for data archiving, valid statistical analysis, experiment procedures, and other elements of valid research. Scientists widely acknowledge the reality of this crisis.[i]


Some opponents of STRS argue that peer review ensures the quality of studies published in refereed journals. But this widespread perception is false. As Richard Smith wrote in “Classical peer review: an empty gun,” published in Breast Cancer Research:

… almost no scientists know anything about the evidence on peer review. It is a process that is central to science—deciding which grant proposals will be funded, which papers will be published, who will be promoted, and who will receive a Nobel prize. We might thus expect that scientists, people who are trained to believe nothing until presented with evidence, would want to know all the evidence available on this important process. Yet not only do scientists know little about the evidence on peer review but most continue to believe in peer review, thinking it essential for the progress of science. Ironically, a faith based rather than an evidence based process lies at the heart of science.

Smith quotes Drummond Rennie, deputy editor of the Journal of the American Medical Association and intellectual father of the international congresses of peer review held quadrennially starting in 1989, as saying, “If peer review was a drug it would never be allowed on the market.”

We have particular interest in epidemiological studies of risks associated with certain toxins and climate studies related to anthropogenic global warming (AGW), also known as anthropogenic climate change (ACC). Many EPA regulations address these two concerns. Scientific societies and journals have often failed to maintain high standards of objectivity and research procedures related to both. Nonetheless, the EPA has formulated regulations based in whole or in part on such studies without adequately assessing them to see whether they meet high standards.[ii]

For example, the EPA has long regulated exposure to nuclear radiation on the basis of the linear no-threshold (LNT) dose-response model. Yet that model arose from a paper, “Genetic Effects of Atomic Radiation,” by the National Academy of Sciences Committee on the Biological Effects of Atomic Radiation, published in Science in 1956, which has since been severely criticized in many peer-reviewed publications. As the National Association of Scholars points out in a letter to EPA Administrator Scott Pruitt supporting STRS, “This is a consequential matter that bears on a great deal of national public policy, as the LNT model has served as the basis for risk assessment and risk management of radiation and chemical carcinogens for decades. A reassessment of that model could profoundly alter many regulations from the [EPA], the Nuclear Regulatory Commission, and other government agencies.”[iii]
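The difference between the LNT model and a threshold model can be made concrete with a toy calculation. The slope and threshold values below are invented purely for illustration:

```python
# Toy dose-response models (made-up slope and threshold, for illustration).
# LNT extrapolates excess risk linearly all the way down to zero dose;
# a threshold model assumes no excess risk below some dose.
def lnt_risk(dose, slope=0.05):
    return slope * dose

def threshold_risk(dose, slope=0.05, threshold=1.0):
    return slope * max(0.0, dose - threshold)

low_dose = 0.5
print(lnt_risk(low_dose))        # 0.025: LNT predicts excess risk even here
print(threshold_risk(low_dose))  # 0.0: the threshold model predicts none
```

The regulatory stakes follow directly: under LNT, no exposure is “safe,” so every reduction in dose justifies regulation; under a threshold model, exposures below the threshold need no regulation at all.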

In these circumstances, it is essential that all scientific studies used by regulators to justify regulations meet the highest standard of credibility. That is the aim of STRS.


As the EPA pointed out in announcing the proposed rule, “The benefits of EPA ensuring that dose response data and models underlying pivotal regulatory science are publicly available in a manner sufficient for independent validation are that it will improve the data and scientific quality of the Agency’s actions and facilitate expanded data sharing and exploration of key data sets; this is consistent with the conclusions of the National Academies.”

In addition to the claim that peer review alone is sufficient quality control, opponents of the proposed rule raise two primary objections. Both fail.

The most common objection, and the most credible at first blush, is that the rule would prevent the EPA from using studies that involve confidential information, such as personal health data or corporate proprietary information. In an open letter to EPA Administrator Scott Pruitt, the political-activist Union of Concerned Scientists (UCS) argued, “there are multiple valid reasons why requiring the release of all data does not improve scientific integrity and could actually compromise research, including intellectual property, proprietary, and privacy concerns.”[iv]

We wonder whether such critics have even read the proposed rule. Section 30.5 expressly states:

Where the Agency is making data or models publicly available, it shall do so in a fashion that is consistent with law, protects privacy, confidentiality, confidential business information, and is sensitive to national and homeland security.

And Section 30.9 says:

The Administrator may grant an exemption to this subpart on a case-by-case basis if he or she determines that compliance is impracticable because:

(a) It is not feasible to ensure that all dose response data and models underlying pivotal regulatory science is publicly available in a manner sufficient for independent validation, in a fashion that is consistent with law, protects privacy, confidentiality, confidential business information, and is sensitive to national and homeland security; or

(b) It is not feasible to conduct independent peer review on all pivotal regulatory science used to justify regulatory decisions for reasons outlined in OMB Final Information Quality Bulletin for Peer Review (70 FR 2664), Section IX.

As the Agency’s request for public comment explained:

[C]oncerns about access to confidential or private information can, in many cases, be addressed through the application of solutions commonly in use across some parts of the Federal government. Nothing in the proposed rule compels the disclosure of any confidential or private information in a manner that violates applicable legal and ethical protections. Other federal agencies have developed tools and methods to de-identify private information for a variety of disciplines. The National Academies have noted that simple data masking, coding, and de-identification techniques have been developed over the last half century and that “Nothing in the past suggests that increasing access to research data without damage to privacy and confidentiality rights is beyond scientific reach.”

The UCS letter asserted that concerns about transparency and certainty raised by supporters of the rule “are phony issues that weaponize ‘transparency’ to facilitate political interference in science-based decision making, rather than genuinely address either.” But the irreproducibility crisis is real, not phony. Further, enhanced transparency works against politicization, not for it. This objection is so patently invalid as to suggest that those who offer it are themselves weaponizing confidentiality to facilitate their own political interference in science-based decision making.

A second common objection, raised in the same letter and again credible at first blush, is that “many public health studies cannot be replicated, as doing so would require intentionally and unethically exposing people and the environment to harmful contaminants or recreating one-time events (such as the Deepwater Horizon oil spill).” But what must be replicable in studies of such events is not the events themselves but the procedures used to collect and analyze data and to draw inferences from them.


Consider, for example, a study that used tree rings as proxy temperature measurements and purported to find that neither the Medieval Warm Period nor the Little Ice Age had occurred but that a rapid and historically unprecedented warming had begun in the late 19th century—a study that became iconic for claims of dangerous AGW driven by human emissions of carbon dioxide. No one needed a time machine to return to the 11th through 20th centuries and regrow trees to recognize that the authors had skewed their results by excluding certain data and misusing a statistical procedure. All anyone needed was access to the raw data and the computer code used to analyze them. Yet the lead author’s long refusal to release that data and code delayed discovery of the errors for years, during which the Intergovernmental Panel on Climate Change, the public, and governments all over the world were led to believe the study’s claims and to formulate expensive policies based partly on them.[v]
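The statistical pitfall that critics identified in that study, centering the principal component analysis on a short late-period window rather than the full record, can be sketched on synthetic data. Everything below (the number of series, the AR(1) persistence, the window sizes) is illustrative, not the original study’s setup:

```python
import random

random.seed(0)

# 50 synthetic "proxy" series of persistent AR(1) red noise, 200 steps each.
def red_noise(n, phi=0.9):
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + random.gauss(0.0, 1.0)
        out.append(x)
    return out

series = [red_noise(200) for _ in range(50)]

def center(col, window):
    # Subtract the mean computed over `window` only.
    m = sum(col[i] for i in window) / len(window)
    return [v - m for v in col]

def pc1(cols):
    # First principal-component time series via power iteration on X Xᵀ.
    n = len(cols[0])
    v = [random.gauss(0.0, 1.0) for _ in range(n)]
    for _ in range(100):
        w = [0.0] * n
        for c in cols:
            dot = sum(ci * vi for ci, vi in zip(c, v))
            for i, ci in enumerate(c):
                w[i] += dot * ci
        norm = sum(wi * wi for wi in w) ** 0.5
        v = [wi / norm for wi in w]
    return v

# Conventional centering uses each series' full-period mean; "short"
# centering uses only the last 50 steps, as in the criticized method.
full = pc1([center(s, range(200)) for s in series])
short = pc1([center(s, range(150, 200)) for s in series])

def late_excursion(v):
    # How strongly the pattern's late segment deviates relative to the rest.
    late = sum(abs(x) for x in v[150:]) / 50
    early = sum(abs(x) for x in v[:150]) / 150
    return late / early
```

With persistent noise, short centering tends to promote series whose late segment happens to deviate from their long-run mean, so `late_excursion(short)` is typically inflated relative to `late_excursion(full)`; the exact values depend on the noise draw, which is why access to the actual data and code, not just the published curve, is what allows such an artifact to be detected.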

A few other objections have been raised against the proposed rule, and all are easily rebutted:

  • The EPA should be able to use all available scientific research, even if it is not transparent and replicable, and excluding some may cripple its ability to protect us. On the contrary, using scientific research that is non-transparent and non-replicable increases the likelihood that policy will be grounded on false information.
  • Some existing regulations are based on studies that could not have been used had this rule been in force, and implementing it could lead to rescinding those regulations and thus endangering the public. If enough of the research did satisfy the rule to justify the regulations, they need not be rescinded. If the regulations could not be justified by research that does satisfy the rule, then there is no reason to think the regulations actually protect the public, and they should be rescinded, eliminating their cost, which itself raises risks to the public.[vi]
  • Some studies rely on funding from sources that restrict disclosure of underlying data, and the rule would prohibit the EPA’s use of those. That is precisely why the rule is needed. The funders should change their policies; their refusal suggests they have something to hide.

In conclusion, the EPA’s proposed rule “Strengthening Transparency in Regulatory Science,” banning the agency’s use of “secret science” in formulating regulations, survives objections. Its adoption and implementation will improve, not harm, the EPA’s mission to protect Americans from real environmental risks. It will also reduce the risks caused by unjustified but costly regulations.

Sign the letter now!

Sign “Take Secret Science Out of the EPA: An Open Letter in Support of Strengthening Transparency in Regulatory Science.”


[i] Among discussions of it are John P. A. Ioannidis, “Why Most Published Research Findings Are False,” PLoS Medicine 2(8) (2005); Joseph P. Simmons et al., “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant,” Psychological Science 22(11) (2011): 1359–66; C. Glenn Begley and Lee M. Ellis, “Drug Development: Raise Standards for Preclinical Cancer Research,” Nature 483 (2012): 531–33; and David Randall and Christopher Welser, The Irreproducibility Crisis in Modern Science: Causes, Consequences, and the Road to Reform (New York: National Association of Scholars, 2018). After nearly 40 years as a professor and researcher and over 30 years as a peer reviewer for over 30 professional journals, the National Science Foundation, and the National Institutes of Health, Robert Higgs wrote:

Peer review, on which lay people place great weight, varies from being an important control, where the editors and the referees are competent and responsible, to being a complete farce, where they are not. As a rule, not surprisingly, the process operates somewhere in the middle, being more than a joke but less than the nearly flawless system of Olympian scrutiny that outsiders imagine it to be. Any journal editor who desires, for whatever reason, to reject a submission can easily do so by choosing referees he knows full well will knock it down; likewise, he can easily obtain favorable referee reports. … Personal vendettas, ideological conflicts, professional jealousies, methodological disagreements, sheer self-promotion, and a great deal of plain incompetence and irresponsibility are no strangers to the scientific world …. In no event can peer review ensure that research is correct in its procedures or its conclusions.

[ii] The release, in 2009 and 2011, of thousands of emails exchanged by leading proponents of the view that AGW/ACC is historically unprecedented and probably of so great a magnitude as to justify mitigation measures costing trillions of dollars revealed widespread corruption of the peer-review process: reviewers and editors worked to block the publication of high-quality papers that questioned that view and to secure the publication of low-quality papers that affirmed it. Ivan Kenneally wrote in The New Atlantis of those emails:

These behind-the-scenes discussions among leading global-warming exponents are remarkable both in their candor and in their sheer contempt for scientific objectivity. There can be little doubt after even a casual perusal that the scientific case for global warming and the policy that springs from it are based upon a volatile combination of political ideology, unapologetic mendacity, and simmering contempt for even the best-intentioned disagreement.

Such practices and others that feed a dubious perception of strong scientific consensus have been ably documented in numerous publications, including Ross McKitrick’s “Bias in the Peer Review Process: A Cautionary and Personal Account,” A.W. Montford’s Hiding the Decline: A history of the Climategate affair, and David H. Douglass and John R. Christy’s “A..


In his recent article “Four Reasons Alarmists Are Wrong on Climate Change,” Cornwall Alliance Research Associate for Developing Countries Vijay Jayaraj distinguished what, following widespread usage, he called climate-change alarmists, deniers, and skeptics. The “alarmists” generally think that human emissions of carbon dioxide (and other greenhouse gases) are driving global warming so rapid and eventually of such great magnitude as to be dangerous or even catastrophic and that the appropriate response is to curtail drastically such emissions. The “deniers,” he wrote, “categorically deny the warming trend,” not simply the view that carbon dioxide is a major cause of acknowledged warming. And the skeptics “disagree on the magnitude and cause of warming, but not on whether it’s happening; and they also disagree about the consequences of warming and how mankind should respond.”

A friendly reader wrote to us in response,

It’s interesting you talk of ‘deniers’ vs. ‘skeptics’ on the basis of whether they think there’s any warming or not. I think the difference is different, in that real skeptics question the fundamental underlying basis of earth’s underlying global temperature is driven by the atmosphere’s composition or not, i.e. does the level of CO2 actually have any CAUSATION. The evidence and new thinking emerging is that CO2 does not have any driving cause whatsoever.

You may be aware of the recent work published by Ned Nikolov et al (@NikolovScience), who show that the planetary temperature of a number of the planets in our solar system, including Earth, is determined purely by the combination of atmospheric pressure and solar insolation. They don’t raise this as a new hypothesis, but as an understanding of real observed data. As such, the argument would class them as ‘deniers’. I would not agree, but see it as genuine scientific discovery.

Obviously such new thinking is meeting stiff opposition, but that’s ‘consensus’ for you, which tries to shut down ‘inconvenient’ enquiry and is the real ‘denial’.

We appreciate the distinction. Vijay Jayaraj adopted, with some modification, the terminology most common in the media and, to a lesser but still significant extent, in the scientific literature on anthropogenic global warming. The term “skeptic” as applied to AGW does have considerable elasticity, though, stretching all the way from

  • those who generally grant the Intergovernmental Panel on Climate Change’s view that over the last 60 years or so anthropogenic CO2 emissions have been the primary driver of global warming, but who question the IPCC’s (and warming alarmists’ generally) preference for mitigation over adaptation as the response, like Bjørn Lomborg, through
  • those who generally grant that anthropogenic CO2 emissions have played a significant or even majority role in recent warming but who think the magnitude of the warming is not dangerous, or at least not potentially catastrophic, like Patrick Michaels and, more recently, Judith Curry, through
  • those who grant that anthropogenic CO2 emissions probably contribute something to warming but think they likely account for well under half of recent warming, or even contribute so little as to be undetectable, like Fred Singer, Roy Spencer, and John Christy, to
  • those who, like our correspondent and for a variety of reasons, think anthropogenic CO2, or CO2 in general, or even any “greenhouse” gas, plays no part in determining global temperature, like some of the folks associated with the organization Principia Scientific.

I’m familiar with the pressure/insolation hypothesis about planetary temperature. If “denier” is defined as one who denies that CO2 and other “greenhouse” gases play any role in determining planetary temperature, those who hold the “pressure/insolation” hypothesis would indeed be classified as “deniers” and so would overlap with some “skeptics.”

I find the hypothesis interesting and plausible, but thus far I’m not persuaded that it’s the full explanation. I’m inclined to think the causes are multiple: insolation (obviously); pressure; the physics of radiative heat transfer (and hence the role of infrared-absorbing, misnamed “greenhouse,” gases); changes in albedo, including the extent of snow, ice, and cloud cover (cloudiness perhaps being affected in part by the rate of influx of galactic cosmic rays, which in turn is affected by changes in the solar magnetic wind and by our solar system’s position relative to other parts of the galaxy); and probably some other factors. But I doubt that it is possible for now, and perhaps in principle ever, to determine the precise proportional contributions of each. Global average surface temperature (insofar as there even is such a thing, let alone one we can measure with any significant degree [pardon the pun] of accuracy) is but one element of the climate system, which is a coupled, non-linear, chaotic fluid-dynamic system and therefore, I think, unpredictable in principle except in the very broadest terms. That is not only because we lack sufficient data but also because prediction would require solving the Navier-Stokes equations, a matter on which, not being a mathematician, I trust the judgment of people like Christopher Essex.
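The sensitive dependence that makes chaotic systems unpredictable can be illustrated with the classic Lorenz-63 toy model (a sketch, not a climate model; the parameters are the standard textbook values, and the crude Euler integration is only for illustration):

```python
# Toy Lorenz-63 system: two runs whose initial states differ by one part
# in a billion diverge, illustrating sensitive dependence on initial
# conditions in a coupled non-linear system.
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def run(state, steps=4000):
    for _ in range(steps):
        state = lorenz_step(state)
    return state

a = run((1.0, 1.0, 1.0))
b = run((1.0, 1.0, 1.0 + 1e-9))  # tiny perturbation in z
sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
# After 4,000 steps the separation is many orders of magnitude larger
# than the initial 1e-9 difference.
```

The trajectories stay within the bounded attractor (the “very broadest terms” in which prediction remains possible) while their point-by-point states become uncorrelated, which is exactly the distinction between projecting climate statistics and predicting particular states.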

Nonetheless, I’m seeing references to the insolation/pressure hypothesis increasingly frequently, and it will be interesting to watch how it fares over time.

P.S. If you liked this article you might enjoy our Cornwall Alliance Email Newsletter! Sign up here to receive analysis on top issues of the day related to science, economics, and poverty development.

As a thank you for signing up, you will receive a link to watch Dr. Beisner’s 84-minute lecture (with PowerPoint slides) “Climate Change and the Christian: What’s True, What’s False, What’s our Responsibility?” Free!

Is this the End of the Beginning, or the Beginning of the End?

On November 10, 1942, after British and Commonwealth forces defeated the Germans and Italians at the Second Battle of El Alamein, taking nearly 30,000 prisoners, Winston Churchill told the British Parliament, “Now this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.” In The Hinge of Fate, volume 3 of his marvelous 6-volume history of World War II, published eight years later, he reflected, “It may almost be said, ‘Before Alamein we never had a victory. After Alamein we never had a defeat’.”

The publication of Nicholas Lewis and Judith Curry’s newest paper in The Journal of Climate reminds me of that. The two authors, who for years have focused much of their work on figuring out how much warming should come from adding carbon dioxide to the atmosphere, conclude that it’s at least 30 and probably 50 percent less than climate alarmists have claimed for the last forty years. (As I explain in my blog post, there are reasons to think the alarmists’ error is even greater than 50 percent.) If that is true, then all the reasons for drastic policies to cut carbon dioxide emissions—mainly by replacing coal, oil, and natural gas with wind and solar as dominant energy sources—disappear.

I discuss Lewis and Curry’s article below and provide links to it and discussions of it. But first I want to make just one simple point: For the last fifteen years or more, at least until a year or two ago, it would have been inconceivable that The Journal of Climate, which has been a staunch defender of climate alarmist “consensus” science, would have published their article. That it does so now means the dam has cracked, the water’s pouring through, and the crack will spread until the whole dam collapses.

Is this the beginning of the end of climate alarmists’ hold on the world of climate science and policy, or the end of the beginning? Is it the Second Battle of El Alamein, or is it D-Day? I don’t know, but it is certainly significant. It may well be that henceforth the voices of reason and moderation will never suffer a defeat.

“All the King’s Horses and All the King’s Men,” by Alan Turkus, Flickr Creative Commons

Is Pat Michaels a Prophet?

Shattered Consensus: The True State of Global Warming, edited by climatologist Patrick J. Michaels (then Research Professor of Environmental Sciences at the University of Virginia and the State Climatologist of Virginia, now Senior Fellow in Environmental Studies at the Cato Institute), was published 13 years ago. Its title was at best premature. The “consensus” (in scare quotes because the breadth of agreement is vastly overstated)—that human emissions of carbon dioxide and other “greenhouse” gases would, if unchecked, cause potentially catastrophic global warming—wasn’t shattered then, and it hasn’t shattered since. At least, that’s the case if, when we read “shattered,” we think of something like what happens when you drop a piece of fine crystal on a granite countertop: instantaneous disintegration into tiny shards.

But though premature and perhaps a bit hyperbolic, the title might have been prophetic.

From 1979 (publication of “Carbon Dioxide and Climate: A Scientific Assessment” by the National Academy of Sciences) to 2013 (publication of the Intergovernmental Panel on Climate Change’s 5th Assessment Report), what we might call the “establishment” of climate-change scientists had concluded that if the concentration of carbon dioxide (or its equivalent in other “greenhouse” gases) doubled, global average surface temperature would rise by 1.5–4.5 C degrees, with a “best estimate” of about 3 degrees.

But late in the first decade of this century, spurred partly by the failure of the atmosphere to warm as rapidly as the “consensus” expected, various studies began challenging that conclusion, pointing to lower “equilibrium climate sensitivity” (ECS). As the Cornwall Alliance reported four years ago,

IPCC estimates climate sensitivity at 1.5˚C to 4.5˚C, but that estimate is based on computer climate models that failed to predict the absence of warming since 1995 and predicted, on average, four times as much warming as actually occurred from 1979 to the present. It is therefore not credible. Newer, observationally based estimates have ranges like 0.3˚C to 1.0˚C (NIPCC 2013a, p. 7) or 1.25˚C to 3.0˚C with a best estimate of 1.75˚C (Lewis and Crok 2013, p. 9). Further, “No empirical evidence exists to support the assertion that a planetary warming of 2°C would be net ecologically or economically damaging” (NIPCC 2013a, p. 10). [Abbreviated references are identified beginning here.]

Most of the lower estimates of ECS, though, were published in places not controlled by “consensus” scientists and so were written off.

Now, though, a journal dead center in the “consensus”—the American Meteorological Society’s Journal of Climate—has accepted a new paper, “The impact of recent forcing and ocean heat uptake data on estimates of climate sensitivity,” by Nicholas Lewis, an independent climate science researcher in the UK, and Judith Curry, formerly Professor and Chair of the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology and now President of Climate Forecast Applications Network, that concludes that ECS is very likely 50–70 percent as high as the “consensus” range.

Here’s how Lewis and Curry summarize their findings in their abstract, with the takeaways emphasized:

Energy budget estimates of equilibrium climate sensitivity (ECS) and transient climate response (TCR) [increase in global average surface temperature at time of doubling of atmospheric CO2 concentration, i.e., 70 years assuming 1% per annum increase in concentration—ECB] are derived based on the best estimates and uncertainty ranges for forcing provided in the IPCC Fifth Assessment Scientific Report (AR5). Recent revisions to greenhouse gas forcing and post-1990 ozone and aerosol forcing estimates are incorporated and the forcing data extended from 2011 to 2016. Reflecting recent evidence against strong aerosol forcing, its AR5 uncertainty lower bound is increased slightly. Using a 1869–1882 base period and a 2007−2016 final period, which are well-matched for volcanic activity and influence from internal variability, medians are derived for ECS of 1.50 K (5−95%: 1.05−2.45 K) and for TCR of 1.20 K (5−95%: 0.9−1.7 K). These estimates both have much lower upper bounds than those from a predecessor study using AR5 data ending in 2011. Using infilled, globally-complete temperature data gives slightly higher estimates; a median of 1.66 K for ECS (5−95%: 1.15−2.7 K) and 1.33 K for TCR (5−95%:1.0−1.90 K). These ECS estimates reflect climate feedbacks over the historical period, assumed time-invariant. Allowing for possible time-varying climate feedbacks increases the median ECS estimate to 1.76 K (5−95%: 1.2−3.1 K), using infilled temperature data. Possible biases from non-unit forcing efficacy, temperature estimation issues and variability in sea-surface temperature change patterns are examined and found to be minor when using globally-complete temperature data. These results imply that high ECS and TCR values derived from a majority of CMIP5 climate models are inconsistent with observed warming during the historical period.
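In its simplest form, the energy-budget method the abstract describes is a pair of ratios: ECS = F2x × ΔT / (ΔF − ΔQ) and TCR = F2x × ΔT / ΔF, where F2x is the forcing from a doubling of CO2, ΔT the observed temperature change between the base and final periods, ΔF the change in forcing, and ΔQ the change in planetary heat uptake. Here is a sketch with illustrative round numbers, not Lewis and Curry’s actual inputs:

```python
import math

# Energy-budget estimates of climate sensitivity, in sketch form.
# All input values below are illustrative round numbers, NOT the
# actual Lewis and Curry inputs.
F2X = 3.7   # W/m^2, approximate forcing from doubled CO2
dT  = 0.8   # K, temperature change, base period to final period
dF  = 2.5   # W/m^2, change in total forcing over the same span
dQ  = 0.5   # W/m^2, change in planetary (mostly ocean) heat uptake

ecs = F2X * dT / (dF - dQ)   # equilibrium climate sensitivity
tcr = F2X * dT / dF          # transient climate response

# The abstract's bracketed note on timing: CO2 rising 1% per year
# doubles in ln(2)/ln(1.01) years, roughly 70.
doubling_years = math.log(2) / math.log(1.01)

print(round(ecs, 2), round(tcr, 2), round(doubling_years, 1))
```

With these stand-in numbers the sketch yields an ECS of 1.48 K. Note that shrinking ΔF − ΔQ (for example, by assuming stronger aerosol offset) pushes the estimate up, which is why the aerosol-forcing revision the abstract mentions matters so much to the result.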

“Our results imply that, for any future emissions scenario, future warming is likely to be substantially lower than the central computer model-simulated level projected by the IPCC, and highly unlikely to exceed that level,” a press release from the Global Warming Policy Forum quoted Lewis as saying.

Veteran environmental science writer Ronald Bailey, in a report on the new paper in Reason, wrote about the paper:

How much lower? Their median ECS estimate of 1.66°C (5–95% uncertainty range: 1.15–2.7°C) is derived using globally complete temperature data. The comparable estimate for 31 current generation computer climate simulation models cited by the IPCC is 3.1°C. In other words, the models are running almost two times hotter than the analysis of historical data suggests that future temperatures will be.

In addition, the high-end estimate of Lewis and Curry’s uncertainty range is 1.8°C below the IPCC’s high-end estimate.

Commenting on the paper, Cornwall Alliance Senior Fellow Dr. Roy W. Spencer, Principal Research Scientist in Climatology at the University of Alabama-Huntsville and U.S. Science Team Leader for NASA’s satellite global temperature monitoring program, points out that even Lewis and Curry’s figures make two—no, three, counting the last sentence below—assumptions that are at best unknown and quite likely false:

I’d like to additionally emphasize overlooked (and possibly unquantifiable) uncertainties: (1) the assumption in studies like this that the climate system was in energy balance in the late 1800s in terms of deep ocean temperatures; and (2) that we know the change in radiative forcing that has occurred since the late 1800s, which would mean we would have to know the extent to which the system was in energy balance back then.

We have no good reason to assume the climate system is ever in energy balance, although it is constantly readjusting to seek that balance. For example, the historical temperature (and proxy) record suggests the climate system was still emerging from the Little Ice Age in the late 1800s. The oceans are a nonlinear dynamical system, capable of their own unforced chaotic changes on century to millennial time scales, that can in turn alter atmospheric circulation patterns, thus clouds, thus the global energy balance. For some reason, modelers sweep this possibility under the rug (partly because they don’t know how to model unknowns).

But just because we don’t know the extent to which this has occurred in the past doesn’t mean we can go ahead and assume it never occurs.

Or at least if modelers assume it doesn’t occur, they should state that up front.

If indeed some of the warming since the late 1800s was natural, the ECS would be even lower.

With regard to that last sentence, Spencer’s research colleague at the University of Alabama Dr. John R. Christy and two co-authors, Dr. Joseph D’Aleo and Dr. James Wallace, argued in a paper first published in the fall of 2016 and revised in the spring of 2017 that solar, volcanic, and ocean current variations are sufficient to explain all the global warming over the period of allegedly anthropogenic warming, leaving none to blame on carbon dioxide. At the very least, this suggests that indeed “some of the warming since the late 1800s was natural,” and hence “ECS would be even lower” than Lewis and Curry’s estimate.

All of this has important policy implications.

Wisely or not, the global community agreed in the 2015 Paris climate accords to try to limit global warming to at most 2 C degrees, preferably 1.5 degrees, above pre-Industrial levels.

If Lewis and Curry are right, and the warming effect of CO2 is only 50–70% of what the “consensus” has thought, cuts in CO2 emissions need not be so drastic as previously thought. That’s good news for the billions of people living in poverty and without affordable, reliable electricity, and whose hope to gain it is seriously compromised by efforts to impose a rapid transition from abundant, affordable, reliable fossil fuels to diffuse, expensive, unreliable wind and solar and other renewables as chief electricity sources.

And if Spencer (like many others who agree with him) is right that the assumptions behind ECS calculations are themselves mistaken, and Christy (like many others who agree with him) is right that some or all of the modern warming has been naturally driven, then ECS is even lower than Lewis and Curry thought, and there is even less reason for the harmful energy policies sought by the “climate consensus” community.

Regardless, we’re coming closer and closer to the fulfillment of the prophecy in Michaels’s 2005 book: the shattering—or at least the eroding and possibly the disappearance—of the alarmist consensus on anthropogenic global warming.

[This article was edited April 27, 2018, to add the first four paragraphs and the two bold subheads.]


Los Angeles paints streets white to reduce the city’s “urban heat islands” effect. (Photo courtesy of LA Street Services)

Los Angeles has a new solution to climate change: painting streets white.

Well, not quite a solution to climate change. What it really addresses is what’s called the “urban heat island” (UHI) effect. Cities and towns, with lots of blacktop streets, absorb more energy from sunlight than rural areas, so their ambient air temperatures (especially at night) are warmer. Los Angeles will paint its streets white to reduce its UHI.

It might actually work, though with all those black rubber tires running over those streets 24/7/365 it’s going to be quite a challenge keeping them white enough to make much difference.

But if it has any measurable effect not on local (UHI) temperature but on global, it won’t be because actual global average temperature will be made less than it otherwise would be. The effect of human greenhouse gas emissions on global temperature, though theoretically real, is too small to measure. After controlling for solar, volcanic, and ocean current cycle variations, there’s no warming left to blame on carbon dioxide.

However, a significant portion of the apparent increase in global average temperature actually comes from UHI contaminating temperature data. If Los Angeles’s street-painting project really works and gets copied by lots of other cities, we could see “global average temperature” decline not because truly global temperature truly declines but because UHI itself declines.

And that could expose the fact that the increase in truly global temperature has been exaggerated all along.


A little etymology lesson: data derives from the Latin dare (pronounced DAH-ray), “to give,” and means “given.” (Back when I took Latin, it was typically the second verb the conjugation of which one learned. The first was amo, “I love,” the infinitive form of which is amare.)

Data means “given.” In the natural sciences, it’s supposed to mean what we observe. It’s supposed to be quite distinct from what we do with or infer from what we observe.

To be data, information should be unadjusted from actual observations. Thus, if I stand on a scale that’s at factory default setting and it registers my weight as 260 lbs., and then I use the little knob at the front to adjust it and get back on it and it registers my weight as 255 lbs., 260 lbs. is data, and 255 lbs. is not. Some people will call it “adjusted data,” but in fact if it’s adjusted it’s not data—it’s information adjusted from data.

Classic scientific method ideally uses data (unadjusted information obtained from observations) to test guesses (or, on an ascending scale of confidence, hypotheses, theories, and laws). As Nobel Prize-winning physicist Richard Feynman put it,

In general we look for a new law by the following process. First we guess it. Then we compute the consequences of the guess to see what would be implied if this law that we guessed is right. Then we compare the result of the computation to nature, with experiment or experience, compare it directly with observation, to see if it works. If it disagrees with experiment it is wrong. In that simple statement is the key to science. It does not make any difference how beautiful your guess is. It does not make any difference how smart you are, who made the guess, or what his name is—if it disagrees with experiment it is wrong. That is all there is to it. [Richard Feynman, The Character of Physical Law (London: British Broadcasting Corporation, 1965), 4, emphasis added.]

Not to be naive, and in recognition that we don’t often achieve what is ideal, we must acknowledge that data sometimes need to be adjusted to make information comparable over time, space, and observational tools and techniques. Mercury thermometers measure temperature differently from microwave sensors on satellites. Instruments’ accuracy changes as they age. Collection locations and times of day may change. Individuals’ skills at using instruments differ. With deep misgivings, we can agree to call the results of such adjustments “data,” but we should always do so while firmly remembering that they are not, strictly speaking, data, not even adjusted data, but information adjusted from data.

Up to a point, adjustments can make information adjusted from data more credible than the original data. If you observed me turning the knob to adjust the reading on my scale downward, and you calculate the number of pounds my action strips from my apparent weight, adding those pounds back in makes your resulting information about my weight more credible than the reading on my tweaked instrument.
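The scale example reduces to one line of arithmetic in each direction, using the numbers from the example above:

```python
# The scale example in arithmetic: the factory-default reading is the
# datum; the knob applies a known offset; the correction is credible
# only because the offset was observed and is known.
raw_reading = 260                        # lbs: data
knob_offset = -5                         # lbs: the adjustment dialed in
displayed   = raw_reading + knob_offset  # 255 lbs: adjusted, so not data
recovered   = displayed - knob_offset    # 260 lbs: information restored
print(displayed, recovered)
```

The recovery step works only when the offset is independently known; without that knowledge, 255 is simply a less credible number.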

But if we assume that most inaccuracies and incomparabilities of data arise from random errors rather than intentional falsification, we can also assume that the results of those random errors will themselves be random. That is, if you take 30 measurements of the length of an object, and your errors are random, you can be reasonably sure that as many errors will be on the high side as the low, and generally their average will be more credible than any one of them taken singly. Likewise if you’re measuring its weight, or its temperature.
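The claim that random, symmetric errors tend to cancel in the average can be illustrated with a quick simulation; the numbers below are purely synthetic:

```python
# Simulate 30 noisy measurements of a known length: the mean of the
# readings is usually closer to the true value than a typical single
# reading is. Synthetic numbers, for illustration only.
import random

random.seed(42)
true_length = 100.0                      # cm, the value being measured
readings = [true_length + random.gauss(0, 0.5) for _ in range(30)]

mean_reading = sum(readings) / len(readings)
mean_error = abs(mean_reading - true_length)
typical_single_error = sum(abs(r - true_length) for r in readings) / len(readings)

print(round(mean_error, 3), round(typical_single_error, 3))
```

Run it with different seeds and the mean stays the better estimate almost every time. That is the whole case for averaging random errors, and it collapses the moment the errors stop being random.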

It follows that if the adjustments consistently shift the measurements in one direction (upward, downward, heavier, lighter, harder, softer, stronger, weaker, longer, shorter, etc.) rather than randomly, we can be reasonably sure that there was intentional error either in the initial collection of what we now realize are wrongly called data, or in the adjustment process. That is, a clear pattern of adjusting data in one direction or another is evidence of skullduggery somewhere—whether in the initial collection or in the adjustment process.
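That reasoning is essentially a sign test. Under random errors each adjustment is as likely to go up as down, so the odds that every one of n adjustments points the same way are 2 × (1/2)^n. A sketch with hypothetical adjustment values:

```python
# Sign test for one-directional adjustments. The adjustment values
# below are hypothetical, chosen only to illustrate the calculation.
adjustments = [-0.12, -0.30, -0.07, -0.22, -0.15, -0.28, -0.09, -0.18,
               -0.25, -0.11, -0.20, -0.14, -0.26, -0.08, -0.19]  # degrees

n = len(adjustments)
all_one_direction = (all(a < 0 for a in adjustments)
                     or all(a > 0 for a in adjustments))
chance_if_random = 2 * 0.5 ** n  # probability of that pattern from random errors

print(all_one_direction, chance_if_random)  # True, about 6 in 100,000
```

Fifteen same-direction adjustments would arise from random error only about six times in a hundred thousand; real station networks involve hundreds of adjustments, making the coincidence correspondingly less plausible.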

That’s where the work of master data sleuth Tony Heller comes in. When it comes to evaluating adjustments to global and regional temperature data, he’s the gold standard. What he does isn’t terribly complicated in principle. It just takes lots of patience and exquisite care with lots of numbers. And the results are bad news for the credibility of lots of official temperature “data” from sources like NASA, NOAA, Hadley, CRU, and others.

At the top of every page of his The Deplorable Climate Science Blog (deplorable describing not the blog but the climate science) you’ll see this:

Almost every one of those headlines links to a post in which Heller demonstrates, with the simplest of mathematics and the clearest of graphs, that something’s rotten in Denmark—that climate alarmists are, over and over again, adjusting data with methods that result in information that’s less credible than the data from which it’s derived.

Let me introduce you to just one of his more recent posts, from March 20, “NOAA Data Tampering Approaching 2.5 Degrees.” In it, Heller shows how the National Oceanic and Atmospheric Administration (NOAA) and the National Aeronautics and Space Administration (NASA) have adjusted raw U.S. temperature data in ways that raise recent numbers and lower older ones, creating a fictitious, or at best fictitiously exaggerated, warming trend.

To get the full impact, you should go ahead and read his whole article, but here are a few of the graphs he offers, and if you’re someone who dislikes dishonesty, especially dishonesty paid for by your tax dollars, these should be enough to make your blood boil.

The long-term upward trend in alleged U.S. annual average temperature is almost entirely an artifact of the adjustments, which hide the fact that the 1930s were warmer than the last two decades.

Look at that carefully, attending particularly to the vertical axis. What it shows is that NOAA has adjusted temperature data downward for all years until shortly after 2000 and upward for the years since. The farther back you go, the more NOAA pushes the “data” downward (“0” on the vertical axis is where a data point would fall if it hadn’t been adjusted at all, up or down); the more recent the year, the more it pushes the “data” upward. This is a sure sign of skullduggery.

As Heller reports,

Most of these adjustments are due to simply making up data.  Every month, a certain percentage of the 1,218 United States Historical Climatology Network (USHCN) stations fail to report their data, and the temperature gets estimated by NOAA using a computer model. Missing data is marked in the USHCN database with an “E” – meaning “estimated.” In 1970, about 10% of the data was missing, but that number has increased to almost 50%, meaning that almost half of the current adjusted data is fake.
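The percentage Heller cites is a straightforward count over the station files: USHCN marks estimated values with an “E” flag, so the estimated fraction is just flagged records over total records. A sketch of that count, run over hypothetical stand-in records rather than the real files:

```python
# Count the fraction of records flagged as estimated. The station IDs
# and values below are hypothetical stand-ins, not real USHCN data;
# only the "E"-flag convention is taken from the quoted description.
records = [
    ("USH00011084", 1970, "12.3"),
    ("USH00011084", 1971, "12.1E"),   # "E": value estimated, not measured
    ("USH00012813", 1970, "15.4"),
    ("USH00012813", 1971, "15.9E"),
    ("USH00013160", 1970, "14.0"),
    ("USH00013160", 1971, "14.2E"),
]

flagged = sum(1 for _, _, value in records if value.endswith("E"))
estimated_fraction = flagged / len(records)
print(estimated_fraction)
```

On this toy sample the estimated fraction is 0.5, matching the “almost half” Heller reports for the recent real data.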

Here’s his record of that growing substitution of fake for real measurements:

As a coup de grace, Heller then shows that the result of NOAA’s adjustments is to bring U.S. temperature data in line with the theory of CO2-driven global warming. That is, rather than follow Feynman’s dictum that in genuine scientific method observation corrects theory, NOAA uses theory to correct observation. Who are the real “science deniers” now?

But here is the real smoking gun of fraud by NOAA. The adjustments being made almost perfectly match atmospheric CO2 levels – showing that the data is being altered precisely to match global warming theory.
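“Almost perfectly match” is a correlation claim, and it can be checked with an ordinary Pearson correlation between the adjustment series and atmospheric CO2. A sketch over synthetic illustrative series, not NOAA’s actual numbers:

```python
# Pearson correlation between an adjustment series and CO2 levels.
# Both series below are synthetic illustrations, NOT NOAA's numbers.
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

co2_ppm    = [330, 340, 355, 370, 385, 400, 410]          # illustrative
adjustment = [-0.9, -0.7, -0.45, -0.2, 0.05, 0.3, 0.45]   # illustrative, degrees

r = pearson(co2_ppm, adjustment)
print(round(r, 3))  # close to 1.0 for series that track each other
```

A correlation near 1 shows only that the adjustments track CO2; by itself it does not establish why. The inference from that match to deliberate tuning is the step Heller is arguing for.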

In other words, NOAA knew what it wanted the “data” to say, and when the real (raw, unadjusted) data didn’t say that, it adjusted them till they did. The result has as much credibility as a confession wrung from an accused by torture.

But read the whole of Heller’s article—and then click on the links to his many others showing similar frauds on the part of our official “data” stewards. Then ask yourself, “And anyone would trust these guys because—why?”


“Golden Gate Bridge, San Francisco,” by Luke Price, Flickr Creative Commons

David Johnson, who follows us on Facebook, brought our attention to this article. San Francisco and Oakland are suing five major oil companies, including ExxonMobil, alleging that the companies suppressed from their stockholders what they knew about the risks climate change posed to their stocks’ future value. The article demonstrates that some California cities, San Francisco included, are guilty of exactly that practice themselves in dealing with their bondholders. Here’s the essence of the story:

… consider the hypocrisy laid bare this week in an Exxon court filing.  It points out that many of the California towns and cities took the exact opposite position in their municipal bond offerings.  When borrowing money, they took pains to insulate themselves from liabilities stemming from climate change when offering bonds to investors.  Some said they had no way to predict accurately risk related to rising sea levels or climate change or simply failed to mention such risks.  Apparently, these cities and towns believe it is better to tell the truth to the markets in New York City than to judges and jurors in court.

For example, in its bond offering in 2017, Santa Cruz states, “Areas within the county may be subject to unpredictable climatic conditions, such as flood, droughts and destructive storms.” And yet, in its lawsuits against the energy companies, Santa Cruz declared with righteous certainty that there is “a 98% chance that the County experiences a devastating three-foot flood before the year 2050, and a 22% chance that such a flood occurs before 2030.”

In its 2017 bond offering, San Francisco acknowledged, “The City is unable to predict whether sea-level rise or other impacts of climate change or flooding from a major storm will occur, when they may occur, and if any such events occur, whether they will have a material adverse effect on the business operations or financial condition of the City and the local economy.” But in its lawsuit, the city declared, “Global warming-induced sea level rise is already causing flooding of low-lying areas of San Francisco…”

And this was even before smoking marijuana became legal in California.

These contradictions only underscore the highly political origin and motivation of the city lawsuits. The plaintiffs’ lawyers include Matthew Pawa, long regarded as an innovative and entrepreneurial strategist when it comes to using the court room as a substitute for the legislature to advance climate change orthodoxy. It is no coincidence that Hagens Berman Sobol Shapiro, a large plaintiffs firm based in Seattle, recently announced that it is expanding its environmental practice by acquiring the Pawa Law Group.  Municipalities likely welcome the chance to outsource the costs of litigation to firms working on a contingency basis. Rulings by environmentalist-friendly judges could mean a big pay day for all, giving new meaning to the “green” movement.

In a petition filed in Texas court early this week, Exxon lays out the conflict between what the cities have told judges and what they have told the bond markets. Exxon is asking the court to require government officials to answer questions under oath about those statements. The case could have broad implications for the bond markets, as investors could have legal grounds to challenge the local governments for similar inconsistencies.  In the end, the cities’ mismanagement will come home to roost in higher borrowing costs and, ultimately, higher taxes for their residents.

For more about San Francisco and Oakland’s lawsuit, click here.


[Editor’s note: Lord Christopher Monckton of Brenchley, a British journalist and polymath who has been at the forefront of arguments against catastrophic anthropogenic global warming, prepared this document for submission as a brief to U.S. District Judge William Alsup in a case brought by San Francisco and Oakland against five major oil companies that seeks to hold the companies liable for damages from warming caused by the use of their products. It was first published at WattsUpWithThat.com.—ECB]

By Christopher Monckton of Brenchley

This will be a long posting, but it will not be found uninteresting.

Global warming on trial: Global warming goes on trial at 8.00 am this Wednesday, 21 March 2018, in Court 8 on the 19th floor of the Federal Building at 450 Golden Gate Avenue, San Francisco. Court 8 is the largest of the courtrooms in the Federal District Court of Northern California. They’re clearly expecting a crowd. The 8 am start, rather than the usual 10 am, is because the judge in the case is an early bird.

The judge: His Honor Judge William Haskell Alsup, who will preside over the coyly-titled “People of California” v. British Petroleum plc et al., is not to be underestimated. Judge Alsup, as the senior member of the Northern California Bench (he has been there for almost two decades), gets to pick the cases he likes the look of. He is no ordinary, custard-faced law graduate. Before he descended to the law (he wanted to help the civil rights movement), he earned a B.S. in engineering at Mississippi State University.

Don’t mess with me: His Honor Judge Alsup flourishing a tract by his mentor, the Supreme Court justice whom he once served as Clerk

Six years ago, in an acrimonious hearing between Oracle and Google, the two Silicon-Valley giants were arguing about nine lines of computer code, which Oracle said Google had filched for its Android cellphone system. In preparation for the case, Oracle had tested 15 million lines of Android code, and had found that just nine lines – a subroutine known as rangeCheck – had been copied keystroke for keystroke. Oracle’s case was that these nine lines of code, though representing only 0.00006% of the Android software, were a crucial element in the system. Judge Alsup did not buy that argument.

Rumors gather about great men. In hushed tones, those who talk of Judge Alsup say he taught himself the Java programming language so that he could decide the rangeCheck case. In fact, he is not familiar with Java, but he does write computer code using qBasic, which used to be bundled free with MS-DOS. On the vast desk in his book-lined office sits a 2011-vintage Dell laptop, the only one he has that will still run qBasic. He has written programs for his ham-radio hobby, for the Mastermind board game, and for his wife’s bridge game.

The 18-year-old Bill Alsup at his ham radio console in Mississippi.

This, then, is that rarest of creatures, a tech-savvy judge. And he has taken the very rare but commendable step of ordering both parties to answer nine scientific questions about climate change in preparation for what he has called a “tutorial” on the subject next Wednesday.

Hearing of this case, and of Bill Alsup’s starring role, I wondered what line of argument might convince a scientifically literate judge that there is no cause for alarm about manmade global warming—the plaintiffs being two Californian cities who want the world’s five biggest oil corporations to pay them to adapt to rising sea level.

Judge Alsup might well be moved to dismiss the plaintiffs’ case provided that the defendants were able to establish definitively that fears of global warming had been very greatly exaggerated.

Two propositions: If the following two propositions were demonstrated, His Honor might decide – and all but a few irredentists would be compelled to agree – that global warming was not a problem and that the scare was over.

  1. It can be proven that an elementary error of physics is the sole cause of alarm about global warming – elementary because otherwise non-climatologists might not grasp it.
  2. It can be proven that, owing to that elementary error, current official mid-range estimates of equilibrium sensitivity to anthropogenic activity are at least twice what they should be.

Regular readers will know that my contributions here have been infrequent in the past year. The reason is that I have had the honor to lead a team of eminent climatological researchers who have been quietly but very busily investigating how much global warming we may cause, known as the “equilibrium-sensitivity” question.

We can now prove both points itemized above, and we have gone to more than customary lengths to confirm by multiple empirical methods what we originally demonstrated by a theoretical method. The half-dozen methods all cohere in the same ballpark.

Three days before His Honor posted up his list of questions on climate science, my team had submitted a paper on our result to a leading climatological journal (by convention, I am bound not to say which until publication).

The judge’s question: When I saw His Honor’s eighth question, “What are the main sources of heat that account for the incremental rise in temperature on Earth?”, I contacted my eight co-authors, who all agreed to submit an amicus curiae or “friend-of-the-court” brief.

Our reply: Our amicus brief, lodged for us by a good friend of the ever-valuable Heartland Institute, concludes with a respectful recommendation that the court reject the plaintiffs’ case. It further recommends that the court order the oil corporations to meet their own costs in the cause, because their me-too public statements to the effect that global warming is a “problem” requiring to be addressed rest on the same elementary error as the plaintiffs’ case.

In effect, the oil corporations have invited legal actions such as this, wherefore they should pay the cost of their folly in accordance with the ancient legal principle volenti non fit injuria – if you stick your chin out and invite someone to hit it, don’t blub when he does.

The judge has the right to accept or reject the brief, so we accompanied our brief with the usual short application requesting the court to accept it for filing. Since the rules of court require the brief to be lodged as an exhibit to the application, the brief stands part of the court papers in any event, has been sent to all parties, and is now publicly available on PACER, the Federal judiciary’s public-access database.

Therefore, I am at last free to reveal what we have discovered. There is indeed an elementary error of physics right at the heart of the models’ calculations of equilibrium sensitivity. After correcting that error, and on the generous assumption that official climatology has made no error other than that which we have exposed, global warming will not be 3.3 ± 1.2 K: it will be only 1.2 ± 0.15 K. We say we can prove it.

The proof: I shall now outline our proof. Let us begin with the abstract of the underlying paper. It is just 70 words long, for the error (though it has taken me a dozen years to run it to earth) really is stupendously elementary:

Abstract: In a dynamical system, even an unamplified input signal induces a feedback response. Hitherto, however, the large feedback response to emission temperature has been misattributed to warming from the naturally-occurring, non-condensing greenhouse gases. After correction, the theoretically-derived pre-industrial feedback fraction is demonstrated to cohere with the empirically-derived industrial-era value an order of magnitude below previous estimates, mandating reduction of projected Charney sensitivity from 3.3 ± 1.2 K to 1.2 ± 0.15 K.

Equations: To understand the argument that follows, we shall need three equations.

The zero-dimensional-model equation (1) says that equilibrium sensitivity or final warming ΔTeq is the ratio of reference sensitivity or initial warming ΔTref to (1 – f ), where f is the feedback fraction, i.e., the fraction of ΔTeq represented by the feedback response ΔT(ref) to ΔTref. The entire difference between reference and equilibrium sensitivity is accounted for by the feedback response ΔT(ref) (the bracketed subscript indicates a feedback response).

ΔTeq = ΔTref / (1 – f ).    (1)

The zero-dimensional model is not explicitly used in general-circulation models. However, it is the simplest expression of the difference between reference sensitivity before accounting for feedback and equilibrium sensitivity after accounting for feedback. Eq. (1), a simplified form of the feedback-amplification equation that originated in electronic network analysis, is of general application when deriving the feedback responses in all dynamical systems upon which feedbacks bear. The models must necessarily reflect it.

Eq. (1) is used diagnostically not only to derive equilibrium sensitivity (i.e. final warming) from official inputs but also to derive the equilibrium sensitivity that the models would be expected to predict if the inputs (such as the feedback fraction f ) were varied. We conducted a careful calibration exercise to confirm that the official reference sensitivity and the official interval of the feedback fraction, when input to Eq. (1), indeed yield the official interval of equilibrium sensitivity.
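The calibration exercise described above can be sketched in a few lines of Python. This is a minimal illustration using the mid-range values quoted in the text, not the models’ actual diagnostics:

```python
def equilibrium_sensitivity(dT_ref, f):
    """Eq. (1): equilibrium (final) warming from reference warming
    dT_ref and feedback fraction f."""
    assert f < 1.0, "f >= 1 implies a runaway response"
    return dT_ref / (1.0 - f)

# Official mid-range inputs: reference sensitivity 1.1 K, f = 0.67.
dT_eq = equilibrium_sensitivity(1.1, 0.67)
print(round(dT_eq, 1))  # 3.3, the CMIP5 mid-range Charney sensitivity
```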

The feedback-fraction equation (2): If the reference sensitivity ΔTref and the equilibrium sensitivity ΔTeq are specified, the feedback fraction f is found by rearranging (1) as (2):

f  = 1 – ΔTref / ΔTeq.    (2)

The reference-sensitivity equation (3): Reference sensitivity ΔTref is the product of a radiative forcing ΔQ0, in Watts per square meter, and the Planck reference-sensitivity parameter λ0, in Kelvin per Watt per square meter.

ΔTref = λ0 ΔQ0.    (3)

The Planck parameter λ0 is currently estimated at about 0.3125 (i.e., 1/3.2) K per W m–2 (Soden & Held 2006; Bony 2006, Appendix A; IPCC 2007, p. 631 fn.). The CO2 radiative forcing ΔQ0 from doubled CO2 concentration is 3.5 W m–2 (Andrews 2012). Therefore, from Eq. (3), reference sensitivity ΔTref to doubled CO2 is about 1.1 K.
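Eqs. (2) and (3) are purely mechanical; here is a minimal Python sketch using the constants quoted above (the function names are ours, chosen for illustration):

```python
LAMBDA_0 = 0.3125  # Planck parameter, K per W m^-2 (Soden & Held 2006)
DQ_0 = 3.5         # radiative forcing from doubled CO2, W m^-2 (Andrews 2012)

def reference_sensitivity(dQ, lam=LAMBDA_0):
    """Eq. (3): reference (pre-feedback) warming for a forcing dQ."""
    return lam * dQ

def feedback_fraction(dT_ref, dT_eq):
    """Eq. (2): feedback fraction from reference and equilibrium warming."""
    return 1.0 - dT_ref / dT_eq

print(round(reference_sensitivity(DQ_0), 1))   # ~1.1 K for doubled CO2
print(round(feedback_fraction(1.1, 3.3), 2))   # 0.67, the models' mid-range f
```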

The “natural greenhouse effect” is not 32 K: The difference of 32 K between natural temperature TN (= 287.6 K) in 1850 and emission temperature TE (= 255.4 K) without greenhouse gases or temperature feedbacks was hitherto imagined to comprise 8 K (25%) base warming ΔTB directly forced by the naturally-occurring, non-condensing greenhouse gases and a 24 K (75%) feedback response ΔT(B) to ΔTB, implying a pre-industrial feedback fraction f ≈ 24 / 32 = 0.75 (Lacis et al., 2010).

Similarly, the CMIP3/5 models’ mid-range reference sensitivity ΔTref (= 3.5 × 0.3125 ≈ 1.1 K) and Charney sensitivity ΔTeq (= 3.3 K) (Charney sensitivity is equilibrium sensitivity to doubled CO2) imply a feedback fraction f = 1 – 1.1 / 3.3 = 0.67 (Eq. 2) in the industrial era.

The error: However, climatologists had made the grave error of not realizing that emission temperature TE (= 255 K) itself induces a substantial feedback. To correct that long-standing error, we illustratively assumed that the feedback fractions f in response to TE and to ΔTB were identical. Then we derived f simply by replacing the delta values ΔTref, ΔTeq in (2) with the underlying entire quantities Tref, Teq, setting Tref = TE + ΔTB, and Teq = TN (Eq. 4),

f = 1 – Tref / Teq = 1 – (TE + ΔTB) / TN = 1 – (255.4 + 8) / 287.6 ≈ 0.08.    (4)

Contrast this true pre-industrial value f = 0.08 with the CMIP5 models’ current mid-range estimate f = 1 – 1.1 / 3.3 = 0.67 (Eq. 2), and with the f = 0.75 applied by Lacis et al. (2010) not only to the 32 K “entire natural greenhouse effect” but also to “current climate”.
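The contrast can be reproduced directly from the quoted temperatures. A sketch, with all values taken from the text:

```python
T_E = 255.4   # K, emission temperature without greenhouse gases or feedbacks
dT_B = 8.0    # K, direct warming from non-condensing greenhouse gases
T_N = 287.6   # K, natural temperature in 1850

f_corrected = 1.0 - (T_E + dT_B) / T_N   # Eq. (4), entire-quantity form
f_cmip5 = 1.0 - 1.1 / 3.3                # Eq. (2), models' mid-range values
f_lacis = 24.0 / 32.0                    # Lacis et al. (2010)

print(round(f_corrected, 2), round(f_cmip5, 2), f_lacis)  # 0.08 0.67 0.75
```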

Verification: We took no small trouble to verify by multiple empirical methods the result derived by the theoretical method in Eq. (4).

Test 1: IPCC’s best estimate (IPCC, 2013, fig. SPM.5) is that some 2.29 W m–2 of net anthropogenic forcing arose in the industrial era to 2011. The product of that value and the Planck parameter is the 0.72 K reference warming (Eq. 3).

However, 0.76 K warming was observed (taken as the linear trend on the HadCRUT4 monthly global mean surface temperature anomalies, 1850-2011).

Therefore, the industrial-era feedback fraction f is equal to 1 – 0.72 / 0.76, or 0.05 (Eq. 2). That is close to the pre-industrial value f = 0.08, but it is an order of magnitude (i.e., approximately tenfold) below the models’ 0.67 or Lacis’ 0.75.

There is little chance that some feedbacks have not yet fully acted. The feedbacks listed in IPCC (2013, p. 818, table 9.5) as relevant to the derivation of equilibrium sensitivity are described by IPCC (2013, p. 128, Fig. 1.2) as having the following durations: water-vapor and lapse-rate feedbacks, hours; cloud feedback, days; surface-albedo feedback, years.

The new headline Charney sensitivity: Thus, Charney sensitivity is not 1.1 / (1 – 0.67) = 3.3 K (Eq. 1), the CMIP5 models’ imagined mid-range estimate (Andrews 2012). Instead, whether f = 0.05 or 0.08, Charney sensitivity ΔTeq = 1.1 / (1 – f ) is 1.2 K (Eq. 1). That new headline value is far too small to worry about.
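The arithmetic behind the revised headline value can be checked in a couple of lines (illustrative only; inputs as quoted in the text):

```python
def charney(f, dT_ref=1.1):
    """Eq. (1) applied to doubled-CO2 reference sensitivity of 1.1 K."""
    return dT_ref / (1.0 - f)

# Small empirically derived feedback fractions give ~1.2 K either way,
# against 3.3 K for the models' f = 0.67.
for f in (0.05, 0.08, 0.67):
    print(f, round(charney(f), 2))
```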

Test 2: We sourced mainstream estimates of net anthropogenic forcing over ten different periods in the industrial era, converting each to reference sensitivity using Eq. (3) and then finding the feedback fraction f for each period using Eq. (2).

The mean of the ten values of f was 0.12, somewhat higher than the value 0.05 based on IPCC’s mid-range estimate of 2.29 W m–2 net anthropogenic forcing in the industrial era. The difference was driven by three high-end outliers in our table of ten results. Be that as it may, Charney sensitivity for f = 0.12 is only 1.25 K.

Test 3: We checked how much global warming had occurred since 1950, when IPCC says our influence on climate became detectable. The CMIP5 mid-range prediction of Charney sensitivity, at 3.3 K, is about equal to the original mid-range prediction of 21st-century global warming derivable from IPCC (1990, p. xiv), where 1.8 K warming compared with the pre-industrial era (equivalent to 1.35 K warming compared with 1990) is predicted for the 40-year period 1991-2030, giving a centennial-equivalent warming rate of 1.35 / 0.4 ≈ 3.4 K.

This coincidence of values allowed us to compare the 1.2 K Charney sensitivity derived from f on [0.05, 0.12] in Eq. (4) with the least-squares linear-regression trend on the HadCRUT4 monthly global mean surface temperature anomalies over the 68 years 1950-2017. Sure enough, the centennial-equivalent warming was 1.2 K/century:

The centennial-equivalent warming rate from 1950-2017 was 1.2 K/century
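The trend calculation in Test 3 is an ordinary least-squares regression scaled to K/century. A minimal sketch, using a synthetic stand-in series rather than the actual HadCRUT4 data (which are not reproduced here):

```python
def trend_per_century(years, anomalies):
    """Ordinary least-squares slope in K/year, scaled to K/century."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(anomalies) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, anomalies))
    den = sum((x - mean_x) ** 2 for x in years)
    return 100.0 * num / den

# Hypothetical 68-year series warming at exactly 0.012 K/yr (1.2 K/century):
years = [1950 + i for i in range(68)]
anoms = [0.012 * i for i in range(68)]
print(round(trend_per_century(years, anoms), 1))  # 1.2
```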

Test 4: We verified that the centennial-equivalent warming rate in the first 17 years (one-sixth) of the 21st century was not significantly greater than the rate since 1950. We averaged the monthly global mean surface and lower-troposphere temperature anomalies from the HadCRUT4 terrestrial and UAH satellite datasets and derived the least-squares linear-regression trend (the bright blue line on the graph below).

The satellite data were included because they cover a five-mile-high slab of the atmosphere immediately above the surface and have greater coverage than the terrestrial measurements. The trend over the 17 years was found to be 0.22 K, equivalent to 1.3 K/century:

Test 5: To confirm that we had understood feedback theory correctly, one of my distinguished co-authors, a hands-on electronics engineer, built a test rig in which we could specify the input signal (i.e., emission temperature TE) as a voltage, the direct-gain factor μ allowing for direct natural or anthropogenic forcings, and the feedback fraction β (we were using the more precise form of Eq. 1 that is usual in electronic network analysis). It was then a simple matter to measure the output signal (i.e., equilibrium sensitivity ΔTeq) directly.

The most crucial of the many experiments we ran on this rig was to set μ to unity, implying no greenhouse forcing at all. We set the feedback fraction β to a non-zero value and then verified that the output signal exceeded the input signal by the expected margin. Not at all to our surprise, it did. This experiment proved that emission temperature, on its own, induced a feedback response that climatology had hitherto overlooked.
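The rig’s behavior can be mimicked in software with the standard feedback-amplification relation from network analysis, output = μ · input / (1 − μβ). This is a sketch of the experiment described above, not the authors’ hardware, and the variable names are ours:

```python
def feedback_output(signal, mu=1.0, beta=0.0):
    """Closed-loop output for an input signal, direct gain mu,
    and feedback fraction beta."""
    loop_gain = mu * beta
    assert loop_gain < 1.0, "loop gain >= 1 is unstable"
    return mu * signal / (1.0 - loop_gain)

# mu = 1 (no greenhouse forcing at all), beta = 0.08: the 255.4 K input
# alone yields an amplified output, i.e. a feedback response exists even
# with no amplification of the input signal.
print(round(feedback_output(255.4, mu=1.0, beta=0.08), 1))  # 277.6
```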

This is where the elementary error made by climatologists for half a century has had its devastating effect. Look again at Eq. (1): the input signal is altogether absent. Although it is acceptable to use Eq. (1) to derive equilibrium sensitivities from reference sensitivities, the modelers’ mistake was to assume, as Lacis et al. (2010) and many others had assumed, that the entire 32 K difference between the natural temperature TN in 1850 and the emission temperature TE was accounted for by the natural greenhouse effect, comprising a direct greenhouse warming ΔTB = 8 K and a very large feedback response ΔT(B) = 24 K to ΔTB.

However, in truth – this is the crucial point – the emission temperature TE (= 255 K), even in the absence of any greenhouse gases, induces a large feedback response ΔT(E). This feedback response to the input signal is entirely uncontroversial in electronic network analysis and in control theory generally, but we have not been able to find any acknowledgement in climatology that it exists.

Just as Lacis (2010) did, the modelers assumed that the industrial-era feedback fraction must be every bit as large as the pre-industrial feedback fraction that they had erroneously inflated by adding the large feedback response induced by emission temperature to the small feedback response induced by the presence of the naturally-occurring greenhouse gases.

It was that assumption that led the modelers to assume that there must be some very strongly positive feedbacks, chief among which was the water-vapor feedback. However, although the Clausius-Clapeyron relation indicates that the space occupied by the atmosphere can carry near-exponentially more water vapor as it warms, there is nothing to say that it must.

Suppose there were a water-vapor feedback anything like as large as the models have assumed. (They have assumed a very large feedback only because they are trying to explain the large but fictitious feedback fraction consequent upon their erroneous assumption that the emission temperature of 255 K induces no feedback response at all, while the next 8 K of warming magically induces a 24 K feedback response.) In that event, atmospheric dynamics requires that there be a tropical mid-troposphere “hot spot” [I had the honor to name it], where the warming rate should be twice or thrice that at the tropical surface. However, the “hot spot” is not observed in reality (see below), except in one suspect dataset that Dr Fred Singer scrutinized some years ago and determined to be defective.

Models predict the tropical mid-troposphere “hot spot” (top, IPCC 2007, citing Santer 2003; above left, Lee et al. 2008; above right, Karl et al., 2006).

However, the “hot spot” is not observed in reality (see below). Our result shows why not. The “hot spot” is an artefact of the modelers’ error in misallocating the substantial feedback response induced by emission temperature by adding it to the very small feedback response induced by the naturally-occurring greenhouse gases.

The model-predicted “hot spot” is not observed in reality (Karl et al. 2006).

Test 6: Even after we had built and operated our own test…


Today my good friend Dr. Tom Sheahen, a physicist and head of the Institute for Theological Encounter with Science and Technology (ITEST), wrote to say,

Already this morning a friend sent me the “Patriot Post” rendition of your article [“Senator Whitehouse, Your New Wardrobe Is Ready”] that quotes me at length. Thank you very much. The “sackcloth and ashes” wardrobe line was really clever.  I didn’t see that coming at all.

I just wish some members of the hierarchy of the Catholic Church would take note, and tell ol’ Sheldon to knock it off. …

PS: [Joe] Bastardi’s book [The Climate Chronicles: Inconvenient Revelations You Won’t Hear from Al Gore–And Others] arrived just fine a couple days ago, and I started reading it. But immediately an eager friend borrowed it for a while. Could you ask Megan to send me another copy?

That’s the ticket! Share what we send you!

Tom got his copy of Joe’s book by responding to our offer to send, as our way of saying thanks, a FREE copy to anyone who gave a donation of ANY amount (100% tax-deductible, by the way) and requested it, mentioning Promo Code 1803. So now you know how, and you can follow Tom’s example.

But learn a lesson: Never lend a book you haven’t finished reading!

