Of all known and imagined consequences of climate change, many people fear sea-level rise most. But efforts to determine what causes seas to rise are marred by poor data and disagreements about methodology. The noted oceanographer Walter Munk referred to sea-level rise as an “enigma”; it has also been called a riddle and a puzzle.
It is generally thought that sea-level rise is driven mainly by thermal expansion of sea water, the so-called steric component. But by studying a very short time interval, it is possible to sidestep most of the complications, such as “isostatic adjustment” of the shoreline (as continents rise after the overlying ice has melted) and “subsidence” of the shoreline (as ground water and minerals are extracted).
I chose to assess the sea-level trend from 1915 to 1945, when a genuine, independently confirmed warming of approximately 0.5 degree Celsius occurred. I note particularly that sea-level rise is not affected by the warming; it continues at the same rate, 1.8 millimeters a year, according to a 1990 review by Andrew S. Trupin and John Wahr. I therefore conclude—contrary to the general wisdom—that the temperature of sea water has no direct effect on sea-level rise. That means neither does the atmospheric content of carbon dioxide.
This conclusion is worth highlighting: It shows that sea-level rise does not depend on the use of fossil fuels. The evidence should allay fear that the release of additional CO2 will increase sea-level rise.
But there is also good data showing sea levels are in fact rising at an accelerating rate. The trend has been measured by a network of tidal gauges, many of which have been collecting data for over a century.
The cause of the trend is a puzzle. Physics demands that water expand as its temperature increases. But to keep the rate of rise constant, as observed, expansion of sea water evidently must be offset by something else. What could that be? I conclude that it must be ice accumulation, through evaporation of ocean water, and subsequent precipitation turning into ice. Evidence suggests that accumulation of ice on the Antarctic continent has been offsetting the steric effect for at least several centuries.
It is difficult to explain why evaporation of seawater produces approximately 100% cancellation of expansion. My method of analysis considers two related physical phenomena: thermal expansion of water and evaporation of water molecules. But if evaporation offsets thermal expansion, the net effect is of course close to zero. What then is the real cause of sea-level rise of 1 to 2 millimeters a year?
Melting of glaciers and ice sheets adds water to the ocean and causes sea levels to rise. (Recall though that the melting of floating sea ice adds no water to the oceans, and hence does not affect the sea level.) After the rapid melting away of northern ice sheets, the slow melting of Antarctic ice at the periphery of the continent may be the main cause of current sea-level rise.
All this is happening because it is much warmer now than it was 12,000 years ago, at the end of the most recent glaciation. Yet there is little heat available in the Antarctic to support melting.
We can see melting happening right now at the Ross Ice Shelf of the West Antarctic Ice Sheet. Geologists have tracked Ross’s slow disappearance, and glaciologist Robert Bindschadler predicts the ice shelf will melt completely within about 7,000 years, gradually raising the sea level as it goes.
Of course, a lot can happen in 7,000 years. The onset of a new glaciation could cause the sea level to stop rising. It could even fall 400 feet, to the level at the last glaciation maximum 18,000 years ago.
Currently, sea-level rise does not seem to depend on ocean temperature, and certainly not on CO2. We can expect the sea to continue rising at about the present rate for the foreseeable future. By 2100 the seas will rise another 6 inches or so—a far cry from Al Gore’s alarming numbers. There is nothing we can do about rising sea levels in the meantime. We’d better build dikes and sea walls a little bit higher.
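The 6-inch figure follows from simple extrapolation of the tide-gauge trend cited above. A quick arithmetic check (the 2018 start year is an assumption for illustration):

```python
# Extrapolate the 1.8 mm/yr tide-gauge trend to the year 2100.
# Start year (2018) is assumed for illustration.
RATE_MM_PER_YEAR = 1.8
years = 2100 - 2018

rise_mm = RATE_MM_PER_YEAR * years   # 147.6 mm
rise_inches = rise_mm / 25.4         # roughly 5.8 inches, i.e. "6 inches or so"
print(f"Projected rise by 2100: {rise_inches:.1f} inches")
```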
Mr. Singer is a professor emeritus of environmental science at the University of Virginia. He founded the Science and Environmental Policy Project and the Nongovernmental International Panel on Climate Change.
Corresponding author: Ned Nikolov, Ksubz LLC, 9401 Shoofly Lane, Wellington, CO 80549, USA. Tel: 970-980-3303, 970-206-0700; E-mail: firstname.lastname@example.org
Received date: November 11, 2016; Accepted date: February 06, 2017; Published date: February 13, 2017
Citation: Nikolov N, Zeller K (2017) New Insights on the Physical Nature of the Atmospheric Greenhouse Effect Deduced from an Empirical Planetary Temperature Model. Environ Pollut Climate Change 1:112.
A recent study has revealed that the Earth’s natural atmospheric greenhouse effect is around 90 K or about 2.7 times stronger than assumed for the past 40 years. A thermal enhancement of such a magnitude cannot be explained with the observed amount of outgoing infrared long-wave radiation absorbed by the atmosphere (i.e. ≈ 158 W m-2), thus requiring a re-examination of the underlying Greenhouse theory. We present here a new investigation into the physical nature of the atmospheric thermal effect using a novel empirical approach toward predicting the Global Mean Annual near-surface equilibrium Temperature (GMAT) of rocky planets with diverse atmospheres. Our method utilizes Dimensional Analysis (DA) applied to a vetted set of observed data from six celestial bodies representing a broad range of physical environments in our Solar System, i.e. Venus, Earth, the Moon, Mars, Titan (a moon of Saturn), and Triton (a moon of Neptune). Twelve relationships (models) suggested by DA are explored via non-linear regression analyses that involve dimensionless products comprised of solar irradiance, greenhouse-gas partial pressure/density and total atmospheric pressure/density as forcing variables, and two temperature ratios as dependent variables. One non-linear regression model is found to statistically outperform the rest by a wide margin. Our analysis revealed that GMATs of rocky planets with tangible atmospheres and a negligible geothermal surface heating can accurately be predicted over a broad range of conditions using only two forcing variables: top-of-the-atmosphere solar irradiance and total surface atmospheric pressure. The hereto discovered interplanetary pressure-temperature relationship is shown to be statistically robust while describing a smooth physical continuum without climatic tipping points. This continuum fully explains the recently discovered 90 K thermal effect of Earth’s atmosphere. 
The new model displays characteristics of an emergent macro-level thermodynamic relationship heretofore unbeknown to science that has important theoretical implications. A key entailment from the model is that the atmospheric ‘greenhouse effect’, currently viewed as a radiative phenomenon, is in fact an adiabatic (pressure-induced) thermal enhancement analogous to compression heating and independent of atmospheric composition. Consequently, the global down-welling long-wave flux presently assumed to drive Earth’s surface warming appears to be a product of the air temperature set by solar heating and atmospheric pressure. In other words, the so-called ‘greenhouse back radiation’ is globally a result of the atmospheric thermal effect rather than a cause for it. Our empirical model also has fundamental implications for the role of oceans, water vapour, and planetary albedo in global climate. Since they are produced by a rigorous attempt to describe planetary temperatures in the context of a cosmic continuum using an objective analysis of vetted observations from across the Solar System, these findings call for a paradigm shift in our understanding of the atmospheric ‘greenhouse effect’ as a fundamental property of climate.
Keywords: Greenhouse effect; Emergent model; Planetary temperature; Atmospheric pressure; Greenhouse gas; Mars temperature
In a recent study Volokin et al. [1] demonstrated that the strength of Earth’s atmospheric Greenhouse Effect (GE) is about 90 K instead of 33 K as presently assumed by most researchers, e.g. [2-7]. The new estimate corrected a long-standing mathematical error in the application of the Stefan–Boltzmann (SB) radiation law to a sphere pertaining to Hölder’s inequality between integrals. Since the current greenhouse theory strives to explain GE solely through a retention (trapping) of outgoing long-wavelength (LW) radiation by atmospheric gases [2,5,7-10], a thermal enhancement of 90 K creates a logical conundrum, since satellite observations constrain the global atmospheric LW absorption to 155–158 W m-2 [11-13]. Such a flux might only explain a surface warming of up to 35 K. Hence, more than 60% of Earth’s 90 K atmospheric effect appears to remain inexplicable in the context of the current theory. Furthermore, satellite- and surface-based radiation measurements have shown [12-14] that the lower troposphere emits 42-44% more radiation towards the surface (i.e., 341-346 W m-2) than the net shortwave flux delivered to the Earth-atmosphere system by the Sun (i.e., 240 W m-2). In other words, the lower troposphere contains significantly more kinetic energy than expected from solar heating alone, a conclusion also supported by the new 90 K GE estimate. A similar but more extreme situation is observed on Venus as well, where the atmospheric downwelling LW radiation near the surface (>15,000 W m-2) exceeds the total absorbed solar flux (65-150 W m-2) by a factor of 100 or more. The radiative greenhouse theory cannot explain this apparent paradox considering the fact that infrared-absorbing gases such as CO2, water vapor and methane only re-radiate available LW emissions and do not constitute significant heat storage or a net source of additional energy to the system.
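The 35 K upper bound quoted above can be checked with the Stefan–Boltzmann law: add the absorbed LW flux to the 240 W m-2 net solar input and invert T = (F/σ)^1/4. This is a sketch of the bounding argument, not the paper’s own computation:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m-2 K-4

def sb_temperature(flux_w_m2):
    """Equilibrium blackbody temperature (K) for a given emitted flux (W m-2)."""
    return (flux_w_m2 / SIGMA) ** 0.25

solar_net = 240.0    # net shortwave absorbed by the Earth-atmosphere system
lw_absorbed = 158.0  # upper end of the satellite-constrained LW absorption

t_base = sb_temperature(solar_net)                     # ~255 K
t_enhanced = sb_temperature(solar_net + lw_absorbed)   # ~289 K
warming = t_enhanced - t_base                          # ~34-35 K
```

The resulting enhancement of roughly 34-35 K is the most the 158 W m-2 flux could account for, far short of the 90 K estimate.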
This raises a fundamental question about the origin of the observed energy surplus in the lower troposphere of terrestrial planets with respect to the solar input. The above inconsistencies between theory and observations prompted us to take a new look at the mechanisms controlling the atmospheric thermal effect.
We began our study with the premise that processes controlling the Global Mean Annual near-surface Temperature (GMAT) of Earth are also responsible for creating the observed pattern of planetary temperatures across the Solar System. Thus, our working hypothesis was that a general physical model should exist, which accurately describes GMATs of planets using a common set of drivers. If so, then such a model would also reveal the forcing behind the atmospheric thermal effect.
Instead of examining existing mechanistic models such as 3-D GCMs, we decided to try an empirical approach not constrained by a particular physical theory. An important reason for this was the fact that current process-oriented climate models rely on numerous theoretical assumptions while utilizing planet-specific parameterizations of key processes such as vertical convection and cloud nucleation in order to simulate the surface thermal regime over a range of planetary environments. These empirical parameterizations oftentimes depend on detailed observations that are not typically available for planetary bodies other than Earth. Hence, our goal was to develop a simple yet robust planetary temperature model of high predictive power that does not require case-specific parameter adjustments while successfully describing the observed range of planetary temperatures across the Solar System.
Methods and Data
In our model development we employed a ‘top-down’ empirical approach based on Dimensional Analysis (DA) of observed data from our Solar System. We chose DA as an analytic tool because of its ubiquitous past successes in solving complex problems of physics, engineering, mathematical biology, and biophysics [16-21]. To our knowledge DA has not previously been applied to constructing predictive models of macro-level properties such as the average global temperature of a planet; thus, the following overview of this technique is warranted.
Dimensional analysis background
DA is a method for extracting physically meaningful relationships from empirical data [22-24]. The goal of DA is to restructure a set of original variables deemed critical to describing a physical phenomenon into a smaller set of independent dimensionless products that may be combined into a dimensionally homogeneous model with predictive power. Dimensional homogeneity is a prerequisite for any robust physical relationship such as natural laws. DA distinguishes between measurement units and physical dimensions. For example, mass is a physical dimension that can be measured in grams, pounds, metric tons, etc.; time is another dimension, measurable in seconds (s), hours (h), years, etc. While the physical dimension of a variable does not change, the units quantifying that variable may vary depending on the adopted measurement system.
Many physical variables and constants can be described in terms of four fundamental dimensions, i.e., mass [M], length [L], time [T], and absolute temperature [Θ]. For example, an energy flux commonly measured in W m-2 has a physical dimension [M T-3] since 1 W m-2=1 J s-1 m-2=1 (kg m2 s-2) s-1 m-2=kg s-3. Pressure may be reported in units of Pascal, bar, atm, PSI or Torr, but its physical dimension is always [M L-1 T-2] because 1 Pa=1 N m-2=1 (kg m s-2) m-2=1 kg m-1 s-2. Thinking in terms of physical dimensions rather than measurement units fosters a deeper understanding of the underlying physical reality. For instance, a comparison between the physical dimensions of energy flux and pressure reveals that a flux is simply the product of pressure and the speed of moving particles [L T-1], i.e., [M T-3]=[M L-1 T-2] [L T-1]. Thus, a radiative flux FR (W m-2) can be expressed in terms of photon pressure Pph (Pa) and the speed of light c (m s-1) as FR=c·Pph. Since c is constant within a medium, varying the intensity of electromagnetic radiation in a given medium effectively means altering the pressure of photons. Thus, the solar radiation reaching Earth’s upper atmosphere exerts a pressure (force) of sufficient magnitude to perturb the orbits of communication satellites over time [25,26].
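The flux-as-pressure identity FR = c·Pph can be checked numerically; the solar-constant value below (about 1361 W m-2) is a standard round figure used here for illustration:

```python
C = 2.998e8              # speed of light, m s-1
SOLAR_CONSTANT = 1361.0  # TOA solar irradiance at Earth, W m-2 (assumed round value)

# Photon pressure of fully absorbed sunlight: P_ph = F_R / c
photon_pressure = SOLAR_CONSTANT / C   # ~4.5e-6 Pa
print(f"Photon pressure at TOA: {photon_pressure:.2e} Pa")
```

A pressure of a few micropascals is tiny, but acting continuously over years it measurably perturbs satellite orbits, as the text notes.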
The simplifying power of DA in model development stems from the Buckingham Pi Theorem, which states that a problem involving n dimensioned xi variables, i.e.,

f(x1, x2, … xn) = 0

can be reformulated into a simpler relationship of (n−m) dimensionless πi products derived from xi, i.e.,

φ(π1, π2, … πn−m) = 0

where m is the number of fundamental dimensions comprising the original variables. This theorem determines the number of non-dimensional πi variables to be found in a set of products, but it does not prescribe the number of sets that could be generated from the original variables defining a particular problem. In other words, there might be, and oftentimes is, more than one set of (n−m) dimensionless products to analyze. DA provides an objective method for constructing the sets of πi variables employing simultaneous equations solved via either matrix inversion or substitution.
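The n − m count can be verified numerically: build the dimensional matrix (one row per fundamental dimension, one column per variable) and take its rank, which gives m. The five-variable set {Ts, Tr, S, Px, Pr} used below is an illustrative reading of Table 1, not necessarily the exact set derived in Appendix A:

```python
import numpy as np

# Dimensional matrix: rows are exponents of [M, L, T, Θ].
# Columns: Ts [Θ], Tr [Θ], S [M T-3], Px [M L-1 T-2], Pr [M L-1 T-2].
dim_matrix = np.array([
    [0, 0,  1,  1,  1],   # M
    [0, 0,  0, -1, -1],   # L
    [0, 0, -3, -2, -2],   # T
    [1, 1,  0,  0,  0],   # Θ
])

n = dim_matrix.shape[1]                 # number of variables: 5
m = np.linalg.matrix_rank(dim_matrix)   # independent dimensions: 3 (T = -3M - L here)
num_pi = n - m                          # dimensionless products: 2
print(f"n = {n}, m = {m}, pi products = {num_pi}")
```

Two dimensionless products per set is consistent with the result reported later in the text.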
The second step of DA (after the construction of dimensionless products) is to search for a functional relationship between the πi variables of each set using regression analysis. DA does not disclose the best function capable of describing the empirical data. It is the investigator’s responsibility to identify a suitable regression model based on prior knowledge of the phenomenon and a general expertise in the subject area. DA only guarantees that the final model (whatever its functional form) will be dimensionally homogeneous, hence it may qualify as a physically meaningful relationship provided that it (a) is not based on a simple polynomial fit; (b) has a small standard error; (c) displays high predictive skill over a broad range of input data; and (d) is statistically robust. The regression coefficients of the final model will also be dimensionless, and may reveal true constants of Nature by virtue of being independent of the units utilized to measure the forcing variables.
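This second step can be sketched with a standard nonlinear least-squares fit of a candidate function π1 = f(π2). The functional form and the data points below are synthetic placeholders, not the paper’s fitted model or its planetary data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate dimensionally homogeneous model: pi1 = a * exp(b * pi2**c).
# All coefficients are dimensionless, as DA requires.
def model(pi2, a, b, c):
    return a * np.exp(b * pi2 ** c)

# Synthetic (pi2, pi1) pairs spanning many orders of magnitude,
# standing in for observations from six celestial bodies.
pi2_obs = np.array([1e-6, 1e-3, 0.01, 0.1, 1.0, 90.0])
pi1_obs = model(pi2_obs, 1.0, 0.25, 0.15)   # generated from known parameters

params, _ = curve_fit(model, pi2_obs, pi1_obs, p0=[1.0, 0.2, 0.2])
a, b, c = params   # recovers the generating parameters (1.0, 0.25, 0.15)
```

With real observations one would compare several such candidate forms by standard error and cross-validated predictive skill, as criteria (a)-(d) above demand.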
Selection of model variables
A planet’s GMAT depends on many factors. In this study, we focused on drivers that are remotely measurable and/or theoretically estimable. Based on the current state of knowledge we identified seven physical variables of potential relevance to the global surface temperature: 1) top-of-the-atmosphere (TOA) solar irradiance (S); 2) mean planetary surface temperature in the absence of atmospheric greenhouse effect, hereto called a reference temperature (Tr); 3) near-surface partial pressure of atmospheric greenhouse gases (Pgh); 4) near-surface mass density of atmospheric greenhouse gases (ρgh); 5) total surface atmospheric pressure (P); 6) total surface atmospheric density (ρ); and 7) minimum air pressure required for the existence of a liquid solvent at the surface, hereto called a reference pressure (Pr). Table 1 lists the above variables along with their SI units and physical dimensions. Note that, in order to simplify the derivation of dimensionless products, pressure and density are represented in Table 1 by the generic variables Px and ρx, respectively. As explained below, the regression analysis following the construction of πi variables explicitly distinguished between models involving partial pressure/density of greenhouse gases and those employing total atmospheric pressure/density at the surface. The planetary Bond albedo (αp) was omitted as a forcing variable in our DA despite its known effect on the surface energy budget, because it is already dimensionless and also partakes in the calculation of reference temperatures discussed below.
Ts (K), dimension [Θ]: Global mean annual near-surface temperature (GMAT), the dependent variable
S (W m-2), dimension [M T-3]: Stellar irradiance (average shortwave flux incident on a plane perpendicular to the stellar rays at the top of a planet’s atmosphere)
Tr (K), dimension [Θ]: Reference temperature (the planet’s mean surface temperature in the absence of an atmosphere or an atmospheric greenhouse effect)
Px (Pa), dimension [M L-1 T-2]: Average near-surface gas pressure representing either partial pressure of greenhouse gases or total atmospheric pressure
ρx (kg m-3), dimension [M L-3]: Average near-surface gas density representing either greenhouse-gas density or total atmospheric density
Pr (Pa), dimension [M L-1 T-2]: Reference pressure (the minimum atmospheric pressure required for a liquid solvent to exist at the surface)
Table 1: Variables employed in the Dimensional Analysis aimed at deriving a general planetary temperature model. The variables are comprised of 4 fundamental physical dimensions: mass [M], length [L], time [T] and absolute temperature [Θ].
Appendix A details the procedure employed to construct the πi variables. DA yielded two sets of πi products, each one consisting of two dimensionless variables. This implies an investigation of two types of dimensionally homogeneous functions (relationships) of the general form π1 = f(π2). Note that π1=Ts/Tr occurs as a dependent variable in both relationships, since it contains the sought temperature Ts. Upon replacing the generic pressure/density variables Px and ρx in these functions with either the partial pressure/density of greenhouse gases (Pgh and ρgh) or the total atmospheric pressure/density (P and ρ), one arrives at six prospective regression models. Further, as explained below, we employed two distinct kinds of reference temperature computed from different formulas, i.e., an effective radiating equilibrium temperature (Te) and a mean ‘no-atmosphere’ spherical surface temperature (Tna) (Table 1). This doubled the number of πi instances in the regression analysis, bringing the total number of potential models for investigation to twelve.
Reference temperatures and reference pressure
A reference temperature (Tr) characterizes the average thermal environment at the surface of a planetary body in the absence of atmospheric greenhouse effect; hence, Tr is different for each body and depends on solar irradiance and surface albedo. The purpose of Tr is to provide a baseline for quantifying the thermal effect of planetary atmospheres. Indeed, the Ts/Tr ratio produced by DA can physically be interpreted as a Relative Atmospheric Thermal Enhancement (RATE) ideally expected to be equal to or greater than 1.0. Expressing the thermal effect of a planetary atmosphere as a non-dimensional quotient instead of an absolute temperature difference (as done in the past) allows for an unbiased comparison of the greenhouse effects of celestial bodies orbiting at different distances from the Sun. This is because the absolute strength of the greenhouse effect (measured in K) depends on both solar insolation and atmospheric properties, while RATE being a radiation-normalized quantity is expected to only be a function of a planet’s atmospheric environment. To our knowledge, RATE has not previously been employed to measure the thermal effect of planetary atmospheres.
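As a numerical illustration of RATE, the quotient can be computed for Earth. The values below (Ts ≈ 287.4 K observed GMAT, Tna ≈ 197 K no-atmosphere temperature) are approximate figures consistent with the Volokin et al. estimate cited earlier, used here only to show how the ratio normalizes the roughly 90 K absolute effect:

```python
# Earth example of the Relative Atmospheric Thermal Enhancement (RATE).
# Input temperatures are approximate published figures, used for illustration.
ts = 287.4    # observed global mean annual near-surface temperature, K
tna = 197.0   # estimated mean 'no-atmosphere' surface temperature, K

rate = ts / tna          # ~1.46, i.e. a ~46% thermal enhancement
ge_absolute = ts - tna   # ~90 K, the absolute greenhouse effect
```

Because RATE is dimensionless, the same quotient can be compared directly between, say, Titan and Venus despite their very different insolation levels.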
Two methods have been proposed thus far for estimating the average surface temperature of a planetary body without the greenhouse effect, both based on the SB radiation law. The first and most popular approach uses the planet’s global energy budget to calculate a single radiating equilibrium temperature Te (also known as an effective emission temperature).
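The radiating equilibrium temperature follows from equating absorbed solar flux with SB emission over the full sphere, Te = [S(1 − αp)/(4σ)]^1/4. A quick check with rounded Earth values (S = 1361 W m-2 and αp = 0.3, both assumed here for illustration):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m-2 K-4

def effective_temperature(s, albedo):
    """Radiating equilibrium temperature Te = [S(1 - albedo) / (4 * sigma)]**0.25."""
    return (s * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

# Rounded Earth values (assumed): TOA irradiance 1361 W m-2, Bond albedo 0.3.
te_earth = effective_temperature(1361.0, 0.3)   # ~255 K
print(f"Earth Te: {te_earth:.1f} K")
```

The factor of 4 is the ratio of the sphere’s surface area to its cross-sectional intercept area, which is why Te averages the absorbed flux over the whole globe.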