We hold these truths to be self-evident, that all models are created equal, that they are endowed by their Creators with certain unalienable Rights, that among these are a DOI, Runability and Inclusion in the CMIP ensemble mean.

Well, not quite. But it is Independence Day in the US, and coincidentally there is a new discussion paper (Abramowitz et al) (direct link) on model independence just posted at Earth System Dynamics.

What does anyone mean by model independence? In the international coordinated efforts to assess climate model skill (such as the Coupled Model Intercomparison Project), multiple groups from around the world submit their model results from specified experiments to a joint archive. The basic idea is that if different models from different groups agree on a result, then that result is likely to be robust based on the (shared) fundamental understanding of the climate system despite the structural uncertainty in modeling the climate. But there are two very obvious ways in which this ideal is not met in practice.

First, if the models are actually the same, then it’s totally unsurprising that a result might be common between them. One of the two models would be redundant and add nothing to our knowledge of structural uncertainties.

Second, the models might well be totally independent in formulation, history and usage, but the two models share a common, but fallacious, assumption about the real world. Then a common result might reflect that shared error, and not reflect anything about the real world at all.

These two issues are also closely tied to the problem of model selection. Given an ensemble of models, that have varied levels of skill across any number of metrics, is there a subset or weighting of models that could be expected to give the most skillful predictions? And if so, how would you demonstrate that?

These problems have been considered (within the climate realm) since the beginnings of the “MIP” process in the 1990s, but they are (perhaps surprisingly) very tough to deal with.

Ensemble Skill

One of the most interesting things about the MIP ensembles is that the mean of all the models generally has higher skill than any individual model. This is illustrated in the graphic from Reichler and Kim (2008). Each dot is a model, with the ensemble mean in black, and an RMS score (across a range of metrics) increasing left to right, so that the most skillful models or means are those furthest to the left.

But as Reto Knutti and colleagues have shown, the skill of the ensemble mean doesn’t keep increasing as you add more models. After you’ve averaged 10 or 15 models, the skill no longer improves. This is not what you would expect if every model result was an unbiased, independent estimate of the true climate. But since the models are neither unbiased nor independent, the fact that there is any increase in skill after averaging is more surprising!
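This saturation is easy to see in a toy calculation. The sketch below (Python, with entirely made-up numbers; it is not the Knutti et al. analysis) gives every model an error made of a shared bias plus independent noise: the independent part averages away as roughly 1/√n, but the shared bias never does, so the ensemble-mean skill flattens out.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 2000       # synthetic "truth" cases
shared_bias = 0.5     # error component common to all models (hypothetical)
indep_sd = 1.0        # independent error per model (hypothetical)

for n in (1, 5, 10, 20, 40):
    # each model's error = shared bias + its own independent noise
    errs = shared_bias + indep_sd * rng.standard_normal((n_trials, n))
    rmse_mean = np.sqrt(np.mean(errs.mean(axis=1) ** 2))
    print(f"{n:2d} models: RMSE of ensemble mean = {rmse_mean:.2f}")
# The output flattens towards the shared bias (0.5) after roughly 10-15 models.
```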

(See also Reto Knutti’s talk, “Mysterious Models and Enigmatic Ensembles”, on YouTube.)

One Model, One Vote

The default approach to the ensemble (used almost uniformly in the IPCC reports, for instance) is the notion of “model democracy”: each model is weighted equally. While no one thinks this is optimal, no one has really been able to articulate a robust rationale for a general method that’s better. Obviously, if two models are basically the same but have different names (which happened in CMIP5), such an ensemble would be wrongly (though only slightly) biased. But how different would two models need to be to be worthy of inclusion? What about models from a single modeling group that are just ‘variations on a theme’? They might provide a good test of a specific sensitivity, but would they be different ‘enough’ to warrant inclusion in the bigger ensemble?

Model selection has however been applied in hundreds of papers based on the CMIP5/CMIP3 ensemble. Generally speaking, authors have selected a metric that they feel is important for their topic, picked an arbitrary threshold for sufficient skill and produced a constrained projection based on a subset or weighted mean of the models. Almost invariably though, the constrained projection is very similar to the projection from the full ensemble. The key missing element is that people don’t often check to see whether the skill metric that is being used has any relationship to the quantity being predicted. If it is unrelated, then the sub-selection of models will very likely span the same range as the full ensemble.
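The point about unrelated skill metrics can be illustrated with a hedged toy example (synthetic numbers, not any published analysis): select the ten ‘best’ of thirty models using a metric that has no relationship to the projected quantity, and the constrained projection looks just like the full ensemble.

```python
import numpy as np

rng = np.random.default_rng(1)
projection = rng.normal(3.0, 0.8, 30)     # e.g. warming by 2100 per model (made up)
skill_metric = rng.normal(0.0, 1.0, 30)   # a metric uncorrelated with the projection

best = np.argsort(np.abs(skill_metric))[:10]   # keep the 10 "most skillful" models
print(f"full ensemble:   {projection.mean():.2f} ± {projection.std():.2f}")
print(f"selected subset: {projection[best].mean():.2f} ± {projection[best].std():.2f}")
# Unless skill_metric actually correlates with the projection, the two lines
# come out nearly identical.
```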

The one case where model selection was used in AR5 was for the Arctic sea ice projections (based on Massonnet et al, 2012) where it is relatively easily demonstrated that the trends in sea ice are a function of how much sea ice you start with. This clarity has been surprisingly difficult to replicate in other studies though.

So what should we do? This topic was the subject of a workshop last year in Boulder, and the new ESD paper is a partial reflection of that discussion. There is a video presentation of some of these issues from Gab Abramowitz at the Aspen Global Change Institute that is worth viewing.

Unfortunately, we have not solved this problem, but maybe this paper and associated discussions can raise awareness of the issues.

In the meantime, a joint declaration of some sort is probably a little optimistic…

We, therefore, the Representatives of the united Modelling Groups of the World, in AGU Congress, Assembled, appealing to the Supreme Judge of the model ensemble for the rectitude of our intentions, do, in the Name, and by Authority of the good People of these modeling Centers, solemnly publish and declare, That these disparate Models are, and of Right ought to be Free and Independent Codes, that they are Absolved from all Allegiance to NCAR, GFDL and Arakawa, and that all algorithmic connection between them and the Met Office of Great Britain, is and ought to be totally dissolved; and that as Free and Independent Models, they have full Power to run Simulations, conclude Papers, contract Intercomparison Projects, establish Shared Protocols, and to do all other Acts and Things which Independent Models may of right do. — And for the support of this Declaration, with a firm reliance on the protection of Divine PCMDI, we mutually pledge to each other our Working Lives, our Git Repositories, and our sacred H-Index.

References
  1. T. Reichler, and J. Kim, "How Well Do Coupled Models Simulate Today's Climate?", Bulletin of the American Meteorological Society, vol. 89, pp. 303-312, 2008. http://dx.doi.org/10.1175/BAMS-89-3-303
  2. F. Massonnet, T. Fichefet, H. Goosse, C.M. Bitz, G. Philippon-Berthier, M.M. Holland, and P. Barriat, "Constraining projections of summer Arctic sea ice", The Cryosphere, vol. 6, pp. 1383-1394, 2012. http://dx.doi.org/10.5194/tc-6-1383-2012

This month’s open thread for climate science related topics. The climate policy open thread is here.


Open thread for climate policy and responses.


“The greenhouse effect is here.”
– Jim Hansen, 23rd June 1988, Senate Testimony

The first transient climate projections using GCMs are 30 years old this year, and they have stood up remarkably well.

We’ve looked at the skill in the Hansen et al (1988) (pdf) simulations before (back in 2008), and we said at the time that the simulations were skillful and that differences from observations would be clearer with a decade or two’s more data. Well, another decade has passed!



How should we go about assessing past projections? There have been updates to historical data (what we think really happened to concentrations, emissions etc.), none of the future scenarios (A, B, and C) were (of course) an exact match to what happened, and we now understand (and simulate) more of the complex drivers of change which were not included originally.

The easiest assessment is the crudest. What were the temperature trends predicted and what were the trends observed? The simulations were run in 1984 or so, and that seems a reasonable beginning date for a trend calculation through to the last full year available, 2017. The modeled changes were as follows:

  • Scenario A: 0.33±0.03ºC/decade (95% CI)
  • Scenario B: 0.28±0.03ºC/decade (95% CI)
  • Scenario C: 0.16±0.03ºC/decade (95% CI)

The observed changes 1984-2017 are 0.19±0.03ºC/decade (GISTEMP), or 0.21±0.03ºC/decade (Cowtan and Way), lying between Scenario B and C, and notably smaller than Scenario A. Compared to 10 years ago, the uncertainties on the trends have halved, and so the different scenarios are more clearly distinguished. By this measure it is clear that the scenarios bracketed the reality (as they were designed to), but did not match it exactly. Can we say more by looking at the details of what was in the scenarios more specifically? Yes, we can.
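For anyone wanting to redo this crude assessment, a minimal sketch follows. The observational series is a placeholder to be filled with real annual anomalies (e.g. GISTEMP), and the confidence interval here ignores autocorrelation, so it is only indicative.

```python
import numpy as np
from scipy import stats

years = np.arange(1984, 2018)
anom = np.zeros(len(years))        # placeholder: annual global mean anomalies (°C)

fit = stats.linregress(years, anom)
trend = 10 * fit.slope             # °C per decade
ci95 = 10 * 1.96 * fit.stderr      # naive 95% CI (no autocorrelation correction)
print(f"trend = {trend:.2f} ± {ci95:.2f} °C/decade")
```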

This is what the inputs into the climate model were (CO2, N2O, CH4 and CFC amounts) compared to observations (through to 2014):

Estimates of CO2 growth in Scenarios A and B were quite good, but estimates of N2O and CH4 overshot what happened (estimates of global CH4 have been revised down since the 1980s). CFCs were similarly overestimated (except in scenario C which was surprisingly prescient!). Note that when scenarios were designed and started (in 1983), the Montreal Protocol had yet to be signed, and so anticipated growth in CFCs in Scenarios A and B was pessimistic. The additional CFC changes in Scenario A compared to Scenario B were intended to produce a maximum estimate of what other forcings (ozone pollution, other CFCs etc.) might have done.

But the model sees the net effect of all the trace gases (and whatever other effects are included, which in this case is mainly volcanoes). So what was the net forcing since 1984 in each scenario?

There are multiple ways of defining the forcings, and the exact value in any specific model is a function of the radiative transfer code and background climatology. Additionally, knowing exactly what the forcings in the real world have been is hard to do precisely. Nonetheless, these subtleties are small compared to the signal, and it’s clear that the forcings in Scenario A and B will have overshot the real world.



If we compare the H88 forcings since 1984 to an estimate of the total anthropogenic forcings calculated for the CMIP5 experiments (1984 through to 2012), the main conclusion is very clear – the forcing in Scenario A is almost a factor of two larger (and growing) than our best estimate of what happened, and Scenario B overshoots by about 20-30%. By contrast, Scenario C undershoots by about 40% (and gets worse over time). The small differences arising from the exact forcing definition, whether forcing efficacy is taken into account, independent estimates of aerosol effects, etc., do not change this picture. We can also ignore the natural forcings here (mostly volcanic), which have only a small effect over the longer term (Scenarios B and C had an “El Chichon”-like volcano go off in 1995).

The amount that scenario B overshoots the CMIP5 forcing is almost equal to the over-estimate of the CFC trends. Without that, it would have been spot on (the over-estimates of CH4 and N2O are balanced by missing anthropogenic forcings).

The model predictions were skillful

Predictive skill is defined as whether the model projection is better than you would have got assuming some reasonable null hypothesis. With respect to these projections, this was looked at by Hargreaves (2010) and can be updated here. The appropriate null hypothesis (which at the time would have been the most skillful over the historical record) would be a prediction of persistence of the 20-year mean, i.e. the 1964-1983 mean anomaly. Whether you look at the trends or the annual mean data, this gives positive skill for all the model projections regardless of the observational dataset used, i.e. all scenarios gave better predictions than a forecast based on persistence.
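A minimal sketch of what such a skill score looks like (illustrative synthetic series, not the exact Hargreaves (2010) calculation): the score is positive whenever the projection’s mean squared error beats that of the persistence forecast.

```python
import numpy as np

years = np.arange(1984, 2018)
obs = 0.20 * (years - 1984) / 10     # placeholder: ~0.2 °C/decade observed warming
proj = 0.28 * (years - 1984) / 10    # placeholder: a Scenario-B-like projection
persistence = 0.0                    # the 1964-1983 mean anomaly (same baseline)

mse_proj = np.mean((proj - obs) ** 2)
mse_null = np.mean((persistence - obs) ** 2)
skill = 1.0 - mse_proj / mse_null    # > 0 means better than persistence
print(f"skill relative to persistence: {skill:.2f}")
```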



What do these projections tell us about the real world?

Can we make an estimate of what the model would have done with the correct forcing? Yes. The trends don’t scale perfectly with the forcing, but a reduction of 20-30% in the trends of Scenario B to match the estimated forcings from the real world would give a trend of 0.20-0.22ºC/decade – remarkably close to the observations. One might even ask how the sensitivity of the model would need to be changed to get the observed trend. The equilibrium climate sensitivity of the Hansen model was 4.2ºC for doubled CO2, and so you could infer that a model with a sensitivity of, say, 3.6ºC would likely have had a better match (assuming that the transient climate response scales with the equilibrium value, which isn’t quite valid).
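The first part of this is just arithmetic; a back-of-envelope version (illustrative only, assuming trends scale linearly with forcing):

```python
scenario_b_trend = 0.28              # °C/decade, 1984-2017
forcing_overshoot = (0.20, 0.30)     # Scenario B forcing excess over the CMIP5 estimate
adjusted = [round(scenario_b_trend * (1 - f), 2) for f in forcing_overshoot]
print(adjusted)                      # [0.22, 0.2] °C/decade, close to the observed 0.19-0.21
```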

Hansen was correct to claim that greenhouse warming had been detected

In June 1988, at the Senate hearing linked above, Hansen stated clearly that he was 99% sure that we were already seeing the effects of anthropogenic global warming. This is a statement about the detection of climate change – had the predicted effect ‘come out of the noise’ of internal variability and other factors? And with what confidence?

In retrospect, we can examine this issue more carefully. By estimating the response we would see in the global means from just natural forcings, and including a measure of internal variability, we should be able to see when the global warming signal emerged.



The shading in the figure (showing results from the CMIP5 GISS ModelE2), is a 95% confidence interval around the “all natural forcings” simulations. From this it’s easy to see that temperatures in 1988 (and indeed, since about 1978) fall easily outside the uncertainty bands. 99% confidence is associated with data more than ~2.6 standard deviations outside of the expected range, and even if you think that the internal variability is underestimated in this figure (double it to be conservative), the temperatures in any year past 1985 are more than 3 s.d. above the “natural” expectation. That is surely enough clarity to retrospectively support Hansen’s claim.
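As a rough sketch of the detection arithmetic (the anomaly and standard deviation below are placeholders, not values read off the figure):

```python
from scipy import stats

anomaly_1988 = 0.40   # observed anomaly minus the natural-only mean (°C) - placeholder
natural_sd = 0.12     # stdev of the natural-only simulations (°C) - placeholder

z = anomaly_1988 / natural_sd
print(f"z = {z:.1f} sigma, two-sided p = {2 * stats.norm.sf(z):.4f}")
# ~2.6 sigma corresponds to 99% (two-sided) confidence; doubling natural_sd
# halves z, which is the "conservative" check described above.
```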

At the time however, the claim was more controversial; modeling was in its early stages, and estimates of internal variability and the relevant forcings were poorer, so Hansen was going out on a bit of a limb based on his understanding and insight into the problem. But he was right.

Misrepresentations and lies

Over the years, many people have misrepresented what was predicted and what could have been expected. Most (in)famously, Pat Michaels testified in Congress about climate changes and claimed that the predictions were wrong by 300% (!) – but his conclusion was drawn from a doctored graph (Cato Institute version) of the predictions where he erased the lower two scenarios:

Undoubtedly there will be claims this week that Scenario A was the most accurate projection of the forcings [Narrator: It was not]. Or they will show only the CO2 projection (and ignore the other factors). Similarly, someone will claim that the projections have been “falsified” because the temperature trends in Scenario B are statistically distinguishable from those in the real world. But this sleight of hand is trying to conflate a very specific set of hypotheses (the forcings combined with the model used) which no-one expects (or expected) to perfectly match reality, with the much more robust and valid prediction that the trajectory of greenhouse gases would lead to substantive warming by now – as indeed it has.

References
  1. J. Hansen, I. Fung, A. Lacis, D. Rind, S. Lebedeff, R. Ruedy, G. Russell, and P. Stone, "Global climate changes as forecast by Goddard Institute for Space Studies three-dimensional model", Journal of Geophysical Research, vol. 93, pp. 9341, 1988. http://dx.doi.org/10.1029/JD093iD08p09341
  2. J.C. Hargreaves, "Skill and uncertainty in climate models", Wiley Interdisciplinary Reviews: Climate Change, vol. 1, pp. 556-564, 2010. http://dx.doi.org/10.1002/wcc.58

Guest post by Veronika Huber

Climate skeptics sometimes like to claim that although global warming will lead to more deaths from heat, it will overall save lives due to fewer deaths from cold. But is this true? Epidemiological studies suggest the opposite.

Mortality statistics generally show a distinct seasonality. More people die in the colder winter months than in the warmer summer months. In European countries, for example, the difference between the average number of deaths in winter (December – March) and in the remaining months of the year is 10% to 30%. Only a proportion of these winter excess deaths are directly related to low ambient temperatures (rather than other seasonal factors). Yet, it is reasonable to suspect that fewer people will die from cold as winters are getting milder with climate change. On the other hand, excess mortality from heat may also be high, with, for example, up to 70,000 additional deaths attributed to the 2003 summer heat wave in Europe. So, will the expected reduction in cold-related mortality be large enough to compensate for the equally anticipated increase in heat-related mortality under climate change?

Due to the record heat wave in the summer of 2003, the morgue in Paris was overcrowded, and the city had to set up refrigerated tents on its outskirts to accommodate the many coffins of victims. The city also set up a hotline where people could ask where to find missing victims of the heat wave. Photo: Wikipedia, Sebjarod, CC BY-SA 3.0.

Some earlier studies indeed concluded that there would be significant net reductions in temperature-related mortality with global warming. Interestingly, the estimated mortality benefits from one of these studies were later integrated into major integrated assessment models (FUND and ENVISAGE), used inter alia to estimate the highly policy-relevant social cost of carbon. They were also taken up by Björn Lomborg and other authors, who have repeatedly accused mainstream climate science of being overly alarmist. I and others have pointed out the errors inherent in these studies, which bias the results towards finding strong net benefits of climate change. In this post, I would like to (i) present some background knowledge on the relationship between ambient temperature and mortality, and (ii) discuss the results of a recent study published in The Lancet Planetary Health (which I co-authored) in light of potential mortality benefits from climate change. This study, for the first time, comprehensively presented future projections of cold- and heat-related mortality for more than 400 cities in 23 countries under different scenarios of global warming.

Mortality risk increases as temperature moves out of an optimal range

Typically, epidemiological studies, based on daily time series, find a U- or J-shaped relationship between mean daily temperature and the relative risk of death. Outside of an optimal temperature range, the mortality risk increases, not only in temperate latitudes but also in the tropics and subtropics (Fig. 1).

Fig. 1 Exposure-response associations for daily mean temperature and the relative mortality risk (RR) in four selected cities. The lower part of each graph shows the local temperature distribution. The solid grey lines mark the ‘optimal temperature’, where the lowest mortality risk is observed. The depicted relationships take into account lagged effects over a period of up to 21 days. Source: Gasparrini et al. 2015, The Lancet.

Furthermore, the optimal temperature tends to be higher the warmer the local climate, providing evidence that humans are at least somewhat adapted to the prevailing climatic conditions. Thus, although ‘cold’ and ‘warm’ may correspond to different absolute temperatures across different locations, the straightforward conclusion from the exposure-response curves shown in Fig. 1 is that both low and high ambient temperatures represent a risk of premature death. But there are a few more aspects to consider.
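To illustrate how such a U-shaped curve and its ‘optimal temperature’ can be estimated, here is a heavily simplified sketch: a Poisson regression of daily death counts on a quadratic in daily mean temperature, run on synthetic data. Real analyses (e.g. Gasparrini et al.) use distributed lag non-linear models and control for season, trends and other confounders; nothing below is their code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
tmean = rng.normal(15, 8, 3650)                 # synthetic daily mean temperatures (°C)
log_rr = 0.0015 * (tmean - 18) ** 2             # synthetic U-shape, minimum at 18 °C
deaths = rng.poisson(50 * np.exp(log_rr))       # synthetic daily death counts

X = sm.add_constant(np.column_stack([tmean, tmean ** 2]))
fit = sm.GLM(deaths, X, family=sm.families.Poisson()).fit()
b1, b2 = fit.params[1], fit.params[2]
print(f"estimated optimal temperature ≈ {-b1 / (2 * b2):.1f} °C")  # recovers ~18 °C
```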

Causal pathways between non-optimal temperature and death

Only a negligible proportion of the deaths typically considered in this type of study are due to actual hypo- or hyperthermia. Most epidemiological studies on the subject consider counts of deaths from all causes or from all non-external causes (e.g., excluding accidents). The majority of deaths due to cold and heat are related to existing cardiovascular and respiratory diseases, which reach their acute stage under the prevailing weather conditions. An important causal mechanism seems to be the temperature-induced change in blood composition and blood viscosity. With regard to the cold effect, a weakening of the defense mechanisms in the airways, and thus a higher susceptibility to infection, has also been suggested.

Is the cold effect overestimated?

As in any correlative analysis, there is always the risk of confounding, especially given the complex, indirect mechanisms underlying the relationship between non-optimal outside temperature and increased risk of death. Regarding the topic discussed here, the crucial question is whether the applied statistical models account sufficiently well for seasonal effects independent of temperature. For example, it is suspected that the lower amount of UV light in winter has a negative effect on human vitamin D production, favoring infectious diseases (including flu epidemics). There are also some studies that point to the important role of specific humidity, which, if neglected, may confound estimates of the effect of temperature on mortality rates.

Interestingly enough, there is still an ongoing scientific debate regarding this point. Specifically, it has been suggested that the cold effect on mortality risk is often overestimated because of insufficient control for season in the applied models. On the other hand, the disagreement on the magnitude of the cold effect might simply result from using different approaches for modeling the lagged association between temperature and mortality. In fact, the lag structures of the heat and cold effects are distinct. While hot days are reflected in the mortality statistics relatively immediately on the same and 1-2 consecutive days, the effect of cold is spread over a longer period of up to 2-3 weeks. Simpler methods (e.g., moving averages) compared to more sophisticated approaches for representing lagged effects (e.g., distributed lag models) have been shown to misrepresent the long-lagged association between cold and mortality risk.
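For readers unfamiliar with the terminology: a distributed-lag design includes temperature at each lag (here 0 to 21 days) as a separate regressor, so the delayed cold effect can unfold over weeks rather than being squeezed into a single moving average. The helper below is a generic sketch, not code from any of the studies cited.

```python
import numpy as np

def lag_matrix(x, max_lag=21):
    """Columns are x_t, x_{t-1}, ..., x_{t-max_lag}, for t = max_lag .. len(x)-1."""
    n = len(x) - max_lag
    return np.column_stack([x[max_lag - k: max_lag - k + n] for k in range(max_lag + 1)])

# Usage sketch: regress deaths[max_lag:] on lag_matrix(tmean); the overall cold
# effect is then the sum of the lag coefficients, which a short moving average
# of temperature would largely miss.
```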

Mortality projections

But what about the impact of global warming on temperature-related mortality? Let’s take a look at the results of the study published in The Lancet Planetary Health, which links city-specific exposure-response functions (as shown in Fig. 1) with local temperature projections under various climate change scenarios.

Fig. 2 Relative change of cold- and heat-related excess mortality by region. Shown are relative changes per decade compared to 2010-2019 for three different climate change scenarios (RCP 2.6, RCP 4.5, RCP 8.5). The 95% confidence intervals shown for the net change take into account uncertainties in the underlying climate projections and in the exposure-response associations. It should be noted that results for single cities (> 400 cities in 23 countries) are here grouped by region. Source: Gasparrini et al. 2017. The Lancet Planetary Health

In all scenarios, we find a relative decrease in cold-related mortality and a relative increase in heat-related mortality as global mean temperature rises (Fig. 2). Yet, in most regions the net effect of these opposing trends is an increase in excess mortality, especially under unabated global warming (RCP 8.5). This is what would be expected from the exposure-response associations (Fig. 1), which generally show a much steeper increase in risk from heat than from cold. A relative decline in net excess mortality (with considerable uncertainty) is only observed for Northern Europe, East Asia, and Australia (and Central America for the more moderate scenarios RCP 2.6, and RCP 4.5).

So, contrary to the propositions of those who like to stress the potential benefits of global warming, a net reduction in mortality is the exception rather than the rule, when comparing estimates around the world. And one must not forget that there are important caveats associated with these results, which caution against jumping to firm conclusions.

Adaptation and demographic change

As mentioned already, we know that people’s vulnerability to non-optimal outdoor temperatures is highly variable and that people are adapted to their local climate. However, it remains poorly understood how fast this adaptation takes place and what factors (e.g., physiology, air conditioning, health care, urban infrastructure) are the main determinants. Therefore, the results shown (Fig. 2) rely on the counterfactual assumption that the exposure-response associations remain unchanged in the future, i.e., that no adaptation takes place. Furthermore, since older people are more vulnerable to non-optimal temperatures than younger people, the true evolution of temperature-related mortality will also be heavily dependent on demographic trends at each location, which were also neglected in this study.

Bottom line

I would like to conclude with the following thought: Let’s assume – albeit extremely unlikely – that the study discussed here does correctly predict the actual future changes in temperature-related excess mortality due to climate change, despite the mentioned caveats. Mostly rich countries in temperate latitudes would then indeed experience a decline in overall temperature-related mortality. On the other hand, the world would witness a dramatic increase in heat-related mortality rates in the most populous and often poorest parts of the globe. The latter alone would, in my view, be a sufficient argument for ambitious mitigation – independently of the innumerable, well-researched climate risks beyond the health sector.

Addendum: Short-term displacement or significant life shortening?

To judge the societal importance of temperature-related mortality, a central question is whether the considered deaths are merely brought forward by a short amount of time or whether they correspond to a considerable life-shortening. If, for example, mostly elderly and sick people with low individual life expectancies were affected by non-optimal temperatures, the observed mortality risks would translate into a comparatively low number of years of life lost. Importantly, short-term displacements of deaths (often termed ‘harvesting’ in the literature) are accounted for in the models presented here, as long as they occur within the lag period considered. Beyond these short-term effects, recent research investigating temperature-mortality associations on an annual scale indicates that the mortality risks found in daily time-series analyses are in fact associated with a significant life-shortening, exceeding at least one year. Only comparatively few studies so far have explicitly considered relationships between temperature and years of life lost, taking statistical life expectancies according to sex and age into account. One such study found that, for Brisbane (Australia), the years of life lost – unlike the mortality rates – were not markedly seasonal, implying that in winter the mortality risks for the elderly were especially elevated. Accordingly, low temperatures in this study were associated with fewer years of life lost than high temperatures – but interestingly, only in men. Understanding how exactly the effects of cold and heat on mortality differ between men and women, and across different age groups, definitely merits further investigation.


This month’s open thread. We know people like to go off on tangents, but last month’s thread went too far. There aren’t many places to discuss climate science topics intelligently, so please stay focused on those.


By Stefan Rahmstorf, Kerry Emanuel, Mike Mann and Jim Kossin

Friday marks the official start of the Atlantic hurricane season, which will be watched with interest after last year’s season broke a number of records and, for example, devastated Puerto Rico’s power grid, causing serious problems that persist today. One of us (Mike) is part of a team that has issued a seasonal forecast (see Kozar et al 2012) calling for a roughly average season in terms of overall activity (10 +/- 3 named storms), with tropical Atlantic warmth constituting a favorable factor, but predicted El Niño conditions an unfavorable factor. Meanwhile, the first named storm, Alberto, has gone ahead without waiting for the official start of the season.

In the long term, whether we will see fewer or more tropical cyclones in the Atlantic or in other basins as a consequence of anthropogenic climate change is still much-debated. There is a mounting consensus, however, that we will see more intense hurricanes. So let us revisit the question of whether global warming is leading to more intense tropical storms. Let’s take a step back and look at this issue globally, not just for the Atlantic.

Tropical storms are powered by evaporation of seawater.  More than 30 years ago, one of us (Emanuel) developed a quantity called potential intensity that sets an upper bound on hurricane wind speeds. In general, as the climate warms, this speed limit goes up, permitting stronger storms than were possible in the past.

Of course there could be other changes in the climate system that counteract this – e.g. an increase in wind shear that tears the hurricanes apart, changes in the humidity of the atmosphere, or increases in natural or anthropogenic aerosols. This question has been investigated for many years with the help of model simulations. The results of numerous such studies can be summarized briefly as follows: due to global warming we do not necessarily expect more tropical storms overall, but an increasing number of particularly strong storms in categories 4 and 5, especially storms of previously unobserved strength. This assessment has been widely agreed on at least since the 4th IPCC Report of 2007 and reaffirmed several times since then. A review article in the leading journal Science (Sobel et al. 2016) concluded:

We thus expect tropical cyclone intensities to increase with warming, both on average and at the high end of the scale, so that the strongest future storms will exceed the strength of any in the past.

Models also suggest that atmospheric aerosol pollution may have weakened tropical storms and masked the effect of global warming for decades, making it more difficult to detect trends in measurement data.

What do the data show?

Nevertheless, observational data support the expectation from models that the strongest storms are getting stronger. We focus here on the period from 1979, because this is the period covered by geostationary satellite data (thus no cyclones went unobserved) and also the period over which three quarters of global warming has occurred. These data show an increase in the strongest tropical storms in most ocean basins (Kossin et al. 2013). However, these data are not homogeneous but are estimated from a variety of satellite, airborne and ground-based instruments whose capabilities have improved over time. The homogenization of these data by Kossin et al. (2013), which is generally recognized as very careful, reduces the trends but does not eliminate them. The strongest increase is found in the North Atlantic (significant at more than the 99% level), where the trend has likely been boosted by the decrease in sulfate aerosols over this period.

One consequence of this increase is that in most major tropical cyclone regions, the storms with the highest wind speeds on record have been observed in recent years (see Fig. 1 based on reanalysis by Velden et al. 2017). The strongest globally was Patricia (2015), which topped the previous record holder Haiyan (2013).

Fig. 1 The strongest storms for the major storm regions Western and Eastern North Pacific, North Indian, South Indian and South Pacific, Caribbean/Gulf of Mexico and open North Atlantic. Of these seven regions, five had the strongest storm on record in the past five years, which would be extremely unlikely just by chance. Irma was added by personal communication from Chris Velden, and a tie of two storms with equally strong winds in the South Indian was resolved by selecting the storm with the lower central pressure (Fantala). (Graph by Stefan Rahmstorf, Creative Commons License CC BY-SA 3.0.)

Other recent records are worth mentioning. Sandy (2012) was the largest hurricane ever observed in the Atlantic. Harvey (2017) dumped more rain than any hurricane in the United States. Ophelia (2017) formed further northeast than any other Category 3 Atlantic hurricane – fortunately it turned north before striking Portugal, against initial predictions, and then weakened over cool waters before it hit Ireland. September 2017 broke the record for cumulative hurricane energy in the Atlantic. Irma (2017) sustained wind speeds of 300 km/h longer than any storm on record (for 37 hours – the previous record was 24 hours, by Haiyan in 2013). Cyclone Pam’s record from March 2015 was already beaten by Winston in February 2016, according to the Southwest Pacific Enhanced Archive for Tropical Cyclones (though not in Velden’s data analysis). Donna in 2017 was the strongest May cyclone ever observed in the Southern Hemisphere. All coincidence?

One of us (Emanuel) performed an analysis of linear trends in historical tropical cyclone data from 1980 to 2016. These include some observations by aircraft, ships, buoys, and stations on land in addition to the satellite data, but these have not been treated for inhomogeneities.

Fig. 2 Percentage increase 1980 to 2016 (as a linear trend) in the number of tropical storms worldwide depending on their strength. Only 95% significant trends are shown. The strongest storms are also increasing the most. Red colors show the hurricane category on the Saffir-Simpson scale. Graph by Kerry Emanuel, MIT.

A significant global increase (95% significance level) can be found in all storms with maximum wind speeds from 175 km/h. Storms of 200 km/h and more have doubled in number, and those of 250 km/h and more have tripled. Although some of the trend may be owing to improved observation techniques, this provides some evidence that a global increase in the most intense tropical storms due to global warming is not just predicted by models but already happening.
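For orientation, the kind of count-and-trend calculation behind such numbers can be sketched as follows (illustrative Python, not Emanuel’s actual code; `storm_years` and `max_winds_kmh` are assumed to hold one entry per storm from a best-track dataset):

```python
import numpy as np
from scipy import stats

def percent_change(storm_years, max_winds_kmh, threshold, y0=1980, y1=2016):
    """% change in annual counts of storms exceeding `threshold`, from a linear fit."""
    yrs = np.arange(y0, y1 + 1)
    counts = np.array([(max_winds_kmh[storm_years == y] >= threshold).sum() for y in yrs])
    fit = stats.linregress(yrs, counts)
    start, end = fit.intercept + fit.slope * y0, fit.intercept + fit.slope * y1
    return 100.0 * (end - start) / start

# e.g. percent_change(storm_years, max_winds_kmh, 200) for the 200 km/h class quoted above
```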

However, global warming does not only increase the wind speed or frequency of strong storms (which are actually two ways of looking at the same phenomenon, as frequency depends on wind speed). The average location where storms reach their peak intensity is also slowly migrating poleward (Kossin et al. 2014), and the area where storms occur is expanding (Benestad 2009, Lucas et al. 2014), which changes patterns of storm risk and increases risk in regions that are historically less threatened by these storms (Kossin et al. 2016).

Most damage caused by tropical storms is not directly caused by the wind, but by water: rain from above, storm surge from the sea. Harvey brought the largest amounts of rain in US history – the probability of such a rain event has increased severalfold over recent decades due to global warming (Emanuel 2017; Risser and Wehner, 2017; van Oldenborgh et al., 2017). Not least due to global warming, sea levels are rising at an accelerating rate and storm surges are becoming more dangerous. A recent study (Garner et al. 2017), for example, shows that the return period of a certain storm surge height in New York City will be reduced from 25 years today to 5 years within the next three decades. Therefore, storm surge barriers are the subject of intensive discussion in New York (Rahmstorf 2017).

While there may not yet be a “smoking gun” – a single piece of evidence that removes all doubt – the weight of the evidence suggests that the thirty-year-old prediction of more intense and wetter tropical cyclones is coming to pass. This is a risk that we can no longer afford to ignore.


Kerry Emanuel
is professor of atmospheric science at MIT


Jim Kossin
is a NOAA climate scientist specializing in tropical cyclones

(And Mike and Stefan of course are co-founders and regular authors of Realclimate)

References

Benestad RE (2009) On tropical cyclone frequency and the warm pool area. Natural Hazards and Earth System Sciences 9(2):635-645.

Emanuel K (2017) Assessing the present and future probability of Hurricane Harvey’s rainfall. Proc Natl Acad Sci U S A.

Garner A, et al. (2017) The Impact of Climate Change on New York City’s Coastal Flood Hazard: Increasing Flood Heights from the Pre-Industrial to 2300 CE. Proc Natl Acad Sci U S A.

Kossin JP, Olander TL, & Knapp KR (2013) Trend Analysis with a New Global Record of Tropical Cyclone Intensity. J. Clim. 26(24):9960-9976.

Kossin, J. P., K. A. Emanuel, and G. A. Vecchi, 2014: The poleward migration of the location of tropical cyclone maximum intensity. Nature, 509, 349-352.

Kossin, J. P., K. A. Emanuel, and S. J. Camargo, 2016: Past and projected changes in western North Pacific tropical cyclone exposure. J. Climate, 29, 5725-5739.

Kozar, M.E., Mann, M.E., Camargo, S.J., Kossin, J.P., Evans, J.L. (2012)  Stratified statistical models of North Atlantic basin-wide and regional tropical cyclone counts, J. Geophys. Res., 117, D18103, doi:10.1029/2011JD017170.

Lucas, C., Timbal, B. & Nguyen, H. (2014) The expanding tropics: a critical assessment of the observational and modeling studies. WIREs Clim. Change, 5, 89–112.

Rahmstorf S (2017) Rising hazard of storm-surge flooding. Proc Natl Acad Sci U S A 114(45):11806-11808

Risser, M. D., & Wehner, M. F. (2017): Attributable human-induced changes in the likelihood and magnitude of the observed extreme precipitation during Hurricane Harvey. Geophy. Res. Lett., 44, 12,457–12,464.

Sobel A, et al. (2016) Human influence on tropical cyclone intensity. Science 353:242-246.

van Oldenborgh, G. J., and Coauthors, 2017: Attribution of extreme rainfall from Hurricane Harvey. Environ. Res. Lett., 12, doi: 10.1088/1748-9326/aaa343.

Velden C, Olander T, Herndon D, & Kossin JP (2017) Reprocessing the Most Intense Historical Tropical Cyclones in the Satellite Era Using the Advanced Dvorak Technique. Mon. Weather Rev. 145(3):971-983.


A few weeks ago, we argued in a paper in Nature that the Atlantic overturning circulation (sometimes popularly dubbed the Gulf Stream System) has weakened significantly since the late 19th Century, with most of the decline happening since the mid-20th Century. We have since received much praise for our study from colleagues around the world (thanks for that). But there were also some questions and criticisms in the media, so I’d like to provide a forum here for discussing these questions and hope that others (particularly those with a different view) will weigh in in the comments section below.

Exhibit #1, and the prime observational finding, is a long-term cooling trend in the subpolar Atlantic – the only region in the world which has cooled while the rest of the planet has warmed. This ‘cold blob’ or ‘warming hole’ has been shown in IPCC reports since the 3rd assessment of 2001; it is shown in Fig. 1 in a version from the last (5th) IPCC report. In fact it is Figure 1 of the Summary for Policy Makers there – you can’t get more prominent than that.

Fig. 1 Observed temperature trends since the beginning of the 20th Century (Figure SPM1 of the last IPCC report).

I think there is a consensus that this is a real phenomenon and can’t be explained away as a data problem. According to NOAA, 2015 was the coldest year in this region since record-keeping began in 1880, while it was the hottest year globally. The key question thus is: what explains this cold blob?

In 2010, my colleagues Dima and Lohmann from Bremen were the first (as far as I know – let me know if you find an earlier source) to suggest, using sea surface temperature (SST) pattern analyses, that the cold blob is a tell-tale sign of a weakening AMOC. They wrote that

“the decreasing trend over the last seven decades is associated to the weakening of the conveyor, possibly in response to increased CO2 concentrations in the atmosphere”

(with ‘conveyor’ they refer to the AMOC). One of several arguments for this was the strong anti-correlation pattern between north and south Atlantic which they found using canonical correlation analysis and which is the well-known see-saw effect of AMOC changes.

I have since become convinced that Dima and Lohmann were right. Let me list my main arguments upfront before discussing them further.

  1. The cold blob is a prediction come true. Climate models have long predicted that such a warming hole would appear in the subpolar Atlantic in response to global warming, due to an AMOC slowdown. This is seen e.g. in the IPCC model projections.
  2. There is no other convincing explanation for the cold blob. There is strong evidence that it is neither driven by internal atmospheric variability (such as the North Atlantic Oscillation, NAO) nor by aerosol forcing.
  3. A range of different data sets and analyses suggest a long-term AMOC slowdown.
  4. Claims that the slowdown is contradicted by current measurements generally turn out to be false. Such claims have presented apples-to-oranges comparisons. To the contrary, what we know from other sources about the AMOC evolution is largely consistent with the AMOC reconstruction we presented in Nature.

Let us look at these four points in turn.

A climate prediction come true

The following graph shows climate projections from the last IPCC report.

Fig. 2 Global warming from the late 20th Century to the late 21st Century (average over 32 models, RCP2.6 scenario) – Figure SPM8a of the IPCC AR5.

The IPCC writes that “hatching indicates regions where the multi-model mean is small compared to natural internal variability (i.e., less than one standard deviation of natural internal variability in 20-year means.)” The subpolar North Atlantic stands out as the only region lacking significant predicted warming even by the late 21st Century. The 4th IPCC report included a similar graph (Fig. TS28).

In our paper we have analysed the ‘historic’ runs of the CMIP5 climate models (i.e. those from preindustrial conditions to the present) and found that the observed ‘cold blob’ in this region is consistent with what the models predicted, with the amount of cooling in the models depending mainly on how much the AMOC declines (see below). In the mean of the 13 models we examined (Fig. 5 of our paper), the downward trend of the AMOC index is -0.33 °C per century; in the observations we found -0.44 °C per century. (Our AMOC index simply consists of the difference between the surface temperatures of the subpolar Atlantic and the global ocean.) The models on average thus predicted three quarters of the decline that the observational data indicate. (In fact most models cluster around the observed decline, but three models with almost zero AMOC decline cause the underestimation in the mean.)
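For concreteness, the index itself is straightforward to compute. Below is a hedged sketch, assuming an annual-mean SST field as an xarray DataArray with dimensions (time, lat, lon), ascending latitudes and longitudes in -180..180; the subpolar box is approximate and not necessarily the exact region used in the paper.

```python
import numpy as np
import xarray as xr

def amoc_index(sst: xr.DataArray) -> xr.DataArray:
    """Subpolar-gyre mean SST minus global-mean SST (area-weighted)."""
    weights = np.cos(np.deg2rad(sst.lat))
    global_mean = sst.weighted(weights).mean(("lat", "lon"))
    box = sst.sel(lat=slice(45, 60), lon=slice(-60, -10))   # rough subpolar-gyre box
    subpolar_mean = box.weighted(np.cos(np.deg2rad(box.lat))).mean(("lat", "lon"))
    return subpolar_mean - global_mean

# A linear fit of the resulting series against the year (e.g. np.polyfit) then gives
# the trend in °C per century quoted above.
```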

Is there an alternative explanation?

If the ocean temperature in any region changes, this can only be due to a change in heat supply or loss. That can either be a change in heat flow via ocean currents or through the sea surface. Thus the subpolar Atlantic can either have cooled because the ocean currents are bringing less heat into this region, or alternatively because more heat is being lost to the atmosphere. So how do we know which of these two it is?

First, we can analyze the heat flux from ocean to atmosphere, which can be calculated with standard formulae from the sea surface temperature and weather data. Halldór Björnsson of the Icelandic weather service has done this and presented the results at the Arctic Circle conference in 2016 (they are not yet published). He showed that the short-term temperature fluctuations from year to year correlate with the heat exchange through the sea surface, but that this does not explain the longer-term development of the ‘cold blob’ over decades. His conclusion slide stated:

Surface heat fluxes did not cause the long term changes and are only implicated in the SST variations in the last two decades. Long term variations are likely to be oceanic transport but not due to local atmospheric forcing.

That’s exactly what one expects. Weather dominates the short-term fluctuations, but the ocean currents dominate the long-term development because of the longer response time scale and “memory” of the ocean.

Nevertheless some have suggested that the main mode of atmospheric variability in the North Atlantic, the North Atlantic Oscillation or NAO, might have caused the “cold blob”. In our paper we present a standard lagged correlation analysis of the NAO with the “cold blob” temperature (in the form of our AMOC index). The result: there is indeed a significant correlation of the NAO with subpolar Atlantic surface temperatures. But on the longer time scales of interest to us (for 20-year smoothed data), changes in sea surface temperature lead the NAO changes by three years. We conclude that changes in sea surface temperatures cause the changes in the NAO and not vice versa. (And we’re certainly not the first to come to this conclusion.)
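A minimal sketch of such a lagged correlation analysis (generic helpers operating on placeholder annual series; not the exact smoothing or significance testing used in the paper):

```python
import numpy as np

def smooth(x, window=20):
    """Running mean; shortens the series by window-1 points."""
    return np.convolve(np.asarray(x, float), np.ones(window) / window, mode="valid")

def lagged_corr(x, y, lag):
    """corr(x_t, y_{t+lag}); a positive lag means x leads y by `lag` years."""
    if lag > 0:
        return np.corrcoef(x[:-lag], y[lag:])[0, 1]
    if lag < 0:
        return np.corrcoef(x[-lag:], y[:lag])[0, 1]
    return np.corrcoef(x, y)[0, 1]

# With amoc_idx and nao as annual series on the same years (placeholders):
# corrs = {lag: lagged_corr(smooth(amoc_idx), smooth(nao), lag) for lag in range(-10, 11)}
# A correlation peaking at a positive lag means the SST-based index leads the NAO.
```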

And a third point: in summer, the effect of heat flow through the sea surface should dominate; in winter, the effect of ocean currents. That is because the well-mixed surface layer of the ocean is thin in summer, so only the uppermost part of the ocean heat transport gets to affect the surface temperature. But the thin surface layer still feels the full brunt of atmospheric changes, and even more strongly than in winter, because the thermal inertia of the thin summer surface layer is small. In our paper we analysed the seasonal cycle of the temperature changes in the subpolar Atlantic. The cooling in the “cold blob” is most pronounced in winter – both in the climate model (where we know it’s due to an AMOC slowdown) and in the observations. That again suggests the ‘cold blob’ is driven from the ocean and not the atmosphere.

There is another well-known mode of Atlantic temperature variability known as AMO, which correlates strongly with our AMOC index. Its established standard explanation in the scientific literature is… variations in the AMOC. (The NAO and AMO connections are discussed in more detail in the Extended Data section of our paper.)

There may be the possibility that some ocean heat transport change other than an AMOC change could be responsible for the ‘cold blob’ in the subpolar Atlantic, and I wouldn’t argue that we understand the ocean current changes in detail. But if you take a ‘big picture’ view, it is a fact that the AMOC is the dominant mechanism of heat transport into the high-latitude Atlantic, and the region that has cooled is exactly the region that cools in climate models when you slow down the AMOC. We have analysed the ensemble of CMIP5 “historic” model simulations for the past climate change from 1870 to 2016. For each of these model runs, we computed the AMOC slowdown over this time as diagnosed by our AMOC index (i.e. based on subpolar ocean surface temperatures) as well as the actual AMOC slowdown (which we know in the models, unlike in the real world.) The two correlate with a correlation coefficient R=0.95. Thus across the different models, differences in the amount of AMOC slowdown nearly completely explain the differences in subpolar Atlantic temperatures. If you doubt that what the temperatures in the Atlantic are telling us is a story of a slowing AMOC, you doubt not only that the high-resolution CM2.6 climate model is correct, but also the entire CMIP5 model ensemble.

A range of different data sets and analyses suggest a long-term AMOC slowdown

A number of different SST data sets and analyses support the idea of the AMOC slowdown. That is not just the existence of the subpolar cooling trend in the instrumental SST data. It is the cross-correlation with the South Atlantic performed by Dima and Lohmann. It is the fact that land-based proxy data for surface temperature suggest the cold blob is unprecedented for over a millennium. It is the exceptional SST warming off the North American coast, an expected dynamical effect of an AMOC slowdown, and strong warming off the west coast of southern Africa (see Fig. 1 in my previous post).

In addition we have the conclusion by Kanzow et al. from hydrographic sections that the AMOC has weakened by ~ 10% since the 1950s (see below). And the Nitrogen-15 data of Sherwood et al. indicating a water mass change that matches what is predicted by the CM2.6 model for an AMOC slowdown. And the subsurface Atlantic temperature proxy data published recently by Thornalley et al. Plus there is work suggesting a weakening open-ocean convection. And finally, our time evolution of the AMOC that we proposed based on our AMOC index, i.e. based on the temperatures in the cold blob region, for the past decades matches evidence from ocean reanalysis and the RAPID project. Some of these other data are shown together with our AMOC index below (for more discussion of this, see my previous post).

Fig. 3 Time evolution of the Atlantic overturning circulation reconstructed from different data types since 1700. The scales on the left and right indicate the units of the different data types. The lighter blue curve was shifted to the right by 12 years since Thornalley found the best correlation with temperature with this lag. Our index is the dark blue line starting in 1870. Graph: Levke Caesar.

Do measurements contradict our reconstruction?

Measuring the AMOC at a particular latitude in principle requires measuring a cross-section across the entire Atlantic, from surface to bottom. There are only two data sets that aspire to measure AMOC changes in this way. First, the RAPID project which has deployed 226 moored measuring instruments at 26.5 ° North for that purpose since 2004. It shows a downward trend since then, which closely matches what we find with our temperature-based AMOC index. Second is the work by Kanzow et al. (2010) using results of five research expeditions across the Atlantic between 1957 and 2004, correcting an earlier paper by Bryden et al. for seasonal effects and finding a roughly 10% decline over this period (in terms of the linear trend of these five data points).

Some other measurements cover parts of the overturning circulation, and generally for short periods only. For 1994-2013, Rossby et al. (2013) – at the Oleander line between 32° and 40° North – found a decrease in the upper 2000m transport of the Gulf Stream of 0.8 Sverdrup (a Sverdrup is a flow of a million cubic meters per second). It is important to realize that the AMOC is not the same as the Gulf Stream. The latter, as measured by Rossby, has a volume flow of ~90 Sverdrup, while the AMOC has a volume flow of only 15-20 Sverdrup. While the upper northward branch of the AMOC does flow via the Gulf Stream, it thus contributes only about one fifth of the Gulf Stream flow. Any change in Gulf Stream strength could thus be due to a change in the other 80% of the Gulf Stream flow, which is wind-driven. The AMOC does however provide the major northward heat transport which affects the northern Atlantic climate, because its return flow is cold and deep. Most of the Gulf Stream flow, in contrast, returns toward the south near the sea surface at a similar temperature to that at which it flowed north, thus leaving little heat behind in the north.

Likewise for 1994-2013, Roessler et al. (2015) found an increase of 1.6 Sv in the transport of the North Atlantic Current between 47° and 53° North. This is a current with a mean transport of ~27 Sverdrup, 60% of which is subtropical waters (i.e., stemming from the south via the Gulf Stream). For this period, our reconstruction yields an AMOC increase by 1.3 Sv.

For 1994-2009, using sea-level data, Willis et al. (2010) reconstructed an increase in the upper AMOC limb at 41°N by 2.8 Sv. For this period, our reconstruction yields an AMOC increase by 2.1 Sv.

Finally, the MOVE project measures the deep southward flow at 15° North. This is a flow of ~20 Sverdrup which can be considered the sum of the north Atlantic overturning circulation plus a small component of returning Antarctic Bottom Water (see Fig. 1 in Send et al. 2011). The following graph shows all these measurements together with our own AMOC index (Caesar et al 2018).

Fig 4. Our AMOC index in black, compared to five different measurement series related more or less strongly to the AMOC. The dashed and dotted linear trends of our index can be directly compared to the linear trends over corresponding data intervals. The solid black line shows our standard smoothed index as shown in our paper and in Fig. 3. Graph by Levke Caesar.

First of all, it is clear that these data contain a lot of year-to-year variability – which doesn’t correlate between the different measurements and for our purposes is just ‘noise’ and not a climate signal. That is why for our index we generally only consider the long-term (multidecadal) changes in SST to reflect changes in the AMOC. Thus, we need to look at the trend lines in Fig. 4.

Given that even these trends cover short periods of noisy data sets and thus are sensitive to the exact start and end years, and that lags between the various parts of the system may be expected, all these trends are surprisingly consistent! At least I don’t see any significant differences or inconsistencies between these various trends. Generally, the earlier trends in the left part of the graph are upward and the later trends going up to the present are downward. That is fully consistent with our reconstruction showing a low around 1990, an AMOC increase up the early 2000s and then a decline up to the present (compare Fig. 3).

Claims that any of these measurements are at odds with our index or even disprove the long-term AMOC decline are thus baseless (and thus rightly fit into Breitbart News where they were raised by the notorious James Delingpole).

One interesting question for further research is how the AMOC in the Atlantic is linked to the exchange with the Nordic Seas across a line between Greenland, Iceland and Scotland. In our 2015 paper we showed a model result suggesting an anti-correlation of these overflows with the AMOC, and our new paper suggests a similar thing: a warm anomaly off Norway coinciding with the cold anomaly in the subpolar Atlantic, both in the high-resolution CM2.6 model and the observations.

So, while there is obviously the need to understand the ocean circulation changes in the North Atlantic in more detail, I personally have no more doubts that the conspicuous ‘cold blob’ in the subpolar Atlantic is indeed due to a long-term decline of the northward heat transport by the AMOC. If you still have doubts, we’d love to hear your arguments!


Good thing? Of course.*

I was invited to give a short presentation to a committee at the National Academies last week on issues of reproducibility and replicability in climate science for a report they have been asked to prepare by Congress. My slides give a brief overview of the points I made, but basically the issue is not that there isn’t enough data being made available, but rather that there is too much!

A small selection of climate data sources is given on our (cleverly named) “Data Sources” page and these and others are enormously rich repositories of useful stuff that climate scientists and the interested public have been diving into for years. Claims that have persisted for decades that “data” aren’t available are mostly bogus (to save the commenters the trouble of angrily demanding it, here is a link for data from the original hockey stick paper. You’re welcome!).

The issues worth talking about are, however, a little more subtle. First off, what definitions are being used here? This committee has decided that, formally:

  • Reproducibility is the ability to test a result using independent methods and alternate choices in data processing. This is akin to a different laboratory testing an experimental result or a different climate model showing the same phenomena etc.
  • Replicability is the ability to check and rerun the analysis and get the same answer.

[Note that these definitions are sometimes swapped in other discussions.] The two ideas are probably best described as checking the robustness of a result, or rerunning the analysis. Both are useful in different ways. Robustness is key if you want to make a case that any particular result is relevant to the real world (though it is necessary, not sufficient), and if a result is robust, there’s not much to be gained from rerunning the specifics of one person’s or one group’s analysis. To be sure, rerunning the analysis is useful for checking that the conclusions do stem from the raw data, and it is a great platform for subsequently testing their robustness (by making different choices of input data, analysis methods, etc.) as efficiently as possible.

So what issues are worth talking about? First, the big success in climate science with respect to robustness/reproducibility is the Coupled Model Intercomparison Project – all of the climate models from labs across the world running the same basic experiments with an open data platform that makes it easy to compare and contrast many aspects of the simulations. However, this data set is growing very quickly and the tools to analyse it have not scaled as well. So, while everything is testable in theory, bandwidth and computational restrictions make it difficult to do so in practice. This could be improved with appropriate server-side analytics (which are promised this time around) and the organized archiving of intermediate and derived data. Analysis code sharing in a more organized way would also be useful.
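To make the "organized analysis code sharing" point concrete, here is a minimal sketch of the kind of script that could be archived alongside a paper. It uses Python with the xarray library (my choice, not anything prescribed by CMIP), and the file pattern and the 'tas' variable name are placeholders for whatever subset of the archive has been downloaded locally:

    import glob
    import numpy as np
    import xarray as xr

    # Placeholder pattern for locally downloaded monthly near-surface air temperature files.
    paths = sorted(glob.glob("cmip/*/tas_Amon_*_historical_*.nc"))

    climatologies = []
    for p in paths:
        ds = xr.open_dataset(p, use_cftime=True)
        tas = ds["tas"].sel(time=slice("1981", "2010"))
        # Area-weight by cos(latitude) before averaging over the globe and the period.
        weights = np.cos(np.deg2rad(ds["lat"]))
        gm = tas.weighted(weights).mean(dim=("lat", "lon")).mean(dim="time")
        climatologies.append(float(gm))

    print("Multi-model mean 1981-2010 climatology (K):", np.mean(climatologies))

The specific calculation matters less than the practice: archiving such scripts with the derived data makes re-running, and then varying, an analysis cheap for everyone else.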

One minor issue is that while climate models are bit-reproducible at the local scale (something essential for testing and debugging), the environments for which that is true are fragile. Compilers, libraries, and operating systems change over time, and preclude taking a code and its input files from, say, 2000 and getting exactly the same (bit-for-bit) results for simulations that are sensitive to initial conditions (like climate models). The emergent properties should be robust, and that is worth testing. There are ways to archive the run environment in digital ‘containers’, so this isn’t necessarily always going to be a problem, but it has not yet become standard practice. Most GCM codes are freely available (for instance, GISS ModelE, and the officially open source DOE E3SM).
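The distinction between bit-for-bit identity and robustness of emergent properties is easy to make concrete. A hypothetical sketch (the choice of the global mean as the "emergent" statistic and the tolerance are purely illustrative): bit-for-bit identity is only expected when compiler, libraries and hardware are unchanged; across environments, only the second, statistical test is meaningful.

    import hashlib
    import numpy as np

    def file_hash(path, chunk=1 << 20):
        # Bit-for-bit check: identical output files give identical SHA-256 digests.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            block = f.read(chunk)
            while block:
                h.update(block)
                block = f.read(chunk)
        return h.hexdigest()

    def emergent_match(field_a, field_b, tol=0.05):
        # Across compilers/machines we only expect emergent statistics (here,
        # the mean of some diagnostic field) to agree within a tolerance.
        return abs(np.mean(field_a) - np.mean(field_b)) < tol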

There is more to climate science than GCMs of course. There are operational products (like GISTEMP – which is both replicable and reproducible), and paleo-climate records (such as are put together in projects like PAGES2K). What the right standards should be for those projects is being actively discussed (see this string of comments or the LiPD project, for instance).

In all of the real discussions, the issue is not whether to strive for R&R, but how to do it efficiently, usably, and without unfairly burdening data producers. The costs (if any) of making an analysis replicable are borne by the original scientists, while the benefits are shared across the community. Conversely, the costs of reproducing research are borne by the community, while the benefits accrue to the original authors (if the research is robust) or to the community (if it isn’t).

One aspect that is perhaps under-appreciated is that if research is done knowing from the start that there will be a code and data archive, it is much easier to build that into your workflow. Creating usable archives as an afterthought is much harder. This lesson also holds for whole communities – if we build an expectation for organized community archives and repositories, it’s much easier for everyone to do the right thing.

* For the record, this does not imply support for the new EPA proposed rule on ‘transparency’**. This is an appallingly crafted ‘solution’ in search of a problem, promoted by people who really think that the science of air pollution impacts on health can be disappeared by adding arbitrary hoops for researchers to jump through. They are wrong.

** Obviously this is my personal opinion, not an official statement.

RealClimate by Rasmus

The climate system is complex, and a complete description of its state would require huge amounts of data. However, it is possible to keep track of its conditions through summary statistics.

There are some nice resources which give an overview of a number of climate indicators. Some examples include NASA and The Climate Reality Project.

The most common indicators are the atmospheric background CO2 concentration, the global mean temperature, the global mean sea level, and the area covered by snow or Arctic sea ice. Other indicators include rainfall statistics, drought indices, or other hydrological aspects. The EPA provides some examples.

One challenge has been that the state of the hydrological cycle is not as easily summarised by a single index in the same way as the global mean temperature or the global mean sea level. However, Giorgi et al. (2011) suggested a measure of hydroclimatic intensity (HY-INT), an integrated metric that captures both precipitation intensity and dry-spell length.
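As a rough sketch of how such a metric can be computed from a daily precipitation series (given as a NumPy array in mm/day): my reading of Giorgi et al. (2011) is that HY-INT is the product of wet-day intensity and mean dry-spell length, each normalised by a reference period; the 1 mm/day wet-day threshold below is the usual convention, not something fixed by the paper.

    import numpy as np

    def wet_day_intensity(pr, wet=1.0):
        # Mean precipitation (mm/day) on days at or above the wet-day threshold.
        wet_days = pr[pr >= wet]
        return wet_days.mean() if wet_days.size else np.nan

    def mean_dry_spell(pr, wet=1.0):
        # Mean length (days) of consecutive runs of dry days.
        spells, run = [], 0
        for dry in (pr < wet):
            if dry:
                run += 1
            elif run:
                spells.append(run)
                run = 0
        if run:
            spells.append(run)
        return np.mean(spells) if spells else np.nan

    def hy_int(pr, pr_ref, wet=1.0):
        # Hydroclimatic intensity of pr relative to a reference period pr_ref.
        return (wet_day_intensity(pr, wet) / wet_day_intensity(pr_ref, wet)) * \
               (mean_dry_spell(pr, wet) / mean_dry_spell(pr_ref, wet))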

There are also global datasets of indices representing the more extreme aspects of climate, called CLIMDEX, providing a list of 27 core climate extremes indices (the so-called ‘ETCCDI’ indices, referring to the ‘CCl/CLIVAR/JCOMM Expert Team on Climate Change Detection and Indices’).

In addition, there is a website hosted by NOAA that presents the U.S. Climate Extremes Index (CEI) in an interactive way.

So there are quite a few indicators for various aspects of the climate. One question we should ask, however, is whether they capture all the important and relevant aspects of the climate. I think that they don’t, and that there are still some gaps.


Perhaps there is room for more indicators inspired by the “big picture physics”, such as the planetary energy balance and the outgoing long-wave radiation (OLR). An increased greenhouse effect means that the atmosphere becomes more opaque for infra-red radiation (IR), while the visible light that heats the surface is unaffected.

The heat loss from a planet happens through IR radiation, since space is virtually a vacuum where energy is only transmitted through electromagnetic waves. If one could see IR light, an opaque atmosphere would make the pattern of emitted IR diffuse, since only the IR from the upper levels of the atmosphere escapes to space after being absorbed and re-emitted by the greenhouse gases (this of course depends on the wavelength of the IR and the absorption spectrum, but we can use this assumption for heat loss integrated over the whole IR spectrum).

The figure below shows the mean IR estimated from the 2-meter temperature according to the Stefan-Boltzmann law (upper), the OLR measured by satellites (middle), and their difference (lower).

Long-wave radiation estimated from surface temperatures according to the Stefan-Boltzmann law (upper), measured by satellites (middle), and the difference between the two (lower). (Source code: olr.R; PDF)

Hence, we would expect to see increasing differences in the spatial OLR structure compared to that of the heat emission from the surface, as the greenhouse effect is increased. One index capturing this could be the correlation between the spatial patterns in OLR and the surface IR flux over time (figure below taken from Benestad (2016)). 

Trend in the pattern correlation between outgoing long-wave radiation (OLR) measured by satellites and that calculated from surface temperatures. A decrease in the spatial correlation is consistent with the atmosphere becoming more opaque to IR.
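A minimal sketch of such an index for a single time step, assuming gridded (lat, lon) fields of OLR and 2-m temperature on a regular latitude-longitude grid; the cos-latitude area weighting is my assumption, not necessarily what was used for the figure:

    import numpy as np

    SIGMA = 5.670374e-8  # Stefan-Boltzmann constant (W m^-2 K^-4)

    def pattern_correlation(olr, t2m, lat):
        # Area-weighted spatial correlation between observed OLR and sigma*T^4
        # estimated from the 2-m temperature; olr and t2m are (lat, lon) fields.
        surf_ir = SIGMA * t2m**4
        w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(olr)

        def wmean(x):
            return np.sum(w * x) / np.sum(w)

        a, b = olr - wmean(olr), surf_ir - wmean(surf_ir)
        return wmean(a * b) / np.sqrt(wmean(a**2) * wmean(b**2))

Repeating this for every time step in the satellite record and fitting a trend gives an index of the kind shown in the figure.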

Another index for the state of the climate and the hydrological cycle could be a metric for the global atmospheric overturning: how much air ascends and descends.

The vertical motion in the atmosphere plays a role in moving heat and moisture to greater heights, and influences both rain patterns and the OLR. One indicator could be the variance of the vertical velocity $w$ over space, estimated over grid boxes each with area $A_i$:

$$\sigma_w^2 = \frac{\sum_i A_i\,(w_i - \bar{w})^2}{\sum_i A_i}, \qquad \bar{w} = \frac{\sum_i A_i\,w_i}{\sum_i A_i}$$

Trends in the global variance of vertical flow over space for three different height intervals in the atmosphere. An increase in the vertical motion of the mid-troposphere is consistent with more convection and increased heat transport by convection; it is likely that this has affected the clouds. The vertical velocity is the quantity $w$ in the expression above. (Source: Benestad (2016), PDF)

We can estimate the atmospheric overturning from reanalyses, which provide data on the flow over a range of vertical levels and on a global scale. According to the figure above, there has been an increase in the global overturning indicator for the middle troposphere (between 1 and 6.5 km above the surface).

The overturning indicator for the lower boundary layer, which is characterised by turbulence, shows a different trend to that in the middle troposphere. The vertical motions in the upper part of the troposphere are also less pronounced.
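For a single level and time step from a reanalysis, the overturning indicator sketched above amounts to an area-weighted spatial variance; a minimal version (the vertical-velocity field and grid-box areas are assumed to come from the reanalysis files):

    import numpy as np

    def overturning_index(w, area):
        # Area-weighted spatial variance of the vertical velocity field w;
        # w and area are arrays of the same shape (e.g. lat x lon).
        wbar = np.sum(area * w) / np.sum(area)
        return np.sum(area * (w - wbar)**2) / np.sum(area)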

Another indicator could be the height at which the temperature is 254 K (the 254 K isotherm), which can be taken as a crude proxy for the average depth of the atmosphere from which heat escapes to space (Benestad, 2016).
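Estimating that height from a temperature profile is a simple interpolation exercise; a sketch, assuming a profile given as 1-D arrays ordered from the surface upward:

    import numpy as np

    def isotherm_height(temp, height, t0=254.0):
        # Height at which the profile first crosses t0 (K), by linear interpolation.
        for k in range(len(temp) - 1):
            t1, t2 = temp[k], temp[k + 1]
            if (t1 - t0) * (t2 - t0) <= 0:
                if t2 == t1:
                    return height[k]
                return height[k] + (t0 - t1) / (t2 - t1) * (height[k + 1] - height[k])
        return np.nan  # the profile never reaches t0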

A neglected indicator, which I think should be an obvious one, is the daily precipitation area $A_P$. This indicator has a profound meaning for the hydrological cycle and is relevant for the question of flood risk and droughts.

The mean precipitation taken over the area with precipitation on any given day can be considered as the wet-day mean precipitation, and provides an indicator of the mean precipitation intensity.

The mean precipitation intensity is related to the mean evaporation and is proportional to the ratio of the areas of evaporation and rainfall:

$$\langle P \rangle \approx \langle E \rangle\,\frac{A_E}{A_P}$$

where $\langle E \rangle$ is the mean evaporation over the evaporation area $A_E$, and $A_P$ is the precipitation area.

There is a kind of “funnel effect”, since the water evaporated over a large area has to come down as precipitation over a significantly smaller area. This is a bit like the action of a funnel (see figure below), where the water moves more slowly at the top, where the cross-section is large, than at the bottom, where it is small.

The difference between the areas of evaporation and precipitation has a similar effect to a funnel: if the mean evaporation over a large area $A_E$ is returned as rain over a smaller area $A_P$, then the mean intensity is amplified by the factor $A_E/A_P$.

It is possible to get an estimate of the semi-global precipitation area from satellite observations (Benestad, 2018). The figure below indicates that the area with daily rain between 50S-50N has decreased by 7% since 1998, which implies that the rain has become more intense and concentrated over a smaller region.

The area between 50S and 50N (77% of Earth’s surface area) with precipitation estimated from the TRMM data. A reduction in the precipitation area implies higher mean precipitation intensity, and may be linked to changes in the atmospheric overturning presented above. (Source: Benestad (2018))
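Estimating such an index from gridded satellite data is straightforward in principle; a sketch for a single day of a (lat, lon) precipitation field (the 1 mm/day threshold and the cos-latitude area weighting are my assumptions, not necessarily those used for the TRMM analysis):

    import numpy as np

    def precip_area_fraction(pr, lat, threshold=1.0):
        # Fraction of the analysed area with daily rain at or above the threshold (mm/day).
        w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(pr)
        return np.sum(w * (pr >= threshold)) / np.sum(w)

    def wet_area_mean_intensity(pr, lat, threshold=1.0):
        # Mean precipitation taken only over the raining part of the domain.
        w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(pr)
        wet = pr >= threshold
        return np.sum(w * pr * wet) / np.sum(w * wet)

A shrinking area fraction with roughly unchanged total rainfall implies a rising wet-area intensity – the “funnel” argument above, in code form.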

There may be other climate indicators that I have missed. Nevertheless, I hope there will be more discussions about climate indicators and more resources in the future that can offer up-to-date information about the state of the climate, based on these. Such sites could offer both graphical presentations and the actual numbers.

References
  1. F. Giorgi, E. Im, E. Coppola, N.S. Diffenbaugh, X.J. Gao, L. Mariotti, and Y. Shi, "Higher Hydroclimatic Intensity with Global Warming", Journal of Climate, vol. 24, pp. 5309-5324, 2011. http://dx.doi.org/10.1175/2011JCLI3979.1
  2. R.E. Benestad, "A mental picture of the greenhouse effect", Theoretical and Applied Climatology, vol. 128, pp. 679-688, 2016. http://dx.doi.org/10.1007/s00704-016-1732-y
  3. R.E. Benestad, "Implications of a decrease in the precipitation area for the past and the future", Environmental Research Letters, vol. 13, pp. 044022, 2018. http://dx.doi.org/10.1088/1748-9326/aab375