Global warming is the “hottest” environmental issue of the day, quite possibly of all time. Yet it’s increasingly clear that most people, even those who are passionate about the issue (on both sides), aren’t very well-informed about what earth’s temperature is doing, what it has done in the past, and what it’s likely to do in the future. There’s quite a gap between what most people know about the subject, and what people need to know.
I’d like to discuss the history of earth’s temperature according to thermometers.
Worldwide, there are many organizations that serve as “keepers of the thermometer.” That’s not to say there aren’t other excellent record-keepers and researchers, but these lead the field. Temperature measurements happen every day, every night, every season of every year ad infinitum, from thousands upon thousands of locations over land and sea. These organizations have collected past observations and checked and double-checked them, so that errors can be corrected when possible and erroneous data discarded when not. They determine the changes over time, and the differences between different regions of the earth. And they make their data freely available on the web. Five are best-known and most often used: NASA GISS, NOAA, HadCRU, Berkeley Earth, and Cowtan & Way.
What climate researchers are most interested in is temperature change. Also, temperature differences can generally be measured with much greater precision and reliability than absolute temperature. So, what is usually studied is not temperature itself, but temperature anomaly.
Temperature anomaly is just the difference between the temperature and what it used to be at the same time of year during some “reference period,” called the baseline period. For example, for HadCRU the reference period is 1961 through 1990, while for NASA GISS it’s 1951 through 1980. If it’s hotter now than during the reference period, the temperature anomaly is positive; if it’s colder now, the anomaly is negative.
Anomaly doesn’t just isolate temperature change from temperature itself. It also eliminates the yearly cycle of the seasons. After all, we’re not really interested in the fact that summer is hotter than winter, we already knew that. We’re most interested in whether or not this summer is hotter or colder than what a typical summer used to be (during the baseline period). By defining anomaly as the difference from the baseline average at the same time of year, we remove the seasonal cycle, which helps us focus on the meaningful changes over time rather than the seasonal changes we already knew about.
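To make the recipe concrete, here’s a minimal sketch in Python (using NumPy) with entirely invented numbers: build a monthly temperature series with a seasonal cycle, compute the per-month baseline average, and subtract it.

```python
import numpy as np

# Sketch of computing temperature anomaly. The numbers are synthetic:
# 40 years (1961-2000) of monthly "temperatures" built from a seasonal
# cycle, a slow warming trend, and random scatter.
rng = np.random.default_rng(0)
years = np.arange(1961, 2001)
months = np.arange(12)
seasonal = 10.0 * np.sin(2 * np.pi * months / 12)   # the cycle we want to remove
trend = 0.02 * (years - 1961)                       # slow warming, deg C per year
temps = seasonal[None, :] + trend[:, None] + rng.normal(0, 0.3, (40, 12))

# Baseline climatology: the 1961-1990 average for each calendar month.
in_baseline = (years >= 1961) & (years <= 1990)
baseline = temps[in_baseline].mean(axis=0)

# Anomaly = temperature minus the baseline average for the same month.
anomaly = temps - baseline[None, :]

# The big seasonal swing is gone; only trend plus scatter remain.
print(temps.std(), anomaly.std())
```

Subtracting each calendar month’s own baseline average is what removes the seasonal cycle: January is compared with past Januaries, July with past Julys.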
Global estimates are computed by determining the geographic distribution of temperature anomaly worldwide, and averaging that to get the best estimate of global temperature anomaly. This is the data that will tell us whether or not earth’s temperature has changed, or is changing.
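One detail worth knowing about that global averaging: on a regular latitude-longitude grid, cells near the poles cover far less area than cells near the equator, so the average must be area-weighted (each latitude band weighted in proportion to the cosine of its latitude). Here’s a sketch with an invented anomaly grid:

```python
import numpy as np

# Sketch of an area-weighted global mean. On a regular latitude-longitude
# grid, cells shrink toward the poles, so each latitude band gets a weight
# proportional to cos(latitude). The anomaly grid is invented.
lats = np.arange(-87.5, 90, 5.0)             # centers of 5-degree bands
lons = np.arange(-177.5, 180, 5.0)
anom = np.ones((lats.size, lons.size))       # pretend every cell reads +1.0
anom[lats > 60, :] = 3.0                     # extra (invented) Arctic warming

weights = np.broadcast_to(np.cos(np.radians(lats))[:, None], anom.shape)
global_mean = np.average(anom, weights=weights)

# An unweighted mean would overstate the small polar area:
print(global_mean, anom.mean())
```

Without the weighting, the small polar cells would count as much as the large equatorial ones, exaggerating the Arctic’s contribution to the global figure.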
Historical Thermometer Readings
Until the latter part of the 19th century, we didn’t have enough temperature measurements from enough places to do the job. The HadCRU time series runs from 1850 to the present, as do the Berkeley data and the Cowtan & Way data, while the NASA and NOAA data go from 1880 to today. Let’s look at the NASA data, starting with a graph very similar to the first one of its kind I saw: global average temperature anomaly, averaged throughout the year, for every year from 1880 through 2017. Each red bar represents the difference between the year’s temperature and the 1951-through-1980 average (the baseline period for NASA data). The coldest year recorded is 1904, at -0.495; the hottest is 2016, at +0.986 (measured in degrees Celsius, °C).
I’ll also show the same exact information in a different form. Here’s the global average yearly temperature anomaly as a line graph; each dot shows the average yearly temperature and I’ve connected them with a line.
There’s a lot of wiggling around, fluctuation from year to year. But superimposed on these year-to-year changes are some more persistent trends: slight warming from about 1915 to about 1940, a levelling off from then until 1970, and a sharp rise from 1970 to the present.
These graphs (the forms most people have seen) are actually slowed-down versions of real temperature change. In real life, temperature changes from moment to moment. But by taking averages over 1-year periods, we have “slowed down” the changes, eliminating those that happen on timescales shorter than a year. We can go into greater time detail by looking at global average temperature, averaged not over every year, but over each month:
The line represents the difference between the given month’s temperature and the average for that month from 1951 to 1980. The (relatively) coldest month is January 1893, at -0.8, while the hottest is February 2016, at +1.34.
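The “slowing down” by averaging works like this in practice: collapse each calendar year’s twelve monthly values into a single annual value. A sketch with synthetic numbers:

```python
import numpy as np

# Sketch of "slowing down" the record: collapse each calendar year's
# twelve monthly anomalies into a single annual average. The monthly
# series is synthetic (a small trend plus random scatter).
rng = np.random.default_rng(1)
n_years = 30
monthly = 0.02 * np.arange(n_years * 12) / 12 + rng.normal(0, 0.2, n_years * 12)

annual = monthly.reshape(n_years, 12).mean(axis=1)

# Month-to-month jumps are much larger than year-to-year jumps.
print(np.diff(monthly).std(), np.diff(annual).std())
```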
The monthly averages show even more up-and-down fluctuations than the yearly averages. Each month is unique, with its own set of influences that are nearly impossible to predict. This leads to short-term, month-to-month variation that is essentially unpredictable but limited in range. Such variations happen on very rapid timescales. I’ll call this very fast natural variation; it’s sometimes also called scatter.
There are also fluctuations that have some persistence, but still happen rapidly. Such changes last way longer than a single month or two but not more than a few years. This is fast natural variation of global temperature, and it too has a limited range of variation. Sometimes we can figure out exactly why a particular fast natural variation has occurred.
Finally, there are the slower, even more persistent changes. For example, the temperature is higher, on average, today than it was in the past; we noted the warming from about 1970 to the present, when in addition to scatter and fast natural variations, temperature also showed a steady rise. Such persistent changes are slow variations; we can also call them trends.
Slow and Fast Changes
It’s natural to want to separate the long-term (slow) changes from the short-term (fast) changes and the very-short-term (very fast) changes; in fact, we want to know the changes on all timescales. We’ve already seen one way to do this: by averaging over every year, we eliminated changes whose duration is much less than a year. Essentially we got rid of the very fast changes, giving us a better look at the fast changes (and the slow changes too).
We can carry this technique further, by averaging over even longer timescales. If, instead of 1-year averages, we take 10-year averages, then we’ll eliminate not only the very fast variations, but the fast variations as well, leaving only the long-term changes: the trends. Plotting the slow 10-year averages in red and the fast 1-year averages in blue, we get this:
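The same block-averaging idea, carried to 10-year blocks, looks like this (synthetic data again):

```python
import numpy as np

# Sketch of 10-year block averages: the same averaging trick on a longer
# timescale, which suppresses the fast year-to-year variations and leaves
# the trend. The annual series is synthetic.
rng = np.random.default_rng(2)
years = np.arange(1880, 2020)                    # 140 years, 14 full decades
annual = 0.008 * (years - 1880) + rng.normal(0, 0.15, years.size)

decadal = annual.reshape(-1, 10).mean(axis=1)    # one value per decade
decade_starts = years.reshape(-1, 10)[:, 0]

for start, value in zip(decade_starts, decadal):
    print(f"{start}s: {value:+.2f}")
```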
What Causes Changes?
Many of Earth’s temperature changes, especially the very fast month-to-month changes, happen for reasons we don’t fully understand. But there are some things we do know, things that cause year-to-year changes which are sometimes quite prominent. Let’s look at NASA monthly average temperature data since about 1970:
I’ve marked some notable events, with labels for “Mt. Pinatubo volcano” in blue and “el Niño” in red. That’s because we do know the reason behind the downward dip in 1993 and the upward spikes in 1998 and 2016.
A very large volcanic explosion throws a lot of junk into the atmosphere. If it’s big enough, it can inject material very high, into the stratosphere, and that junk can take years to settle out of the air. Among the things volcanoes emit are sulfur compounds, and chemical reactions in the atmosphere turn much of those into sulfates. Those in turn can assemble into aerosols, tiny particles suspended in the air. Sulfate aerosols tend to be bright, scattering some of the incoming sunlight right back to space. Hence these aerosols block some of the sunlight from reaching earth’s surface, and since sunlight is the ultimate source of energy for Earth’s climate, reducing it has a net cooling effect. The Mt. Pinatubo explosion caused a temporary cooling of the globe for a few years afterward.
El Niño is a spreading out of the pool of warm water in the eastern-central equatorial Pacific ocean. When it spreads out, it’s exposed to more air, releasing more of its heat into the atmosphere and warming the climate. Both 1998 and 2016 brought some of the strongest el Niño events ever recorded, making those years extra hot (even for the time).
Those changes are only temporary, and although they may be rare, they do happen again and again. As such, they’re not climate change; it would be more appropriate to call them “weather,” and more scientific to call them “decadal variability.”
Another way to isolate long-term change from short-term change is to smooth the data: fit a mathematical curve, but tune the curve so that it “smooths out” fluctuations faster than the timescale we’re interested in.
The figure below shows two smooth-fit curves to the data, one tuned for 10-year changes (red) and another for 1-year changes (blue). The smallest details differ from the graph of averages, but the results are essentially the same: fast natural variations like el Niño and Mt. Pinatubo, and two episodes of warming, from about 1915 to 1940, and about 1970 to the present.
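Here’s a sketch of the tuning idea (a Gaussian-weighted moving average, a stand-in for the smooth-fit curves in the figure rather than the exact method): the width parameter decides which fluctuations get smoothed away.

```python
import numpy as np

# Sketch of timescale-tuned smoothing: a Gaussian-weighted moving average
# whose width decides which fluctuations get smoothed away. (This is a
# stand-in for the post's smooth-fit curves, not the exact method.)
# The monthly series is synthetic: flat, then warming after 1970, plus noise.
rng = np.random.default_rng(3)
t = np.arange(1880, 2020, 1 / 12)
signal = np.where(t > 1970, 0.018 * (t - 1970), 0.0)
data = signal + rng.normal(0, 0.15, t.size)

def smooth(y, t, timescale):
    """Gaussian smoother; `timescale` (in years) tunes what is averaged out."""
    out = np.empty_like(y)
    for i, ti in enumerate(t):
        w = np.exp(-0.5 * ((t - ti) / (timescale / 2)) ** 2)
        out[i] = np.average(y, weights=w)
    return out

slow = smooth(data, t, timescale=10.0)   # keeps only the slow trends
fast = smooth(data, t, timescale=1.0)    # keeps the fast variations too
```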
Even the slow curve still shows wiggles that aren’t really part of climate change. To isolate climate from weather, we need an even longer smoothing timescale. The usual choice is 30 years, because that has proved a useful choice historically (though it was intended for studying climate, not climate change). In my opinion, a better choice is a 20-year smoothing timescale. Doing that gives us this:
The smooth curve isn’t the exact climate signal isolated from the weather fluctuations, it’s only an estimate. Therefore there is uncertainty associated with it. One of the benefits of mathematics is that it not only enables us to make this estimate, it also gives us some idea of how uncertain our estimate is. Here’s another version of the previous graph:
There are two additional red lines, above and below the smooth-fit estimate, showing the probable range in which the actual climate signal value lies. The true value could be outside those limits, but probably not. If “probably” and “uncertainty” make you a bit uncomfortable, accept the fact that there’s no getting around it. That’s the nature of measurement and estimation in science — almost nothing is certain, but with skill and luck we can at least determine roughly how uncertain our estimates are.
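One generic way to produce such an uncertainty band (not necessarily the method used for this figure) is the bootstrap: resample the residuals around the smooth fit, re-smooth the resampled data many times, and record how much the curve moves.

```python
import numpy as np

# Generic bootstrap sketch for an uncertainty band around a smoothed curve.
# The data are synthetic: a slow trend plus random "weather" noise.
rng = np.random.default_rng(4)
t = np.arange(1880, 2020, 1.0)
data = 0.008 * (t - 1880) + rng.normal(0, 0.15, t.size)

def smooth(y, t, timescale=20.0):
    out = np.empty_like(y)
    for i, ti in enumerate(t):
        w = np.exp(-0.5 * ((t - ti) / (timescale / 2)) ** 2)
        out[i] = np.average(y, weights=w)
    return out

fit = smooth(data, t)
resid = data - fit

# Re-smooth many resampled versions of the data to map out the spread.
boots = np.array([
    smooth(fit + rng.choice(resid, resid.size, replace=True), t)
    for _ in range(200)
])
lower, upper = np.percentile(boots, [2.5, 97.5], axis=0)
# The true signal probably (not certainly) lies between lower and upper.
```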
Here’s another view that isn’t necessarily the most scientifically precise, but I find it appealing:
We can’t estimate global temperature from thermometers before about 1850, because there aren’t enough measurements from enough places. But there are some isolated places with longer temperature records than that. The longest of all is the Central England Temperature (CET).
Central England Temperature (CET) has been reconstructed from thermometer measurements from 1659 to the present. The earliest data, from 1659 to 1670, are so much less precise than later data that monthly average temperature is only recorded to the nearest degree Celsius. From 1671 to late 1722, temperature is estimated to the nearest half a degree, except for a brief episode from 1699 to 1706 when data are more precise, to a tenth of a degree. Since late 1722 CET has been recorded as average monthly temperature to the nearest tenth of a degree.
In those early days, weather observations were not yet made systematically by weather services. The Central England Temperature is computed by combining several weather stations in England. In the beginning few stations were available, and from 1707 to 1722 even data from Delft in The Netherlands had to be used.
And what do the data say? This:
This is temperature, not temperature anomaly, so the cycle of the seasons is still present. Here it is with the seasonal cycle removed to form temperature anomaly:
The long-term pattern isn’t immediately obvious. In large part that’s because the level of fluctuation, of random scatter, tends to be a lot bigger for a single location than for the entire Earth.
Let’s smooth it with a mathematical curve, in hopes of seeing the long-term pattern better:
The smooth fit (the red line) does show some signs of a pattern. This is far more obvious if we expand the graph to show more detail, with just the smooth fit:
Now we can definitely see patterns. The most recent period (1990 to the present) has the highest average temperature anomaly by far, and the change since the previous 20-year average (1970 to 1989) is the largest change in the entire CET record. If we stick to the more reliable part of the data, from 1723 to the present, we see that the smoothed values (an estimate of the climate part rather than the weather) never got more than 0.23 degrees hotter or colder than zero until 1940. The most recent value is 1.2. The recent rapid rise doesn’t look like anything that’s happened before.
But this is only indicative, it’s far from conclusive, because it’s only one small part of the globe. To estimate global temperature before the age of thermometry, we have to estimate temperature from other data. These are called proxy data, and come in many forms: tree rings, microfossils, borehole temperature profiles, coral reefs, ice cores, and many more. The best approach is to take all the available temperature proxy data and combine it with the best statistical tools. Then we can hope to see the temperature history of the earth over the last few thousand years. But that is the subject of a future post.
You can get global temperature data yourself online, from the websites of the organizations mentioned above.
It’s not always obvious how to access and process the data. We’ll cover some of that in future posts.
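As a small taste, here’s a hedged sketch of reading a GISTEMP-style CSV: one row per year, with twelve monthly anomaly columns and “***” marking missing values. The real file has extra header lines and trailing seasonal-average columns; the rows below are invented placeholders, not real data.

```python
import csv
import io

# Hedged sketch of parsing a GISTEMP-style CSV. The real file has extra
# header lines and trailing seasonal columns; these rows are invented
# placeholders, not real data.
raw = """Year,Jan,Feb,Mar,Apr,May,Jun,Jul,Aug,Sep,Oct,Nov,Dec
1880,-.30,-.21,-.18,-.27,-.14,-.29,-.24,-.08,-.17,-.15,-.19,-.22
1881,-.10,-.14,.01,.03,.01,-.32,.08,-.04,-.16,-.22,-.21,-.10
2017,1.03,1.13,1.17,.95,.91,.70,.82,.87,.77,.90,.88,.93
"""

records = {}
for row in csv.DictReader(io.StringIO(raw)):
    year = int(row["Year"])
    records[year] = [
        float(v) if v not in ("***", "") else None   # "***" means missing
        for key, v in row.items() if key != "Year"
    ]

print(records[2017][:3])   # first three monthly anomalies of 2017
```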