Wednesday, February 16, 2022

CRISIS FOR THE CLIMATE MODELS?

BY STEVEN HAYWARD IN CLIMATE SCIENCE

One of my heterodox positions on climate change is that many of our scientific efforts to improve our grasp of the earth’s climate system since it became a hot topic (no pun intended) back in the 1970s have actually moved our knowledge backwards. That is, we understand it less well than we did 40 years ago.

This is especially true of the heart of the matter: the computer climate models we use to make predictions about future changes in the climate. But as those models are refined with more and more raw data and endless tweaks to the simulations, the uncertainties have arguably grown larger rather than smaller. This is not as outlandish as it may seem, given that we are asking scientists to get a grasp of a phenomenon that spans so many factors and scientific sub-specialties, from oceanography to forestry.

Few of these difficulties ever make it into mainstream media coverage of climate science—until today. The Wall Street Journal has posted online a long feature that will appear in tomorrow’s print edition entitled “Climate Scientists Encounter Limits of Computer Models, Bedeviling Policy.” Read the whole thing if you have access to the Journal; if not I’ll cover some key excerpts here.

First, deep in the story is an excellent description of the complexity—and also the defects—of climate models. The main climate models contain over 2 million lines of computer code (much of it apparently still in Fortran). Even after an intensive five-year process to rework the code and acquire better data at the National Center for Atmospheric Research (NCAR), “The scientists would find that even the best tools at hand can’t model climates with the sureness the world needs as rising temperatures impact almost every region.”

One big problem is the resolution of the models, described thus:

Even the simplest diagnostic test is challenging. The model divides Earth into a virtual grid of 64,800 cubes, each 100 kilometers on a side, stacked in 72 layers. For each projection, the computer must calculate 4.6 million data points every 30 minutes. To test an upgrade or correction, researchers typically let the model run for 300 years of simulated computer time. . .

But as algorithms and the computer they run on become more powerful—able to crunch far more data and do better simulations—that very complexity has left climate scientists grappling with mismatches among competing computer models.
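
Just to put those figures in perspective (my own back-of-the-envelope arithmetic, not the Journal’s): 64,800 columns is exactly what a one-degree latitude-longitude grid gives you (360 × 180), which near the equator works out to roughly 100 km cells, and multiplying by 72 vertical layers yields about 4.6 million grid cells. Each of those cells has to be updated every 30 simulated minutes, for centuries of simulated time, in even a routine diagnostic run. A quick Python sketch of the arithmetic:

    # Back-of-the-envelope check of the grid figures quoted by the WSJ.
    # My own arithmetic; reading "64,800 cubes" as a 1-degree grid is an assumption.

    columns = 360 * 180                 # 1-degree lat/lon grid -> 64,800 columns
    layers = 72                         # vertical layers
    cells = columns * layers            # ~4.67 million grid cells ("data points")

    timestep_minutes = 30
    steps_per_year = 365 * 24 * 60 // timestep_minutes   # 17,520 steps per simulated year
    sim_years = 300                                       # a typical test run

    print(f"grid cells: {cells:,}")                                   # 4,665,600
    print(f"timesteps in a {sim_years}-year run: {sim_years * steps_per_year:,}")
    print(f"cell updates per run: {cells * sim_years * steps_per_year:.1e}")   # ~2.5e13

That is on the order of tens of trillions of cell updates per test run, each involving the full set of physics equations, which is why even “the simplest diagnostic test” ties up a supercomputer.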

The problem is that the 100 km resolution of the models simply isn’t high enough to predict the climate accurately. Steven Koonin’s recent book Unsettled: What Climate Science Tells Us, What It Doesn’t, and Why It Matters, which contains one of the best discussions for the layperson of how climate models work that I’ve ever seen, drives this point home: “Many important [climate] phenomena occur on scales smaller than the 100 km (60 mile) grid size (such as mountains, clouds, and thunderstorms).” In other words, the accuracy of the models is highly limited. Why can’t we just increase the model resolution? Koonin, who taught computational physics at Caltech, explains: “A simulation that takes two months to run with 100 km grid squares would take more than a century if it instead used 10 km grid squares. The run time would remain at two months if we had a supercomputer one thousand times faster than today’s—a capability probably two or three decades in the future.” (I’ll have a long review of Koonin’s book in the next edition of the Claremont Review of Books.)
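
Koonin’s runtime arithmetic is easy to verify. Cutting the grid spacing from 100 km to 10 km multiplies the number of horizontal cells by 100 (10 × 10), and numerical stability then forces the time step to shrink by roughly another factor of 10, so the total work grows by about a factor of 1,000. A rough sketch of that scaling (my own illustration of the standard argument, not code from Koonin or any actual model):

    # Rough scaling of climate-model runtime with horizontal grid spacing.
    # Assumes work ~ (number of cells) x (number of timesteps), with the
    # timestep shrinking in proportion to the grid spacing (a CFL-type
    # stability constraint). Illustrative only, not a real benchmark.

    def relative_cost(old_spacing_km: float, new_spacing_km: float) -> float:
        refinement = old_spacing_km / new_spacing_km
        horizontal_cells = refinement ** 2   # finer grid in both horizontal directions
        timesteps = refinement               # smaller steps needed to stay stable
        return horizontal_cells * timesteps

    factor = relative_cost(100, 10)          # -> 1000.0
    run_months = 2 * factor                  # 2 months at 100 km resolution
    print(f"cost factor: {factor:.0f}x")
    print(f"estimated run time: ~{run_months / 12:.0f} years")   # ~167 years, i.e. "more than a century"

The same factor of 1,000 explains Koonin’s point about hardware: a supercomputer a thousand times faster would bring the 10 km run back down to two months.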

The Wall Street Journal reports that the newest models kept spitting out even more dire predictions of future warming than many previous models—but that the climate modelers don’t believe the projections:

The scientists soon concluded their new calculations had been thrown off kilter by the physics of clouds in a warming world, which may amplify or damp climate change. “The old way is just wrong, we know that,” said Andrew Gettelman, a physicist at NCAR who specializes in clouds and helped develop the CESM2 model. “I think our higher sensitivity is wrong too. It’s probably a consequence of other things we did by making clouds better and more realistic. You solve one problem and create another.” . . .

Since then the CESM2 scientists have been reworking their climate-change algorithms using a deluge of new information about the effects of rising temperatures to better understand the physics at work. They have abandoned their most extreme calculations of climate sensitivity, but their more recent projections of future global warming are still dire—and still in flux.

Kudos also to the Journal for reporting that the latest IPCC report last summer drew back from some of the previous extreme predictions of future doom, something not widely reported, if at all, in the media: “In its guidance to governments last year, the U.N. climate-change panel for the first time played down the most extreme forecasts.”

This passage is also a big problem for the climatistas:

In the process, the NCAR-consortium scientists checked whether the advanced models could reproduce the climate during the last Ice Age, 21,000 years ago, when carbon-dioxide levels and temperatures were much lower than today. CESM2 and other new models projected temperatures much colder than the geologic evidence indicated. University of Michigan scientists then tested the new models against the climate 50 million years ago when greenhouse-gas levels and temperatures were much higher than today. The new models projected higher temperatures than evidence suggested.

Watch for the climatistas to say, “Move along, nothing to see here.”

One big reason the 100 km resolution of climate models is inadequate is that the behavior of clouds and water vapor can’t be adequately modeled—something the IPCC reports usually admit in the technical sections the media never read. The WSJ story is similarly revealing on this point:

“If you don’t get clouds right, everything is out of whack,” said Tapio Schneider, an atmospheric scientist at the California Institute of Technology and the Climate Modeling Alliance, which is developing an experimental model. “Clouds are crucially important for regulating Earth’s energy balance.” . . .

In an independent assessment of 39 global-climate models last year, scientists found that 13 of the new models produced significantly higher estimates of the global temperatures caused by rising atmospheric levels of carbon dioxide than the older computer models—scientists called them the “wolf pack.” Weighed against historical evidence of temperature changes, those estimates were deemed unrealistic.

By adding far-more-detailed equations to simulate clouds, the scientists might have introduced small errors that could make their models less accurate than the blunt-force cloud assumptions of older models, according to a study by NCAR scientists published in January 2021.

Taking the uncertainties into account, the U.N.’s climate-change panel narrowed its estimate of climate sensitivity to a range between 4.5 and 7.2 degrees Fahrenheit (2.5 to 4 degrees Celsius) in its most recent report for policy makers last August. . .

A climate model able to capture the subtle effects of individual cloud systems, storms, regional wildfires and ocean currents at a more detailed scale would require a thousand times more computer power, they said.
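
One small check on the numbers in that passage (my own arithmetic): because climate sensitivity is a temperature change, the Fahrenheit figures are simply the Celsius figures multiplied by 9/5, with no 32-degree offset, so 2.5 to 4 degrees Celsius does indeed correspond to 4.5 to 7.2 degrees Fahrenheit:

    # Delta-temperature conversion: for a temperature *change*, degF = degC * 9/5 (no +32 offset).
    for delta_c in (2.5, 4.0):
        print(f"{delta_c} C change = {delta_c * 9 / 5} F change")   # 4.5 F and 7.2 F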

But shut up, the science is settled.

https://www.powerlineblog.com/archives/2022/02/crisis-for-the-climate-models.php
