The Danger of Leaving Weather Prediction to AI

Humans have tried to anticipate the weather’s turns for millennia, using early lore—“red skies at night” is an optimistic sign for weather-weary sailors that’s actually associated with dry air and high pressure over an area—as well as observations taken from roofs, hand-drawn maps, and local rules of thumb. These guides to future weather were based on years of observation and experience.

Then, in 1950, a group of mathematicians, meteorologists, and computer scientists—led by John von Neumann, a renowned mathematician who had worked on the Manhattan Project years earlier, and Jule Charney, an atmospheric physicist often considered the father of dynamic meteorology—tested the first computerized, automated weather forecast.

Charney, with a team of five meteorologists, divided the United States into (by today’s standards) fairly large parcels, each more than 700 kilometers across. By running a basic algorithm that took the real-time pressure field in each discrete unit and prognosticated it forward over the course of a day, the team created four 24-hour atmospheric forecasts covering the entire country. It took 33 full days and nights to complete the forecasts. Though far from perfect, the results were encouraging enough to set off a revolution in weather forecasting, moving the field toward computer-based modeling.
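For a sense of what “prognosticating” a gridded field forward in time looks like, here is a minimal, purely illustrative sketch. It is not Charney’s actual 1950 scheme, which integrated a more involved vorticity equation on far slower hardware; it simply advects a synthetic pressure field across a coarse grid using an assumed constant wind, grid spacing, and time step.

```python
# Toy illustration only (not the 1950 algorithm): advance a gridded pressure
# field one step at a time by advecting it with a fixed wind, using
# first-order upwind finite differences. All numbers are assumptions chosen
# to show the idea of stepping a field forward cell by cell.
import numpy as np

nx, ny = 16, 12            # coarse grid, loosely analogous to ~700 km cells
dx = 700e3                 # grid spacing in meters (assumed)
dt = 3600.0                # one-hour time step (assumed)
u, v = 15.0, 5.0           # constant westerly/southerly wind in m/s (assumed)

# Synthetic initial pressure field (Pa): a broad low-pressure center.
x, y = np.meshgrid(np.arange(nx), np.arange(ny))
p = 101325.0 - 2000.0 * np.exp(-((x - nx / 2) ** 2 + (y - ny / 2) ** 2) / 10.0)

def step(field, u, v, dx, dt):
    """Advance the field one time step with upwind advection (periodic edges)."""
    dfdx = (field - np.roll(field, 1, axis=1)) / dx if u >= 0 else (np.roll(field, -1, axis=1) - field) / dx
    dfdy = (field - np.roll(field, 1, axis=0)) / dx if v >= 0 else (np.roll(field, -1, axis=0) - field) / dx
    return field - dt * (u * dfdx + v * dfdy)

# 24 one-hour steps make a 24-hour "forecast" of the toy field.
forecast = p.copy()
for _ in range(24):
    forecast = step(forecast, u, v, dx, dt)

print("initial low center:", round(p.min()), "Pa; forecast low center:", round(forecast.min()), "Pa")
```

On a modern laptop these 24 toy steps finish instantly; as the article notes, calculations of roughly this scale occupied the 1950 team for weeks of machine and card-handling time.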

Over the ensuing decades, billions of dollars in investment and the evolution of faster, smaller computers led to a surge in predictive capability. Models are now capable of interpreting the dynamics of parcels of atmosphere as small as 3 kilometers across, and since 1960 these models have been able to include ever-more-accurate data sent from weather satellites.

In 2016 and 2018, the GOES-16 and -17 satellites launched into orbit, providing a host of improvements, including higher-resolution images and pinpoint lightning detection. The most popular numerical models, the US-based Global Forecast System (GFS) and the model run by the European Centre for Medium-Range Weather Forecasts (ECMWF), received significant upgrades this year, and new products and models are being developed at a faster clip than ever. At a finger’s touch, we can access an astonishingly precise weather forecast for our exact location on the Earth’s surface.

Today’s lightning-speed predictions, the product of advanced algorithms and global data collection, appear one step away from complete automation. But they’re not perfect yet. Despite the expensive models, arrays of advanced satellites, and mega-computers, human forecasters have a unique set of tools all their own. Experience—their ability to observe and draw connections where algorithms cannot—gives these forecasters an edge that lets them keep outperforming the glitzy weather machines in the highest-stakes situations.

Though tremendously useful for big-picture forecasting, models aren’t sensitive to, say, the little updraft in one small land quadrant that suggests a waterspout is forming, according to Andrew Devanas, an operational forecaster at the National Weather Service office in Key West, Florida. Devanas lives near one of the world’s most active regions for waterspouts, marine-based tornadoes that can damage ships that pass through the Florida Straits and even come onshore.

The same limitation impedes predictions of thunderstorms, extreme precipitation, and land-based tornadoes, like those that tore through the Midwest in early December, killing more than 60 people. But when tornadoes occur on land, forecasters can often spot them by looking for their signature on radar; waterspouts are much smaller and often lack this signal. In a tropical environment like the Florida Keys, the weather doesn’t change much from day to day, so Devanas and his colleagues had to manually look at variations in the atmosphere, like wind speed and available moisture—variations that the algorithms don’t always take into account—to see if there was any correlation between certain factors and a higher risk of waterspouts. They compared these observations to a modeled probability index that indicates whether waterspouts are likely and found that, with the right combination of atmospheric measurements, the human forecast “outperformed” the model in every metric of predicting waterspouts.
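The comparison Devanas describes is, at bottom, standard categorical forecast verification. As a rough illustration (the yes/no data below are invented, not the Keys study’s actual records), here is how scores such as probability of detection, false alarm ratio, and critical success index can be computed from a set of forecasts and observed outcomes:

```python
# Hedged sketch of categorical forecast verification with made-up data.
# POD, FAR, and CSI are standard scores built from a 2x2 contingency table
# of hits, misses, and false alarms.

def contingency(forecasts, observations):
    """Count hits, misses, and false alarms for yes/no forecasts."""
    hits = sum(f and o for f, o in zip(forecasts, observations))
    misses = sum((not f) and o for f, o in zip(forecasts, observations))
    false_alarms = sum(f and (not o) for f, o in zip(forecasts, observations))
    return hits, misses, false_alarms

def scores(forecasts, observations):
    hits, misses, false_alarms = contingency(forecasts, observations)
    pod = hits / (hits + misses) if hits + misses else float("nan")                       # probability of detection
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")   # false alarm ratio
    csi = hits / (hits + misses + false_alarms) if hits + misses + false_alarms else float("nan")  # critical success index
    return {"POD": round(pod, 2), "FAR": round(far, 2), "CSI": round(csi, 2)}

# Synthetic example: 10 days, True = "waterspout expected" / "waterspout observed".
observed       = [True, False, False, True, False, True, False, False, True, False]
model_index    = [False, False, True, True, False, False, False, True, True, False]
human_forecast = [True, False, False, True, False, True, False, True, True, False]

print("model:", scores(model_index, observed))
print("human:", scores(human_forecast, observed))
```

With these invented numbers the “human” forecasts score better on all three metrics, which mirrors the kind of result described above; a real verification study would run the same arithmetic on the office’s actual forecast and observation records.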

Similarly, research published by NOAA Weather Prediction Center director David Novak and his colleagues shows that while human forecasters may not be able to “beat” the models on your typical sunny, fair-weather day, they still produce more accurate predictions than the algorithms in bad weather. Over the two decades of data Novak’s team studied, humans were 20 to 40 percent more accurate at forecasting near-future precipitation than the Global Forecast System (GFS) and the North American Mesoscale Forecast System (NAM), the most commonly used national models. Humans also made statistically significant improvements to temperature forecasts over both models’ guidance. “Oftentimes, we find that in the bigger events is when the forecasters can make some value-added improvements to the automated guidance,” says Novak.
