An Avalanche of New Models for Severe Weather Prediction

By: Bob Henson 6:33 PM GMT on July 18, 2015

Not so long ago, forecasters at NOAA had just one high-resolution computer model to tell them where thunderstorms might erupt later in the day. Now there’s a whole cornucopia of models that project how storms will evolve, hour by hour, at fine scale. It’s a bit like having a large network of friends and family to consult when you’re making a big personal decision, instead of asking just one person for a single opinion that might steer you right or wrong. Processing all those viewpoints does take some time, though. Forecasters practiced using the array of new guidance during May and early June as part of the 2015 Spring Forecasting Experiment at the NOAA Hazardous Weather Testbed (HWT) in Norman (see my posts of May 5 and May 21).

Here’s one reason why predicting specific thunderstorms has remained such a challenge: traditional computer models simply aren’t fine-grained enough to explicitly portray individual showers and storms. Instead, they use convective parameterization, where a model takes favorable large-scale conditions as a cue to place showers and thunderstorms (convection) inside model grid cells that may be 10 or 20 miles wide. In contrast, “convection-allowing” models have grid cells that are 6 miles wide or less, which means a single large thunderstorm can emerge naturally across multiple grid cells instead of being artificially implanted by the model within each cell. This allows for a much more realistic portrayal of thunderstorms, with chunky blobs replaced by finely detailed filaments and bands (see Figure 1).


Figure 1. A little over a decade ago, operational forecasters might have used a model-generated forecast of 3-hour precipitation (left image, at 40-km resolution) to gain insight into thunderstorm development. Advances in computing power and modeling have allowed for higher resolution, explicit modeling of showers and thunderstorms, and more frequent time steps within the model. At right: simulated radar reflectivity from a numerical model with 4-km grid resolution and hourly forecast output. Image credit: Greg Carbin, NOAA/SPC.
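To put the resolution contrast in Figure 1 into rough numbers, here's a quick back-of-envelope sketch (my own illustration; the 10-mile storm width is just a hypothetical example):

```python
# Back-of-envelope: how many grid cells span one large thunderstorm?
# The 10-mile storm width is a hypothetical, illustrative value.
KM_PER_MILE = 1.609

def cells_across(storm_width_miles, grid_spacing_km):
    """Number of grid cells spanning a storm of the given width."""
    return (storm_width_miles * KM_PER_MILE) / grid_spacing_km

storm_width = 10.0  # miles
for spacing_km in (40, 12, 4, 3, 1):
    print(f"{spacing_km:>2}-km grid: storm spans ~{cells_across(storm_width, spacing_km):.1f} cells")

# At 40 km the storm occupies a fraction of one cell, so its effects must be
# parameterized; at 3-4 km it covers several cells and can be modeled explicitly.
```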


This year's forecasting experiment called on 18 different model configurations, issued as often as once per hour, with resolutions mostly between 1 and 4 km. Most of these models were run multiple times in ensemble mode, with small variations in the starting-point data designed to mimic the uncertainty in initial observations. All this added up to a bounty of model-generated guidance.
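The ensemble idea itself is simple enough to sketch in a few lines of code. The toy example below is my own illustration, using the classic Lorenz (1963) equations as a stand-in for a real weather model: run the same model from slightly perturbed starting states and watch the members drift apart.

```python
import numpy as np

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) system."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

rng = np.random.default_rng(0)
base = np.array([1.0, 1.0, 1.0])                      # best-guess "analysis" state
members = base + 1e-3 * rng.standard_normal((20, 3))  # 20 slightly perturbed starts

for _ in range(2000):                                  # integrate all members forward
    members = np.array([lorenz63_step(m) for m in members])

print("Ensemble spread (standard deviation):", np.round(members.std(axis=0), 2))
# Tiny initial differences grow into very different outcomes; the spread of the
# members is one measure of forecast uncertainty.
```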

The main job in the forecasting experiment was to issue short-term probabilities of tornadoes, severe hail, severe wind, and "significant" severe hail and wind (2" diameter hailstones or 65-knot winds). NOAA’s Storm Prediction Center (SPC) already estimates such odds for the current day and the following two days, but this experiment tested more frequent probabilities, issued several times a day for periods spanning 1 to 4 hours. The new convection-allowing models provided ample raw material for this task. A variety of forecasters from the public, private, and academic sectors, including participants from across the United States as well as from Australia, Canada, England, and Hong Kong, convened at the testbed to evaluate the new guidance. “Part of what makes this experiment special is the diversity of the participants,” said Adam Clark (NOAA’s National Severe Storms Laboratory), one of the experiment’s lead planners. “It’s designed to mix folks together who typically don’t interact much as part of their regular jobs. The different perspectives make things fun, engaging, and interesting, and most importantly help foster new ideas and directions for future research.”
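Those probabilities have a concrete definition: the chance of at least one severe report within 25 miles (about 40 km) of a point during the valid period. One common way to estimate such a "neighborhood" probability from an ensemble is sketched below; this is a generic illustration of the technique, not the testbed's actual code, and the grid spacing and member fields are invented.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def neighborhood_probability(member_events, radius_km, grid_spacing_km):
    """Fraction of ensemble members producing at least one event within
    radius_km of each grid point.

    member_events: boolean array of shape (n_members, ny, nx).
    """
    r = int(round(radius_km / grid_spacing_km))
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    disk = (xx ** 2 + yy ** 2) <= r ** 2                      # circular footprint
    hits = np.array([binary_dilation(m, structure=disk) for m in member_events])
    return hits.mean(axis=0)                                   # probability field, 0-1

# Invented example: 10 members on a 50 x 50 grid with 4-km spacing.
rng = np.random.default_rng(1)
members = rng.random((10, 50, 50)) > 0.998                     # sparse simulated "reports"
prob = neighborhood_probability(members, radius_km=40.0, grid_spacing_km=4.0)
print("Peak neighborhood probability:", prob.max())
```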


Figure 2. Ariel Cohen (left), from NOAA’s Storm Prediction Center, was among the participants in the 2015 Spring Forecast Experiment. Image credit: NOAA.

The task facing all this talent was to see how much value they could add to an automated short-term outlook derived from the ensembles. Of course, there’s not nearly enough time to scrutinize every model run. “We could look at individual ensemble members, but that gets a little cumbersome,” said Greg Carbin, warning coordination meteorologist at SPC. “The more important questions are: What’s the model spread? Where do the models agree and disagree? What’s the character of storms within the ensembles? We issued forecasts based on all this information, then determined how well the forecasts verified.”


Figure 3. An experimental forecast, issued seven hours in advance, for the likelihood of any type of severe weather in a one-hour period (6:00 - 7:00 pm on May 19, 2015), based on probabilities generated by model ensembles. Colored circles show the percentage likelihood of at least one severe report within a 25-mile radius of any point. Colored icons show the actual severe weather that occurred: red = tornado, blue = severe wind, green = severe hail, and green triangle = “significant” severe hail, or at least 2” in diameter. Image credit: Greg Carbin, NOAA/SPC.


Carbin saw encouraging signs this spring in the ensembles’ ability to provide insight into storm mode, such as whether a day will feature potentially tornadic supercells. On May 19, forecasters used the model output to place parts of north Texas in a 2-to-5 percent tornado risk (the odds that a tornado would occur in the next hour within 25 miles of a given point). “This was a day with some uncertainty in tornado potential, especially south of the Red River,” said Carbin. “There was a robust signal in the ensemble data that short-track, but intense, rotating storms were likely. Our experimental forecast for total severe threat, based almost entirely on the ensemble information, verified very well.” (See Figure 3, above, for an example.)

James Correia (NOAA), SPC’s liaison to the testbed, also came away from this spring’s test with some cautious optimism. “As in years past with multiple ensembles, we always get multiple answers. I fully expected to get 60+ answers from 60+ members. I think we learned, again, that we need to go beyond probability to really hear what the ensembles are telling us.” For example, the high-resolution models often produce high values of updraft helicity, an indicator of storm rotation. But there aren’t enough fine-scale observations to confirm that storms are in fact producing that much helicity. In this sense, said Correia, “the ensembles are showing us what’s possible but not necessarily probable.”
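Updraft helicity itself has a simple definition: vertical velocity multiplied by vertical vorticity, integrated through a layer, typically 2-5 km above ground. Here's a minimal sketch of that calculation for a single model column; the sample profile is invented, and operational models do this on their own native levels.

```python
import numpy as np

def updraft_helicity(w, zeta, z, z_bottom=2000.0, z_top=5000.0):
    """Layer-integrated updraft helicity (m^2/s^2): the integral of w * zeta
    over heights from z_bottom to z_top.

    w    : vertical velocity (m/s) at heights z
    zeta : vertical vorticity (1/s) at heights z
    z    : heights above ground level (m), increasing
    """
    mask = (z >= z_bottom) & (z <= z_top)
    f, zz = w[mask] * zeta[mask], z[mask]
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zz)))  # trapezoid rule

# Invented example column: a strong rotating updraft centered near 3.5-4 km.
z = np.arange(0.0, 10001.0, 250.0)
w = 30.0 * np.exp(-((z - 4000.0) / 2500.0) ** 2)     # updraft peaking near 30 m/s
zeta = 0.01 * np.exp(-((z - 3500.0) / 2000.0) ** 2)  # rotation peaking near 0.01 /s
print(f"2-5 km updraft helicity: {updraft_helicity(w, zeta, z):.0f} m^2/s^2")
# Larger values flag stronger, more persistent rotation in the model's updrafts.
```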

Along with providing more confidence and lead time on the biggest, most dangerous outbreaks, ensembles may help get a handle on what some meteorologists call “mesoscale accidents.” This informal term refers to localized severe events that develop against the grain of mesoscale conditions that seem to be unfavorable for a significant event. “Mesoscale accidents are common in at least one or two members of an ensemble and can give forecasters a heads-up that something 'unexpected' has a small, but non-negligible, chance of occurring,” Correia said. “Knowing when and how to trust such a signal or classify it as noise is a challenge.” Getting familiar with the quirks of each model is a crucial step, but many models are so fresh on the scene that their idiosyncrasies aren't yet fully known.

MPAS: The future of multiday storm modeling?
Along with drawing on a new wealth of same-day model guidance, forecasters at the 2015 Spring Forecasting Experiment also test-drove output from a newly configured model that provides what was once thought to be either pointless or impossible: explicit modeling of showers and thunderstorms up to five days in advance.

The Model for Prediction across Scales (MPAS) is being developed by the National Center for Atmospheric Research (atmospheric component) and Los Alamos National Laboratory (ocean component). As its name implies, MPAS can operate on a variety of scales in both space and time. It uses an innovative grid: instead of the standard array of cells carved out by latitude and longitude, MPAS employs a hexagon-based grid called an unstructured Voronoi mesh (think of the pattern on a soccer ball). This eliminates problems like the narrowing of grid cells closer to the poles. The MPAS grid also allows for a near-seamless tightening of resolution where it’s most desired, such as over the deep tropics to depict hurricane development.


Figure 4. The honeycomb-like structure of MPAS (left) eliminates many of the challenges of model grids based on latitude and longitude. Each day in May 2015, MPAS was run with a grid-cell spacing of 3 km across most of North America (right; 3-km cells lie within the 4-km contour), with the resolution tapering off at greater distances from the continent. Forecasts were continued with a slightly different configuration for the PECAN field experiment in June and early July. Image credits: MPAS/Bill Skamarock, NCAR.
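The Voronoi idea is easy to demonstrate on a flat plane. The sketch below is my own illustration rather than MPAS code (MPAS builds a centroidal Voronoi tessellation on the sphere): scatter cell centers densely inside a "refinement" circle, more sparsely outside it, and let the Voronoi construction carve out the cells.

```python
import numpy as np
from scipy.spatial import Voronoi, cKDTree

rng = np.random.default_rng(2)

def sample_in_disk(n, radius):
    """n random points in a disk of the given radius, centered at the origin."""
    r = radius * np.sqrt(rng.random(n))
    theta = 2.0 * np.pi * rng.random(n)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

# Made-up counts and radii: dense cell centers inside a unit "refinement" circle,
# sparser ones in the surrounding region (MPAS varies the density smoothly).
fine = sample_in_disk(2000, radius=1.0)
coarse = sample_in_disk(800, radius=4.0)
coarse = coarse[np.hypot(coarse[:, 0], coarse[:, 1]) > 1.0]   # keep only the annulus
points = np.vstack([fine, coarse])

vor = Voronoi(points)                    # polygonal cells, mostly hexagon-like

# Typical cell spacing = distance from each cell center to its nearest neighbor.
dist, _ = cKDTree(points).query(points, k=2)
spacing, inside = dist[:, 1], np.hypot(points[:, 0], points[:, 1]) < 1.0
print(f"{len(vor.point_region)} cells; median spacing inside refinement circle: "
      f"{np.median(spacing[inside]):.3f}, outside: {np.median(spacing[~inside]):.3f}")
```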


For the 2015 experiment, the atmospheric component of MPAS was run daily with its finest resolution, 3 kilometers (about 2 miles) between grid cells, across a circle centered on North America, surrounded by a concentric mesh of progressively lower resolution (see Figure 4). The result was a total of nearly 7 million grid cells covering the Northern Hemisphere. In the real world, each day’s convection shapes how the next day’s will evolve, so the point of this MPAS test wasn’t to determine exactly where a particular thunderstorm would be in 120 hours. Instead, the idea was to use MPAS’s skill at modeling larger-scale features to gauge what types of convection to expect over the next five days (squall lines, supercells, and so on) and where the heaviest activity might be focused.
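A quick back-of-envelope check, using only the numbers above plus the geometry of hexagonal cells (the arithmetic is mine, not the MPAS team's), shows what that tapering buys compared with blanketing the whole hemisphere at 3 km:

```python
import math

EARTH_RADIUS_KM = 6371.0
hemisphere_area = 2.0 * math.pi * EARTH_RADIUS_KM ** 2        # ~2.55e8 km^2

def hex_cell_area(spacing_km):
    """Area of one hexagonal cell when cell centers are spacing_km apart."""
    return (math.sqrt(3) / 2.0) * spacing_km ** 2

uniform_cells = hemisphere_area / hex_cell_area(3.0)
print(f"Uniform 3-km mesh over the hemisphere: ~{uniform_cells / 1e6:.0f} million cells")
print("MPAS variable mesh (quoted above):     ~7 million cells")
# Letting the cells grow away from North America trims the count by roughly
# a factor of four to five relative to uniform 3-km coverage.
```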

Like any fine-scale model, MPAS includes much more realistic topography and land use than that found in a traditional, coarser model. Also, MPAS appears to capture the diurnal cycle of convection especially well, which is vital for multiday prediction. In a couple of cases, MPAS gave several days’ notice of where a large tornadic supercell was likely to emerge and track. (See Figure 5.)


Figure 5. Five days in advance, MPAS predicted that a band of strong updraft helicity (an index of potential storm rotation and severe weather) would stretch from around Wichita Falls, TX, to near Tulsa, OK, on May 16, 2015 (left). A supercell ended up producing multiple tornadoes and very large hail close to that track (right). A second area of severe weather predicted for south-central Kansas ended up smaller and farther to the northeast. Image credits: NCAR (left), NOAA/SPC (right).


According to Bill Skamarock, who leads the MPAS project at NCAR, this appears to be the first time that any model on Earth has carried out such high-resolution forecasting of showers and thunderstorms out to five days on a daily basis. “We have been pleasantly surprised as to how consistent, plausible, and even correct the longer-term forecasts from MPAS have been,” said Louis Wicker (NOAA National Severe Storms Laboratory). Wicker and Skamarock are collaborating with Joseph Klemp (NCAR), Steven Cavallo (University of Oklahoma), and Adam Clark (NOAA) on post-experiment analysis. One test is to see how the MPAS results compare to the predictions from a traditional large-scale model (GFS) with an embedded finer-scale model (WRF).

“What an opportunity to see what a global model can do at convection-allowing scales!” said NOAA’s James Correia. "We learned that convection is at the beck and call of the larger-scale features…no surprise there. But to see a model predict mesoscale convective systems a few days out and be 'close' is always very encouraging.”

According to Skamarock, much work remains to be done on how best to assimilate radar, satellite, and other data into the starting-point analyses of each MPAS run. Depending on the forecast goal, it’s an open question whether it makes more sense to put resources into an MPAS-like system versus a full set of shorter-range, convection-allowing models. But MPAS may be just the first of its kind. Skamarock pointed to the European Centre for Medium-Range Weather Forecasts (ECMWF), where global model resolution has been steadily rising in line with Moore’s Law. The current high-resolution version of the flagship ECMWF model covers the globe at 16-km resolution. If you extend ECMWF’s progress from the 1980s out to the year 2030, you end up with a global model boasting 2.5-km resolution. According to Skamarock, "We are not that far from the point where we can run global models at convection-resolving scales."
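The arithmetic behind that extrapolation is easy to check. The sketch below uses only the two grid spacings mentioned above, plus an assumed rule of thumb that the computational cost of a forecast grows roughly with the cube of the horizontal refinement (both horizontal directions plus a proportionally shorter time step):

```python
import math

dx_now, dx_2030 = 16.0, 2.5          # km: today's ECMWF spacing and the 2030 extrapolation
years = 2030 - 2015

refinement = dx_now / dx_2030                    # ~6.4x finer grid spacing
cost_factor = refinement ** 3                    # assumed scaling: x, y, and time step
doubling_time = years * math.log(2.0) / math.log(cost_factor)

print(f"Grid spacing shrinks by a factor of {refinement:.1f}")
print(f"Implied compute increase: ~{cost_factor:.0f}x over {years} years")
print(f"That is one doubling of compute every ~{doubling_time:.1f} years")
# Roughly a doubling every two years, in line with the Moore's Law pacing noted above.
```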

Bob Henson


Mesoscale Forecasting

The views of the author are his/her own and do not necessarily represent the position of The Weather Company or its parent, IBM.