Numerical weather prediction (NWP) emerged in the 1920s. Today, weather forecasts are pervasive and support economic planning in key industries, including transportation, logistics, agriculture, and energy production, and accurate forecasts that warn of severe weather in advance have saved numerous lives. Forecast quality has improved steadily over the past few decades. In 1922, Lewis Fry Richardson used a slide rule and a table of logarithms to calculate the first dynamically modelled numerical weather prediction for a single location; it took him six weeks to produce a 6-hour forecast of the atmosphere. By the 1950s, early electronic computers had increased forecasting speed dramatically, enabling operational predictions to be computed fast enough to be useful in real time.
Beyond increased computational power, improvements in weather forecasting have come from better parameterisation of small-scale phenomena, driven by a deeper understanding of their physics, and from better atmospheric observations. The latter, through data assimilation, has led to better model initializations. Data-driven Deep Learning (DL) models are becoming increasingly popular for weather forecasting because their computational costs are orders of magnitude lower than those of cutting-edge NWP models. Several research efforts have focused on building data-driven models of the large-scale circulation of the atmosphere, trained on outputs of climate models and general circulation models (GCMs), on reanalysis products, or on a combination of the two.
Data-driven models offer significant potential to enhance weather forecasts: they avoid model biases prevalent in NWP models and enable the production of large ensembles for probabilistic forecasting and data assimilation at low computational cost. By training on reanalysis data or observations, data-driven models can get around constraints of NWP models, including biases in convection parameterization schemes that significantly impact precipitation forecasts. Once trained, data-driven models generate forecasts via inference orders of magnitude faster than typical NWP models, allowing very large ensembles to be produced. In this context, researchers have demonstrated that in subseasonal-to-seasonal (S2S) forecasts, large data-driven ensembles outperform operational NWP models, which can afford only a limited number of ensemble members.
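To make the ensemble idea concrete, here is a minimal Python sketch of how a trained data-driven model could generate a large ensemble by perturbing its initial conditions and rolling each member forward autoregressively. `model_step`, the perturbation scale, and all other names are illustrative placeholders, not code from the FourCastNet paper.

```python
import numpy as np

def model_step(state: np.ndarray) -> np.ndarray:
    # Placeholder for one autoregressive inference step of a trained
    # network (FourCastNet advances the atmospheric state 6 h per step).
    return state

def run_ensemble(initial_state, n_members=8, n_steps=4, noise_std=1e-3, seed=0):
    """Perturb the initial condition and roll each member forward."""
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        state = initial_state + noise_std * rng.standard_normal(initial_state.shape)
        for _ in range(n_steps):          # e.g. 4 steps x 6 h = 24 h lead time
            state = model_step(state)
        members.append(state)
    members = np.stack(members)           # (n_members, ...grid)
    return members.mean(axis=0), members.std(axis=0)  # forecast + spread

# Toy example: a 720 x 1440 field, the grid size at 0.25-degree resolution.
mean_fc, spread = run_ensemble(np.zeros((720, 1440)))
```

Because each member is just one more cheap inference pass, scaling `n_members` into the hundreds or thousands is mainly a memory and throughput question, which is what makes very large data-driven ensembles feasible.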
Additionally, a sizable ensemble supports data-driven prediction of extreme weather occurrences at both short and long lead times. However, most data-driven weather models are trained on low-resolution data, often at 5.625° or 2° resolution. Forecasting some broad, large-scale atmospheric variables at low resolution has been successful in the past, but the coarsening process discards important fine-scale physical information. To be genuinely effective, data-driven models must provide forecasts at the same or better resolution than the most recent state-of-the-art numerical weather models, which run at 0.1° resolution. For example, estimates at 5.625° spatial resolution provide a meager 32×64-pixel grid representing the entire globe.
A prediction like this cannot distinguish features smaller than roughly 500 km. Such imprecise projections do not account for the significant impact of small-scale dynamics on large scales, nor for the influence of topographic features like mountain ranges and lakes on small-scale dynamics. As a result, low-resolution predictions are of limited use. While low-resolution forecasts may be justified for variables like the geopotential height at 500 hPa (Z500), which has little small-scale structure, high-resolution data (e.g., at 0.25° resolution) can significantly improve data-driven predictions of variables with complex fine-scale structure, such as the low-level winds (U10 and V10).
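These grid sizes follow directly from the angular resolution. The back-of-the-envelope Python snippet below reproduces the numbers quoted above, assuming a regular latitude-longitude grid; the kilometre figure is the longitudinal spacing at the equator.

```python
# Grid dimensions and approximate pixel spacing implied by a given
# angular resolution on a regular latitude-longitude grid.
EARTH_CIRCUMFERENCE_KM = 40_075

for res_deg in (5.625, 0.25):
    n_lat = int(180 / res_deg)                   # 32 at 5.625 deg, 720 at 0.25 deg
    n_lon = int(360 / res_deg)                   # 64 at 5.625 deg, 1440 at 0.25 deg
    km_per_px = EARTH_CIRCUMFERENCE_KM / n_lon   # spacing at the equator
    print(f"{res_deg:>6} deg -> {n_lat} x {n_lon} grid, ~{km_per_px:.0f} km per pixel")
```

At 5.625° each pixel spans roughly 600 km at the equator, which is why sub-500 km features are unresolvable, whereas 0.25° gives the ~30 km spacing quoted below.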
Furthermore, a coarse grid cannot accurately depict the formation and behaviour of high-impact severe events like tropical cyclones; high-resolution models can. To address this, researchers from NVIDIA Corporation, Lawrence Berkeley National Laboratory, Rice University, the University of Michigan, the California Institute of Technology, and Purdue University created FourCastNet, a Fourier-based neural network forecasting model that produces global data-driven forecasts of important atmospheric variables at 0.25° resolution, or roughly 30 km near the equator, on a global grid of 720×1440 pixels. This enables the researchers, for the first time, to compare their results directly with those obtained by the ECMWF's high-resolution Integrated Forecasting System (IFS) model.
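As a rough illustration of what "Fourier-based" means here, the sketch below implements a single FNO-style spectral mixing layer in PyTorch: the field is transformed to the frequency domain, a truncated set of low-frequency modes is scaled by learned complex weights, and the result is transformed back to the spatial grid. FourCastNet itself uses the more elaborate Adaptive Fourier Neural Operator (AFNO) architecture; this simplified layer only conveys the core idea and is not the authors' implementation.

```python
import torch
import torch.nn as nn

class SpectralMix2d(nn.Module):
    """Illustrative FNO-style layer: mix a truncated set of Fourier modes
    with learned complex weights. Global in space, cheap via the FFT."""

    def __init__(self, channels: int, modes_h: int = 16, modes_w: int = 16):
        super().__init__()
        scale = 1.0 / channels
        # Learned complex weights for the retained low-frequency modes.
        self.weights = nn.Parameter(
            scale * torch.randn(channels, modes_h, modes_w, dtype=torch.cfloat)
        )
        self.modes_h, self.modes_w = modes_h, modes_w

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        x_ft = torch.fft.rfft2(x)                         # to frequency domain
        out_ft = torch.zeros_like(x_ft)
        mh, mw = self.modes_h, self.modes_w
        out_ft[:, :, :mh, :mw] = x_ft[:, :, :mh, :mw] * self.weights
        return torch.fft.irfft2(out_ft, s=x.shape[-2:])   # back to grid space

# Toy usage: 20 atmospheric channels, reduced grid for a quick test.
layer = SpectralMix2d(channels=20)
field = torch.randn(1, 20, 72, 144)
print(layer(field).shape)  # torch.Size([1, 20, 72, 144])
```

Operating in the Fourier domain lets each layer couple every grid point to every other at FFT cost, which suits global atmospheric fields where distant regions interact.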
Figure 1 illustrates a global near-surface wind speed forecast with a 96-hour lead time. The authors highlight significant high-resolution features that their prediction resolves and reliably tracks, such as Super Typhoon Mangkhut and three named cyclones (Florence, Isaac, and Helene) moving towards the eastern coast of the United States.
In conclusion, FourCastNet offers four novel improvements to data-driven weather forecasting:
1. FourCastNet accurately forecasts challenging variables like surface winds and precipitation at lead times of up to one week. Global surface wind forecasting had not previously been attempted with deep learning (DL) models, and earlier global DL models for precipitation could not resolve small-scale features. This has significant implications for wind energy resource planning and disaster mitigation.
2. FourCastNet offers eight times higher resolution than previous state-of-the-art DL-based global weather models. Thanks to its high resolution and accuracy, FourCastNet resolves severe occurrences like tropical cyclones and atmospheric rivers, which earlier DL models represented poorly due to their coarser grids.
3. At lead times of up to three days, FourCastNet's predictions are comparable to those of the IFS model on metrics such as Root Mean Squared Error (RMSE) and Anomaly Correlation Coefficient (ACC); a sketch of both metrics follows this list. At lead times of up to a week, its projections of all modelled variables lag behind those of IFS by a significant margin. Whereas the IFS model has been built over decades, comprises more than 150 variables at more than 50 vertical levels in the atmosphere, and is governed by physics, FourCastNet models 20 variables at five vertical levels and is driven purely by data. This contrast demonstrates the immense potential of data-driven modelling to supplement, and someday replace, NWP.
4. Compared to current NWP ensembles, which have at most about 50 members due to their high computational cost, FourCastNet's reliable, fast, and computationally affordable forecasts enable the generation of very large ensembles, allowing well-calibrated and tightly constrained uncertainties in extremes to be estimated with higher confidence. The rapid generation of 1,000-member ensembles drastically changes what is achievable in probabilistic weather forecasting, improving the accuracy of early warnings of extreme weather occurrences and making it possible to evaluate their effects rapidly.
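For reference, the RMSE and ACC mentioned in point 3 are typically computed with latitude weighting on a global grid. The sketch below follows the standard definitions (cosine-latitude weights, anomalies taken relative to a climatology); it is an illustration of those conventions, not code from the FourCastNet paper.

```python
import numpy as np

def lat_weights(n_lat: int) -> np.ndarray:
    """Cosine-latitude weights, normalized to mean 1, shape (n_lat, 1)."""
    lats = np.linspace(90, -90, n_lat)
    w = np.cos(np.deg2rad(lats))
    return (w / w.mean())[:, None]

def rmse(forecast, truth):
    """Latitude-weighted Root Mean Squared Error."""
    w = lat_weights(forecast.shape[0])
    return np.sqrt(np.mean(w * (forecast - truth) ** 2))

def acc(forecast, truth, climatology):
    """Anomaly Correlation Coefficient: correlation of forecast and
    observed anomalies relative to climatology (1 = perfect skill)."""
    w = lat_weights(forecast.shape[0])
    fa, ta = forecast - climatology, truth - climatology
    num = np.sum(w * fa * ta)
    den = np.sqrt(np.sum(w * fa**2) * np.sum(w * ta**2))
    return num / den

# Toy check on random 720 x 1440 fields (ACC near 0 for unrelated fields).
f, t, c = (np.random.rand(720, 1440) for _ in range(3))
print(rmse(f, t), acc(f, t, c))
```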
Check out the Paper. All credit for this research goes to the researchers on this project.
Aneesh Tickoo is a consulting intern at MarktechPost. He is currently pursuing his undergraduate degree in Data Science and Artificial Intelligence at the Indian Institute of Technology (IIT), Bhilai. He spends most of his time working on projects aimed at harnessing the power of machine learning. His research interest is image processing, and he is passionate about building solutions around it. He loves to connect with people and collaborate on interesting projects.