Fluid Dynamics

Doghouse – Weather Forecast

Article from Issue 243/2021
Author(s): Jon "maddog" Hall

A recent rocket launch has maddog thinking about high performance computing and accurate weather forecasts.

Recently SpaceX, working with NASA, sent a capsule carrying four astronauts to the International Space Station. The launch and flight were flawless.

I have been watching space flights since 1961. I still remember all of the students at my elementary school packing into the auditorium, with the school wheeling in its large black-and-white TV on a cart (one of the few "portable TVs" in those days) to watch the rocket launch.

That is, assuming the rocket did take off, because in those days it was very likely that a storm would move in and the countdown would be delayed, or even canceled. The act of fueling the rocket and putting the astronaut (first one, later multiple astronauts) into the capsule had to be planned and executed far in advance of liftoff, and Cape Canaveral (as it was called in those days) was very likely to have bad weather.

We have known for a long time how to predict the weather 24 hours in advance with 100-percent accuracy. The problem was that it took 48 hours to gather and process the data, so we could tell you precisely what the weather had been 24 hours in the past.

Weather forecasting is a problem of "fluid dynamics," and this is a common class of problem in many of the things we need to compute. Heat flow, turbine efficiency, virus spread, airplane and car design, you name it … they are all "fluid dynamics." Even objects we consider "solid" have properties that can be computed with fluid dynamics techniques.
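To give a feel for what such a computation looks like, here is a minimal sketch of my own (purely illustrative, with a made-up grid size and coefficient, not taken from any forecasting code) that steps a one-dimensional heat-diffusion problem forward in time with a simple finite-difference stencil. Real weather models do the same sort of thing in three dimensions, with far more physics and vastly larger grids:

#include <stdio.h>

#define N     100    /* number of grid cells (arbitrary for this sketch) */
#define STEPS 1000   /* number of time steps */

int main(void)
{
    double t[N], t_new[N];
    double alpha = 0.1;  /* diffusion coefficient * dt / dx^2; <= 0.5 keeps this scheme stable */

    /* start with a hot spot in the middle of a cold rod */
    for (int i = 0; i < N; i++)
        t[i] = (i == N / 2) ? 100.0 : 0.0;

    for (int s = 0; s < STEPS; s++) {
        /* each new cell value depends only on its immediate neighbors --
           this locality is what makes such problems easy to split
           across many processors */
        for (int i = 1; i < N - 1; i++)
            t_new[i] = t[i] + alpha * (t[i - 1] - 2.0 * t[i] + t[i + 1]);
        t_new[0] = t[0];
        t_new[N - 1] = t[N - 1];
        for (int i = 0; i < N; i++)
            t[i] = t_new[i];
    }

    printf("temperature at center after %d steps: %f\n", STEPS, t[N / 2]);
    return 0;
}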

Before 1994 these types of problems were tackled by "supercomputers," machines and operating systems designed by companies like Cray, ECL, CDC, IBM, and others. These companies would spend many, many millions of dollars developing these supercomputers, using state-of-the-art technology, and then produce a limited number of machines that might hold the title of "world's fastest" for a couple of years until the next one came along. Often the development costs were not covered by the profits from the sales of machines and service, and some of these companies (or at least their supercomputer divisions) ended up going out of business.

Then two people at NASA, Dr. Thomas Sterling and Donald Becker, developed a concept they called "Beowulf" systems, which used commodity computer components to solve these compute-intensive problems by dividing them into highly parallel tasks. Roughly speaking, you could get about the same computing power from a system like this as you would from a "supercomputer" that cost 40 times more. In addition, since these commodity-based systems used an Open Source operating system (typically GNU/Linux-based), the programmers who worked on fluid dynamics problems already knew how to program them, with the addition of a few libraries – Parallel Virtual Machine (PVM), Message Passing Interface (MPI), and OpenMP for shared-memory parallelism – as well as standard methods of breaking apart large programs (decomposition, thread programming, and parallelism).
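To show the flavor of message-passing programming on such a cluster, here is a minimal MPI sketch in C. It is my own illustration, not code from the Beowulf project, and the work being summed is just a stand-in: each process handles its own slice of a large index range, and a single MPI_Reduce call combines the partial results on rank 0.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, nprocs;
    const long total = 100000000L;   /* size of the whole (made-up) problem */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* decomposition: each rank takes an equal chunk of the index range */
    long chunk = total / nprocs;
    long start = rank * chunk;
    long end   = (rank == nprocs - 1) ? total : start + chunk;

    double local = 0.0;
    for (long i = start; i < end; i++)
        local += 1.0 / (double)(i + 1);   /* stand-in for real work */

    /* message passing: combine all partial sums onto rank 0 */
    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("result computed by %d processes: %f\n", nprocs, global);

    MPI_Finalize();
    return 0;
}

With an MPI implementation such as Open MPI or MPICH installed, this compiles with mpicc and runs across the nodes of a cluster with something like mpirun -np 4 ./example.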

The Beowulf concept – now called High Performance Computing (HPC) and used on the world's 500 fastest computers – also allowed people to experiment with better compilers and systems on relatively inexpensive hardware. Systems purchased for some large problem could be disassembled and repurposed for other tasks when the large problem was finished.

Early systems included Oak Ridge National Laboratory's "Stone Soupercomputer" (named after the "stone soup" fable), made of 133 cast-off desktop systems connected with simple Ethernet for communication between boxes. Implemented by William Hargrove and Forrest Hoffman in response to not receiving funding for a traditional supercomputer, the Stone Soupercomputer helped to solve several real-world problems, as well as acting as a system for developing new HPC applications.

Over time, various distributions specializing in this type of computing (Rocks and OSCAR were two) came out of the various national laboratories, making it easier to set up your own high-performance cluster, even using Raspberry Pis as the hardware.

Over the years, the time needed to gather and process the data to forecast the weather 24 hours in advance, with 100-percent accuracy, dropped from 48 hours to 24 hours (stick your hand out the window) to 12 hours, and eventually we rarely had to suspend a NASA launch because of weather. On a broader scale, these calculations also helped predict the weather for sporting events, weddings, agriculture, and other concerns.

Today the same technology is being used to calculate the damage from a hurricane versus what would have happened if the environment had been one or two degrees cooler, which is important in the age of climate change. Weather forecasters are showing that a temperature difference of a few degrees can mean a rainfall difference of 10 or 15 percent. In a 12-inch rainfall during a hurricane, that is one to two inches of additional water, enough to make the difference in whether your home or business will be flooded.

The Author

Jon "maddog" Hall is an author, educator, computer scientist, and free software pioneer who has been a passionate advocate for Linux since 1994 when he first met Linus Torvalds and facilitated the port of Linux to a 64-bit system. He serves as president of Linux International®.
