Cyclone Center Paper published

A paper on Cyclone Center has been published in the American Meteorological Society’s journal Monthly Weather Review. It will appear in print in the October issue and is also available online. This blog post is the first in a series explaining some of the paper’s important results, in particular: what happens when scientists don’t agree on an image.

You mean scientists disagree?

Yes. It is very rare that all the classifications for an image are unanimous … like about 1% of the time. So we needed to figure out what to do the rest of the time.

The exciting part is that the paper shows that it is possible to learn how Cyclone Center citizen scientists – or classifiers – classify different images, and to use that to better understand the characteristics of the storm. We do more than just take an average of all the classifications. We use statistics to understand the tendencies of each classifier. This lets us assign the most probable answer based not only on what a classifier selects for a given image, but also on how they have answered other questions. For instance, see the following two images.

[Image: cc-ward-wilma – satellite images of Hurricane Ward (1995, left) and Hurricane Wilma (2005, right)]

Both are Eye storms – Hurricane Ward (1995) on the left and Hurricane Wilma (2005) on the right. For Hurricane Ward, only 4 of 13 classifiers classified this as an Eye image. However, one of those classifiers was someone who did really well (that is, agreed with the algorithm) on lots of other images. So even though the majority effectively said “Not an eye image,” the algorithm determined that this is an eye storm.

Conversely, 11 of 14 classifiers said the Wilma image was an eye. In this case, one of the three who selected a different storm type (Not an eye) had made the same mistake on many other systems. So the algorithm weighted their “Not an eye” selection less and trusted the others.
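
To make the idea concrete, here is a minimal sketch (in Python) of a reliability-weighted vote. It is not the paper’s actual code: the classifier IDs, weights, and vote counts are invented, and they only mimic the Hurricane Ward situation in spirit – a minority of classifiers who have been consistent elsewhere can outweigh the majority.

```python
# Minimal sketch (not the paper's code): a reliability-weighted vote.
# Classifier IDs, weights, and votes below are made up for illustration.

def weighted_vote(votes, reliability):
    """Combine classifier votes, weighting each by an estimated reliability.

    votes       -- dict mapping classifier id -> selected storm type
    reliability -- dict mapping classifier id -> weight in [0, 1]
    """
    scores = {}
    for classifier, storm_type in votes.items():
        weight = reliability.get(classifier, 0.5)  # default for an unknown classifier
        scores[storm_type] = scores.get(storm_type, 0.0) + weight
    return max(scores, key=scores.get)

# Hypothetical Ward-like image: only 4 of 13 say "Eye", but those 4 tend to
# agree with the consensus on other images, so their votes carry more weight.
votes = {f"c{i}": "No Eye" for i in range(9)}
votes.update({f"c{i}": "Eye" for i in range(9, 13)})
reliability = {f"c{i}": 0.3 for i in range(9)}            # less consistent classifiers
reliability.update({f"c{i}": 0.9 for i in range(9, 13)})  # more consistent classifiers

print(weighted_vote(votes, reliability))  # -> "Eye", despite being the minority pick
```

The real algorithm estimates those weights from the classifications themselves rather than assigning them by hand, but the basic intuition is the same.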

But does the algorithm really work?

Yes. We showed that the algorithm recognizes classifiers who consistently classify similar images the same way. In this way, it can take differing selections and estimate a consistent storm type. Here’s a set of classifications from the lifetime of Hurricane Katrina (2005):

[Image: cc-katrina – map of Hurricane Katrina’s track showing the EM-selected storm types, above a graph of all citizen scientist classifications]

The bottom graph shows all the classifications for Katrina from all citizen scientists as the proportion of storm types selected. You can see that there is quite a mixture of classifications. At some times there is lots of agreement, and at others not so much. The selection made by the algorithm (called the Expectation-Maximization, or EM, algorithm) is the bar just above the graph. The result is a more consistent selection of the storm type. These selections are plotted along the path of Katrina in the map above.

It is really encouraging that the EM algorithm can take the numerous classifications – that don’t always agree – and select the most probable type! What is really surprising is that the algorithm does not use any information about the storm type for previous images! So the fact that the EM algorithm output shows consistency through time is great!
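
For readers who want to see how this kind of aggregation works under the hood, here is a compact, illustrative sketch of a Dawid-Skene-style EM procedure in Python. This is not the code used in the paper, and the storm-type categories, the label matrix, and all the numbers are invented. It just shows the basic loop: estimate the most probable storm type for each image, use that to estimate how reliable each classifier is, and repeat.

```python
# Illustrative Dawid-Skene-style EM aggregation (not the paper's code).
import numpy as np

def em_aggregate(label_matrix, n_classes, n_iter=50):
    """label_matrix: (n_images, n_classifiers) ints in [0, n_classes); -1 marks a missing label."""
    n_images, n_classifiers = label_matrix.shape

    # Start with simple vote proportions as the posterior over true storm types.
    post = np.zeros((n_images, n_classes))
    for i in range(n_images):
        for j in range(n_classifiers):
            if label_matrix[i, j] >= 0:
                post[i, label_matrix[i, j]] += 1
    post /= post.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # M-step: class priors and a confusion matrix for each classifier.
        prior = np.clip(post.mean(axis=0), 1e-6, None)
        confusion = np.full((n_classifiers, n_classes, n_classes), 1e-6)
        for j in range(n_classifiers):
            for i in range(n_images):
                if label_matrix[i, j] >= 0:
                    confusion[j, :, label_matrix[i, j]] += post[i]
            confusion[j] /= confusion[j].sum(axis=1, keepdims=True)

        # E-step: recompute the posterior storm type for each image.
        log_post = np.tile(np.log(prior), (n_images, 1))
        for i in range(n_images):
            for j in range(n_classifiers):
                if label_matrix[i, j] >= 0:
                    log_post[i] += np.log(confusion[j, :, label_matrix[i, j]])
        post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)

    return post.argmax(axis=1)  # most probable storm type for each image

# Invented example: 3 storm types (0 = No Eye, 1 = Eye, 2 = Embedded Center),
# 5 images classified by 4 classifiers; -1 means that classifier skipped the image.
labels = np.array([
    [1, 1, 0, 1],
    [0, 0, 0, 1],
    [1, 0, -1, 1],
    [2, 2, 2, 2],
    [0, 0, 1, 0],
])
print(em_aggregate(labels, n_classes=3))  # prints the EM-selected storm type per image
```

Notice that each image is treated on its own: the loop never looks at the image before or after it, which is why the consistency of the EM selections through Katrina’s lifetime is such an encouraging result.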

tl;dr

OK. So that was long. Here’s the summary.

We need to understand how to get the best selection when classifiers choose different answers. We developed a statistical algorithm that combines the many selections from classifiers into a consistent result. Identifying the storm type is a key step in determining the storm’s intensity, as will be shown in the next post.


About K Knapp

I am a meteorologist at NOAA’s National Climatic Data Center in Asheville, NC. My research interests include using satellite data to observe hurricanes, clouds, and other climate variables. *******Disclaimer******* The opinions expressed in these blogs are mine only. They do not necessarily reflect the official views or policies of NOAA, Department of Commerce, or the US Government.
