Cyclone Center Citizen Scientists Contribute To Article In Top Meteorology Journal

Our first major publication appeared online the week of September 8 (link at the end of the post) in the #1 journal for meteorology papers, the Bulletin of the American Meteorological Society.  We have been working with nearly 300,000 classifications from over 5,000 of our valuable citizen scientists over the past year (we now have over 365,000 classifications from 7,400 registered users).  Our primary goal was to assess how well Cyclone Center is working and whether it can lead to even more valuable results down the road.  The answer is a resounding YES!  We’d like to share a couple of those results with you now so you can see how your work has contributed to the project so far – and hopefully inspire you to do more!

Result #1 – Citizen Scientists are Skilled in Classifying Tropical Cyclone Intensity

The first question we wanted to evaluate was how well our classifiers, most of whom are not tropical cyclone experts, were doing with their classifications.  This is a bit tricky to test, since we don't know what the "right" answers are (otherwise, we wouldn't need to do Cyclone Center in the first place!).  So we had to pick a few storms where we had a pretty good idea what the wind speeds were: storms that were measured directly by specially equipped "hurricane hunter" aircraft.  We call this set of data "Best-Track/Recon".

So we counted how often your average classification for each storm time (remember that we try to get at least 10 unique classifications for each image) fell within each range of wind speeds, and compared those counts to the Best-Track/Recon data as well as to an automated computer program that estimates wind speed without human input ("ADT-HURSAT").  Ideally, we would like our citizen science classifications (red) to match very closely with the recon data (green).  Here's what the results look like:
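The comparison described above can be sketched in a few lines of Python. This is only an illustration: the classification values, the choice of the median as the consensus statistic, and the bin edges are all invented here, and the paper's actual method may differ.

```python
# Sketch: form a consensus wind speed from ~10 citizen classifications
# per image, then bin the consensus values for comparison against a
# reference distribution (e.g. Best-Track/Recon). All values invented.
from statistics import median
from collections import Counter

# Hypothetical classifications (in knots) for two image times
classifications = {
    "img_001": [35, 40, 35, 45, 40, 40, 35, 40, 45, 40],
    "img_002": [70, 75, 75, 80, 70, 75, 75, 70, 80, 75],
}

BINS = [25, 35, 45, 55, 65, 75, 90, 110]  # illustrative bin edges (kt)

def to_bin(speed):
    """Return the largest bin edge not exceeding the speed."""
    eligible = [b for b in BINS if b <= speed]
    return eligible[-1] if eligible else BINS[0]

# Consensus per image time, then the distribution across bins
consensus = {img: median(vals) for img, vals in classifications.items()}
distribution = Counter(to_bin(v) for v in consensus.values())
print(consensus)      # {'img_001': 40, 'img_002': 75}
print(distribution)
```

A histogram of `distribution` against the same binning of the Best-Track/Recon winds is essentially what the figure below shows.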

Distribution of tropical cyclone wind speeds of Cyclone Center (CC Consensus) and a computer algorithm (ADT-HURSAT) compared to storms sampled by aircraft


For the most part, both the computer and Cyclone Center citizen scientists match up fairly well with the "best" observed storms.  This is quite encouraging, because at this point we are only harnessing a fraction of the power of your responses in calculating the best wind speed estimate.  Even more encouraging is what happens between the 55 kt (tropical storm) and 75 kt (mature tropical cyclone) wind speed bins.  This is the time in the evolution of the storm when an eye typically appears in the imagery, which makes a big difference in the estimated wind speed.  The computer algorithm has a difficult time identifying the emergence of an eye (too many 55 kt wind speeds but too few 75 kt classifications), but our classifiers did much better at classifying this particular phase of the storms (note the good agreement in the 75 kt wind speed bin).

Result #2 – Citizen Scientist Classifications Can Help Resolve Disagreements in Historical Tropical Cyclone Data

Now that we had determined that citizen scientists were doing a good job, we used their classifications to target some of the historical tropical cyclones that exhibit the biggest differences in wind speeds as determined by different forecast agencies.  Take a look at the figure below.


These figures are for Typhoon Yvette (1992).  The left panel shows how various forecast agencies diagnosed Yvette's maximum wind speed (grey lines) compared to the computer algorithm (magenta) and Cyclone Center (green).  Note that there are large differences (up to 70 kt!) between the experts.  The computer and Cyclone Center classifiers are pretty close for most of Yvette's life and support the conclusion that the upper grey line is probably the most accurate.  We can also see the "flattening" of the magenta line around day 6 while the other measures are all increasing in strength.  This is an example of the shortcoming in the computer algorithm described above, and it suggests that the human eye can be better at tropical cyclone pattern recognition than a computer in certain cases (our results showed that you agreed with the computer's cloud pattern most of the time).

The blue shading in the right panel shows the "spread" of Cyclone Center classifications at each time: the wider the blue area, the more disagreement in the classifications.  This is a very valuable result for us, as we can assign a level of confidence to our results based on citizen scientists' work.  Here we see higher confidence for the early portions of the storm and less as Yvette transitions to a typhoon.
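The spread-as-confidence idea above can be sketched simply: compute a dispersion statistic over the classifications at each time, and treat a narrow spread as agreement (higher confidence) and a wide spread as disagreement. The values below are invented, and the paper's exact spread measure may differ from the sample standard deviation used here.

```python
# Sketch: spread of citizen classifications as a confidence proxy.
# Hypothetical classifications (kt) at two times in a storm's life.
from statistics import stdev

early_storm = [30, 35, 30, 35, 30, 35, 30, 30, 35, 30]   # tight agreement
near_typhoon = [55, 75, 65, 90, 55, 80, 70, 60, 85, 65]  # wide disagreement

def spread(vals):
    """Sample standard deviation in knots; larger = less confident."""
    return stdev(vals)

print(f"early-storm spread: {spread(early_storm):.1f} kt")
print(f"typhoon-transition spread: {spread(near_typhoon):.1f} kt")
```

Plotting a band of width proportional to this spread around the consensus line gives something like the blue shading in the right panel.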

Closing Remarks

We are just scratching the surface here, and work is currently underway to examine your classifications in much deeper detail.  We could not have done this work without your contributions.  We would especially like to thank three of your colleagues and paper co-authors (bretarn, shocko61, Struck), who have put in hundreds if not thousands of hours of classifications.  Our work is far from over; although we are super-excited that Cyclone Center has been a success so far, we still need thousands more classifications to cover the entire 32-year period of our data.  Head on over to Cyclone Center now and get classifying.

If you are interested in reading the full version of the paper, you can find it here.

– Chris Hennon is part of the Cyclone Center Science Team and Associate Professor of Atmospheric Sciences at the University of North Carolina at Asheville


