We wanted to explain more about what happens behind the scenes after our awesome Notes from Nature volunteers do transcriptions or classifications. What do we do with the data, and how do we get it back to curators and other scientists at museums? One thing you may not know is that every label is transcribed by three different people. The idea is that more folks examining labels will lead to better results. For example, if two people enter Wisconsin for the state, and one person accidentally enters Wyoming, then we can assume Wisconsin is correct and that Wyoming was a mistake. We also know that some labels are tough to interpret, and sometimes a few different guesses can get closer to the right answer than just one.
This seems pretty easy, right? Well… it gets more complicated when we start working with free-text labels – those text boxes where you enter sentences and phrases from the label, things like locality information (“Route 46 next to a tree by the stop sign on 4th street”) or habitat data (“in a field”). How do we compare answers for these kinds of labels? What do we do with extra punctuation? Extra spaces? Extra words? Different words?
We have spent the last few months writing code that handles these kinds of situations. Essentially, we first look for labels that match; if none do, we select the best label we can from the set of answers. We have set up a series of decision rules to go through your answers. First, we ask whether two of the three answers are identical, including spaces and punctuation. If they match, we are done. If not, we remove extra spaces and punctuation, ignore capitalization, and ask again whether two of the three answers are identical. If so, we select the one with the most characters – with the idea of retaining as much information as possible.
These two labels would be found to match after removing punctuation and spaces and ignoring capitalization. Here we generally take the one with more characters, to include as much information as possible.
- Rd. 10 KM 24
- RD. 10. KM 24 ← this one gets selected (more characters)
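These first two rules are simple enough to sketch in a few lines of Python. This is just an illustration of the logic described above, not the actual Notes from Nature code (the function names and regular expression are ours):

```python
import re
from collections import Counter

def normalize(text):
    """Lowercase, replace punctuation with spaces, collapse whitespace."""
    text = re.sub(r"[^\w\s]", " ", text.lower())
    return " ".join(text.split())

def reconcile_simple(answers):
    """Rules 1 and 2: exact match first, then normalized match (keep longest)."""
    # Rule 1: do at least two answers match exactly, spaces and punctuation included?
    value, count = Counter(answers).most_common(1)[0]
    if count >= 2:
        return value
    # Rule 2: do at least two answers match after normalization?
    groups = {}
    for answer in answers:
        groups.setdefault(normalize(answer), []).append(answer)
    for variants in groups.values():
        if len(variants) >= 2:
            # Keep the variant with the most characters (more information)
            return max(variants, key=len)
    return None  # no match; fall through to the partial/fuzzy rules

print(reconcile_simple(["Rd. 10 KM 24", "RD. 10. KM 24", "route 10"]))
# → RD. 10. KM 24  (matches after normalization; kept for having more characters)
```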
At this next stage things get a little more complicated, and we use our decision rules to select the best answer we can from among the three. First we look for labels where all of the words from one are found in another – a partial-ratio match. If we find this, we take the label with the most words.
- North Fork of Salmon River at Deep Creek, by US-93
- North Fork of the Salmon River at Deep Creek, by US-93 ← partial-match selection (more words)
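A rough sketch of this word-containment rule – again just an illustration of the idea, not the real toolkit code:

```python
def reconcile_partial(answers):
    """Rule 3 sketch: if every word of one answer appears in another,
    keep the answer with more words."""
    for a in answers:
        for b in answers:
            if a is b:
                continue
            if set(a.lower().split()) <= set(b.lower().split()):
                # All of a's words are contained in b; keep the wordier label
                return max((a, b), key=lambda s: len(s.split()))
    return None  # no containment; fall through to fuzzy matching

short = "North Fork of Salmon River at Deep Creek, by US-93"
longer = "North Fork of the Salmon River at Deep Creek, by US-93"
print(reconcile_partial([short, longer]))
# → North Fork of the Salmon River at Deep Creek, by US-93
```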
Finally, we compare the answers using a ‘fuzzy matching’ scheme. Fuzzy matching looks for partial matches on words: for example, someone may have written ‘rd’ whereas someone else wrote ‘road’, and our fuzzy matching allows those to be considered the same. This strategy also tolerates slight misspellings. If we get a fuzzy match between two labels, we take the label with the most words, which ensures that we get the most data we can from these answers.
- County Line Road 2 mi E of airport ← fuzzy-match selection
- County Line Rd. 2 mi. E. of airport
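The real toolkit uses a dedicated fuzzy string matching library for this step; the same idea can be approximated with Python’s standard-library difflib. The 0.85 similarity threshold below is an arbitrary value for illustration, not the project’s actual setting:

```python
from difflib import SequenceMatcher

def reconcile_fuzzy(answers, threshold=0.85):
    """Final rule sketch: if two answers are similar enough overall
    (tolerating 'Rd.' vs 'Road' and small typos), keep the wordier one."""
    best = None
    for i, a in enumerate(answers):
        for b in answers[i + 1:]:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= threshold and (best is None or score > best[0]):
                # Of the two similar labels, keep the one with more words
                best = (score, max((a, b), key=lambda s: len(s.split())))
    return best[1] if best else None

a = "County Line Road 2 mi E of airport"
b = "County Line Rd. 2 mi. E. of airport"
print(reconcile_fuzzy([a, b]))
# → County Line Road 2 mi E of airport
```

When no rule produces a match at all, the record is flagged as a problem for the provider to review, as described below.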
The end result of all this is a reconciliation “toolkit”. We pass all transcripts from finished expeditions through this toolkit, and it delivers three products. The first is just the raw data. The second is a best-guess transcription based on the field-by-field reconciliation described above. The third is perhaps the most important: a summary of what we did and how we did it, as an .html file. The summary output is something we keep extending as we think of new things that providers might want to see. Here is an example from the New World Swallowtail Expedition, one of the more difficult expeditions we’ve launched.
More recently, we have added some new features, including information about how many transcriptions each transcriber completed (based on their Zooniverse login names) and a plot of how transcription “effort” is distributed across transcribers. The effort plot is very new, but we wanted to show whether most of the effort is done by a very few people or is spread more evenly across transcribers. Here is an example for a different expedition, “WeDigFLPlants’ Laurels of Florida”:
Finally, we give providers information about how each label was reconciled (whether there was an exact, partial, or fuzzy match). We do this so providers can go through the results and decide whether there are any they want to check. We also highlight any problem records: those for which we could not get a match, or those for which there was only one answer, so we had nothing to compare it against. Here is an example from one label. The areas in green are the three different answers, the top row is the ‘best guess’ reconciled record, and the gray row is information about how the reconciliation was done. For example, in the first column (Country), all three answers were Myanmar – and in gray it says we had an exact match with three answers. The cells in red are potential issues (in this case, only one answer was given).
The goal of all of this is to make it easy for providers to use these data right away. And we’ll note that this tool also gives us an overall look at transcription “success” rates, something we may come back to in future posts, because these numbers are striking and illustrate the high value of this effort.
– Julie Allen, Notes from Nature data scientist
A huge shout out to our volunteers for quick work on our first NFN Ideas project, which focused on oak phenology. We completed the expedition a week after launching it, with 1944 transcripts of 644 subjects. 53 awesome transcribers took part. A lot of discussion on Talk focused on some of the challenges of denoting flowers and fruits – it is harder than it first looks! So folks were interested in whether there was consistency among transcribers, and whether the results would be consistent with an expert assessment. We have some initial answers to those questions and more! And a note that ALL of these data – the label data and the phenological scoring – were done by Notes from Nature volunteers.
So to get right to it! Transcriber consistency on this expedition was absolutely remarkable. Well above 99%. Yeah. We were surprised, too. There were three cases where we didn’t get consistent results. Just 3! Out of 664 subjects. So apparently there was very strong agreement.
We took a closer look at the three that seemed to prove difficult.
- subject_id: 4308678 – http://www.sernecportal.org/portal/collections/individual/index.php?occid=11108535
- subject_id: 4308659 – http://www.sernecportal.org/portal/collections/individual/index.php?occid=11108069
- subject_id: 4308844 – http://www.sernecportal.org/portal/collections/individual/index.php?occid=11130030
The consensus scores from transcribers for those subjects were:
- subject_id: 4308678: Flowers: No, Fruits: No
- subject_id: 4308659: Flowers: No, Fruits: Yes
- subject_id: 4308844: Flowers: Yes, Fruits: No
I then asked NFN’s own Michael Denslow, who is also a darn fine botanist, for his assessment (without telling him anything about the transcribers’ scoring), and he was 100% consistent with the three above. He noted for 4308678, “Funky one for sure”, and for 4308659, “The terminal buds might be confusing people on these. Based on the collection date (and presence of terminal buds) fruits could be from the previous fall.”
And finally, we wanted to see if we could use these data to look at phenology patterns, so our data scientist Julie Allen did some quick visualizations using the statistical package R, which has some great plotting functions. You can see our plot above for two species, Quercus falcata (top) and Quercus marilandica (bottom), two common oaks for which we had enough data to examine patterns. The plot shows time on the x-axis, from March through November, and the y-axis is just a yes/no response. For yeses we show a little emoji, and for no’s you can see the reports over time for fruits and flowers in different colors. Yup, we decided to go with a tropical flower-and-fruit motif here, despite oaks definitely not producing pineapples!
The really neat thing is that we do pick up the short, early flowering period for oaks during spring and, in Q. falcata, a seemingly quick transition to acorns, with a slower cadence for Q. marilandica (note the longer period between flowering and the appearance of acorns). There are still some great questions to examine here: these records were not all from the same year, and maybe some of the variation we are seeing is due to year-to-year climate variation. There were a couple of “no flower” records during the typical flowering period; these might reflect limited information in the sample, or perhaps something about that particular year. We are more than happy to share the raw data from this expedition with anyone who wants a closer look!
Thanks to everyone for all their hard work on the four expeditions near completion late last year. Quick update – we are done! All those expeditions are finished. Finito. Done. Awesome.
Here is a quick summary about how you beat the ETC (estimated time to completion)!
- Pinned Specimen_Tiger Beetles 3. That one had an 8 day ETC on Dec. 30th, and finished on January 1, beating the ETC by 6 days.
- Herbarium_Arkansas Dendrology: Part 8: Hickories and Walnuts. You beat the ETC by 3 days, also finishing on January 1.
- Aquatics_Aquatic Insects of the Southeastern United States had a 3-day estimated time to completion (ETC) and finished within the first day. You beat the ETC by 2 days.
- Magnified_The Killer Within: Wasps, but not as you know them had a 19-day ETC, and those are some challenging labels as well. We finished that one in 15 days, so we beat the ETC by 4 days, but it was a major effort to get those last, and likely hardest, ones done.
Overall, you shaved off 19 days in total, and we couldn’t be more thrilled. Now that we have cleared out some of these older expeditions, we are looking forward to some new ones coming on board in the next few weeks. We’ll have more information on those, and some other plans for 2017, to share soon!
Happy first day of 2017! And a big WOW on all the effort to beat the ETC. Just now, your effort helped to get Pinned Specimen_Tiger Beetles 3 done. Finished. That one had an 8 day ETC on Dec. 30th, so you beat the ETC by 6 days!
Now there are just 2 near-complete expeditions left: Herbarium_Arkansas Dendrology: Part 8: Hickories and Walnuts (ETC 3 days) and Magnified_The Killer Within: Wasps, but not as you know them (ETC 14 days). Both are getting done (slightly) faster than the ETCs we posted on Dec. 30th, so that is great – but let’s see just how much faster!
Update as of 7pm Jan. 1: Herbarium_Arkansas Dendrology: Part 8: Hickories and Walnuts just FINISHED – you beat the ETC by 3 days and took it to another level with 168 transcriptions in the last 24 hours.
Quick update! In just 20 hours, you guys finished the Aquatics Aquatic Insects of the Southeastern United States expedition, which had a 3-day estimated time to completion (ETC). That is WAY ahead of schedule!
However, two others that had ETCs of 8 and 5 days (Pinned Specimen_Tiger Beetles 3 and Herbarium_Arkansas Dendrology: Part 8: Hickories and Walnuts, respectively) yesterday, now show 9 and 6 days ETC – yes, they appear to be backsliding. We bet that this change is more a hiccup in how ETC is calculated, but we’ll check back again and send in reports over the next couple days.
Thanks again for all the effort!
Hi everyone, and Happy Almost New Year from everyone at Notes from Nature. It’s been a great year for NFN (bucking, perhaps, the overall trend), with our re-launch, WeDigBio, and lots of great activity on the site. And we are excited about some new features coming in 2017. In the interim, we have been trying to finish off a few more expeditions before we launch some new ones.
Below are the expeditions that are nearly finished and their ETC (estimated time to completion) — we are hoping to beat those ETCs if you can help out. Let’s see by how much we can beat them. We’ll report the outcome here soon (we hope!). You can check how you are doing on our handy-dandy and now less cluttered stats page: https://www.zooniverse.org/projects/zooniverse/notes-from-nature/stats.
- Aquatics_Aquatic Insects of the Southeastern United States – ETC 3 days
- Herbarium_Arkansas Dendrology: Part 8: Hickories and Walnuts – ETC 5 days
- Pinned Specimen_Tiger Beetles 3 – ETC 8 days
- Magnified_The Killer Within: Wasps, but not as you know them – ETC 19 days
A couple of weeks ago, we asked our Notes from Nature citizen scientists for help completing 5 nearly finished expeditions. As of last week, we had completed this challenge, and we want to thank all the dedicated folks who got us there. We learned a lot from this challenge – the biggest lesson being that people really enjoy tackling a challenge. We also now know to give folks a little more time than 24 hours, especially given that some expeditions still had a fair number of transcriptions to do (and took longest to finish). We hope to find ways to further reward “expedition finishers” in the coming weeks. Stay tuned!
A couple more quick Notes from Nature updates as we sail into 2017:
- We have new “About” pages! For volunteers, the exciting part is “The Team” page where we list all our researchers and collaborators. We hope to better organize this down the road and link people to the different expeditions, but it’s a start.
- Zooniverse had a small glitch with their stats, and we didn’t have any Notes from Nature stats for the period between Dec. 6th and 10th. Those may be recoverable, but for now you will notice a gap in stats reporting for those days.
We appreciate the help, as always, and happy winter holidays to all.