Museum Blog

Crowdtasting: Bringing Crowdsourcing to Sensory Science

Posted 3/9/2016 12:03 AM by Nicole Garneau

It can be argued that crowdsourcing dates back to the early 1900s with the start of the Audubon Society’s Christmas Bird Count, now the longest-running citizen science program. The term itself, however, was coined in 2006 by Jeff Howe of Wired magazine. He described crowdsourcing as the growing trend of everyday people using their spare time to “create content, solve problems, even do corporate R & D,” and defined it as “the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call.” A decade later, crowdsourcing has changed the very way we do science in ways none of us could have predicted.

 

Figure 1: The Bioscience Branch of Crowdsourcing’s Family Tree


 

To dial into the use of crowdsourcing for sensory science, let’s first consider the evolution of bioscience studies built on this model. We turn back to 2005, when the Rosetta@home project was launched. A major obstacle in creating new drugs, cures, and vaccines is figuring out the natural structure of biological proteins. Structure determines many things, including how a protein might bind and function in the body. To overcome this obstacle, scientists created a computer algorithm that would quickly test all possible structures to see which one would be most likely in nature (based on many variables, including thermodynamics). This is where crowdsourcing in bioscience first comes into play. The program could be downloaded by everyday people onto their home computers, lending spare processing power to a massive network dedicated to predicting these all-important protein structures. This type of crowdsourcing is what I refer to as passive contribution to science (Figure 1), because as a user you are allowing access to something personally owned (in this case a computer; in more recent programs, a smartphone) to generate data, but you yourself are not contributing data, nor are you contributing intellectually.

 

This original passive contribution through crowdsourcing led to the next stage in crowdsourcing bioscience: active contribution. Rosetta@home was designed to show the progress of the computer algorithm live to the home user through the computer’s screen saver. Literally, you could watch a rapid succession of possible structures for a given protein cycle across your screen. And people did, in fact, just sit there and watch, mesmerized by bioscience research flashing before their eyes. That’s where it gets interesting. The human mind has incredible pattern-matching and spatial-reasoning abilities, so the Rosetta designers started getting calls from people saying they had solved the problem far faster than the computer algorithm. What an extraordinary moment, to realize the potential value to science of the volunteered contributions of the masses.

 

From there, Rosetta evolved into FoldIt, a gaming interface that allows users to play and compete to solve protein structures. The most successful protein structures designed through FoldIt have then been tested empirically in the lab. These players take crowdsourcing to the level of intellectual contribution. This brings me to the scientific process and how I break down crowdsourcing into the two forms of active contribution: citizen science and self-data (Figure 2).

 

Figure 2: Crowdsourcing and the Scientific Process


 

 

 

While FoldIt is one of my favorite studies that allow users to contribute directly to the scientific process through data preparation, the American Gut project is my favorite example of self-data contribution. The American Gut team goes to great lengths to immerse users in the experience, from the way the data collection takes place to the quality of the shared information and comparative data. It is this audience focus that drives home the philosophical difference between crowdsourcing self-data contributions and regular old human subject research. Although the two are close cousins and regulated as such (i.e., whether it’s traditional human subject research or self-data contribution, if you want to publish, you need an institutional review process in place to ensure ethical protection of your contributors), self-data contribution is distinguished from its human subject counterpart by the resources invested in people as well as in data. This is my rule of thumb for crowdsourcing: be as concerned about making the experience educational and enjoyable as you are about collecting data, and evaluate your study and solicit feedback from your contributors. This means you are genuinely invested beyond just getting a data set. It takes time and planning alongside your research questions and instruments, and when done right it includes proper evaluation to ensure you are meeting your educational and experiential goals.

 

To bring this home, at the Denver Museum of Nature & Science we use both forms of active contribution in crowdsourcing sensory data: self-data contribution and citizen science. In addition to the rule of thumb above, we follow a set of 5 rules to increase our chance of success (Figure 3). Using these rules, we currently have two models of crowdsourcing sensory science open to the public.

 

First, we have just embarked on conducting crowdtastings during events. Crowdtasting is a form of self-data contribution; in this case, the self-data is sensory and involves trying and rating a taste, mouthfeel, or aroma attribute of a sample whose flavor is known. This way we can generate data from a large group of people in just one session, which vastly improves the turnaround time from study design to execution to analysis and publication. People naturally self-select into this type of crowdtasting because of its entertainment value, and the event format also lets us better build in learning opportunities for participants.
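As a purely hypothetical illustration of what a single crowdtasting session can yield, the sketch below tallies ratings from many participants for a couple of samples; the sample names, the 1–9 intensity scale, and the ratings themselves are invented for the example and are not from any Museum study.

```python
# Hypothetical sketch: tallying ratings from one crowdtasting session.
# Sample names, the 1-9 intensity scale, and the ratings are invented for
# illustration; a real study would define its own instruments and scales.
from statistics import mean, stdev

# Each participant rates the bitterness of each sample once during the event.
session_ratings = {
    "sample_A": [3, 5, 4, 6, 5, 4],
    "sample_B": [7, 8, 6, 7, 9, 8],
}

for sample, ratings in session_ratings.items():
    print(
        f"{sample}: n={len(ratings)}, "
        f"mean={mean(ratings):.1f}, sd={stdev(ratings):.1f}"
    )
```

Because everyone tastes and rates within the same session, a summary like this is ready for analysis as soon as the event ends, which is what drives the fast turnaround described above.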

 

The second model of crowdtasting has been part of our portfolio since 2009. In our Genetics of Taste Lab, we found that a very special thing happens when you combine self-data and citizen science: you create personally relevant studies that are for the people, by the people. In this way, we crowdsource self-data from everyday Museum guests, and the data is collected, processed, and in some cases even analyzed and communicated by our citizen scientists. We regularly evaluate our educational and experiential goals for both our Museum guests and our citizen scientists to ensure our working model supports both, in addition to publishing papers on the results (Rule #5).

 

Figure 3: Five Rules to Successfully Design a Research Study Using a Crowdsourcing Model


 

 

Finally, speaking of Rule #5, I think this rule is imperative. I’ve learned of projects where publication is not a goal; in these, the top priority is providing a truly wonderful educational experience. I would argue that if a project is not hypothesis-driven and publication (the gold standard for all forms of research) is not the goal, then it is misleading to tell users that they are contributing to real research. For this reason, I do not advocate collecting for collecting’s sake and building chock-full databases and data sets that are never used. Be true to your users: they are contributing self-data and their intellectual time and resources, and they want to see their contributions make a difference. Do them a favor and design scientifically sound, hypothesis-driven studies, and publication becomes a no-brainer end goal. At the Museum, we don’t just talk the talk on this one; we walk the walk. Our scientific research has become the basis of a number of peer-reviewed scientific publications, including one co-authored by a citizen scientist. Moreover, because this is work conducted by the people for the people, we have provided free public access to all our lab-based studies (event-based studies have yet to be published). It is one more way to thank people for the contributions they have made by participating in crowdsourcing in our lab.

 

See a poster our lab presented on designing citizen science.

 

For more resources on citizen science, and to get involved, please visit http://citizenscienceassociation.org/
