1.
PLoS One ; 18(11): e0293289, 2023.
Article in English | MEDLINE | ID: mdl-37988360

ABSTRACT

Citizen scientists around the world are collecting data with their smartphones, performing scientific calculations on their home computers, and analyzing images on online platforms. These online citizen science projects are frequently lauded for their potential to revolutionize the scope and scale of data collection and analysis, improve scientific literacy, and democratize science. Yet, despite the attention online citizen science has attracted, it remains unclear how widespread public participation is, how it has changed over time, and how it is geographically distributed. Importantly, the demographic profile of citizen science participants remains uncertain, as does the extent to which their contributions are helping to democratize science. Here, we present the largest quantitative study of participation in citizen science based on online accounts of more than 14 million participants over two decades. We find that the trend of broad rapid growth in online citizen science participation observed in the early 2000s has since diverged by mode of participation, with consistent growth observed in nature sensing, but a decline seen in crowdsourcing and distributed computing. Most citizen science projects, except for nature sensing, are heavily dominated by men, and the vast majority of participants, male and female, have a background in science. The analysis we present here provides, for the first time, a robust 'baseline' to describe global trends in online citizen science participation. These results highlight current challenges and the future potential of citizen science. Beyond presenting our analysis of the collated data, our work identifies multiple metrics for robust examination of public participation in science and, more generally, online crowds. It also points to the limits of quantitative studies in capturing the personal, societal, and historical significance of citizen science.


Subject(s)
Citizen Science , Crowdsourcing , Humans , Male , Female , Community Participation , Data Collection , Demography
2.
Elife ; 11, 2022 05 19.
Article in English | MEDLINE | ID: mdl-35588296

ABSTRACT

Tuberculosis is a respiratory disease that is treatable with antibiotics. An increasing prevalence of resistance means that to ensure a good treatment outcome it is desirable to test the susceptibility of each infection to different antibiotics. Conventionally, this is done by culturing a clinical sample and then exposing aliquots to a panel of antibiotics, each being present at a pre-determined concentration, thereby determining whether the sample is resistant or susceptible to each antibiotic. The minimum inhibitory concentration (MIC) of a drug is the lowest concentration that inhibits growth and is a more useful quantity, but requires each sample to be tested at a range of concentrations for each drug. Using 96-well broth microdilution plates with each well containing a lyophilised pre-determined amount of an antibiotic is a convenient and cost-effective way to measure the MICs of several drugs at once for a clinical sample. Although accurate, this is still an expensive and slow process that requires highly skilled and experienced laboratory scientists. Here we show that, through the BashTheBug project hosted on the Zooniverse citizen science platform, a crowd of volunteers can reproducibly and accurately determine the MICs for 13 drugs and that simply taking the median or mode of 11-17 independent classifications is sufficient. There is therefore a potential role for crowds to support (but not supplant) the role of experts in antibiotic susceptibility testing.
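The consensus step described above — taking the median or mode of a small number of independent classifications — can be sketched as follows. The readings here are hypothetical dilution-index values, not data from the study:

```python
from statistics import median, mode

def consensus_mic(classifications):
    """Aggregate independent volunteer MIC readings (dilution indices,
    one integer per volunteer) into a single consensus value by taking
    the median, as the abstract describes."""
    return median(classifications)

# 11 hypothetical volunteer readings for one drug on one sample
readings = [4, 4, 5, 4, 3, 4, 5, 4, 4, 6, 4]
print(consensus_mic(readings))  # -> 4 (median of the 11 readings)
print(mode(readings))           # -> 4 (modal reading, the alternative consensus)
```

With 11 readings the median is simply the middle value after sorting, so a handful of outlying classifications cannot move the consensus.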


Tuberculosis is a bacterial respiratory infection that kills about 1.4 million people worldwide each year. While antibiotics can cure the condition, the bacterium responsible for this disease, Mycobacterium tuberculosis, is developing resistance to these treatments. Choosing which antibiotics to use to treat the infection more carefully may help to combat the growing threat of drug-resistant bacteria. One way to find the best choice is to test how an antibiotic affects the growth of M. tuberculosis in the laboratory. To speed up this process, laboratories test multiple drugs simultaneously. They do this by growing bacteria on plates with 96 wells and injecting individual antibiotics into each well at different concentrations. The Comprehensive Resistance Prediction for Tuberculosis (CRyPTIC) consortium has used this approach to collect and analyse bacteria from over 20,000 tuberculosis patients. An image of the 96-well plate is then captured and the level of bacterial growth in each well is assessed by laboratory scientists. But this work is difficult, time-consuming, and subjective, even for tuberculosis experts. Here, Fowler et al. show that enlisting citizen scientists may help speed up this process and reduce errors that arise from analysing such a large dataset. In April 2017, Fowler et al. launched the project 'BashTheBug' on the Zooniverse citizen science platform where anyone can access and analyse the images from the CRyPTIC consortium. They found that a crowd of inexperienced volunteers were able to consistently and accurately measure the concentration of antibiotics necessary to inhibit the growth of M. tuberculosis. If the concentration is above a pre-defined threshold, the bacteria are considered to be resistant to the treatment. A consensus result could be reached by calculating the median value of the classifications provided by as few as 17 different BashTheBug participants.
The work of BashTheBug volunteers has reduced errors in the CRyPTIC project data, which has been used for several other studies. For instance, the World Health Organization (WHO) has also used the data to create a catalogue of genetic mutations associated with antibiotics resistance in M. tuberculosis. Enlisting citizen scientists has accelerated research on tuberculosis and may help with other pressing public health concerns.


Subject(s)
Mycobacterium tuberculosis , Tuberculosis , Antitubercular Agents/pharmacology , Humans , Microbial Sensitivity Tests , Tuberculosis/drug therapy , Volunteers
3.
Traffic ; 22(7): 240-253, 2021 07.
Article in English | MEDLINE | ID: mdl-33914396

ABSTRACT

Advancements in volume electron microscopy mean it is now possible to generate thousands of serial images at nanometre resolution overnight, yet the gold standard approach for data analysis remains manual segmentation by an expert microscopist, resulting in a critical research bottleneck. Although some machine learning approaches exist in this domain, we remain far from realizing the aspiration of a highly accurate, yet generic, automated analysis approach, with a major obstacle being lack of sufficient high-quality ground-truth data. To address this, we developed a novel citizen science project, Etch a Cell, to enable volunteers to manually segment the nuclear envelope (NE) of HeLa cells imaged with serial blockface scanning electron microscopy. We present our approach for aggregating multiple volunteer annotations to generate a high-quality consensus segmentation and demonstrate that data produced exclusively by volunteers can be used to train a highly accurate machine learning algorithm for automatic segmentation of the NE, which we share here, in addition to our archived benchmark data.
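Aggregating multiple volunteer annotations of the same image into a consensus segmentation can be done, in its simplest form, by pixel-wise majority voting. The sketch below is a generic majority-vote illustration of that idea, not the aggregation pipeline used by Etch a Cell, and the tiny masks are made up:

```python
import numpy as np

def consensus_mask(masks, threshold=0.5):
    """Combine several binary volunteer annotations of the same image
    into one consensus segmentation: a pixel is kept when more than
    `threshold` of the volunteers marked it.

    `masks` is a list of equal-shape 0/1 arrays."""
    stack = np.stack(masks).astype(float)
    return (stack.mean(axis=0) > threshold).astype(np.uint8)

# three hypothetical 3x3 annotations of the same structure
a = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
b = np.array([[0, 1, 0], [0, 1, 0], [0, 0, 0]])
c = np.array([[0, 0, 1], [0, 1, 1], [0, 0, 0]])
print(consensus_mask([a, b, c]))
```

A consensus built this way suppresses idiosyncratic strokes from any single annotator, which is what makes volunteer-only data usable as machine-learning ground truth.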


Subject(s)
Deep Learning , HeLa Cells , Humans , Microscopy, Electron , Nuclear Envelope , Volunteers
5.
Sci Data ; 7(1): 102, 2020 03 26.
Article in English | MEDLINE | ID: mdl-32218449

ABSTRACT

Time-lapse cameras facilitate remote and high-resolution monitoring of wild animal and plant communities, but the image data produced require further processing to be useful. Here we publish pipelines to process raw time-lapse imagery, resulting in count data (number of penguins per image) and 'nearest neighbour distance' measurements. The latter provide useful summaries of colony spatial structure (which can indicate phenological stage) and can be used to detect movement - metrics which could be valuable for a number of different monitoring scenarios, including image capture during aerial surveys. We present two alternative pathways for producing counts: (1) via the Zooniverse citizen science project Penguin Watch and (2) via a computer vision algorithm (Pengbot), and share a comparison of citizen science-, machine learning-, and expert- derived counts. We provide example files for 14 Penguin Watch cameras, generated from 63,070 raw images annotated by 50,445 volunteers. We encourage the use of this large open-source dataset, and the associated processing methodologies, for both ecological studies and continued machine learning and computer vision development.
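The 'nearest neighbour distance' summary described above is, at its core, the distance from each detected individual to its closest neighbour. A brute-force sketch, with illustrative coordinates rather than Penguin Watch data:

```python
import math

def nearest_neighbour_distances(points):
    """For each (x, y) point, return the distance to its closest other
    point. O(n^2) brute force; fine for per-image penguin counts."""
    out = []
    for i, p in enumerate(points):
        out.append(min(math.dist(p, q)
                       for j, q in enumerate(points) if i != j))
    return out

# three hypothetical penguin positions in image coordinates
penguins = [(0, 0), (3, 4), (0, 1)]
print(nearest_neighbour_distances(penguins))
```

Summaries of these distances per image (e.g. their mean) give the colony spatial-structure metric the pipeline produces; changes in the distribution between frames indicate movement.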


Subject(s)
Citizen Science , Image Processing, Computer-Assisted , Machine Learning , Time-Lapse Imaging , Algorithms , Animals , Spheniscidae
7.
Proc Natl Acad Sci U S A ; 116(6): 1902-1909, 2019 02 05.
Article in English | MEDLINE | ID: mdl-30718393

ABSTRACT

Citizen science has proved to be a unique and effective tool in helping science and society cope with the ever-growing data rates and volumes that characterize the modern research landscape. It also serves a critical role in engaging the public with research in a direct, authentic fashion and by doing so promotes a better understanding of the processes of science. To take full advantage of the onslaught of data being experienced across the disciplines, it is essential that citizen science platforms leverage the complementary strengths of humans and machines. This Perspectives piece explores the issues encountered in designing human-machine systems optimized for both efficiency and volunteer engagement, while striving to safeguard and encourage opportunities for serendipitous discovery. We discuss case studies from Zooniverse, a large online citizen science platform, and show that combining human and machine classifications can efficiently produce results superior to those of either one alone and how smart task allocation can lead to further efficiencies in the system. While these examples make clear the promise of human-machine integration within an online citizen science system, we then explore in detail how system design choices can inadvertently lower volunteer engagement, create exclusionary practices, and reduce opportunity for serendipitous discovery. Throughout we investigate the tensions that arise when designing a human-machine system serving the dual goals of carrying out research in the most efficient manner possible while empowering a broad community to authentically engage in this research.


Subject(s)
Community Participation/methods , Efficiency , Machine Learning , Science , Biological Science Disciplines/education , Comprehension , Computing Methodologies , Humans , Natural Science Disciplines/education , Research , Research Design , Surveys and Questionnaires
8.
Sci Data ; 5: 180124, 2018 06 26.
Article in English | MEDLINE | ID: mdl-29944146

ABSTRACT

Automated time-lapse cameras can facilitate reliable and consistent monitoring of wild animal populations. In this report, data from 73,802 images taken by 15 different Penguin Watch cameras are presented, capturing the dynamics of penguin (Spheniscidae; Pygoscelis spp.) breeding colonies across the Antarctic Peninsula, South Shetland Islands and South Georgia (03/2012 to 01/2014). Citizen science provides a means by which large and otherwise intractable photographic data sets can be processed, and here we describe the methodology associated with the Zooniverse project Penguin Watch, and provide validation of the method. We present anonymised volunteer classifications for the 73,802 images, alongside the associated metadata (including date/time and temperature information). In addition to the benefits for ecological monitoring, such as easy detection of animal attendance patterns, this type of annotated time-lapse imagery can be employed as a training tool for machine learning algorithms to automate data extraction, and we encourage the use of this data set for computer vision development.


Subject(s)
Spheniscidae , Time-Lapse Imaging/methods , Animals , Antarctic Regions , Ecological Parameter Monitoring/methods , Population Dynamics
9.
Conserv Biol ; 30(3): 520-31, 2016 06.
Article in English | MEDLINE | ID: mdl-27111678

ABSTRACT

Citizen science has the potential to expand the scope and scale of research in ecology and conservation, but many professional researchers remain skeptical of data produced by nonexperts. We devised an approach for producing accurate, reliable data from untrained, nonexpert volunteers. On the citizen science website www.snapshotserengeti.org, more than 28,000 volunteers classified 1.51 million images taken in a large-scale camera-trap survey in Serengeti National Park, Tanzania. Each image was circulated to, on average, 27 volunteers, and their classifications were aggregated using a simple plurality algorithm. We validated the aggregated answers against a data set of 3829 images verified by experts and calculated 3 certainty metrics-level of agreement among classifications (evenness), fraction of classifications supporting the aggregated answer (fraction support), and fraction of classifiers who reported "nothing here" for an image that was ultimately classified as containing an animal (fraction blank)-to measure confidence that an aggregated answer was correct. Overall, aggregated volunteer answers agreed with the expert-verified data on 98% of images, but accuracy differed by species commonness such that rare species had higher rates of false positives and false negatives. Easily calculated analysis of variance and post-hoc Tukey tests indicated that the certainty metrics were significant indicators of whether each image was correctly classified or classifiable. Thus, the certainty metrics can be used to identify images for expert review. Bootstrapping analyses further indicated that 90% of images were correctly classified with just 5 volunteers per image. Species classifications based on the plurality vote of multiple citizen scientists can provide a reliable foundation for large-scale monitoring of African wildlife.
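The three certainty metrics described above can be computed directly from an image's list of volunteer labels. In this sketch, evenness is taken as Pielou's index (Shannon entropy divided by its maximum), which is one common formulation and an assumption here, as is the example vote list; fraction blank is most meaningful when the consensus answer is an animal:

```python
from collections import Counter
from math import log

def certainty_metrics(classifications):
    """Per-image certainty metrics for a list of volunteer labels:
    the plurality answer, evenness of the label distribution,
    fraction of votes supporting the plurality answer, and
    fraction of 'nothing here' votes."""
    counts = Counter(classifications)
    n = len(classifications)
    plurality, support = counts.most_common(1)[0]
    fraction_support = support / n
    fraction_blank = counts.get("nothing here", 0) / n
    if len(counts) > 1:
        probs = [c / n for c in counts.values()]
        evenness = -sum(p * log(p) for p in probs) / log(len(counts))
    else:
        evenness = 0.0  # unanimous: no disagreement to measure
    return plurality, evenness, fraction_support, fraction_blank

votes = ["zebra"] * 8 + ["wildebeest"] * 1 + ["nothing here"] * 1
print(certainty_metrics(votes))
```

High evenness, low fraction support, or high fraction blank all flag an image as a candidate for expert review, exactly as the abstract proposes.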


Subject(s)
Community Participation , Conservation of Natural Resources , Animals , Animals, Wild , Data Collection , Ecology , Research , Tanzania , Volunteers
10.
J Wildl Manage ; 79(6): 1014-1021, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26640297

ABSTRACT

The random encounter model (REM) is a novel method for estimating animal density from camera trap data without the need for individual recognition. It has never been used to estimate the density of large carnivore species, despite these being the focus of most camera trap studies worldwide. In this context, we applied the REM to estimate the density of female lions (Panthera leo) from camera traps deployed in Serengeti National Park, Tanzania, comparing estimates to reference values derived from pride census data. More specifically, we attempted to account for bias resulting from non-random camera placement at lion resting sites under isolated trees by comparing estimates derived from night versus day photographs, between dry and wet seasons, and between habitats that differ in their amount of tree cover. Overall, we recorded 169 and 163 independent photographic events of female lions from 7,608 and 12,137 camera trap days carried out in the dry season of 2010 and the wet season of 2011, respectively. Although all REM models considered over-estimated female lion density, models that considered only night-time events resulted in estimates that were much less biased relative to those based on all photographic events. We conclude that restricting REM estimation to periods and habitats in which animal movement is more likely to be random with respect to cameras can help reduce bias in estimates of density for female Serengeti lions. We highlight that accurate REM estimates will nonetheless be dependent on reliable measures of average speed of animal movement and camera detection zone dimensions. © 2015 The Authors. Journal of Wildlife Management published by Wiley Periodicals, Inc. on behalf of The Wildlife Society.
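The standard REM estimator (Rowcliffe et al. 2008) turns a photographic encounter rate into a density via the movement and detection-zone parameters the abstract highlights: D = (y/t) · π / (v·r·(2 + θ)). The sketch below uses that published formula with made-up movement and detection-zone values; only the encounter counts echo the abstract, and the resulting number is illustrative, not the study's estimate:

```python
import math

def rem_density(photos, camera_days, speed, radius, angle):
    """Random encounter model density, D = (y/t) * pi / (v * r * (2 + theta)).

    Units must be consistent: speed in km/day, detection radius in km,
    detection angle in radians; density comes out per km^2."""
    return (photos / camera_days) * math.pi / (speed * radius * (2 + angle))

# 169 events over 7,608 camera days (dry season), with hypothetical
# parameters: 10 km/day travel, 0.01 km radius, 0.7 rad detection angle
d = rem_density(169, 7608, 10.0, 0.01, 0.7)
print(round(d, 3))  # animals per km^2
```

The formula makes the abstract's caveat concrete: density scales inversely with average speed v and detection-zone size r and θ, so errors in those measurements propagate directly into the estimate.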

11.
EBioMedicine ; 2(7): 681-9, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26288840

ABSTRACT

BACKGROUND: Citizen science, scientific research conducted by non-specialists, has the potential to facilitate biomedical research using available large-scale data; however, validating the results is challenging. The Cell Slider is a citizen science project that aims to share images from tumors with the general public, enabling them to score tumor markers independently through an internet-based interface. METHODS: From October 2012 to June 2014, 98,293 Citizen Scientists accessed the Cell Slider web page and scored 180,172 sub-images derived from images of 12,326 tissue microarray cores labeled for estrogen receptor (ER). We evaluated the accuracy of Citizen Scientists' ER classification, and the association between ER status and prognosis, by comparing their test performance against trained pathologists. FINDINGS: The area under the ROC curve was 0.95 (95% CI 0.94 to 0.96) for cancer cell identification and 0.97 (95% CI 0.96 to 0.97) for ER status. ER-positive tumors scored by Citizen Scientists were associated with survival in a similar way to those scored by trained pathologists. Survival probabilities at 15 years were 0.78 (95% CI 0.76 to 0.80) for ER-positive and 0.72 (95% CI 0.68 to 0.77) for ER-negative tumors based on Citizen Scientists' classification. Based on pathologist classification, survival probability was 0.79 (95% CI 0.77 to 0.81) for ER-positive and 0.71 (95% CI 0.67 to 0.74) for ER-negative tumors. The hazard ratio for death was 0.26 (95% CI 0.18 to 0.37) at diagnosis and became greater than one after 6.5 years of follow-up for ER scored by Citizen Scientists, and 0.24 (95% CI 0.18 to 0.33) at diagnosis, increasing thereafter to one after 6.7 (95% CI 4.1 to 10.9) years of follow-up for ER scored by pathologists. INTERPRETATION: Crowdsourcing of the general public to classify cancer pathology data for research is viable, engages the public and provides accurate ER data.
Crowdsourced classification of research data may offer a valid solution to problems of throughput requiring human input.


Subject(s)
Breast Neoplasms/pathology , Crowdsourcing , Pathology, Molecular , Breast Neoplasms/mortality , Female , Humans , Kaplan-Meier Estimate , Proportional Hazards Models , ROC Curve , Receptors, Estrogen/metabolism
12.
Sci Data ; 2: 150026, 2015.
Article in English | MEDLINE | ID: mdl-26097743

ABSTRACT

Camera traps can be used to address large-scale questions in community ecology by providing systematic data on an array of wide-ranging species. We deployed 225 camera traps across 1,125 km(2) in Serengeti National Park, Tanzania, to evaluate spatial and temporal inter-species dynamics. The cameras have operated continuously since 2010 and had accumulated 99,241 camera-trap days and produced 1.2 million sets of pictures by 2013. Members of the general public classified the images via the citizen-science website www.snapshotserengeti.org. Multiple users viewed each image and recorded the species, number of individuals, associated behaviours, and presence of young. Over 28,000 registered users contributed 10.8 million classifications. We applied a simple algorithm to aggregate these individual classifications into a final 'consensus' dataset, yielding a final classification for each image and a measure of agreement among individual answers. The consensus classifications and raw imagery provide an unparalleled opportunity to investigate multi-species dynamics in an intact ecosystem and a valuable resource for machine-learning and computer-vision research.
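The 'simple algorithm' described above reduces each image's classifications to a plurality label plus a measure of agreement. A minimal sketch of that reduction, with illustrative votes and without the species counts, behaviours, or presence-of-young fields of the full project schema:

```python
from collections import Counter

def plurality_consensus(classifications):
    """Reduce one image's volunteer classifications to a consensus label
    and an agreement score (fraction of votes for the winning label)."""
    counts = Counter(classifications)
    label, n_votes = counts.most_common(1)[0]
    return label, n_votes / len(classifications)

votes = ["gazelle", "gazelle", "impala", "gazelle", "gazelle"]
print(plurality_consensus(votes))  # -> ('gazelle', 0.8)
```

Storing the agreement score alongside each consensus label is what lets downstream users filter the dataset to high-confidence images only.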


Subject(s)
Behavior, Animal , Mammals , Animals , Ecosystem , Image Processing, Computer-Assisted , Tanzania
13.
J Surg Res ; 187(1): 65-71, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24555877

ABSTRACT

BACKGROUND: Validated methods of objective assessments of surgical skills are resource intensive. We sought to test a web-based grading tool using crowdsourcing called Crowd-Sourced Assessment of Technical Skill. MATERIALS AND METHODS: Institutional Review Board approval was granted to test the accuracy of Amazon.com's Mechanical Turk and Facebook crowdworkers compared with experienced surgical faculty grading a recorded dry-laboratory robotic surgical suturing performance using three performance domains from a validated assessment tool. Assessor free-text comments describing their rating rationale were used to explore a relationship between the language used by the crowd and grading accuracy. RESULTS: Of a total possible global performance score of 3-15, 10 experienced surgeons graded the suturing video at a mean score of 12.11 (95% confidence interval [CI], 11.11-13.11). Mechanical Turk and Facebook graders rated the video at mean scores of 12.21 (95% CI, 11.98-12.43) and 12.06 (95% CI, 11.57-12.55), respectively. It took 24 h to obtain responses from 501 Mechanical Turk subjects, whereas it took 24 d for 10 faculty surgeons to complete the 3-min survey. Facebook subjects (110) responded within 25 d. Language analysis indicated that crowdworkers who used negation words (i.e., "but," "although," and so forth) scored the performance more equivalently to experienced surgeons than crowdworkers who did not (P < 0.00001). CONCLUSIONS: For a robotic suturing performance, we have shown that surgery-naive crowdworkers can rapidly assess skill equivalent to experienced faculty surgeons using Crowd-Sourced Assessment of Technical Skill. It remains to be seen whether crowds can discriminate different levels of skill and can accurately assess human surgery performances.
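The comparison above rests on means and 95% confidence intervals of rater scores from each group. A sketch of that summary statistic, using made-up scores and a normal-approximation interval (a t-based interval would be preferable for a sample as small as the 10 faculty surgeons):

```python
import math
from statistics import mean, stdev

def mean_ci95(scores):
    """Mean and normal-approximation 95% confidence interval
    (mean +/- 1.96 * SE) for a set of rater scores."""
    m = mean(scores)
    half_width = 1.96 * stdev(scores) / math.sqrt(len(scores))
    return m, m - half_width, m + half_width

# hypothetical crowd scores on the 3-15 global performance scale
crowd_scores = [12, 13, 11, 12, 12, 13, 12, 11, 13, 12]
m, lo, hi = mean_ci95(crowd_scores)
print(round(m, 2), round(lo, 2), round(hi, 2))
```

Overlapping intervals between crowd and expert groups, as reported above, are what supports the claim of equivalent assessment.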


Subject(s)
Competency-Based Education/methods , Crowdsourcing/methods , Educational Measurement/methods , General Surgery/education , Robotics/education , Adult , Competency-Based Education/standards , Crowdsourcing/standards , Data Collection , Depth Perception , Educational Measurement/standards , Humans , Internet , Internship and Residency/methods , Internship and Residency/standards , Reference Standards , Suture Techniques/education , Young Adult