Results 1-20 of 41
1.
Int J Eat Disord ; 55(2): 276-277, 2022 02.
Article in English | MEDLINE | ID: mdl-34931338

ABSTRACT

In this commentary, we respond to Burnette et al.'s (2021) paper, which offers valuable practical recommendations for improving data quality and validity when gathering data via Amazon's Mechanical Turk (MTurk). We argue that it is also important to acknowledge and review the specific ethical issues that can arise when recruiting MTurk workers as participants. In particular, we raise three main ethical concerns that need to be addressed when recruiting research participants from participant recruitment platforms: participants' economic vulnerability, participants' sensitivity, and power dynamics between participants and researchers. We elaborate on these issues by discussing the ways in which they may appear and how they can be responded to. We conclude that considering the ethical aspects of data collection, and its potential impacts on those involved, would complement Burnette et al.'s recommendations. Consequently, data collection processes, not only data screening processes, should be transparent.


Subject(s)
Crowdsourcing; Crowdsourcing/standards; Data Collection/standards; Humans
2.
Int J Eat Disord ; 55(2): 282-284, 2022 02.
Article in English | MEDLINE | ID: mdl-34984704

ABSTRACT

Burnette et al. aimed to validate two eating disorder symptom measures among transgender adults recruited from Mechanical Turk (MTurk). After identifying several data quality issues, Burnette et al. abandoned this aim and instead documented the issues they faced (e.g., demographic misrepresentation, repeat submissions, inconsistent responses across similar questions, failed attention checks). Consequently, Burnette et al. raised concerns about the use of MTurk for psychological research, particularly in an eating disorder context. However, we believe these claims are overstated because they arise from a single study not designed to test MTurk data quality. Further, despite claiming to go "above and beyond" current recommendations, Burnette et al. missed key screening procedures. In particular, they omitted procedures known to block participants who use commercial data centers (i.e., server farms) to hide their true IP addresses and complete multiple surveys for financial gain. In this commentary, we outline key screening procedures that allow researchers to obtain quality MTurk data. We also highlight the importance of balancing efforts to increase data quality with efforts to maintain sample diversity. With appropriate screening procedures, which should be preregistered, MTurk remains a viable participant source that requires further validation in an eating disorder context.


Subject(s)
Crowdsourcing; Feeding and Eating Disorders; Adult; Attention; Crowdsourcing/methods; Crowdsourcing/standards; Feeding and Eating Disorders/diagnosis; Humans; Surveys and Questionnaires
3.
Int J Aging Hum Dev ; 93(2): 700-721, 2021 09.
Article in English | MEDLINE | ID: mdl-32683886

ABSTRACT

A growing number of studies within the field of gerontology have included samples recruited from Amazon's Mechanical Turk (MTurk), an online crowdsourcing portal. While some research has examined how younger adult participants recruited through other means may differ from those recruited using MTurk, little work has addressed this question with older adults specifically. In the present study, we examined how older adults recruited via MTurk might differ from those recruited via a national probability sample, the Health and Retirement Study (HRS), on a battery of outcomes related to health and cognition. Using a Latin-square design, we also assessed the relationship between recruitment time, remuneration amount, and measures of cognitive functioning. We found substantial differences between our MTurk sample and the participants within the HRS, most notably on measures of verbal fluency and analogical reasoning. Additionally, remuneration amount was related to differences in time to complete recruitment, particularly at the lowest remuneration level, where recruitment completion required between 138 and 485 additional hours. While the general consensus has been that MTurk samples are a reasonable proxy for the larger population, this work suggests that researchers should be wary of overgeneralizing research conducted with older adults recruited through this portal.


Subject(s)
Crowdsourcing/statistics & numerical data; Research Subjects/statistics & numerical data; Aged; Aged, 80 and over; Crowdsourcing/standards; Female; Humans; Male; Middle Aged; Patient Selection; Research Subjects/psychology; United States
4.
Ann Rheum Dis ; 79(9): 1139-1140, 2020 09.
Article in English | MEDLINE | ID: mdl-32527863

ABSTRACT

The COVID-19 pandemic is forcing the whole rheumatic and musculoskeletal disease community to reassess established treatment and research standards. Digital crowdsourcing is a key tool in this pandemic for creating and distilling desperately needed clinical evidence and for exchanging knowledge between patients and physicians. This viewpoint explains the concept of digital crowdsourcing and discusses examples and opportunities in rheumatology. Early experiences with digital crowdsourcing in rheumatology show transparent, accessible, and accelerated research results that empower patients and rheumatologists alike.


Subject(s)
Biomedical Research/methods; Coronavirus Infections/therapy; Crowdsourcing/methods; Pneumonia, Viral/therapy; Rheumatology/methods; Betacoronavirus; Biomedical Research/standards; COVID-19; Coronavirus Infections/virology; Crowdsourcing/standards; Humans; Pandemics; Pneumonia, Viral/virology; Rheumatology/standards; SARS-CoV-2
5.
Syst Biol ; 67(1): 49-60, 2018 Jan 01.
Article in English | MEDLINE | ID: mdl-29253296

ABSTRACT

Scientists building the Tree of Life face an overwhelming challenge to categorize phenotypes (e.g., anatomy, physiology) from millions of living and fossil species. This biodiversity challenge far outstrips the capacities of trained scientific experts. Here we explore whether crowdsourcing can be used to collect matrix data on a large scale with the participation of nonexpert students, or "citizen scientists." Crowdsourcing, or data collection by nonexperts, frequently via the internet, has enabled scientists to tackle some large-scale data collection challenges too massive for individuals or scientific teams alone. The quality of work by nonexpert crowds is, however, often questioned, and little data have been collected on how such crowds perform on complex tasks such as phylogenetic character coding. We studied a crowd of over 600 nonexperts and found that they could use images to identify anatomical similarity (hypotheses of homology) with an average accuracy of 82% compared with scores provided by experts in the field. This performance pattern held across the Tree of Life, from protists to vertebrates. We introduce a procedure that predicts the difficulty of each character and that can be used to assign harder characters to experts and easier characters to a nonexpert crowd for scoring. We test this procedure in a controlled experiment comparing crowd scores to those of experts and show that crowds can produce matrices with over 90% of cells scored correctly while reducing the number of cells to be scored by experts by 50%. Preparation time, including image collection and processing, for a crowdsourcing experiment is significant, and does not currently save time of scientific experts overall. However, if innovations in automation or robotics can reduce such effort, then large-scale implementation of our method could greatly increase the collective scientific knowledge of species phenotypes for phylogenetic tree building. For the field of crowdsourcing, we provide a rare study with ground truth, or an experimental control that many studies lack, and contribute new methods on how to coordinate the work of experts and nonexperts. We show that there are important instances in which crowd consensus is not a good proxy for correctness.


Subject(s)
Classification/methods; Crowdsourcing/standards; Phylogeny; Animals; Phenotype; Professional Competence; Reproducibility of Results
6.
BMC Infect Dis ; 19(1): 112, 2019 Feb 04.
Article in English | MEDLINE | ID: mdl-30717678

ABSTRACT

BACKGROUND: Crowdsourcing is an excellent tool for developing tailored interventions to improve sexual health. We evaluated the implementation of an innovation contest for sexual health promotion in China. METHODS: We organized an innovation contest over three months in 2014 for Chinese individuals < 30 years old to submit images for a sexual health promotion campaign. We solicited entries via social media and in-person events. The winning entry was adapted into a poster and distributed to STD clinics across Guangdong Province. In this study, we evaluated factors associated with images that received higher scores, described the themes of the top five finalists, and evaluated the acceptability of the winning entry using an online survey tool. RESULTS: We received 96 image submissions from 76 participants in 10 Chinese provinces. Most participants were youth (< 25 years, 85%) and non-professionals (without expertise in medicine, public health, or media, 88%). Youth were more likely to submit high-scoring entries. Images from professionals in medicine, public health, or media did not score higher than images from non-professionals. Participants were twice as likely to have learned about the contest through in-person events as through social media. We adapted and distributed the winning entry to 300 STD clinics in 22 cities over 2 weeks. A total of 8338 people responded to an acceptability survey of the winning entry. Among them, 79.8% endorsed or strongly endorsed being more willing to undergo STD testing after seeing the poster. CONCLUSIONS: Innovation contests may be useful for soliciting images as part of comprehensive sexual health campaigns in low- and middle-income countries.


Subject(s)
Health Education/organization & administration; Health Promotion; Organizational Innovation; Quality Improvement; Sexual Health/education; Adolescent; Adult; Aged; Aged, 80 and over; China; Crowdsourcing/methods; Crowdsourcing/standards; Evaluation Studies as Topic; Female; Health Education/methods; Health Education/standards; Health Promotion/methods; Health Promotion/organization & administration; Health Promotion/standards; Humans; Male; Middle Aged; Public Health/methods; Public Health/standards; Quality Improvement/organization & administration; Quality Improvement/standards; Sexual Behavior/physiology; Young Adult
7.
Annu Rev Public Health ; 39: 335-350, 2018 04 01.
Article in English | MEDLINE | ID: mdl-29608871

ABSTRACT

Environmental health issues are becoming more challenging, and addressing them requires new approaches to research design and decision-making processes. Participatory research approaches, in which researchers and communities are involved in all aspects of a research study, can improve study outcomes and foster greater data accessibility and utility as well as increase public transparency. Here we review varied concepts of participatory research, describe how it complements and overlaps with community engagement and environmental justice, examine its intersection with emerging environmental sensor technologies, and discuss the strengths and limitations of participatory research. Although participatory research includes methodological challenges, such as biases in data collection and data quality, it has been found to increase the relevance of research questions, result in better knowledge production, and impact health policies. Improved research partnerships among government agencies, academia, and communities can increase scientific rigor, build community capacity, and produce sustainable outcomes.


Subject(s)
Community-Based Participatory Research/methods; Community-Based Participatory Research/organization & administration; Environmental Health; Community-Based Participatory Research/standards; Crowdsourcing/methods; Crowdsourcing/standards; Decision Making; Health Policy; Humans
8.
Brain ; 140(6): 1680-1691, 2017 Jun 01.
Article in English | MEDLINE | ID: mdl-28459961

ABSTRACT

There exist significant clinical and basic research needs for accurate, automated seizure detection algorithms. These algorithms have translational potential in responsive neurostimulation devices and in automatic parsing of continuous intracranial electroencephalography data. An important barrier to developing accurate, validated algorithms for seizure detection is limited access to high-quality, expertly annotated seizure data from prolonged recordings. To overcome this, we hosted a kaggle.com competition to crowdsource the development of seizure detection algorithms using intracranial electroencephalography from canines and humans with epilepsy. The top three performing algorithms from the contest were then validated on out-of-sample patient data including standard clinical data and continuous ambulatory human data obtained over several years using the implantable NeuroVista seizure advisory system. Two hundred teams of data scientists from all over the world participated in the kaggle.com competition. The top performing teams submitted highly accurate algorithms with consistent performance in the out-of-sample validation study. The performance of these seizure detection algorithms, achieved using freely available code and data, sets a new reproducible benchmark for personalized seizure detection. We have also shared a 'plug and play' pipeline to allow other researchers to easily use these algorithms on their own datasets. The success of this competition demonstrates how sharing code and high-quality data results in the creation of powerful translational tools with significant potential to impact patient care.


Subject(s)
Algorithms; Crowdsourcing/methods; Electrocorticography/methods; Equipment Design/methods; Seizures/diagnosis; Adult; Animals; Crowdsourcing/standards; Disease Models, Animal; Electrocorticography/standards; Equipment Design/standards; Humans; Prostheses and Implants; Reproducibility of Results
9.
Behav Res Methods ; 49(6): 1969-1983, 2017 12.
Article in English | MEDLINE | ID: mdl-28127682

ABSTRACT

The use of online crowdsourcing services like Amazon's Mechanical Turk (AMT) as a method of collecting behavioral data online has become increasingly popular in recent years. A growing body of contemporary research has empirically validated the use of AMT as a tool in psychological research by replicating a wide range of well-established effects that have been previously reported in controlled laboratory studies. However, the potential for AMT to be used to conduct spatial cuing experiments has yet to be investigated in depth. Spatial cuing tasks are typically very basic in terms of their stimulus complexity and experimental testing procedures, thus making them ideal for remote testing online that requires minimal task instruction. Studies employing the spatial cuing paradigm are typically aimed at unveiling novel facets of the symbolic control of attention, which occurs whenever observers orient their attention through space in accordance with the meaning of a spatial cue. Ultimately, the present study empirically validated the use of AMT to study the symbolic control of attention by successfully replicating four hallmark effects reported throughout the visual attention literature: the left/right advantage, cue type effect, cued axis effect, and cued endpoint effect. Various recommendations for future endeavors using AMT as a means of remotely collecting behavioral data online are also provided. In sum, the present study provides a crucial first step toward establishing a novel tool for conducting psychological research that can be used to expedite not only our own scientific contributions, but also those of our colleagues.


Subject(s)
Attention/physiology; Biomedical Research/methods; Crowdsourcing/methods; Cues; Internet; Space Perception/physiology; Visual Perception/physiology; Adult; Biomedical Research/standards; Crowdsourcing/standards; Humans
10.
Behav Res Methods ; 49(1): 320-334, 2017 02.
Article in English | MEDLINE | ID: mdl-26907746

ABSTRACT

Increasingly, researchers have begun to explore the potential of the Internet to reach beyond the traditional undergraduate sample. In the present study, we sought to compare the data obtained from a conventional undergraduate college-student sample to data collected via two online recruitment platforms. To examine whether the data sampled from the three populations were equivalent, we conducted a test of equivalency using inferential confidence intervals, an approach that differs from more traditional null hypothesis significance testing. The results showed that the data obtained via the two online recruitment platforms, the Amazon Mechanical Turk crowdsourcing site and the virtual environment of Second Life, were statistically equivalent to the data obtained from the college sample on the basis of means of standardized measures of psychological stress and sleep quality. Additionally, correlations between the sleep and stress measures did not differ statistically between the groups. These results are discussed along with practical considerations for the use of these recruitment platforms, and recommendations are provided for other researchers who may be considering their use.


Subject(s)
Crowdsourcing; Outcome Assessment, Health Care/standards; Patient Selection; Self Report/standards; User-Computer Interface; Adult; Crowdsourcing/methods; Crowdsourcing/standards; Female; Humans; Internet; Male; Sleep Hygiene; Stress, Psychological/diagnosis; Stress, Psychological/psychology; Students/psychology; Students/statistics & numerical data; Surveys and Questionnaires
12.
Annu Rev Clin Psychol ; 12: 53-81, 2016.
Article in English | MEDLINE | ID: mdl-26772208

ABSTRACT

Crowdsourcing has had a dramatic impact on the speed and scale at which scientific research can be conducted. Clinical scientists have particularly benefited from the readily available research study participants and the streamlined recruiting and payment systems afforded by Amazon Mechanical Turk (MTurk), a popular labor market for crowdsourcing workers. MTurk has been used in this capacity for more than five years. The popularity and novelty of the platform have spurred numerous methodological investigations, making it the most-studied nonprobability sample available to researchers. This article summarizes what is known about MTurk sample composition and data quality, with an emphasis on findings relevant to clinical psychological research. It then addresses methodological issues with using MTurk (many of which are common to other nonprobability samples but unfamiliar to clinical science researchers) and suggests concrete steps to avoid these issues or minimize their impact.


Subject(s)
Biomedical Research/statistics & numerical data; Crowdsourcing/statistics & numerical data; Biomedical Research/standards; Crowdsourcing/standards; Humans
13.
Int J Health Geogr ; 15(1): 20, 2016 06 23.
Article in English | MEDLINE | ID: mdl-27339260

ABSTRACT

Adverse neighborhood conditions play an important role in health beyond individual characteristics. There is increasing interest in identifying the specific characteristics of social and built environments that adversely affect health outcomes. Most research has assessed such exposures via self-reported instruments or census data. Potential threats in the local environment may be subject to short-term changes that can only be measured with more nimble technology, and the advent of new technologies may offer opportunities to obtain geospatial data about neighborhoods that circumvent the limitations of traditional data sources. This overview describes the utility, validity, and reliability of selected emerging technologies for measuring neighborhood conditions in public health applications, and describes next steps for future research and opportunities for interventions. The paper presents an overview of the literature on measurement of the built and social environment in public health (Google Street View, webcams, crowdsourcing, remote sensing, social media, unmanned aerial vehicles, and lifespace) and on location-based interventions. Emerging technologies such as Google Street View, social media, drones, webcams, and crowdsourcing may serve as effective and inexpensive tools to measure the ever-changing environment. Georeferenced social media responses may help not only identify where to target intervention activities but also passively evaluate their effectiveness. Future studies should measure exposure across key time points during the life course as part of the exposome paradigm and integrate various types of data sources to measure environmental contexts. By harnessing these technologies, public health research can not only monitor populations and the environment but also intervene with novel strategies to improve public health.


Subject(s)
Data Collection/methods; Environment; Public Health/methods; Residence Characteristics/statistics & numerical data; Social Environment; Crowdsourcing/standards; Data Collection/standards; Environment Design; Geographic Information Systems/standards; Humans; Public Health/standards; Reproducibility of Results; Social Media/standards
14.
Behav Res Methods ; 47(2): 361-73, 2015 Jun.
Article in English | MEDLINE | ID: mdl-24878596

ABSTRACT

We describe Ostracism Online, a novel, social media-based ostracism paradigm designed to (1) keep social interaction experimentally controlled, (2) provide researchers with the flexibility to manipulate the properties of the social situation to fit their research purposes, (3) be suitable for online data collection, (4) be convenient for studying subsequent within-group behavior, and (5) be ecologically valid. After collecting data online, we compared the Ostracism Online paradigm with the Cyberball paradigm (Williams & Jarvis Behavior Research Methods, 38, 174-180, 2006) on need-threat and mood questionnaire scores (van Beest & Williams Journal of Personality and Social Psychology 91, 918-928, 2006). We also examined whether ostracized targets of either paradigm would be more likely to conform to their group members than if they had been included. Using a Bayesian analysis of variance to examine the individual effects of the different paradigms and to compare these effects across paradigms, we found analogous effects on need-threat and mood. Perhaps because we examined conformity to the ostracizers (rather than neutral sources), neither paradigm showed effects of ostracism on conformity. We conclude that Ostracism Online is a cost-effective, easy to use, and ecologically valid research tool for studying the psychological and behavioral effects of ostracism.


Subject(s)
Crowdsourcing; Interpersonal Relations; Adult; Bayes Theorem; Behavioral Research/methods; Crowdsourcing/methods; Crowdsourcing/standards; Humans; Psychological Distance; Psychology, Social/methods; Social Media; Surveys and Questionnaires
15.
J Surg Res ; 187(1): 65-71, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24555877

ABSTRACT

BACKGROUND: Validated methods of objective assessment of surgical skills are resource intensive. We sought to test a web-based grading tool using crowdsourcing called Crowd-Sourced Assessment of Technical Skill. MATERIALS AND METHODS: Institutional Review Board approval was granted to test the accuracy of Amazon.com's Mechanical Turk and Facebook crowdworkers compared with experienced surgical faculty in grading a recorded dry-laboratory robotic surgical suturing performance, using three performance domains from a validated assessment tool. Assessors' free-text comments describing their rating rationale were used to explore a relationship between the language used by the crowd and grading accuracy. RESULTS: Of a total possible global performance score of 3-15, 10 experienced surgeons graded the suturing video at a mean score of 12.11 (95% confidence interval [CI], 11.11-13.11). Mechanical Turk and Facebook graders rated the video at mean scores of 12.21 (95% CI, 11.98-12.43) and 12.06 (95% CI, 11.57-12.55), respectively. It took 24 hours to obtain responses from 501 Mechanical Turk subjects, whereas it took 24 days for the 10 faculty surgeons to complete the 3-minute survey; the 110 Facebook subjects responded within 25 days. Language analysis indicated that crowdworkers who used negation words (i.e., "but," "although," and so forth) scored the performance more equivalently to experienced surgeons than crowdworkers who did not (P < 0.00001). CONCLUSIONS: For a robotic suturing performance, we have shown that surgery-naive crowdworkers can rapidly assess skill equivalently to experienced faculty surgeons using Crowd-Sourced Assessment of Technical Skill. It remains to be seen whether crowds can discriminate different levels of skill and can accurately assess human surgery performances.
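The equivalence claim above rests on overlapping means and 95% confidence intervals across rater groups. A minimal sketch of that computation follows; the rating lists are invented for illustration and are not the study's data.

```python
import statistics

def mean_ci95(scores):
    """Mean and normal-approximation 95% confidence interval for a score list."""
    n = len(scores)
    mean = statistics.mean(scores)
    sem = statistics.stdev(scores) / n ** 0.5  # standard error of the mean
    margin = 1.96 * sem                        # z-based 95% half-width
    return mean, (mean - margin, mean + margin)

# Hypothetical ratings on the study's 3-15 global performance scale
faculty_scores = [12, 13, 11, 12, 13, 12, 11, 13, 12, 12]
mean, (lo, hi) = mean_ci95(faculty_scores)
print(f"mean={mean:.2f}, 95% CI=({lo:.2f}, {hi:.2f})")
```

Two rater groups whose intervals overlap (as the MTurk, Facebook, and faculty intervals do here) are consistent with equivalent grading; a formal equivalence test would go further, but this captures the comparison the abstract reports.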


Subject(s)
Competency-Based Education/methods; Crowdsourcing/methods; Educational Measurement/methods; General Surgery/education; Robotics/education; Adult; Competency-Based Education/standards; Crowdsourcing/standards; Data Collection; Depth Perception; Educational Measurement/standards; Humans; Internet; Internship and Residency/methods; Internship and Residency/standards; Reference Standards; Suture Techniques/education; Young Adult
18.
Behav Res Methods ; 46(4): 1023-31, 2014 Dec.
Article in English | MEDLINE | ID: mdl-24356996

ABSTRACT

Data quality is one of the major concerns of using crowdsourcing websites such as Amazon Mechanical Turk (MTurk) to recruit participants for online behavioral studies. We compared two methods for ensuring data quality on MTurk: attention check questions (ACQs) and restricting participation to MTurk workers with high reputation (above 95% approval ratings). In Experiment 1, we found that high-reputation workers rarely failed ACQs and provided higher-quality data than did low-reputation workers; ACQs improved data quality only for low-reputation workers, and only in some cases. Experiment 2 corroborated these findings and also showed that more productive high-reputation workers produce the highest-quality data. We concluded that sampling high-reputation workers can ensure high-quality data without having to resort to using ACQs, which may lead to selection bias if participants who fail ACQs are excluded post-hoc.


Subject(s)
Crowdsourcing; Patient Selection; Research Design/standards; Behavioral Research/methods; Crowdsourcing/methods; Crowdsourcing/standards; Data Collection; Humans; Internet
19.
J Med Internet Res ; 15(4): e73, 2013 Apr 02.
Article in English | MEDLINE | ID: mdl-23548263

ABSTRACT

BACKGROUND: A high-quality gold standard is vital for supervised, machine learning-based, clinical natural language processing (NLP) systems. In clinical NLP projects, expert annotators traditionally create the gold standard. However, traditional annotation is expensive and time-consuming. To reduce the cost of annotation, general NLP projects have turned to crowdsourcing based on Web 2.0 technology, which involves submitting smaller subtasks to a coordinated marketplace of workers on the Internet. Many studies have been conducted in the area of crowdsourcing, but only a few have focused on tasks in the general NLP field, and only a handful in the biomedical domain, usually based upon very small pilot sample sizes. In addition, the quality of crowdsourced biomedical NLP corpora was never exceptional when compared to traditionally-developed gold standards. Previously reported results on the medical named entity annotation task showed a 0.68 F-measure-based agreement between crowdsourced and traditionally-developed corpora. OBJECTIVE: Building upon previous work from general crowdsourcing research, this study investigated the usability of crowdsourcing in the clinical NLP domain, with special emphasis on achieving high agreement between crowdsourced and traditionally-developed corpora. METHODS: To build the gold standard for evaluating the crowdsourcing workers' performance, 1042 clinical trial announcements (CTAs) from the ClinicalTrials.gov website were randomly selected and double annotated for medication names, medication types, and linked attributes. For the experiments, we used CrowdFlower, an Amazon Mechanical Turk-based crowdsourcing platform. We calculated sensitivity, precision, and F-measure to evaluate the quality of the crowd's work and tested for statistical significance (P<.001, chi-square test) to detect differences between the crowdsourced and traditionally-developed annotations. RESULTS: The agreement between the crowd's annotations and the traditionally-generated corpora was high for (1) annotations (0.87 F-measure for medication names; 0.73 for medication types) and (2) correction of previous annotations (0.90 for medication names; 0.76 for medication types), and excellent for (3) linking medications with their attributes (0.96). Simple voting provided the best judgment-aggregation approach. There was no statistically significant difference between the crowdsourced and traditionally-generated corpora. Our results showed a 27.9% improvement over previously reported results on the medication named entity annotation task. CONCLUSIONS: This study offers three contributions. First, we showed that crowdsourcing is a feasible, inexpensive, fast, and practical approach to collecting high-quality annotations for clinical text (when protected health information is excluded). We believe that well-designed user interfaces and a rigorous quality control strategy for entity annotation and linking were critical to the success of this work. Second, as a further contribution to the Internet-based crowdsourcing field, we will publicly release the JavaScript and CrowdFlower Markup Language infrastructure code necessary to utilize CrowdFlower's quality control and crowdsourcing interfaces for named entity annotations. Finally, to spur future research, we will release the CTA annotations generated by the traditional and crowdsourced approaches.
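The agreement figures above are F-measures between crowd annotations and a gold standard. A minimal sketch of how such a score is computed over entity sets; the annotations below are hypothetical examples, not drawn from the study's corpus.

```python
def f_measure(gold, predicted):
    """Precision, recall (sensitivity), and F1 between two annotation sets."""
    tp = len(gold & predicted)                       # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    return precision, recall, 2 * precision * recall / (precision + recall)

# Hypothetical (document_id, span_text) medication-name annotations
gold = {(1, "aspirin"), (1, "warfarin"), (2, "metformin"), (2, "insulin")}
crowd = {(1, "aspirin"), (2, "metformin"), (2, "insulin"), (2, "lipitor")}
p, r, f1 = f_measure(gold, crowd)
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")  # → 0.75 / 0.75 / 0.75
```

Exact set overlap is the simplest matching criterion; annotation studies often also report partial-span matches, which this sketch does not attempt.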


Subject(s)
Crowdsourcing/standards; Natural Language Processing; Social Media; Telemedicine/standards; Clinical Trials as Topic/statistics & numerical data; Crowdsourcing/statistics & numerical data; Humans; Internet; Pilot Projects; Quality Control; Telemedicine/statistics & numerical data
20.
J Glob Health ; 11: 09001, 2021 Mar 01.
Article in English | MEDLINE | ID: mdl-33791099

ABSTRACT

BACKGROUND: Crowdsourcing is recognized as having the potential to collect information rapidly, inexpensively, and accurately. U-Report is a mobile empowerment platform that connects young people all over the world to information that will change their lives and influence decisions. Previous studies of U-Report's effectiveness highlight its strengths in timeliness, low cost, and high credibility for collecting and sending information, but also identify room for improvement in data representativeness. EquityTool has developed a simpler approach to assessing the wealth quintiles of respondents, based on a small number of questions derived from large household surveys such as the Multiple Indicator Cluster Surveys (MICS) and Demographic and Health Surveys (DHS). METHODS: The EquityTool methodology was adopted to assess the socio-economic profile of U-Reporters (ie, enrolled participants of U-Report) in Bangladesh. A RapidPro flow collected the survey responses and scored them against the DHS national wealth index using the EquityTool methodology, placing each U-Reporter who completed all questions into the appropriate wealth quintile. RESULTS: With 19% of respondents completing all questions, respondents fell into all 5 wealth quintiles, with 79% in the top two quintiles and only 21% in the lower three, resulting in an Equity Index of 53/100, where 100 indicates a sample fully in line with Bangladesh's national wealth distribution and 1 the least in line. An equitable random sample of 1828 U-Reporters from among the regular and frequent respondents was subsequently created for future surveys; this sample has an Equity Index of 98/100. CONCLUSIONS: U-Report in Bangladesh does reach the poorest quintiles, although initial recruitment skews towards respondents from better-off families. It is possible to create an equitable random sub-sample of respondents spanning all five wealth quintiles and thus to process information and data for future surveys. Moving forward, U-Reporters from the poorly represented quintiles may be incentivized to recruit peers to increase equity and representation. In times of COVID-19, U-Report in combination with the EquityTool has the potential to enhance the quality of crowdsourced data for statistical analysis.
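The quintile placement described above can be sketched as follows. The cutoffs, respondent scores, and the simple deviation-based "equity index" here are illustrative assumptions; the abstract does not specify EquityTool's actual scoring formula.

```python
import bisect

def assign_quintile(score, cutoffs):
    """Map a wealth-index score to quintile 1 (poorest) .. 5 (richest)
    using the four national quintile boundary values."""
    return bisect.bisect_right(cutoffs, score) + 1

def quintile_shares(scores, cutoffs):
    """Fraction of respondents falling into each of the five quintiles."""
    counts = [0] * 5
    for s in scores:
        counts[assign_quintile(s, cutoffs) - 1] += 1
    return [c / len(scores) for c in counts]

# Hypothetical national cutoffs and respondent wealth-index scores
cutoffs = [-0.8, -0.3, 0.2, 0.9]
scores = [1.4, 1.1, 0.5, -0.1, 1.3, 0.4, 1.0, -0.9, 0.7, 1.2]
shares = quintile_shares(scores, cutoffs)
# Toy index: 100 minus the scaled total deviation from uniform 20% shares
equity_index = 100 * (1 - sum(abs(s - 0.2) for s in shares) / 2)
print(shares, round(equity_index))  # this sample skews towards Q4-Q5
```

A perfectly representative sample (20% in each quintile) scores 100 on this toy index; a sample concentrated in one quintile scores near 0, mirroring the 53/100 versus 98/100 contrast in the abstract.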


Subject(s)
Crowdsourcing/standards; Surveys and Questionnaires/standards; Bangladesh; Female; Forecasting; Humans; Male; Socioeconomic Factors