Gamified Crowdsourcing as a Novel Approach to Lung Ultrasound Data Set Labeling: Prospective Analysis.
Duggan, Nicole M; Jin, Mike; Duran Mendicuti, Maria Alejandra; Hallisey, Stephen; Bernier, Denie; Selame, Lauren A; Asgari-Targhi, Ameneh; Fischetti, Chanel E; Lucassen, Ruben; Samir, Anthony E; Duhaime, Erik; Kapur, Tina; Goldsmith, Andrew J.
Affiliation
  • Duggan NM; Department of Emergency Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States.
  • Jin M; Department of Emergency Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States.
  • Duran Mendicuti MA; Centaur Labs, Boston, MA, United States.
  • Hallisey S; Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States.
  • Bernier D; Department of Emergency Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States.
  • Selame LA; Department of Emergency Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States.
  • Asgari-Targhi A; Department of Emergency Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States.
  • Fischetti CE; Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States.
  • Lucassen R; Department of Emergency Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States.
  • Samir AE; Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands.
  • Duhaime E; Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States.
  • Kapur T; Centaur Labs, Boston, MA, United States.
  • Goldsmith AJ; Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States.
J Med Internet Res; 26: e51397, 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-38963923
ABSTRACT

BACKGROUND:

Machine learning (ML) models can yield faster and more accurate medical diagnoses; however, developing ML models is limited by a lack of high-quality labeled training data. Crowdsourced labeling is a potential solution but can be constrained by concerns about label quality.

OBJECTIVE:

This study aims to examine whether a gamified crowdsourcing platform with continuous performance assessment, user feedback, and performance-based incentives could produce expert-quality labels on medical imaging data.

METHODS:

In this diagnostic comparison study, 2384 lung ultrasound clips were retrospectively collected from 203 emergency department patients. A total of 6 lung ultrasound experts classified 393 of these clips as having no B-lines, one or more discrete B-lines, or confluent B-lines to create 2 reference standard data sets (195 training clips and 198 test clips). These sets were used, respectively, to (1) train users on a gamified crowdsourcing platform and (2) compare the concordance of the resulting crowd labels with the concordance of individual experts relative to the reference standard. Crowd opinions were sourced from DiagnosUs (Centaur Labs) iOS app users over 8 days, filtered based on past performance, aggregated using majority rule, and analyzed for label concordance against a held-out test set of expert-labeled clips. The primary outcome was the labeling concordance of aggregated crowd opinions compared with that of trained experts in classifying B-lines on lung ultrasound clips.
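The majority-rule aggregation described above can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual pipeline; the label names and the tie-breaking behavior (first label to reach the top count wins) are assumptions.

```python
from collections import Counter

def majority_label(opinions):
    """Aggregate one clip's crowd opinions by simple majority rule.

    Ties are broken by whichever label first reached the top count
    (an assumption; the study does not specify tie handling)."""
    return Counter(opinions).most_common(1)[0][0]

def concordance(predicted, reference):
    """Fraction of clips whose aggregated label matches the reference standard."""
    matches = sum(p == r for p, r in zip(predicted, reference))
    return matches / len(reference)

# Hypothetical example: three clips, each with several crowd opinions
crowd = [
    ["none", "none", "discrete"],           # majority: none
    ["discrete", "confluent", "discrete"],  # majority: discrete
    ["confluent", "confluent", "none"],     # majority: confluent
]
reference = ["none", "discrete", "none"]

labels = [majority_label(ops) for ops in crowd]
print(labels)                          # ['none', 'discrete', 'confluent']
print(concordance(labels, reference))  # 2 of 3 clips match -> 0.666...
```

In practice, the study additionally filtered opinions on past user performance before aggregation, which this sketch omits.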

RESULTS:

Our clinical data set included patients with a mean age of 60.0 (SD 19.0) years; 105 (51.7%) patients were female and 114 (56.1%) patients were White. Across the 195 training clips, the expert-consensus label distribution was 114 (58%) no B-lines, 56 (29%) discrete B-lines, and 25 (13%) confluent B-lines. Across the 198 test clips, the expert-consensus label distribution was 138 (70%) no B-lines, 36 (18%) discrete B-lines, and 24 (12%) confluent B-lines. In total, 99,238 opinions were collected from 426 unique users. On the test set of 198 clips, the mean labeling concordance of individual experts relative to the reference standard was 85.0% (SE 2.0), compared with 87.9% for the crowdsourced labels (P=.15). When each individual expert's opinions were compared with reference standard labels created by majority vote excluding that expert's own opinion, crowd concordance was higher than the mean concordance of individual experts with the reference standards (87.4% vs 80.8%, SE 1.6 for expert concordance; P<.001). Clips with discrete B-lines generated the most disagreement with the expert consensus, both for the crowd consensus and for individual experts. Using randomly sampled subsets of crowd opinions, as few as 7 quality-filtered opinions per clip were sufficient to achieve near-maximal crowd concordance.
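The subsampling analysis in the final sentence, which estimates how many opinions per clip are needed before concordance plateaus, can be sketched as follows. This is a hypothetical reconstruction under stated assumptions: data structures, function names, and the number of trials are illustrative, not the study's code.

```python
import random
from collections import Counter

def subsample_concordance(opinions_by_clip, reference, k, trials=100, seed=0):
    """Estimate label concordance when only k randomly sampled opinions
    per clip are aggregated by majority rule.

    opinions_by_clip: dict mapping clip ID -> list of crowd opinions
    reference:        dict mapping clip ID -> expert reference label
    Returns the mean concordance over `trials` random subsamples."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        correct = 0
        for clip_id, ref_label in reference.items():
            ops = opinions_by_clip[clip_id]
            sample = rng.sample(ops, min(k, len(ops)))
            label = Counter(sample).most_common(1)[0][0]
            correct += (label == ref_label)
        total += correct / len(reference)
    return total / trials
```

Sweeping k from 1 upward and plotting the resulting concordance would reveal the plateau the study reports at around 7 quality-filtered opinions.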

CONCLUSIONS:

Crowdsourced labels for B-line classification on lung ultrasound clips via a gamified approach achieved expert-level accuracy. This suggests a strategic role for gamified crowdsourcing in efficiently generating labeled image data sets for training ML systems.

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Ultrasonography / Crowdsourcing / Lung Limits: Adult / Female / Humans / Male / Middle aged Language: English Journal: J Med Internet Res Journal subject: Medical Informatics Year: 2024 Document type: Article Affiliation country: United States
