A Deep Learning Tool for Automated Landmark Annotation on Hip and Pelvis Radiographs.
Mulford, Kellen L; Johnson, Quinn J; Mujahed, Tala; Khosravi, Bardia; Rouzrokh, Pouria; Mickley, John P; Taunton, Michael J; Wyles, Cody C.
Affiliation
  • Mulford KL; Department of Orthopedic Surgery, Orthopedic Surgery Artificial Intelligence Laboratory, Mayo Clinic, Rochester, Minnesota.
  • Johnson QJ; Mayo Clinic Alix School of Medicine, Mayo Clinic, Rochester, Minnesota.
  • Mujahed T; Mayo Clinic Alix School of Medicine, Scottsdale, Arizona.
  • Khosravi B; Department of Orthopedic Surgery, Orthopedic Surgery Artificial Intelligence Laboratory, Mayo Clinic, Rochester, Minnesota; Mayo Clinic Artificial Intelligence Laboratory, Mayo Clinic, Rochester, Minnesota.
  • Rouzrokh P; Department of Orthopedic Surgery, Orthopedic Surgery Artificial Intelligence Laboratory, Mayo Clinic, Rochester, Minnesota; Mayo Clinic Artificial Intelligence Laboratory, Mayo Clinic, Rochester, Minnesota.
  • Mickley JP; Department of Orthopedic Surgery, Orthopedic Surgery Artificial Intelligence Laboratory, Mayo Clinic, Rochester, Minnesota.
  • Taunton MJ; Department of Orthopedic Surgery, Orthopedic Surgery Artificial Intelligence Laboratory, Mayo Clinic, Rochester, Minnesota; Mayo Clinic Department of Orthopedic Surgery, Mayo Clinic, Rochester, Minnesota.
  • Wyles CC; Department of Orthopedic Surgery, Orthopedic Surgery Artificial Intelligence Laboratory, Mayo Clinic, Rochester, Minnesota; Mayo Clinic Department of Orthopedic Surgery, Mayo Clinic, Rochester, Minnesota.
J Arthroplasty; 38(10): 2024-2031.e1, 2023 Oct.
Article in En | MEDLINE | ID: mdl-37236288
BACKGROUND: Automated methods for labeling and segmenting pelvis structures can improve the efficiency of clinical and research workflows and reduce the variability introduced by manual labeling. The purpose of this study was to develop a single deep learning model to annotate anatomical structures and landmarks on anteroposterior (AP) pelvis radiographs.

METHODS: A total of 1,100 AP pelvis radiographs were manually annotated by 3 reviewers. These images included a mix of preoperative and postoperative images as well as a mix of AP pelvis and hip views. A convolutional neural network was trained to segment 22 structures (7 points, 6 lines, and 9 shapes). Dice score, which measures the overlap between model output and ground truth, was calculated for the shape and line structures; Euclidean distance error was calculated for the point structures.

RESULTS: The Dice score, averaged across all images in the test set, was 0.88 for the shape structures and 0.80 for the line structures. For the 7 point structures, the average distance between manual and automated annotations ranged from 1.9 mm to 5.6 mm, with all averages falling below 3.1 mm except for the landmark at the center of the sacrococcygeal junction, where performance was low for both human- and machine-produced labels. Blinded qualitative evaluation of human- and machine-produced segmentations did not reveal any drastic decrease in performance of the automated method.

CONCLUSION: We present a deep learning model for automated annotation of pelvis radiographs that flexibly handles a variety of views, contrasts, and operative statuses for 22 structures and landmarks.
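The article itself does not publish code; purely as illustration, the sketch below shows how the two reported evaluation metrics (Dice overlap for shape and line masks, Euclidean landmark error in millimeters) are commonly computed with NumPy. The function names, toy arrays, and pixel spacing value are hypothetical and are not taken from the study.

```python
import numpy as np

def dice_score(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-8) -> float:
    """Overlap between a predicted binary mask and ground truth (1.0 = perfect overlap)."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float(2.0 * intersection / (pred.sum() + gt.sum() + eps))

def point_error_mm(pred_xy, gt_xy, pixel_spacing_mm: float) -> float:
    """Euclidean distance between predicted and ground-truth landmark, converted to millimeters."""
    pred = np.asarray(pred_xy, dtype=float)
    gt = np.asarray(gt_xy, dtype=float)
    return float(np.linalg.norm(pred - gt) * pixel_spacing_mm)

if __name__ == "__main__":
    # Toy masks standing in for one segmented shape structure
    pred = np.zeros((256, 256), dtype=np.uint8)
    gt = np.zeros((256, 256), dtype=np.uint8)
    pred[100:150, 100:150] = 1
    gt[110:160, 105:155] = 1
    print(f"Dice: {dice_score(pred, gt):.3f}")
    # Toy landmark coordinates and an assumed pixel spacing of 0.14 mm/pixel
    print(f"Point error: {point_error_mm((120, 130), (118, 127), pixel_spacing_mm=0.14):.2f} mm")
```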

Full text: 1 Databases: MEDLINE Main subject: Deep Learning Study type: Diagnostic_studies / Guideline / Prognostic_studies / Qualitative_research Limit: Humans Language: En Journal: J Arthroplasty Journal subject: ORTHOPEDICS Publication year: 2023 Document type: Article
