Automating Linear and Angular Measurements for the Hip and Knee After Computed Tomography: Validation of a Three-Stage Deep Learning and Computer Vision-Based Pipeline for Pathoanatomic Assessment.
Vidhani, Faizaan R; Woo, Joshua J; Zhang, Yibin B; Olsen, Reena J; Ramkumar, Prem N.
Affiliation
  • Vidhani FR; Brown University/The Warren Alpert Medical School of Brown University, Providence, RI, USA.
  • Woo JJ; Brown University/The Warren Alpert Medical School of Brown University, Providence, RI, USA.
  • Zhang YB; Harvard Medical School/Brigham and Women's Hospital, Boston, MA, USA.
  • Olsen RJ; Sports Medicine Institute, Hospital for Special Surgery, New York, NY, USA.
  • Ramkumar PN; Long Beach Orthopedic Institute, Long Beach, CA, USA.
Arthroplast Today; 27: 101394, 2024 Jun.
Article in En | MEDLINE | ID: mdl-39071819
ABSTRACT

Background:

Variability in the bony morphology of pathologic hips and knees is a challenge in automating preoperative computed tomography (CT) scan measurements. With the increasing prevalence of CT for advanced preoperative planning, processing these data represents a critical bottleneck in presurgical planning, research, and development. The purpose of this study was to demonstrate a reproducible and scalable methodology for processing CT-based hip and knee anatomy for perioperative planning and execution.

Methods:

Preoperative CT scans from one hundred patients undergoing total knee arthroplasty for osteoarthritis were processed. A two-step deep learning pipeline of classification and segmentation models was developed to identify landmark images and then generate contour representations. We utilized an open-source computer vision library to compute measurements. Classification models were assessed by accuracy, precision, and recall. Segmentation models were evaluated using Dice and mean Intersection over Union (IoU) metrics. Contour measurements were compared against manual measurements to validate the posterior condylar axis angle, sulcus angle, trochlear groove-tibial tuberosity distance, acetabular anteversion, and femoral version.
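As a minimal sketch of how the reported evaluation metrics and a contour-based angle measurement might be computed, the Python snippet below is illustrative only; the abstract does not identify the specific open-source computer vision library or implementation, so NumPy usage, the OpenCV call mentioned in the final comment, and all function names are assumptions.

# Illustrative sketch only: Dice and mean IoU for binary segmentation masks,
# and an angle derived from two contour-based axes. Not the authors' code.
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice = 2*|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

def mean_iou(pred, truth, eps=1e-7):
    """Mean IoU over background (0) and foreground (1) of a binary mask."""
    ious = []
    for cls in (0, 1):
        p, t = (pred == cls), (truth == cls)
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        ious.append((inter + eps) / (union + eps))
    return float(np.mean(ious))

def angle_between(p1, p2, q1, q2):
    """Angle in degrees between line p1->p2 and line q1->q2, e.g., a posterior
    condylar axis measured against a reference axis picked off a contour."""
    v1 = np.asarray(p2, float) - np.asarray(p1, float)
    v2 = np.asarray(q2, float) - np.asarray(q1, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# With OpenCV (one plausible choice of library), contours could be pulled from
# a predicted mask before landmark selection, e.g.:
# contours, _ = cv2.findContours(mask.astype(np.uint8), cv2.RETR_EXTERNAL,
#                                cv2.CHAIN_APPROX_SIMPLE)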

Results:

Classifiers identified landmark images with accuracies of 0.91 and 0.88 for the hip and knee models, respectively. Segmentation models demonstrated mean IoU scores above 0.95, with the highest Dice coefficient of 0.957 [0.954-0.961] (UNet3+) and the highest mean IoU of 0.965 [0.961-0.969] (Attention U-Net). There were no statistically significant differences between automated and manual measurements (P > 0.05). Average time for the pipeline to preprocess (48.65 ± 4.41 seconds), classify/retrieve landmark images (8.36 ± 3.40 seconds), segment images (<1 second), and obtain measurements was 2.58 ± 1.92 minutes.
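As a hedged illustration of the agreement check reported above, a paired comparison between automated and manual measurements could be run as follows; the abstract does not name the statistical test used, and the values below are hypothetical placeholders, not study data.

# Illustrative sketch only: paired comparison of automated vs. manual
# measurements. The test choice and the numbers are assumptions, not the
# authors' analysis or data.
import numpy as np
from scipy import stats

automated = np.array([12.4, 15.1, 9.8, 13.0, 11.2])  # hypothetical femoral version angles (degrees)
manual = np.array([12.9, 14.7, 10.1, 12.6, 11.5])     # matched manual readings (hypothetical)

t_stat, p_value = stats.ttest_rel(automated, manual)  # paired t-test
print(f"paired t-test: t = {t_stat:.2f}, P = {p_value:.3f}")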

Conclusions:

A fully automated three-stage deep learning and computer vision-based pipeline of classification and segmentation models accurately localized, segmented, and measured landmark hip and knee images for patients undergoing total knee arthroplasty. Incorporating clinical parameters, such as patient-reported outcome measures and instability risk, will be an important consideration alongside anatomic parameters.

Full text: 1 Database: MEDLINE Language: En Year of publication: 2024 Document type: Article
