Results 1 - 2 of 2
1.
J Arthroplasty; 2024 Apr 27.
Article in English | MEDLINE | ID: mdl-38679347

ABSTRACT

BACKGROUND: Increasing deformity of the lower extremities, as measured by the hip-knee-ankle angle (HKAA), is associated with poor patient outcomes after total hip and total knee arthroplasty (THA and TKA). Automated calculation of the HKAA is needed to reduce the measurement burden on orthopaedic surgeons. We proposed a detection-based deep learning (DL) model to calculate the HKAA in THA and TKA patients and assessed the agreement between DL-derived HKAAs and manual measurement.

METHODS: We retrospectively identified 1,379 long-leg radiographs (LLRs) from patients scheduled for THA or TKA within an academic medical center. A total of 1,221 LLRs were used to develop the model (randomly split into 70% training, 20% validation, and 10% held-out test sets); 158 LLRs were considered "difficult," as the femoral head was hard to distinguish from the surrounding tissue. Two raters annotated the HKAA of both lower extremities, and inter-rater reliability was calculated to compare the DL-derived HKAAs with manual measurement within the test set.

RESULTS: The DL model achieved a mean average precision of 0.985 on the test set. The average HKAA of the operative leg was 173.05 ± 4.54°; that of the nonoperative leg was 175.55 ± 3.56°. Agreement between manual and DL-derived HKAA measurements indicated excellent reliability for both the operative and nonoperative legs (intraclass correlation (2,k) = 0.987 [0.96, 0.99] and 0.987 [0.98, 0.99], respectively). The standard error of measurement of the DL-derived HKAA was 0.515° for the operative leg and 0.403° for the nonoperative leg.

CONCLUSIONS: A detection-based DL algorithm can calculate the HKAA in LLRs with results comparable to manual measurement. The algorithm can detect the bilateral femoral head, knee, and ankle joints with high precision, even in patients in whom the femoral head is difficult to visualize.
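Once the femoral head, knee, and ankle have been detected, the HKAA itself reduces to a simple geometric post-processing step: the angle at the knee between the femoral mechanical axis (femoral head center to knee center) and the tibial mechanical axis (knee center to ankle center). The abstract does not describe the authors' implementation, so the Python sketch below is only illustrative; the bounding-box coordinates and the helper names `center` and `hkaa_from_centers` are hypothetical.

```python
# Minimal sketch of turning DL joint detections into an HKAA value.
# Assumes the detector returns one bounding box per joint; box centers
# stand in for the joint centers. All names and numbers are illustrative.
import numpy as np

def center(box):
    """Center (x, y) of a bounding box given as (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = box
    return np.array([(x_min + x_max) / 2.0, (y_min + y_max) / 2.0])

def hkaa_from_centers(hip, knee, ankle):
    """Angle at the knee between the femoral axis (knee -> hip) and the
    tibial axis (knee -> ankle), in degrees; ~180° for a neutral limb."""
    femoral = hip - knee
    tibial = ankle - knee
    cos_angle = np.dot(femoral, tibial) / (np.linalg.norm(femoral) * np.linalg.norm(tibial))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Hypothetical detections (pixel coordinates) for one leg:
boxes = {
    "femoral_head": (410, 210, 470, 270),
    "knee":         (455, 910, 525, 980),
    "ankle":        (430, 1620, 490, 1680),
}
hkaa = hkaa_from_centers(center(boxes["femoral_head"]),
                         center(boxes["knee"]),
                         center(boxes["ankle"]))
print(f"HKAA: {hkaa:.2f} deg")  # ~173.5° for these example coordinates
```

Clipping the cosine guards against floating-point values just outside [-1, 1]; a neutral limb yields an angle near 180°, consistent with the averages reported above.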

2.
Med Image Comput Comput Assist Interv; 11768: 101-109, 2019 Oct.
Article in English | MEDLINE | ID: mdl-37011258

ABSTRACT

Intraductal papillary mucinous neoplasm (IPMN) is a precursor to pancreatic ductal adenocarcinoma. Although over half of pancreatic cancer patients are diagnosed at a distant stage, for which the 5-year survival rate is only 3%, patients diagnosed early have a much higher 5-year survival rate of 34%; early diagnosis is therefore key. Unique challenges in the medical imaging domain, such as extremely limited annotated data sets and typically large 3D volumetric data, have made it difficult for deep learning to secure a strong foothold. In this work, we construct two novel "inflated" deep network architectures, InceptINN and DenseINN, for the task of diagnosing IPMN from multisequence (T1 and T2) MRI. These networks inflate their 2D layers to 3D and bootstrap weights from their 2D counterparts (Inceptionv3 and DenseNet121, respectively) trained on ImageNet to the new 3D kernels. We also extend the inflation process by further expanding the pre-trained kernels to handle any number of input modalities and different fusion strategies. This is one of the first studies to train an end-to-end deep network on multisequence MRI for IPMN diagnosis, and it shows that our proposed inflated network architectures can handle the extremely limited training data (139 MRI scans) while providing an absolute improvement of 8.76% in accuracy for diagnosing IPMN over the current state of the art. Code is publicly available at https://github.com/lalonderodney/INN-Inflated-Neural-Nets.
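The core trick the abstract describes, inflating pretrained 2D kernels to 3D and rescaling them so activations stay comparable, plus adapting the first layer to an arbitrary number of input modalities (here T1 and T2), can be sketched as follows. This is a hedged PyTorch illustration of the general bootstrapping idea, not the authors' released code (see the linked repository for that); the function name `inflate_conv2d` and the shapes in the example are assumptions.

```python
# Sketch of inflating a pretrained 2D convolution to 3D. Repeating the 2D
# weights along the new depth axis and dividing by the depth keeps the
# response comparable for inputs that are constant across slices. The
# first-layer adaptation to a different number of input modalities follows
# the same replicate-and-rescale idea; exact details may differ from the paper.
import torch
import torch.nn as nn

def inflate_conv2d(conv2d: nn.Conv2d, depth: int, in_channels: int = None) -> nn.Conv3d:
    """Build a Conv3d whose weights are bootstrapped from a pretrained Conv2d."""
    in_ch = in_channels if in_channels is not None else conv2d.in_channels
    conv3d = nn.Conv3d(
        in_ch, conv2d.out_channels,
        kernel_size=(depth, *conv2d.kernel_size),
        stride=(1, *conv2d.stride),
        padding=(depth // 2, *conv2d.padding),
        bias=conv2d.bias is not None,
    )
    w2d = conv2d.weight.data  # shape: (out, in, kH, kW)
    if in_ch != conv2d.in_channels:
        # Average over the original input channels, tile to the new channel
        # count (e.g. ImageNet RGB -> 2 MRI sequences), and rescale so the
        # summed response is preserved.
        w2d = w2d.mean(dim=1, keepdim=True).repeat(1, in_ch, 1, 1) * (conv2d.in_channels / in_ch)
    # Replicate along the new depth axis and normalize by the depth.
    w3d = w2d.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth
    conv3d.weight.data.copy_(w3d)
    if conv2d.bias is not None:
        conv3d.bias.data.copy_(conv2d.bias.data)
    return conv3d

# Example: inflate a stand-in for a pretrained 2D stem convolution.
conv2d = nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1)
conv3d = inflate_conv2d(conv2d, depth=3, in_channels=2)   # 2 modalities: T1, T2
x = torch.randn(1, 2, 16, 128, 128)                       # (batch, modality, slices, H, W)
print(conv3d(x).shape)                                     # torch.Size([1, 64, 16, 64, 64])
```

Dividing the replicated weights by the depth means a volume whose slices are identical produces, up to boundary effects, the same response as the original 2D network on a single slice, which is what makes the ImageNet initialization meaningful in 3D.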
