Results 1 - 6 of 6
1.
Osteoarthr Imaging ; 3(1), 2023 Mar.
Article in English | MEDLINE | ID: mdl-39036792

ABSTRACT

Objective: To evaluate whether the deep learning (DL) segmentation methods from the six teams that participated in the IWOAI 2019 Knee Cartilage Segmentation Challenge are appropriate for quantifying cartilage loss in longitudinal clinical trials. Design: We included 556 subjects from the Osteoarthritis Initiative study with manually read cartilage volume scores for the baseline and 1-year visits. The teams used their methods originally trained for the IWOAI 2019 challenge to segment the 1130 knee MRIs. These scans were anonymized, and the teams were blinded to any subject or visit identifiers. Two teams also submitted updated methods. The resulting 9,040 segmentations are available online. The segmentations included tibial, femoral, and patellar compartments. In post-processing, we extracted medial and lateral tibial compartments and geometrically defined central medial and lateral femoral sub-compartments. The primary study outcome was the sensitivity to detect cartilage loss, as quantified by the standardized response mean (SRM). Results: For the tibial compartments, several of the DL segmentation methods had SRMs similar to the gold standard manual method. The highest DL SRM was for the lateral tibial compartment at 0.38 (the gold standard had 0.34). For the femoral compartments, the gold standard had higher SRMs than the automatic methods at 0.31/0.30 for medial/lateral compartments. Conclusion: The lower SRMs for the DL methods in the femoral compartments, at 0.2, were possibly due to the simple sub-compartment extraction done during post-processing. The study demonstrated that state-of-the-art DL segmentation methods may be used in standardized longitudinal single-scanner clinical trials for well-defined cartilage compartments.
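The standardized response mean used as the primary outcome above is simply the mean longitudinal change divided by the standard deviation of that change. A minimal sketch in Python, with made-up cartilage volumes purely for illustration (not study data):

```python
import numpy as np

def standardized_response_mean(baseline, followup):
    """SRM: mean of the longitudinal change divided by its standard deviation."""
    change = np.asarray(followup, float) - np.asarray(baseline, float)
    return change.mean() / change.std(ddof=1)

# Hypothetical cartilage volumes (mm^3) for four knees at baseline and 1 year.
baseline = np.array([1500.0, 1480.0, 1520.0, 1495.0])
followup = np.array([1470.0, 1455.0, 1500.0, 1460.0])
print(round(standardized_response_mean(baseline, followup), 2))  # -4.26
```

A larger |SRM| means a method detects the change more consistently relative to its measurement noise, which is why it serves as the sensitivity criterion here.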

2.
Nat Commun ; 13(1): 4128, 2022 07 15.
Article in English | MEDLINE | ID: mdl-35840566

ABSTRACT

International challenges have become the de facto standard for comparative assessment of image analysis algorithms. Although segmentation is the most widely investigated medical image processing task, the various challenges have been organized to focus only on specific clinical tasks. We organized the Medical Segmentation Decathlon (MSD), a biomedical image analysis challenge in which algorithms compete across a multitude of both tasks and modalities, to investigate the hypothesis that a method capable of performing well on multiple tasks will generalize well to a previously unseen task and potentially outperform a custom-designed solution. The MSD results confirmed this hypothesis; moreover, the MSD winner continued to generalize well to a wide range of other clinical problems over the following two years. Three main conclusions can be drawn from this study: (1) state-of-the-art image segmentation algorithms generalize well when retrained on unseen tasks; (2) consistent algorithmic performance across multiple tasks is a strong surrogate for algorithmic generalizability; (3) the training of accurate AI segmentation models is now commoditized for scientists who are not versed in AI model training.


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Image Processing, Computer-Assisted/methods
3.
J Magn Reson Imaging ; 55(6): 1650-1663, 2022 06.
Article in English | MEDLINE | ID: mdl-34918423

ABSTRACT

BACKGROUND: Segmentation of medical image volumes is a time-consuming manual task. Automatic tools are often tailored toward specific patient cohorts, and it is unclear how they behave in other clinical settings. PURPOSE: To evaluate the performance of the open-source Multi-Planar U-Net (MPUnet), the validated Knee Imaging Quantification (KIQ) framework, and a state-of-the-art two-dimensional (2D) U-Net architecture on three clinical cohorts without extensive adaptation of the algorithms. STUDY TYPE: Retrospective cohort study. SUBJECTS: A total of 253 subjects (146 females, 107 males, ages 57 ± 12 years) from three knee osteoarthritis (OA) studies (Center for Clinical and Basic Research [CCBR], Osteoarthritis Initiative [OAI], and Prevention of OA in Overweight Females [PROOF]) with varying demographics and OA severity (64/37/24/53/2 scans of Kellgren and Lawrence [KL] grades 0-4). FIELD STRENGTH/SEQUENCE: 0.18 T, 1.0 T/1.5 T, and 3 T sagittal three-dimensional fast-spin echo T1w and dual-echo steady-state sequences. ASSESSMENT: All models were fit without tuning to knee magnetic resonance imaging (MRI) scans with manual segmentations from the three clinical cohorts. All models were evaluated across KL grades. STATISTICAL TESTS: Segmentation performance differences as measured by Dice coefficients were tested with paired, two-sided Wilcoxon signed-rank statistics with significance threshold α = 0.05. RESULTS: The MPUnet performed on par with or better than KIQ and 2D U-Net on all compartments across the three cohorts. Mean Dice overlap was significantly higher for MPUnet than for KIQ and U-Net on CCBR (0.83±0.04 vs. 0.81±0.06 and 0.82±0.05), significantly higher than KIQ and U-Net on OAI (0.86±0.03 vs. 0.84±0.04 and 0.85±0.03), and not significantly different from KIQ while significantly higher than 2D U-Net on PROOF (0.78±0.07 vs. 0.77±0.07, P=0.10, and 0.73±0.07). The MPUnet also performed significantly better on the N=22 KL grade 3 CCBR scans, with 0.78±0.06 vs. 0.75±0.08 for KIQ and 0.76±0.06 for 2D U-Net. DATA CONCLUSION: The MPUnet matched or exceeded the performance of state-of-the-art knee MRI segmentation models across cohorts with variable sequences and patient demographics. The MPUnet required no manual tuning, making it both accurate and easy to use. LEVEL OF EVIDENCE: 3. TECHNICAL EFFICACY: Stage 2.
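The Dice coefficient behind these comparisons is twice the overlap of two masks divided by their combined size. A minimal sketch with toy binary masks (illustrative only, not study data):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary segmentation masks (1.0 = identical)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 4x4 masks standing in for a predicted and a manual cartilage mask.
pred = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 0, 0]])
ref  = np.array([[0, 1, 1, 0], [0, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0]])
print(dice(pred, ref))  # 0.8
```

Per-scan Dice values computed this way are what the paired Wilcoxon signed-rank tests above compare between methods.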


Subject(s)
Knee Joint; Osteoarthritis, Knee; Aged; Cohort Studies; Female; Humans; Knee/diagnostic imaging; Knee/pathology; Knee Joint/diagnostic imaging; Knee Joint/pathology; Magnetic Resonance Imaging/methods; Male; Middle Aged; Osteoarthritis, Knee/diagnostic imaging; Osteoarthritis, Knee/pathology; Retrospective Studies
4.
Radiol Artif Intell ; 3(3): e200078, 2021 May.
Article in English | MEDLINE | ID: mdl-34235438

ABSTRACT

PURPOSE: To organize a multi-institute knee MRI segmentation challenge for characterizing the semantic and clinical efficacy of automatic segmentation methods relevant for monitoring osteoarthritis progression. MATERIALS AND METHODS: A dataset partition consisting of three-dimensional knee MRI from 88 retrospective patients at two time points (baseline and 1-year follow-up), with ground truth articular (femoral, tibial, and patellar) cartilage and meniscus segmentations, was standardized. Challenge submissions and a majority-vote ensemble were evaluated against ground truth segmentations using Dice score, average symmetric surface distance, volumetric overlap error, and coefficient of variation on a hold-out test set. Similarities in automated segmentations were measured using pairwise Dice coefficient correlations. Articular cartilage thickness was computed per scan and longitudinally. Correlation between thickness error and segmentation metrics was measured using the Pearson correlation coefficient. Two empirical upper bounds for ensemble performance were computed using combinations of model outputs that consolidated true positives and true negatives. RESULTS: Six teams (T1-T6) submitted entries for the challenge. No differences were observed across any segmentation metrics for any tissues (P = .99) among the four top-performing networks (T2, T3, T4, T6). Dice coefficient correlations between network pairs were high (>0.85). Per-scan thickness errors were negligible among networks T1-T4 (P = .99), and longitudinal changes showed minimal bias (<0.03 mm). Low correlations (ρ < 0.41) were observed between segmentation metrics and thickness error. The majority-vote ensemble was comparable to the top-performing networks (P = .99). Empirical upper-bound performances were similar for both combinations (P = .99). CONCLUSION: Diverse networks learned to segment the knee similarly; high segmentation accuracy did not correlate with cartilage thickness accuracy, and voting ensembles did not exceed individual network performance. See also the commentary by Elhalawani and Mak in this issue. Keywords: Cartilage, Knee, MR Imaging, Segmentation. © RSNA, 2020. Supplemental material is available for this article.
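The majority-vote ensemble evaluated above can be sketched as a per-voxel vote over the binary masks produced by the individual networks (toy 2x3 masks for illustration, not challenge data):

```python
import numpy as np

def majority_vote(masks):
    """Label a voxel foreground when a strict majority of models agrees."""
    stack = np.stack([np.asarray(m, bool) for m in masks])
    return 2 * stack.sum(axis=0) > len(masks)

m1 = np.array([[1, 1, 0], [0, 1, 0]])
m2 = np.array([[1, 0, 0], [0, 1, 1]])
m3 = np.array([[1, 1, 0], [0, 0, 1]])
print(majority_vote([m1, m2, m3]).astype(int))  # [[1 1 0]
                                                #  [0 1 1]]
```

Because the individual networks already agreed closely (pairwise Dice correlations >0.85), such a vote has little room to improve on any single member, consistent with the study's conclusion.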

5.
NPJ Digit Med ; 4(1): 72, 2021 Apr 15.
Article in English | MEDLINE | ID: mdl-33859353

ABSTRACT

Sleep disorders affect a large portion of the global population and are strong predictors of morbidity and all-cause mortality. Sleep staging segments a period of sleep into a sequence of phases, providing the basis for most clinical decisions in sleep medicine. Manual sleep staging is difficult and time-consuming, as experts must evaluate hours of polysomnography (PSG) recordings with electroencephalography (EEG) and electrooculography (EOG) data for each patient. Here, we present U-Sleep, a publicly available, ready-to-use deep-learning-based system for automated sleep staging (sleep.ai.ku.dk). U-Sleep is a fully convolutional neural network, which was trained and evaluated on PSG recordings from 15,660 participants of 16 clinical studies. It provides accurate segmentations across a wide range of patient cohorts and PSG protocols not considered when building the system. U-Sleep works for arbitrary combinations of typical EEG and EOG channels, and its deep learning architecture can label sleep stages at shorter intervals than the typical 30-s periods used during training. We show that these labels can provide additional diagnostic information and lead to new ways of analyzing sleep. U-Sleep performs on par with state-of-the-art automatic sleep staging systems on multiple clinical datasets, even when the other systems were built specifically for the particular data. A comparison with consensus scores from a previously unseen clinic shows that U-Sleep performs as accurately as the best of the human experts. U-Sleep can support the sleep staging workflow of medical experts, decreasing healthcare costs, and can provide highly accurate segmentations when human expertise is lacking.
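Labeling at finer resolution than the standard 30-s epoch means dense predictions must be collapsed back to epochs for conventional scoring. A minimal sketch of that collapsing step by per-epoch majority, assuming integer stage labels (hypothetical data and function, not the U-Sleep API):

```python
import numpy as np

def to_epochs(dense_labels, epoch_len=30):
    """Collapse per-second stage labels to one label per epoch by majority."""
    labels = np.asarray(dense_labels)
    n = len(labels) // epoch_len * epoch_len  # drop any trailing partial epoch
    epochs = labels[:n].reshape(-1, epoch_len)
    return np.array([np.bincount(e).argmax() for e in epochs])

# Two 30-s epochs of hypothetical per-second labels (0 = wake, 2 = N2).
dense = np.array([0] * 20 + [2] * 10 + [2] * 30)
print(to_epochs(dense))  # [0 2]
```

The dense labels carry within-epoch transitions (the wake-to-N2 switch above) that the collapsed 30-s scoring discards, which is the extra diagnostic information the abstract alludes to.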

6.
Basic Clin Pharmacol Toxicol ; 126 Suppl 6: 116-121, 2020 Jun.
Article in English | MEDLINE | ID: mdl-31228220

ABSTRACT

While the physiological function and mechanisms of agonist-dependent G protein-coupled receptor (GPCR) internalization have been extensively studied, the functional characterization of constitutive internalization of these critically important receptors has received less attention. Here we relate the constitutive internalization of more than 30 therapeutically targeted GPCRs to their agonist-induced internalization. The constitutive internalization ranges from levels of bulk membrane endocytosis for some receptors to levels of agonist-induced internalization for others. Moreover, for receptors with high constitutive internalization, this occludes further agonist-induced internalization. Additionally, Gq-coupled GPCRs show a significantly higher rate of constitutive internalization than Gs- and Gi-coupled receptors. Finally, we consolidate the proposed link between the constitutive internalization, as assessed by a cytometry-based assay, and the constitutive activity of these receptors, as previously reported by a β-arrestin recruitment assay, across the range of pharmacologically relevant receptors. In summary, we provide a quantitative comparison of GPCR internalization across a range of pharmacologically relevant receptors, providing generalized insight into the relations between constitutive internalization, constitutive activity, and agonist-induced internalization, which has so far relied on mutational studies of individual receptors.


Subject(s)
Receptors, G-Protein-Coupled/metabolism; Cell Line; Cell Membrane; Endocytosis; HEK293 Cells; Humans; Signal Transduction; beta-Arrestins