1.
PLoS One ; 19(6): e0305947, 2024.
Article in English | MEDLINE | ID: mdl-38917161

ABSTRACT

Cephalometric analysis is a critically important and common procedure prior to orthodontic treatment and orthognathic surgery. Recently, deep learning approaches have been proposed for automatic 3D cephalometric analysis based on landmarking from CBCT scans. However, these approaches have relied on uniform datasets from a single center or imaging device, without considering patient ethnicity. In addition, previous works have considered a limited number of clinically relevant cephalometric landmarks, and the approaches were computationally infeasible, both impairing integration into the clinical workflow. Here, our aim is to analyze the clinical applicability of a lightweight deep learning neural network for fast localization of 46 clinically significant cephalometric landmarks with multi-center, multi-ethnic, and multi-device data consisting of 309 CBCT scans from Finnish and Thai patients. Our approach achieved a mean localization distance of 1.99 ± 1.55 mm for the Finnish cohort and 1.96 ± 1.25 mm for the Thai cohort. This performance was clinically acceptable, i.e., ≤ 2 mm, for 61.7% and 64.3% of the landmarks in the Finnish and Thai cohorts, respectively. Furthermore, the estimated landmarks were used to measure cephalometric characteristics successfully, i.e., with ≤ 2 mm or ≤ 2° error, in 85.9% of the Finnish and 74.4% of the Thai cases. Between the two patient cohorts, 33 of the landmarks and all cephalometric characteristics showed no statistically significant difference (at p < 0.05), as measured by the Mann-Whitney U test with Benjamini-Hochberg correction. Moreover, our method is computationally light, providing predictions in a mean time of 0.77 s on a single-machine GPU and 2.27 s on CPU. Our findings advocate for the inclusion of this method in clinical settings based on its technical feasibility and robustness across varied clinical datasets.
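The two headline metrics in the abstract, mean landmark distance and the share of landmarks localized within the 2 mm clinical threshold, are straightforward to compute from predicted and ground-truth 3D coordinates. The sketch below is illustrative only (the function names and toy coordinates are ours, not from the paper) and assumes landmarks are given as (x, y, z) tuples in millimeters.

```python
import math

def landmark_errors(pred, true):
    """Euclidean distance (mm) between each predicted and ground-truth 3D landmark."""
    return [math.dist(p, t) for p, t in zip(pred, true)]

def success_rate(errors, threshold_mm=2.0):
    """Fraction of landmarks localized within the clinical threshold."""
    return sum(e <= threshold_mm for e in errors) / len(errors)

# Toy example with three landmarks (coordinates in mm)
pred = [(10.0, 20.0, 30.0), (5.0, 5.0, 5.0), (0.0, 0.0, 0.0)]
true = [(10.0, 20.0, 31.5), (5.0, 8.0, 5.0), (0.5, 0.0, 0.0)]
errors = landmark_errors(pred, true)   # [1.5, 3.0, 0.5]
print(success_rate(errors))            # 2 of 3 landmarks fall within 2 mm
```

In a per-cohort evaluation like the one reported, these quantities would be aggregated over all 46 landmarks across every scan in the cohort.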


Subjects
Anatomic Landmarks , Cephalometry , Cone-Beam Computed Tomography , Deep Learning , Imaging, Three-Dimensional , Humans , Cephalometry/methods , Cone-Beam Computed Tomography/methods , Imaging, Three-Dimensional/methods , Male , Female , Anatomic Landmarks/diagnostic imaging , Finland , Adult , Thailand , Young Adult , Adolescent
2.
Commun Med (Lond) ; 4(1): 110, 2024 Jun 08.
Article in English | MEDLINE | ID: mdl-38851837

ABSTRACT

BACKGROUND: Radiotherapy is a core treatment modality for oropharyngeal cancer (OPC), where the primary gross tumor volume (GTVp) is manually segmented with high interobserver variability. This calls for reliable and trustworthy automated tools in the clinical workflow. Therefore, accurate uncertainty quantification and its downstream utilization are critical. METHODS: Here we propose uncertainty-aware deep learning for OPC GTVp segmentation and illustrate the utility of uncertainty in multiple applications. We examine two Bayesian deep learning (BDL) models and eight uncertainty measures, and utilize a large multi-institute dataset of 292 PET/CT scans to systematically analyze our approach. RESULTS: We show that our uncertainty-based approach accurately predicts the quality of the deep learning segmentation in 86.6% of cases, identifies low-performance cases for semi-automated correction, and visualizes regions of the scans where the segmentations are likely to fail. CONCLUSIONS: Our BDL-based analysis provides a first step toward more widespread implementation of uncertainty quantification in OPC GTVp segmentation.
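One common family of uncertainty measures in Bayesian deep learning aggregates multiple stochastic forward passes (e.g. Monte Carlo dropout) into a per-voxel score such as predictive entropy. The sketch below is a minimal, hypothetical illustration of that idea for a single voxel's foreground probability; the function names are ours and this is not necessarily one of the eight measures the study evaluated.

```python
import math

def mean_probability(samples):
    """Average the per-sample foreground probabilities (e.g. from MC dropout passes)."""
    return sum(samples) / len(samples)

def predictive_entropy(samples):
    """Binary predictive entropy (nats) of the averaged prediction for one voxel."""
    p = mean_probability(samples)
    if p in (0.0, 1.0):
        return 0.0  # fully certain prediction carries zero entropy
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

# Samples that agree give low entropy; samples that disagree give high entropy
confident = [0.97, 0.99, 0.98, 0.96]
ambiguous = [0.10, 0.90, 0.45, 0.60]
print(predictive_entropy(confident) < predictive_entropy(ambiguous))  # True
```

Averaging or summing such voxel-wise scores over a predicted segmentation yields a scan-level uncertainty that can be correlated with segmentation quality, which is the kind of downstream use the abstract describes.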


Radiotherapy is used as a treatment for people with oropharyngeal cancer. It is important to identify the areas where cancer is present so the radiotherapy can be targeted at the tumor. Computational methods based on artificial intelligence can automate this task, but they need to be able to flag areas where it is unclear whether cancer is present. In this study, we compare computational methods that can highlight such uncertain areas. Our approach accurately predicts how well these areas are delineated by the models. Our results could be applied to improve the computational methods used during radiotherapy treatment. This could enable more targeted treatment in the future, which could result in better outcomes for people with oropharyngeal cancer.
