Results 1 - 3 of 3
1.
Sci Rep ; 14(1): 11750, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38782964

ABSTRACT

Sex determination is essential for establishing the identity of unknown individuals, particularly in forensic contexts. Traditional approaches rely on manual measurements of skeletal features on CBCT scans, which are labor-intensive, time-consuming, and error-prone. The purpose of this study was to determine sex automatically and accurately from a CBCT scan using a two-stage anatomy-guided attention network (SDetNet). SDetNet consisted of a 2D frontal sinus segmentation network (FSNet) and a 3D anatomy-guided attention network (SDNet). FSNet segmented frontal sinus regions in the CBCT images and extracted regions of interest (ROIs) around them. The ROIs were then fed into SDNet to predict sex. To improve sex determination performance, we proposed multi-channel inputs (MSIs) and an anatomy-guided attention module (AGAM), which encouraged SDetNet to learn differences in the anatomical context of the frontal sinus between males and females. SDetNet outperformed the other 3D CNNs in area under the receiver operating characteristic curve, accuracy, Brier score, and specificity. Moreover, ablation studies showed a notable improvement in sex determination when both MSI and AGAM were embedded. Consequently, SDetNet achieved automatic and accurate sex determination by learning the anatomical context of the frontal sinus on CBCT scans.
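The two-stage pipeline described in this abstract (a 2D network that segments the frontal sinus and extracts an ROI, followed by a 3D classifier whose features are weighted by the segmented anatomy) can be illustrated roughly as follows. This is a minimal PyTorch sketch, not the authors' SDetNet: the tiny network bodies, the ROI margin, and the simple mask-based attention are assumptions made only for illustration.

```python
# Minimal two-stage sketch: 2D sinus segmentation -> ROI crop -> 3D
# anatomy-guided classification. Illustrative only; not the paper's SDetNet.
import torch
import torch.nn as nn

class TinySegNet2D(nn.Module):
    """Stand-in for FSNet: predicts a per-pixel sinus probability map."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, x):                      # x: (B, 1, H, W)
        return torch.sigmoid(self.body(x))     # probabilities in [0, 1]

class TinyAttnClassifier3D(nn.Module):
    """Stand-in for SDNet: a 3D CNN whose features are re-weighted by the
    anatomy mask (a crude form of 'anatomy-guided attention')."""
    def __init__(self):
        super().__init__()
        self.feat = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 8, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(8, 2)            # two-class (male/female) logits

    def forward(self, vol, mask):              # vol, mask: (B, 1, D, H, W)
        f = self.feat(vol)
        f = f * (1.0 + mask)                   # emphasize voxels near the sinus
        pooled = f.mean(dim=(2, 3, 4))         # global average pooling
        return self.head(pooled)

def crop_roi(volume, mask, margin=8):
    """Crop a box around the segmented region, padded by `margin` voxels."""
    idx = mask[0, 0].nonzero(as_tuple=False)   # (N, 3) voxel coordinates
    if idx.numel() == 0:
        return volume, mask                    # fall back to the full volume
    lo = (idx.min(0).values - margin).clamp(min=0)
    hi = idx.max(0).values + margin + 1
    sl = [slice(int(l), int(h)) for l, h in zip(lo, hi)]
    return (volume[..., sl[0], sl[1], sl[2]],
            mask[..., sl[0], sl[1], sl[2]])

# Toy end-to-end pass on a random "CBCT" volume (B=1, D=32, H=64, W=64).
vol = torch.rand(1, 1, 32, 64, 64)
seg2d = TinySegNet2D()
# Segment slice by slice, then stack the 2D masks back into a 3D mask.
mask = torch.stack([seg2d(vol[:, :, d]) for d in range(vol.shape[2])], dim=2)
roi_vol, roi_mask = crop_roi(vol, (mask > 0.5).float())
logits = TinyAttnClassifier3D()(roi_vol, roi_mask)
print(logits.shape)                            # torch.Size([1, 2])
```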


Subject(s)
Cone-Beam Computed Tomography , Frontal Sinus , Humans , Cone-Beam Computed Tomography/methods , Male , Female , Frontal Sinus/diagnostic imaging , Frontal Sinus/anatomy & histology , Imaging, Three-Dimensional/methods , Adult , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Sex Determination by Skeleton/methods
2.
Dentomaxillofac Radiol ; 53(1): 22-31, 2024 Jan 11.
Article in English | MEDLINE | ID: mdl-38214942

ABSTRACT

OBJECTIVES: This study aimed to develop a robust and accurate deep learning network for detecting the posterior superior alveolar artery (PSAA) in dental cone-beam CT (CBCT) images, focusing on precise localization of the centre pixel as a critical centreline pixel. METHODS: PSAA locations were manually labelled on dental CBCT data from 150 subjects, and the left maxillary sinus images were horizontally flipped, yielding 300 datasets in total. Six deep learning networks were trained: 3D U-Net, deeply supervised 3D U-Net (3D U-Net DS), multi-scale deeply supervised 3D U-Net (3D U-Net MSDS), 3D Attention U-Net, 3D V-Net, and 3D Dense U-Net. Performance was evaluated on prediction of the PSAA centre pixel using mean absolute error (MAE), mean radial error (MRE), and successful detection rate (SDR). RESULTS: The 3D U-Net MSDS achieved the best prediction performance among the tested networks, with an MAE of 0.696 ± 1.552 mm and an MRE of 1.101 ± 2.270 mm, whereas the 3D U-Net showed the lowest performance. The 3D U-Net MSDS reached an SDR of 95% within a 2 mm MAE, significantly higher than the other networks, which achieved detection rates above 80%. CONCLUSIONS: This study presents a robust deep learning network for accurate PSAA detection in dental CBCT images, emphasizing precise centre pixel localization. The method achieves high accuracy in locating small vessels such as the PSAA and has the potential to improve detection accuracy and efficiency, thereby supporting oral and maxillofacial surgery planning and decision-making.
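The localization metrics quoted above (mean radial error and the successful detection rate within a distance threshold) can be computed from predicted and reference centre points as in the short sketch below. This is a generic illustration, assuming (z, y, x) voxel coordinates and a per-axis voxel spacing in mm; the function names and array layout are assumptions, not taken from the paper.

```python
# Sketch of the reported localization metrics: mean radial error (MRE)
# and successful detection rate (SDR) within a threshold. Illustrative only.
import numpy as np

def radial_errors(pred_vox, gt_vox, spacing_mm):
    """Euclidean distances in mm between predicted and reference points.

    pred_vox, gt_vox : (N, 3) voxel coordinates (z, y, x)
    spacing_mm       : (3,) voxel size in mm along each axis
    """
    diff_mm = (np.asarray(pred_vox) - np.asarray(gt_vox)) * np.asarray(spacing_mm)
    return np.linalg.norm(diff_mm, axis=1)

def mre(pred_vox, gt_vox, spacing_mm):
    """Mean and standard deviation of the radial error in mm."""
    e = radial_errors(pred_vox, gt_vox, spacing_mm)
    return e.mean(), e.std()

def sdr(pred_vox, gt_vox, spacing_mm, threshold_mm=2.0):
    """Fraction of cases whose radial error falls within the threshold."""
    e = radial_errors(pred_vox, gt_vox, spacing_mm)
    return float((e <= threshold_mm).mean())

# Toy example: 4 predicted vs reference centre points on a 0.3 mm grid.
pred = [[10, 20, 30], [11, 22, 31], [40, 40, 40], [5, 5, 5]]
gt   = [[10, 21, 30], [11, 22, 33], [42, 40, 41], [5, 9, 5]]
spacing = [0.3, 0.3, 0.3]
print("MRE (mm):", mre(pred, gt, spacing))
print("SDR @2mm:", sdr(pred, gt, spacing))
```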


Subject(s)
Arteries , Cone-Beam Computed Tomography , Humans , Cone-Beam Computed Tomography/methods , Maxillary Sinus , Image Processing, Computer-Assisted/methods
3.
BMC Oral Health ; 23(1): 866, 2023 Nov 15.
Article in English | MEDLINE | ID: mdl-37964229

ABSTRACT

BACKGROUND: The purpose of this study was to compare the segmentation performance of 2D, 2.5D, and 3D networks for maxillary sinuses (MSs) and lesions inside the maxillary sinus (MSL) with variations in size, shape, and location in cone-beam CT (CBCT) images under the same memory-capacity constraint. METHODS: The 2D, 2.5D, and 3D networks were compared comprehensively for segmentation of the MS and MSL in CBCT images under the same memory-capacity constraint. MSLs were obtained by subtracting the prediction of the air region of the maxillary sinus (MSA) from that of the MS. RESULTS: The 2.5D network showed the highest segmentation performance for the MS and MSA compared with the 2D and 3D networks. The 2.5D U-Net++ achieved a Jaccard coefficient, Dice similarity coefficient, precision, and recall of 0.947, 0.973, 0.974, and 0.971 for the MS, and 0.787, 0.875, 0.897, and 0.858 for the MSL, respectively. CONCLUSIONS: The 2.5D segmentation network demonstrated superior segmentation performance for various MSLs with an ensemble learning approach that combines predictions from three orthogonal planes.
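The 2.5D idea summarized above (segment the volume slice-wise along the three orthogonal planes and combine the stacked predictions) and the lesion definition (MSL = MS minus its air region, MSA) can be sketched as follows. The dummy two-channel segmenter and the plain averaging ensemble are assumptions for illustration, not the paper's U-Net++ configuration.

```python
# Sketch of a 2.5D ensemble: run a 2D segmenter slice-wise along the axial,
# coronal, and sagittal axes, average the three probability volumes, and
# derive the lesion mask as MS minus its air region. Illustrative only.
import torch
import torch.nn as nn

class Dummy2DSegmenter(nn.Module):
    """Placeholder for a 2D U-Net++-style network with two output channels:
    whole maxillary sinus (MS) and sinus air region (MSA)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 2, 1),
        )

    def forward(self, x):                       # x: (B, 1, H, W)
        return torch.sigmoid(self.net(x))       # (B, 2, H, W) probabilities

def segment_along_axis(model, volume, axis):
    """Slice `volume` (D, H, W) along `axis`, segment each slice, re-stack."""
    slices = [volume.select(axis, i) for i in range(volume.shape[axis])]
    preds = [model(s.unsqueeze(0).unsqueeze(0))[0] for s in slices]  # (2, h, w)
    return torch.stack(preds, dim=axis + 1)     # (2, D, H, W)

def ensemble_2p5d(model, volume):
    """Average the slice-wise predictions from the three orthogonal planes."""
    probs = torch.stack([segment_along_axis(model, volume, a) for a in range(3)])
    return probs.mean(dim=0)                    # (2, D, H, W)

def dice(pred, target, eps=1e-6):
    """Dice similarity coefficient between two binary masks."""
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy volume: a 32x64x64 "CBCT" crop.
vol = torch.rand(32, 64, 64)
probs = ensemble_2p5d(Dummy2DSegmenter(), vol)
ms_mask  = (probs[0] > 0.5).float()             # whole sinus
msa_mask = (probs[1] > 0.5).float()             # air region inside the sinus
msl_mask = (ms_mask - msa_mask).clamp(min=0)    # lesion = MS minus MSA
print(dice(msl_mask, msl_mask))                 # 1.0 on identical masks
```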


Subject(s)
Maxillary Sinus , Spiral Cone-Beam Computed Tomography , Humans , Cone-Beam Computed Tomography/methods , Image Processing, Computer-Assisted/methods , Maxillary Sinus/diagnostic imaging , Deep Learning , Sinus Floor Augmentation