Results 1 - 8 of 8
1.
Dentomaxillofac Radiol; 53(5): 325-335, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38696751

ABSTRACT

OBJECTIVES: Currently, there is no reliable automated method for measuring changes in the condylar process after orthognathic surgery. This study therefore proposes an automated method to measure condylar changes in patients with skeletal class II malocclusion following surgical-orthodontic treatment. METHODS: Cone-beam CT (CBCT) scans from 48 patients were segmented using the nnU-Net network for automated maxillary and mandibular delineation. Regions unaffected by orthognathic surgery were selectively cropped. Automated registration yielded condylar displacement and volume calculations, each repeated three times for precision. Logistic regression and linear regression were used to analyse the correlation between condylar position changes at different time points. RESULTS: The Dice score for the automated segmentation of the condyle was 0.971. The intraclass correlation coefficients (ICCs) for all repeated measurements ranged from 0.93 to 1.00. The automated measurements showed that 83.33% of patients exhibited condylar resorption occurring six months or more after surgery. Logistic regression and linear regression indicated a positive correlation between counterclockwise rotation in the pitch plane and condylar resorption (P < .01), and a positive correlation between the rotational angles in all three planes and changes in condylar volume at six months after surgery (P ≤ .04). CONCLUSIONS: This study's automated method for measuring condylar changes shows excellent repeatability. Patients with skeletal class II malocclusion may experience condylar resorption after bimaxillary orthognathic surgery, and this is correlated with counterclockwise rotation in the sagittal plane. ADVANCES IN KNOWLEDGE: This study proposes an innovative multi-step CBCT-based registration method and establishes an automated approach for quantitatively measuring condylar changes after orthognathic surgery.
This method opens up new possibilities for studying condylar morphology.
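The Dice score reported above is the standard overlap metric for evaluating automated segmentation against a reference delineation. As an illustrative sketch (not the authors' implementation), it can be computed from two binary masks as follows:

```python
def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks,
    given as flat sequences of 0/1 values of equal length."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks are a perfect match.
    return 2.0 * intersection / total if total else 1.0
```

A score of 1.0 means perfect overlap; the 0.971 reported here indicates near-perfect agreement between automated and reference condyle masks.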


Subjects
Cone-Beam Computed Tomography, Malocclusion, Angle Class II, Mandibular Condyle, Orthognathic Surgical Procedures, Humans, Cone-Beam Computed Tomography/methods, Malocclusion, Angle Class II/diagnostic imaging, Malocclusion, Angle Class II/surgery, Mandibular Condyle/diagnostic imaging, Female, Male, Adult, Adolescent, Young Adult
2.
IEEE Trans Med Imaging; 43(7): 2522-2536, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38386579

ABSTRACT

Automatic vertebral osteophyte recognition in digital radiography is of great importance for the early prediction of degenerative disease, but it remains challenging because of the tiny size of lesions and the high inter-class similarity between normal and osteophyte vertebrae. Meanwhile, the sampling strategies commonly applied in convolutional neural networks can cause loss of detailed context. All of these factors can lead to incorrect positioning. In this paper, based on important pathological priors, we define a set of potential lesions for each vertebra and propose a novel Pathological Priors Inspired Network (PPIN) to achieve accurate osteophyte recognition. PPIN comprises a backbone feature extractor integrated with a Wavelet Transform Sampling module for extracting high-frequency detailed context, a detection branch for locating all potential lesions, and a classification branch for producing the final osteophyte recognition. The Anatomical Map-guided Filter between the two branches helps the network focus on specific anatomical regions via the heatmaps of potential lesions generated in the detection branch, addressing the incorrect positioning problem. To reduce inter-class similarity, a Bilateral Augmentation Module based on the graph relationship is proposed to imitate the clinical diagnosis process and to extract discriminative contextual information between adjacent vertebrae in the classification branch. Experiments on two osteophyte-specific datasets collected from the public VinDr-Spine database show that the proposed PPIN achieves the best recognition performance among multitask frameworks and shows strong generalization. Results on a private dataset demonstrate its potential in clinical application. Class Activation Maps also show the powerful localization capability of PPIN. The source code is available at https://github.com/Phalo/PPIN.
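The Wavelet Transform Sampling idea rests on the fact that a wavelet decomposition splits a signal into a low-frequency approximation and a high-frequency detail band, so fine detail need not be discarded when resolution is halved. A minimal one-level 1D Haar transform (an illustrative sketch only, not the PPIN module itself) looks like:

```python
import math

def haar_1d(signal):
    """One-level orthonormal Haar wavelet transform of an even-length
    sequence. Returns (approximation, detail) coefficient lists, each
    half the input length."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail
```

Smooth regions produce near-zero detail coefficients, while edges (such as a small osteophyte boundary) produce large ones, which is why keeping the detail band preserves context that plain strided downsampling loses.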


Subjects
Osteophyte, Humans, Osteophyte/diagnostic imaging, Algorithms, Neural Networks, Computer, Radiographic Image Interpretation, Computer-Assisted/methods, Spine/diagnostic imaging, Wavelet Analysis
3.
Med Phys; 50(1): 104-116, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36029008

ABSTRACT

PURPOSE: Automated measurement of spine indices on axial magnetic resonance (MR) images plays a significant role in diagnosing lumbar spinal stenosis. Existing direct measurement approaches fail to focus explicitly on the task-specific regions or feature channels because they lack additional guiding information. We aim to achieve accurate spine indices measurement by introducing the guidance of a segmentation task. METHODS: In this paper, we propose a segmentation-guided regression network (SGRNet) to achieve automated spine indices measurement. SGRNet consists of a segmentation path for generating the spine segmentation prediction and a regression path for producing the spine indices estimation. The segmentation path is a U-Net-like network comprising a segmentation encoder and a decoder that generates multilevel segmentation features and the segmentation prediction. The proposed segmentation-guided attention module (SGAM) in the regression encoder extracts attention-aware regression features under the guidance of the segmentation features. Based on the attention-aware regression features, a fully connected layer outputs the spine indices estimation. RESULTS: Experiments on the open-access Lumbar Spine MRI dataset show that SGRNet achieves state-of-the-art performance, with a mean absolute error of 0.49 mm and a mean Pearson correlation coefficient of 0.956 for the estimation of four indices. CONCLUSIONS: The proposed SGAM in SGRNet improves the performance of spine indices measurement by focusing on the task-specific regions and feature channels under the guidance of the segmentation task.
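The two evaluation figures quoted here, mean absolute error and the Pearson correlation coefficient, are standard regression metrics. A plain-Python sketch of both (not the paper's evaluation code) is:

```python
import math

def mae(pred, truth):
    """Mean absolute error between predicted and reference index values."""
    return sum(abs(p - t) for p, t in zip(pred, truth)) / len(pred)

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

An MAE of 0.49 mm with r = 0.956 thus means the predicted indices are both close to and strongly linearly related to the manual measurements.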


Subjects
Spinal Stenosis, Humans, Spinal Stenosis/diagnostic imaging, Neural Networks, Computer, Spine, Magnetic Resonance Imaging/methods, Image Processing, Computer-Assisted/methods
4.
Heliyon; 9(2): e13694, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36852021

ABSTRACT

Background: Manual segmentation of the inferior alveolar canal (IAC) in panoramic images requires considerable time and labor even for dental experts with extensive experience. The objective of this study was to evaluate the performance of automatic segmentation of the IAC, with ambiguity classification, in panoramic images using a deep learning method. Methods: Among 1366 panoramic images, 1000 were selected as the training dataset and the remaining 336 were assigned to the testing dataset. Radiologists divided the testing dataset into four groups according to the quality of the visible segments of the IAC. The segmentation time, Dice similarity coefficient (DSC), precision, and recall rate were calculated to evaluate the efficiency and segmentation performance of deep learning-based automatic segmentation. Results: Automatic segmentation achieved a DSC of 85.7% (95% confidence interval [CI] 75.4%-90.3%), precision of 84.1% (95% CI 78.4%-89.3%), and recall of 87.7% (95% CI 77.7%-93.4%). Compared with manual annotation (5.9 s per image), automatic segmentation significantly increased the efficiency of IAC segmentation (33 ms per image). The DSC and precision values of group 4 (most visible) were significantly better than those of group 1 (least visible). The recall values of groups 3 and 4 were significantly better than those of group 1. Conclusions: The deep learning-based method achieved high performance for IAC segmentation in panoramic images under different visibilities, and its performance was positively correlated with IAC image clarity.
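The three metrics reported for this study (DSC, precision, and recall) can all be derived from per-pixel true-positive, false-positive, and false-negative counts. A small sketch of that relationship (illustrative only, not the study's code):

```python
def segmentation_metrics(tp, fp, fn):
    """Precision, recall, and Dice similarity coefficient from per-pixel
    true-positive (tp), false-positive (fp), and false-negative (fn) counts."""
    precision = tp / (tp + fp)       # fraction of predicted canal pixels that are correct
    recall = tp / (tp + fn)          # fraction of true canal pixels that were found
    dsc = 2 * tp / (2 * tp + fp + fn)  # harmonic-mean-style overlap score
    return precision, recall, dsc
```

Note that the DSC is the harmonic mean of precision and recall, which is why the three reported values move together across the visibility groups.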

5.
Med Phys; 49(7): 4494-4507, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35338781

ABSTRACT

PURPOSE: Automated retinal vessel segmentation is crucial to the early diagnosis and treatment of ophthalmological diseases. Many deep-learning-based methods have shown exceptional success in this task. However, current approaches still perform inadequately on challenging vessels (e.g., thin vessels) and rarely focus on the connectivity of the segmentation. METHODS: We propose using an error discrimination network (D) to distinguish whether the vessel pixel predictions of the segmentation network (S) are correct, while S is trained to reduce the errors flagged by D. Our method is similar to, but not the same as, a generative adversarial network. Three types of vessel samples and their corresponding error masks are used to train D: (1) the vessel ground truth; (2) vessels segmented by S; and (3) artificial thin-vessel error samples that further improve the sensitivity of D to erroneously segmented small vessels. As an auxiliary loss function for S, D strengthens the supervision of difficult vessels. Optionally, the errors predicted by D can be used to correct the segmentation result of S. RESULTS: Compared with state-of-the-art methods, our method achieves the highest scores in sensitivity (86.19%, 86.26%, and 86.53%) and G-mean (91.94%, 91.30%, and 92.76%) on three public datasets: STARE, DRIVE, and HRF. Our method also remains competitive on other metrics. On the STARE dataset, the F1-score and area under the receiver operating characteristic curve (AUC) of our method rank second and first, respectively, reaching 84.51% and 98.97%. The top scores on three topology-relevant metrics (Conn, Inf, and Cor) demonstrate that the vessels extracted by our method have excellent connectivity. We also validate the effectiveness of error discrimination supervision and artificial error sample training through ablation experiments. CONCLUSIONS: The proposed method provides an accurate and robust solution for difficult vessel segmentation.
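The sensitivity and G-mean scores quoted above come from the pixel-level confusion matrix; G-mean balances vessel detection against background preservation, which matters because vessel pixels are a small minority of the image. A hedged sketch of the computation (not the authors' evaluation code):

```python
import math

def sensitivity_gmean(tp, fn, tn, fp):
    """Sensitivity and geometric mean of sensitivity and specificity,
    from pixel-level confusion-matrix counts."""
    sens = tp / (tp + fn)   # true positive rate: vessel pixels recovered
    spec = tn / (tn + fp)   # true negative rate: background pixels kept
    return sens, math.sqrt(sens * spec)
```

Because it is a geometric mean, G-mean collapses toward zero if either rate is poor, so a high score cannot be achieved by over- or under-segmenting.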


Subjects
Neural Networks, Computer, Retinal Vessels, Algorithms, Image Processing, Computer-Assisted/methods, ROC Curve, Retinal Vessels/diagnostic imaging
6.
Comput Methods Programs Biomed; 216: 106631, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35123347

ABSTRACT

BACKGROUND AND OBJECTIVE: Conjunctival microcirculation has been used to quantitatively assess microvascular changes caused by systemic disorders. The space between red blood cell clusters in conjunctival microvessels is essential for assessing hemodynamics; however, it causes discontinuities in vessel segmentation and makes automatic measurement of blood velocity more difficult. In this study, we developed a deep learning-based EVA system to maintain vessel segmentation continuity and automatically measure blood velocity. METHODS: The EVA system sequentially performs image registration, vessel segmentation, diameter measurement, and blood velocity measurement on conjunctival images. A U-Net model optimized with a connectivity-preserving loss function was used to address the discontinuities in vessel segmentation. An automatic measurement algorithm based on line segment detection was then proposed to obtain accurate blood velocities. Finally, the EVA system assessed hemodynamic parameters based on the measured blood velocity in each vessel segment. RESULTS: The EVA system was validated on 23 videos of conjunctival microcirculation captured using functional slit-lamp microscopy. The U-Net model produced the longest average vessel segment length, 158.03 ± 181.87 µm, followed by the adaptive threshold method and Frangi filtering, which produced lengths of 120.05 ± 151.47 µm and 99.94 ± 138.12 µm, respectively. The proposed method and a cross-correlation-based method were evaluated for blood velocity measurement on a dataset of 30 vessel segments. Bland-Altman analysis showed that, compared with the cross-correlation method (bias: 0.36, SD: 0.32), the results of the proposed method agreed more closely with a manual measurement-based gold standard (bias: -0.04, SD: 0.14). CONCLUSIONS: The proposed EVA system provides an automatic and reliable solution for the quantitative assessment of hemodynamics in conjunctival microvascular images, and can potentially be applied to hypoglossal microcirculation images.
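The bias and SD figures in the Bland-Altman comparison above are the mean and standard deviation of the paired differences between a method and the gold standard. A minimal sketch of that computation (illustrative only):

```python
import math

def bland_altman(method_a, method_b):
    """Bland-Altman bias (mean of paired differences) and SD of the
    differences, for two equal-length measurement sequences."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = sum(diffs) / len(diffs)
    # Sample standard deviation of the differences.
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (len(diffs) - 1))
    return bias, sd
```

A bias near zero with a small SD, as reported for the proposed method, indicates no systematic offset and tight agreement with the manual gold standard.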


Subjects
Microvessels, Blood Flow Velocity, Hemodynamics, Microcirculation, Microvessels/diagnostic imaging
7.
Quant Imaging Med Surg; 11(8): 3569-3583, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34341732

ABSTRACT

BACKGROUND: Intersubject registration of functional magnetic resonance imaging (fMRI) is necessary for group analysis, and accurate image registration can significantly improve the results of statistical analysis. Traditional methods rely on high-resolution structural images or manually extracted functional information. However, structural alignment does not necessarily lead to functional alignment, and manually extracting functional features is complicated and time-consuming. Recent studies have shown that deep learning-based methods can be used for deformable image registration. METHODS: We propose a deep learning framework with a three-cascaded multi-resolution network (MR-Net) to achieve deformable image registration. MR-Net separately extracts the features of the moving and fixed images via a two-stream path, predicts a sub-deformation field, and is cascaded three times. The deformation field between the moving and fixed images is the composition of all sub-deformation fields predicted by the MR-Net cascades. We impose strong smoothness constraints on all sub-deformation fields. The proposed architecture performs a progressive registration process that preserves the topology of the deformation field. RESULTS: We evaluated our method on the 1000 Functional Connectomes Project (FCP) and Eyes Open Eyes Closed fMRI datasets. Our method increased the peak t values in six brain functional networks to 19.8, 17.8, 15.0, 16.4, 17.0, and 13.2. Compared with traditional methods [the FMRIB Software Library (FSL) and Statistical Parametric Mapping (SPM)] and deep learning networks [VoxelMorph (VM) and the Volume Tweening Network (VTN)], our method improved the peak t values by 47.58%, 11.88%, 18.60%, and 15.16%, respectively. CONCLUSIONS: Our three-cascaded MR-Net achieves a statistically significant improvement in functional consistency across subjects.
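Cascaded registration composes the sub-deformation fields from successive stages into one overall field. A common way to compose two displacement fields u1 and u2 is u(x) = u1(x) + u2(x + u1(x)), interpolating u2 at the warped position; the 1D sketch below illustrates this composition under that assumption (it is not the MR-Net code, which operates on 3D volumes):

```python
def compose_displacements(u1, u2):
    """Compose two 1D displacement fields sampled on integer grid points:
    total displacement u(x) = u1(x) + u2(x + u1(x)), with linear
    interpolation of u2 at the warped (generally non-integer) position."""
    def interp(u, x):
        x = max(0.0, min(len(u) - 1.0, x))  # clamp to the grid
        i = int(x)
        if i == len(u) - 1:
            return u[i]
        frac = x - i
        return (1 - frac) * u[i] + frac * u[i + 1]
    return [u1[x] + interp(u2, x + u1[x]) for x in range(len(u1))]
```

Composing several small, smooth fields in this progressive fashion is what lets a cascade represent a large deformation while keeping each stage well-behaved.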

8.
Med Phys; 48(6): 2847-2858, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33583029

ABSTRACT

PURPOSE: Traditional registration of functional magnetic resonance images (fMRI) is typically achieved by registering their coregistered structural MRI. However, this cannot achieve accurate performance, because functional units are not necessarily located consistently relative to anatomical structures. In addition, registration methods based on functional information focus on gray matter (GM) information but ignore the importance of white matter (WM). To overcome the limitations of existing techniques, in this paper we aim to register resting-state fMRI (rs-fMRI) directly from the rs-fMRI data and make full use of GM and WM information to improve registration performance. METHODS: We provide a robust representation of WM functional connectivity features using tissue-specific patch-based functional correlation tensors (ts-PFCTs) as auxiliary information to assist registration. Furthermore, we propose a semi-supervised deep learning model that uses GM and WM information (GM ts-PFCTs and WM ts-PFCTs) during training as additional guidance to improve registration accuracy when such information is not provided for new test image pairs. We implement our method on the 1000 Functional Connectomes Project dataset. To evaluate our method, a group-level analysis was performed on resting-state brain functional networks after registration, yielding t maps. RESULTS: Our method increases the peak t values of the t maps of the default mode network, visual network, central executive network, and sensorimotor network to 21.4, 20.0, 18.4, and 19.0, respectively. Compared with traditional methods [the FMRIB Software Library (FSL), Statistical Parametric Mapping with an echo-planar image template (SPM_EPI), and SPM_T1], our method achieves average improvements of 67.39%, 12.96%, and 25.14%. CONCLUSION: We propose a semi-supervised deep learning network that adds GM and WM information as auxiliary information for resting-state fMRI registration. GM and WM information is extracted and described as GM ts-PFCTs and WM ts-PFCTs. Experimental results show that our method achieves superior registration performance.
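The per-baseline improvement percentages quoted in the RESULTS are plausibly the relative gain in peak t values averaged over the functional networks; the abstract does not give the baseline peaks, so the formula below is an assumption about how such figures are derived, shown only for illustration:

```python
def mean_improvement(ours, baseline):
    """Average percent improvement of peak t values over a baseline
    method, with one value per functional network."""
    return 100.0 * sum((o - b) / b for o, b in zip(ours, baseline)) / len(ours)
```

Under this reading, a 67.39% improvement over FSL means the proposed method's network peaks are, on average, about two-thirds higher than FSL's.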


Subjects
Deep Learning, White Matter, Brain/diagnostic imaging, Brain Mapping, Gray Matter, Magnetic Resonance Imaging