Results 1 - 8 of 8
1.
Comput Biol Med ; 178: 108755, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38897151

ABSTRACT

PURPOSE: Impacted teeth are teeth that remain under the gums or jawbone and fail to reach their normal position even after their expected eruption time. This study aims to detect all impacted teeth and to classify impacted third molars according to the Winter method with an artificial intelligence model on panoramic radiographs. METHODS: In this study, 1197 panoramic radiographs from the dentistry faculty database were collected for all impacted teeth, and 1000 panoramic radiographs were collected for Winter classification. Several pre-processing steps were applied, and the datasets were doubled with data augmentation. Both datasets were randomly divided into 80% training, 10% validation, and 10% testing. After transfer learning and fine-tuning, both datasets were trained with the YOLOv8 deep learning algorithm, a high-performance artificial intelligence model, to detect impacted teeth. The results were evaluated with precision, recall, mAP, and F1-score performance metrics. A graphical user interface was designed for clinical use with the artificial intelligence weights obtained from training. RESULTS: For the detection of impacted third molars according to the Winter classification, the average precision, average recall, and average F1 score were 0.972, 0.967, and 0.969, respectively. For the detection of all impacted teeth, the average precision, average recall, and average F1 score were 0.991, 0.995, and 0.993, respectively. CONCLUSION: According to the results, the artificial intelligence-based YOLOv8 deep learning model successfully detected all impacted teeth and impacted third molars according to the Winter classification system.
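Both datasets in this study were randomly divided into 80% training, 10% validation, and 10% testing. A minimal, framework-agnostic sketch of such a split; the function name, seed, and item count are illustrative, not taken from the paper:

```python
import random

def split_dataset(items, seed=42):
    """Randomly split items into 80% train, 10% validation, 10% test."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

# Illustrative: split 1000 image identifiers.
train, val, test = split_dataset(list(range(1000)))
```

Fixing the seed makes the split reproducible across training runs.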

2.
J Imaging Inform Med ; 2024 May 14.
Article in English | MEDLINE | ID: mdl-38743125

ABSTRACT

Tooth decay is a common oral disease worldwide, but diagnostic errors in dental clinics can delay treatment. This study aims to use artificial intelligence (AI) for the automated detection and localization of secondary, occlusal, and interproximal (D1, D2, D3) caries on bite-wing radiographs. Eight hundred and sixty bite-wing radiographs were collected from the School of Dentistry database. Pre-processing and data augmentation operations were performed. Interproximal (D1, D2, D3), secondary, and occlusal caries on the bite-wing radiographs were annotated by two oral radiologists. The data were split into 80% for training, 10% for validation, and 10% for testing. The AI-based training process was conducted using the YOLOv8 algorithm. A clinical decision support system interface was designed using the Python PyQt5 library, allowing dental caries detection without the need for complex programming procedures. On the test images, the average precision, average sensitivity, and average F1 score for secondary, occlusal, and interproximal caries were 0.977, 0.932, and 0.954, respectively. The AI-based dental caries detection system yielded highly successful test results, receiving full approval from dentists for clinical use. YOLOv8 has the potential to increase sensitivity and reliability while reducing the burden on dentists and can prevent diagnostic errors in dental clinics.
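The precision, sensitivity, and F1 values reported above follow the standard detection definitions. A minimal sketch computing them from true-positive, false-positive, and false-negative counts; the counts below are illustrative, not the study's data:

```python
def detection_metrics(tp, fp, fn):
    """Precision, sensitivity (recall), and F1 score from detection counts."""
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return precision, sensitivity, f1

# Illustrative counts only.
p, s, f = detection_metrics(tp=90, fp=10, fn=10)
```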

3.
Sci Rep ; 14(1): 4437, 2024 02 23.
Article in English | MEDLINE | ID: mdl-38396289

ABSTRACT

Idiopathic osteosclerosis (IO) refers to focal radiopacities of unknown etiology observed in the jaws. These radiopacities are detected incidentally on dental panoramic radiographs taken for other reasons. In this study, we investigated the performance of a deep learning model in detecting IO using a small dataset of dental panoramic radiographs with varying contrasts and features. Two radiologists collected 175 IO-diagnosed dental panoramic radiographs from the dental school database. The dataset size is limited by the rarity of IO, whose incidence in the Turkish population has been reported as 2.7%. To overcome this limitation, data augmentation was performed by horizontally flipping the images, resulting in an augmented dataset of 350 panoramic radiographs. The images were annotated by two radiologists and divided into approximately 70% for training (245 radiographs), 15% for validation (53 radiographs), and 15% for testing (52 radiographs). The study employed the YOLOv5 deep learning model and evaluated the results using precision, recall, F1-score, mAP (mean Average Precision), and average inference time metrics. The training and testing processes were conducted on the Google Colab Pro virtual machine. On the test set, the model achieved a precision of 0.981, a recall of 0.929, an F1-score of 0.954, and an average inference time of 25.4 ms. Although IO-diagnosed radiographs form a small dataset and exhibit varying contrasts and features, the deep learning model provided high detection speed, accuracy, and localization performance. The automatic identification of IO lesions with artificial intelligence algorithms, at high success rates, can contribute to the clinical workflow of dentists by preventing unnecessary biopsy procedures.
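The augmentation step described above doubles the dataset by adding a horizontally flipped copy of each image. A minimal sketch on images represented as nested pixel lists; a real pipeline would of course operate on arrays or image files:

```python
def hflip(image):
    """Horizontally flip an image given as a list of pixel rows."""
    return [row[::-1] for row in image]

def augment(images):
    """Double a dataset by appending a horizontally flipped copy of each image."""
    return images + [hflip(img) for img in images]

imgs = [[[1, 2, 3], [4, 5, 6]]]  # one tiny 2x3 "radiograph"
augmented = augment(imgs)
```

Note that flipping a radiograph also mirrors any side-dependent annotations (left/right jaw), so bounding-box labels must be flipped alongside the pixels.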


Subject(s)
Deep Learning , Osteosclerosis , Humans , Artificial Intelligence , Radiography, Panoramic , Radiography , Contrast Media , Osteosclerosis/diagnostic imaging
5.
Clin Oral Investig ; 27(6): 2679-2689, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36564651

ABSTRACT

OBJECTIVES: Pulpal calcifications are discrete hard calcified masses of varying sizes in the dental pulp cavity. This study aimed to measure the performance of the YOLOv4 deep learning algorithm in automatically determining whether there is calcification in the pulp chambers on bite-wing radiographs. MATERIALS AND METHODS: In this study, 2000 bite-wing radiographs were collected from the faculty database. Oral radiologists labeled the pulp chambers on the radiographs as "Present" or "Absent" according to whether calcification was observed. The data were randomly divided into 80% training, 10% validation, and 10% testing. The weight file for pulpal calcification was obtained by training the YOLOv4 algorithm with the transfer learning method. Using the weights obtained, pulp chambers and calcifications were automatically detected on test radiographs the algorithm had never seen. Two oral radiologists evaluated the test results, and performance criteria were calculated. RESULTS: The results on the test data were evaluated in two stages: detection of pulp chambers and detection of pulpal calcification. The detection performance for pulp chambers was as follows: recall 86.98%, precision 98.94%, F1-score 91.60%, and accuracy 86.18%. The detection performance for pulpal calcification ("Absent"/"Present") was as follows: recall 86.39%, precision 85.23%, specificity 97.94%, F1-score 85.49%, and accuracy 96.54%. CONCLUSION: The YOLOv4 algorithm trained with bite-wing radiographs detected pulp chambers and calcification with high success rates. CLINICAL RELEVANCE: Automatic detection of pulpal calcifications with deep learning can serve in clinical practice as a decision support system with high accuracy, supporting dentists in diagnosis.
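The RESULTS section above reports recall, precision, specificity, F1-score, and accuracy together, which all derive from a 2x2 confusion matrix. A minimal sketch; the counts below are illustrative, not the study's data:

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from a 2x2 confusion matrix."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return {"recall": recall, "precision": precision,
            "specificity": specificity, "f1": f1, "accuracy": accuracy}

# Illustrative counts only.
m = binary_metrics(tp=80, fp=20, tn=80, fn=20)
```

Reporting specificity alongside recall, as the study does, matters when the "Absent" class dominates: accuracy alone can look high even if calcifications are missed.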


Subject(s)
Deep Learning , Dental Pulp Calcification , Humans , Dental Pulp Calcification/diagnostic imaging , Radiography , Dental Pulp Cavity
6.
Dentomaxillofac Radiol ; 51(6): 20220108, 2022 Sep 01.
Article in English | MEDLINE | ID: mdl-35762349

ABSTRACT

OBJECTIVES: The aim of the present study was to compare five convolutional neural networks for predicting osteoporosis based on the mandibular cortical index (MCI) on panoramic radiographs. METHODS: Panoramic radiographs of 744 female patients over 50 years of age were labeled as C1, C2, or C3 depending on the MCI. The data were reviewed in different categories, including (C1, C2, C3), (C1, C2), (C1, C3), and (C1, (C2 + C3)), as two-class and three-class predictions. The data were randomly separated into 20% test data, and the remaining data were used for training and validation with fivefold cross-validation. AlexNET, GoogleNET, ResNET-50, SqueezeNET, and ShuffleNET deep-learning models were trained through the transfer learning method. The results were evaluated by performance criteria including accuracy, sensitivity, specificity, F1-score, AUC, and training duration. The Gradient-Weighted Class Activation Mapping (Grad-CAM) method was applied for visual interpretation of the image regions from which the deep-learning algorithms gathered features. RESULTS: The dataset (C1, C2, C3) reached an accuracy of 81.14% with AlexNET; the dataset (C1, C2), 88.94% with GoogleNET; the dataset (C1, C3), 98.56% with AlexNET; and the dataset (C1, (C2 + C3)), 92.79% with GoogleNET. CONCLUSION: The highest accuracy was obtained in differentiating C3 from C1, where osseous structure characteristics change significantly. Since the C2 score represents the intermediate stage (osteopenia), the structural characteristics of the bone lie closer to those of the C1 and C3 scores. Therefore, datasets including the C2 score produced relatively lower accuracy.
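The fivefold cross-validation used above can be sketched as partitioning sample indices into folds, holding one fold out per round. The fold assignment, seed, and sample count here are illustrative, not the study's:

```python
import random

def kfold_indices(n, k=5, seed=0):
    """Partition n sample indices into k folds.

    Returns k (train_indices, val_indices) pairs; each round holds one
    fold out for validation and trains on the remaining k - 1 folds.
    """
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    return [(sorted(set(idx) - set(fold)), sorted(fold)) for fold in folds]

splits = kfold_indices(100, k=5)
```

Every sample appears in exactly one validation fold, so each model variant is validated on all of the non-test data.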


Subject(s)
Bone Density , Osteoporosis , Female , Humans , Mandible/diagnostic imaging , Middle Aged , Neural Networks, Computer , Osteoporosis/diagnostic imaging , Radiography, Panoramic/methods
7.
Med Biol Eng Comput ; 58(12): 2971-2987, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33006703

ABSTRACT

The binary categorisation of brain tumours is challenging owing to the complexity of tumours. These challenges arise from the diversity of shape, size, and intensity features among identical tumour types. Accordingly, framework designs should be optimised for two phenomena: feature analysis and classification. Given the difficulty of the problem, few studies consider the binary classification of three-dimensional (3D) brain tumours. In this paper, the discrimination of high-grade glioma (HGG) and low-grade glioma (LGG) is accomplished by designing various frameworks based on 3D magnetic resonance imaging (3D MRI) data. Accordingly, diverse phase combinations, feature-ranking approaches, and hybrid classifiers are integrated. Feature analyses are performed using first-order statistics (FOS), examining different phase combinations alongside single phases (T1c, FLAIR, T1, and T2) and considering five feature-ranking approaches (Bhattacharyya, Entropy, Roc, t-test, and Wilcoxon) to determine the appropriate input to the classifier. Hybrid classifiers based on neural networks (NN) are considered owing to their robustness and suitability for medical pattern classification. In this study, state-of-the-art optimisation methods are used to form the hybrid classifiers: dynamic weight particle swarm optimisation (DW-PSO), chaotic dynamic weight particle swarm optimisation (CDW-PSO), and Gauss-map-based chaotic particle swarm optimisation (GM-CPSO). The integrated frameworks, including DW-PSO-NN, CDW-PSO-NN, and GM-CPSO-NN, are evaluated on the BraTS 2017 challenge dataset comprising 210 HGG and 75 LGG samples. A 2-fold cross-validation test and seven metrics (accuracy, AUC, sensitivity, specificity, g-mean, precision, f-measure) are used to evaluate the performance of the frameworks.
In experiments, the most effective framework uses FOS, data including three phase combinations, the Wilcoxon feature-ranking approach, and the GM-CPSO-NN method. This framework achieved scores of 90.18% (accuracy), 85.62% (AUC), 95.24% (sensitivity), 76% (specificity), 85.08% (g-mean), 91.74% (precision), and 93.46% (f-measure) for HGG/LGG discrimination of 3D brain MRI data.
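The first-order statistics (FOS) features mentioned above are intensity-distribution descriptors computed directly from voxel values. A minimal sketch of a typical FOS feature set; the paper's exact feature list is not specified here, so these four are an illustrative assumption:

```python
def first_order_stats(intensities):
    """Typical first-order statistics of an intensity sample.

    Uses population (biased) moment estimators; a real pipeline
    might use unbiased variants or add entropy, energy, etc.
    """
    n = len(intensities)
    mean = sum(intensities) / n
    var = sum((x - mean) ** 2 for x in intensities) / n
    std = var ** 0.5
    skew = sum((x - mean) ** 3 for x in intensities) / (n * std ** 3)
    kurt = sum((x - mean) ** 4 for x in intensities) / (n * var ** 2)
    return {"mean": mean, "variance": var, "skewness": skew, "kurtosis": kurt}

stats = first_order_stats([1, 2, 3, 4, 5])
```

Such per-phase feature vectors are what the feature-ranking approaches (Bhattacharyya, Entropy, Roc, t-test, Wilcoxon) then score before classification.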


Subject(s)
Brain Neoplasms , Glioma , Brain Neoplasms/diagnostic imaging , Glioma/diagnostic imaging , Humans , Magnetic Resonance Imaging , Neural Networks, Computer , Neuroimaging
8.
J Chem Neuroanat ; 88: 33-40, 2018 03.
Article in English | MEDLINE | ID: mdl-29113947

ABSTRACT

Professional musicians represent an ideal model for studying training-induced brain plasticity. The current study aimed to investigate the brain volume and diffusion characteristics of musicians using structural magnetic resonance imaging and diffusion tensor imaging (DTI). The combined use of volumetric and diffusion methods to study the musician brain has not previously been reported in the literature. Our study group consisted of seven male musicians playing an instrument and seven age- and gender-matched non-musicians. We evaluated the volumes of gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF), calculated total intracranial volume (TIV), and measured the fractional anisotropy (FA) of pre-selected WM bundles: corpus callosum (CC), corticospinal tract (CST), superior longitudinal fasciculus (SLF), forceps major (ForMaj), forceps minor (ForMin), and arcuate fasciculus (AF). The mean WM/TIV ratio was higher in musicians than in non-musicians. The mean FA was lower in the CC, SLF, ForMaj, ForMin, and right AF but higher in the right CST in the musicians. The mean total number of fibers was larger in the CST, SLF, left AF, and ForMaj in the musicians. These differences were not statistically significant between the groups (p > 0.05). However, increased GM volume was found in the musicians compared to the non-musicians in the right and left cerebellum, the supramarginal and angular gyri, the left superior and inferior parietal lobules, and the left middle temporal gyrus. Our findings suggest structural brain differences in musicians; confirmation of these results in a larger population is warranted.


Subject(s)
Brain/anatomy & histology , Brain/physiology , Music , Neuronal Plasticity/physiology , Adult , Female , Humans , Magnetic Resonance Imaging/methods , Male , Young Adult