Results 1 - 20 of 50
1.
Reprod Biol Endocrinol ; 22(1): 112, 2024 Aug 29.
Article in English | MEDLINE | ID: mdl-39210437

ABSTRACT

PURPOSE: To identify the machine learning (ML) method with the highest accuracy in predicting men's semen quality from basic questionnaire data on lifestyle behavior. METHODS: The medical records of men whose semen was analyzed for any reason were collected, and those with data on lifestyle behaviors were included in the study. All semen analyses were evaluated according to the WHO 2021 guideline and categorized as normozoospermia, oligozoospermia, teratozoospermia, or asthenozoospermia. The Extra Trees Classifier, Average (AVG) Blender, Light Gradient Boosting Machine (LGBM) Classifier, eXtreme Gradient Boosting (XGB) Classifier, Logistic Regression, and Random Forest Classifier techniques were used as ML algorithms. RESULTS: Seven hundred thirty-four men met the inclusion criteria and had data on lifestyle behavior. According to the WHO 2021 criteria, 356 men (48.5%) had abnormal semen results, with 204 (27.7%) showing oligozoospermia, 193 (26.2%) asthenozoospermia, and 265 (36.1%) teratozoospermia. The AVG Blender model had the highest accuracy and AUC for predicting normozoospermia and teratozoospermia, while the Extra Trees Classifier and Random Forest Classifier models performed best for predicting oligozoospermia and asthenozoospermia, respectively. CONCLUSION: ML models have the potential to predict semen quality from lifestyle data.
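The model-comparison workflow this abstract describes — fitting several classifiers to questionnaire-derived features and ranking them by accuracy and AUC — can be sketched as follows. This is an illustrative sketch on synthetic data, not the study's actual pipeline; the cohort size matches the abstract, but the features and labels are stand-ins.

```python
# Sketch: compare several classifiers and rank them by AUC, as in the abstract.
# Synthetic data stands in for the study's questionnaire records.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=734, n_features=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "ExtraTrees": ExtraTreesClassifier(random_state=42),
    "RandomForest": RandomForestClassifier(random_state=42),
    "LogReg": LogisticRegression(max_iter=1000),
}
scores = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    scores[name] = (accuracy_score(y_te, model.predict(X_te)),
                    roc_auc_score(y_te, proba))
best = max(scores, key=lambda k: scores[k][1])  # rank models by AUC
```

The study additionally used blender/boosting models (AVG Blender, LGBM, XGB); the ranking logic is the same once each model exposes predicted probabilities.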


Subject(s)
Life Style, Machine Learning, Semen Analysis, Male, Humans, Semen Analysis/methods, Adult, Oligospermia/diagnosis, Asthenozoospermia/diagnosis, Teratozoospermia/diagnosis, Middle Aged, Male Infertility/diagnosis
2.
BMC Pregnancy Childbirth ; 24(1): 574, 2024 Aug 31.
Article in English | MEDLINE | ID: mdl-39217284

ABSTRACT

BACKGROUND: We aimed to determine the best-performing machine learning (ML) algorithm for predicting gestational diabetes mellitus (GDM) from sociodemographic and obstetric features available in the pre-conceptional period. METHODS: We collected the data of pregnant women admitted to the obstetric clinic in the first trimester. Maternal age, body mass index (BMI), gravidity, parity, previous birth weight, smoking status, first-visit venous plasma glucose level, family history of diabetes mellitus, and oral glucose tolerance test results were evaluated. The women were grouped by GDM diagnosis (present or absent) and by parity (nulliparous or primiparous). Seven common ML algorithms were employed to construct the predictive models. RESULTS: Ninety-seven mothers were included in the study: 19 nulliparous women with GDM and 26 without, and 29 primiparous women with GDM and 23 without. The variables with the greatest feature importance were the venous plasma glucose level, maternal BMI, and family history of diabetes mellitus. The eXtreme Gradient Boosting (XGB) Classifier had the best predictive value for the two models, with accuracies of 66.7% and 72.7%, respectively. DISCUSSION: The XGB Classifier model constructed with maternal sociodemographic findings and obstetric history could serve as an early prediction model for GDM, especially in low-income countries.
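The feature-importance ranking reported above (glucose level, BMI, and family history leading) is the kind of output tree ensembles expose directly. A minimal sketch on synthetic data — the feature names are hypothetical, and scikit-learn's GradientBoostingClassifier stands in for the XGB Classifier used in the study:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 97  # cohort size from the abstract
# Hypothetical features: glucose, BMI, age, family history (binary)
glucose = rng.normal(95, 15, n)
bmi = rng.normal(27, 4, n)
age = rng.normal(30, 5, n)
family = rng.integers(0, 2, n)
# Synthetic label driven mainly by glucose, mimicking a dominant predictor
y = (glucose + 5 * family + rng.normal(0, 10, n) > 100).astype(int)
X = np.column_stack([glucose, bmi, age, family])

clf = GradientBoostingClassifier(random_state=0).fit(X, y)
# Indices of features, most important first
ranking = np.argsort(clf.feature_importances_)[::-1]
```

With real data, the same `feature_importances_` attribute is how a glucose/BMI/family-history ranking like the one reported would be obtained.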


Subject(s)
Body Mass Index, Gestational Diabetes, Glucose Tolerance Test, Machine Learning, Humans, Gestational Diabetes/diagnosis, Gestational Diabetes/blood, Female, Pregnancy, Adult, Blood Glucose/analysis, Algorithms, First Pregnancy Trimester, Predictive Value of Tests, Parity, Risk Factors, Young Adult
3.
BMC Med Imaging ; 24(1): 172, 2024 Jul 11.
Article in English | MEDLINE | ID: mdl-38992601

ABSTRACT

OBJECTIVES: In the interpretation of panoramic radiographs (PRs), the identification and numbering of teeth is an important part of correct diagnosis. This study evaluates the effectiveness of YOLO-v5 in the automatic detection, segmentation, and numbering of deciduous and permanent teeth on PRs of pediatric patients with mixed dentition. METHODS: A total of 3854 PRs of pediatric patients with mixed dentition were labelled for deciduous and permanent teeth using the CranioCatch labeling program. The dataset was divided into three subsets: training (n = 3093, 80% of the total), validation (n = 387, 10%), and test (n = 385, 10%). An artificial intelligence (AI) algorithm using YOLO-v5 models was developed. RESULTS: The sensitivity, precision, F1 score, and mean average precision at 0.5 (mAP-0.5) values were 0.99, 0.99, 0.99, and 0.98, respectively, for tooth detection, and 0.98, 0.98, 0.98, and 0.98, respectively, for tooth segmentation. CONCLUSIONS: YOLO-v5-based models have the potential to detect and accurately segment deciduous and permanent teeth on PRs of pediatric patients with mixed dentition.
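An 80/10/10 train/validation/test split like the one above can be reproduced with a simple shuffle-and-slice helper. This is a generic sketch, not the study's code; the counts here follow from integer truncation and differ slightly from those reported:

```python
import random

def split_dataset(items, train=0.8, val=0.1, seed=42):
    """Shuffle and split items into train/validation/test subsets."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * train)
    n_val = int(n * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# 3854 radiograph indices, as in the abstract
train_set, val_set, test_set = split_dataset(range(3854))
```

Splitting by patient rather than by image is often preferable in medical imaging to avoid leaking near-duplicate views across subsets; the abstract does not state which was done.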


Subject(s)
Deep Learning, Mixed Dentition, Pediatric Dentistry, Panoramic Radiography, Tooth, Panoramic Radiography/methods, Deep Learning/standards, Tooth/diagnostic imaging, Humans, Preschool Child, Child, Adolescent, Male, Female, Pediatric Dentistry/methods
4.
Odontology ; 112(2): 552-561, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37907818

ABSTRACT

The objective of this study was to use a deep-learning model based on a CNN architecture to detect second mesiobuccal (MB2) canals, which are an anatomical variation of maxillary molar root canals. In the current study, 922 axial sections from the cone beam computed tomography (CBCT) images of 153 patients were used. The segmentation method was employed to identify MB2 canals in maxillary molars without previous endodontic treatment. Labeled images were divided into training (80%), validation (10%), and testing (10%) groups. The artificial intelligence (AI) model was trained using the You Only Look Once v5 (YOLOv5x) architecture for 500 epochs with a learning rate of 0.01. A confusion matrix and receiver-operating characteristic (ROC) analysis were used in the statistical evaluation of the results. The sensitivity of the MB2 canal segmentation model was 0.92, the precision 0.83, and the F1 score 0.87. The area under the curve (AUC) of the model's ROC graph was 0.84, and the mAP value at 0.5 intersection over union (IoU) was 0.88. The deep-learning algorithm showed high success in the detection of the MB2 canal. The success of endodontic treatment can be increased, and clinicians' time preserved, by using such artificial intelligence-based models to identify variations in root canal anatomy before treatment.


Subject(s)
Artificial Intelligence, Dental Pulp Cavity, Humans, Dental Pulp Cavity/diagnostic imaging, Tooth Root, Maxilla/anatomy & histology, Cone-Beam Computed Tomography/methods
5.
Article in English | MEDLINE | ID: mdl-39024043

ABSTRACT

OBJECTIVE: This study aimed to assess the effectiveness of deep convolutional neural network (CNN) algorithms in the detection and segmentation of overhanging dental restorations in bitewing radiographs. METHOD AND MATERIALS: A total of 1160 anonymized bitewing radiographs were used to develop the artificial intelligence (AI) system for the detection and segmentation of overhanging restorations. The data were divided into three groups: 80% for training (930 images, 2399 labels), 10% for validation (115 images, 273 labels), and 10% for testing (115 images, 306 labels). A CNN model known as You Only Look Once (YOLOv5) was trained to detect overhanging restorations in bitewing radiographs. After the remaining 115 radiographs were used to evaluate the efficacy of the proposed CNN model, the accuracy, sensitivity, precision, F1 score, and area under the receiver operating characteristic curve (AUC) were computed. RESULTS: The model demonstrated a precision of 90.9%, a sensitivity of 85.3%, and an F1 score of 88.0%. Furthermore, the model achieved an AUC of 0.859 on the receiver operating characteristic (ROC) curve. The mean average precision (mAP) at an intersection over union (IoU) threshold of 0.5 was notably high at 0.87. CONCLUSION: The findings suggest that deep CNN algorithms are highly effective in the detection and diagnosis of overhanging dental restorations in bitewing radiographs. The high levels of precision, sensitivity, and F1 score, along with the significant AUC and mAP values, underscore the potential of these advanced deep learning techniques in revolutionizing dental diagnostic procedures.
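The reported F1 score is consistent with the stated precision and sensitivity, since F1 is their harmonic mean; a quick check:

```python
def f1_score(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Precision and sensitivity reported in the abstract
f1 = f1_score(0.909, 0.853)  # ≈ 0.880, matching the reported 88.0%
```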

6.
Dentomaxillofac Radiol ; 53(4): 256-266, 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38502963

ABSTRACT

OBJECTIVES: The study aims to develop an artificial intelligence (AI) model based on nnU-Net v2 for automatic maxillary sinus (MS) segmentation in cone beam computed tomography (CBCT) volumes and to evaluate the performance of this model. METHODS: In 101 CBCT scans, the MS were annotated using the CranioCatch labelling software (Eskisehir, Turkey). The dataset was divided into three parts: 80 CBCT scans for training the model, 11 for validation, and 10 for testing. The model was trained using the nnU-Net v2 deep learning architecture with a learning rate of 0.00001 for 1000 epochs. The model's ability to automatically segment the MS on CBCT scans was assessed by several parameters, including F1-score, accuracy, sensitivity, precision, area under the curve (AUC), Dice coefficient (DC), 95% Hausdorff distance (95% HD), and intersection over union (IoU). RESULTS: The F1-score, accuracy, sensitivity, and precision values for successful segmentation of the maxillary sinus in CBCT images were 0.96, 0.99, 0.96, and 0.96, respectively. The AUC, DC, 95% HD, and IoU values were 0.97, 0.96, 1.19, and 0.93, respectively. CONCLUSIONS: Models based on nnU-Net v2 demonstrate the ability to segment the MS autonomously and accurately in CBCT images.
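The two overlap metrics reported here, the Dice coefficient and IoU, are both computed from the intersection of the predicted and reference masks; for a single binary mask they are related by IoU = Dice/(2 − Dice). A minimal sketch on toy arrays (not the study's evaluation code):

```python
import numpy as np

def dice_iou(pred, target):
    """Dice coefficient and IoU for a pair of binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2 * inter / (pred.sum() + target.sum())
    iou = inter / union
    return dice, iou

# Toy 2x3 masks standing in for segmentation output and ground truth
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 1, 0], [0, 0, 1]])
dice, iou = dice_iou(pred, target)  # dice = 2/3, iou = 1/2
```

Note the identity only holds per mask; averaging Dice and IoU separately over many cases (as studies typically do) can break it, which is why reported DC and IoU values need not satisfy it exactly.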


Subject(s)
Artificial Intelligence, Cone-Beam Computed Tomography, Maxillary Sinus, Cone-Beam Computed Tomography/methods, Humans, Maxillary Sinus/diagnostic imaging, Software, Female, Male, Adult
7.
BMC Oral Health ; 24(1): 155, 2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38297288

ABSTRACT

BACKGROUND: This retrospective study aimed to develop a deep learning algorithm for the interpretation of panoramic radiographs and to examine the performance of this algorithm in the detection of periodontal bone losses and bone loss patterns. METHODS: A total of 1121 panoramic radiographs were used in this study. Bone losses in the maxilla and mandible (total alveolar bone loss) (n = 2251), interdental bone losses (n = 25303), and furcation defects (n = 2815) were labeled using the segmentation method. In addition, interdental bone losses were divided into horizontal (n = 21839) and vertical (n = 3464) bone losses according to the defect patterns. A convolutional neural network (CNN)-based artificial intelligence (AI) system was developed using the U-Net architecture. The performance of the deep learning algorithm was statistically evaluated by confusion matrix and ROC curve analysis. RESULTS: The system showed the highest diagnostic performance in the detection of total alveolar bone losses (AUC = 0.951) and the lowest in the detection of vertical bone losses (AUC = 0.733). The sensitivity, precision, F1 score, accuracy, and AUC values were, respectively, 1, 0.995, 0.997, 0.994, and 0.951 for total alveolar bone loss; 0.947, 0.939, 0.943, 0.892, and 0.910 for horizontal bone losses; 0.558, 0.846, 0.673, 0.506, and 0.733 for vertical bone losses; and 0.892, 0.933, 0.912, 0.837, and 0.868 for furcation defects. CONCLUSIONS: AI systems offer promising results in determining periodontal bone loss patterns and furcation defects from dental radiographs. This suggests that CNN algorithms can also be used to provide more detailed information, such as automatic determination of periodontal disease severity and treatment planning, from various dental radiographs.


Subject(s)
Alveolar Bone Loss, Deep Learning, Furcation Defects, Humans, Alveolar Bone Loss/diagnostic imaging, Panoramic Radiography/methods, Retrospective Studies, Furcation Defects/diagnostic imaging, Artificial Intelligence, Algorithms
8.
BMC Oral Health ; 24(1): 1034, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39227802

ABSTRACT

BACKGROUND: This study aims to evaluate the performance of a deep learning system for the assessment of tooth development stages on panoramic radiographs of child patients. METHODS: The study collected a total of 1500 images obtained from panoramic radiographs of child patients between the ages of 5 and 14 years. YOLOv5, a convolutional neural network (CNN)-based object detection model, was used to automatically detect the calcification stages of teeth. The images were used to train and test the YOLOv5 algorithm, and true-positive (TP), false-positive (FP), and false-negative (FN) counts were calculated. A confusion matrix was used to evaluate the performance of the model. RESULTS: Among the 146 test images with 1022 labels, there were 828 TPs, 308 FPs, and 1 FN. The sensitivity, precision, and F1 score of the tooth development stage detection model were 0.99, 0.72, and 0.84, respectively. CONCLUSIONS: A deep learning-based approach for the detection of dental development stages on pediatric panoramic radiographs may facilitate a precise evaluation of the chronological correlation between tooth development stages and age. This can help clinicians make treatment decisions and aid dentists in finding more accurate treatment options.


Subject(s)
Algorithms, Deep Learning, Panoramic Radiography, Humans, Child, Adolescent, Preschool Child, Female, Male, Artificial Intelligence, Tooth/growth & development, Tooth/diagnostic imaging, Age Determination by Teeth/methods, Neural Networks (Computer)
9.
BMC Oral Health ; 24(1): 490, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38658959

ABSTRACT

BACKGROUND: A deep learning model trained on a large image dataset can be used to detect and discriminate targets with similar but not identical appearances. The aim of this study was to evaluate the post-training performance of the CNN-based YOLOv5x algorithm in the detection of white spot lesions in post-orthodontic oral photographs using the limited data available, and to provide a preliminary study for fully automated models that can be clinically integrated in the future. METHODS: A total of 435 images in JPG format were uploaded into the CranioCatch labeling software and labeled for white spot lesions. The labeled images were resized to 640 × 320 while maintaining their aspect ratio before model training, then randomly divided into three groups (training: 349 images, 1589 labels; validation: 43 images, 181 labels; test: 43 images, 215 labels). The YOLOv5x algorithm was used to perform deep learning. The segmentation performance of the tested model was visualized and analyzed using ROC analysis and a confusion matrix, and true-positive (TP), false-positive (FP), and false-negative (FN) values were determined. RESULTS: Among the test group images, there were 133 TPs, 36 FPs, and 82 FNs. The model's precision, recall, and F1 score for detecting white spot lesions were 0.786, 0.618, and 0.692, respectively. The AUC value obtained from the ROC analysis was 0.712, and the mAP value obtained from the precision-recall curve was 0.425. CONCLUSIONS: The model's accuracy and sensitivity in detecting white spot lesions remained lower than expected for practical application, but represent a promising and acceptable detection rate compared with previous studies. The current study provides a preliminary insight that could be further improved by increasing the training dataset and applying modifications to the deep learning algorithm.
CLINICAL RELEVANCE: Deep learning systems can help clinicians distinguish white spot lesions that may be missed during visual inspection.


Subject(s)
Algorithms, Deep Learning, Dental Photography, Humans, Computer-Assisted Image Processing/methods, Dental Photography/methods, Pilot Projects
10.
J Oral Rehabil ; 50(9): 758-766, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37186400

ABSTRACT

BACKGROUND: The use of artificial intelligence has many advantages, especially in the field of oral and maxillofacial radiology, and early diagnosis of temporomandibular joint osteoarthritis by artificial intelligence may improve prognosis. OBJECTIVE: The aim of this study was to perform classification of temporomandibular joint (TMJ) osteoarthritis and TMJ segmentation on cone beam computed tomography (CBCT) sagittal images with artificial intelligence. METHODS: The success of the YOLOv5 architecture, an artificial intelligence model, in TMJ segmentation and osteoarthritis classification was evaluated on 2000 sagittal sections (500 healthy, 500 erosion, 500 osteophyte, and 500 flattening images) obtained from the CBCT DICOM images of 290 patients. RESULTS: The sensitivity, precision, and F1 scores of the model for TMJ osteoarthritis classification were 1, 0.7678, and 0.8686, respectively, with an accuracy of 0.7678. The prediction values of the classification model were 88% for healthy joints, 70% for flattened joints, 95% for joints with erosion, and 86% for joints with osteophytes. The sensitivity, precision, and F1 score of the YOLOv5 model for TMJ segmentation were 1, 0.9953, and 0.9976, respectively; the AUC value for segmentation was 0.9723, and the accuracy was 0.9953. CONCLUSION: The artificial intelligence model applied in this study, with its successful results in TMJ segmentation and osteoarthritis classification, can be a support method that saves time and offers convenience to physicians in the diagnosis of the disease.


Subject(s)
Osteoarthritis, Temporomandibular Joint Disorders, Humans, Temporomandibular Joint Disorders/diagnostic imaging, Artificial Intelligence, Temporomandibular Joint/diagnostic imaging, Cone-Beam Computed Tomography/methods, Osteoarthritis/diagnostic imaging
11.
BMC Oral Health ; 23(1): 764, 2023 10 17.
Article in English | MEDLINE | ID: mdl-37848870

ABSTRACT

BACKGROUND: Panoramic radiographs, on which anatomic landmarks can be observed, are used to detect conditions closely related to pediatric dentistry. The purpose of this study was to investigate the success and reliability of artificial intelligence in detecting maxillary and mandibular anatomic structures observed on panoramic radiographs in children. METHODS: A total of 981 images from pediatric patients were labelled for 9 pediatric anatomic landmarks: the maxillary sinus, orbit, mandibular canal, mental foramen, mandibular foramen, mandibular notch (incisura), articular eminence, and condylar and coronoid processes. Training was carried out using 2D convolutional neural network (CNN) architectures over 500 training epochs, and PyTorch-implemented YOLO-v5 models were produced. The success of the AI models' predictions was tested on a 10% test data set. RESULTS: A total of 14,804 labels were made, comprising the maxillary sinus (1922), orbit (1944), mandibular canal (1879), mental foramen (884), mandibular foramen (1885), mandibular notch (1922), articular eminence (1645), and condylar (1733) and coronoid (990) processes. The most successful F1 scores were obtained for the orbit (1), mandibular notch (0.99), maxillary sinus (0.98), and mandibular canal (0.97). The best sensitivity values were obtained for the orbit, maxillary sinus, mandibular canal, mandibular notch, and condylar process; the worst were for the mental foramen (0.92) and articular eminence (0.92). CONCLUSIONS: Regular and standardized labelling, the relatively larger areas of these structures, and the success of the YOLO-v5 algorithm contributed to these successful results. Automatic segmentation of these structures will save physicians time in clinical diagnosis and will increase the visibility of related pathologies and physicians' awareness of them.


Subject(s)
Anatomic Landmarks, Artificial Intelligence, Humans, Child, Panoramic Radiography/methods, Anatomic Landmarks/diagnostic imaging, Reproducibility of Results, Mandible/diagnostic imaging
12.
Med Princ Pract ; 31(6): 555-561, 2022.
Article in English | MEDLINE | ID: mdl-36167054

ABSTRACT

OBJECTIVE: The purpose of the study was to create an artificial intelligence (AI) system for detecting idiopathic osteosclerosis (IO) on panoramic radiographs for automatic, routine, and simple evaluation. SUBJECT AND METHODS: In this study, a deep learning method was applied to panoramic radiographs obtained from healthy patients. A total of 493 anonymized panoramic radiographs were used to develop the AI system (CranioCatch, Eskisehir, Turkey) for the detection of IO. The panoramic radiographs were acquired from the radiology archives of the Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University. A GoogLeNet Inception v2 model implemented with the TensorFlow library was used for the detection of IO, and a confusion matrix was used to evaluate model performance. RESULTS: Fifty IOs were detected accurately by the AI model among the 52 test images, which contained 57 IOs. The sensitivity, precision, and F-measure values were 0.88, 0.83, and 0.86, respectively. CONCLUSION: A deep learning-based AI algorithm has the potential to detect IO accurately on panoramic radiographs, and AI systems may reduce the diagnostic workload of dentists.


Subject(s)
Deep Learning, Osteosclerosis, Humans, Artificial Intelligence, Panoramic Radiography, Algorithms, Osteosclerosis/diagnostic imaging
13.
Pol J Radiol ; 87: e516-e520, 2022.
Article in English | MEDLINE | ID: mdl-36250137

ABSTRACT

Purpose: Magnetic resonance imaging (MRI) has a special place in the evaluation of orbital and periorbital lesions, and segmentation is one of the deep learning methods applicable to it. In this study, we aimed to perform segmentation of orbital and periorbital lesions. Material and methods: Contrast-enhanced orbital MRIs performed between 2010 and 2019 were retrospectively screened, and 302 cross-sections of contrast-enhanced, fat-suppressed, T1-weighted, axial MRI images of 95 patients, obtained using 3 T and 1.5 T devices, were included in the study. The dataset was divided into three subsets: training, test, and validation. The number of training and validation images was increased fourfold by applying data augmentation (horizontal, vertical, and both). PyTorch U-Net was used for training, with 100 epochs. The intersection over union (IoU) threshold (the Jaccard index) was set at 50%, and the results were calculated. Results: The 77th-epoch model provided the best results: 23 true positives, 4 false positives, and 8 false negatives. The precision, sensitivity, and F1 score were 0.85, 0.74, and 0.79, respectively. Conclusions: Our study demonstrated successful segmentation by a deep learning method. It is one of the pioneering studies on this subject and will shed light on further segmentation studies of orbital MR images.

14.
BMC Med Imaging ; 21(1): 124, 2021 08 13.
Article in English | MEDLINE | ID: mdl-34388975

ABSTRACT

BACKGROUND: Panoramic radiography is an imaging method for displaying maxillary and mandibular teeth together with their supporting structures. Panoramic radiography is frequently used in dental imaging due to its relatively low radiation dose, short imaging time, and low burden to the patient. We verified the diagnostic performance of an artificial intelligence (AI) system based on a deep convolutional neural network method to detect and number teeth on panoramic radiographs. METHODS: The data set included 2482 anonymized panoramic radiographs from adults from the archive of Eskisehir Osmangazi University, Faculty of Dentistry, Department of Oral and Maxillofacial Radiology. A Faster R-CNN Inception v2 model was used to develop an AI algorithm (CranioCatch, Eskisehir, Turkey) to automatically detect and number teeth on panoramic radiographs. Human observation and AI methods were compared on a test data set consisting of 249 panoramic radiographs. True positive, false positive, and false negative rates were calculated for each quadrant of the jaws. The sensitivity, precision, and F-measure values were estimated using a confusion matrix. RESULTS: The total numbers of true positive, false positive, and false negative results were 6940, 250, and 320 for all quadrants, respectively. Consequently, the estimated sensitivity, precision, and F-measure were 0.9559, 0.9652, and 0.9606, respectively. CONCLUSIONS: The deep convolutional neural network system was successful in detecting and numbering teeth. Clinicians can use AI systems to detect and number teeth on panoramic radiographs, which may eventually replace evaluation by human observers and support decision making.
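The reported sensitivity, precision, and F-measure follow directly from the TP/FP/FN counts; a quick check using the totals in the abstract:

```python
def detection_metrics(tp, fp, fn):
    """Sensitivity (recall), precision, and F1 from detection counts."""
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, precision, f1

# Totals over all quadrants, as reported in the abstract
sens, prec, f1 = detection_metrics(tp=6940, fp=250, fn=320)
# ≈ 0.9559, 0.9652, 0.9606 — matching the reported values
```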


Subject(s)
Neural Networks (Computer), Panoramic Radiography, Tooth/diagnostic imaging, Algorithms, Datasets as Topic, Deep Learning, Humans, Sensitivity and Specificity
15.
Acta Odontol Scand ; 79(4): 275-281, 2021 May.
Article in English | MEDLINE | ID: mdl-33176533

ABSTRACT

OBJECTIVES: Radiological examination has an important place in dental practice and is frequently used in intraoral imaging. The correct numbering of teeth on radiographs is a routine practice that takes the dentist time. This study aimed to propose an automatic detection system for numbering teeth in bitewing images using a Faster Region-based Convolutional Neural Network (R-CNN) method. METHODS: The study included 1125 bitewing radiographs of patients who attended the Faculty of Dentistry of Ordu University from 2018 to 2019. Faster R-CNN, an advanced object detection method, was used to identify the teeth, and a confusion matrix was used to evaluate the success of the model. RESULTS: The deep CNN system (CranioCatch, Eskisehir, Turkey) was used to detect and number teeth in bitewing radiographs. Of 715 teeth in 109 bitewing images, 697 were correctly numbered in the test data set. The F1 score, precision, and sensitivity were 0.9515, 0.9293, and 0.9748, respectively. CONCLUSIONS: A CNN approach to the analysis of bitewing images shows promise for detecting and numbering teeth. This method can save dentists time by automatically preparing dental charts.


Subject(s)
Artificial Intelligence, Tooth, Dental Occlusion, Humans, Neural Networks (Computer), Tooth/diagnostic imaging, Turkey
16.
Int J Comput Dent ; 24(1): 1-9, 2021 Feb 26.
Article in English | MEDLINE | ID: mdl-33634681

ABSTRACT

AIM: The aim of the study was to evaluate the success and reliability of an artificial intelligence (AI) application in the detection and classification of submerged teeth in panoramic radiographs. MATERIALS AND METHODS: Convolutional neural network (CNN) algorithms were used to detect and classify submerged molars. The detection module, based on the state-of-the-art Faster R-CNN architecture, processed a radiograph to define the boundaries of submerged molars. A separate testing set was used to evaluate the diagnostic performance of the system and compare it with that of experts in the field. RESULT: The success rate of the classification and identification of the system was high when evaluated against the reference standard, and the system's performance was extremely accurate in comparison with the observers. CONCLUSIONS: The performance of the proposed computer-aided diagnosis solution is comparable to that of experts. An AI application is useful for diagnosing submerged molars and preventing errors, and will facilitate diagnosis for pediatric dentists.


Subject(s)
Artificial Intelligence, Deep Learning, Child, Humans, Pilot Projects, Reproducibility of Results, Deciduous Tooth
17.
Tuberk Toraks ; 69(4): 486-491, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34957742

ABSTRACT

INTRODUCTION: Computed tomography (CT) is an auxiliary modality in the diagnosis of the novel coronavirus (COVID-19) disease and can guide physicians in the presence of lung involvement. In this study, we aimed to investigate the contribution of deep learning to diagnosis in patients with typical COVID-19 pneumonia findings on CT. MATERIALS AND METHODS: This study retrospectively evaluated 690 lesions obtained from 35 patients diagnosed with COVID-19 pneumonia based on typical findings on non-contrast high-resolution CT (HRCT) in our hospital; the diagnoses were also confirmed by other necessary tests. HRCT images were assessed in the parenchymal window, and COVID-19 lesions were detected in the images obtained. For the deep convolutional neural network (CNN) algorithm, a confusion matrix was used, based on a TensorFlow framework in Python. RESULT: A total of 596 labeled lesions from 224 sections were used for training the algorithm, 89 labeled lesions from 27 sections for validation, and 67 labeled lesions from 25 images for testing. Fifty-six of the 67 lesions used in the testing stage were accurately detected by the algorithm, while the remaining 11 were not recognized; there were no false positives. The recall, precision, and F1 score in the test group were 83.58%, 100%, and 91.06%, respectively. CONCLUSIONS: We successfully detected COVID-19 pneumonia lesions on CT images using algorithms created with artificial intelligence. The integration of deep learning into the diagnostic stage in medicine is an important step for the diagnosis of diseases that can cause lung involvement in possible future pandemics.


Subject(s)
COVID-19, Deep Learning, Artificial Intelligence, Humans, Retrospective Studies, SARS-CoV-2, X-Ray Computed Tomography
19.
J Oral Implantol ; 49(4): 344-345, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37527149
20.
J Eval Clin Pract ; 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39113266

ABSTRACT

BACKGROUND AND AIM: Urinary incontinence is an important problem with potentially adverse effects on the psychological, social, and personality development of children. With developing technology, the use of information and communication technologies such as wearable devices, messaging services, and mobile applications has become widespread in addressing health problems. In this study, we aimed to develop a mobile application that facilitates the follow-up of children, increases their compliance with treatment, and ensures continuity of communication between them and health workers. The methodology, design, and preliminary evaluation results of the mobile application are presented in this article. METHODS: During the development process, the content was first created in line with a literature review. After the content was determined, the interface design was produced using MS Word and Photoshop software, with six experts consulted on content and design at this stage. The mobile application, once its design was finalised, was implemented on the Android and iOS platforms, after which 10 children and their families were interviewed. RESULTS: Nine of the families (90%) found the developed mobile application useful and easy to use. The families' suggestions for improving the application were to make it more interesting for children and to enrich its content. CONCLUSION: In line with this feedback, the mobile application was updated and finalised. Preliminary results are promising that the developed mobile application can be used as an aid to treatment in children with urinary incontinence. With the mobile application, urotherapy training was no longer limited to hospital visits, which suggests that it can eliminate the problem of partial or omitted treatment. This research has shown that leveraging technology can be a good option for increasing treatment success.
CLINICAL TRIAL NUMBER: NCT05815940.
