Results 1 - 13 of 13
1.
Stud Health Technol Inform ; 316: 1096-1097, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176572

ABSTRACT

Dentists, especially those who are not oral lesion specialists and who practise in rural areas, need an artificial intelligence (AI) system that can accurately assist them in screening for oral cancer that may appear in smartphone images. Few published studies present a viable model that addresses these needs, especially in the context of oral lesion segmentation in smartphone images. This study demonstrates, for the first time, the use of deep learning-based AI to simultaneously identify the type of oral cancer lesion and precisely outline the lesion boundary in the images. The lesions of interest were oral potentially malignant disorders (OPMDs) and oral squamous cell carcinoma (OSCC) lesions. The model could successfully (1) detect whether the images contained oral lesions, (2) determine the types of the lesions, and (3) precisely outline the boundaries of the lesions. With the future success of our project, patients will be diagnosed and treated early, before pre-cancerous lesions can progress into deadly cancerous ones.
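The abstract does not name the segmentation architecture used. As one plausible, purely hypothetical way to both classify lesion type and outline lesion boundaries in a photograph, an instance-segmentation model such as torchvision's Mask R-CNN could be fine-tuned for OPMD and OSCC classes; all class names, sizes, and settings below are assumptions.

```python
# Hypothetical sketch: fine-tuning Mask R-CNN to classify and outline OPMD/OSCC
# lesions in smartphone photographs. Dataset, class names and sizes are assumed.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 3  # background, OPMD, OSCC

def build_lesion_segmenter(num_classes: int = NUM_CLASSES):
    # Start from a COCO-pretrained Mask R-CNN backbone.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    # Replace the box classification head for the lesion classes.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    # Replace the mask head so it predicts per-class lesion boundaries.
    in_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_mask, 256, num_classes)
    return model

model = build_lesion_segmenter()
model.eval()
# One smartphone image (3 channels, values in [0, 1]); inference returns boxes,
# labels, scores and per-pixel masks outlining each detected lesion.
with torch.no_grad():
    predictions = model([torch.rand(3, 480, 640)])
print(predictions[0].keys())  # dict_keys(['boxes', 'labels', 'scores', 'masks'])
```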


Subjects
Artificial Intelligence, Mouth Neoplasms, Mouth Neoplasms/diagnosis, Humans, Smartphone, Squamous Cell Carcinoma/diagnosis, Squamous Cell Carcinoma/diagnostic imaging, Deep Learning, Early Detection of Cancer, Computer-Assisted Image Interpretation/methods
2.
Int Dent J ; 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39043529

ABSTRACT

BACKGROUND: Preoperative assessment of the impacted mandibular third molar (LM3) on a panoramic radiograph is important in surgical planning. The aim of this study was to develop and evaluate a computer-aided visualisation-based deep learning (DL) system that uses a panoramic radiograph to predict the difficulty level of surgical removal of an impacted LM3. METHODS: The study retrospectively included 1367 LM3 images from 784 patients who presented to the University Dental Hospital from 2021 to 2023. The difficulty level of surgically removing impacted LM3s was assessed with our newly developed DL system, which integrated 3 distinct DL models: ResNet101V2 handled binary classification for identifying impacted LM3s in panoramic radiographs, RetinaNet detected the precise location of the impacted LM3, and a Vision Transformer performed multiclass image classification to grade the difficulty of removing the detected impacted LM3. RESULTS: The ResNet101V2 model achieved a classification accuracy of 0.8671. The RetinaNet model demonstrated exceptional detection performance, with a mean average precision of 0.9928. The Vision Transformer model delivered an average accuracy of 0.7899 in predicting removal difficulty levels. CONCLUSIONS: The 3-phase computer-aided visualisation-based DL system performed very well in predicting the difficulty level of surgically removing an impacted LM3 from panoramic radiographs.
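The abstract names the three models but not their configurations. The sketch below illustrates only the first stage (a binary ResNet101V2 screen for impacted LM3s) using the Keras applications API; input size, optimizer, and freezing strategy are assumptions, not the authors' settings.

```python
# Hypothetical sketch of the first stage of the three-model pipeline:
# a binary ResNet101V2 classifier that flags panoramic radiographs
# containing an impacted lower third molar.
import tensorflow as tf

def build_stage1_classifier(input_shape=(512, 512, 3)):
    backbone = tf.keras.applications.ResNet101V2(
        include_top=False, weights="imagenet", input_shape=input_shape)
    backbone.trainable = False  # fine-tune only the new head at first
    inputs = tf.keras.Input(shape=input_shape)
    x = tf.keras.applications.resnet_v2.preprocess_input(inputs)
    x = backbone(x, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # impacted LM3: yes/no
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

stage1 = build_stage1_classifier()
stage1.summary()
# Radiographs flagged positive here would then be passed to a RetinaNet
# detector to localize the tooth, and to a Vision Transformer that grades
# the surgical difficulty of the detected LM3.
```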

3.
Eur J Dent ; 18(3): 907-917, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38744326

ABSTRACT

OBJECTIVE: The 5-year survival rate is a key measure for assessing oral cancer prognosis. The purpose of this study was to analyze oral cancer data to discover and rank the prognostic factors associated with 5-year survival of oral cancer using the association rule mining (ARM) technique. MATERIALS AND METHODS: This study is a retrospective analysis of 897 oral cancer patients from a regional cancer center between 2011 and 2017. The 5-year survival rate was assessed. Multivariable Cox proportional hazards analysis was performed to determine prognostic factors. ARM was applied to clinicopathologic and treatment-modality data to identify and rank the prognostic factors associated with 5-year survival of oral cancer. RESULTS: The 5-year overall survival rate was 35.1%. Multivariable Cox proportional hazards analysis showed that tumor (T) stage, lymph node metastasis, surgical margin, extranodal extension, recurrence, and distant metastasis were significantly associated with the overall survival rate (p < 0.05). The top-ranked rules associated with death within 5 years were positive extranodal extension, followed by positive perineural invasion and positive lymphovascular invasion, with confidence levels of 0.808, 0.808, and 0.804, respectively. CONCLUSION: This study showed that extranodal extension, perineural invasion, and lymphovascular invasion were the top-ranking prognostic factors affecting the 5-year survival of oral cancer.
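As an illustration of the ARM step described above, the following sketch mines association rules from one-hot encoded clinicopathologic records and ranks the rules whose consequent is death within 5 years by confidence. The toy data, column names, and thresholds are hypothetical, not the study's data dictionary.

```python
# Hypothetical sketch of ranking prognostic factors with association rule mining.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# One-hot encoded clinicopathologic records (one row per patient, toy values).
records = pd.DataFrame({
    "extranodal_extension":    [1, 1, 0, 1, 0, 1],
    "perineural_invasion":     [1, 0, 0, 1, 1, 1],
    "lymphovascular_invasion": [1, 0, 1, 1, 0, 1],
    "death_within_5y":         [1, 1, 0, 1, 0, 1],
}).astype(bool)

frequent = apriori(records, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)

# Keep only rules whose consequent is death within 5 years and rank them
# by confidence, mirroring how the top prognostic rules were reported.
death_rules = rules[rules["consequents"].apply(
    lambda c: c == frozenset({"death_within_5y"}))]
print(death_rules.sort_values("confidence", ascending=False)
      [["antecedents", "confidence", "support"]])
```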

4.
BMC Oral Health ; 24(1): 519, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38698358

ABSTRACT

BACKGROUND: Oral cancer is a deadly disease and a major cause of morbidity and mortality worldwide. The purpose of this study was to develop a fuzzy deep learning (FDL)-based model to estimate survival time from clinicopathologic data of oral cancer. METHODS: Electronic medical records of 581 oral squamous cell carcinoma (OSCC) patients, treated with surgery with or without radiochemotherapy, were collected retrospectively from the Oral and Maxillofacial Surgery Clinic and the Regional Cancer Center from 2011 to 2019. A deep learning (DL) model was trained to classify survival-time classes based on clinicopathologic data. Fuzzy logic was then integrated into the DL model and trained to create FDL-based models for estimating the survival-time classes. RESULTS: Model performance was evaluated on a test dataset. For estimation of survival time, the DL and FDL models achieved accuracies of 0.74 and 0.97 and areas under the receiver operating characteristic curve (AUC) of 0.84-1.00 and 1.00, respectively. CONCLUSIONS: Integrating fuzzy logic into DL models can improve the accuracy of survival-time estimation from clinicopathologic data of oral cancer.
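The abstract does not describe how fuzzy logic was integrated into the DL model. The sketch below shows one generic way to combine the two ideas (fuzzifying a numeric input into Gaussian membership degrees before a small dense classifier); it is not the authors' FDL architecture, and every centre, width, and layer size is an assumption.

```python
# Hypothetical illustration of combining fuzzy logic with a neural network:
# numeric inputs (e.g., age) are fuzzified into Gaussian membership degrees
# ("low", "medium", "high") before a dense classifier predicts the
# survival-time class. NOT the authors' FDL architecture.
import numpy as np
import tensorflow as tf

def gaussian_membership(x, centres, sigma):
    """Degree of membership of each value in x to each fuzzy set."""
    return np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(0)
age = rng.uniform(30, 90, size=200)        # toy clinicopathologic feature
labels = (age > 65).astype(int)            # toy survival-time class

centres = np.array([40.0, 60.0, 80.0])     # "low", "medium", "high" age
fuzzy_features = gaussian_membership(age, centres, sigma=10.0)  # shape (200, 3)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # survival-time classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(fuzzy_features, labels, epochs=5, verbose=0)
```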


Subjects
Deep Learning, Fuzzy Logic, Mouth Neoplasms, Humans, Mouth Neoplasms/pathology, Mouth Neoplasms/mortality, Retrospective Studies, Female, Male, Middle Aged, Squamous Cell Carcinoma/pathology, Squamous Cell Carcinoma/mortality, Squamous Cell Carcinoma/therapy, Survival Analysis, Aged, Survival Rate, Adult
5.
BMC Oral Health ; 24(1): 212, 2024 Feb 10.
Article in English | MEDLINE | ID: mdl-38341571

ABSTRACT

BACKGROUND: Oral cancer is a life-threatening malignancy that affects the survival rate and quality of life of patients. The aim of this systematic review was to review deep learning (DL) studies on the diagnosis and prognostic prediction of oral cancer. METHODS: This systematic review was conducted following the PRISMA guidelines. Databases (Medline via PubMed, Google Scholar, Scopus) were searched for relevant studies published from January 2000 to June 2023. RESULTS: Fifty-four studies qualified for inclusion, covering diagnosis (n = 51) and prognostic prediction (n = 3). Thirteen studies showed a low risk of bias in all domains, and 40 studies showed a low risk of concern regarding applicability. The reported performance of the DL models included accuracies of 85.0-100%, F1-scores of 79.31-89.0%, Dice coefficients of 76.0-96.3%, and concordance indices of 0.78-0.95 for classification, object detection, segmentation, and prognostic prediction, respectively. The pooled diagnostic odds ratio was 2549.08 (95% CI 410.77-4687.39) for the classification studies. CONCLUSIONS: The number of DL studies in oral cancer is increasing and covers a diverse range of architectures. The reported accuracy indicates promising DL performance in oral cancer studies and suggests potential utility in improving informed clinical decision-making for oral cancer.
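A pooled diagnostic odds ratio like the one reported above can, in principle, be obtained with a standard inverse-variance pool of per-study log-DORs; the sketch below shows the arithmetic on illustrative 2x2 tables, not the review's actual data.

```python
# Hypothetical sketch: pooled diagnostic odds ratio (DOR) with a 95% CI from
# per-study 2x2 tables, using a fixed-effect inverse-variance pool of log-DORs.
import math

studies = [  # (TP, FP, FN, TN) per study -- illustrative counts only
    (90, 5, 10, 95),
    (80, 8, 12, 100),
    (70, 4, 6, 60),
]

log_dors, weights = [], []
for tp, fp, fn, tn in studies:
    log_dor = math.log((tp * tn) / (fp * fn))
    var = 1 / tp + 1 / fp + 1 / fn + 1 / tn   # variance of the log-DOR
    log_dors.append(log_dor)
    weights.append(1 / var)

pooled_log = sum(w * d for w, d in zip(weights, log_dors)) / sum(weights)
se = math.sqrt(1 / sum(weights))
print(f"pooled DOR = {math.exp(pooled_log):.1f} "
      f"(95% CI {math.exp(pooled_log - 1.96 * se):.1f}"
      f"-{math.exp(pooled_log + 1.96 * se):.1f})")
```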


Subjects
Deep Learning, Mouth Neoplasms, Humans, Prognosis
6.
Stud Health Technol Inform ; 310: 1495-1496, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269713

ABSTRACT

Temporomandibular joint (TMJ) disorders can be misinterpreted because of various normal TMJ features, leading to treatment failure. This study assessed two deep learning algorithms, DenseNet-121 and InceptionV3, for multi-class classification of TMJ normal variations and disorders in 1,710 panoramic radiographs. The overall accuracies of DenseNet-121 and InceptionV3 were 0.99 and 0.95, respectively. The AUC ranged from 0.99 to 1.00, indicating high performance for the classification of TMJ disorders in panoramic radiographs.
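A minimal sketch of the evaluation step described above (overall accuracy plus one-vs-rest AUC for a multi-class classifier), using scikit-learn on toy predictions rather than the study's DenseNet-121 / InceptionV3 outputs:

```python
# Hypothetical evaluation sketch for a multi-class TMJ classifier:
# overall accuracy plus one-vs-rest AUC on toy softmax outputs.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = np.array([0, 1, 2, 2, 1, 0, 2, 1])              # TMJ class indices
y_prob = np.array([                                        # softmax outputs
    [0.9, 0.05, 0.05], [0.1, 0.8, 0.1], [0.05, 0.15, 0.8],
    [0.2, 0.2, 0.6],   [0.1, 0.7, 0.2], [0.7, 0.2, 0.1],
    [0.1, 0.1, 0.8],   [0.2, 0.6, 0.2],
])

accuracy = accuracy_score(y_true, y_prob.argmax(axis=1))
auc = roc_auc_score(y_true, y_prob, multi_class="ovr")    # one-vs-rest AUC
print(f"accuracy = {accuracy:.2f}, AUC (OvR) = {auc:.2f}")
```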


Subjects
Deep Learning, Temporomandibular Joint Disorders, Humans, Algorithms, Temporomandibular Joint Disorders/diagnostic imaging
7.
Stud Health Technol Inform ; 310: 1497-1498, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269714

ABSTRACT

This study deployed deep learning-based object detection algorithms to detect midfacial fractures in computed tomography (CT) images. The object detection models were created using faster R-CNN and RetinaNet from 2,000 CT images. The best detection model, faster R-CNN, yielded an average precision of 0.79 and an area under the curve (AUC) of 0.80. In conclusion, the faster R-CNN model has good potential for detecting midfacial fractures in CT images.
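A hedged sketch of how a COCO-pretrained faster R-CNN in torchvision can be adapted to a fracture-detection class set; the class count, image size, and weights below are assumptions rather than the study's configuration.

```python
# Hypothetical sketch: adapting a COCO-pretrained faster R-CNN to detect
# midfacial fractures in CT slices rendered as 3-channel images.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 2  # background + midfacial fracture

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

model.eval()
with torch.no_grad():
    # A single CT slice with pixel values scaled to [0, 1].
    detections = model([torch.rand(3, 512, 512)])
# Each detection carries a bounding box, a class label and a confidence score;
# average precision would then be computed against ground-truth boxes.
print(detections[0]["boxes"].shape, detections[0]["scores"].shape)
```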


Subjects
Deep Learning, Bone Fractures, Humans, Algorithms, Area Under the Curve
8.
Sci Rep ; 13(1): 3434, 2023 Mar 01.
Article in English | MEDLINE | ID: mdl-36859660

ABSTRACT

The purpose of this study was to evaluate the performance of convolutional neural network-based models for the detection and classification of maxillofacial fractures in computed tomography (CT) maxillofacial bone window images. A total of 3407 CT images, 2407 of which contained maxillofacial fractures, were retrospectively obtained from the regional trauma center from 2016 to 2020. Multiclass image classification models were created using DenseNet-169 and ResNet-152, and multiclass object detection models were created using faster R-CNN and YOLOv5. DenseNet-169 and ResNet-152 were trained to classify maxillofacial fractures into frontal, midface, mandibular, and no-fracture classes. Faster R-CNN and YOLOv5 were trained to automate the placement of bounding boxes to specifically detect fracture lines in each fracture class. The performance of each model was evaluated on an independent test dataset. The overall accuracy of the best multiclass classification model, DenseNet-169, was 0.70. The mean average precision of the best multiclass detection model, faster R-CNN, was 0.78. In conclusion, DenseNet-169 and faster R-CNN have potential for the detection and classification of maxillofacial fractures in CT images.
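As a sketch of the classification arm described above, a DenseNet-169 backbone can be given a 4-way head for the frontal, midface, mandibular, and no-fracture classes; the input size, optimizer, and freezing strategy below are assumptions.

```python
# Hypothetical sketch of the multiclass classification arm: a DenseNet-169
# backbone with a 4-way softmax head for maxillofacial fracture classes.
import tensorflow as tf

CLASSES = ["frontal", "midface", "mandibular", "no_fracture"]

backbone = tf.keras.applications.DenseNet169(
    include_top=False, weights="imagenet", input_shape=(512, 512, 3))
backbone.trainable = False  # train only the new head at first

inputs = tf.keras.Input(shape=(512, 512, 3))
x = tf.keras.applications.densenet.preprocess_input(inputs)
x = backbone(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(len(CLASSES), activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```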


Subjects
Bone Fractures, Humans, Retrospective Studies, Face, Neural Networks (Computer), X-Ray Computed Tomography
9.
BMC Med Res Methodol ; 22(1): 281, 2022 Nov 01.
Article in English | MEDLINE | ID: mdl-36316659

ABSTRACT

BACKGROUND: The aim of this study was to evaluate the most effective combination of autoregressive integrated moving average (ARIMA), a time series model, and association rule mining (ARM) techniques to identify meaningful prognostic factors and predict the number of cases for efficient COVID-19 crisis management. METHODS: The 3685 COVID-19 patients admitted to Thailand's first university field hospital during the four waves of infections from March 2020 to August 2021 were analyzed using ARIMA, its extension with exogenous variables (ARIMAX), and ARM. RESULTS: The ARIMA (2, 2, 2) model with an optimized parameter set predicted the number of COVID-19 cases admitted to the hospital with acceptable error scores (R2 = 0.5695, RMSE = 29.7605, MAE = 27.5102). Key features from ARM (symptoms, age, and underlying diseases) were selected to build an ARIMAX (1, 1, 1) model, which yielded better performance in predicting the number of admitted cases (R2 = 0.5695, RMSE = 27.7508, MAE = 23.4642). The association analysis revealed that hospital stays of more than 14 days were associated with healthcare worker patients and with patients who presented with underlying diseases. Worsening cases that required referral to a hospital ward were associated with patients admitted with symptoms, pregnancy, metabolic syndrome, or age greater than 65 years. CONCLUSIONS: This study demonstrated that the ARIMAX model has the potential to predict the number of COVID-19 cases by incorporating the most strongly associated prognostic factors, identified by the ARM technique, into the ARIMA model, which could be used for preparation and optimal management of hospital resources during pandemics.
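A minimal sketch of the ARIMA(2, 2, 2) versus ARIMAX(1, 1, 1) comparison using statsmodels; the synthetic admissions series and the single exogenous regressor below stand in for the study's variables and are not its data.

```python
# Hypothetical sketch of the ARIMA / ARIMAX comparison on a daily-admissions
# series; data are synthetic, and "symptomatic_share" is an illustrative
# stand-in for an ARM-selected exogenous variable.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
days = pd.date_range("2021-01-01", periods=120, freq="D")
admissions = pd.Series(
    50 + np.cumsum(rng.normal(0, 3, size=120)), index=days, name="admissions")
symptomatic_share = pd.Series(
    rng.uniform(0.2, 0.8, size=120), index=days, name="symptomatic_share")

# Plain ARIMA(2, 2, 2), as reported for the baseline model.
arima_fit = ARIMA(admissions, order=(2, 2, 2)).fit()

# ARIMAX(1, 1, 1): same family, with an exogenous regressor added.
arimax_fit = ARIMA(admissions, exog=symptomatic_share, order=(1, 1, 1)).fit()

print(arima_fit.forecast(steps=7))
print(arimax_fit.forecast(steps=7, exog=symptomatic_share.iloc[-7:]))
```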


Subjects
COVID-19, Humans, Aged, COVID-19/epidemiology, Time Factors, Statistical Models, Pandemics, Forecasting, Data Mining
10.
PLoS One ; 17(8): e0273508, 2022.
Article in English | MEDLINE | ID: mdl-36001628

ABSTRACT

Artificial intelligence (AI) applications in oncology have developed rapidly, with reported successes in recent years. This work aims to evaluate the performance of deep convolutional neural network (CNN) algorithms for the classification and detection of oral potentially malignant disorders (OPMDs) and oral squamous cell carcinoma (OSCC) in oral photographic images. A dataset comprising 980 oral photographic images was divided into 365 images of OSCC, 315 images of OPMDs, and 300 non-pathological images. Multiclass image classification models were created using DenseNet-169, ResNet-101, SqueezeNet, and Swin-S. Multiclass object detection models were created using faster R-CNN, YOLOv5, RetinaNet, and CenterNet2. The best multiclass image classification model, DenseNet-169, achieved AUCs of 1.00 and 0.98 on OSCC and OPMDs, respectively. The best multiclass CNN-based object detection model, faster R-CNN, achieved AUCs of 0.88 and 0.64 on OSCC and OPMDs, respectively. The classification performance of DenseNet-169 was in line with that of experts and superior to that of general practitioners (GPs). In conclusion, CNN-based models have potential for the identification of OSCC and OPMDs in oral photographic images and are expected to serve as a diagnostic tool to assist GPs in the early detection of oral cancer.
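As an illustration of one of the compared classifiers, the sketch below adapts torchvision's Swin-S to the three photo classes (OSCC, OPMD, non-pathological); the weights enum and head replacement follow torchvision conventions, and everything else is assumed rather than taken from the study.

```python
# Hypothetical sketch: adapting Swin-S to three oral-photograph classes.
import torch
import torch.nn as nn
import torchvision

NUM_CLASSES = 3  # OSCC, OPMD, non-pathological

model = torchvision.models.swin_s(weights="DEFAULT")
model.head = nn.Linear(model.head.in_features, NUM_CLASSES)  # new classifier head

model.eval()
with torch.no_grad():
    logits = model(torch.rand(1, 3, 224, 224))   # one oral photograph
    probs = torch.softmax(logits, dim=1)
print(probs)   # class probabilities for OSCC / OPMD / normal mucosa
```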


Subjects
Squamous Cell Carcinoma, Mouth Neoplasms, Oral Ulcers, Artificial Intelligence, Squamous Cell Carcinoma/diagnostic imaging, Squamous Cell Carcinoma/pathology, Early Detection of Cancer, Humans, Mouth Neoplasms/diagnostic imaging, Mouth Neoplasms/pathology, Neural Networks (Computer)
11.
Article in English | MEDLINE | ID: mdl-34886359

ABSTRACT

This study aims to analyze the patient characteristics and factors related to clinical outcomes in the crisis management of the COVID-19 pandemic in a field hospital. We conducted a retrospective analysis of patient clinical data from March 2020 to August 2021 at the first university-based field hospital in Thailand. Multivariable logistic regression models were used to evaluate the factors associated with the field hospital discharge destination. Of a total of 3685 COVID-19 patients, 53.6% were women, and the median age was 30 years. General workers accounted for 97.5% of patients, while 2.5% were healthcare workers. Most of the patients (84.6%) were exposed to the coronavirus in the community. At the study end point, no patients had died, 97.7% had been discharged home, and 2.3% had been transferred to designated higher-level hospitals because their condition had worsened. In multivariable logistic regression analysis, older patients with one or more underlying diseases who showed symptoms of COVID-19 and whose chest X-rays showed signs of pneumonia were in a worse condition than other patients. In conclusion, the university-based field hospital has the potential to fill acute gaps and prevent public agencies from being overwhelmed during crisis events.
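A hedged sketch of a multivariable logistic regression relating the factors named above to the discharge destination, using the statsmodels formula API on simulated data that merely mimics the study's variables:

```python
# Hypothetical sketch: multivariable logistic regression for discharge
# destination (home vs. referral); the data frame is simulated, not the study's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "age": rng.uniform(18, 90, n),
    "underlying_disease": rng.integers(0, 2, n),
    "symptomatic": rng.integers(0, 2, n),
    "pneumonia_on_cxr": rng.integers(0, 2, n),
})
# Toy outcome: 1 = referred to a higher-level hospital, 0 = discharged home.
logit_p = (-6 + 0.04 * df["age"] + 0.8 * df["underlying_disease"]
           + 0.7 * df["symptomatic"] + 1.2 * df["pneumonia_on_cxr"])
df["referred"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit(
    "referred ~ age + underlying_disease + symptomatic + pneumonia_on_cxr",
    data=df).fit(disp=False)
print(model.summary())        # coefficients and p-values
print(np.exp(model.params))   # odds ratios
```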


Subjects
COVID-19, Adult, Female, Health Personnel, Humans, Pandemics, Retrospective Studies, SARS-CoV-2
12.
Eur J Dent ; 15(4): 812-816, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34428837

ABSTRACT

A variety of black-pigmented lesions can be found in the oral cavity, ranging from harmless benign lesions such as melanotic macules, smoker's melanosis, amalgam/graphite tattoos, and pigmented nevi to life-threatening oral malignant melanoma. Oral melanoma is a rare and aggressive malignant tumor that originates from the proliferation of melanocytes and accounts for only 0.5% of all oral malignancies. The etiology is unknown. Most oral melanomas occur on the palate and the upper alveolar ridge, whereas occurrences on the buccal mucosa, the lower alveolar ridge, and the lip are rare, with only a few reports in the literature. The diagnosis is confirmed by biopsy. The prognosis is poor, with a 5-year survival rate of ~20%. In this report, we present a case of a large oral melanoma at the right buccal mucosa involving the right lower alveolar ridge and lip commissure, which are relatively unusual locations for oral melanoma. In addition, immunohistochemical markers used for diagnostic, therapeutic, and prognostic decision-making in oral melanoma are also discussed.

13.
J Oral Pathol Med ; 50(9): 911-918, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34358372

ABSTRACT

BACKGROUND: Oral cancer is a deadly disease among the most common malignant tumors worldwide, and it has become an increasingly important public health problem in developing and low-to-middle-income countries. This study aims to use convolutional neural network (CNN) deep learning algorithms to develop an automated classification and detection model for oral cancer screening. METHODS: The study included 700 clinical oral photographs, collected retrospectively from the oral and maxillofacial center, which were divided into 350 images of oral squamous cell carcinoma and 350 images of normal oral mucosa. The classification and detection models were created using DenseNet121 and faster R-CNN, respectively. Four hundred and ninety images were randomly selected as training data, and 70 and 140 images were assigned as validation and testing data, respectively. RESULTS: The DenseNet121 classification model achieved a precision of 99%, a recall of 100%, an F1 score of 99%, a sensitivity of 98.75%, a specificity of 100%, and an area under the receiver operating characteristic curve of 99%. The faster R-CNN detection model achieved a precision of 76.67%, a recall of 82.14%, an F1 score of 79.31%, and an area under the precision-recall curve of 0.79. CONCLUSION: The DenseNet121 and faster R-CNN algorithms were shown to offer acceptable potential for the classification and detection of cancerous lesions in oral photographic images.
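The classification metrics reported above (precision, recall/sensitivity, F1, specificity, ROC AUC) can be derived from a binary classifier's outputs as in the sketch below; the predictions are toy values, not the study's results.

```python
# Hypothetical sketch: deriving the reported classification metrics from a
# binary classifier's predicted probabilities with scikit-learn.
import numpy as np
from sklearn.metrics import (confusion_matrix, precision_score, recall_score,
                             f1_score, roc_auc_score)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])   # 1 = OSCC, 0 = normal mucosa
y_prob = np.array([0.95, 0.10, 0.80, 0.70, 0.30, 0.20, 0.90, 0.40, 0.60, 0.05])
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("precision  ", precision_score(y_true, y_pred))
print("recall     ", recall_score(y_true, y_pred))   # = sensitivity
print("F1 score   ", f1_score(y_true, y_pred))
print("specificity", tn / (tn + fp))
print("ROC AUC    ", roc_auc_score(y_true, y_prob))
```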


Subjects
Squamous Cell Carcinoma, Deep Learning, Mouth Neoplasms, Algorithms, Squamous Cell Carcinoma/diagnostic imaging, Humans, Mouth Neoplasms/diagnostic imaging, Retrospective Studies