Results 1 - 8 of 8
1.
PLoS One; 18(9): e0291972, 2023.
Article in English | MEDLINE | ID: mdl-37747904

ABSTRACT

Oral potentially-malignant disorders are highly prevalent and exhibit diverse severity and risk of malignant transformation, which mandates a point-of-care diagnostic tool. Low patient compliance with biopsies underscores the need for minimally-invasive diagnosis. Oral cytology, an apt method, is not yet clinically applicable due to a lack of definitive diagnostic criteria and subjective interpretation. The primary objective of this study was to identify and evaluate the efficacy of biomarkers for cytology-based delineation of high-risk oral lesions. A comprehensive systematic review and meta-analysis of biomarkers recognized a panel of markers (n: 10) delineating dysplastic oral lesions. In this observational cross-sectional study, immunohistochemical validation (n: 131) identified a four-marker panel, CD44, Cyclin D1, SNA-1, and MAA, with the best sensitivity (>75%; AUC >0.75) in delineating benign, hyperplastic, and mildly dysplastic lesions (low-risk lesions; LRL) from moderate-severe dysplasia (high-grade dysplasia; HGD) and cancer. Independent validation by cytology (n: 133) showed that expression of SNA-1 and CD44 significantly delineated HGD and cancer with high sensitivity (>83%). Multiplex validation in another cohort (n: 138), integrated with a machine learning model incorporating clinical parameters, further improved the sensitivity and specificity (>88%). Additionally, automated image analysis of the SNA-1-profiled dataset also provided high sensitivity (86%). In the present study, cytology with a two-marker panel, detecting aberrant glycosylation and a glycoprotein, provided efficient risk stratification of oral lesions.
Our study indicated that a two-biomarker panel (CD44/SNA-1) integrated with clinical parameters, SNA-1 with automated image analysis (sensitivity >85%), or multiplexed two-marker panel analysis (sensitivity >90%) provided efficient risk stratification of oral lesions, underscoring the significance of biomarker-integrated cytopathology in the development of a point-of-care assay.
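The abstract above describes integrating marker expression with clinical parameters in a machine-learning risk model. A minimal sketch of that idea, assuming a simple logistic risk score: the feature set (marker positivity plus age and tobacco habit) and the coefficients are purely illustrative, not the study's fitted model.

```python
import numpy as np

def risk_score(cd44, sna1, age, habit, weights=None):
    """Probability that a lesion is high-risk (HGD/cancer) under a toy logistic model."""
    if weights is None:
        # Illustrative coefficients: intercept, CD44, SNA-1, age, tobacco habit.
        weights = np.array([-4.0, 2.5, 3.0, 0.02, 1.2])
    x = np.array([1.0, cd44, sna1, age, habit])
    z = weights @ x                       # linear combination of features
    return 1.0 / (1.0 + np.exp(-z))      # logistic link maps score to (0, 1)

# A lesion with high marker expression in a tobacco user scores near 1;
# a marker-negative lesion in a young non-user scores near 0.
high = risk_score(cd44=1.0, sna1=1.0, age=55, habit=1)
low = risk_score(cd44=0.1, sna1=0.0, age=30, habit=0)
```

In practice the study's model would be trained on the cohort data; the point of the sketch is only how biomarker and clinical features combine into one risk probability.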


Subject(s)
Bioassay; Hyaluronan Receptors; Humans; Hyperplasia/diagnosis; Automation; Biopsy; Glycosylation; Observational Studies as Topic
2.
Cancers (Basel); 15(5), 2023 Feb 23.
Article in English | MEDLINE | ID: mdl-36900210

ABSTRACT

Convolutional neural networks (CNNs) have demonstrated excellent performance in oral cancer detection and classification. However, the end-to-end learning strategy makes CNNs hard to interpret, and it can be challenging to fully understand their decision-making procedure. Reliability is also a significant challenge for CNN-based approaches. In this study, we proposed a neural network called the attention branch network (ABN), which combines visual explanation and attention mechanisms to improve recognition performance and interpret the decision-making simultaneously. We also embedded expert knowledge into the network by having human experts manually edit the attention maps used by the attention mechanism. Our experiments showed that the ABN performs better than the original baseline network. By introducing Squeeze-and-Excitation (SE) blocks into the network, the cross-validation accuracy increased further. Furthermore, we observed that some previously misclassified cases were correctly recognized after the attention maps were manually edited. The cross-validation accuracy increased from 0.846 to 0.875 with the ABN (ResNet18 as baseline), 0.877 with SE-ABN, and 0.903 after embedding expert knowledge. The proposed method provides an accurate, interpretable, and reliable oral cancer computer-aided diagnosis system through visual explanation, attention mechanisms, and expert knowledge embedding.
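The Squeeze-and-Excitation block mentioned above reweights feature-map channels by a learned gate. A minimal NumPy inference sketch, with random stand-in weights where a real network would have learned W1 and W2:

```python
import numpy as np

def se_block(x, w1, w2):
    """x: feature map of shape (C, H, W). Returns the channel-reweighted map."""
    s = x.mean(axis=(1, 2))                # squeeze: global average pool -> (C,)
    e = np.maximum(w1 @ s, 0.0)            # excitation: FC + ReLU -> (C // r,)
    a = 1.0 / (1.0 + np.exp(-(w2 @ e)))    # FC + sigmoid -> per-channel gates in (0, 1)
    return x * a[:, None, None]            # scale each channel by its gate

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8)) * 0.1     # reduction ratio r = 4 (illustrative)
w2 = rng.standard_normal((8, 2)) * 0.1
y = se_block(x, w1, w2)
```

Because each gate lies strictly between 0 and 1, the block can only attenuate channels, never amplify them; training learns which channels to keep.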

3.
J Biomed Opt; 27(11), 2022 Nov.
Article in English | MEDLINE | ID: mdl-36329004

ABSTRACT

Significance: Oral cancer is one of the most prevalent cancers, especially in middle- and low-income countries such as India. Automatic segmentation of oral cancer images, a significant task in oral cancer image analysis, can improve the diagnostic workflow. Despite the remarkable success of deep-learning networks in medical segmentation, they rarely provide uncertainty quantification for their output. Aim: We aim to estimate uncertainty in a deep-learning approach to semantic segmentation of oral cancer images and to improve the accuracy and reliability of predictions. Approach: This work introduced a UNet-based Bayesian deep-learning (BDL) model that segments potentially malignant and malignant lesion areas in the oral cavity and quantifies uncertainty in its predictions. We also developed an efficient variant that is almost six times smaller and twice as fast at inference as the original UNet. The dataset in this study was collected using our customized screening platform and was annotated by oral oncology specialists. Results: The proposed approach achieved good segmentation performance as well as good uncertainty-estimation performance. In the experiments, we observed an improvement in pixel accuracy and mean intersection over union when uncertain pixels were removed, reflecting that the model's less accurate predictions fall in uncertain areas that may need more attention and further inspection. The experiments also showed that, with some performance compromises, the efficient model reduced computation time and model size, which expands the potential for implementation on portable devices used in resource-limited settings. Conclusions: Our study demonstrates that the UNet-based BDL model not only performs potentially malignant and malignant oral lesion segmentation but also provides informative pixel-level uncertainty estimation. With this extra uncertainty information, the accuracy and reliability of the model's predictions can be improved.
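The evaluation idea described above, dropping pixels whose predictive uncertainty exceeds a threshold and recomputing pixel accuracy on what remains, can be sketched with toy arrays standing in for the model's per-pixel predictions and uncertainty values:

```python
import numpy as np

def accuracy_after_filtering(pred, truth, uncertainty, threshold):
    """Pixel accuracy computed only on pixels whose uncertainty is below threshold."""
    keep = uncertainty <= threshold        # retain only the confident pixels
    if not keep.any():
        return float("nan")
    return float((pred[keep] == truth[keep]).mean())

# Toy flattened segmentation masks: the two misclassified pixels are also
# the most uncertain ones, mirroring the behavior the abstract reports.
truth = np.array([0, 0, 1, 1, 1, 0])
pred  = np.array([0, 1, 1, 1, 0, 0])
unc   = np.array([0.1, 0.9, 0.2, 0.1, 0.8, 0.1])

full = float((pred == truth).mean())                                   # 4/6
filtered = accuracy_after_filtering(pred, truth, unc, threshold=0.5)   # 4/4
```

When errors concentrate in uncertain regions, filtering raises accuracy on the retained pixels, which is exactly the improvement in pixel accuracy and mean IoU the study observed.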


Subject(s)
Mouth Neoplasms; Semantics; Humans; Uncertainty; Bayes Theorem; Reproducibility of Results; Neural Networks, Computer; Image Processing, Computer-Assisted/methods; Mouth Neoplasms/diagnostic imaging
4.
J Biomed Opt; 27(1), 2022 Jan.
Article in English | MEDLINE | ID: mdl-35023333

ABSTRACT

SIGNIFICANCE: Convolutional neural networks (CNNs) show potential for automated classification of different cancer lesions. However, their lack of interpretability and explainability makes CNNs difficult to understand. Furthermore, a CNN may incorrectly concentrate on areas surrounding the salient object rather than focusing its attention directly on the object to be recognized, as the network has no incentive to focus solely on the correct subjects. This undermines the reliability of CNNs, especially for biomedical applications. AIM: Develop a deep-learning training approach that provides understandable predictions and directly guides the network to concentrate its attention on, and accurately delineate, cancerous regions of the image. APPROACH: We utilized Selvaraju et al.'s gradient-weighted class activation mapping (Grad-CAM) to inject interpretability and explainability into CNNs. We adopted a two-stage training process with data augmentation techniques and Li et al.'s guided attention inference network (GAIN) to train on images captured using our customized mobile oral screening devices. The GAIN architecture consists of three streams of network training: a classification stream, an attention mining stream, and a bounding box stream. By adopting the GAIN training architecture, we jointly optimized the classification and segmentation accuracy of our CNN by treating the attention maps as reliable priors for developing attention maps with more complete and accurate segmentation. RESULTS: The network's attention maps help us understand what the network focuses on during its decision-making process. The results also show that the proposed method guides the trained neural network to highlight and focus its attention on the correct lesion areas when making a decision, rather than on relevant yet incorrect regions. CONCLUSIONS: We demonstrate the effectiveness of our approach for more interpretable and reliable classification of oral potentially malignant and malignant lesions.
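The Grad-CAM computation that this training approach builds on can be sketched in a few lines: weight each feature map by its spatially averaged gradient, sum over channels, and apply a ReLU. The toy arrays below stand in for a network's last-convolution activations and their gradients with respect to a class score.

```python
import numpy as np

def grad_cam(activations, gradients):
    """activations, gradients: arrays of shape (C, H, W). Returns an (H, W) map."""
    weights = gradients.mean(axis=(1, 2))              # global-average-pool the gradients
    cam = np.tensordot(weights, activations, axes=1)   # weighted sum over channels
    cam = np.maximum(cam, 0.0)                         # ReLU keeps positive contributions
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalize to [0, 1] for overlay
    return cam

acts = np.ones((3, 4, 4))
grads = np.array([1.0, -1.0, 2.0])[:, None, None] * np.ones((3, 4, 4))
cam = grad_cam(acts, grads)
```

GAIN then goes one step further than this sketch: the attention map is fed back into training as a supervised signal rather than used only for post-hoc visualization.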


Subject(s)
Deep Learning; Mouth Neoplasms; Attention; Humans; Mouth Neoplasms/diagnostic imaging; Neural Networks, Computer; Reproducibility of Results
5.
Biomed Opt Express; 12(10): 6422-6430, 2021 Oct 01.
Article in English | MEDLINE | ID: mdl-34745746

ABSTRACT

In medical imaging, deep learning-based solutions have achieved state-of-the-art performance. However, reliability restricts the integration of deep learning into practical medical workflows, since conventional deep-learning frameworks cannot quantitatively assess model uncertainty. In this work, we propose to address this shortcoming with a Bayesian deep network capable of estimating uncertainty, used to assess the reliability of oral cancer image classification. We evaluate the model on a large intraoral cheek mucosa image dataset captured from a high-risk population using our customized device, showing that meaningful uncertainty information can be produced. In addition, our experiments show improved accuracy through uncertainty-informed referral. The accuracy on retained cases reaches roughly 90% when referring either 10% of all cases or those whose uncertainty value is greater than 0.3; performance can be further improved by referring more patients. The experiments show the model is capable of identifying difficult cases that need further inspection.
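The uncertainty-informed referral scheme above is simple to express: cases whose uncertainty exceeds a threshold go to a specialist, and accuracy is reported on the retained cases. A sketch with toy values (the 0.3 threshold echoes the abstract; the correctness/uncertainty arrays are illustrative):

```python
import numpy as np

def retained_accuracy(correct, uncertainty, threshold):
    """correct: bool per case; uncertainty: score per case. Accuracy on kept cases."""
    keep = uncertainty <= threshold
    frac_referred = 1.0 - float(keep.mean())   # fraction sent for expert review
    acc = float(correct[keep].mean()) if keep.any() else float("nan")
    return acc, frac_referred

# Toy cohort: the model's errors are also its most uncertain cases.
unc     = np.array([0.1, 0.2, 0.9, 0.4, 0.8, 0.1, 0.2, 0.7, 0.1, 0.2])
correct = np.array([1,   1,   0,   1,   0,   1,   1,   0,   1,   1], dtype=bool)

acc_all = float(correct.mean())                                       # 0.7
acc_kept, frac_referred = retained_accuracy(correct, unc, threshold=0.3)
```

Raising the threshold trades referral burden for retained-case accuracy, which is the "refer more patients, improve further" behavior the study reports.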

6.
J Biomed Opt; 26(10), 2021 Oct.
Article in English | MEDLINE | ID: mdl-34689442

ABSTRACT

SIGNIFICANCE: Early detection of oral cancer is vital for high-risk patients, and machine learning-based automatic classification is ideal for disease screening. However, current datasets collected from high-risk populations are unbalanced, which often has detrimental effects on classification performance. AIM: To reduce the class bias caused by data imbalance. APPROACH: We collected 3851 polarized white-light cheek mucosa images using our customized oral cancer screening device. We used weight balancing, data augmentation, undersampling, focal loss, and ensemble methods to improve the neural-network performance of oral cancer image classification on the imbalanced multi-class datasets captured from high-risk populations during oral cancer screening in low-resource settings. RESULTS: By applying both data-level and algorithm-level approaches in the deep-learning training process, the performance of the minority classes, which were difficult to distinguish at the outset, improved. The accuracy of the "premalignancy" class also increased, which is ideal for screening applications. CONCLUSIONS: Experimental results show that the class bias induced by imbalanced oral cancer image datasets can be reduced using both data-level and algorithm-level methods. Our study may provide an important basis for understanding the influence of unbalanced datasets on oral cancer deep-learning classifiers and how to mitigate it.
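Among the algorithm-level methods listed above, focal loss counters imbalance by down-weighting easy, well-classified examples with a (1 - p)^gamma factor so training gradient is dominated by hard minority-class cases. A binary-form sketch in plain NumPy (the gamma and alpha defaults are the commonly used values, not necessarily this study's settings):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """p: predicted probability of class 1; y: true labels in {0, 1}."""
    pt = np.where(y == 1, p, 1.0 - p)          # probability assigned to the true class
    w = np.where(y == 1, alpha, 1.0 - alpha)   # class-balancing weight
    return -w * (1.0 - pt) ** gamma * np.log(pt)

# An easy example (pt = 0.9) contributes far less loss than a hard one (pt = 0.1),
# so optimization concentrates on the examples the model still gets wrong.
easy = focal_loss(np.array([0.9]), np.array([1]))[0]
hard = focal_loss(np.array([0.1]), np.array([1]))[0]
```

With gamma = 0 and alpha = 0.5 this reduces (up to a constant) to ordinary cross-entropy, which is why gamma is described as a focusing parameter.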


Subject(s)
Mouth Neoplasms; Neural Networks, Computer; Algorithms; Early Detection of Cancer; Humans; Machine Learning; Mouth Neoplasms/diagnostic imaging
7.
Cancers (Basel); 13(14), 2021 Jul 17.
Article in English | MEDLINE | ID: mdl-34298796

ABSTRACT

Non-invasive strategies that can identify oral malignant lesions and dysplastic oral potentially-malignant lesions (OPML) are necessary for cancer screening and long-term surveillance. Optical coherence tomography (OCT) can serve as a rapid, real-time, and non-invasive imaging method for frequent patient surveillance. Here, we report the validation of a portable, robust OCT device in 232 patients (lesions: 347) in different clinical settings. The device, deployed with algorithm-based automated diagnosis, showed efficacy in delineating benign and normal lesions (n = 151), OPML (n = 121), and malignant lesions (n = 75) in community and tertiary care settings. This study showed that OCT images analyzed by an automated image-processing algorithm could distinguish dysplastic-OPML and malignant lesions with sensitivities of 95% and 93%, respectively. Furthermore, we explored the ability of multiple (n = 14) artificial neural network (ANN)-based feature extraction techniques to delineate high-grade OPML (moderate/severe dysplasia). A support vector machine (SVM) model built over the ANN features delineated high-grade dysplasia with a sensitivity of 83%, which, in turn, can be employed to triage patients for tertiary care. The study provides evidence for the utility of this robust, low-cost OCT instrument as a point-of-care device in resource-constrained settings and for the potential clinical application of the device in screening and surveillance of oral cancer.
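The headline numbers above are per-class sensitivities (recall): the fraction of lesions of a given class that the automated algorithm flags correctly. A small sketch of the metric, with toy labels standing in for the study's predictions and a hypothetical class coding:

```python
import numpy as np

def sensitivity(y_true, y_pred, positive):
    """Per-class sensitivity = true positives / actual positives for that class."""
    actual = y_true == positive
    return float((y_pred[actual] == positive).mean())

# Illustrative coding: 0 = benign/normal, 1 = dysplastic-OPML, 2 = malignant.
y_true = np.array([0, 0, 1, 1, 1, 1, 2, 2, 2, 0])
y_pred = np.array([0, 1, 1, 1, 1, 0, 2, 2, 2, 0])

sens_opml = sensitivity(y_true, y_pred, positive=1)       # 3 of 4 OPML detected
sens_malignant = sensitivity(y_true, y_pred, positive=2)  # 3 of 3 detected
```

Sensitivity is the right headline metric for a triage device, since a missed high-grade lesion (a false negative) is far costlier than an extra referral.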

8.
J Biomed Opt; 26(6), 2021 Jun.
Article in English | MEDLINE | ID: mdl-34164967

ABSTRACT

SIGNIFICANCE: Oral cancer is among the most common cancers globally, especially in low- and middle-income countries. Early detection is the most effective way to reduce the mortality rate. Deep learning-based cancer image classification models usually need to be hosted on a computing server, but internet connections are unreliable for screening in low-resource settings. AIM: To develop a mobile-based dual-mode image classification method and customized Android application for point-of-care oral cancer detection. APPROACH: The dataset used in our study was captured from 5025 patients with our customized dual-modality mobile oral screening devices. We trained an efficient MobileNet network with focal loss and converted the model into TensorFlow Lite format. The finalized lite-format model is ∼16.3 MB, ideal for operation on a smartphone platform. We developed an easy-to-use Android smartphone application that implements the mobile-based dual-modality image classification approach to distinguish oral potentially malignant and malignant images from normal/benign images. RESULTS: We investigated the accuracy and running speed on a cost-effective smartphone computing platform. It takes ∼300 ms to process one image pair with a Moto G5 Android smartphone. We tested the proposed method on a standalone dataset and achieved 81% accuracy for distinguishing normal/benign lesions from clinically suspicious lesions, using a gold standard of clinical impression based on the review of images by oral specialists. CONCLUSIONS: Our study demonstrates the effectiveness of a mobile-based approach for oral cancer screening in low-resource settings.


Subject(s)
Mouth Neoplasms; Point-of-Care Systems; Early Detection of Cancer; Humans; Mouth Neoplasms/diagnostic imaging; Sensitivity and Specificity; Smartphone