Results 1 - 20 of 23
1.
Cancers (Basel) ; 16(5)2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38473348

ABSTRACT

Oral cancer, a pervasive and rapidly growing malignant disease, poses a significant global health concern. Early and accurate diagnosis is pivotal for improving patient outcomes. Automatic diagnosis methods based on artificial intelligence have shown promising results in the oral cancer field, but their accuracy still needs to improve for realistic diagnostic scenarios. Vision Transformers (ViT) have recently outperformed CNN models in many computer vision benchmark tasks. This study explores the effectiveness of the Vision Transformer and the Swin Transformer, two cutting-edge variants of the transformer architecture, for mobile-based oral cancer image classification. The pre-trained Swin Transformer model achieved 88.7% accuracy in the binary classification task, outperforming the ViT model by 2.3%, while the conventional convolutional network models VGG19 and ResNet50 achieved 85.2% and 84.5% accuracy, respectively. Our experiments demonstrate that these transformer-based architectures outperform traditional convolutional neural networks in oral cancer image classification and underscore the potential of the ViT and the Swin Transformer in advancing the state of the art in oral cancer image analysis.
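The architectural difference the abstract highlights — transformers process an image as a sequence of patch tokens rather than with sliding convolutions — can be illustrated by the first step of a ViT pipeline. A minimal pure-Python sketch of patch extraction (the 224-pixel input and patch size 16 are standard ViT-Base assumptions, not details given in the abstract):

```python
def patchify(image, patch=16):
    """Split an H x W image (list of lists) into non-overlapping
    patch x patch tiles, each flattened to a 1-D token vector --
    the first step of a Vision Transformer's patch embedding."""
    h, w = len(image), len(image[0])
    tokens = []
    for top in range(0, h - patch + 1, patch):
        for left in range(0, w - patch + 1, patch):
            tile = [image[top + r][left + c]
                    for r in range(patch) for c in range(patch)]
            tokens.append(tile)
    return tokens

# A 224 x 224 input at patch size 16 yields 14 * 14 = 196 tokens,
# the sequence length a standard ViT-Base processes.
img = [[0] * 224 for _ in range(224)]
tokens = patchify(img)
print(len(tokens), len(tokens[0]))  # 196 256
```

In a real model each flattened tile is then linearly projected and fed, with positional embeddings, into the transformer encoder; Swin additionally restricts attention to shifted local windows.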

3.
Clin Oral Investig ; 27(12): 7575-7581, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37870594

ABSTRACT

OBJECTIVES: Oral cancer is a leading cause of morbidity and mortality. A screening and mobile health (mHealth)-based approach facilitates remote early detection in resource-limited settings. Recent advances in eHealth technology have enabled remote monitoring and triage to detect oral cancer in its early stages. Although studies have evaluated the diagnostic efficacy of remote specialists, to our knowledge none have evaluated their consistency. The aim of this study was to evaluate interobserver agreement between specialists through telemedicine systems in real-world settings using store-and-forward technology. MATERIALS AND METHODS: Two remote specialists independently diagnosed clinical images (n = 822) from image archives. The onsite specialist diagnosed the same participants using conventional visual examination, and the results were tabulated. The diagnostic accuracy of the two remote specialists was compared with that of the onsite specialist. Images with histopathological confirmation were compared with the diagnoses of the onsite specialist and the two remote specialists. RESULTS: There was moderate agreement between the two remote specialists (κ = 0.682) and between the onsite specialist and the two remote specialists (κ = 0.629) in the diagnosis of oral lesions. The sensitivity and specificity of remote specialist 1 were 92.7% and 83.3%, respectively, and those of remote specialist 2 were 95.8% and 60%, respectively, each compared with histopathology. CONCLUSION: The diagnostic accuracy of the two remote specialists was optimal, suggesting that store-and-forward technology and telehealth can be effective tools for triage and monitoring of patients. CLINICAL RELEVANCE: Telemedicine is a good tool for triage and enables faster patient care in real-world settings.
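The moderate agreement reported (κ = 0.682) is Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance. A minimal sketch of the computation for two raters (the example labels are illustrative, not the study's data):

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters over categorical labels:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    labels = set(rater1) | set(rater2)
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    p_chance = sum((rater1.count(l) / n) * (rater2.count(l) / n)
                   for l in labels)
    return (p_observed - p_chance) / (1 - p_chance)

r1 = ["lesion", "lesion", "normal", "normal", "lesion", "normal"]
r2 = ["lesion", "normal", "normal", "normal", "lesion", "normal"]
print(round(cohens_kappa(r1, r2), 3))  # 0.667
```

Values around 0.6-0.8 are conventionally read as "substantial/moderate" agreement, which is how the study characterizes its result.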


Assuntos
Doenças da Boca , Neoplasias Bucais , Telemedicina , Humanos , Variações Dependentes do Observador , Neoplasias Bucais/diagnóstico , Neoplasias Bucais/patologia , Telemedicina/métodos , Tecnologia
4.
J Biomed Opt ; 28(8): 082809, 2023 08.
Article in English | MEDLINE | ID: mdl-37483565

ABSTRACT

Significance: India has one of the highest rates of oral squamous cell carcinoma (OSCC) in the world, with an incidence of 15 per 100,000 and more than 70,000 deaths per year. The problem is exacerbated by a lack of medical infrastructure and routine screening, especially in rural areas. New technologies for oral cancer detection and timely treatment at the point of care are urgently needed. Aim: Our study aimed to use a hand-held, smartphone-coupled intraoral imaging device, previously investigated for autofluorescence (auto-FL) diagnostics, adapted here for guiding and monitoring photodynamic therapy (PDT) using 5-aminolevulinic acid (ALA)-induced protoporphyrin IX (PpIX) fluorescence (FL). Approach: A total of 12 patients with 14 buccal mucosal lesions having moderately/well-differentiated micro-invasive OSCC lesions (<2 cm diameter and <5 mm depth) were systemically administered (in oral solution) three doses of 20 mg/kg ALA (total 60 mg/kg). Lesion-site PpIX and auto-FL were imaged using the multichannel FL and polarized white-light oral cancer imaging probe before/after ALA administration and after light delivery (fractionated, total 100 J/cm2 of 635 nm red LED light). Results: The handheld device was conducive to accessing lesion-site images in the oral cavity. Segmentation of ratiometric images, in which PpIX FL is mapped relative to auto-FL, enabled improved demarcation of lesion boundaries relative to PpIX alone. A relative FL (R-value) threshold of 1.4 was found to segment lesion-site PpIX production among the patients with mild to severe dysplasia and malignancy. The segmented lesion size correlated well with ultrasound findings. Lesions for which the R-value was >1.65 at the time of treatment were associated with successful outcomes. Conclusion: These results indicate the utility of a low-cost, handheld intraoral imaging probe for image-guided PDT and treatment monitoring, while also laying the groundwork for an integrated approach combining cancer screening and treatment with the same hardware.
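The ratiometric segmentation described — PpIX fluorescence mapped relative to autofluorescence and thresholded at R = 1.4 — can be sketched per pixel. A pure-Python illustration (the toy intensity arrays and the small epsilon guard against division by zero are assumptions, not the paper's implementation):

```python
def segment_lesion(ppix, auto_fl, r_threshold=1.4, eps=1e-6):
    """Mark pixels where the PpIX-to-autofluorescence ratio (R-value)
    exceeds the threshold; R = PpIX / auto-FL per pixel."""
    mask = []
    for p_row, a_row in zip(ppix, auto_fl):
        mask.append([p / (a + eps) > r_threshold
                     for p, a in zip(p_row, a_row)])
    return mask

ppix    = [[10.0, 80.0], [90.0, 20.0]]
auto_fl = [[50.0, 50.0], [50.0, 50.0]]
print(segment_lesion(ppix, auto_fl))  # [[False, True], [True, False]]
```

Normalizing PpIX by auto-FL cancels illumination and tissue-geometry variation that would confound a threshold on raw PpIX intensity alone.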


Subjects
Squamous Cell Carcinoma, Mouth Neoplasms, Photochemotherapy, Humans, Aminolevulinic Acid/therapeutic use, Smartphone, Mouth Neoplasms/pathology, Photochemotherapy/methods, Protoporphyrins/metabolism, Photosensitizing Agents/therapeutic use
5.
Cancers (Basel) ; 15(10)2023 May 16.
Article in English | MEDLINE | ID: mdl-37345112

ABSTRACT

Efforts are underway to improve the accuracy of non-specialist screening for oral cancer (OC) risk, yet better screening will only translate into improved outcomes if at-risk individuals comply with specialist referral. Most individuals from low-resource, minority, and underserved (LRMU) populations fail to complete a specialist referral for OC risk. The goal was to evaluate the impact of a novel approach on specialist referral compliance in individuals with a positive OC risk screening outcome. A total of 60 LRMU subjects who had screened positive for increased OC risk were recruited and given the choice of referral for an in-person (20 subjects) or a telehealth (40 subjects) specialist visit. Referral compliance was tracked weekly over 6 months. Compliance was 30% in the in-person group, and 83% in the telehealth group. Approximately 83-85% of subjects from both groups who had complied with the first specialist referral complied with a second follow-up in-person specialist visit. Overall, 72.5% of subjects who had chosen a remote first specialist visit had entered into the continuum of care by the study end, vs. 25% of individuals in the in-person specialist group. A two-step approach that uses telehealth to overcome barriers may improve specialist referral compliance in LRMU individuals with increased OC risk.

6.
Res Sq ; 2023 Apr 05.
Article in English | MEDLINE | ID: mdl-37066209

ABSTRACT

Oral cancer is one of the most common causes of morbidity and mortality. A screening and mobile health (mHealth)-based approach facilitates remote early detection of oral cancer in resource-constrained settings. Emerging eHealth technology has extended specialist reach into rural areas, enabling remote monitoring and triage to downstage oral cancer. Though the diagnostic accuracy of remote specialists has been evaluated, to the best of our knowledge there are no studies evaluating the consistency among remote specialists. The purpose of this study was to evaluate interobserver agreement between specialists through telemedicine systems in real-world settings using store-and-forward technology. Two remote specialists independently diagnosed clinical images from image repositories, and their diagnostic accuracy was compared with the onsite specialist and with histopathological diagnosis when available. There was moderate agreement between the two remote specialists (κ = 0.682) and between the onsite specialist and the two remote specialists (κ = 0.629) in diagnosing oral lesions. The sensitivity and specificity of remote specialist 1 were 92.7% and 83.3%, whereas those of remote specialist 2 were 95.8% and 60%, respectively, compared to histopathology. Store-and-forward technology and telecare can be effective tools for triage and surveillance of patients.

7.
Cancers (Basel) ; 15(5)2023 Feb 23.
Article in English | MEDLINE | ID: mdl-36900210

ABSTRACT

Convolutional neural networks have demonstrated excellent performance in oral cancer detection and classification. However, the end-to-end learning strategy makes CNNs hard to interpret, and it can be challenging to fully understand their decision-making procedure. Reliability is another significant challenge for CNN-based approaches. In this study, we proposed a neural network called the attention branch network (ABN), which combines visual explanation and attention mechanisms to improve recognition performance and interpret decision-making simultaneously. We also embedded expert knowledge into the network by having human experts manually edit the attention maps for the attention mechanism. Our experiments showed that the ABN performs better than the original baseline network, and introducing Squeeze-and-Excitation (SE) blocks increased cross-validation accuracy further. Furthermore, some previously misclassified cases were correctly recognized after the attention maps were manually edited. Cross-validation accuracy increased from 0.846 to 0.875 with the ABN (ResNet18 as baseline), 0.877 with SE-ABN, and 0.903 after embedding expert knowledge. The proposed method provides an accurate, interpretable, and reliable oral cancer computer-aided diagnosis system through visual explanation, attention mechanisms, and expert knowledge embedding.

8.
J Biomed Opt ; 27(11)2022 11.
Article in English | MEDLINE | ID: mdl-36329004

ABSTRACT

Significance: Oral cancer is one of the most prevalent cancers, especially in middle- and low-income countries such as India. Automatic segmentation of oral cancer images can improve the diagnostic workflow and is a significant task in oral cancer image analysis. Despite the remarkable success of deep-learning networks in medical segmentation, they rarely provide uncertainty quantification for their output. Aim: We aim to estimate uncertainty in a deep-learning approach to semantic segmentation of oral cancer images and to improve the accuracy and reliability of predictions. Approach: This work introduced a UNet-based Bayesian deep-learning (BDL) model that segments potentially malignant and malignant lesion areas in the oral cavity and quantifies uncertainty in its predictions. We also developed an efficient variant that is almost six times smaller and about twice as fast at inference as the original UNet. The dataset in this study was collected using our customized screening platform and was annotated by oral oncology specialists. Results: The proposed approach achieved good segmentation performance as well as good uncertainty estimation performance. In the experiments, we observed an improvement in pixel accuracy and mean intersection over union when uncertain pixels were removed. This result reflects that the model gives less accurate predictions in uncertain areas, which may need more attention and further inspection. The experiments also showed that, with some performance compromises, the efficient model reduced computation time and model size, which expands the potential for implementation on portable devices used in resource-limited settings. Conclusions: Our study demonstrates that the UNet-based BDL model not only performs potentially malignant and malignant oral lesion segmentation but also provides informative pixel-level uncertainty estimation. With this extra uncertainty information, the accuracy and reliability of the model's predictions can be improved.
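The reported effect — pixel accuracy improves when uncertain pixels are excluded — can be reproduced in miniature: compute accuracy over all pixels, then only over pixels whose predictive uncertainty falls below a cutoff. A toy sketch (the values and the 0.5 cutoff are illustrative assumptions, not the paper's numbers):

```python
def pixel_accuracy(preds, labels, uncertainty=None, max_u=None):
    """Fraction of correctly predicted pixels, optionally keeping only
    pixels whose uncertainty is below max_u (uncertainty-aware filtering)."""
    kept = [(p, l) for i, (p, l) in enumerate(zip(preds, labels))
            if uncertainty is None or uncertainty[i] < max_u]
    return sum(p == l for p, l in kept) / len(kept)

preds  = [1, 1, 0, 1, 0, 0]
labels = [1, 0, 0, 1, 1, 0]
unc    = [0.1, 0.8, 0.2, 0.1, 0.9, 0.3]   # high where the model erred
print(pixel_accuracy(preds, labels))                  # ~0.667 over all pixels
print(pixel_accuracy(preds, labels, unc, max_u=0.5))  # 1.0 on confident pixels
```

When uncertainty correlates with error, as a well-calibrated Bayesian model's should, filtering confident pixels raises accuracy and flags the rest for human review.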


Subjects
Mouth Neoplasms, Semantics, Humans, Uncertainty, Bayes Theorem, Reproducibility of Results, Neural Networks (Computer), Computer-Assisted Image Processing/methods, Mouth Neoplasms/diagnostic imaging
9.
Nat Biomed Eng ; 6(8): 979-991, 2022 08.
Article in English | MEDLINE | ID: mdl-35986185

ABSTRACT

Sensitive and specific blood-based assays for the detection of pulmonary and extrapulmonary tuberculosis would reduce mortality associated with missed diagnoses, particularly in children. Here we report a nanoparticle-enhanced immunoassay read by dark-field microscopy that detects two Mycobacterium tuberculosis virulence factors (the glycolipid lipoarabinomannan and its carrier protein) on the surface of circulating extracellular vesicles. In a cohort study of 147 hospitalized and severely immunosuppressed children living with HIV, the assay detected 58 of the 78 (74%) cases of paediatric tuberculosis, 48 of the 66 (73%) cases that were missed by microbiological assays, and 8 out of 10 (80%) cases undiagnosed during the study. It also distinguished tuberculosis from latent-tuberculosis infections in non-human primates. We adapted the assay to make it portable and operable by a smartphone. With further development, the assay may facilitate the detection of tuberculosis at the point of care, particularly in resource-limited settings.


Subjects
Extracellular Vesicles, Mycobacterium tuberculosis, Tuberculosis, Animals, Cohort Studies, Humans, Tuberculosis/diagnosis, Virulence Factors
10.
Sci Rep ; 12(1): 14283, 2022 08 22.
Article in English | MEDLINE | ID: mdl-35995987

ABSTRACT

Early detection of oral cancer in low-resource settings necessitates a point-of-care screening tool that empowers frontline health workers (FHWs). This study was conducted to validate the accuracy of a convolutional neural network (CNN)-enabled mHealth device deployed with FHWs for delineation of suspicious oral lesions (malignant/potentially malignant disorders). The effectiveness of the device was tested in tertiary-care hospitals and low-resource settings in India. The subjects were screened independently, either by FHWs alone or along with specialists. All the subjects were also remotely evaluated by oral cancer specialists. The program screened 5025 subjects (images: 32,128), with 95% (n = 4728) receiving telediagnosis. Among the 16% (n = 752) assessed by onsite specialists, 20% (n = 102) underwent biopsy. Simple and complex CNNs were integrated into the mobile phone and cloud, respectively. The onsite specialist diagnosis showed high sensitivity (94%) compared to histology, while telediagnosis showed high accuracy in comparison with onsite specialists (sensitivity: 95%; specificity: 84%). FHWs, however, identified suspicious lesions with lower sensitivity (60%) compared with telediagnosis. The phone-integrated CNN (MobileNet) accurately delineated lesions (n = 1416; sensitivity: 82%) and the cloud-based CNN (VGG19) had higher accuracy (sensitivity: 87%) with telediagnosis as the reference standard. The results of the study suggest that an automated mHealth-enabled, dual-image system is a useful triaging tool that empowers FHWs for oral cancer screening in low-resource settings.


Subjects
Cell Phone, Deep Learning, Mouth Neoplasms, Telemedicine, Early Detection of Cancer/methods, Humans, Mouth Neoplasms/diagnosis, Mouth Neoplasms/pathology, Point-of-Care Systems, Telemedicine/methods
11.
J Biomed Opt ; 27(1)2022 01.
Article in English | MEDLINE | ID: mdl-35023333

ABSTRACT

SIGNIFICANCE: Convolutional neural networks (CNNs) show potential for automated classification of different cancer lesions. However, their lack of interpretability and explainability makes CNNs hard to understand. Furthermore, a CNN may concentrate on areas surrounding the salient object rather than focusing its attention directly on the object to be recognized, as the network has no incentive to focus solely on the correct subjects. This inhibits the reliability of CNNs, especially for biomedical applications. AIM: To develop a deep learning training approach that makes predictions understandable and directly guides the network to concentrate its attention on, and accurately delineate, cancerous regions of the image. APPROACH: We utilized Selvaraju et al.'s gradient-weighted class activation mapping to inject interpretability and explainability into CNNs. We adopted a two-stage training process with data augmentation techniques and Li et al.'s guided attention inference network (GAIN) to train on images captured using our customized mobile oral screening devices. The GAIN architecture consists of three streams of network training: a classification stream, an attention mining stream, and a bounding box stream. By adopting the GAIN training architecture, we jointly optimized the classification and segmentation accuracy of our CNN, treating the attention maps as reliable priors to develop maps with more complete and accurate segmentation. RESULTS: The network's attention map helps us understand what the network focuses on during its decision-making process. The results also show that the proposed method guides the trained neural network to highlight and focus its attention on the correct lesion areas in the images when making a decision, rather than on relevant yet incorrect regions. CONCLUSIONS: We demonstrate the effectiveness of our approach for more interpretable and reliable classification of oral potentially malignant and malignant lesions.


Subjects
Deep Learning, Mouth Neoplasms, Attention, Humans, Mouth Neoplasms/diagnostic imaging, Neural Networks (Computer), Reproducibility of Results
12.
Biomed Opt Express ; 12(10): 6422-6430, 2021 Oct 01.
Article in English | MEDLINE | ID: mdl-34745746

ABSTRACT

In medical imaging, deep learning-based solutions have achieved state-of-the-art performance. However, reliability restricts the integration of deep learning into practical medical workflows, since conventional deep learning frameworks cannot quantitatively assess model uncertainty. In this work, we address this shortcoming by utilizing a Bayesian deep network capable of estimating uncertainty to assess oral cancer image classification reliability. We evaluate the model using a large intraoral cheek mucosa image dataset captured with our customized device from a high-risk population, showing that meaningful uncertainty information can be produced. In addition, our experiments show improved accuracy through uncertainty-informed referral: the accuracy on retained data reaches roughly 90% when referring either 10% of all cases or those cases whose uncertainty value is greater than 0.3, and performance can be further improved by referring more patients. The experiments show the model is capable of identifying difficult cases needing further inspection.
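Uncertainty-informed referral, as described here, amounts to ranking cases by predictive uncertainty and sending the most uncertain fraction to a specialist while retaining the rest for automated classification. A minimal sketch (the case records and the 20% referral fraction are illustrative assumptions):

```python
def refer_most_uncertain(cases, referral_fraction=0.10):
    """Sort cases by model uncertainty and refer the most uncertain
    fraction for specialist review; return (retained, referred)."""
    ranked = sorted(cases, key=lambda c: c["uncertainty"])
    n_keep = int(len(ranked) * (1 - referral_fraction))
    return ranked[:n_keep], ranked[n_keep:]

cases = [{"id": i, "uncertainty": u}
         for i, u in enumerate([0.05, 0.42, 0.11, 0.28, 0.73,
                                0.09, 0.19, 0.33, 0.02, 0.61])]
retained, referred = refer_most_uncertain(cases, referral_fraction=0.2)
print([c["id"] for c in referred])  # the two most uncertain cases: [9, 4]
```

The alternative policy the abstract mentions — referring every case above a fixed uncertainty cutoff (0.3) — is the same idea with a threshold instead of a quota; both trade referral workload against accuracy on the retained set.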

13.
J Biomed Opt ; 26(10)2021 10.
Article in English | MEDLINE | ID: mdl-34689442

ABSTRACT

SIGNIFICANCE: Early detection of oral cancer is vital for high-risk patients, and machine learning-based automatic classification is ideal for disease screening. However, current datasets collected from high-risk populations are unbalanced, which often has detrimental effects on classification performance. AIM: To reduce the class bias caused by data imbalance. APPROACH: We collected 3851 polarized white-light cheek mucosa images using our customized oral cancer screening device. We used weight balancing, data augmentation, undersampling, focal loss, and ensemble methods to improve neural network performance on oral cancer image classification with the imbalanced multi-class datasets captured from high-risk populations during oral cancer screening in low-resource settings. RESULTS: By applying both data-level and algorithm-level approaches to the deep learning training process, the performance of the minority classes, which were difficult to distinguish at the outset, was improved. The accuracy of the "premalignancy" class also increased, which is ideal for screening applications. CONCLUSIONS: Experimental results show that the class bias induced by imbalanced oral cancer image datasets can be reduced using both data-level and algorithm-level methods. Our study may provide an important basis for understanding the influence of unbalanced datasets on oral cancer deep learning classifiers and how to mitigate it.
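Focal loss, one of the algorithm-level remedies listed, down-weights well-classified examples so hard (often minority-class) examples dominate the gradient; with γ = 0 it reduces to ordinary cross-entropy. A minimal scalar sketch of the formula (the probabilities below are illustrative):

```python
import math

def focal_loss(p_true, gamma=2.0, alpha=1.0):
    """Focal loss on the probability assigned to the true class:
    -alpha * (1 - p)^gamma * log(p). gamma = 0 gives cross-entropy."""
    return -alpha * (1 - p_true) ** gamma * math.log(p_true)

# An easy example (p = 0.9) is strongly down-weighted relative to
# a hard one (p = 0.3) as gamma grows.
for p in (0.9, 0.3):
    print(round(focal_loss(p, gamma=0.0), 4),
          round(focal_loss(p, gamma=2.0), 4))
```

The α factor is where per-class weight balancing plugs in: rare classes get a larger α so their examples count more, complementing the (1 - p)^γ modulation.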


Subjects
Mouth Neoplasms, Neural Networks (Computer), Algorithms, Early Detection of Cancer, Humans, Machine Learning, Mouth Neoplasms/diagnostic imaging
14.
Opt Lett ; 46(11): 2722-2725, 2021 Jun 01.
Article in English | MEDLINE | ID: mdl-34061097

ABSTRACT

In this Letter, a microLED-based chromatic confocal microscope with a virtual confocal slit is proposed and demonstrated for three-dimensional (3D) profiling without any mechanical scanning or external light source. In the proposed method, a micro-scale light-emitting diode (microLED) panel works as a point source array to achieve lateral scanning. Axial scanning is realized through the chromatic aberration of an aspherical objective. A virtual pinhole technique is utilized to improve the contrast and precision of depth reconstruction. The system performance has been demonstrated with a diamond-turned copper sample and onion epidermis. The experimental results show that the microLED panel could be a potential solution for portable 3D confocal microscopy. Several considerations and prospects are proposed for future microLED requirements in confocal imaging.

15.
J Biomed Opt ; 26(6)2021 06.
Article in English | MEDLINE | ID: mdl-34164967

ABSTRACT

SIGNIFICANCE: Oral cancer is among the most common cancers globally, especially in low- and middle-income countries. Early detection is the most effective way to reduce the mortality rate. Deep learning-based cancer image classification models usually need to be hosted on a computing server. However, internet connection is unreliable for screening in low-resource settings. AIM: To develop a mobile-based dual-mode image classification method and customized Android application for point-of-care oral cancer detection. APPROACH: The dataset used in our study was captured among 5025 patients with our customized dual-modality mobile oral screening devices. We trained an efficient network MobileNet with focal loss and converted the model into TensorFlow Lite format. The finalized lite format model is ∼16.3 MB and ideal for smartphone platform operation. We have developed an Android smartphone application in an easy-to-use format that implements the mobile-based dual-modality image classification approach to distinguish oral potentially malignant and malignant images from normal/benign images. RESULTS: We investigated the accuracy and running speed on a cost-effective smartphone computing platform. It takes ∼300 ms to process one image pair with the Moto G5 Android smartphone. We tested the proposed method on a standalone dataset and achieved 81% accuracy for distinguishing normal/benign lesions from clinically suspicious lesions, using a gold standard of clinical impression based on the review of images by oral specialists. CONCLUSIONS: Our study demonstrates the effectiveness of a mobile-based approach for oral cancer screening in low-resource settings.


Subjects
Mouth Neoplasms, Point-of-Care Systems, Early Detection of Cancer, Humans, Mouth Neoplasms/diagnostic imaging, Sensitivity and Specificity, Smartphone
16.
Sensors (Basel) ; 20(13)2020 Jul 01.
Article in English | MEDLINE | ID: mdl-32630246

ABSTRACT

Phase unwrapping is a very important step in fringe projection 3D imaging. In this paper, we propose a new neural network for accurate phase unwrapping to address the special needs of fringe projection 3D imaging. Instead of labeling the wrapped phase with integers directly, a two-step training process with the same network configuration is proposed. In the first step, the network (network I) is trained to label only four key features in the wrapped phase. In the second step, another network with the same configuration (network II) is trained to label the wrapped phase segments. The advantages are that the dimension of the wrapped phase can be much larger than that of the training data, and that phase with serious Gaussian noise can be correctly unwrapped. We demonstrate the performance and key features of the neural network, trained with simulated data, on experimental data.
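The relation underlying any unwrapping scheme, learned or classical, is that the unwrapped phase equals the wrapped phase plus an integer number of 2π fringe orders. A minimal sequential 1-D unwrapper illustrating that relation (this is the classical algorithm, not the paper's network):

```python
import math

def unwrap_1d(wrapped):
    """Sequential 1-D phase unwrapping: whenever a sample-to-sample
    jump exceeds pi, add or subtract a multiple of 2*pi so the result
    is continuous. unwrapped = wrapped + 2*pi*k for integer k."""
    out = [wrapped[0]]
    k = 0
    for prev, cur in zip(wrapped, wrapped[1:]):
        jump = cur - prev
        if jump > math.pi:
            k -= 1
        elif jump < -math.pi:
            k += 1
        out.append(cur + 2 * math.pi * k)
    return out

# A linear phase ramp wrapped into (-pi, pi] is recovered exactly
# (up to floating-point error).
true_phase = [0.5 * i for i in range(20)]
wrapped = [math.atan2(math.sin(p), math.cos(p)) for p in true_phase]
recovered = unwrap_1d(wrapped)
print(max(abs(r - t) for r, t in zip(recovered, true_phase)) < 1e-9)  # True
```

The sequential rule fails under severe noise, where a single corrupted jump propagates through all later samples; labeling whole phase segments, as the paper's network II does, avoids that error propagation.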

17.
J Biomed Opt ; 25(6): 1-21, 2020 06.
Article in English | MEDLINE | ID: mdl-32578406

ABSTRACT

SIGNIFICANCE: The rates of melanoma and nonmelanoma skin cancer are rising across the globe. Due to a shortage of board-certified dermatologists, the burden of dermal lesion screening and erythema monitoring has fallen to primary care physicians (PCPs). An adjunctive device for lesion screening and erythema monitoring would be beneficial because PCPs are not typically extensively trained in dermatological care. AIM: We aim to examine the feasibility of using a smartphone-camera-based dermascope and a USB-camera-based dermascope utilizing polarized white-light imaging (PWLI) and polarized multispectral imaging (PMSI) to map dermal chromophores and erythema. APPROACH: Two dermascopes integrating LED-based PWLI and PMSI with both a smartphone-based camera and a USB-connected camera were developed to capture images of dermal lesions and erythema. Image processing algorithms were implemented to provide chromophore concentrations and redness measures. RESULTS: PWLI images were successfully converted to an alternate colorspace for erythema measures, and the spectral bandwidth of the PMSI LED illumination was sufficient for mapping of deoxyhemoglobin, oxyhemoglobin, and melanin chromophores. Both types of dermascopes were able to achieve similar relative concentration results. CONCLUSION: Chromophore mapping and erythema monitoring are feasible with PWLI and PMSI using LED illumination and smartphone-based cameras. These systems can provide a simpler, more portable geometry and reduce device costs compared with interference-filter-based or spectrometer-based clinical-grade systems. Future research should include a rigorous clinical trial to collect longitudinal data and a large enough dataset to train and implement a machine learning-based image classifier.


Subjects
Erythema, Smartphone, Erythema/diagnosis, Humans, Computer-Assisted Image Processing, Point-of-Care Systems, Skin
18.
J Biomed Opt ; 24(10): 1-8, 2019 10.
Article in English | MEDLINE | ID: mdl-31642247

ABSTRACT

Oral cancer is a growing health issue in low- and middle-income countries due to betel quid, tobacco, and alcohol use and in younger populations of middle- and high-income communities due to the prevalence of human papillomavirus. The described point-of-care, smartphone-based intraoral probe enables autofluorescence imaging and polarized white light imaging in a compact geometry through the use of a USB-connected camera module. The small size and flexible imaging head improves on previous intraoral probe designs and allows imaging the cheek pockets, tonsils, and base of tongue, the areas of greatest risk for both causes of oral cancer. Cloud-based remote specialist and convolutional neural network clinical diagnosis allow for both remote community and home use. The device is characterized and preliminary field-testing data are shared.


Subjects
Early Detection of Cancer/instrumentation, Mouth Neoplasms/diagnostic imaging, Optical Imaging/instrumentation, Oropharyngeal Neoplasms/diagnostic imaging, Equipment Design, Humans, Computer-Assisted Image Interpretation/methods, Point-of-Care Systems, Telemedicine
19.
PLoS One ; 13(12): e0207493, 2018.
Article in English | MEDLINE | ID: mdl-30517120

ABSTRACT

Oral cancer is a growing health issue in a number of low- and middle-income countries (LMIC), particularly in South and Southeast Asia. The described dual-modality, dual-view, point-of-care oral cancer screening device, developed for high-risk populations in remote regions with limited infrastructure, implements autofluorescence imaging (AFI) and white light imaging (WLI) on a smartphone platform, enabling early detection of pre-cancerous and cancerous lesions in the oral cavity with the potential to reduce morbidity, mortality, and overall healthcare costs. Using a custom Android application, this device synchronizes external light-emitting diode (LED) illumination and image capture for AFI and WLI. Data is uploaded to a cloud server for diagnosis by a remote specialist through a web app, with the ability to transmit triage instructions back to the device and patient. Finally, with the on-site specialist's diagnosis as the gold-standard, the remote specialist and a convolutional neural network (CNN) were able to classify 170 image pairs into 'suspicious' and 'not suspicious' with sensitivities, specificities, positive predictive values, and negative predictive values ranging from 81.25% to 94.94%.
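The four metrics reported — sensitivity, specificity, positive predictive value, and negative predictive value — all derive from the 2 x 2 confusion matrix of 'suspicious' vs 'not suspicious' calls against the gold-standard diagnosis. A minimal sketch (the counts are illustrative, not the paper's 170 image pairs):

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, positive and negative predictive
    value from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # of true positives, fraction caught
        "specificity": tn / (tn + fp),  # of true negatives, fraction cleared
        "ppv":         tp / (tp + fp),  # a positive call is right this often
        "npv":         tn / (tn + fn),  # a negative call is right this often
    }

m = screening_metrics(tp=75, fp=10, tn=80, fn=5)
print({k: round(v, 4) for k, v in m.items()})
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on disease prevalence in the screened population, which matters when extrapolating from a high-risk cohort to general screening.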


Subjects
Early Detection of Cancer/instrumentation, Early Detection of Cancer/methods, Mouth Neoplasms/diagnosis, Cloud Computing, Humans, Mobile Applications, Neural Networks (Computer), Optical Imaging, Point-of-Care Systems, Poverty, Sensitivity and Specificity, Smartphone/instrumentation
20.
Biomed Opt Express ; 9(11): 5318-5329, 2018 Nov 01.
Article in English | MEDLINE | ID: mdl-30460130

ABSTRACT

With the goal to screen high-risk populations for oral cancer in low- and middle-income countries (LMICs), we have developed a low-cost, portable, easy to use smartphone-based intraoral dual-modality imaging platform. In this paper we present an image classification approach based on autofluorescence and white light images using deep learning methods. The information from the autofluorescence and white light image pair is extracted, calculated, and fused to feed the deep learning neural networks. We have investigated and compared the performance of different convolutional neural networks, transfer learning, and several regularization techniques for oral cancer classification. Our experimental results demonstrate the effectiveness of deep learning methods in classifying dual-modal images for oral cancer detection.
