Results 1 - 20 of 84
1.
J Neurosci Methods ; 408: 110159, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38723868

ABSTRACT

BACKGROUND: In order to push the frontiers of brain-computer interfaces (BCI) and neuro-electronics, this research presents a novel framework that combines cutting-edge technologies for improved brain-related diagnostics in smart healthcare. It offers a ground-breaking application of transparent strategies to BCI, promoting openness and confidence in brain-computer interactions, taking inspiration from Grad-CAM (Gradient-weighted Class Activation Mapping)-based Explainable Artificial Intelligence (XAI) methodology. The landscape of healthcare diagnostics is about to be redefined by the integration of these technologies, especially for illnesses related to the brain. NEW METHOD: The proposed approach comprises an Xception architecture, pretrained on the ImageNet database via transfer learning, to extract significant features from a magnetic resonance imaging dataset acquired from publicly available sources, with a linear support vector machine used to distinguish between classes. Afterwards, gradient-weighted class activation mapping is deployed as the foundation for explainable artificial intelligence (XAI), generating informative heatmaps that represent the spatial localization of the features driving the model's predictions. RESULTS: The proposed model not only provides accurate outcomes but also provides transparency for the predictions generated by the Xception network when diagnosing the presence of abnormal tissue, and it avoids overfitting issues. Hyperparameters and performance metrics are reported from validating the proposed network on unseen brain MRI scans to ensure its effectiveness. COMPARISON WITH EXISTING METHODS AND CONCLUSIONS: The integration of Grad-CAM-based explainable artificial intelligence with the Xception deep neural network has a significant impact on diagnosing brain tumor disease while highlighting the specific regions of the input brain MRI images responsible for the predictions. In this study, the proposed network achieves 98.92% accuracy, 98.15% precision, 99.09% sensitivity, 98.18% specificity and a 98.91% Dice coefficient when identifying the presence of abnormal tissue in the brain. Thus, the Xception model trained via transfer learning offers remarkable diagnostic accuracy, and the linear support vector machine acts as a classifier providing efficient classification among the classes. In addition, the deployed explainable artificial intelligence approach reveals the reasoning behind the predictions made by the black-box deep neural network and provides a clear perspective to help medical experts achieve trustworthiness and transparency while diagnosing brain tumor disease in smart healthcare.
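A minimal sketch of the described pipeline, assuming a Keras Xception backbone pretrained on ImageNet, a scikit-learn linear SVM, and small placeholder arrays in place of the MRI dataset; the Grad-CAM heatmap is computed against the SVM decision score, and the layer name and hyperparameters are illustrative assumptions, not the authors' settings:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications.xception import preprocess_input
from sklearn.svm import LinearSVC

# ImageNet-pretrained Xception used as a frozen feature extractor (global average pooling).
backbone = Xception(weights="imagenet", include_top=False, pooling="avg",
                    input_shape=(299, 299, 3))

def features(images):
    """images: float array (n, 299, 299, 3) in [0, 255]."""
    return backbone.predict(preprocess_input(images.copy()), verbose=0)

# Placeholder data; replace with real brain MRI slices resized to 299x299 and their labels.
X_train = np.random.rand(8, 299, 299, 3) * 255
y_train = np.array([0, 1] * 4)                       # 0 = normal, 1 = abnormal tissue

svm = LinearSVC(C=1.0).fit(features(X_train), y_train)

# Grad-CAM: weight the last conv activations by the gradient of the SVM decision score.
last_conv = backbone.get_layer("block14_sepconv2_act").output
grad_model = tf.keras.Model(backbone.input, [last_conv, backbone.output])
w = tf.constant(svm.coef_[0], dtype=tf.float32)

def grad_cam(image):
    with tf.GradientTape() as tape:
        conv_out, pooled = grad_model(preprocess_input(image[None].copy()))
        score = tf.reduce_sum(pooled * w) + svm.intercept_[0]   # SVM decision value
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))                # channel importance
    cam = tf.nn.relu(tf.einsum("bhwc,bc->bhw", conv_out, weights))[0]
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()          # normalized heatmap

heatmap = grad_cam(X_train[0])
```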


Subjects
Artificial Intelligence, Brain-Computer Interfaces, Brain, Magnetic Resonance Imaging, Support Vector Machine, Humans, Magnetic Resonance Imaging/methods, Brain/diagnostic imaging, Brain/physiology, Neural Networks, Computer
2.
Front Neurol ; 15: 1404283, 2024.
Article in English | MEDLINE | ID: mdl-38651099

ABSTRACT

[This corrects the article DOI: 10.3389/fneur.2023.1221209.].

3.
Photodiagnosis Photodyn Ther ; 46: 104048, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38484830

ABSTRACT

BACKGROUND: Breast cancer is a leading cause of cancer-related deaths among women worldwide. Early and accurate detection is crucial for improving patient outcomes. Our study utilizes Visible and Near-Infrared Hyperspectral Imaging (VIS-NIR HSI), a promising non-invasive technique, to detect cancerous regions in ex-vivo breast specimens based on their hyperspectral response. METHODS: In this paper, we present a novel HSI platform integrated with fuzzy c-means clustering for automated breast cancer detection. We acquire hyperspectral data from breast tissue samples, and preprocess it to reduce noise and enhance hyperspectral features. Fuzzy c-means clustering is then applied to segment cancerous regions based on their spectral characteristics. RESULTS: Our approach demonstrates promising results. We evaluated the quality of the clustering using metrics like Silhouette Index (SI), Davies-Bouldin Index (DBI), and Calinski-Harabasz Index (CHI). The clustering metrics results revealed an optimal number of 6 clusters for breast tissue classification, and the SI values ranged from 0.68 to 0.72, indicating well-separated clusters. Moreover, the CHI values showed that the clusters were well-defined, and the DBI values demonstrated low cluster dispersion. Additionally, the sensitivity, specificity, and accuracy of our system were evaluated on a dataset of breast tissue samples. We achieved an average sensitivity of 96.83%, specificity of 93.39%, and accuracy of 95.12%. These results indicate the effectiveness of our HSI-based approach in distinguishing cancerous and non-cancerous regions. CONCLUSIONS: The paper introduces a robust hyperspectral imaging platform coupled with fuzzy c-means clustering for automated breast cancer detection. The clustering metrics results support the reliability of our approach in effectively segmenting breast tissue samples. In addition, the system shows high sensitivity and specificity, making it a valuable tool for early-stage breast cancer diagnosis. This innovative approach holds great potential for improving breast cancer screening and, thereby, enhancing our understanding of the disease and its detection patterns.
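As a rough illustration of the clustering and validity metrics described, a sketch using scikit-fuzzy and scikit-learn on a synthetic hyperspectral cube; the cube dimensions, fuzzifier, and stopping criteria are assumptions, not the study's acquisition or preprocessing:

```python
import numpy as np
import skfuzzy as fuzz
from sklearn.metrics import (silhouette_score, davies_bouldin_score,
                             calinski_harabasz_score)

# Placeholder hyperspectral cube: 64x64 pixels, 100 spectral bands (not the study's data).
cube = np.random.rand(64, 64, 100)
pixels = cube.reshape(-1, cube.shape[-1])            # (n_pixels, n_bands)

# Fuzzy c-means expects features in rows: shape (n_bands, n_pixels).
n_clusters = 6                                       # optimum reported in the abstract
cntr, u, *_ = fuzz.cluster.cmeans(pixels.T, c=n_clusters, m=2.0,
                                  error=1e-5, maxiter=300, seed=0)
labels = np.argmax(u, axis=0)                        # hard labels from fuzzy memberships

# Cluster-validity indices used in the study.
print("Silhouette:        ", silhouette_score(pixels, labels))
print("Davies-Bouldin:    ", davies_bouldin_score(pixels, labels))
print("Calinski-Harabasz: ", calinski_harabasz_score(pixels, labels))
```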


Subjects
Breast Neoplasms, Hyperspectral Imaging, Spectroscopy, Near-Infrared, Humans, Breast Neoplasms/diagnostic imaging, Female, Hyperspectral Imaging/methods, Spectroscopy, Near-Infrared/methods, Fuzzy Logic
4.
Front Robot AI ; 11: 1123762, 2024.
Article in English | MEDLINE | ID: mdl-38384357

ABSTRACT

Finding actual causes of unmanned aerial vehicle (UAV) failures can be split into two main tasks: building causal models and performing actual causality analysis (ACA) over them. While there are available solutions in the literature to perform ACA, building comprehensive causal models is still an open problem. The expensive and time-consuming process of building such models, typically performed manually by domain experts, has hindered the widespread application of causality-based diagnosis solutions in practice. This study proposes a methodology based on natural language processing for automating causal model generation for UAVs. After collecting textual data from online resources, causal keywords are identified in sentences. Next, cause-effect phrases are extracted from sentences based on predefined dependency rules between tokens. Finally, the extracted cause-effect pairs are merged to form a causal graph, which we then use for ACA. To demonstrate the applicability of our framework, we scrape online text resources of Ardupilot, an open-source UAV controller software. Our evaluations using real flight logs show that the generated graphs can successfully be used to find the actual causes of unwanted events. Moreover, our hybrid cause-effect extraction module performs better than a purely deep-learning based tool (i.e., CiRA) by 32% in precision and 25% in recall in our Ardupilot use case.
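A minimal sketch of one dependency rule of the kind described, using spaCy and networkx; the causal verb, rule, and example sentence are illustrative assumptions, not the paper's rule set or Ardupilot corpus:

```python
import spacy
import networkx as nx

nlp = spacy.load("en_core_web_sm")      # assumes the small English model is installed

def extract_cause_effect(sentence):
    """Toy dependency rule: for a verb with lemma 'cause', treat the subject subtree
    as the cause phrase and the object subtree as the effect phrase."""
    doc = nlp(sentence)
    pairs = []
    for tok in doc:
        if tok.lemma_ == "cause" and tok.pos_ == "VERB":
            subj = [c for c in tok.children if c.dep_ in ("nsubj", "nsubjpass")]
            obj = [c for c in tok.children if c.dep_ in ("dobj", "obj")]
            if subj and obj:
                cause = " ".join(w.text for w in subj[0].subtree)
                effect = " ".join(w.text for w in obj[0].subtree)
                pairs.append((cause, effect))
    return pairs

# Merge extracted pairs into a causal graph for downstream actual-causality analysis.
graph = nx.DiGraph()
for cause, effect in extract_cause_effect("A weak GPS signal causes position drift."):
    graph.add_edge(cause, effect)
print(list(graph.edges))
```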

5.
IEEE J Transl Eng Health Med ; 12: 291-297, 2024.
Article in English | MEDLINE | ID: mdl-38410180

ABSTRACT

OBJECTIVE: A change in handwriting is an early sign of Parkinson's disease (PD). However, significant inter-person differences in handwriting make it difficult to identify pathological handwriting, especially in the early stages. This paper reports the testing of NeuroDiag, a software-based medical device, for the automated detection of PD using handwriting patterns. NeuroDiag is designed to direct the user to perform six drawing and writing tasks, and the recordings are then uploaded onto a server for analysis. Kinematic information and pen pressure of handwriting are extracted and used as baseline parameters. NeuroDiag was trained based on 26 PD patients in the early stage of the disease and 26 matching controls. METHODS: Twenty-three people with PD (PPD) in their early stage of the disease, 25 age-matched healthy controls (AMC), and 7 young healthy controls were recruited for this study. Under the supervision of a consultant neurologist or their nurse, the participants used NeuroDiag. The reports were generated in real-time and tabulated by an independent observer. RESULTS: The participants were able to use NeuroDiag without assistance. The handwriting data was successfully uploaded to the server where the report was automatically generated in real-time. There were significant differences in the writing speed between PPD and AMC (P<0.001). NeuroDiag showed 86.96% sensitivity and 76.92% specificity in differentiating PPD from those without PD. CONCLUSION: In this work, we tested the reliability of NeuroDiag in differentiating between PPD and AMC for real-time applications. The results show that NeuroDiag has the potential to be used to assist neurologists and for telehealth applications. Clinical and Translational Impact Statement - This pre-clinical study shows the feasibility of developing a community-wide screening program for Parkinson's disease using automated handwriting analysis software, NeuroDiag.
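To illustrate the kind of kinematic and pen-pressure parameters referred to, a sketch computing generic features from a sampled pen trajectory; the feature set, sampling rate, and synthetic spiral are assumptions, not NeuroDiag's tasks or baseline parameters:

```python
import numpy as np

def kinematic_features(x, y, pressure, fs=200.0):
    """Illustrative kinematic and pressure features from a pen trajectory sampled at fs Hz."""
    dt = 1.0 / fs
    vx, vy = np.gradient(x, dt), np.gradient(y, dt)
    speed = np.hypot(vx, vy)
    accel = np.gradient(speed, dt)
    jerk = np.gradient(accel, dt)
    return {
        "mean_speed": speed.mean(),
        "speed_cv": speed.std() / (speed.mean() + 1e-9),   # variability of writing speed
        "mean_accel": np.abs(accel).mean(),
        "mean_jerk": np.abs(jerk).mean(),                  # smoothness proxy
        "mean_pressure": pressure.mean(),
        "pressure_std": pressure.std(),
    }

# Synthetic spiral standing in for one drawing task.
t = np.linspace(0, 4 * np.pi, 800)
feats = kinematic_features(t * np.cos(t), t * np.sin(t), np.full_like(t, 0.5))
print(feats)
```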


Subjects
Parkinson Disease, Humans, Parkinson Disease/diagnosis, Reproducibility of Results, Handwriting, Software, Biomechanical Phenomena
6.
Dentomaxillofac Radiol ; 53(1): 52-59, 2024 Jan 11.
Article in English | MEDLINE | ID: mdl-38214946

ABSTRACT

OBJECTIVES: To compare an artificial intelligence (AI)-driven web-based platform with manual measurements for analysing facial asymmetry in craniofacial CT examinations. METHODS: The study included 95 craniofacial CT scans from patients aged 18-30 years. The degree of asymmetry was measured based on AI platform-predefined anatomical landmarks: sella (S), condylion (Co), anterior nasal spine (ANS), and menton (Me). The concordance between the results of automatic asymmetry reports and manual linear 3D measurements was calculated. The asymmetry rate (AR) indicator was determined for both automatic and manual measurements, and the concordance between them was calculated. The repeatability of manual measurements in 20 randomly selected subjects was assessed. The concordance of measurements of quantitative variables was assessed with the intraclass correlation coefficient (ICC) according to the Shrout and Fleiss classification. RESULTS: Erroneous AI tracings were found in 16.8% of cases, reducing the analysed cases to 79. The agreement between automatic and manual asymmetry measurements was very low (ICC < 0.3). A lack of agreement between AI and manual AR analysis (ICC type 3 = 0) was found. The repeatability of manual measurements and AR calculations showed excellent correlation (ICC type 2 > 0.947). CONCLUSIONS: The results indicate that the rate of tracing errors and lack of agreement with manual AR analysis make it impossible to use the tested AI platform to assess the degree of facial asymmetry.
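For reference, a sketch of this kind of agreement analysis using pingouin's Shrout-and-Fleiss ICC table on a toy two-rater layout; the values and column names in the toy table are invented, not the study's measurements:

```python
import pandas as pd
import pingouin as pg

# Toy long-format table: the same landmark distance measured by the AI platform and manually.
df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4],
    "rater":   ["AI", "manual"] * 4,
    "distance_mm": [12.1, 11.8, 9.4, 10.9, 14.2, 13.0, 8.7, 9.1],
})

icc = pg.intraclass_corr(data=df, targets="subject", raters="rater",
                         ratings="distance_mm")
# ICC2 (single random raters) and ICC3 (single fixed raters) correspond to the
# Shrout & Fleiss types reported in the abstract.
print(icc[["Type", "ICC", "CI95%"]])
```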


Subjects
Artificial Intelligence, Facial Asymmetry, Humans, Facial Asymmetry/diagnostic imaging, Reproducibility of Results, Imaging, Three-Dimensional/methods, Cephalometry/methods
7.
Eur J Neurol ; 31(4): e16195, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38235841

ABSTRACT

BACKGROUND AND PURPOSE: The integration of artificial intelligence (AI) in healthcare has the potential to revolutionize patient care and clinical decision-making. This study aimed to explore the reliability of large language models in neurology by comparing the performance of an AI chatbot with neurologists in diagnostic accuracy and decision-making. METHODS: A cross-sectional observational study was conducted. A pool of clinical cases from the American Academy of Neurology's Question of the Day application was used as the basis for the study. The AI chatbot used was ChatGPT, based on GPT-3.5. The results were then compared to those of neurology peers who also answered the questions (a mean of 1500 neurologists/neurology residents). RESULTS: The study included 188 questions across 22 different categories. The AI chatbot demonstrated a mean success rate of 71.3% in providing correct answers, with varying levels of proficiency across different neurology categories. Compared to neurology peers, the AI chatbot performed at a similar level, with a mean success rate of 69.2% amongst peers. Additionally, the AI chatbot achieved a correct diagnosis in 85.0% of cases and it provided an adequate justification for its correct responses in 96.1%. CONCLUSIONS: The study highlights the potential of AI, particularly large language models, in assisting with clinical reasoning and decision-making in neurology and emphasizes the importance of AI as a complementary tool to human expertise. Future advancements and refinements are needed to enhance the AI chatbot's performance and broaden its application across various medical specialties.


Subjects
Artificial Intelligence, Neurology, Humans, Cross-Sectional Studies, Reproducibility of Results, Software
8.
Mod Pathol ; 37(1): 100373, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37925056

ABSTRACT

The current flow cytometric analysis of blood and bone marrow samples for diagnosis of acute myeloid leukemia (AML) relies heavily on manual intervention in the processing and analysis steps, introducing significant subjectivity into resulting diagnoses and necessitating highly trained personnel. Furthermore, concurrent molecular characterization via cytogenetics and targeted sequencing can take multiple days, delaying patient diagnosis and treatment. Attention-based multi-instance learning models (ABMILMs) are deep learning models that make accurate predictions and generate interpretable insights regarding the classification of a sample from individual events/cells; nonetheless, these models have yet to be applied to flow cytometry data. In this study, we developed a computational pipeline using ABMILMs for the automated diagnosis of AML cases based exclusively on flow cytometric data. Analysis of 1820 flow cytometry samples shows that this pipeline provides accurate diagnoses of acute leukemia (area under the receiver operating characteristic curve [AUROC] 0.961) and accurately differentiates AML vs B- and T-lymphoblastic leukemia (AUROC 0.965). Models for prediction of 9 cytogenetic aberrancies and 32 pathogenic variants in AML provide accurate predictions, particularly for t(15;17)(PML::RARA) [AUROC 0.929], t(8;21)(RUNX1::RUNX1T1) (AUROC 0.814), and NPM1 variants (AUROC 0.807). Finally, we demonstrate how these models generate interpretable insights into which individual flow cytometric events and markers deliver optimal diagnostic utility, providing hematopathologists with a data visualization tool for improved data interpretation, as well as novel biological associations between flow cytometric marker expression and cytogenetic/molecular variants in AML. Our study is the first to illustrate the feasibility of using deep learning-based analysis of flow cytometric data for automated AML diagnosis and molecular characterization.
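A minimal PyTorch sketch of attention-based multi-instance pooling over flow-cytometry events, of the kind the abstract describes; the dimensions, marker count, and single-layer head are assumptions rather than the published architecture:

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Each flow-cytometry event (instance) is embedded, attention weights pool the events
    into a bag vector, and a linear head scores the whole sample."""
    def __init__(self, n_markers=10, embed_dim=64, attn_dim=32):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(n_markers, embed_dim), nn.ReLU())
        self.attn = nn.Sequential(nn.Linear(embed_dim, attn_dim), nn.Tanh(),
                                  nn.Linear(attn_dim, 1))
        self.head = nn.Linear(embed_dim, 1)

    def forward(self, bag):                      # bag: (n_events, n_markers)
        h = self.embed(bag)                      # (n_events, embed_dim)
        a = torch.softmax(self.attn(h), dim=0)   # per-event attention weights
        z = (a * h).sum(dim=0)                   # attention-weighted bag embedding
        return self.head(z), a.squeeze(-1)       # sample logit + interpretable event weights

model = AttentionMIL()
events = torch.randn(5000, 10)                   # placeholder: 5,000 events x 10 markers
logit, weights = model(events)
prob_positive = torch.sigmoid(logit)             # e.g., probability of an AML call
```

The returned per-event weights are what make this model family interpretable: high-weight events are the cells that drove the sample-level prediction.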


Subjects
Deep Learning, Leukemia, Myeloid, Acute, Humans, Flow Cytometry/methods, Leukemia, Myeloid, Acute/diagnosis, Leukemia, Myeloid, Acute/genetics, Leukemia, Myeloid, Acute/metabolism, Acute Disease, Cytogenetics
9.
Ultrasound Med Biol ; 50(2): 304-314, 2024 02.
Article in English | MEDLINE | ID: mdl-38044200

ABSTRACT

OBJECTIVE: Ultrasound (US) examination has unique advantages in diagnosing carpal tunnel syndrome (CTS), although identification of the median nerve (MN) and diagnosis of CTS depend heavily on the expertise of examiners. With the aim of alleviating this problem, we developed a one-stop automated CTS diagnosis system (OSA-CTSD) and evaluated its effectiveness as a computer-aided diagnostic tool. METHODS: We combined real-time MN delineation, accurate biometric measurements and explainable CTS diagnosis into a unified framework, called OSA-CTSD. We then collected a total of 32,301 static images from US videos of 90 normal wrists and 40 CTS wrists for evaluation using a simplified scanning protocol. RESULTS: The proposed model exhibited better segmentation and measurement performance than competing methods, with a Hausdorff distance (95th percentile) score of 7.21 px, average symmetric surface distance score of 2.64 px, Dice score of 85.78% and intersection over union score of 76.00%. In the reader study, it exhibited performance comparable to the average performance of experienced radiologists in classifying CTS and outperformed inexperienced radiologists in terms of classification metrics (e.g., accuracy score 3.59% higher and F1 score 5.85% higher). CONCLUSION: Diagnostic performance of the OSA-CTSD was promising, with the advantages of real-time delineation, automation and clinical interpretability. The application of such a tool not only reduces reliance on the expertise of examiners but also can help to promote future standardization of the CTS diagnostic process, benefiting both patients and radiologists.
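For clarity on the reported segmentation metrics, a sketch computing Dice, IoU, the 95th-percentile Hausdorff distance, and the average symmetric surface distance on toy binary masks; the brute-force boundary distance is fine for small masks but is not the evaluation code used in the study:

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist

def dice_iou(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum() + 1e-8)
    iou = inter / (np.logical_or(pred, gt).sum() + 1e-8)
    return dice, iou

def surface_points(mask):
    return np.argwhere(mask & ~binary_erosion(mask))      # boundary pixels

def hd95_assd(pred, gt):
    d = cdist(surface_points(pred), surface_points(gt))   # pairwise boundary distances
    d_pg, d_gp = d.min(axis=1), d.min(axis=0)              # directed surface distances
    hd95 = max(np.percentile(d_pg, 95), np.percentile(d_gp, 95))
    assd = (d_pg.mean() + d_gp.mean()) / 2
    return hd95, assd

# Toy masks standing in for predicted vs. reference median-nerve contours.
pred = np.zeros((64, 64), bool); pred[20:40, 20:44] = True
gt = np.zeros((64, 64), bool); gt[22:42, 18:42] = True
print(dice_iou(pred, gt), hd95_assd(pred, gt))
```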


Subjects
Carpal Tunnel Syndrome, Deep Learning, Humans, Carpal Tunnel Syndrome/diagnostic imaging, Neural Conduction/physiology, Median Nerve/diagnostic imaging, Ultrasonography
10.
Diagnostics (Basel) ; 13(24)2023 Dec 15.
Article in English | MEDLINE | ID: mdl-38132257

ABSTRACT

Early detection of colorectal cancer is crucial for improving outcomes and reducing mortality. While there is strong evidence of effectiveness, currently adopted screening methods present several shortcomings which negatively impact the detection of early stage carcinogenesis, including low uptake due to patient discomfort. As a result, developing novel, non-invasive alternatives is an important research priority. Recent advancements in the field of breathomics, the study of breath composition and analysis, have paved the way for new avenues for non-invasive cancer detection and effective monitoring. Harnessing the utility of Volatile Organic Compounds in exhaled breath, breathomics has the potential to disrupt colorectal cancer screening practices. Our goal is to outline key research efforts in this area focusing on machine learning methods used for the analysis of breathomics data, highlight challenges involved in artificial intelligence application in this context, and suggest possible future directions which are currently considered within the framework of the European project ONCOSCREEN.

11.
Comput Biol Med ; 167: 107616, 2023 12.
Article in English | MEDLINE | ID: mdl-37922601

ABSTRACT

Age-related macular degeneration (AMD) is a leading cause of vision loss in the elderly, highlighting the need for early and accurate detection. In this study, we proposed DeepDrAMD, a hierarchical vision transformer-based deep learning model that integrates data augmentation techniques and SwinTransformer, to detect AMD and distinguish between different subtypes using color fundus photographs (CFPs). The DeepDrAMD was trained on the in-house WMUEH training set and achieved high performance in AMD detection with an AUC of 98.76% in the WMUEH testing set and 96.47% in the independent external Ichallenge-AMD cohort. Furthermore, the DeepDrAMD effectively classified dryAMD and wetAMD, achieving AUCs of 93.46% and 91.55%, respectively, in the WMUEH cohort and another independent external ODIR cohort. Notably, DeepDrAMD excelled at distinguishing between wetAMD subtypes, achieving an AUC of 99.36% in the WMUEH cohort. Comparative analysis revealed that the DeepDrAMD outperformed conventional deep-learning models and expert-level diagnosis. The cost-benefit analysis demonstrated that the DeepDrAMD offers substantial cost savings and efficiency improvements compared to manual reading approaches. Overall, the DeepDrAMD represents a significant advancement in AMD detection and differential diagnosis using CFPs, and has the potential to assist healthcare professionals in informed decision-making, early intervention, and treatment optimization.
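As a sketch of the kind of hierarchical vision-transformer setup described, a Swin-Tiny backbone from timm re-headed for binary AMD detection with a generic augmentation pipeline; the model variant, augmentations, and optimizer settings are assumptions, not DeepDrAMD's configuration:

```python
import torch
import timm
from torch import nn
from torchvision import transforms

# Hierarchical (Swin) vision transformer fine-tuned for 2-class AMD detection from CFPs.
model = timm.create_model("swin_tiny_patch4_window7_224",
                          pretrained=True, num_classes=2)

augment = transforms.Compose([                     # generic fundus-photo augmentation
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.ToTensor(),
])

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_step(images, labels):                    # images: (B, 3, 224, 224) tensors
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```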


Subjects
Deep Learning, Macular Degeneration, Humans, Aged, Diagnosis, Differential, Macular Degeneration/diagnostic imaging, Diagnostic Techniques, Ophthalmological, Photography/methods
12.
Life (Basel) ; 13(11)2023 Oct 26.
Article in English | MEDLINE | ID: mdl-38004263

ABSTRACT

Skin cancer has become increasingly common over the past decade, with melanoma being the most aggressive type. Hence, early detection of skin cancer and melanoma is essential in dermatology. Computational methods can be a valuable tool for assisting dermatologists in identifying skin cancer. Most research in machine learning for skin cancer detection has focused on dermoscopy images due to the existence of larger image datasets. However, general practitioners typically do not have access to a dermoscope and must rely on naked-eye examinations or standard clinical images. Machine learning has also proven to be an effective tool for detecting high-risk moles from standard, off-the-shelf cameras. The objective of this paper is to provide a comprehensive review of image-processing techniques for skin cancer detection using clinical images. In this study, we evaluate 51 state-of-the-art articles that have used machine learning methods to detect skin cancer over the past decade, focusing on clinical datasets. Even though several studies have been conducted in this field, there are still few publicly available clinical datasets with sufficient data that can be used as a benchmark, especially when compared to the existing dermoscopy databases. In addition, we observed that the available artifact removal approaches are not quite adequate in some cases and may also have a negative impact on the models. Moreover, the majority of the reviewed articles work with single-lesion images and do not consider typical mole patterns and temporal changes in the lesions of each patient.

13.
J Clin Med ; 12(18)2023 Sep 07.
Article in English | MEDLINE | ID: mdl-37762772

ABSTRACT

Otolaryngological diagnoses, such as otitis media, are traditionally performed using endoscopy, wherein diagnostic accuracy can be subjective and vary among clinicians. The integration of objective tools, like artificial intelligence (AI), could potentially improve the diagnostic process by minimizing the influence of subjective biases and variability. We systematically reviewed the AI techniques using medical imaging in otolaryngology. Relevant studies related to AI-assisted otitis media diagnosis were extracted from five databases: Google Scholar, PubMed, Medline, Embase, and IEEE Xplore, without date restrictions. Publications that did not relate to AI and otitis media diagnosis or did not utilize medical imaging were excluded. Of the 32 identified studies, 26 used tympanic membrane images for classification, achieving an average diagnosis accuracy of 86% (range: 48.7-99.16%). Another three studies employed both segmentation and classification techniques, reporting an average diagnosis accuracy of 90.8% (range: 88.06-93.9%). These findings suggest that AI technologies hold promise for improving otitis media diagnosis, offering benefits for telemedicine and primary care settings due to their high diagnostic accuracy. However, to ensure patient safety and optimal outcomes, further improvements in diagnostic performance are necessary.

14.
Front Neurol ; 14: 1221209, 2023.
Article in English | MEDLINE | ID: mdl-37670775

ABSTRACT

Introduction: Real-life headache presentations may fit more than one ICHD3 diagnosis. This project seeks to exhaustively list all logically consistent "co-diagnoses" according to the ICHD3 criteria. We limited our project to cases of two concurrent diagnoses. Methods: We included the criteria for "Migraine" (1.1, 1.2, 1.3), "Tension-type headache" (2.1, 2.2, 2.3, 2.4), "Trigeminal autonomic cephalalgias" (3.1, 3.2, 3.3, 3.4, 3.5), and "Other primary headache disorders." We excluded "probable" diagnosis criteria. Each characteristic in the above criteria was assigned a unique prime number. We then encoded each ICHD3 criterion as an integer through multiplication, in a list format; we called these criteria representations. "Co-diagnosis representations" were generated by multiplying all possible pairings of criteria representations. We then manually encoded a list of logically inconsistent characteristics through multiplication. All co-diagnosis representations divisible by any inconsistency representation were filtered out, generating a list of co-diagnosis representations that were logically consistent. This list was then translated back into ICHD3 diagnoses. Results: We used a total of 103 prime numbers to encode 578 ICHD3 criteria. Once illogical characteristics were excluded, we obtained 145 dual diagnoses. Of the dual diagnoses, two contained intersecting characteristics due to subset relationships, 14 contained intersecting characteristics without subset relationships, and 129 contained dual diagnoses as a result of non-intersecting characteristics. Conclusion: Analysis of dual diagnosis in headaches offers insight into "loopholes" in the ICHD3 as well as a potential explanation for the source of a number of controversies regarding headache disorders. The existence of dual diagnoses and their identification may carry implications for future developments and testing of machine-learning diagnostic algorithms for headaches.
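A small sketch of the prime-number encoding and divisibility filtering described, with toy characteristics and criteria standing in for the actual ICHD3 items:

```python
from sympy import prime
from math import prod
from itertools import combinations

# Each clinical characteristic gets a unique prime; a criteria set is the product of its
# characteristics' primes, so divisibility tests for "contains these characteristics".
# The characteristics and criteria below are toy stand-ins, not the real ICHD3 items.
characteristics = ["unilateral", "bilateral", "pulsating", "pressing",
                   "photophobia", "lasts_4_72h", "lasts_30min_7d"]
P = {c: prime(i + 1) for i, c in enumerate(characteristics)}

criteria = {
    "migraine_like": ["unilateral", "pulsating", "photophobia", "lasts_4_72h"],
    "tension_like":  ["bilateral", "pressing", "lasts_30min_7d"],
}
encode = lambda chars: prod(P[c] for c in chars)            # criteria representation
reps = {name: encode(chars) for name, chars in criteria.items()}

# Logically inconsistent characteristic pairs, also encoded by multiplication.
inconsistent = [encode(["unilateral", "bilateral"]),
                encode(["pulsating", "pressing"]),
                encode(["lasts_4_72h", "lasts_30min_7d"])]

# Co-diagnosis representation = product of two criteria representations;
# keep the pairing only if no inconsistency representation divides it.
for (a, ra), (b, rb) in combinations(reps.items(), 2):
    co = ra * rb
    consistent = all(co % bad != 0 for bad in inconsistent)
    print(f"{a} + {b}: {'consistent' if consistent else 'excluded'}")
```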

15.
medRxiv ; 2023 Aug 31.
Article in English | MEDLINE | ID: mdl-37693437

ABSTRACT

Importance: Acute Hepatic Porphyria (AHP) is a group of rare but treatable conditions associated with diagnostic delays of fifteen years on average. The advent of electronic health records (EHR) data and machine learning (ML) may improve the timely recognition of rare diseases like AHP. However, prediction models can be difficult to train given the limited case numbers, unstructured EHR data, and selection biases intrinsic to healthcare delivery. Objective: To train and characterize models for identifying patients with AHP. Design, Setting, and Participants: This diagnostic study used structured and notes-based EHR data from two University of California centers, UCSF (2012-2022) and UCLA (2019-2022). The data were split into two cohorts (referral, diagnosis) and used to develop models that predict: 1) who will be referred for testing of acute porphyria, amongst those who presented with abdominal pain (a cardinal symptom of AHP), and 2) who will test positive, amongst those referred. The referral cohort consisted of 747 patients referred for testing and 99,849 contemporaneous patients who were not. The diagnosis cohort consisted of 72 confirmed AHP cases and 347 patients who tested negative. Cases were female predominant and 6-75 years old at the time of diagnosis. Candidate models used a range of architectures. Feature selection was semi-automated and incorporated publicly available data from knowledge graphs. Main Outcomes and Measures: F-score on an outcome-stratified test set. Results: The best center-specific referral models achieved an F-score of 86-91%. The best diagnosis model achieved an F-score of 92%. To further test our model, we contacted 372 current patients who lack an AHP diagnosis but were predicted by our models as potentially having it (≥ 10% probability of referral, ≥ 50% of testing positive). However, we were only able to recruit 10 of these patients for biochemical testing, all of whom were negative. Nonetheless, post hoc evaluations suggested that these models could identify 71% of cases earlier than their diagnosis date, saving 1.2 years. Conclusions and Relevance: ML can reduce diagnostic delays in AHP and other rare diseases. Robust recruitment strategies and multicenter coordination will be needed to validate these models before they can be deployed.

16.
Comput Biol Med ; 164: 107312, 2023 09.
Article in English | MEDLINE | ID: mdl-37597408

ABSTRACT

BACKGROUND: Epilepsy is one of the most common neurological conditions globally, and the fourth most common in the United States. It is characterized by recurrent unprovoked seizures, which have major quality-of-life and financial impacts for affected individuals. A rapid and accurate diagnosis is essential in order to instigate and monitor optimal treatments. There is also a compelling need for the accurate interpretation of epilepsy due to the current scarcity of neurologist diagnosticians and a global inequity in access and outcomes. Furthermore, the existing clinical and traditional machine learning diagnostic methods exhibit limitations, warranting an automated system based on a deep learning model for epilepsy detection and monitoring using a large database. METHOD: The EEG signals from 35 channels were used to train the deep learning-based transformer model named EpilepsyNet. For each training iteration, 1-min-long data were randomly sampled from each participant. Thereafter, each 5-s epoch was mapped to a matrix using the Pearson Correlation Coefficient (PCC), such that the bottom part of the triangle was discarded and only the upper triangle of the matrix was vectorized as input data. PCC is a reliable method used to measure the statistical relationship between two variables. Based on the 5 s of data, a single embedding was then performed to generate a 1-dimensional array of signals. In the final stage, a positional encoding with learnable parameters was added to each correlation coefficient's embedding before being fed to the developed EpilepsyNet as input data for the epilepsy EEG signals. The ten-fold cross-validation technique was used to generate the model. RESULTS: Our transformer-based model (EpilepsyNet) yielded high classification accuracy, sensitivity, specificity and positive predictive values of 85%, 82%, 87%, and 82%, respectively. CONCLUSION: The proposed method is both accurate and robust, since ten-fold cross-validation was employed to evaluate the performance of the model. Compared to the deep models used in existing studies for epilepsy diagnosis, our proposed method is simple and less computationally intensive. This is the earliest study to have uniquely employed positional encoding with learnable parameters for each correlation coefficient's embedding together with a deep transformer model, using a large database of 121 participants for epilepsy detection. With the training and validation of the model on a larger dataset, the same approach can be extended for the detection of other neurological conditions, with a transformative impact on neurological diagnostics worldwide.
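A condensed sketch of the described input pipeline and model shape: a 5-s, 35-channel epoch becomes the vectorized upper triangle of its Pearson correlation matrix, each coefficient is embedded, a learnable positional encoding is added, and a transformer encoder produces a classification logit. Layer sizes, the pooling head, and the sampling rate are assumptions, not EpilepsyNet's published configuration:

```python
import numpy as np
import torch
import torch.nn as nn

def epoch_to_tokens(epoch):
    """epoch: (n_channels, n_samples) EEG segment. Returns the vectorized upper triangle
    of the channel-by-channel Pearson correlation matrix."""
    pcc = np.corrcoef(epoch)                              # (35, 35) channel correlations
    iu = np.triu_indices(pcc.shape[0], k=1)               # keep upper triangle only
    return torch.tensor(pcc[iu], dtype=torch.float32)     # (595,) coefficients

class TinyEpilepsyTransformer(nn.Module):
    def __init__(self, n_tokens=595, d_model=32, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(1, d_model)                            # embed each coefficient
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, d_model))    # learnable positions
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):                                 # x: (B, n_tokens)
        h = self.embed(x.unsqueeze(-1)) + self.pos
        h = self.encoder(h).mean(dim=1)                   # pool over tokens
        return self.head(h)                               # epilepsy logit

eeg_epoch = np.random.randn(35, 5 * 256)                  # placeholder 5-s epoch at 256 Hz
logit = TinyEpilepsyTransformer()(epoch_to_tokens(eeg_epoch).unsqueeze(0))
```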


Subjects
Epilepsy, Quality of Life, Humans, Epilepsy/diagnosis, Databases, Factual, Machine Learning, Electroencephalography
17.
Bioengineering (Basel) ; 10(8)2023 Aug 03.
Article in English | MEDLINE | ID: mdl-37627809

ABSTRACT

Epicutaneous patch testing is a well-established diagnostic method for identifying substances that may cause Allergic Contact Dermatitis (ACD), a common skin condition caused by exposure to environmental allergens. While the patch test remains the gold standard for identifying allergens, it is prone to observer bias and consumes valuable human resources. Deep learning models can be employed to address this challenge. In this study, we collected a dataset of 1579 multi-modal skin images from 200 patients using the Antera 3D® camera. We then investigated the feasibility of using a deep learning classifier for automating the identification of the allergens causing ACD. We propose a deep learning approach that utilizes a context-retaining pre-processing technique to improve the accuracy of the classifier. In addition, we find promise in the combination of the color image and false-color map of hemoglobin concentration to improve diagnostic accuracy. Our results showed that this approach can potentially achieve more than 86% recall and 94% specificity in identifying skin reactions, and contribute to faster and more accurate diagnosis while reducing clinician workload.

18.
Comput Biol Med ; 163: 107132, 2023 09.
Article in English | MEDLINE | ID: mdl-37343468

ABSTRACT

Retinal vessel segmentation is an important task in medical image analysis and has a variety of applications in the diagnosis and treatment of retinal diseases. In this paper, we propose SegR-Net, a deep learning framework for robust retinal vessel segmentation. SegR-Net utilizes a combination of feature extraction and embedding, deep feature magnification, feature precision and interference, and dense multiscale feature fusion to generate accurate segmentation masks. The model consists of an encoder module that extracts high-level features from the input images and a decoder module that reconstructs the segmentation masks by combining features from the encoder module. The encoder module consists of a feature extraction and embedding block enhanced by dense multiscale feature fusion, followed by a deep feature magnification (DFM) block that magnifies the retinal vessels. To further improve the quality of the extracted features, we use a group of two convolutional layers after each DFM block. In the decoder module, we utilize a feature precision and interference block and a dense multiscale feature fusion (DMFF) block to combine features from the encoder module and reconstruct the segmentation mask. We also incorporate data augmentation and pre-processing techniques to improve the generalization of the trained model. Experimental results on three publicly available fundus image datasets (CHASE_DB1, STARE, and DRIVE) demonstrate that SegR-Net outperforms state-of-the-art models in terms of accuracy, sensitivity, specificity, and F1 score. The proposed framework can provide more accurate and more efficient segmentation of retinal blood vessels than state-of-the-art techniques, which is essential for clinical decision-making and diagnosis of various eye diseases.


Subjects
Deep Learning, Algorithms, Image Processing, Computer-Assisted/methods, Retinal Vessels/diagnostic imaging, Fundus Oculi
19.
J Med Imaging (Bellingham) ; 10(3): 034504, 2023 May.
Article in English | MEDLINE | ID: mdl-37274760

ABSTRACT

Purpose: The adoption of emerging imaging technologies in the medical community is often hampered when they provide a new unfamiliar contrast that requires experience to be interpreted. Dynamic full-field optical coherence tomography (D-FF-OCT) microscopy is such an emerging technique. It provides fast, high-resolution images of excised tissues with a contrast comparable to H&E histology but without any tissue preparation and alteration. Approach: We designed and compared two machine learning approaches to support interpretation of D-FF-OCT images of breast surgical specimens and thus provide tools to facilitate medical adoption. We conducted a pilot study on 51 breast lumpectomy and mastectomy surgical specimens and more than 1000 individual 1.3×1.3 mm2 images and compared with standard H&E histology diagnosis. Results: Using our automatic diagnosis algorithms, we obtained an accuracy above 88% at the image level (1.3×1.3 mm2) and above 96% at the specimen level (above cm2). Conclusions: Altogether, these results demonstrate the high potential of D-FF-OCT coupled to machine learning to provide a rapid, automatic, and accurate histopathology diagnosis with minimal sample alteration.
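As a sketch of how image-level predictions could roll up to the specimen level reported above, one possible aggregation rule; the rule and thresholds are assumptions for illustration, not the aggregation described in the paper:

```python
import numpy as np

def specimen_prediction(image_probs, threshold=0.5, min_fraction=0.1):
    """Flag a specimen if at least `min_fraction` of its 1.3x1.3 mm2 tiles have a
    predicted malignancy probability above `threshold` (illustrative rule only)."""
    image_probs = np.asarray(image_probs)
    positive_fraction = (image_probs > threshold).mean()
    return positive_fraction >= min_fraction, positive_fraction

# Synthetic tile-level probabilities standing in for one lumpectomy specimen.
tile_probs = np.clip(np.random.normal(0.2, 0.15, size=200), 0, 1)
flag, frac = specimen_prediction(tile_probs)
print(f"specimen flagged: {flag} ({frac:.1%} of tiles positive)")
```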

20.
Stud Health Technol Inform ; 302: 615-616, 2023 May 18.
Article in English | MEDLINE | ID: mdl-37203763

ABSTRACT

The study proposes an integrated approach to automated cervical intraepithelial neoplasia (CIN) diagnosis in epithelial patches extracted from digital histology images. The model ensemble, combined CNN classifier, and highest-performing fusion approach achieved an accuracy of 94.57%. This result demonstrates significant improvement over the state-of-the-art classifiers for cervical cancer histopathology images and promises further improvement in the automated diagnosis of CIN.


Subjects
Uterine Cervical Dysplasia, Uterine Cervical Neoplasms, Female, Humans, Uterine Cervical Dysplasia/diagnosis, Uterine Cervical Dysplasia/pathology, Uterine Cervical Neoplasms/diagnostic imaging, Image Interpretation, Computer-Assisted/methods, Histological Techniques