1.
BMC Med Imaging ; 23(1): 129, 2023 09 15.
Article in English | MEDLINE | ID: mdl-37715137

ABSTRACT

BACKGROUND: Vision transformer-based methods are advancing the field of medical artificial intelligence and cancer imaging, including lung cancer applications. Recently, many researchers have developed vision transformer-based AI methods for lung cancer diagnosis and prognosis. OBJECTIVE: This scoping review aims to identify recent developments in vision transformer-based AI methods for lung cancer imaging applications. It provides key insights into how vision transformers complemented the performance of AI and deep learning methods for lung cancer. Furthermore, the review also identifies the datasets that contributed to advancing the field. METHODS: In this review, we searched the PubMed, Scopus, IEEE Xplore, and Google Scholar online databases. The search terms included intervention terms (vision transformers) and the task (i.e., lung cancer, adenocarcinoma, etc.). Two reviewers independently screened titles and abstracts to select relevant studies and performed the data extraction. A third reviewer was consulted to validate the inclusion and exclusion decisions. Finally, a narrative approach was used to synthesize the data. RESULTS: Of the 314 retrieved studies, this review included 34 studies published from 2020 to 2022. The most commonly addressed task in these studies was the classification of lung cancer types, such as lung squamous cell carcinoma versus lung adenocarcinoma, and identifying benign versus malignant pulmonary nodules. Other applications included survival prediction of lung cancer patients and segmentation of lungs. The studies lacked clear strategies for clinical transformation. The Swin transformer was a popular choice among researchers; however, many other architectures were also reported in which a vision transformer was combined with convolutional neural networks or a UNet model. Researchers have used publicly available lung cancer datasets from the Lung Image Database Consortium and The Cancer Genome Atlas.
One study used a cluster of 48 GPUs, while other studies used one, two, or four GPUs. CONCLUSION: It can be concluded that vision transformer-based models are increasing in popularity for developing AI methods for lung cancer applications. However, their computational complexity and clinical relevance are important factors to consider in future research work. This review provides valuable insights for researchers in the field of AI and healthcare to advance the state of the art in lung cancer diagnosis and prognosis. We provide an interactive dashboard at lung-cancer.onrender.com/ .
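As a rough illustration of the patch-embedding step shared by the vision transformer architectures surveyed above, the sketch below splits an image into non-overlapping patches and projects them to token embeddings. The image size, patch size, and embedding dimension are illustrative assumptions, not values from any reviewed model.

```python
import numpy as np

def patchify(image, patch_size):
    """Split a square image (H, W, C) into flattened non-overlapping patches."""
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    return (
        image.reshape(h // patch_size, patch_size, w // patch_size, patch_size, c)
             .transpose(0, 2, 1, 3, 4)          # group by (row-block, col-block)
             .reshape(-1, patch_size * patch_size * c)
    )

rng = np.random.default_rng(0)
ct_slice = rng.random((224, 224, 1))             # stand-in for a CT slice
patches = patchify(ct_slice, 16)                 # 14 x 14 grid of 16x16 patches
embed = patches @ rng.random((16 * 16 * 1, 64))  # linear projection (random here, learned in practice)
print(patches.shape, embed.shape)                # (196, 256) (196, 64)
```

In a real vision transformer the projection matrix is learned and positional embeddings are added before the transformer encoder; this sketch only shows the tokenization itself.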


Subject(s)
Carcinoma, Non-Small-Cell Lung , Lung Neoplasms , Multiple Pulmonary Nodules , Humans , Artificial Intelligence , Prognosis , Lung Neoplasms/diagnostic imaging
2.
Sensors (Basel) ; 22(18)2022 Sep 15.
Article in English | MEDLINE | ID: mdl-36146323

ABSTRACT

Background: Brain traumas, mental disorders, and vocal abuse can result in permanent or temporary speech impairment, significantly diminishing one's quality of life and occasionally resulting in social isolation. Brain-computer interfaces (BCI) can help people who have speech issues or who are paralyzed to communicate with their surroundings via brain signals. Therefore, EEG signal-based BCI has received significant attention in the last two decades for multiple reasons: (i) clinical research has yielded detailed knowledge of EEG signals, (ii) EEG devices have become inexpensive, and (iii) the technology has applications in medical and social fields. Objective: This study explores the existing literature and summarizes EEG data acquisition, feature extraction, and artificial intelligence (AI) techniques for decoding speech from brain signals. Method: We followed the PRISMA-ScR guidelines to conduct this scoping review. We searched six electronic databases: PubMed, IEEE Xplore, the ACM Digital Library, Scopus, arXiv, and Google Scholar. We carefully selected search terms based on the target intervention (i.e., imagined speech and AI) and target data (EEG signals), and some of the search terms were derived from previous reviews. The study selection process was carried out in three phases: study identification, study selection, and data extraction. Two reviewers independently carried out study selection and data extraction. A narrative approach was adopted to synthesize the extracted data. Results: A total of 263 studies were evaluated; however, 34 met the eligibility criteria for inclusion in this review. We found 64-electrode EEG devices to be the most widely used in the included studies. The most common signal normalization and feature extraction techniques in the included studies were bandpass filtering and wavelet-based feature extraction. We categorized the studies based on AI techniques, such as machine learning and deep learning.
The most prominent ML algorithm was the support vector machine, and the most prominent DL algorithm was the convolutional neural network. Conclusions: EEG signal-based BCI is a viable technology that can enable people with severe or temporary voice impairment to communicate with the world directly from their brains. However, the development of BCI technology is still in its infancy.
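The preprocessing step most often reported above, bandpass filtering before feature extraction, can be sketched in a few lines. The sampling rate, test signal, and band edges below are illustrative assumptions, not values from any reviewed study; the filter is a crude FFT mask rather than the Butterworth designs common in EEG work.

```python
import numpy as np

def bandpass_fft(signal, fs, low, high):
    """Zero out FFT bins outside [low, high] Hz -- a crude bandpass filter."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

fs = 256                                    # assumed sampling rate (Hz)
t = np.arange(fs) / fs                      # one second of samples
eeg = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 60 * t)  # 10 Hz rhythm + 60 Hz line noise
clean = bandpass_fft(eeg, fs, 8, 30)        # keep only the 8-30 Hz band
```

After filtering, `clean` retains the 10 Hz component while the 60 Hz interference is removed; wavelet features would then be extracted from this band-limited signal.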


Subject(s)
Brain-Computer Interfaces , Algorithms , Artificial Intelligence , Electroencephalography/methods , Humans , Quality of Life , Speech
3.
J Pers Med ; 13(8)2023 Aug 16.
Article in English | MEDLINE | ID: mdl-37623518

ABSTRACT

Precision medicine has the potential to revolutionize the way cardiovascular diseases are diagnosed, predicted, and treated by tailoring treatment strategies to the individual characteristics of each patient. Artificial intelligence (AI) has recently emerged as a promising tool for improving the accuracy and efficiency of precision cardiovascular medicine. In this scoping review, we aimed to identify and summarize the current state of the literature on the use of AI in precision cardiovascular medicine. A comprehensive search of electronic databases, including Scopus, Google Scholar, and PubMed, was conducted to identify relevant studies. After applying inclusion and exclusion criteria, a total of 28 studies were included in the review. We found that AI is being increasingly applied in various areas of cardiovascular medicine, including the diagnosis and prognosis of cardiovascular diseases, risk prediction and stratification, and treatment planning. Accordingly, most of these studies focused on prediction (50%), followed by diagnosis (21%), phenotyping (14%), and risk stratification (14%). A variety of machine learning models were utilized in these studies, with logistic regression being the most commonly used (36%), followed by random forest (32%), support vector machine (25%), and deep learning models such as neural networks (18%). Other models, such as hierarchical clustering (11%), Cox regression (11%), and natural language processing (4%), were also utilized. The data sources used in these studies included electronic health records (79%), imaging data (43%), and omics data (4%).
The results of the review showed that AI has the potential to improve the performance of cardiovascular disease diagnosis and prognosis, as well as to identify individuals at high risk of developing cardiovascular diseases. However, further research is needed to fully evaluate the clinical utility and effectiveness of AI-based approaches in precision cardiovascular medicine. Overall, our review provided a comprehensive overview of the current state of knowledge in the field of AI-based methods for precision cardiovascular medicine and offered new insights for researchers interested in this research area.
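Since logistic regression was the most commonly used model in the studies above, a minimal sketch of logistic risk prediction may be useful. The cohort, features, and hyperparameters below are synthetic assumptions for illustration, not data from any reviewed study.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Plain gradient-descent logistic regression (no regularization)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted risk in (0, 1)
        w -= lr * (X.T @ (p - y)) / len(y)       # gradient step on weights
        b -= lr * np.mean(p - y)                 # gradient step on intercept
    return w, b

# toy cohort: two standardized features (think age, systolic BP), binary outcome
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=200) > 0).astype(float)

w, b = fit_logistic(X, y)
risk = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print(np.mean((risk > 0.5) == (y == 1)))         # training accuracy
```

In practice such models are fit with regularization and validated on held-out data; this only shows the core mechanics.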

4.
NPJ Digit Med ; 6(1): 197, 2023 Oct 25.
Article in English | MEDLINE | ID: mdl-37880301

ABSTRACT

The increasing prevalence of type 2 diabetes mellitus (T2DM) and its associated health complications highlight the need to develop predictive models for early diagnosis and intervention. While many artificial intelligence (AI) models for T2DM risk prediction have emerged, a comprehensive review of their advancements and challenges is currently lacking. This scoping review maps out the existing literature on AI-based models for T2DM prediction, adhering to the PRISMA extension for Scoping Reviews guidelines. A systematic search of longitudinal studies was conducted across four databases: PubMed, Scopus, IEEE Xplore, and Google Scholar. Forty studies that met our inclusion criteria were reviewed. Classical machine learning (ML) models dominated these studies, with electronic health records (EHR) being the predominant data modality, followed by multi-omics, while medical imaging was the least utilized. Most studies employed unimodal AI models, with only ten adopting multimodal approaches. Both unimodal and multimodal models showed promising results, with the latter being superior. Almost all studies performed internal validation, but only five conducted external validation. Most studies utilized the area under the curve (AUC) as the discrimination measure. Notably, only five studies provided insights into the calibration of their models. Half of the studies used interpretability methods to identify the key risk predictors revealed by their models. Although a minority highlighted novel risk predictors, the majority reported commonly known ones. Our review provides valuable insights into the current state and limitations of AI-based models for T2DM prediction and highlights the challenges associated with their development and clinical integration.
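The AUC discrimination measure that most of the studies above relied on has a simple rank-based definition: the probability that a randomly chosen positive case receives a higher predicted risk than a randomly chosen negative case. A self-contained sketch (toy labels and scores, not data from any reviewed study):

```python
import numpy as np

def auc_score(y_true, y_score):
    """AUC via the Mann-Whitney U statistic: probability that a random
    positive outranks a random negative (ties count half)."""
    y_true = np.asarray(y_true, dtype=float)
    y_score = np.asarray(y_score, dtype=float)
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score), dtype=float)
    ranks[order] = np.arange(1, len(y_score) + 1)
    for s in np.unique(y_score):                 # average ranks for tied scores
        mask = y_score == s
        ranks[mask] = ranks[mask].mean()
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

AUC only measures ranking (discrimination); it says nothing about calibration, which is why the review notes calibration reporting separately.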

5.
Sci Rep ; 12(1): 17981, 2022 10 26.
Article in English | MEDLINE | ID: mdl-36289266

ABSTRACT

Healthcare data are inherently multimodal, including electronic health records (EHR), medical images, and multi-omics data. Combining these multimodal data sources contributes to a better understanding of human health and provides optimal personalized healthcare. The most important question when using multimodal data is how to fuse them, a field of growing interest among researchers. Advances in artificial intelligence (AI) technologies, particularly machine learning (ML), enable the fusion of these different data modalities to provide multimodal insights. To this end, in this scoping review, we focus on synthesizing and analyzing the literature that uses AI techniques to fuse multimodal medical data for different clinical applications. More specifically, we focus on studies that fused EHR with medical imaging data to develop various AI methods for clinical applications. We present a comprehensive analysis of the various fusion strategies, the diseases and clinical outcomes for which multimodal fusion was used, the ML algorithms used to perform multimodal fusion for each clinical application, and the available multimodal medical datasets. We followed the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines. We searched Embase, PubMed, Scopus, and Google Scholar to retrieve relevant studies. After pre-processing and screening, we extracted data from 34 studies that fulfilled the inclusion criteria. We found that the number of studies fusing imaging data with EHR is increasing, having doubled from 2020 to 2021. In our analysis, a typical workflow was observed: feeding raw data, fusing the different data modalities by applying conventional machine learning (ML) or deep learning (DL) algorithms, and finally, evaluating the multimodal fusion through clinical outcome predictions. Specifically, early fusion was the most commonly used technique across applications for multimodal learning (22 out of 34 studies).
We found that multimodal fusion models outperformed traditional single-modality models on the same tasks. From a clinical outcome perspective, disease diagnosis and prediction were the most common applications (reported in 20 and 10 studies, respectively). Neurological disorders were the dominant disease category (16 studies). From an AI perspective, conventional ML models were the most used (19 studies), followed by DL models (16 studies). The multimodal data used in the included studies mostly came from private repositories (21 studies). Through this scoping review, we offer new insights for researchers interested in the current state of knowledge within this research field.
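The "early fusion" strategy that dominated the included studies, concatenating features from each modality before a single model sees them, reduces to something like the following. The feature dimensions and cohort size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_patients = 100

ehr_features = rng.normal(size=(n_patients, 12))    # e.g., labs, vitals, demographics
image_features = rng.normal(size=(n_patients, 64))  # e.g., CNN-extracted image embeddings

# early fusion: concatenate modalities into one feature vector per patient,
# then train any single-input classifier on the fused representation
fused = np.concatenate([ehr_features, image_features], axis=1)
print(fused.shape)  # (100, 76)
```

Late fusion would instead train one model per modality and combine their predictions; early fusion's appeal is that a single downstream model can learn cross-modal interactions.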


Subject(s)
Artificial Intelligence , Electronic Health Records , Humans , Machine Learning , Algorithms , Diagnostic Imaging
6.
Insights Imaging ; 13(1): 98, 2022 Jun 04.
Article in English | MEDLINE | ID: mdl-35662369

ABSTRACT

The performance of artificial intelligence (AI) for brain MRI can improve if enough data are made available. Generative adversarial networks (GANs) have shown great potential for generating synthetic MRI data that capture the distribution of real MRI. GANs are also popular for segmentation, noise removal, and super-resolution of brain MRI images. This scoping review aims to explore how GAN-based methods are being used on brain MRI data, as reported in the literature. The review describes the different applications of GANs for brain MRI, presents the most commonly used GAN architectures, and summarizes the publicly available brain MRI datasets for advancing the research and development of GAN-based approaches. This review followed the PRISMA-ScR guidelines for the study search and selection. The search was conducted on five popular scientific databases. The screening and selection of studies were performed by two independent reviewers, followed by validation by a third reviewer. Finally, the data were synthesized using a narrative approach. This review included 139 studies out of 789 search results. The most common use case of GANs was the synthesis of brain MRI images for data augmentation. GANs were also used to segment brain tumors and to translate healthy images to diseased images, or CT to MRI and vice versa. The included studies showed that GANs can enhance the performance of AI methods used on brain MRI imaging data. However, more effort is needed to translate GAN-based methods into clinical applications.

7.
Stud Health Technol Inform ; 295: 517-520, 2022 Jun 29.
Article in English | MEDLINE | ID: mdl-35773925

ABSTRACT

This study aims to develop models to accurately classify patients with type 2 diabetes using the Practice Fusion dataset. We use a Random Forest (RF), a Support Vector Classifier (SVC), an AdaBoost classifier, an ensemble model, and an automated machine learning (AutoML) model. We compare the performance of all models in a five-fold cross-validation scheme using four evaluation measures. Experimental results demonstrate that the AutoML model outperformed the individual and ensemble models in all evaluation measures.
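The five-fold cross-validation scheme described above can be sketched as follows. The dataset and the stand-in classifier are synthetic, not the Practice Fusion data or any of the models compared in the study.

```python
import numpy as np

def five_fold_indices(n, seed=0):
    """Shuffle indices and yield (train, test) index pairs for 5 folds."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, 5)
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train, test

# toy data: the label is simply the sign of the first feature
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = (X[:, 0] > 0).astype(int)

accuracies = []
for train, test in five_fold_indices(len(X)):
    # an oracle rule stands in here for a classifier fit on X[train], y[train]
    pred = (X[test, 0] > 0).astype(int)
    accuracies.append(np.mean(pred == y[test]))
print(np.mean(accuracies))  # 1.0 for this separable toy problem
```

Each of the 50 samples appears in exactly one test fold, so the averaged accuracy estimates generalization performance without reusing test data for training.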


Subject(s)
Diabetes Mellitus, Type 2 , Diabetes Mellitus, Type 2/diagnosis , Humans , Machine Learning , Support Vector Machine