Results 1 - 4 of 4
1.
J Med Syst; 46(10): 62, 2022 Aug 21.
Article in English | MEDLINE | ID: mdl-35988110

ABSTRACT

Variations in COVID-19 lesions, such as ground-glass opacities (GGO), consolidations, and crazy paving, can compromise the ability of solo deep learning (SDL) or hybrid deep learning (HDL) artificial intelligence (AI) models to predict automated COVID-19 lung segmentation in computed tomography (CT) on unseen data, leading to poor clinical performance. As the first study of its kind, "COVLIAS 1.0-Unseen" tests two hypotheses: (i) contrast adjustment is vital for AI, and (ii) HDL is superior to SDL. In a multicenter study, 10,000 CT slices were collected from 72 Italian (ITA) patients with low GGO and 80 Croatian (CRO) patients with high GGO. Hounsfield units (HU) were automatically adjusted to train the AI models and predict on test data, leading to four combinations: two unseen sets, (i) train-CRO:test-ITA and (ii) train-ITA:test-CRO; and two seen sets, (iii) train-CRO:test-CRO and (iv) train-ITA:test-ITA. COVLIAS used three SDL models (PSPNet, SegNet, UNet) and six HDL models (VGG-PSPNet, VGG-SegNet, VGG-UNet, ResNet-PSPNet, ResNet-SegNet, and ResNet-UNet). Two trained, blinded senior radiologists provided ground-truth annotations. Five types of performance metrics were used to validate COVLIAS 1.0-Unseen, which was further benchmarked against MedSeg, an open-source web-based system. After HU adjustment, for DS and JI, HDL (unseen AI) exceeded SDL (unseen AI) by 4% and 5%, respectively; for CC, HDL (unseen AI) exceeded SDL (unseen AI) by 6%. The COVLIAS-MedSeg difference was < 5%, meeting regulatory guidelines. Unseen AI was successfully demonstrated using automated HU adjustment, and HDL was found to be superior to SDL.
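The automated HU adjustment at the heart of hypothesis (i) can be illustrated with CT intensity windowing, a standard preprocessing step. This is a minimal sketch, not the paper's actual algorithm; the window centre and width below are typical lung-window values chosen for illustration.

```python
def window_hu(hu, center=-600.0, width=1500.0):
    """Map a Hounsfield-unit value into [0, 1] using a display window.

    Values below the window floor clamp to 0, values above the ceiling
    clamp to 1; everything in between is scaled linearly.
    """
    low = center - width / 2.0   # window floor, e.g. -1350 HU
    value = (hu - low) / width   # linear rescale over the window
    return max(0.0, min(1.0, value))

# A row of CT voxels can then be normalized before it reaches the model:
row = [-1000, -600, 150, 400]            # air, lung, soft tissue, bone (HU)
normalized = [window_hu(v) for v in row]
```

Adjusting the window per cohort is one simple way to compensate for the contrast differences between scanners that the study identifies as a failure mode on unseen data.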


Subjects
COVID-19; Deep Learning; Artificial Intelligence; COVID-19/diagnostic imaging; Humans; Lung/diagnostic imaging; Tomography, X-Ray Computed/methods
2.
Diagnostics (Basel); 12(5), 2022 May 14.
Article in English | MEDLINE | ID: mdl-35626389

ABSTRACT

Diabetes is one of the main causes of the rising cases of blindness in adults. This microvascular complication of diabetes is termed diabetic retinopathy (DR) and is associated with an increasing risk of cardiovascular events in diabetes patients. DR, in its various forms, is a powerful indicator of atherosclerosis. Further, the macrovascular complication of diabetes leads to coronary artery disease (CAD). Thus, timely identification of cardiovascular disease (CVD) complications in DR patients is of utmost importance. Since CAD risk assessment is expensive for low-income countries, it is important to look for surrogate biomarkers for CVD risk stratification in DR patients. Due to the common genetic makeup of the coronary and carotid arteries, low-cost, high-resolution imaging such as carotid B-mode ultrasound (US) can be used for arterial tissue characterization and risk stratification in DR patients. The advent of artificial intelligence (AI) techniques has facilitated the handling of large cohorts in a big-data framework to identify atherosclerotic plaque features in arterial ultrasound. This enables timely CVD risk assessment and risk stratification of patients with DR. Thus, this review focuses on understanding the pathophysiology of DR, retinal and CAD imaging, the role of surrogate markers for CVD, and finally, the CVD risk stratification of DR patients. The review shows, as a step-by-step cyclic activity, how diabetes and atherosclerotic disease cause DR, leading to the worsening of CVD. We propose a solution for how AI can help in the identification of CVD risk. Lastly, we analyze the role of DR/CVD in the COVID-19 framework.

3.
Comput Biol Med; 146: 105571, 2022 07.
Article in English | MEDLINE | ID: mdl-35751196

ABSTRACT

BACKGROUND: COVLIAS 1.0, an automated lung segmentation system, was designed for COVID-19 diagnosis but has issues related to storage space and speed. This study shows that COVLIAS 2.0 uses pruned AI (PAI) networks to improve both storage and speed while maintaining high performance on lung segmentation and lesion localization. METHODOLOGY: The proposed study uses ~9,000 multicenter CT slices from two different nations, namely CroMed from Croatia (80 patients, experimental data) and NovMed from Italy (72 patients, validation data). We hypothesize that pruning and evolutionary optimization algorithms can significantly reduce the size of the AI models while ensuring optimal performance. Eight pruned models were designed by combining four optimization techniques, (i) differential evolution (DE), (ii) genetic algorithm (GA), (iii) particle swarm optimization (PSO), and (iv) whale optimization (WO), with two deep learning frameworks, (i) a fully connected network (FCN) and (ii) SegNet. COVLIAS 2.0 was validated using the unseen NovMed data and benchmarked against MedSeg. Statistical tests for stability and reliability were also conducted. RESULTS: The pruned models (i) FCN-DE, (ii) FCN-GA, (iii) FCN-PSO, and (iv) FCN-WO reduced storage by 92.4%, 95.3%, 98.7%, and 99.8%, respectively, compared against solo FCN, and (v) SegNet-DE, (vi) SegNet-GA, (vii) SegNet-PSO, and (viii) SegNet-WO reduced storage by 97.1%, 97.9%, 98.8%, and 99.2%, respectively, compared against solo SegNet. AUC was > 0.94 (p < 0.0001) on CroMed and > 0.86 (p < 0.0001) on the NovMed data set for all eight EA models. PAI inference took < 0.25 s per image. DenseNet-121-based Grad-CAM heatmaps showed validation on ground-glass opacity lesions. CONCLUSIONS: The eight successfully validated PAI networks are five times faster and storage efficient, and could be used in clinical settings.
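COVLIAS 2.0 prunes with evolutionary algorithms (DE, GA, PSO, WO); as a much simpler stand-in, the sketch below uses magnitude pruning to illustrate the underlying storage-saving idea: weights zeroed out need not be stored when the model is kept in a sparse format. All names and numbers here are illustrative, not taken from the paper.

```python
def magnitude_prune(weights, keep_fraction):
    """Zero out all but the largest-magnitude weights.

    A sparse format only stores the surviving (index, value) pairs,
    which is where the storage saving comes from.
    """
    k = max(1, int(len(weights) * keep_fraction))
    # Threshold = magnitude of the k-th largest weight.
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

weights = [0.9, -0.02, 0.4, 0.001, -0.7, 0.05, -0.3, 0.008]
pruned = magnitude_prune(weights, keep_fraction=0.25)
saving = pruned.count(0.0) / len(pruned)  # fraction of weights removed
```

An evolutionary pruner differs in how it searches for *which* weights or filters to drop (by evolving candidate masks against a fitness score), but the storage arithmetic is the same.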


Subjects
COVID-19; Deep Learning; COVID-19/diagnostic imaging; COVID-19 Testing; Humans; Image Processing, Computer-Assisted/methods; Lung/diagnostic imaging; Neural Networks, Computer; Reproducibility of Results; Tomography, X-Ray Computed/methods
4.
Comput Biol Med; 122: 103804, 2020 07.
Article in English | MEDLINE | ID: mdl-32658726

ABSTRACT

MOTIVATION: Brain or central nervous system cancer is the tenth leading cause of death in men and women. Even though brain tumour is not considered the primary cause of mortality worldwide, 40% of other types of cancer (such as lung or breast cancer) transform into brain tumours due to metastasis. Although biopsy is considered the gold standard for cancer diagnosis, it poses several challenges, such as low sensitivity/specificity, risk during the biopsy procedure, and relatively long waiting times for the biopsy results. Due to an increase in the sheer volume of patients with brain tumours, there is a need for a non-invasive, automated computer-aided diagnosis tool that can diagnose and estimate the grade of a tumour accurately within a few seconds. METHOD: Five clinically relevant multiclass datasets (two-, three-, four-, five-, and six-class) were designed. A transfer-learning-based artificial intelligence paradigm using a convolutional neural network (CNN) was proposed and led to higher performance in brain tumour grading/classification using magnetic resonance imaging (MRI) data. We benchmarked the transfer-learning-based CNN model against six machine learning (ML) classification methods, namely decision tree, linear discriminant, naive Bayes, support vector machine, K-nearest neighbour, and ensemble. RESULTS: The CNN-based deep learning (DL) model outperformed the six ML models on all five multiclass tumour datasets (two-, three-, four-, five-, and six-class). The CNN-based AlexNet transfer-learning system yielded mean accuracies, derived from three cross-validation protocols (K2, K5, and K10), of 100, 95.97, 96.65, 87.14, and 93.74%, respectively. The mean areas under the curve of DL and ML were 0.99 and 0.87, respectively (p < 0.0001), with DL showing a 12.12% improvement over ML. Multiclass datasets were also benchmarked against the TT protocol (where training and testing samples are the same). The optimal model was validated using a statistical tumour separation index and verified on synthetic data consisting of eight classes. CONCLUSION: The transfer-learning-based AI system is useful in multiclass brain tumour grading and shows better performance than ML systems.
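The K2, K5, and K10 protocols are 2-, 5-, and 10-fold cross-validation. A minimal sketch of how such folds can be generated follows; it shows generic k-fold index splitting, not the paper's exact partitioning, and the toy cohort size is an assumption for illustration.

```python
def kfold_indices(n_samples, k):
    """Split sample indices 0..n_samples-1 into k (train, test) pairs.

    Each index appears in exactly one test fold; fold sizes differ by
    at most one when k does not divide n_samples evenly.
    """
    indices = list(range(n_samples))
    fold_size, remainder = divmod(n_samples, k)
    folds, start = [], 0
    for fold in range(k):
        # The first `remainder` folds absorb one extra sample each.
        stop = start + fold_size + (1 if fold < remainder else 0)
        test = indices[start:stop]
        train = indices[:start] + indices[stop:]
        folds.append((train, test))
        start = stop
    return folds

# K5 protocol on a toy cohort of 10 scans:
splits = kfold_indices(10, k=5)
```

The reported mean accuracy for each dataset is then the model's accuracy averaged over the k held-out test folds.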


Subjects
Brain Neoplasms; Deep Learning; Artificial Intelligence; Bayes Theorem; Brain Neoplasms/diagnostic imaging; Female; Humans; Magnetic Resonance Imaging; Male