Results 1 - 20 of 389
1.
J Clin Ultrasound ; 52(2): 131-143, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37983736

ABSTRACT

PURPOSE: The quality of ultrasound images is degraded by speckle and Gaussian noise. This study aims to develop a deep-learning (DL)-based filter for ultrasound image denoising. METHODS: A novel DL-based filter using adaptive residual (AdaRes) learning was proposed. Five image quality metrics (IQMs) and 27 radiomics features were used to evaluate the denoising results. The effect of our proposed filter, AdaRes, on four pre-trained convolutional neural network (CNN) classification models and three radiologists was assessed. RESULTS: The AdaRes filter was tested on both natural and ultrasound image databases. The IQM results indicate that AdaRes removed noise at three different noise levels with the highest performance. In addition, a radiomics study showed that AdaRes did not distort tissue textures and preserved most radiomics features. AdaRes also improved the classification performance of CNNs in different settings. Finally, AdaRes improved the mean overall performance (AUC) of three radiologists from 0.494 to 0.702 in the classification of benign and malignant lesions. CONCLUSIONS: AdaRes filters noise from ultrasound images effectively and can be used as an auxiliary preprocessing step in computer-aided diagnosis systems. Radiologists may use it to remove unwanted noise and improve ultrasound image quality before interpretation.
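
The abstract does not specify the AdaRes architecture, so the following is only a minimal sketch of residual-learning denoising in PyTorch (DnCNN-style); the layer count, channel width, loss, and toy data are assumptions, not the authors' design.

```python
# Illustrative residual denoiser: the network predicts the noise component,
# which is subtracted from the noisy input (residual learning).
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    def __init__(self, channels=1, features=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        return noisy - self.body(noisy)   # subtract predicted noise

model = ResidualDenoiser()
noisy = torch.rand(4, 1, 128, 128)                 # toy batch of noisy patches
clean_estimate = model(noisy)
loss = nn.MSELoss()(clean_estimate, torch.rand_like(noisy))  # placeholder target
```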


Subjects
Artificial Intelligence , Deep Learning , Humans , Radiomics , Signal-To-Noise Ratio , Ultrasonography
2.
J Ultrasound Med ; 42(6): 1211-1221, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36437513

ABSTRACT

OBJECTIVES: Deep learning algorithms have shown potential in streamlining difficult clinical decisions. In the present study, we report the diagnostic profile of a deep learning model in differentiating malignant and benign lymph nodes in patients with papillary thyroid cancer. METHODS: An in-house deep learning-based model called "ClymphNet" was developed and tested using two datasets containing ultrasound images of 195 malignant and 178 benign lymph nodes. An expert radiologist also reviewed these ultrasound images and extracted the qualitative imaging features used in routine clinical practice. These features were used to train three different machine learning algorithms. The deep learning model was then compared with the machine learning models on internal and external validation datasets containing 22 and 82 malignant and 20 and 76 benign lymph nodes, respectively. RESULTS: Among the three machine learning algorithms, the support vector machine (SVM) model performed best, reaching a sensitivity of 91.35%, a specificity of 88.54%, an accuracy of 90.00%, and an area under the curve (AUC) of 0.925 across all cohorts. ClymphNet performed better than the SVM model in internal and external validation, achieving a sensitivity of 93.27%, a specificity of 92.71%, an accuracy of 93.00%, and an AUC of 0.948 across all cohorts. CONCLUSION: A deep learning model trained with ultrasound images outperformed three conventional machine learning algorithms fed with qualitative imaging features interpreted by radiologists. Our study provides evidence regarding the utility of ClymphNet in the early and accurate differentiation of benign and malignant lymphadenopathy.
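
As a rough illustration of the machine-learning baseline (an SVM on radiologist-extracted qualitative features), here is a hedged scikit-learn sketch; the feature names, split sizes, and random data are placeholders, not the study's data.

```python
# SVM baseline on tabular lymph-node features with AUC/sensitivity/specificity.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
X = rng.random((373, 6))        # e.g. shape, margin, hilum, echogenicity (hypothetical)
y = rng.integers(0, 2, 373)     # 1 = malignant, 0 = benign (placeholder labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X[:269], y[:269])                              # training split
probs = clf.predict_proba(X[269:])[:, 1]               # held-out validation split
tn, fp, fn, tp = confusion_matrix(y[269:], (probs > 0.5).astype(int)).ravel()
print("AUC", roc_auc_score(y[269:], probs),
      "sensitivity", tp / (tp + fn), "specificity", tn / (tn + fp))
```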


Subjects
Deep Learning , Thyroid Neoplasms , Humans , Thyroid Cancer, Papillary/diagnostic imaging , Thyroid Cancer, Papillary/pathology , Sensitivity and Specificity , Semantics , Lymph Nodes/diagnostic imaging , Lymph Nodes/pathology , Thyroid Neoplasms/pathology , Retrospective Studies
3.
J Ultrasound Med ; 42(10): 2257-2268, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37159483

ABSTRACT

OBJECTIVES: Ultrasound is widely used in diagnosing carpal tunnel syndrome (CTS). However, the limitations of ultrasound in CTS detection are the lack of objective measures of nerve abnormality and the operator-dependent nature of ultrasound imaging. Therefore, in this study, we developed and externally validated artificial intelligence (AI) models based on deep-radiomics features. METHODS: We used 416 median nerves from 2 countries (Iran and Colombia) for the development (112 entrapped and 112 normal nerves from Iran) and validation (26 entrapped and 26 normal nerves from Iran, and 70 entrapped and 70 normal nerves from Colombia) of our models. Ultrasound images were fed to the SqueezeNet architecture to extract deep-radiomics features. A ReliefF method was then used to select the clinically significant features. The selected deep-radiomics features were fed to 9 common machine-learning algorithms to choose the best-performing classifier. The 2 best-performing AI models were then externally validated. RESULTS: Our developed model achieved an area under the receiver operating characteristic (ROC) curve (AUC) of 0.910 (88.46% sensitivity, 88.46% specificity) and 0.908 (84.62% sensitivity, 88.46% specificity) with the support vector machine (SVM) and stochastic gradient descent (SGD) classifiers, respectively, using the internal validation dataset. Furthermore, both models performed consistently well on the external validation dataset, achieving an AUC of 0.890 (85.71% sensitivity, 82.86% specificity) and 0.890 (84.29% sensitivity, 82.86% specificity) with the SVM and SGD models, respectively. CONCLUSION: Our proposed AI models fed with deep-radiomics features performed consistently on the internal and external datasets. This supports the potential of the proposed system for clinical use in hospitals and polyclinics.
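
A minimal sketch of the deep-radiomics idea follows: pretrained SqueezeNet features, a feature-selection step, and an SVM. The paper uses ReliefF; because ReliefF is not part of scikit-learn, this sketch substitutes mutual-information ranking, and the image tensors and labels are placeholders.

```python
# Deep-radiomics sketch: SqueezeNet backbone -> 512-d pooled features ->
# feature selection (stand-in for ReliefF) -> SVM classifier.
import torch
import torchvision.models as models
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC

backbone = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)
backbone.eval()
extractor = torch.nn.Sequential(backbone.features,
                                torch.nn.AdaptiveAvgPool2d(1),
                                torch.nn.Flatten())        # 512-d deep features

images = torch.rand(60, 3, 224, 224)    # placeholder ultrasound images
labels = torch.randint(0, 2, (60,))     # 1 = entrapped nerve, 0 = normal
with torch.no_grad():
    feats = extractor(images).numpy()

selector = SelectKBest(mutual_info_classif, k=50)          # stand-in for ReliefF
X = selector.fit_transform(feats, labels.numpy())
clf = SVC(kernel="rbf").fit(X, labels.numpy())
```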


Subjects
Carpal Tunnel Syndrome , Humans , Carpal Tunnel Syndrome/diagnostic imaging , Median Nerve/diagnostic imaging , Artificial Intelligence , Ultrasonography/methods , ROC Curve
4.
Sensors (Basel) ; 23(16)2023 Aug 08.
Article in English | MEDLINE | ID: mdl-37631569

ABSTRACT

Anxiety, learning disabilities, and depression are the symptoms of attention deficit hyperactivity disorder (ADHD), an isogenous pattern of hyperactivity, impulsivity, and inattention. For the early diagnosis of ADHD, electroencephalogram (EEG) signals are widely used. However, direct analysis of an EEG is highly challenging as it is time-consuming and the signal is nonlinear and nonstationary in nature. Thus, in this paper, a novel approach (LSGP-USFNet) is developed based on the patterns obtained from Ulam's spiral and Sophie Germain primes. The EEG signals are initially filtered to remove the noise and segmented with a non-overlapping sliding window of length 512 samples. Then, a time-frequency analysis approach, namely continuous wavelet transform, is applied to each channel of the segmented EEG signal to interpret it in the time and frequency domain. The obtained time-frequency representation is saved as a time-frequency image, and a non-overlapping n × n sliding window is applied to this image for patch extraction. An n × n Ulam's spiral is localized on each patch, and the gray levels are acquired from this patch as features where Sophie Germain primes are located in Ulam's spiral. All gray tones from all patches are concatenated to construct the features for the ADHD and normal classes. A gray tone selection algorithm, namely ReliefF, is employed on the representative features to acquire the final, most important gray tones. The support vector machine classifier is used with 10-fold cross-validation. Our proposed approach, LSGP-USFNet, was developed using a publicly available dataset and obtained an accuracy of 97.46% in detecting ADHD automatically. Our model is ready to be validated on a larger database and could also be used to detect other childhood neurological disorders.
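
A hedged sketch of the core feature-generation step: build an n × n Ulam spiral, mark the cells holding Sophie Germain primes (p prime and 2p + 1 prime), and read the gray levels of a time-frequency image patch at those positions. The window size and the random patch below are placeholders, not the paper's settings.

```python
import numpy as np
from sympy import isprime

def ulam_spiral(n):
    # Place 1, 2, 3, ... on an outward square spiral starting at the center.
    grid = np.zeros((n, n), dtype=int)
    x = y = n // 2
    num = 1
    grid[y, x] = num
    step, d = 1, 0
    moves = [(1, 0), (0, -1), (-1, 0), (0, 1)]     # right, up, left, down
    while num < n * n:
        for _ in range(2):                          # each step length is used twice
            dx, dy = moves[d % 4]
            for _ in range(step):
                x, y, num = x + dx, y + dy, num + 1
                if 0 <= x < n and 0 <= y < n:
                    grid[y, x] = num
            d += 1
        step += 1
    return grid

n = 16
spiral = ulam_spiral(n)
sg_mask = np.vectorize(lambda v: isprime(int(v)) and isprime(2 * int(v) + 1))(spiral)

patch = np.random.randint(0, 256, (n, n))           # placeholder time-frequency patch
features = patch[sg_mask]                            # gray tones at SG-prime cells
print(features.shape)
```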


Subjects
Attention Deficit Disorder with Hyperactivity , Child , Humans , Attention Deficit Disorder with Hyperactivity/diagnosis , Electroencephalography , Algorithms , Anxiety , Anxiety Disorders , Niacinamide
5.
Sensors (Basel) ; 23(14)2023 Jul 21.
Article in English | MEDLINE | ID: mdl-37514877

ABSTRACT

Screening programs for early lung cancer diagnosis are uncommon, primarily due to the challenge of reaching at-risk patients located in rural areas far from medical facilities. To overcome this obstacle, a comprehensive approach is needed that combines mobility, low cost, speed, accuracy, and privacy. One potential solution lies in combining the chest X-ray imaging mode with federated deep learning, ensuring that no single data source can bias the model adversely. This study presents a pre-processing pipeline designed to debias chest X-ray images, thereby enhancing internal classification and external generalization. The pipeline employs a pruning mechanism to train a deep learning model for nodule detection, utilizing the most informative images from a publicly available lung nodule X-ray dataset. Histogram equalization is used to remove systematic differences in image brightness and contrast. Model training is then performed using combinations of lung field segmentation, close cropping, and rib/bone suppression. The resulting deep learning models, generated through this pre-processing pipeline, demonstrate successful generalization on an independent lung nodule dataset. By eliminating confounding variables in chest X-ray images and suppressing signal noise from the bone structures, the proposed deep learning lung nodule detection algorithm achieves an external generalization accuracy of 89%. This approach paves the way for the development of a low-cost and accessible deep learning-based clinical system for lung cancer screening.
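
As a small illustration of the brightness/contrast normalization step described above, here is a histogram-equalization sketch with scikit-image; the file path is hypothetical and CLAHE is shown only as a common alternative, not necessarily the paper's choice.

```python
# Histogram equalization of a chest X-ray before segmentation/cropping.
import numpy as np
from skimage import io, exposure

xray = io.imread("chest_xray.png", as_gray=True)      # hypothetical input file
equalized = exposure.equalize_hist(xray)              # global histogram equalization
clahe = exposure.equalize_adapthist(xray, clip_limit=0.02)  # local-contrast variant
io.imsave("chest_xray_eq.png", (equalized * 255).astype(np.uint8))
```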


Subjects
Deep Learning , Lung Neoplasms , Humans , Neural Networks, Computer , X-Rays , Early Detection of Cancer , Lung Neoplasms/diagnostic imaging , Lung
6.
Sensors (Basel) ; 23(3)2023 Jan 28.
Article in English | MEDLINE | ID: mdl-36772503

ABSTRACT

Continuous advancements of technologies such as machine-to-machine interactions and big data analysis have led to the internet of things (IoT) making information sharing and smart decision-making possible using everyday devices. On the other hand, swarm intelligence (SI) algorithms seek to establish constructive interaction among agents regardless of their intelligence level. In SI algorithms, multiple individuals run simultaneously and possibly in a cooperative manner to address complex nonlinear problems. In this paper, the application of SI algorithms in IoT is investigated with a special focus on the internet of medical things (IoMT). The role of wearable devices in IoMT is briefly reviewed. Existing works on applications of SI in addressing IoMT problems are discussed. Possible problems include disease prediction, data encryption, missing values prediction, resource allocation, network routing, and hardware failure management. Finally, research perspectives and future trends are outlined.


Subjects
Internet of Things , Wearable Electronic Devices , Humans , Algorithms , Cognition , Intelligence , Internet
7.
J Digit Imaging ; 36(3): 879-892, 2023 06.
Article in English | MEDLINE | ID: mdl-36658376

ABSTRACT

Incidental adrenal masses are seen in 5% of abdominal computed tomography (CT) examinations. Accurate discrimination of the possible differential diagnoses has important therapeutic and prognostic significance. A new handcrafted machine learning method has been developed for the automated and accurate classification of adrenal gland CT images. A new dataset comprising 759 adrenal gland CT image slices from 96 subjects was analyzed. Experts labeled the collected images into four classes: normal, pheochromocytoma, lipid-poor adenoma, and metastasis. The images were preprocessed and resized, and image features were extracted using the center-symmetric local binary pattern (CS-LBP) method. CT images were next divided into 16 × 16 fixed-size patches, and further feature extraction using CS-LBP was performed on these patches. Next, the extracted features were selected using neighborhood component analysis (NCA) to obtain the most meaningful ones for downstream classification. Finally, the selected features were classified using k-nearest neighbor (kNN), support vector machine (SVM), and neural network (NN) classifiers to obtain the optimum performing model. Our proposed method obtained an accuracy of 99.87%, 99.21%, and 98.81% with the kNN, SVM, and NN classifiers, respectively. Hence, the kNN classifier yielded the highest classification results, with no pathological image misclassified as normal. Our developed fixed-patch CS-LBP-based automatic classification of adrenal gland pathologies on CT images is highly accurate and has low time complexity [Formula: see text]. It has the potential to be used for screening of adrenal gland disease classes with CT images.
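
A minimal sketch of center-symmetric LBP over 16 × 16 patches followed by a kNN classifier is given below; the NCA selection step is omitted for brevity, and the threshold, image sizes, and random data are assumptions.

```python
# CS-LBP compares the four center-symmetric neighbor pairs of each interior pixel,
# yielding a 4-bit code (0..15); a 16-bin histogram is built per patch.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def cs_lbp(img, T=0.0):
    n = [img[1:-1, 2:], img[:-2, 2:], img[:-2, 1:-1], img[:-2, :-2],
         img[1:-1, :-2], img[2:, :-2], img[2:, 1:-1], img[2:, 2:]]
    code = np.zeros(img[1:-1, 1:-1].shape, dtype=np.uint8)
    for i in range(4):
        code |= ((n[i].astype(float) - n[i + 4] > T).astype(np.uint8) << i)
    return code

def patch_features(image, patch=16):
    feats = []
    for r in range(0, image.shape[0] - patch + 1, patch):
        for c in range(0, image.shape[1] - patch + 1, patch):
            codes = cs_lbp(image[r:r + patch, c:c + patch])
            feats.append(np.bincount(codes.ravel().astype(int), minlength=16))
    return np.concatenate(feats)

images = np.random.randint(0, 256, (40, 128, 128))    # placeholder CT slices
labels = np.random.randint(0, 4, 40)                  # 4 adrenal classes
X = np.array([patch_features(im) for im in images])
knn = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
```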


Subjects
Adenoma , Adrenal Gland Diseases , Humans , Tomography, X-Ray Computed/methods , Neural Networks, Computer , Machine Learning
8.
J Digit Imaging ; 36(3): 973-987, 2023 06.
Article in English | MEDLINE | ID: mdl-36797543

ABSTRACT

Modern computer vision algorithms are based on convolutional neural networks (CNNs), and both end-to-end learning and transfer learning modes have been used with CNN for image classification. Thus, automated brain tumor classification models have been proposed by deploying CNNs to help medical professionals. Our primary objective is to increase the classification performance using CNN. Therefore, a patch-based deep feature engineering model has been proposed in this work. Nowadays, patch division techniques have been used to attain high classification performance, and variable-sized patches have achieved good results. In this work, we have used three types of patches of different sizes (32 × 32, 56 × 56, 112 × 112). Six feature vectors have been obtained using these patches and two layers of the pretrained ResNet50 (global average pooling and fully connected layers). In the feature selection phase, three selectors-neighborhood component analysis (NCA), Chi2, and ReliefF-have been used, and 18 final feature vectors have been obtained. By deploying k nearest neighbors (kNN), 18 results have been calculated. Iterative hard majority voting (IHMV) has been applied to compute the general classification accuracy of this framework. This model uses different patches, feature extractors (two layers of the ResNet50 have been utilized as feature extractors), and selectors, making this a framework that we have named PatchResNet. A public brain image dataset containing four classes (glioblastoma multiforme (GBM), meningioma, pituitary tumor, healthy) has been used to develop the proposed PatchResNet model. Our proposed PatchResNet attained 98.10% classification accuracy using the public brain tumor image dataset. The developed PatchResNet model obtained high classification accuracy and has the advantage of being a self-organized framework. Therefore, the proposed method can choose the best result validation prediction vectors and achieve high image classification performance.
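
As a hedged sketch of the feature-extraction step, the snippet below pulls both the global average pooling output (2048-d) and the fully connected output (1000-d) of a pretrained ResNet50 for a batch of patches; patch generation, the three selectors, and IHMV are omitted, and the input tensor is a placeholder.

```python
# Extract two ResNet50 layers per patch with torchvision's feature extractor.
import torch
from torchvision.models import resnet50, ResNet50_Weights
from torchvision.models.feature_extraction import create_feature_extractor

model = resnet50(weights=ResNet50_Weights.DEFAULT).eval()
extractor = create_feature_extractor(model, return_nodes={"avgpool": "gap", "fc": "fc"})

patches = torch.rand(8, 3, 224, 224)     # placeholder patches resized to 224 x 224
with torch.no_grad():
    out = extractor(patches)
gap_feats = out["gap"].flatten(1)        # shape (8, 2048)
fc_feats = out["fc"]                     # shape (8, 1000)
print(gap_feats.shape, fc_feats.shape)
```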


Subjects
Brain Neoplasms , Neural Networks, Computer , Humans , Algorithms , Brain Neoplasms/diagnostic imaging , Magnetic Resonance Imaging , Brain
9.
J Digit Imaging ; 36(6): 2441-2460, 2023 12.
Article in English | MEDLINE | ID: mdl-37537514

ABSTRACT

Detecting neurological abnormalities such as brain tumors and Alzheimer's disease (AD) using magnetic resonance imaging (MRI) images is an important research topic in the literature. Numerous machine learning models have been used to detect brain abnormalities accurately. This study addresses the problem of detecting neurological abnormalities in MRI. The motivation behind this problem lies in the need for accurate and efficient methods to assist neurologists in the diagnosis of these disorders. In addition, many deep learning techniques have been applied to MRI to develop accurate brain abnormality detection models, but these networks have high time complexity. Hence, a novel hand-modeled feature-based learning network is presented to reduce the time complexity and obtain high classification performance. The model proposed in this work uses a new feature generation architecture named pyramid and fixed-size patch (PFP). The main aim of the proposed PFP structure is to attain high classification performance using essential feature extractors with both multilevel and local features. Furthermore, the PFP feature extractor generates low- and high-level features using a handcrafted extractor. To obtain the high discriminative feature extraction ability of the PFP, we have used the histogram of oriented gradients (HOG); hence, it is named PFP-HOG. Furthermore, the iterative Chi2 (IChi2) is utilized to choose the clinically significant features. Finally, the k-nearest neighbors (kNN) with tenfold cross-validation is used for automated classification. Four MRI neurological databases (AD dataset, brain tumor dataset 1, brain tumor dataset 2, and merged dataset) have been utilized to develop our model. The PFP-HOG and IChi2-based models attained 100%, 94.98%, 98.19%, and 97.80% accuracy using the AD dataset, brain tumor dataset 1, brain tumor dataset 2, and merged brain MRI dataset, respectively. These findings not only provide an accurate and robust classification of various neurological disorders using MRI but also hold the potential to assist neurologists in validating manual MRI brain abnormality screening.
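
A minimal sketch of the pyramid-and-fixed-size-patch (PFP) idea with HOG features follows: HOG descriptors are extracted from fixed-size patches at several pyramid levels and concatenated. The patch and level sizes are assumptions chosen for illustration, not the paper's exact settings.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def pfp_hog(image, patch=32, levels=(1.0, 0.5, 0.25)):
    feats = []
    for scale in levels:                               # pyramid of downscaled images
        h, w = int(image.shape[0] * scale), int(image.shape[1] * scale)
        level = resize(image, (h, w), anti_aliasing=True)
        for r in range(0, h - patch + 1, patch):       # non-overlapping local patches
            for c in range(0, w - patch + 1, patch):
                feats.append(hog(level[r:r + patch, c:c + patch],
                                 pixels_per_cell=(8, 8), cells_per_block=(2, 2)))
    return np.concatenate(feats)

mri = np.random.rand(128, 128)                          # placeholder MRI slice
print(pfp_hog(mri).shape)
```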


Subjects
Alzheimer Disease , Brain Neoplasms , Humans , Magnetic Resonance Imaging/methods , Neuroimaging , Brain/diagnostic imaging , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/pathology , Machine Learning , Alzheimer Disease/diagnostic imaging
10.
J Digit Imaging ; 36(4): 1675-1686, 2023 08.
Article in English | MEDLINE | ID: mdl-37131063

ABSTRACT

Microscopic examination of urinary sediments is a common laboratory procedure. Automated image-based classification of urinary sediments can reduce analysis time and costs. Inspired by cryptographic mixing protocols and computer vision, we developed an image classification model that combines a novel Arnold Cat Map (ACM)- and fixed-size patch-based mixer algorithm with transfer learning for deep feature extraction. Our study dataset comprised 6,687 urinary sediment images belonging to seven classes: Cast, Crystal, Epithelia, Epithelial nuclei, Erythrocyte, Leukocyte, and Mycete. The developed model consists of four layers: (1) an ACM-based mixer to generate mixed images from resized 224 × 224 input images using fixed-size 16 × 16 patches; (2) DenseNet201 pre-trained on ImageNet1K to extract 1,920 features from each raw input image, and its six corresponding mixed images were concatenated to form a final feature vector of length 13,440; (3) iterative neighborhood component analysis to select the most discriminative feature vector of optimal length 342, determined using a k-nearest neighbor (kNN)-based loss function calculator; and (4) shallow kNN-based classification with ten-fold cross-validation. Our model achieved 98.52% overall accuracy for seven-class classification, outperforming published models for urinary cell and sediment analysis. We demonstrated the feasibility and accuracy of deep feature engineering using an ACM-based mixer algorithm for image preprocessing combined with pre-trained DenseNet201 for feature extraction. The classification model was both demonstrably accurate and computationally lightweight, making it ready for implementation in real-world image-based urine sediment analysis applications.
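
The snippet below sketches the Arnold Cat Map (ACM) transform itself on a square image. The paper's mixer works patch-wise on 16 × 16 blocks and produces six mixed variants feeding DenseNet201; applying the plain ACM to the whole 224 × 224 image is a simplification for illustration.

```python
import numpy as np

def arnold_cat_map(img, iterations=1):
    n = img.shape[0]                       # image must be square (n x n)
    xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img.copy()
    for _ in range(iterations):
        mixed = np.empty_like(out)
        # classic ACM: (x, y) -> (x + y, x + 2y) mod n, a bijective pixel shuffle
        mixed[(xs + ys) % n, (xs + 2 * ys) % n] = out[xs, ys]
        out = mixed
    return out

image = np.random.randint(0, 256, (224, 224), dtype=np.uint8)    # placeholder input
mixed_views = [arnold_cat_map(image, k) for k in range(1, 7)]     # six mixed images
```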


Subjects
Algorithms , Image Processing, Computer-Assisted , Image Processing, Computer-Assisted/methods , Microscopy
11.
Inf Fusion ; 90: 364-381, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36217534

ABSTRACT

The COVID-19 (Coronavirus disease 2019) pandemic has become a major global threat to human health and well-being. Thus, the development of computer-aided detection (CAD) systems that are capable of accurately distinguishing COVID-19 from other diseases using chest computed tomography (CT) and X-ray data is of immediate priority. Such automatic systems are usually based on traditional machine learning or deep learning methods. Unlike most existing studies, which used either CT scans or X-ray images for COVID-19 case classification, we present a new, simple but efficient deep learning feature fusion model, called UncertaintyFuseNet, which is able to accurately classify large datasets of both of these types of images. We argue that the uncertainty of the model's predictions should be taken into account in the learning process, even though most of the existing studies have overlooked it. We quantify the prediction uncertainty in our feature fusion model using the effective Ensemble Monte Carlo Dropout (EMCD) technique. A comprehensive simulation study has been conducted to compare the results of our new model to the existing approaches, evaluating the performance of competing models in terms of Precision, Recall, F-Measure, Accuracy and ROC curves. The obtained results prove the efficiency of our model, which provided prediction accuracies of 99.08% and 96.35% for the considered CT scan and X-ray datasets, respectively. Moreover, our UncertaintyFuseNet model was generally robust to noise and performed well with previously unseen data. The source code of our implementation is freely available at: https://github.com/moloud1987/UncertaintyFuseNet-for-COVID-19-Classification.
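
A minimal sketch of Monte Carlo Dropout uncertainty estimation is shown below: dropout stays active at inference, several stochastic forward passes are averaged, and predictive entropy serves as the uncertainty score. The toy classifier stands in for the paper's fusion network, whose architecture is not reproduced here.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU(),
                      nn.Dropout(p=0.5), nn.Linear(128, 3))   # toy 3-class classifier

def mc_dropout_predict(model, x, passes=30):
    model.train()                        # keeps Dropout layers stochastic at test time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(passes)])
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=1)
    return mean_probs, entropy           # averaged prediction + per-sample uncertainty

x = torch.rand(5, 1, 64, 64)             # placeholder CT/X-ray batch
pred, uncertainty = mc_dropout_predict(model, x)
```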

12.
Appl Intell (Dordr) ; 53(2): 1548-1566, 2023.
Article in English | MEDLINE | ID: mdl-35528131

ABSTRACT

Chronic Ocular Diseases (COD) such as myopia, diabetic retinopathy, age-related macular degeneration, glaucoma, and cataract can affect the eye and may even lead to severe vision impairment or blindness. According to a recent World Health Organization (WHO) report on vision, at least 2.2 billion individuals worldwide suffer from vision impairment. Often, overt signs indicative of COD do not manifest until the disease has progressed to an advanced stage. However, if COD is detected early, vision impairment can be avoided by early intervention and cost-effective treatment. Ophthalmologists are trained to detect COD by examining certain minute changes in the retina, such as microaneurysms, macular edema, hemorrhages, and alterations in the blood vessels. The range of eye conditions is diverse, and each of these conditions requires a unique patient-specific treatment. Convolutional neural networks (CNNs) have demonstrated significant potential in multi-disciplinary fields, including the detection of a variety of eye diseases. In this study, we combined several preprocessing approaches with convolutional neural networks to accurately detect COD in eye fundus images. To the best of our knowledge, this is the first work that provides a qualitative analysis of preprocessing approaches for COD classification using CNN models. Experimental results demonstrate that CNNs trained on region-of-interest segmented images outperform the models trained on the original input images by a substantial margin. Additionally, an ensemble of three preprocessing techniques outperformed other state-of-the-art approaches by 30% and 3% in terms of Kappa and F1 scores, respectively. The developed prototype has been extensively tested and can be evaluated on more comprehensive COD datasets for deployment in the clinical setup.

13.
Appl Intell (Dordr) ; : 1-19, 2023 Feb 08.
Article in English | MEDLINE | ID: mdl-36777881

ABSTRACT

Nowadays, the hectic work life of people has led to sleep deprivation. This may further result in sleep-related disorders and adverse physiological conditions. Therefore, sleep study has become an active research area. Sleep scoring is crucial for detecting sleep-related disorders like sleep apnea, insomnia, narcolepsy, periodic leg movement (PLM), and restless leg syndrome (RLS). Sleep is conventionally monitored in a sleep laboratory using polysomnography (PSG) which is the recording of various physiological signals. The traditional sleep stage scoring (SSG) done by professional sleep scorers is a tedious, strenuous, and time-consuming process as it is manual. Hence, developing a machine-learning model for automatic SSG is essential. In this study, we propose an automated SSG approach based on the biorthogonal wavelet filter bank's (BWFB) novel least squares (LS) design. We have utilized a huge Wisconsin sleep cohort (WSC) database in this study. The proposed study is a pioneering work on automatic sleep stage classification using the WSC database, which includes good sleepers and patients suffering from various sleep-related disorders, including apnea, insomnia, hypertension, diabetes, and asthma. To investigate the generalization of the proposed system, we evaluated the proposed model with the following publicly available databases: cyclic alternating pattern (CAP), sleep EDF, ISRUC, MIT-BIH, and the sleep apnea database from St. Vincent's University. This study uses only two unipolar EEG channels, namely O1-M2 and C3-M2, for the scoring. The Hjorth parameters (HP) are extracted from the wavelet subbands (SBS) that are obtained from the optimal BWFB. To classify sleep stages, the HP features are fed to several supervised machine learning classifiers. 12 different datasets have been created to develop a robust model. A total of 12 classification tasks (CT) have been conducted employing various classification algorithms. Our developed model achieved the best accuracy of 83.2% and Cohen's Kappa of 0.7345 to reliably distinguish five sleep stages, using an ensemble bagged tree classifier with 10-fold cross-validation using WSC data. We also observed that our system is either better or competitive with existing state-of-art systems when we tested with the above-mentioned five databases other than WSC. This method yielded promising results using only two EEG channels using a huge WSC database. Our approach is simple and hence, the developed model can be installed in home-based clinical systems and wearable devices for sleep scoring.
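
As a small illustration of the feature step named above, the snippet below computes the Hjorth parameters (activity, mobility, complexity) of a signal segment or wavelet subband; the random signal is a placeholder for a 30-second EEG epoch.

```python
import numpy as np

def hjorth_parameters(x):
    dx = np.diff(x)
    ddx = np.diff(dx)
    var_x, var_dx, var_ddx = np.var(x), np.var(dx), np.var(ddx)
    activity = var_x                                   # signal power
    mobility = np.sqrt(var_dx / var_x)                 # mean frequency proxy
    complexity = np.sqrt(var_ddx / var_dx) / mobility  # bandwidth / shape change
    return activity, mobility, complexity

epoch = np.random.randn(30 * 100)        # e.g. 30 s of EEG sampled at 100 Hz
print(hjorth_parameters(epoch))
```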

14.
Sensors (Basel) ; 22(5)2022 Mar 04.
Article in English | MEDLINE | ID: mdl-35271154

ABSTRACT

Recently, deep models have become very popular because they achieve excellent performance on many classification problems. However, deep networks have high computational complexity and require specific hardware. To overcome this problem (without decreasing classification ability), a hand-modeled feature selection method is proposed in this paper. A new shape-based local feature extractor is presented which uses the geometric shape of the frustum. By using a frustum pattern, textural features are generated. Moreover, statistical features have been extracted in this model. The textural and statistical features are fused, yielding a hybrid feature extraction phase; these features are low-level. To generate high-level features, the tunable Q-factor wavelet transform (TQWT) is used. The presented hybrid feature generator creates 154 feature vectors; hence, it is named Frustum154. In the multilevel feature creation phase, this model can select the appropriate feature vectors automatically and create the final feature vector by merging them. Iterative neighborhood component analysis (INCA) chooses the best feature vector, and shallow classifiers are then used. Frustum154 has been tested on three basic hand-movement sEMG datasets. Hand-movement sEMG datasets are commonly used in biomedical engineering, but there are some problems in this area: previously presented models generally relied on a single dataset to achieve high classification ability. In this work, three sEMG datasets have been used to test the performance of Frustum154. The presented model is self-organized and selects the most informative subbands and features automatically. It achieved classification accuracies of 98.89%, 94.94%, and 95.30% using shallow classifiers, indicating that Frustum154 can improve classification accuracy.


Subjects
Algorithms , Wavelet Analysis , Hand , Hand Strength , Movement
15.
Pattern Recognit Lett ; 153: 67-74, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34876763

ABSTRACT

Coronavirus disease (COVID-19) is severely impacting the wellness and lives of many across the globe. There are currently several methods to detect and monitor the progress of the disease, such as radiological imaging of patients' chests, measuring symptoms, and applying the reverse transcription-polymerase chain reaction (RT-PCR) test. X-ray imaging is one of the popular techniques used to visualise the impact of the virus on the lungs. Although manual detection of this disease using radiology images is more popular, it can be time-consuming and is prone to human error. Hence, automated detection of lung pathologies due to COVID-19 utilising deep learning (DL) techniques can assist with yielding accurate results for huge databases. Large volumes of data are needed to achieve generalizable DL models; however, there are very few public databases available for detecting COVID-19 disease pathologies automatically. Standard data augmentation methods can be used to enhance the models' generalizability. In this research, the Extensive COVID-19 X-ray and CT Chest Images Dataset has been used, and a generative adversarial network (GAN) coupled with a trained, semi-supervised CycleGAN (SSA-CycleGAN) has been applied to augment the training dataset. Then a newly designed and finetuned Inception V3 transfer learning model has been developed to train the algorithm for detecting COVID-19. The obtained results from the proposed Inception-CycleGAN model indicated Accuracy = 94.2%, Area under Curve = 92.2%, Mean Squared Error = 0.27, Mean Absolute Error = 0.16. The developed Inception-CycleGAN framework is ready to be tested with further COVID-19 X-ray images of the chest.

16.
Expert Syst Appl ; 204: 117410, 2022 Oct 15.
Article in English | MEDLINE | ID: mdl-35502163

ABSTRACT

Since the advent of COVID-19, the number of deaths has increased exponentially, boosting the requirement for various research studies that may correctly diagnose the illness at an early stage. Using chest X-rays, this study presents deep learning-based algorithms for classifying patients into COVID-19, healthy control, and pneumonia classes. Data gathering, pre-processing, feature extraction, and classification are the four primary aspects of the approach. The chest X-ray images utilized in this investigation came from various publicly available databases. In the pre-processing stage, the images were filtered to increase image quality and de-noised using the empirical wavelet transform (EWT). Following that, four deep learning models were used to extract features. The first two models, Inception-V3 and ResNet-50, are based on transfer learning. The third model combines ResNet-50 with a temporal convolutional neural network (TCN). The fourth model is our suggested RESCOVIDTCNNet model, which integrates EWT, ResNet-50, and TCN. Finally, an artificial neural network (ANN) and a support vector machine (SVM) were used to classify the data. Using five-fold cross-validation for 3-class classification, our suggested RESCOVIDTCNNet achieved 99.5% accuracy. Our prototype can be utilized to acquire a diagnosis quickly in developing nations where radiologists are in short supply.

17.
J Med Virol ; 93(4): 2307-2320, 2021 04.
Article in English | MEDLINE | ID: mdl-33247599

ABSTRACT

Preventing communicable diseases requires understanding the spread, epidemiology, clinical features, progression, and prognosis of the disease. Early identification of risk factors and clinical outcomes might help in identifying critically ill patients, providing appropriate treatment, and preventing mortality. We conducted a prospective study in patients with flu-like symptoms referred to the imaging department of a tertiary hospital in Iran between March 3, 2020, and April 8, 2020. Patients with COVID-19 were followed up after two months to check their health condition. Categorical data were compared between groups with Fisher's exact test and continuous data with the Wilcoxon rank-sum test. Three hundred and nineteen patients (mean age 45.48 ± 18.50 years, 177 women) were enrolled. Fever, dyspnea, weakness, shivering, C-reactive protein, fatigue, dry cough, anorexia, anosmia, ageusia, dizziness, sweating, and age were the factors most strongly associated with COVID-19 infection. Traveling in the past 3 months, asthma, taking corticosteroids, liver disease, rheumatological disease, cough with sputum, eczema, conjunctivitis, tobacco use, and chest pain did not show any relationship with COVID-19. To the best of our knowledge, a number of factors associated with mortality due to COVID-19 have been investigated for the first time in this study. Our results might be helpful in early prediction and risk reduction of mortality in patients infected with COVID-19.
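
For reference, the two statistical tests named in the study are available in SciPy; the 2 × 2 table and the two samples below are made-up numbers, not study data.

```python
# Fisher's exact test for categorical variables, Wilcoxon rank-sum for continuous ones.
from scipy.stats import fisher_exact, ranksums

# e.g. rows = symptom present / absent, columns = COVID-19 positive / negative
table = [[40, 15],
         [25, 60]]
odds_ratio, p_categorical = fisher_exact(table)

crp_positive = [12.1, 30.4, 8.7, 22.0, 41.5]    # placeholder CRP values (mg/L)
crp_negative = [3.2, 5.1, 7.8, 2.9, 6.4]
stat, p_continuous = ranksums(crp_positive, crp_negative)
print(p_categorical, p_continuous)
```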


Subjects
COVID-19/mortality , COVID-19/pathology , Adult , COVID-19/diagnosis , COVID-19/therapy , Critical Illness , Disease Progression , Female , Humans , Iran/epidemiology , Male , Middle Aged , Prospective Studies , Risk Factors , SARS-CoV-2/isolation & purification
18.
Eur Radiol ; 31(1): 121-130, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32740817

ABSTRACT

OBJECTIVES: CT findings of COVID-19 look similar to those of other atypical and viral (non-COVID-19) pneumonia diseases. This study proposes a clinical computer-aided diagnosis (CAD) system using CT features to automatically discriminate COVID-19 from non-COVID-19 pneumonia patients. METHODS: Overall, 612 patients (306 COVID-19 and 306 non-COVID-19 pneumonia) were recruited. Twenty radiological features were extracted from CT images to evaluate the pattern, location, and distribution of lesions of patients in both groups. All significant CT features were fed into five classifiers, namely decision tree, K-nearest neighbor, naïve Bayes, support vector machine, and ensemble, to evaluate the best performing CAD system in classifying COVID-19 and non-COVID-19 cases. RESULTS: Location and distribution pattern of involvement, number of lesions, ground-glass opacity (GGO) and crazy-paving, consolidation, reticular pattern, bronchial wall thickening, nodule, air bronchogram, cavity, pleural effusion, pleural thickening, and lymphadenopathy are the significant features for classifying COVID-19 versus non-COVID-19 groups. Our proposed CAD system obtained an AUC, sensitivity, specificity, and accuracy of 0.965, 93.54%, 90.32%, and 91.94%, respectively, using the ensemble (COVIDiag) classifier. CONCLUSIONS: This study proposed the COVIDiag model, which obtained promising results using routine CT radiological features. It can be considered an adjunct tool by radiologists during the current COVID-19 pandemic to make an accurate diagnosis. KEY POINTS: • Location and distribution of involvement, number of lesions, GGO and crazy-paving, consolidation, reticular pattern, bronchial wall thickening, nodule, air bronchogram, cavity, pleural effusion, pleural thickening, and lymphadenopathy are the significant features distinguishing COVID-19 from non-COVID-19 groups. • The proposed CAD system, COVIDiag, could diagnose COVID-19 pneumonia cases with an AUC of 0.965 (sensitivity = 93.54%; specificity = 90.32%; and accuracy = 91.94%). • The AUC, sensitivity, specificity, and accuracy obtained by radiologist diagnosis were 0.879, 87.10%, 88.71%, and 87.90%, respectively.
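
A hedged sketch of the classifier comparison follows: the same tabular CT features are fed to several classifiers and compared by cross-validated AUC. Feature values and labels are random placeholders, and a random forest stands in for the unspecified ensemble classifier.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.random((612, 20))                 # 20 radiological features per patient
y = np.r_[np.ones(306), np.zeros(306)]    # 1 = COVID-19, 0 = non-COVID pneumonia

classifiers = {
    "decision tree": DecisionTreeClassifier(),
    "kNN": KNeighborsClassifier(),
    "naive Bayes": GaussianNB(),
    "SVM": SVC(probability=True),
    "ensemble": RandomForestClassifier(),   # stand-in for the paper's ensemble
}
for name, clf in classifiers.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUC = {auc:.3f}")
```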


Subjects
COVID-19/diagnostic imaging , Lung/diagnostic imaging , Tomography, X-Ray Computed , Adult , Aged , Bayes Theorem , Bronchi/diagnostic imaging , Bronchi/pathology , COVID-19/pathology , Diagnosis, Differential , Female , Humans , Lung/pathology , Lymphadenopathy/diagnostic imaging , Lymphadenopathy/pathology , Male , Middle Aged , Pandemics , Pleural Effusion/diagnostic imaging , Retrospective Studies , SARS-CoV-2
19.
Sensors (Basel) ; 21(21)2021 Oct 23.
Article in English | MEDLINE | ID: mdl-34770340

ABSTRACT

Parkinson's disease (PD) is the second most common neurodegenerative disorder affecting over 6 million people globally. Although there are symptomatic treatments that can increase the survivability of the disease, there are no curative treatments. The prevalence of PD and disability-adjusted life years continue to increase steadily, leading to a growing burden on patients, their families, society and the economy. Dopaminergic medications can significantly slow down the progression of PD when applied during the early stages. However, these treatments often become less effective with the disease progression. Early diagnosis of PD is crucial for immediate interventions so that the patients can remain self-sufficient for the longest period of time possible. Unfortunately, diagnoses are often late, due to factors such as a global shortage of neurologists skilled in early PD diagnosis. Computer-aided diagnostic (CAD) tools, based on artificial intelligence methods, that can perform automated diagnosis of PD, are gaining attention from healthcare services. In this review, we have identified 63 studies published between January 2011 and July 2021, that proposed deep learning models for an automated diagnosis of PD, using various types of modalities like brain analysis (SPECT, PET, MRI and EEG), and motion symptoms (gait, handwriting, speech and EMG). From these studies, we identify the best performing deep learning model reported for each modality and highlight the current limitations that are hindering the adoption of such CAD tools in healthcare. Finally, we propose new directions to further the studies on deep learning in the automated detection of PD, in the hopes of improving the utility, applicability and impact of such tools to improve early detection of PD globally.


Subjects
Deep Learning , Parkinson Disease , Artificial Intelligence , Gait , Humans , Parkinson Disease/diagnosis , Speech
20.
Sensors (Basel) ; 21(1)2021 Jan 02.
Article in English | MEDLINE | ID: mdl-33401741

ABSTRACT

In this paper, the multi-state synchronization of chaotic systems with non-identical, unknown, and time-varying delays in the presence of external perturbations and parametric uncertainties was studied. The presence of unknown delays, unknown bounds of disturbance and uncertainty, as well as changes in system parameters, complicates the design of the control function and the synchronization. In the proposed synchronization scheme, a robust-adaptive control procedure based on the Lyapunov stability theorem drove the errors to zero, and update rules were set to estimate the system parameters and delays. To investigate the performance of the proposed design, simulations were carried out on two Chen hyper-chaotic systems as the slaves and one Chua hyper-chaotic system as the master. Our results showed that the proposed controller outperformed the state-of-the-art techniques in terms of the convergence speed of the synchronization, parameter estimation, and delay estimation processes. The parameters and time delays were estimated with appropriate accuracy. Finally, secure communication was realized with a chaotic masking method, and our results revealed the effectiveness of the proposed method in secure telecommunications.
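
A heavily simplified sketch of master-slave chaos synchronization is shown below: an ordinary Chen system as master and an identical slave driven by plain proportional error feedback. The paper's scheme (hyperchaotic Chen/Chua systems, unknown time-varying delays, robust-adaptive Lyapunov-based update laws) is far more involved; this only illustrates the error-feedback idea, and the gain and parameters are illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, K = 35.0, 3.0, 28.0, 30.0        # Chen parameters and feedback gain

def chen(state):
    x, y, z = state
    return np.array([a * (y - x), (c - a) * x - x * z + c * y, x * y - b * z])

def coupled(t, s):
    master, slave = s[:3], s[3:]
    error = slave - master
    return np.concatenate([chen(master), chen(slave) - K * error])

s0 = [1.0, 1.0, 1.0, -5.0, 2.0, 8.0]       # different initial conditions
sol = solve_ivp(coupled, (0, 20), s0, max_step=0.01)
final_error = np.abs(sol.y[3:, -1] - sol.y[:3, -1])
print("synchronization error at t = 20:", final_error)
```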
