Results 1 - 20 of 40
1.
IEEE Trans Industr Inform ; 19(1): 1030-1038, 2023 Jan.
Article in English | MEDLINE | ID: mdl-37469712

ABSTRACT

A fundamental expectation that stakeholders have of the Industrial Internet of Things (IIoT) is trustworthiness and sustainability, so that no human lives are lost when it performs critical tasks. A trustworthy IIoT-enabled network encompasses fundamental security characteristics such as trust, privacy, security, reliability, resilience, and safety. Traditional security mechanisms and procedures are insufficient to protect these networks owing to protocol differences, limited update options, and outdated adaptations of security mechanisms. As a result, these networks require novel approaches to increase trust levels and enhance security and privacy. Therefore, in this paper, we propose a novel approach to improve the trustworthiness of IIoT-enabled networks: accurate and reliable cyberattack detection for supervisory control and data acquisition (SCADA)-based IIoT networks. The proposed scheme combines deep learning-based Pyramidal Recurrent Units (PRU) with Decision Trees (DT) in an ensemble-learning method to detect cyberattacks in SCADA-based IIoT networks. The non-linear learning ability of the PRU and the DT ensemble reduces sensitivity to irrelevant features, allowing high detection rates. The proposed scheme is evaluated on fifteen datasets generated from SCADA-based networks. The experimental results show that it outperforms traditional methods and machine learning-based detection approaches, improving security and the associated measure of trustworthiness in IIoT-enabled networks.
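
The general "deep features feeding an ensemble of decision trees" idea can be sketched as follows. This is only an illustrative assumption: a GRU stands in for the Pyramidal Recurrent Units (whose exact cell is not given in the abstract), and the SCADA traffic windows are random placeholders.

```python
# Hedged sketch: recurrent feature extractor + ensemble of decision trees.
# A GRU stands in for the PRU; data, shapes, and hyperparameters are assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

class RecurrentEncoder(nn.Module):
    """Maps a window of SCADA traffic features to a fixed-length embedding."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)

    def forward(self, x):                 # x: (batch, time, n_features)
        _, h = self.rnn(x)                # h: (1, batch, hidden)
        return h.squeeze(0)               # (batch, hidden)

# Placeholder traffic windows: 512 windows, 20 time steps, 12 sensor features.
X = torch.randn(512, 20, 12)
y = np.random.randint(0, 2, size=512)     # 0 = benign, 1 = attack (synthetic)

encoder = RecurrentEncoder(n_features=12)
with torch.no_grad():
    emb = encoder(X).numpy()              # non-linear deep features

# Ensemble of decision trees trained on the learned features.
clf = BaggingClassifier(DecisionTreeClassifier(max_depth=8), n_estimators=50)
clf.fit(emb, y)
print("train accuracy:", clf.score(emb, y))
```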

2.
Sensors (Basel) ; 22(20)2022 Oct 21.
Article in English | MEDLINE | ID: mdl-36298412

ABSTRACT

Sensor fusion is the process of merging data from multiple sources, such as radar, lidar, and camera sensors, to provide less uncertain information than that collected from a single source [...].


Subjects
Algorithms, Deep Learning, Radar, Ocular Vision, Computers
3.
Image Vis Comput ; 119: 104375, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35068648

ABSTRACT

COVID-19 has severely disrupted every aspect of society and left a negative impact on our lives. Resisting the temptation to engage in face-to-face social connection is not as easy as we imagine. Breaking ties within our social circle makes us lonely and isolated, which in turn increases the likelihood of depression-related disease and can even lead to death by raising the risk of heart disease. Children are affected as much as adults, and for them the contribution of emotional competence to social competence has long-term implications. Early identification of facial emotional behaviour, deficits, and expression may help prevent low social functioning, since deficits in young children's ability to differentiate human emotions can lead to impaired social functioning. However, existing work focuses mostly on adult emotion recognition and ignores emotion recognition in children. Inspired by the working of pyramidal cells in the cerebral cortex, in this paper we present progressive lightweight shallow learning that efficiently utilizes skip connections for spontaneous facial behaviour recognition in children. Unlike earlier deep neural networks, we limit the alternative gradient path in the earlier part of the network and increase it gradually with depth. Progressive ShallowNet is not only able to explore more of the feature space but also resolves the over-fitting issue for smaller data by limiting the residual path locally, which otherwise makes the network vulnerable to perturbations. We have conducted extensive experiments on a benchmark for facial behaviour analysis in children that showed a significant performance gain over comparable methods.
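
A minimal sketch of the "progressive skip connection" idea follows: early blocks omit the residual path, deeper blocks enable it, so the alternative gradient path grows with depth. The layer sizes, depth, and toy input are assumptions, not the authors' exact Progressive ShallowNet.

```python
# Hedged sketch: residual paths enabled only in the deeper half of the network.
import torch
import torch.nn as nn

class ProgressiveBlock(nn.Module):
    def __init__(self, channels: int, use_skip: bool):
        super().__init__()
        self.use_skip = use_skip
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        out = self.body(x)
        return out + x if self.use_skip else out   # skip only in later blocks

class ProgressiveShallowNet(nn.Module):
    def __init__(self, n_classes: int = 7, depth: int = 4):
        super().__init__()
        self.stem = nn.Conv2d(3, 32, 3, padding=1)
        # Enable the residual path only in the second half of the network.
        self.blocks = nn.Sequential(
            *[ProgressiveBlock(32, use_skip=(i >= depth // 2)) for i in range(depth)]
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        x = self.blocks(self.stem(x))
        x = x.mean(dim=(2, 3))            # global average pooling
        return self.head(x)

logits = ProgressiveShallowNet()(torch.randn(2, 3, 64, 64))
print(logits.shape)                        # torch.Size([2, 7])
```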

4.
World Wide Web ; 25(1): 281-304, 2022.
Article in English | MEDLINE | ID: mdl-35106059

ABSTRACT

The ability to explain why a model produced its results is an important problem, especially in the medical domain. Model explainability is important for building trust by providing insight into the model's predictions. However, most existing machine learning methods provide no explainability, which is worrying. For instance, in the task of automatic depression prediction, most machine learning models lead to predictions that are obscure to humans. In this work, we propose an explainable Multi-aspect Depression detection model with a Hierarchical Attention Network (MDHAN) for automatically detecting depressed users on social media and explaining the model's predictions. We consider user posts augmented with additional features from Twitter. Specifically, we encode user posts using two levels of attention applied at the tweet level and the word level, calculate the importance of each tweet and word, and capture semantic sequence features from the user timelines (posts). Our hierarchical attention model is designed to capture patterns that lead to explainable results. Our experiments show that MDHAN outperforms several popular and robust baseline methods, demonstrating the effectiveness of combining deep learning with multi-aspect features. We also show that our model helps improve predictive performance when detecting depression in users who post publicly on social media. MDHAN achieves excellent performance and provides adequate evidence to explain its predictions.
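
The two-level (word to tweet to user) attention can be sketched as below. The additive-attention form, embedding sizes, and random inputs are illustrative assumptions; MDHAN additionally fuses extra Twitter-derived features, which are not shown.

```python
# Hedged sketch: word-level then tweet-level additive attention.
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Scores a sequence of vectors and returns their weighted average."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.query = nn.Linear(dim, 1, bias=False)

    def forward(self, h):                      # h: (batch, seq, dim)
        scores = self.query(torch.tanh(self.proj(h)))   # (batch, seq, 1)
        weights = torch.softmax(scores, dim=1)
        return (weights * h).sum(dim=1), weights.squeeze(-1)

dim = 64
word_attn, tweet_attn = AdditiveAttention(dim), AdditiveAttention(dim)

# Placeholder: 2 users, 10 tweets each, 30 word embeddings per tweet.
words = torch.randn(2, 10, 30, dim)
tweet_vecs, word_w = word_attn(words.reshape(-1, 30, dim))   # word-level
tweet_vecs = tweet_vecs.reshape(2, 10, dim)
user_vec, tweet_w = tweet_attn(tweet_vecs)                   # tweet-level
print(user_vec.shape, tweet_w.shape)   # user representation + tweet importances
```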

5.
IEEE J Biomed Health Inform ; 28(6): 3228-3235, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38198252

ABSTRACT

Leakage and tampering during the collection and transmission of biomedical data have attracted much attention, as these concerns create a negative impression regarding the privacy, security, and reputation of medical networks. This article presents a novel security model that establishes a threat-vector database based on the dynamic behaviours of smart healthcare systems. An improved, privacy-preserving SRU network is then designed to alleviate the fading-gradient issue and enhance learning while reducing computational cost. An intelligent federated learning algorithm is then deployed to enable multiple healthcare networks to form a collaborative security model in a personalized manner without loss of privacy. The proposed security method is both parallelizable and computationally efficient, since the dynamic behaviour aggregation strategy allows the model to work collaboratively and reduces communication overhead by dynamically adjusting the number of participating clients. Additionally, visualization of the decision process based on feature explainability helps security experts comprehend the underlying data evidence and causal reasoning. Compared with existing methods, the proposed security method thoroughly analyzes and detects severe security threats with high accuracy, reduced overhead, and lower computational cost, along with enhanced privacy of biomedical data.
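
One way to picture "dynamically adjusting the number of participating clients" is sketched below. The behaviour score and the diversity-based selection rule are pure assumptions for illustration; the paper's dynamic behaviour aggregation strategy is not specified in this abstract.

```python
# Hedged sketch: pick more clients per round when behaviour is diverse,
# fewer when clients look homogeneous, to reduce communication.
import numpy as np

rng = np.random.default_rng(0)

def select_clients(behaviour_scores, min_frac=0.3, max_frac=1.0):
    scores = np.asarray(behaviour_scores, dtype=float)
    diversity = scores.std() / (scores.mean() + 1e-8)   # coefficient of variation
    frac = np.clip(min_frac + diversity, min_frac, max_frac)
    k = max(1, int(round(frac * len(scores))))
    return np.argsort(scores)[-k:]                      # top-k by score

for rnd in range(3):
    scores = rng.random(20)           # one behaviour score per healthcare site
    chosen = select_clients(scores)
    print(f"round {rnd}: {len(chosen)} clients -> {sorted(chosen.tolist())}")
```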


Subjects
Algorithms, Computer Security, Humans, Computer Communication Networks
6.
IEEE J Biomed Health Inform ; 28(3): 1185-1194, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38446658

ABSTRACT

Cancer begins when healthy cells change and grow out of control, forming a mass called a tumor. Head and neck (H&N) cancers usually develop in or around the head and neck, including the mouth (oral cavity), nose and sinuses, throat (pharynx), and voice box (larynx). H&N cancers account for 4% of all cancers and have a low survival rate (a five-year survival rate of 64.7%). FDG-PET/CT imaging is often used for early diagnosis and staging of H&N tumors, thus improving these patients' survival rates. This work presents a novel 3D Inception-Residual architecture aided by 3D depth-wise convolution and a squeeze-and-excitation block. We introduce 3D-IncNet, which couples a 3D depth-wise convolution-inception encoder containing an additional 3D squeeze-and-excitation block with a 3D depth-wise convolution-based residual learning decoder; it not only recalibrates channel-wise features adaptively through explicit inter-dependency modeling but also integrates coarse and fine features, resulting in accurate tumor segmentation. We further demonstrate the effectiveness of the inception-residual encoder-decoder architecture in achieving better dice scores and the impact of depth-wise convolution in lowering the computational cost. We applied a random forest to deep, clinical, and radiomics features for survival prediction. Experiments conducted on the benchmark HECKTOR21 challenge showed significantly better performance, surpassing the state of the art with a concordance index of 0.836 and a dice score of 0.811. We made the model and code publicly available.
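
The two named building blocks can be sketched as follows: a 3D squeeze-and-excitation block and a 3D depth-wise separable convolution. Channel counts, reduction ratio, and the toy volume are assumptions; the full 3D-IncNet encoder-decoder is not reproduced here.

```python
# Hedged sketch: 3D SE block + 3D depth-wise separable convolution.
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    """Recalibrates channels from a global 3D descriptor."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (B, C, D, H, W)
        s = x.mean(dim=(2, 3, 4))              # squeeze
        w = self.fc(s).view(x.size(0), -1, 1, 1, 1)
        return x * w                           # excitation

class DepthwiseSeparableConv3D(nn.Module):
    """Depth-wise 3D convolution followed by a 1x1x1 point-wise convolution."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv3d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv3d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 16, 32, 32, 32)             # toy PET/CT feature volume
y = SEBlock3D(16)(DepthwiseSeparableConv3D(16, 16)(x))
print(y.shape)
```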


Subjects
Head and Neck Neoplasms, Positron Emission Tomography Computed Tomography, Humans, Head and Neck Neoplasms/diagnostic imaging, Head, Neck, Face
7.
Article in English | MEDLINE | ID: mdl-38215319

ABSTRACT

Graph convolutional networks (GCNs) have emerged as a powerful tool for action recognition, leveraging skeletal graphs to encapsulate human motion. Despite their efficacy, a significant challenge remains their dependency on large labeled datasets. Acquiring such datasets is often prohibitive, and the frequent occurrence of incomplete skeleton data, typified by absent joints and frames, complicates the testing phase. To tackle these issues, we present graph representation alignment (GRA), a novel approach with two main contributions: 1) a self-training (ST) paradigm that substantially reduces the need for labeled data by generating high-quality pseudo-labels, ensuring model stability even with minimal labeled inputs, and 2) a representation alignment (RA) technique that uses consistency regularization to effectively reduce the impact of missing data components. Our extensive evaluations on the NTU RGB+D and Northwestern-UCLA (N-UCLA) benchmarks demonstrate that GRA not only improves GCN performance in data-constrained environments but also retains strong performance in the face of data incompleteness.
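
The self-training step can be pictured with the confidence-thresholded pseudo-labeling sketch below. The classifier, the 0.9 threshold, and the random skeleton clips are assumptions; GRA's GCN backbone and representation alignment are not shown.

```python
# Hedged sketch: keep only high-confidence pseudo-labels from unlabeled clips.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(25 * 3 * 50, 128),
                      nn.ReLU(), nn.Linear(128, 60))   # 60 action classes

unlabeled = torch.randn(256, 25, 3, 50)   # (clips, joints, xyz, frames), synthetic

with torch.no_grad():
    probs = torch.softmax(model(unlabeled), dim=1)
    conf, pseudo = probs.max(dim=1)

keep = conf > 0.9                          # retain only confident predictions
pseudo_x, pseudo_y = unlabeled[keep], pseudo[keep]
print(f"kept {keep.sum().item()} / {len(unlabeled)} pseudo-labeled clips")
# pseudo_x / pseudo_y would then be mixed into the next training round.
```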

8.
IEEE Trans Med Imaging ; 43(1): 542-557, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37713220

ABSTRACT

The early detection of glaucoma is essential in preventing visual impairment. Artificial intelligence (AI) can be used to analyze color fundus photographs (CFPs) in a cost-effective manner, making glaucoma screening more accessible. While AI models for glaucoma screening from CFPs have shown promising results in laboratory settings, their performance decreases significantly in real-world scenarios due to the presence of out-of-distribution and low-quality images. To address this issue, we propose the Artificial Intelligence for Robust Glaucoma Screening (AIROGS) challenge. This challenge includes a large dataset of around 113,000 images from about 60,000 patients and 500 different screening centers, and encourages the development of algorithms that are robust to ungradable and unexpected input data. We evaluated solutions from 14 teams in this paper and found that the best teams performed similarly to a set of 20 expert ophthalmologists and optometrists. The highest-scoring team achieved an area under the receiver operating characteristic curve of 0.99 (95% CI: 0.98-0.99) for detecting ungradable images on-the-fly. Additionally, many of the algorithms showed robust performance when tested on three other publicly available datasets. These results demonstrate the feasibility of robust AI-enabled glaucoma screening.


Subjects
Artificial Intelligence, Glaucoma, Humans, Glaucoma/diagnostic imaging, Ocular Fundus, Ophthalmological Diagnostic Techniques, Algorithms
9.
Med Image Anal ; 97: 103230, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38875741

ABSTRACT

Challenges drive the state of the art in automated medical image analysis, but the quantity of public training data they provide can limit the performance of their solutions, and public access to the training methodology behind those solutions remains absent. This study implements the Type Three (T3) challenge format, which allows solutions to be trained on private data and guarantees reusable training methodologies. With T3, challenge organizers train a codebase provided by the participants on sequestered training data. T3 was implemented in the STOIC2021 challenge, whose goal was to predict from a computed tomography (CT) scan whether subjects had a severe COVID-19 infection, defined as intubation or death within one month. STOIC2021 consisted of a Qualification phase, in which participants developed challenge solutions using 2000 publicly available CT scans, and a Final phase, in which participants submitted their training methodologies, with which solutions were trained on CT scans of 9724 subjects. The organizers successfully trained six of the eight Final phase submissions. The submitted codebases for training and running inference were released publicly. The winning solution obtained an area under the receiver operating characteristic curve of 0.815 for discerning between severe and non-severe COVID-19. The Final phase solutions of all finalists improved upon their Qualification phase solutions.

10.
IEEE J Biomed Health Inform ; 27(2): 684-690, 2023 02.
Article in English | MEDLINE | ID: mdl-35503855

ABSTRACT

Federated learning (FL) has recently emerged as a promising framework that allows machine and deep learning models with thousands of participants to be trained in a distributed fashion while preserving the privacy of users' data. Federated learning gives all participants the possibility of creating robust models even in the absence of sufficient training data. Recently, smartphone usage has increased significantly due to its portability and ability to perform many daily-life tasks. Typing on a smartphone's soft keyboard generates vibrations that could be abused to detect the typed keys, aiding side-channel attacks. Such data can be collected using smartphone hardware sensors during the entry of sensitive information such as clinical notes, personal medical information, usernames, and passwords. This study proposes a novel framework based on federated learning for side-channel attack detection to secure this information. We collected a dataset from 10 Android smartphone users who were asked to type on the smartphone soft keyboard. We split this dataset into two windows of five users to create two clients training local models. The federated learning-based framework aggregates the model updates contributed by the two clients, each of which trains a Deep Neural Network (DNN) model individually on its dataset. To reduce over-fitting, each client evaluates its findings three times. Experiments reveal that the DNN model achieves an accuracy of 80.09%, showing that the proposed framework has the potential to detect side-channel attacks.
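
The aggregation step for two clients can be sketched as a FedAvg-style parameter average. The network shape, the 60-dimensional sensor feature size, and equal client weighting are assumptions; the keystroke-vibration data pipeline is not shown.

```python
# Hedged sketch: average the state dicts of two locally trained client DNNs.
import copy
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(nn.Linear(60, 128), nn.ReLU(), nn.Linear(128, 10))

global_model = make_model()
clients = [copy.deepcopy(global_model) for _ in range(2)]

# ... each client would run local epochs on its own sensor windows here ...

with torch.no_grad():
    states = [c.state_dict() for c in clients]
    avg_state = {
        key: torch.stack([s[key].float() for s in states]).mean(dim=0)
        for key in states[0]
    }
    global_model.load_state_dict(avg_state)   # new global model for the next round
```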


Subjects
Privacy, Smartphone, Humans, Neural Networks (Computer), Vibration
11.
ISA Trans ; 132: 199-207, 2023 Jan.
Article in English | MEDLINE | ID: mdl-35641337

ABSTRACT

Rip currents contribute to around 25 fatal drownings each year in Australia. Previous research has indicated that most beachgoers cannot correctly identify a rip current, leaving them at risk of a drowning incident. Automated detection of rip currents could help reduce drownings and assist lifeguards in supervision planning; however, varying beach conditions make this challenging. This work demonstrates the effectiveness of an improved lightweight framework for detecting rip currents, RipDet+, aided by residual mapping to boost generalization performance. We build the RipDet+ framework on the Yolo-V3 architecture and utilize pretrained weights, fully exploiting the detection training set from a set of base classes to quickly adapt the detector to the available rip-current data. Extensive experiments show the effectiveness of the RipDet+ architecture, which achieves a detection accuracy of 98.55%, significantly higher than other state-of-the-art methods for rip-current detection.

12.
Article in English | MEDLINE | ID: mdl-37022816

ABSTRACT

People across the globe have felt and are still going through the impact of COVID-19. Some of them share their feelings and suffering online via different social media networks such as Twitter. Due to strict restrictions to reduce the spread of the novel virus, many people were forced to stay at home, which has significantly impacted their mental health, mainly because the pandemic directly affected the lives of people who were not allowed to leave home under strict government restrictions. Researchers must mine this human-generated data and extract insights from it to influence government policies and address people's needs. In this paper, we study social media data to understand how COVID-19 has impacted people's depression. We share a large-scale COVID-19 dataset that can be used to analyze depression. We have also modeled the tweets of depressed and non-depressed users before and after the start of the COVID-19 pandemic. To this end, we developed a new approach based on a Hierarchical Convolutional Neural Network (HCN) that extracts fine-grained, relevant content from users' historical posts. HCN considers the hierarchical structure of user tweets and contains an attention mechanism that can locate the crucial words and tweets in a user document while also considering the context. Our approach is capable of detecting depressed users within the COVID-19 time frame. Our results on benchmark datasets show that many non-depressed people became depressed during the COVID-19 pandemic.

13.
Article in English | MEDLINE | ID: mdl-37527325

ABSTRACT

Traditional support vector machines (SVMs) are fragile in the presence of outliers; even a single corrupt data point can arbitrarily alter the quality of the approximation. If even a small fraction of the columns is corrupted, classification performance will inevitably deteriorate. This article considers the problem of high-dimensional data classification where a number of the columns are arbitrarily corrupted. We propose an efficient Support Matrix Machine that simultaneously performs matrix Recovery (SSMRe), i.e., feature selection and classification, through joint minimization of the ℓ2,1 norm and the nuclear norm of the low-rank component L. The data are assumed to consist of a low-rank clean matrix plus a sparse noisy matrix. SSMRe works under incoherence and ambiguity conditions and is able to recover an intrinsic matrix of higher rank even when the data are densely corrupted. The objective function is a spectral extension of the conventional elastic net; it combines matrix recovery with low rank and joint sparsity to deal with complex, high-dimensional noisy data. Furthermore, SSMRe leverages structural information as well as the intrinsic structure of the data, avoiding the inevitable upper bound. Experimental results on different real-time applications, supported by theoretical analysis and statistical testing, show significant gains on BCI, face recognition, and person identification datasets, especially in the presence of outliers, while preserving a reasonable number of support vectors.
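
Purely as an illustration, one plausible form of such an objective is written out below in LaTeX: a hinge loss for classification, a nuclear-norm term promoting a low-rank regression matrix, an ℓ2,1 term promoting joint (column-wise) sparsity, and a low-rank-plus-sparse decomposition of the data. The exact terms, the regularization weights λ1 and λ2, and the constraint form are assumptions consistent with the abstract, not the paper's precise formulation.

```latex
\begin{aligned}
\min_{W,\,b,\,L,\,S}\;&
\sum_{i=1}^{n} \max\!\bigl(0,\,1 - y_i\,(\operatorname{tr}(W^{\top} L_i) + b)\bigr)
+ \lambda_1 \lVert W \rVert_{*}
+ \lambda_2 \lVert W \rVert_{2,1} \\
\text{s.t.}\;& X_i = L_i + S_i,\qquad i = 1,\dots,n,
\end{aligned}
```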

14.
IEEE/ACM Trans Comput Biol Bioinform ; 20(2): 1363-1371, 2023.
Article in English | MEDLINE | ID: mdl-36194721

ABSTRACT

Achieving accurate retinal vessel segmentation is critical for tracking the progression and diagnosing vision-threatening diseases such as diabetic retinopathy and age-related macular degeneration. Existing vessel segmentation methods are based on encoder-decoder architectures, which frequently fail to take the context of the retinal vessel structure into account in their analysis. As a result, such methods have difficulty bridging the semantic gap between encoder and decoder features. This paper proposes a Prompt Deep Light-weight Vessel Segmentation Network (PLVS-Net) that addresses these issues by using prompt blocks. Each prompt block uses a combination of asymmetric kernel convolutions, depth-wise separable convolutions, and ordinary convolutions to extract useful features. This strategy improves the performance of the segmentation network while simultaneously decreasing the number of trainable parameters. Our method outperformed competing approaches in the literature on three benchmark datasets: DRIVE, STARE, and CHASE.
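
A prompt block of the kind described can be sketched as below: asymmetric kernels (1×k and k×1), a depth-wise separable convolution, and an ordinary convolution whose outputs are fused. The channel sizes and the fusion-by-summation choice are assumptions; PLVS-Net's exact block wiring is not reproduced.

```python
# Hedged sketch: asymmetric + depth-wise separable + ordinary convolutions, fused.
import torch
import torch.nn as nn

class PromptBlock(nn.Module):
    def __init__(self, ch: int, k: int = 3):
        super().__init__()
        self.asym = nn.Sequential(                 # asymmetric kernels
            nn.Conv2d(ch, ch, (1, k), padding=(0, k // 2)),
            nn.Conv2d(ch, ch, (k, 1), padding=(k // 2, 0)),
        )
        self.dw_sep = nn.Sequential(               # depth-wise separable
            nn.Conv2d(ch, ch, k, padding=k // 2, groups=ch),
            nn.Conv2d(ch, ch, 1),
        )
        self.plain = nn.Conv2d(ch, ch, k, padding=k // 2)   # ordinary conv
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.asym(x) + self.dw_sep(x) + self.plain(x))

print(PromptBlock(32)(torch.randn(1, 32, 64, 64)).shape)
```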


Subjects
Benchmarking, Macular Degeneration, Humans, Macular Degeneration/diagnostic imaging, Retinal Vessels/diagnostic imaging, Semantics
15.
Neural Netw ; 165: 310-320, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37327578

ABSTRACT

Timely and affordable computer-aided diagnosis of retinal diseases is pivotal in precluding blindness, and accurate retinal vessel segmentation plays an important role in assessing disease progression and diagnosing such vision-threatening diseases. To this end, we propose a Multi-resolution Contextual Network (MRC-Net) that addresses these issues by extracting multi-scale features to learn contextual dependencies between semantically different features and using bi-directional recurrent learning to model former-latter and latter-former dependencies. Another key idea is training in adversarial settings to improve foreground segmentation through optimization of region-based scores. This novel strategy boosts the performance of the segmentation network in terms of the Dice score (and correspondingly the Jaccard index) while keeping the number of trainable parameters comparatively low. We evaluated our method on three benchmark datasets, DRIVE, STARE, and CHASE, demonstrating its superior performance compared with competitive approaches in the literature.
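
A region-based objective of the kind optimized here is the soft Dice loss, sketched below. The smoothing constant and the binary vessel-mask setting are conventional assumptions; the adversarial training loop itself is not shown.

```python
# Hedged sketch: differentiable soft Dice loss for foreground segmentation.
import torch

def soft_dice_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1.0):
    """logits, target: (batch, 1, H, W); target is a binary vessel mask."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = (2 * inter + eps) / (union + eps)
    return 1.0 - dice.mean()

logits = torch.randn(2, 1, 128, 128, requires_grad=True)
mask = (torch.rand(2, 1, 128, 128) > 0.9).float()
loss = soft_dice_loss(logits, mask)
loss.backward()
print(float(loss))
```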


Subjects
Deep Learning, Neural Networks (Computer), Algorithms, Computer-Assisted Image Processing/methods, Retinal Vessels/diagnostic imaging
16.
Article in English | MEDLINE | ID: mdl-37027692

ABSTRACT

In recent years, distributed graph convolutional network (GCN) training frameworks have achieved great success in learning representations of large graph-structured data. However, existing distributed GCN training frameworks incur enormous communication costs, since a multitude of dependent graph data need to be transmitted from other processors. To address this issue, we propose a graph augmentation-based distributed GCN framework (GAD). In particular, GAD has two main components: GAD-Partition and GAD-Optimizer. We first propose an augmentation-based graph partition (GAD-Partition) that divides the input graph into augmented subgraphs to reduce communication by selecting and storing as few significant vertices from other processors as possible. To further speed up distributed GCN training and improve the quality of the training result, we design a subgraph variance-based importance calculation formula and propose a novel weighted global consensus method, collectively referred to as GAD-Optimizer. This optimizer adaptively adjusts the importance of subgraphs to reduce the effect of the extra variance introduced by GAD-Partition on distributed GCN training. Extensive experiments on four large-scale real-world datasets demonstrate that our framework significantly reduces the communication overhead (≈50%), improves the convergence speed (≈2×) of distributed GCN training, and obtains a slight gain in accuracy (≈0.45%) based on minimal redundancy compared to state-of-the-art methods.
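
One way to picture a variance-weighted global consensus is sketched below: subgraphs whose local updates vary more are down-weighted so partition-induced variance is damped. The inverse-variance weighting is an assumption for illustration; the exact formula in GAD-Optimizer is not given in the abstract.

```python
# Hedged sketch: inverse-variance weights for a weighted consensus average.
import numpy as np

def consensus_weights(subgraph_variances, eps=1e-8):
    inv = 1.0 / (np.asarray(subgraph_variances, dtype=float) + eps)
    return inv / inv.sum()                       # weights sum to 1

def weighted_consensus(local_params, weights):
    """local_params: list of same-shaped parameter vectors, one per worker."""
    stacked = np.stack(local_params)             # (workers, dim)
    return (weights[:, None] * stacked).sum(axis=0)

rng = np.random.default_rng(0)
locals_ = [rng.normal(size=8) for _ in range(4)]           # 4 workers
w = consensus_weights([0.2, 0.5, 1.3, 0.4])                # per-subgraph variance
print(w, weighted_consensus(locals_, w))
```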

17.
IEEE/ACM Trans Comput Biol Bioinform ; 20(4): 2587-2597, 2023.
Article in English | MEDLINE | ID: mdl-37028339

ABSTRACT

Depression is a mental disorder characterized by persistently depressed mood or loss of interest in activities, causing significant impairment in daily life. Possible causes include psychological, biological, and social sources of distress. Clinical depression is the more severe form of depression, also known as major depression or major depressive disorder. Recently, electroencephalography (EEG) and speech signals have been used for early diagnosis of depression; however, such work focuses on moderate or severe depression. We combine audio spectrograms and multiple frequency bands of EEG signals to improve diagnostic performance. To do so, we fuse different levels of speech and EEG features to generate descriptive features and apply vision transformers and various pre-trained networks to the speech and EEG spectra. We conducted extensive experiments on the Multimodal Open Dataset for Mental-disorder Analysis (MODMA), which showed a significant improvement in depression diagnosis (precision, recall, and F1 score of 0.972, 0.973, and 0.973, respectively) for patients at the mild stage. In addition, we provide a web-based framework built with Flask and have made the source code publicly available.
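
Feature-level fusion of the two modalities can be sketched as two encoders whose outputs are concatenated before a joint classifier. The small CNN encoders, input shapes, and two-class head are placeholders; the paper's vision transformers and pre-trained networks are not reproduced here.

```python
# Hedged sketch: concatenate speech-spectrogram and EEG-spectrum embeddings.
import torch
import torch.nn as nn

def encoder(in_ch: int, dim: int = 64):
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim),
    )

speech_enc, eeg_enc = encoder(1), encoder(1)
head = nn.Linear(128, 2)                         # depressed vs. control

speech_spec = torch.randn(4, 1, 128, 256)        # log-mel spectrograms (toy)
eeg_spec = torch.randn(4, 1, 64, 128)            # multi-band EEG spectra (toy)

fused = torch.cat([speech_enc(speech_spec), eeg_enc(eeg_spec)], dim=1)
print(head(fused).shape)                         # (4, 2)
```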


Subjects
Major Depressive Disorder, Humans, Major Depressive Disorder/diagnosis, Depression/diagnosis, Speech, Electroencephalography, Software
18.
Article in English | MEDLINE | ID: mdl-38090821

ABSTRACT

The limited availability of large, high-quality annotated datasets in the medical domain poses a substantial challenge for segmentation tasks. To mitigate the reliance on annotated training data, self-supervised pre-training strategies have emerged, particularly those employing contrastive learning on dense pixel-level representations. In this work, we capitalize on intrinsic anatomical similarities within medical image data and develop a semantic segmentation framework based on a self-supervised fusion network for settings where annotated volumes are scarce. In a unified training phase, we combine a segmentation loss with a contrastive loss, enhancing the distinction between significant anatomical regions that adhere to the available annotations. To further improve segmentation performance, we introduce an efficient parallel transformer module that leverages multi-view, multi-scale feature fusion and depth-wise features. The proposed transformer architecture, based on multiple encoders, is trained in a self-supervised manner using the contrastive loss. Initially, the transformer is trained on an unlabeled dataset. We then fine-tune one encoder using data from the first stage and another encoder using a small set of annotated segmentation masks, and concatenate their features for brain tumor segmentation. The multi-encoder transformer model yields significantly better outcomes across three medical image segmentation tasks. We validated our proposed solution across diverse medical image segmentation challenge datasets, demonstrating its efficacy by outperforming state-of-the-art methodologies.
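
Combining a supervised segmentation loss with a contrastive loss in a single step can be sketched as below. The cross-entropy/NT-Xent pairing, the temperature, and the 0.5 weighting are assumptions; the paper's encoders and augmentation pipeline are not shown.

```python
# Hedged sketch: segmentation loss + contrastive loss in one training step.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature: float = 0.1):
    """Contrastive loss over two augmented views' embeddings (batch, dim)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature           # positives on the diagonal
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

seg_logits = torch.randn(2, 4, 64, 64, requires_grad=True)   # 4 tissue classes
seg_target = torch.randint(0, 4, (2, 64, 64))
z_view1 = torch.randn(2, 128, requires_grad=True)
z_view2 = torch.randn(2, 128)

loss = F.cross_entropy(seg_logits, seg_target) + 0.5 * nt_xent(z_view1, z_view2)
loss.backward()
print(float(loss))
```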

19.
Artif Intell Rev ; : 1-47, 2023 Mar 20.
Article in English | MEDLINE | ID: mdl-37362896

ABSTRACT

Chest radiography is the standard and most affordable way to diagnose, analyze, and examine different thoracic and chest diseases. Typically, the radiograph is examined by an expert radiologist or physician to decide whether a particular anomaly exists. Computer-aided methods are used to assist radiologists and make the analysis process accurate, fast, and more automated. A tremendous improvement in automatic chest pathology detection and analysis can be observed with the emergence of deep learning. This survey aims to review, technically evaluate, and synthesize the different computer-aided chest pathology detection systems. The state of the art in single- and multi-pathology detection systems published in the last five years is thoroughly discussed. A taxonomy of image acquisition, dataset preprocessing, feature extraction, and deep learning models is presented, and the mathematical concepts related to feature extraction model architectures are discussed. Moreover, the different articles are compared based on their contributions, datasets, methods used, and the results achieved. The article ends with the main findings, current trends, challenges, and future recommendations.

20.
Neural Comput Appl ; 35(19): 13775-13789, 2023.
Article in English | MEDLINE | ID: mdl-34522068

ABSTRACT

Coronavirus (COVID-19) is a very contagious infection that has drawn the world's attention. Modeling such diseases can be extremely valuable in predicting their effects. Although classic statistical modeling may provide adequate models, it may also fail to capture the data's intricacy. An automatic COVID-19 detection system based on computed tomography (CT) scans or X-ray images is effective, but a robust system design is challenging. In this study, we propose an intelligent healthcare system that integrates IoT and cloud technologies. This architecture uses smart connectivity sensors and deep learning (DL) for intelligent decision-making from the perspective of the smart city. The intelligent system tracks the status of patients in real time and delivers reliable, timely, and high-quality healthcare at low cost. COVID-19 detection experiments are performed using DL to test the viability of the proposed system. We use sensors for recording, transferring, and tracking healthcare data. CT scan images from patients are sent by IoT sensors to the cloud, where the cognitive module is stored. The system determines patient status by examining the CT scan images, and the DL cognitive module makes the real-time decision on the possible course of action. When information is conveyed to the cognitive module, we use a state-of-the-art DL classification algorithm, ResNet50, to detect and classify whether patients are normal or infected with COVID-19. We validate the proposed system's robustness and effectiveness using two publicly available benchmark datasets (the Covid-Chestxray dataset and the Chex-Pert dataset). First, a dataset of 6000 images is prepared from these two datasets. The proposed system was trained on 80% of the collected images and tested on the remaining 20%, with performance evaluated using tenfold cross-validation. The results indicate that the proposed system achieves an accuracy of 98.6%, a sensitivity of 97.3%, a specificity of 98.2%, and an F1-score of 97.87%, clearly higher than the existing state-of-the-art systems. The proposed system will be helpful in medical diagnosis research and healthcare systems, and will also support medical experts in COVID-19 screening by providing a valuable second opinion.
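
The classification stage alone can be sketched as a torchvision ResNet50 with its final layer replaced for the two-class (normal vs. COVID-19) decision. The IoT/cloud pipeline, CT preprocessing, and training loop are omitted, and the toy input shapes are assumptions.

```python
# Hedged sketch: ResNet50 transfer learning for a two-class CT decision.
# Pretrained ImageNet weights are downloaded on first use.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)    # normal vs. COVID-19

scans = torch.randn(4, 3, 224, 224)              # CT slices resized to 224x224 (toy)
with torch.no_grad():
    print(torch.softmax(model(scans), dim=1))
```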
