Results 1 - 20 of 35
1.
Sci Rep ; 14(1): 22533, 2024 09 28.
Article in English | MEDLINE | ID: mdl-39342030

ABSTRACT

Recent developments have highlighted the critical role that computer-aided diagnosis (CAD) systems play in analyzing whole-slide digital histopathology images for detecting gastric cancer (GC). We present a novel framework for gastric histology classification and segmentation (GHCS) that offers modest yet meaningful improvements over existing CAD models for GC classification and segmentation. Our methodology achieves marginal improvements over conventional deep learning (DL) and machine learning (ML) models by adaptively focusing on pertinent image characteristics. The proposed model performs well on normalized images and is robust in handling variability and in generalizing to different datasets, which we expect to translate into better results across varied data. At the heart of the proposed GHCS framework is an expectation-maximization-based Naïve Bayes classifier built on an updated Gaussian Mixture Model. The effectiveness of our classifier is demonstrated by experimental validation on two publicly available datasets, which yielded exceptional classification accuracies of 98.87% and 97.28% on the validation sets and 98.47% and 97.31% on the test sets. Comparative analysis shows that our framework provides a slight but consistent improvement over existing techniques for gastric histopathology image classification, which may be attributed to its ability to better capture critical features of gastric histopathology images. Furthermore, using an improved Fuzzy c-means method, our study achieves good results in GC histopathology image segmentation, outperforming state-of-the-art segmentation models with a Dice coefficient of 65.21% and a Jaccard index of 60.24%. The model's interpretability is complemented by Grad-CAM visualizations, which help explain the decision-making process and increase the model's trustworthiness for end users, especially clinicians.
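To make the classification idea above concrete, here is a minimal, hedged sketch of a generative classifier that fits one Gaussian Mixture Model per class with expectation-maximization and predicts through Bayes' rule; the feature vectors, component count, and toy data are illustrative placeholders and do not reproduce the authors' GHCS implementation.

```python
# Minimal sketch: a generative classifier that fits one Gaussian Mixture Model
# per class with EM and predicts via Bayes' rule. Feature vectors, the number
# of mixture components, and the data split are illustrative placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

class GMMBayesClassifier:
    def __init__(self, n_components=3):
        self.n_components = n_components
        self.models = {}      # class label -> fitted GaussianMixture
        self.log_priors = {}  # class label -> log prior probability

    def fit(self, X, y):
        for label in np.unique(y):
            Xc = X[y == label]
            gmm = GaussianMixture(n_components=self.n_components,
                                  covariance_type="full", random_state=0)
            gmm.fit(Xc)                      # EM estimation of the mixture
            self.models[label] = gmm
            self.log_priors[label] = np.log(len(Xc) / len(X))
        return self

    def predict(self, X):
        labels = sorted(self.models)
        # log p(x | class) + log p(class) for every class, then pick the argmax
        scores = np.column_stack([self.models[c].score_samples(X) + self.log_priors[c]
                                  for c in labels])
        return np.array(labels)[np.argmax(scores, axis=1)]

# Toy usage with random "feature vectors" standing in for histology descriptors
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 16)), rng.normal(2, 1, (100, 16))])
y = np.array([0] * 100 + [1] * 100)
clf = GMMBayesClassifier(n_components=2).fit(X, y)
print("training accuracy:", (clf.predict(X) == y).mean())
```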


Subject(s)
Diagnosis, Computer-Assisted; Stomach Neoplasms; Stomach Neoplasms/pathology; Stomach Neoplasms/classification; Stomach Neoplasms/diagnostic imaging; Humans; Diagnosis, Computer-Assisted/methods; Deep Learning; Image Processing, Computer-Assisted/methods; Machine Learning; Bayes Theorem; Algorithms; Image Interpretation, Computer-Assisted/methods
4.
Brain Inform ; 10(1): 25, 2023 Sep 09.
Article in English | MEDLINE | ID: mdl-37689601

ABSTRACT

Early identification of mental disorders, based on subjective interviews, is extremely challenging in the clinical setting. There is growing interest in developing automated screening tools for potential mental health problems based on biological markers. Here, we demonstrate the feasibility of AI-powered diagnosis of different mental disorders using EEG data. Specifically, this work aims to accurately classify different mental disorders in the following ecological context: (1) using raw EEG data, (2) collected during rest, (3) under both eyes-open and eyes-closed conditions, (4) at a short 2-min duration, (5) on participants with different psychiatric conditions, (6) with some overlapping symptoms, and (7) with strongly imbalanced classes. To tackle this challenge, we designed and optimized a transformer-based architecture in which class imbalance is addressed through focal loss and class-weight balancing. Using the recently released TDBRAIN dataset (n = 1274 participants), our method classifies each participant as either neurotypical or suffering from major depressive disorder (MDD), attention deficit hyperactivity disorder (ADHD), subjective memory complaints (SMC), or obsessive-compulsive disorder (OCD). We evaluate the performance of the proposed architecture at both the window level and the patient level. Classification of the 2-min raw EEG data into five classes achieved a window-level accuracy of 63.2% and 65.8% for the eyes-open and eyes-closed conditions, respectively. When classification is limited to three main classes (MDD, ADHD, SMC), window-level accuracy improved to 75.1% and 69.9% for the eyes-open and eyes-closed conditions, respectively. Our work paves the way for novel AI-based methods for accurately diagnosing mental disorders from raw resting-state EEG data.
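As a rough illustration of the class-imbalance handling mentioned above (focal loss combined with per-class weights), here is a minimal PyTorch sketch; the gamma value, the weight vector, and the five-class toy setup are assumptions, not values from the paper.

```python
# Minimal sketch: multi-class focal loss with per-class weights (alpha).
# gamma, the class weights, and the five-class toy setup are assumptions.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha, gamma=2.0):
    """logits: (N, C), targets: (N,) int64 labels, alpha: (C,) per-class weights."""
    log_probs = F.log_softmax(logits, dim=1)                       # (N, C)
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # (N,)
    pt = log_pt.exp()
    weight = alpha[targets]                                        # per-sample class weight
    return (-weight * (1.0 - pt) ** gamma * log_pt).mean()

# Toy usage: 5 classes (e.g., neurotypical, MDD, ADHD, SMC, OCD) with illustrative weights
logits = torch.randn(8, 5)
targets = torch.randint(0, 5, (8,))
alpha = torch.tensor([0.2, 1.0, 1.0, 1.5, 2.0])  # stand-in inverse-frequency weights
print(focal_loss(logits, targets, alpha).item())
```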

5.
J Pak Med Assoc ; 73(6): 1349-1352, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37427652

ABSTRACT

The Institute of Biomedical Sciences (IBMS) at Dow University of Health Sciences (DUHS) organised a two-day conference on biomedical sciences. IBMS, part of one of the largest public-sector health universities in Pakistan, is now working to translate research trends effectively at the community level. With a strong PhD faculty in basic and clinical sciences, DUHS contributes significantly to the research output of the country. However, the scientific data typically represent a small population per study, so results cannot be generalized; they must be extended through translational research to be effective. The conference was therefore planned around the theme of bridging the gap between basic and translational research. The two-day conference, held in the second week of March 2023 at the Dow International Medical College Ojha Campus, DUHS, attracted more than 300 participants. The scientific sessions covered a wide variety of health issues and their proposed solutions, including neurosciences, virtual biopsies, metabolomics, medical writing, and the incorporation of engineering and artificial intelligence to facilitate disease detection and prognosis. The conference concluded that multidisciplinary research studies conducted in collaboration between two or more institutes/organizations are the need of the hour, and that young researchers need an effective platform to showcase their research and build collaborations. Moreover, the incorporation of artificial intelligence would enhance patient care within health systems.


Subject(s)
Artificial Intelligence; Biomedical Research; Humans; Pakistan; Faculty; Academies and Institutes; Metabolomics
6.
Front Genet ; 14: 1185065, 2023.
Article in English | MEDLINE | ID: mdl-37359369

ABSTRACT

Introduction: Epilepsy is a group of neurological disorders characterized by recurring seizures and fits. Epilepsy genes can be classified into four distinct groups based on the involvement of these genes in different pathways leading to epilepsy as a phenotype. Genetically, the disease has been associated with various pathways: pure epilepsy-related disorders caused by CNTN2 variations, disorders involving physical or systemic problems along with epilepsy caused by CARS2 and ARSA, and disorders driven by genes putatively involved in epilepsy, led by CLCN4 variations. Methods: In this study, five families of Pakistani origin (EP-01, EP-02, EP-04, EP-09, and EP-11) were included for molecular diagnosis. Results: Clinical presentations of these patients included neurological symptoms such as delayed development, seizures, regression, myoclonic epilepsy, progressive spastic tetraparesis, vision and hearing impairment, speech problems, muscle fibrillation, tremors, and cognitive decline. Whole exome sequencing in index patients and Sanger sequencing in all available individuals in each family identified four novel homozygous variants: CARS2 c.655G>A p.Ala219Thr (EP-01), ARSA c.338T>C p.Leu113Pro (EP-02) and c.938G>T p.Arg313Leu (EP-11), and CNTN2 c.1699G>T p.Glu567Ter (EP-04), as well as one novel hemizygous variant, CLCN4 c.2167C>T p.Arg723Trp (EP-09). Conclusion: To the best of our knowledge, these variants are novel and have not previously been reported in familial epilepsy. The variants were absent in 200 ethnically matched healthy control chromosomes. Three-dimensional protein analyses revealed drastic changes in the normal functions of the variant proteins. Furthermore, these variants were designated as "pathogenic" according to the 2015 guidelines of the American College of Medical Genetics. Owing to overlapping phenotypes among the patients, clinical subtyping was not possible; however, whole exome sequencing successfully pinpointed the molecular diagnosis, which could help in the better management of these patients. We therefore recommend exome sequencing as a first-line molecular diagnostic test in familial cases.

7.
Expert Syst Appl ; 229: 120477, 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37220492

ABSTRACT

In December 2019, COVID-19 emerged in Wuhan, China, and the ensuing global pandemic affected human life and the worldwide economy. An efficient diagnostic system is therefore required to control its spread. However, automatic diagnosis is challenged by a limited amount of labeled data, minor contrast variation, and high structural similarity between infection and background. In this regard, a new two-phase deep convolutional neural network (CNN) based diagnostic system is proposed to detect minute irregularities and analyze COVID-19 infection. In the first phase, a novel SB-STM-BRNet CNN is developed, incorporating new channel Squeezed and Boosted (SB) and dilated convolution-based Split-Transform-Merge (STM) blocks to detect COVID-19-infected lung CT images. The new STM blocks perform multi-path region-smoothing and boundary operations, which help to learn minor contrast variations and global COVID-19-specific patterns. Furthermore, diverse boosted channels are obtained using the SB and transfer learning concepts in the STM blocks to learn texture variation between COVID-19-specific and healthy images. In the second phase, COVID-19-infected images are provided to the novel COVID-CB-RESeg segmentation CNN to identify and analyze COVID-19 infectious regions. The proposed COVID-CB-RESeg methodically employs region-homogeneity and region-heterogeneity operations in each encoder-decoder block, along with a boosted decoder that uses auxiliary channels, to simultaneously learn the low illumination and the boundaries of the COVID-19-infected region. The proposed diagnostic system yields good performance with an accuracy of 98.21%, F-score of 98.24%, Dice similarity of 96.40%, and IoU of 98.85% for the COVID-19-infected region. The proposed diagnostic system would reduce the radiologist's burden and strengthen their decisions for a fast and accurate COVID-19 diagnosis.
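The split-transform-merge idea with dilated convolutions can be illustrated with the following minimal PyTorch sketch; the channel counts, dilation rates, and concatenation-based merge are assumptions and do not reproduce the SB-STM-BRNet design.

```python
# Illustrative split-transform-merge (STM) block: the input is transformed along
# parallel dilated-convolution paths and the results are merged by concatenation.
# Channel counts, dilation rates, and the 1x1 merge are assumptions.
import torch
import torch.nn as nn

class DilatedSTMBlock(nn.Module):
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.paths = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.merge = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        # transform the same input along each dilated path, then merge
        return self.merge(torch.cat([p(x) for p in self.paths], dim=1))

# Toy usage on a CT-sized feature map
block = DilatedSTMBlock(in_ch=3, out_ch=16)
print(block(torch.randn(1, 3, 128, 128)).shape)  # torch.Size([1, 16, 128, 128])
```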

8.
Genes (Basel) ; 14(1)2023 01 05.
Article in English | MEDLINE | ID: mdl-36672886

ABSTRACT

Background: Hermansky-Pudlak syndrome (HPS) was first reported in 1959 as oculocutaneous albinism with bleeding abnormalities and now comprises 11 distinct, genetically heterogeneous disorders caused by mutations affecting four protein complexes: AP-3, BLOC1, BLOC2, and BLOC3. Most patients show albinism and a bleeding diathesis; additional features may be present depending on which protein complex is defective. Subtypes 3 and 4 are caused by mutations in the HPS3 and HPS4 genes, respectively. Methods: In this study, two Pakhtun consanguineous families, ALB-09 and ALB-10, were enrolled for clinical and molecular diagnosis. Whole-exome sequencing (WES) of the index patient in each family, followed by Sanger sequencing of all available samples, was performed using the rare-disease diagnostic services of 3Billion Inc., South Korea. Results: The affected individuals of families ALB-09 and ALB-10 showed typical HPS phenotypes such as oculocutaneous albinism, poor vision, nystagmus, nystagmus-induced involuntary head nodding, bleeding diathesis, and enterocolitis; however, immune system weakness was not recorded. WES analyses of one index patient revealed a novel nonsense variant (NM_032383.4: HPS3; c.2766T > G) in family ALB-09 and a five-bp deletion variant (NM_001349900.2: HPS4; c.1180_1184delGTTCC) in family ALB-10. Sanger sequencing confirmed homozygous segregation of the disease alleles in all affected individuals of the respective families. Conclusions: The substitution c.2766T > G creates premature protein termination at codon 922 of HPS3, replacing a tyrosine residue with a stop codon (p.Tyr922Ter), while the deletion c.1180_1184delGTTCC leads to a reading frameshift and a premature termination codon that adds 23 aberrant amino acids to the HPS4 protein (p.Val394Pro395fsTer23). To the best of our knowledge, the two novel variants identified in the HPS3 and HPS4 genes causing Hermansky-Pudlak syndrome are the first reported from the Pakhtun Pakistani population. Our work expands the pathogenic spectrum of HPS3 and HPS4, provides a successful molecular diagnosis, and helps the families in genetic counselling and in reducing the disease burden in their future generations.


Subject(s)
Hermanski-Pudlak Syndrome; Humans; Disease Susceptibility; Frameshift Mutation; Hermanski-Pudlak Syndrome/genetics; Intracellular Signaling Peptides and Proteins/genetics; Mutation; Proteins/genetics
9.
Environ Monit Assess ; 194(8): 550, 2022 Jul 01.
Article in English | MEDLINE | ID: mdl-35776215

ABSTRACT

Climate variability is widely recognized as a major concern, particularly in resource-scarce regions, where it limits livelihood opportunities by putting additional strain on already depleting resources, resulting in human insecurity and conflict. Some vulnerability assessments have established a nexus between climate variability and conflict. The Climate-Water Conflict Vulnerability Index (CWCVI) and the Climate-Agriculture Conflict Vulnerability Index (CACVI) are applied here as tools for exploring climate-conflict interactions and for contrasting the vulnerabilities of the coastal districts of Badin, Thatta, and Sujawal. The analysis incorporates the dual exposure of communities to climate variability and to conflict over water and agricultural resources. The study finds that aggression and feelings of insecurity about depleting resources are the main contributing indicators of climate-conflict vulnerability in the coastal districts. District Sujawal showed higher vulnerability in adaptive capacity than the other districts because of poor infrastructure and high dependency on natural resources, whereas district Badin demonstrated high vulnerability in terms of sensitivity and high exposure to conflicts over agricultural resources. The overall CWCVI and CACVI scores were higher in Badin and Thatta, respectively. The study identifies a number of indicators that can be used to improve the efficacy of mitigation strategies to reduce conflict vulnerability in future policy directions and resource planning.


Subject(s)
Climate Change; Environmental Monitoring; Climate; Humans; Pakistan; Water
10.
Data Brief ; 43: 108366, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35734019

ABSTRACT

This data article describes the collection and annotation of an image dataset of the two most common fruit fly species, Bactrocera zonata and Bactrocera dorsalis. The dataset is released as a collection of more than 2000 images captured from two sources: images of specially reared fruit fly specimens taken in the laboratory with a 48-megapixel smartphone camera, and images of fruit flies captured by an 8-megapixel Raspberry Pi camera through insect traps installed in fruit orchards. Each image sample is associated with a ground-truth label that identifies the fruit fly species. The dataset has been classified and annotated using an object detection method into the two fruit fly species with an average accuracy of 85%. The classification and annotation results were validated by expert entomologists, who manually examined test samples in a laboratory setting. This dataset is best suited for developing smart monitoring systems that provide advisory services to farmers through mobile applications offering real-time information about fruit fly species for effective control and management.

11.
Expert Syst Appl ; 202: 117360, 2022 Sep 15.
Article in English | MEDLINE | ID: mdl-35529253

ABSTRACT

The recent disaster of COVID-19 has brought the whole world to the verge of devastation because of the virus's highly transmissible nature. In this pandemic, radiographic imaging modalities, particularly computed tomography (CT), have shown remarkable performance for the effective diagnosis of the virus. However, the diagnostic assessment of CT data is a human-dependent process that requires considerable time from expert radiologists. Recent developments in artificial intelligence have substituted several personal diagnostic procedures with computer-aided diagnosis (CAD) methods that can make an effective diagnosis, even in real time. In response to COVID-19, various CAD methods have been developed in the literature that can detect and localize infectious regions in chest CT images. However, most existing methods do not provide cross-data analysis, which is an essential measure for assessing the generality of a CAD method. The few studies that have performed cross-data analysis show limited results in real-world scenarios because generality issues are not addressed. Therefore, in this study, we attempt to address these generality issues and propose a deep learning-based CAD solution for the diagnosis of COVID-19 lesions from chest CT images. We propose a dual multiscale dilated fusion network (DMDF-Net) for the robust segmentation of small lesions in a given CT image. The proposed network mainly exploits multiscale deep feature fusion inside the encoder and decoder modules in a mutually beneficial manner to achieve superior segmentation performance. Additional pre- and post-processing steps are introduced to address the generality issues and further improve diagnostic performance. In particular, the concept of post-region-of-interest (ROI) fusion is introduced in the post-processing step, which reduces the number of false positives and provides a way to accurately quantify the infected area of the lung. Consequently, the proposed framework outperforms various state-of-the-art methods, accomplishing superior infection segmentation results with an average Dice similarity coefficient of 75.7%, Intersection over Union of 67.22%, Average Precision of 69.92%, Sensitivity of 72.78%, Specificity of 99.79%, Enhance-Alignment Measure of 91.11%, and Mean Absolute Error of 0.026.
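For reference, the overlap metrics reported above can be computed from binary masks as in the generic sketch below; it is not the authors' evaluation code, and the random masks merely stand in for predicted and ground-truth lesion segmentations.

```python
# Minimal sketch: Dice coefficient, IoU, sensitivity, specificity, and precision
# computed from binary prediction/ground-truth masks. Purely illustrative.
import numpy as np

def segmentation_metrics(pred, gt, eps=1e-7):
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    return {
        "dice": 2 * tp / (2 * tp + fp + fn + eps),
        "iou": tp / (tp + fp + fn + eps),
        "sensitivity": tp / (tp + fn + eps),
        "specificity": tn / (tn + fp + eps),
        "precision": tp / (tp + fp + eps),
    }

# Toy usage with random masks standing in for lesion segmentations
rng = np.random.default_rng(1)
pred = rng.random((256, 256)) > 0.5
gt = rng.random((256, 256)) > 0.5
print(segmentation_metrics(pred, gt))
```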

12.
Cureus ; 14(3): e23172, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35444893

ABSTRACT

Cold agglutinin disease (CAD) is a type of hemolytic anemia in which cold agglutinins cause red blood cells to agglutinate in the cooler parts of the body, producing hemolysis. Cold agglutinin-mediated hemolytic anemia can occur in the setting of an underlying viral infection, autoimmune disorder, or lymphoid malignancy, in which case it is referred to as secondary cold agglutinin syndrome, or without one of these underlying disorders, in which case it is referred to as primary CAD (also known as idiopathic CAD). We present a case of a 71-year-old female with hemolytic anemia due to primary CAD. Secondary causes of CAD, including infections, autoimmune disorders, and malignancy, were ruled out, and she was successfully treated with prednisone.

13.
Cureus ; 14(2): e21918, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35273864

ABSTRACT

Cefepime is a fourth-generation cephalosporin with anti-pseudomonal coverage. It has been known to cause neurotoxicity, especially in critically ill patients and those with renal impairment. This neurotoxicity is poorly characterized and under-recognized. We present a case of cefepime-induced neurotoxicity in a 74-year-old woman being treated for cellulitis and osteomyelitis. Symptoms were gradual in onset and included confusion, verbal perseveration, and myoclonus. EEG findings included generalized periodic discharges (GPD) and generalized rhythmic delta activity with admixed sharps (GRDA + S). Symptoms resolved one to two days after the cessation of cefepime and anti-epileptic therapy with lorazepam, topiramate, and levetiracetam. We follow this with a discussion of the available literature and recommend regular therapeutic drug monitoring in the future.

14.
J Pers Med ; 12(1)2022 Jan 14.
Article in English | MEDLINE | ID: mdl-35055427

ABSTRACT

BACKGROUND: Early recognition of prostheses before reoperation can reduce perioperative morbidity and mortality. Because of the intricacy of shoulder biomechanics, accurate classification of implant models before surgery is fundamental for planning the correct medical procedure and setting up the apparatus for personalized medicine. Expert surgeons usually use X-ray images of prostheses to set up the patient-specific apparatus. However, this subjective method is time-consuming and prone to errors. METHOD: As an alternative, artificial intelligence has played a vital role in orthopedic surgery and clinical decision-making for accurate prosthesis placement. In this study, three different deep learning-based frameworks are proposed to identify different types of shoulder implants in X-ray scans. We mainly propose an efficient ensemble network called the Inception Mobile Fully-Connected Convolutional Network (IMFC-Net), which comprises two of our designed convolutional neural networks and a classifier. To evaluate the performance of IMFC-Net and state-of-the-art models, experiments were performed on a public data set of 597 de-identified patients (597 shoulder implants). Moreover, to demonstrate the generalizability of IMFC-Net, experiments were performed with two augmentation techniques and without augmentation, in which our model ranked first with a considerable margin over the comparison models. A gradient-weighted class activation map technique was also used to identify the distinct implant characteristics that drive IMFC-Net classification decisions. RESULTS: The results confirmed that the proposed IMFC-Net model yielded an average accuracy of 89.09%, a precision of 89.54%, a recall of 86.57%, and an F1-score of 87.94%, which were higher than those of the comparison models. CONCLUSION: The proposed model is efficient and can minimize the revision complexities of implants.
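The general ensemble idea (two CNN feature extractors feeding a shared fully connected classifier) can be sketched as below using torchvision backbones as stand-ins; the backbones, feature dimensions, and four-class head are assumptions, and this is not the IMFC-Net architecture.

```python
# Illustrative two-backbone ensemble: features from two CNNs are concatenated
# and classified by a fully connected head. Backbones, feature sizes, and the
# four-class output are stand-in assumptions (torchvision >= 0.13 API).
import torch
import torch.nn as nn
from torchvision import models

class TwoBackboneEnsemble(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        # Two stand-in feature extractors with their classification heads removed
        self.net_a = models.mobilenet_v2(weights=None).features
        self.net_b = models.resnet18(weights=None)
        self.net_b.fc = nn.Identity()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(1280 + 512, num_classes)  # concatenated features

    def forward(self, x):
        fa = self.pool(self.net_a(x)).flatten(1)  # (N, 1280)
        fb = self.net_b(x)                        # (N, 512)
        return self.classifier(torch.cat([fa, fb], dim=1))

# Toy usage on an X-ray-sized input
model = TwoBackboneEnsemble(num_classes=4)
print(model(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 4])
```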

15.
J Pers Med ; 11(10)2021 Oct 07.
Article in English | MEDLINE | ID: mdl-34683149

ABSTRACT

BACKGROUND: Early and accurate detection of COVID-19-related findings (such as well-aerated regions, ground-glass opacity, crazy paving and linear opacities, and consolidation) in lung computed tomography (CT) scans is crucial for preventive measures and treatment. However, the visual assessment of lung CT scans is a time-consuming process, particularly in the case of trivial lesions, and requires medical specialists. METHOD: A recent breakthrough in deep learning methods has boosted the diagnostic capability of computer-aided diagnosis (CAD) systems and further aided health professionals in making effective diagnostic decisions. In this study, we propose a domain-adaptive CAD framework, namely the dilated aggregation-based lightweight network (DAL-Net), for effective recognition of trivial COVID-19 lesions in CT scans. Our network design achieves a fast execution speed (inference time of 43 ms on a single image) with optimal memory consumption (almost 9 MB). To evaluate the performance of the proposed and state-of-the-art models, we considered two publicly accessible datasets, namely COVID-19-CT-Seg (comprising a total of 3520 images of 20 different patients) and MosMed (comprising a total of 2049 images of 50 different patients). RESULTS: Our method exhibits an average area under the curve (AUC) of up to 98.84%, 98.47%, and 95.51% for COVID-19-CT-Seg, MosMed, and the cross-dataset setting, respectively, and outperforms various state-of-the-art methods. CONCLUSIONS: These results demonstrate that deep learning-based models are an effective tool for building a robust CAD solution based on CT data in response to the present COVID-19 disaster.

16.
J Pers Med ; 11(6)2021 Jun 04.
Article in English | MEDLINE | ID: mdl-34199932

ABSTRACT

Accurate nuclear segmentation in histopathology images plays a key role in digital pathology. It is considered a prerequisite for the determination of cell phenotype, nuclear morphometrics, cell classification, and the grading and prognosis of cancer. However, it is a very challenging task because of the different types of nuclei, large intraclass variations, and diverse cell morphologies. Consequently, the manual inspection of such images under high-resolution microscopes is tedious and time-consuming. Alternatively, artificial intelligence (AI)-based automated techniques, which are fast, robust, and require less human effort, can be used. Recently, several AI-based nuclear segmentation techniques have been proposed; they have shown significant performance improvements for this task, but there is room for further improvement. Thus, we propose an AI-based nuclear segmentation technique that adopts a new nuclear segmentation network empowered by residual skip connections to address this issue. Experiments were performed on two publicly available datasets: (1) The Cancer Genome Atlas (TCGA) and (2) Triple-Negative Breast Cancer (TNBC). The results show that our proposed technique achieves an aggregated Jaccard index (AJI) of 0.6794, a Dice coefficient of 0.8084, and an F1-measure of 0.8547 on the TCGA dataset, and an AJI of 0.7332, a Dice coefficient of 0.8441, a precision of 0.8352, a recall of 0.8306, and an F1-measure of 0.8329 on the TNBC dataset. These values are higher than those of the state-of-the-art methods.
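A residual skip connection of the kind mentioned above can be sketched generically as follows; the block is only illustrative and the channel count is an arbitrary assumption.

```python
# Illustrative residual skip connection unit: a convolutional block whose input
# is added back to its output. The channel count is an arbitrary assumption.
import torch
import torch.nn as nn

class ResidualSkipBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + x)  # residual skip connection

print(ResidualSkipBlock(32)(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```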

17.
Sensors (Basel) ; 21(14)2021 Jul 06.
Article in English | MEDLINE | ID: mdl-34300373

ABSTRACT

Among the many available biometric identification methods, finger-vein recognition has the advantages of being difficult to counterfeit, as finger veins are located under the skin, and of high user convenience, as a non-invasive image capture device is used for recognition. However, blurring can occur when acquiring finger-vein images, and such blur can be categorized into three main types: skin scattering blur, caused by light scattering in the skin layer; optical blur, caused by lens focus mismatch; and motion blur, caused by finger movement. Blurred images of these kinds can significantly reduce finger-vein recognition performance, so restoration of blurred finger-vein images is necessary. Most previous studies have addressed the restoration of skin-scattering-blurred images, and some have addressed the restoration of optically blurred images; however, there has been no research on restoring motion-blurred finger-vein images, which can occur in real environments. To address this problem, this study proposes a new method for improving finger-vein recognition performance by restoring motion-blurred finger-vein images using a modified deblur generative adversarial network (modified DeblurGAN). In experiments conducted on two open databases, the Shandong University homologous multi-modal traits (SDUMLA-HMT) finger-vein database and the Hong Kong Polytechnic University finger-image database version 1, the proposed method demonstrates outstanding performance that surpasses state-of-the-art methods.
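Motion blur of the kind discussed above is commonly simulated by convolving an image with a linear kernel, for example when preparing training pairs for a deblurring network; the sketch below shows only that generic degradation step (with an assumed horizontal kernel), not the modified DeblurGAN restoration itself.

```python
# Minimal sketch: simulate horizontal motion blur by convolving an image with a
# normalized linear kernel. Kernel length and orientation are assumptions.
import numpy as np
from scipy.ndimage import convolve

def motion_blur_horizontal(image, length=9):
    kernel = np.zeros((length, length))
    kernel[length // 2, :] = 1.0 / length  # horizontal line kernel
    return convolve(image, kernel, mode="nearest")

# Toy usage on a random grayscale array standing in for a finger-vein image
image = np.random.default_rng(0).random((120, 320))
blurred = motion_blur_horizontal(image)
print(blurred.shape, float(blurred.mean()))
```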


Subject(s)
Biometry; Veins; Fingers/diagnostic imaging; Hong Kong; Humans; Motion
18.
J Pers Med ; 11(6)2021 May 27.
Article in English | MEDLINE | ID: mdl-34072079

ABSTRACT

Re-operations and revisions are often performed in patients who have undergone total shoulder arthroplasty (TSA) and reverse total shoulder arthroplasty (RTSA). This necessitates accurate recognition of the implant model and manufacturer in order to set up the correct apparatus and procedure for the patient's anatomy, as a form of personalized medicine. Owing to unavailability and ambiguity in a patient's medical data, expert surgeons identify the implants through visual comparison of X-ray images. Errors in this identification cause morbidity, extra financial burden, and wasted time. Despite significant advancements in pattern recognition and deep learning in the medical field, extremely limited research has been conducted on classifying shoulder implants. To overcome these problems, we propose a robust deep learning-based framework comprising an ensemble of convolutional neural networks (CNNs) to classify shoulder implants in X-ray images of different patients. Through our rotational invariant augmentation, the size of the training dataset is increased 36-fold. The modified ResNet and DenseNet are then deeply combined to form a dense residual ensemble-network (DRE-Net). To evaluate DRE-Net, experiments were executed with 10-fold cross-validation on the openly available shoulder implant X-ray dataset. The experimental results showed that DRE-Net achieved an accuracy, F1-score, precision, and recall of 85.92%, 84.69%, 85.33%, and 84.11%, respectively, which were higher than those of the state-of-the-art methods. Moreover, we confirmed the generalization capability of our network by testing it in an open-world configuration, as well as the effectiveness of the rotational invariant augmentation.
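A 36-fold rotational augmentation of the kind described can be sketched as follows, assuming a 10-degree rotation step (the step size is not stated in the abstract).

```python
# Illustrative rotational augmentation: each image yields 36 rotated copies.
# The 10-degree step is an assumption inferred from the 36-fold increase.
from PIL import Image

def rotational_augment(image, n_rotations=36):
    step = 360 / n_rotations
    return [image.rotate(i * step) for i in range(n_rotations)]

# Toy usage on a blank stand-in X-ray image
augmented = rotational_augment(Image.new("L", (224, 224)))
print(len(augmented))  # 36
```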

19.
Appl Soft Comput ; 108: 107490, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33994894

ABSTRACT

Currently, the coronavirus disease 2019 (COVID-19) pandemic has killed more than one million people worldwide. In the present outbreak, radiological imaging modalities such as computed tomography (CT) and X-rays are being used to diagnose this disease, particularly in the early stage. However, the assessment of radiographic images involves a subjective evaluation that is time-consuming and requires substantial clinical skill. Nevertheless, the recent evolution of artificial intelligence (AI) has further strengthened computer-aided diagnosis tools and supported medical professionals in making effective diagnostic decisions. Therefore, in this study, the strength of various AI algorithms was analyzed for diagnosing COVID-19 infection from large-scale radiographic datasets. Based on this analysis, a lightweight deep network is proposed; it is the first ensemble design (based on MobileNet, ShuffleNet, and FCNet) in the medical domain (particularly for COVID-19 diagnosis) that uses a reduced number of trainable parameters (a total of 3.16 million) and outperforms various existing models. Moreover, the addition of a multilevel activation visualization layer in the proposed network visualizes the lesion patterns as multilevel class activation maps (ML-CAMs) along with the diagnostic result (COVID-19 positive or negative). Such additional ML-CAM output provides visual insight into the computer's decision and may assist radiologists in validating it, particularly in uncertain situations. Additionally, a novel hierarchical training procedure was adopted to train the proposed network: it trains the network for an adaptive number of epochs determined by the validation dataset rather than for a fixed number of epochs. The quantitative results show the better performance of the proposed training method over the conventional end-to-end training procedure. A large collection of CT-scan and X-ray data (based on six publicly available datasets) was used to evaluate the performance of the proposed model and other baseline methods. The experimental results of the proposed network exhibit promising diagnostic performance: an average F1 score (F1) of 94.60% and 95.94% and an area under the curve (AUC) of 97.50% and 97.99% are achieved for the CT-scan and X-ray datasets, respectively. Finally, a detailed comparative analysis reveals that the proposed model outperforms various state-of-the-art methods in terms of both quantitative and computational performance.
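The adaptive-epoch idea (training continues only while the validation score keeps improving, instead of running for a fixed epoch count) can be sketched generically as below; the patience value and the stand-in callables are assumptions, not the authors' hierarchical training procedure.

```python
# Generic sketch of validation-driven adaptive training: stop when the validation
# metric has not improved for `patience` epochs instead of using a fixed count.
# train_one_epoch and evaluate are placeholders for the actual training code.
import random

def train_adaptive(model, train_one_epoch, evaluate, patience=5, max_epochs=200):
    best_score, best_epoch = float("-inf"), 0
    for epoch in range(1, max_epochs + 1):
        train_one_epoch(model)
        score = evaluate(model)               # e.g., validation F1 or accuracy
        if score > best_score:
            best_score, best_epoch = score, epoch
        elif epoch - best_epoch >= patience:  # no improvement for `patience` epochs
            break
    return best_score, epoch

# Toy usage with stand-in callables
print(train_adaptive(None, lambda m: None, lambda m: random.random(),
                     patience=3, max_epochs=20))
```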

20.
IEEE J Biomed Health Inform ; 25(6): 1881-1891, 2021 06.
Article in English | MEDLINE | ID: mdl-33835928

ABSTRACT

In the present epidemic of coronavirus disease 2019 (COVID-19), radiological imaging modalities such as X-ray and computed tomography (CT) have been identified as effective diagnostic tools. However, the subjective assessment of radiographic examinations is a time-consuming task and demands expert radiologists. Recent advancements in artificial intelligence have enhanced the diagnostic power of computer-aided diagnosis (CAD) tools and assisted medical specialists in making efficient diagnostic decisions. In this work, we propose an optimal multilevel deep-aggregated boosted network to recognize COVID-19 infection from heterogeneous radiographic data, including X-ray and CT images. Our method leverages multilevel deep-aggregated features and multistage training via a mutually beneficial approach to maximize overall CAD performance. To improve the interpretation of CAD predictions, these multilevel deep features are visualized as additional outputs that can assist radiologists in validating the CAD results. A total of six publicly available datasets were fused to build a single large-scale heterogeneous radiographic collection, which was used to analyze the performance of the proposed technique and other baseline methods. To preserve the generality of our method, different patients' data were selected for training, validation, and testing, so that data from the same patient were not included in more than one of these subsets. In addition, fivefold cross-validation was performed in all experiments for a fair evaluation. Our method exhibits promising performance values of 95.38%, 95.57%, 92.53%, 98.14%, 93.16%, and 98.55% in terms of average accuracy, F-measure, specificity, sensitivity, precision, and area under the curve, respectively, and outperforms various state-of-the-art methods.
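Patient-level separation of the kind described (no patient's data appearing in more than one subset) can be enforced with grouped splitting, as in the scikit-learn sketch below; the arrays are placeholders for image features, labels, and patient identifiers.

```python
# Minimal sketch: five-fold cross-validation with patient-level grouping so that
# images from the same patient never appear in both training and test folds.
import numpy as np
from sklearn.model_selection import GroupKFold

X = np.arange(20).reshape(-1, 1)                       # stand-in image features
y = np.random.default_rng(0).integers(0, 2, size=20)   # stand-in labels
patient_ids = np.repeat(np.arange(5), 4)               # 5 patients, 4 images each

for fold, (train_idx, test_idx) in enumerate(
        GroupKFold(n_splits=5).split(X, y, groups=patient_ids)):
    train_patients = set(patient_ids[train_idx])
    test_patients = set(patient_ids[test_idx])
    assert train_patients.isdisjoint(test_patients)    # no patient leakage
    print(f"fold {fold}: test patients {sorted(test_patients)}")
```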


Subject(s)
COVID-19/diagnostic imaging; Deep Learning; COVID-19/virology; Diagnosis, Computer-Assisted/methods; Humans; Neural Networks, Computer; SARS-CoV-2/isolation & purification; Tomography, X-Ray Computed/methods