1.
Sensors (Basel) ; 23(18)2023 Sep 14.
Article in English | MEDLINE | ID: mdl-37765934

ABSTRACT

The automatic detection, visualization, and classification of plant diseases through image datasets are key challenges for precision and smart farming. The technological solutions proposed so far highlight the supremacy of the Internet of Things in data collection, storage, and communication, and of deep learning models in automatic feature extraction and selection. Therefore, the integration of these technologies is emerging as a key tool for the monitoring, data capture, prediction, detection, visualization, and classification of plant diseases from crop images. This manuscript presents a rigorous review of the Internet of Things and deep learning models employed for plant disease monitoring and classification. The review covers the unique strengths and limitations of different architectures, highlights the research gaps identified in the related literature, and presents a comparison of the performance of different deep learning models on publicly available datasets. The comparison gives insights into selecting the optimum deep learning model according to the size of the dataset, the expected response time, and the resources available for computation and storage. This review is therefore useful for developing optimized and hybrid models for plant disease classification.

2.
Sensors (Basel) ; 23(10)2023 May 12.
Article in English | MEDLINE | ID: mdl-37430605

ABSTRACT

An increasing number of patients and a lack of awareness about obstructive sleep apnea are points of concern for the healthcare industry. Polysomnography is recommended by health experts to detect obstructive sleep apnea: the patient is paired with devices that track patterns and activities during sleep. Because polysomnography is a complex and expensive process, it cannot be adopted by the majority of patients, so an alternative is required. Researchers have devised various machine learning algorithms using single-lead signals such as the electrocardiogram, oxygen saturation, etc., for the detection of obstructive sleep apnea, but these methods suffer from low accuracy, limited reliability, and high computation time. The authors therefore introduce two paradigms for the detection of obstructive sleep apnea. The first is MobileNet V1 alone; the other is the convergence of MobileNet V1 with one of two recurrent neural networks, Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU). They evaluate the efficacy of the proposed methods using authentic medical cases from the PhysioNet Apnea-Electrocardiogram database. MobileNet V1 alone achieves an accuracy of 89.5%, the convergence of MobileNet V1 with LSTM achieves 90%, and the convergence of MobileNet V1 with GRU achieves 90.29%. The obtained results demonstrate the superiority of the proposed approach over state-of-the-art methods. To showcase the devised methods in a real-life scenario, the authors design a wearable device that monitors ECG signals and classifies them into apnea and normal. The device employs a security mechanism to transmit the ECG signals securely over the cloud with the consent of patients.


Subject(s)
Deep Learning; Sleep Apnea, Obstructive; Humans; Reproducibility of Results; Sleep Apnea, Obstructive/diagnosis; Sleep; Algorithms
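
As an illustration of how a MobileNet V1 backbone can be combined with a recurrent head of the kind described in this abstract, the sketch below builds such a hybrid in Keras. It assumes the ECG segments have already been converted into image-like inputs (e.g., spectrograms); the input shape, layer sizes, and the Reshape-based sequence construction are illustrative choices, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_mobilenet_gru(input_shape=(224, 224, 3), num_classes=2):
    """MobileNet V1 feature extractor followed by a GRU head (illustrative)."""
    # MobileNet V1 backbone, used here as a fixed feature extractor.
    base = tf.keras.applications.MobileNet(
        include_top=False, weights="imagenet", input_shape=input_shape
    )
    base.trainable = False

    inputs = layers.Input(shape=input_shape)
    x = base(inputs)                   # (7, 7, 1024) feature map
    x = layers.Reshape((49, 1024))(x)  # treat spatial positions as a sequence
    x = layers.GRU(64)(x)              # recurrent aggregation (swap for LSTM for the other variant)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_mobilenet_gru()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```

Freezing the backbone and training only the recurrent head is one common way to keep such a hybrid lightweight enough for wearable or edge deployment.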
3.
Diagnostics (Basel) ; 13(7)2023 Mar 23.
Article in English | MEDLINE | ID: mdl-37046431

ABSTRACT

Disease severity identification using computational intelligence-based approaches is gaining popularity. Artificial intelligence and deep-learning-assisted approaches are proving to be significant in the rapid and accurate diagnosis of several diseases, and in addition to disease identification, they have the potential to identify the severity of a disease. The problem of disease severity identification can be treated as multi-class classification, where the class labels are the severity levels of the disease. Numerous computational intelligence-based solutions for severity identification have been presented by researchers. This paper presents a comprehensive review of recent approaches for identifying disease severity levels using computational intelligence. We followed the PRISMA guidelines and compiled works from the last decade on the severity identification of multidisciplinary diseases from well-known publishers such as MDPI, Springer, IEEE, and Elsevier. The article is devoted mainly to the severity identification of two diseases, Parkinson's Disease and Diabetic Retinopathy; severity identification of a few other diseases, such as COVID-19, autonomic nervous system dysfunction, tuberculosis, sepsis, sleep apnea, psychosis, traumatic brain injury, breast cancer, knee osteoarthritis, and Alzheimer's disease, is also briefly covered. Each work has been carefully examined with respect to its methodology, the dataset used, the type of disease, and several performance metrics such as accuracy and specificity. In addition, we present a few public repositories that can be utilized for research on disease severity identification. We hope that this review not only acts as a compendium but also provides insights to researchers working on disease severity identification using computational intelligence-based approaches.

4.
Sci Rep ; 12(1): 16895, 2022 10 07.
Article in English | MEDLINE | ID: mdl-36207314

ABSTRACT

Increasing data breaches during transmission and storage have become a concern for data owners; even digital images transmitted over the network or stored on servers are prone to unauthorized access. Several image steganography techniques have been proposed in the literature for hiding a secret image by embedding it into a cover medium, but low embedding capacity and poor reconstruction quality are significant limitations of these techniques. To overcome these limitations, deep learning-based image steganography techniques have been proposed. The convolutional neural network (CNN)-based U-Net encoder has gained significant research attention; however, its performance relative to other CNN-based encoders such as V-Net and U-Net++ has not been evaluated for image steganography. In this paper, V-Net and U-Net++ encoders are implemented for image steganography, and a comparative performance assessment of the U-Net, V-Net, and U-Net++ architectures is carried out. These architectures are employed to hide the secret image inside the cover image. Further, a single robust, standard decoder for all architectures is designed to extract the secret image from the stego image. Based on the experimental results, U-Net outperforms the other two architectures: it reports the highest embedding capacity and provides better-quality stego and reconstructed secret images.


Subject(s)
Deep Learning; Image Processing, Computer-Assisted; Algorithms; Image Processing, Computer-Assisted/methods; Neural Networks, Computer
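
For context on the hiding-network idea described above, the sketch below shows what a U-Net-style encoder for image steganography can look like: the cover and secret images are concatenated channel-wise and passed through an encoder-decoder with skip connections that emits the stego image. Filter counts, depth, and image size are assumptions for illustration, not the configuration evaluated in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters):
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_stego_encoder(img_shape=(256, 256, 3)):
    cover = layers.Input(shape=img_shape, name="cover")
    secret = layers.Input(shape=img_shape, name="secret")
    x = layers.Concatenate()([cover, secret])          # 6-channel joint input

    # Contracting path
    c1 = conv_block(x, 32);  p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64); p2 = layers.MaxPooling2D()(c2)
    b = conv_block(p2, 128)                            # bottleneck

    # Expanding path with skip connections
    u2 = layers.UpSampling2D()(b)
    c3 = conv_block(layers.Concatenate()([u2, c2]), 64)
    u1 = layers.UpSampling2D()(c3)
    c4 = conv_block(layers.Concatenate()([u1, c1]), 32)

    stego = layers.Conv2D(3, 1, activation="sigmoid", name="stego")(c4)
    return models.Model([cover, secret], stego)

encoder = build_stego_encoder()
encoder.summary()
```

A matching decoder would take only the stego image as input and be trained jointly so that both the stego/cover and reconstructed/original secret pairs stay close under a pixel-wise loss.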
5.
Contrast Media Mol Imaging ; 2022: 1306664, 2022.
Article in English | MEDLINE | ID: mdl-36304775

ABSTRACT

Artificial Intelligence (AI) has been applied successfully in many real-life domains to solve complex problems. With the advent of Machine Learning (ML) paradigms, it has become convenient for researchers to predict outcomes from past data. ML is now acting as a major weapon against the COVID-19 pandemic by detecting symptomatic cases at an early stage and warning people about its future effects. COVID-19 spread globally in a short period largely because of the shortage of testing facilities and delays in test reports. To address this challenge, AI can be applied to produce fast as well as cost-effective solutions. Many researchers have come up with AI-based solutions for preliminary diagnosis using chest CT images, respiratory sound analysis, comparison of the voices of symptomatic and asymptomatic persons, and so forth. Some AI-based applications claim good accuracy in predicting the chances of being COVID-19-positive. Within a short period, a large body of research has been published on the identification of COVID-19. This paper carefully examines and presents a comprehensive survey of more than 110 papers from various reputed sources, namely Springer, IEEE, Elsevier, MDPI, arXiv, and medRxiv. Most of the papers selected for this survey present candid work to detect and classify COVID-19 using deep-learning-based models on chest X-rays and CT scan images. We hope that this survey covers most of the work and provides insights to the research community for proposing efficient as well as accurate solutions for fighting the pandemic.


Subject(s)
COVID-19; Deep Learning; Humans; COVID-19/diagnostic imaging; Pandemics; Artificial Intelligence; SARS-CoV-2
6.
Comput Methods Programs Biomed ; 224: 107031, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35878485

ABSTRACT

PURPOSE: The alarming increase in diseases of the urinary system is a cause of concern for the populace and health experts. The traditional techniques used for the diagnosis of these diseases are inconvenient for patients, costly, and entail additional waiting time for the reports. The objective of this research is to utilize the proven potential of Artificial Intelligence for organ segmentation. Correct identification and segmentation of the region of interest in a medical image are important to enhance the accuracy of disease diagnosis; it also improves the reliability of the system by ensuring that features are extracted only from the region of interest. METHOD: Many research works have been proposed in the literature for the segmentation of organs using MRI, CT scans, and ultrasound images, but the segmentation of the kidneys, ureters, and bladder from KUB X-ray images remains underexplored, and there is a lack of validated datasets comprising KUB X-ray images. These challenges motivated the authors to partner with a team of radiologists and gather an anonymized, validated dataset that can be used to automate the diagnosis of diseases of the urinary system. They then propose a KUB-UNet model for semantic segmentation of the urinary system. RESULTS: The proposed KUB-UNet model reported the highest accuracy of 99.18% for segmentation of the organs of the urinary system. CONCLUSION: The comparative analysis of its performance with state-of-the-art models and the validation of results by radiology experts demonstrate its reliability, robustness, and superiority. This segmentation phase may prove useful for extracting features only from the region of interest and improving the accuracy of diagnosis.


Subject(s)
Artificial Intelligence; Tomography, X-Ray Computed; Humans; Image Processing, Computer-Assisted/methods; Kidney/diagnostic imaging; Reproducibility of Results; Tomography, X-Ray Computed/methods; X-Rays
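
The abstract reports segmentation accuracy; a common complementary way to score how well a predicted organ mask matches a radiologist-annotated mask is the Dice coefficient, sketched below with NumPy. This is offered as a general illustration of segmentation evaluation, not as the exact metric pipeline used for KUB-UNet.

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, true_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Overlap between predicted and reference binary masks (1.0 = perfect match)."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + true.sum() + eps))

# Example with two toy 4x4 masks that overlap in three pixels.
pred = np.array([[0, 1, 1, 0]] * 4)
true = np.array([[0, 1, 1, 1]] * 4)
print(round(dice_coefficient(pred, true), 3))
```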
7.
Comput Methods Programs Biomed ; 224: 107024, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35863123

ABSTRACT

BACKGROUND AND OBJECTIVE: Chest radiographs (CXR) are in great demand for visualizing the pathology of the lungs. However, the appearance of bones in the lung region hinders the localization of any lesion or nodule present in the CXR, so bone suppression becomes an important task for the effective screening of lung diseases. It is equally important to preserve spatial information and image quality, because they provide crucial insights on the size and area of infection, color accuracy, structural quality, etc. Many researchers have treated bone suppression as an image denoising problem and proposed conditional Generative Adversarial Network (cGAN)-based models for generating bone-suppressed images from CXRs, but these works do not focus on the retention of spatial features and image quality. The authors of this manuscript developed the Spatial Feature and Resolution Maximization (SFRM) GAN to efficiently minimize the visibility of bones in CXRs while ensuring maximum retention of critical information. METHOD: This is achieved by modifying the architectures of the discriminator and generator of the pix2pix model. The discriminator is combined with the Wasserstein GAN with Gradient Penalty to increase its performance and training stability. For the generator, a combination of task-specific loss functions, viz. L1, perceptual, and Sobel loss, is employed to capture the intrinsic information in the image. RESULT: On a test set of 100 images, the proposed model reported a mean PSNR of 43.588, a mean NMSE of 0.00025, a mean SSIM of 0.989, and a mean entropy of 0.454 bits/pixel. Further, the combination of δ=10⁴, α=1, β=10, and γ=10 is the set of hyperparameters that provided the best trade-off between image denoising and quality retention. CONCLUSION: The degree of bone suppression and the preservation of spatial information can be improved by adding the Sobel and perceptual losses, respectively. SFRM-GAN not only suppresses bones but also retains the image quality and intrinsic information. Based on the results of Student's t-test, it is concluded that SFRM-GAN yields statistically significant results at a 0.95 level of confidence and outperforms the state-of-the-art models. Thus, it may be used for denoising and preprocessing of images.


Subject(s)
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Bones/diagnostic imaging; Humans; Image Processing, Computer-Assisted/methods; Radiography
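
To make the kind of composite generator objective described in this abstract concrete, the sketch below combines a Wasserstein adversarial term with L1, perceptual, and Sobel edge losses in TensorFlow. The assignment of the weights δ, α, β, and γ to individual terms, and the use of an L1 distance for the perceptual term, are assumptions for illustration; the paper's exact formulation may differ.

```python
import tensorflow as tf

def sobel_edge_map(img):
    """Per-channel edge magnitude; tf.image.sobel_edges returns shape (..., H, W, C, 2)."""
    edges = tf.image.sobel_edges(img)
    return tf.sqrt(tf.reduce_sum(tf.square(edges), axis=-1) + 1e-8)

def generator_loss(fake, target, critic_score, fake_feats, target_feats,
                   delta=1e4, alpha=1.0, beta=10.0, gamma=10.0):
    """Weighted sum of adversarial, pixel, perceptual, and edge terms (illustrative weighting)."""
    adv = -tf.reduce_mean(critic_score)                                   # Wasserstein adversarial term
    l1 = tf.reduce_mean(tf.abs(fake - target))                            # pixel-wise L1
    perceptual = tf.reduce_mean(tf.abs(fake_feats - target_feats))        # feature-space distance
    sobel = tf.reduce_mean(tf.abs(sobel_edge_map(fake) - sobel_edge_map(target)))  # edge consistency
    return delta * adv + alpha * l1 + beta * perceptual + gamma * sobel
```

Here `fake_feats` and `target_feats` stand for activations from a fixed feature network (e.g., a pretrained VGG) evaluated on the generated and reference images; they are placeholders in this sketch.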
8.
Multimed Syst ; 28(4): 1251-1262, 2022.
Article in English | MEDLINE | ID: mdl-34305327

ABSTRACT

Amidst the global pandemic and catastrophe created by COVID-19, every research institution and scientist is making their best effort to find a vaccine or medicine for the disease. The objective of this research is to design and develop a deep learning-based multi-modal model for the screening of COVID-19 using chest radiographs and genomic sequences. The model is also effective in finding the degree of genomic similarity between Severe Acute Respiratory Syndrome Coronavirus 2 and other prevalent viruses such as Severe Acute Respiratory Syndrome Coronavirus, Middle East Respiratory Syndrome Coronavirus, Human Immunodeficiency Virus, and Human T-cell Leukaemia Virus. The experimental results on the datasets available at the National Center for Biotechnology Information, GitHub, and Kaggle repositories show that it is successful in detecting the genome of SARS-CoV-2 in the host genome with an accuracy of 99.27% and in classifying chest radiographs into COVID-19, non-COVID pneumonia, and healthy with a sensitivity of 95.47%. Thus, it may prove a useful tool for doctors to quickly classify infected and non-infected genomes. It can also be useful in identifying the most effective drug among the available drugs for the treatment of COVID-19.
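
As a small illustration of how a genomic sequence can be prepared for a convolutional classifier of the kind described above, the snippet below one-hot encodes a nucleotide string into a numeric matrix. This is a generic preprocessing step assumed for illustration, not a description of the paper's exact encoding.

```python
import numpy as np

# Fixed channel order for the four nucleotides; unknown symbols (e.g., 'N') map to all zeros.
NUCLEOTIDES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot_encode(sequence: str) -> np.ndarray:
    """Encode a nucleotide string as a (length, 4) binary matrix suitable for a 1D CNN."""
    seq = sequence.upper()
    encoded = np.zeros((len(seq), 4), dtype=np.float32)
    for i, base in enumerate(seq):
        idx = NUCLEOTIDES.get(base)
        if idx is not None:
            encoded[i, idx] = 1.0
    return encoded

# Example: a short fragment encoded and ready to be fed to a convolutional model.
print(one_hot_encode("ACGTN").shape)  # (5, 4)
```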

9.
Sensors (Basel) ; 21(16)2021 Aug 09.
Article in English | MEDLINE | ID: mdl-34450827

ABSTRACT

Decreases in crop yield and degradation of product quality due to plant diseases such as rust and blast in pearl millet are causes of concern for farmers and the agriculture industry, and the provision of expert advice for disease identification is also a challenge for farmers. The traditional techniques adopted for plant disease detection require considerable human intervention, are inconvenient for farmers, and have a high cost of deployment, operation, and maintenance. There is therefore a requirement for automating plant disease detection and classification. Deep learning and IoT-based solutions have been proposed in the literature for plant disease detection and classification, but there is substantial scope to develop low-cost systems by integrating these techniques for data collection, feature visualization, and disease detection. This research aims to develop the 'Automatic and Intelligent Data Collector and Classifier' framework by integrating IoT and deep learning. The framework automatically collects imagery and parametric data from the pearl millet farmland at ICAR, Mysore, India, and sends the collected data to the cloud server and the Raspberry Pi. The 'Custom-Net' model designed as part of this research is deployed on the cloud server and collaborates with the Raspberry Pi to precisely predict blast and rust diseases in pearl millet. Moreover, Grad-CAM is employed to visualize the features extracted by the 'Custom-Net'. Furthermore, the impact of transfer learning on the 'Custom-Net' and state-of-the-art models, viz. Inception ResNet-V2, Inception-V3, ResNet-50, VGG-16, and VGG-19, is shown in this manuscript. Based on the experimental results and the feature visualization by Grad-CAM, it is observed that the 'Custom-Net' extracts relevant features and that transfer learning improves the extraction of relevant features. Additionally, the 'Custom-Net' model reports a classification accuracy of 98.78%, which is equivalent to state-of-the-art models viz. Inception ResNet-V2, Inception-V3, ResNet-50, VGG-16, and VGG-19. Although the classification performance of the 'Custom-Net' is comparable to that of state-of-the-art models, it reduces training time by 86.67%, which makes the model more suitable for automating disease detection. This demonstrates that the proposed model is effective in providing a low-cost and handy tool for farmers to improve crop yield and product quality.


Subject(s)
Pennisetum; Agriculture; Humans; Machine Learning; Plant Diseases
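
Since the abstract leans on Grad-CAM for feature visualization, the sketch below shows a typical Grad-CAM computation in TensorFlow/Keras: gradients of the predicted class with respect to the last convolutional feature map are average-pooled into channel weights and used to form a heatmap. The model and layer name are placeholders; this is the generic recipe, not the 'Custom-Net' code.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name):
    """Return a normalized heatmap of the regions driving the predicted class."""
    grad_model = tf.keras.models.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        class_idx = int(tf.argmax(preds[0]))
        class_score = preds[:, class_idx]
    grads = tape.gradient(class_score, conv_out)         # d(class score) / d(feature map)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))      # global-average-pooled gradients
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)  # weighted sum over channels
    cam = tf.nn.relu(cam)                                # keep only positive influence
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```

The resulting heatmap is usually resized to the input resolution and overlaid on the leaf image to check whether the model attends to lesion regions rather than background.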
10.
Sensors (Basel) ; 21(14)2021 Jul 12.
Article in English | MEDLINE | ID: mdl-34300489

ABSTRACT

In the modern era, deep learning techniques have emerged as powerful tools in image recognition, and Convolutional Neural Networks in particular have attained impressive outcomes in this area. Applications such as identifying objects, faces, bones, handwritten digits, and traffic signs signify the importance of Convolutional Neural Networks in the real world. Their effectiveness in image recognition motivates researchers to extend their applications to the field of agriculture for recognition of plant species, yield management, weed detection, soil and water management, fruit counting, disease and pest detection, evaluation of the nutrient status of plants, and much more. The volume of research on applying deep learning models in agriculture makes it difficult to select a suitable model for a given type of dataset and experimental environment. In this manuscript, the authors present a survey of the existing literature on applying deep Convolutional Neural Networks to predict plant diseases from leaf images. The manuscript compares the pre-processing techniques, Convolutional Neural Network models, frameworks, and optimization techniques applied to detect and classify plant diseases using leaf images as the dataset, and also surveys the datasets and performance metrics used to evaluate the efficacy of models. The manuscript highlights the advantages and disadvantages of the different techniques and models proposed in the existing literature. This survey will ease the task of researchers working on applying deep learning techniques to the identification and classification of plant leaf diseases.


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Plant Diseases; Plant Leaves; Surveys and Questionnaires
11.
Int J Imaging Syst Technol ; 31(2): 483-498, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33821094

ABSTRACT

The objective of this research is to develop a convolutional neural network model, 'COVID-Screen-Net', for multi-class classification of chest X-ray images into three classes, viz. COVID-19, bacterial pneumonia, and normal. The model performs automatic feature extraction from X-ray images, accurately identifies the features responsible for distinguishing the X-ray images of the different classes, and visualizes these features using Grad-CAM. The authors optimized the number of convolution and activation layers according to the size of the dataset and fine-tuned the hyperparameters to minimize the computation time and enhance the efficiency of the model. The performance of the model has been evaluated on anonymous chest X-ray images collected from hospitals and on a dataset available on the web. The model attains an average accuracy of 97.71% and a maximum recall of 100%. The comparative analysis shows that 'COVID-Screen-Net' outperforms existing systems for the screening of COVID-19, and the effectiveness of the model has been validated by radiology experts on a real-time dataset. Therefore, it may prove a useful tool for quick and low-cost mass screening of COVID-19 patients and may reduce the burden on health experts during the global pandemic. The copyright of this tool is registered in the names of the authors under Indian intellectual property law with the registration number 'SW-13625/2020'.
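
As a rough sketch of the kind of compact three-class CNN classifier this abstract describes, the Keras model below maps a chest X-ray to the COVID-19, bacterial pneumonia, and normal classes. The number of convolutional blocks, filter sizes, and input resolution are illustrative assumptions; the published 'COVID-Screen-Net' architecture and its tuned hyperparameters are not reproduced here.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_three_class_cnn(input_shape=(224, 224, 3)):
    """Small CNN for three-way chest X-ray classification (illustrative layer counts)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(3, activation="softmax"),  # COVID-19, bacterial pneumonia, normal
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.Recall()])
    return model

model = build_three_class_cnn()
model.summary()
```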
