Results 1 - 14 of 14
1.
Med Biol Eng Comput ; 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38684593

ABSTRACT

Diabetic retinopathy (DR) and diabetic macular edema (DME) are serious eye conditions associated with diabetes; if left untreated, they can lead to permanent blindness. Traditional screening for these conditions relies on manual image analysis by experts, which is time-consuming and costly due to the scarcity of such experts. To overcome these challenges, we present a modified CornerNet approach with DenseNet-100. This system aims to localize and classify lesions associated with DR and DME. To train our model, we first generate annotations for the input samples; these annotations specify the location and type of lesions within the retinal images. DenseNet-100 is a deep CNN used for feature extraction, and CornerNet is a one-stage object detection model known for its ability to accurately localize small objects, which makes it suitable for detecting lesions in retinal images. We assessed our technique on two challenging datasets, EyePACS and IDRiD, which contain a diverse range of retinal images, an important property for estimating the performance of our model. The proposed model was also tested in a cross-corpus scenario on two further challenging datasets, APTOS-2019 and Diaretdb1, to assess the generalizability of our system. According to the conducted analysis, our method outperformed the latest approaches in both qualitative and quantitative results. The ability to effectively localize small abnormalities and to handle over-fitting is a key strength of the proposed framework, which can assist practitioners in the timely recognition of such eye ailments.
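CornerNet's ability to localize small lesions rests on its corner pooling operation, which aggregates evidence along rows and columns toward each corner of a box. A minimal numpy sketch of top-left corner pooling (an illustration of the general CornerNet operation, not this paper's implementation):

```python
import numpy as np

def top_left_corner_pool(feat):
    """Top-left corner pooling as in CornerNet: for each cell, take the
    max over the cell and everything to its right, plus the max over
    the cell and everything below it."""
    # horizontal pass: running max from right to left along each row
    h = np.flip(np.maximum.accumulate(np.flip(feat, axis=1), axis=1), axis=1)
    # vertical pass: running max from bottom to top along each column
    v = np.flip(np.maximum.accumulate(np.flip(feat, axis=0), axis=0), axis=0)
    return h + v

feat = np.array([[0., 1., 0.],
                 [2., 0., 0.],
                 [0., 0., 3.]])
pooled = top_left_corner_pool(feat)
```

A strong response at a cell indicates that an object's topmost and leftmost boundaries pass through its row and column, which is what lets the detector place corners for small objects precisely.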

2.
Sensors (Basel) ; 23(10)2023 May 12.
Article in English | MEDLINE | ID: mdl-37430604

ABSTRACT

Brain tumors, caused by the uncontrollable proliferation of brain cells inside the skull, are among the most severe types of cancer. Hence, a fast and accurate tumor detection method is critical for the patient's health. Many automated artificial intelligence (AI) methods have recently been developed to diagnose tumors; these approaches, however, suffer from poor performance, so an efficient technique for precise diagnosis is needed. This paper suggests a novel approach for brain tumor detection via an ensemble of deep and hand-crafted feature vectors (FV). The novel FV combines hand-crafted features based on the GLCM (gray level co-occurrence matrix) with deep features based on VGG16. The ensemble FV contains more robust features than either vector alone, which improves the suggested method's discriminating capability. The proposed FV is then classified using support vector machines (SVM) and the k-nearest neighbor (KNN) classifier. The framework achieved its highest accuracy of 99% on the ensemble FV. The results indicate the reliability and efficacy of the proposed methodology; hence, radiologists can use it to detect brain tumors through MRI (magnetic resonance imaging). The results also show the robustness of the proposed method, which can be deployed in a real environment to accurately detect brain tumors from MRI images. In addition, the performance of our model was validated via cross-tabulated data.
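The hand-crafted half of the ensemble rests on the GLCM, which counts how often pairs of gray levels co-occur at a fixed pixel offset; texture statistics such as contrast and energy are then read off the normalized matrix. A hedged numpy sketch (illustrative offsets and statistics, not the paper's exact configuration):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray level co-occurrence matrix for a fixed (dx, dy) offset,
    normalized so that its entries sum to 1."""
    m = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows - dy):
        for c in range(cols - dx):
            m[img[r, c], img[r + dy, c + dx]] += 1
    return m / m.sum()

def glcm_features(m):
    """Two classic Haralick-style statistics computed from the GLCM."""
    i, j = np.indices(m.shape)
    contrast = np.sum(m * (i - j) ** 2)  # local intensity variation
    energy = np.sum(m ** 2)              # textural uniformity
    return contrast, energy

img = np.array([[0, 0, 1],
                [1, 1, 0],
                [2, 2, 2]])
m = glcm(img, levels=3)
contrast, energy = glcm_features(m)
```

In practice such statistics would be computed over several offsets and angles and concatenated with the deep VGG16 features to form the ensemble vector.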


Subject(s)
Artificial Intelligence , Brain Neoplasms , Humans , Brain , Brain Neoplasms/diagnostic imaging , Reproducibility of Results
3.
Diagnostics (Basel) ; 13(2)2023 Jan 09.
Article in English | MEDLINE | ID: mdl-36673057

ABSTRACT

The competence of machine learning approaches to carry out clinical expertise tasks has recently gained a lot of attention, particularly in the field of medical-imaging examination. Among the most frequently used clinical-imaging modalities in the healthcare profession is chest radiography, which calls for prompt reporting of potential anomalies and illness diagnostics in images. Automated frameworks for the recognition of chest abnormalities in X-rays are being introduced in health departments. However, the reliable detection and classification of particular illnesses in chest X-ray samples remains a complicated issue because of the complex structure of radiographs, e.g., the large exposure dynamic range. Moreover, the incidence of various image artifacts and extensive inter- and intra-category resemblances further increases the difficulty of chest disease recognition. The aim of this study was to resolve these problems. We propose a deep learning (DL) approach to the detection of chest abnormalities in the X-ray modality using the EfficientDet (CXray-EffDet) model. Specifically, we employed the EfficientNet-B0-based EfficientDet-D0 model to compute a reliable set of sample features and accomplish the detection and classification task by categorizing eight categories of chest abnormalities in X-ray images. The effective feature computation of the CXray-EffDet model enhances chest abnormality recognition due to its high recall rate, and it presents a lightweight and computationally robust approach. A large-scale evaluation of the model on a standard database from the National Institutes of Health (NIH) was conducted to demonstrate the chest disease localization and categorization performance of the CXray-EffDet model. We attained an AUC score of 0.9080, along with an IOU of 0.834, which clearly demonstrates the competency of the introduced model.
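The reported IOU of 0.834 measures the overlap between predicted and ground-truth boxes. A minimal sketch of the standard intersection-over-union computation for axis-aligned boxes given as (x1, y1, x2, y2):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    # coordinates of the intersection rectangle
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    # clamp to zero when the boxes do not overlap
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

full = iou((0, 0, 10, 10), (0, 0, 10, 10))     # identical boxes -> 1.0
partial = iou((0, 0, 10, 10), (5, 5, 15, 15))  # 25 / 175
```

An IOU threshold (often 0.5) then decides whether a predicted box counts as a correct localization.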

4.
Sci Rep ; 12(1): 18568, 2022 11 03.
Article in English | MEDLINE | ID: mdl-36329073

ABSTRACT

Detecting and classifying tomato plant diseases at the earliest stage can save farmers from expensive crop sprays and help increase food yield. Although extensive work has been presented for tomato plant disease classification, the timely localization and identification of various tomato leaf diseases remains a complex job because of the strong similarity between the healthy and affected portions of plant leaves. Furthermore, the low contrast between the background and foreground of the suspected sample further complicates the plant leaf disease detection process. To deal with these challenges, we present a robust deep learning (DL)-based approach, namely ResNet-34-based Faster-RCNN, for tomato plant leaf disease classification. The proposed method includes three basic steps. Firstly, we generate annotations of the suspected images to specify the region of interest (RoI). In the next step, we introduce ResNet-34, along with a Convolutional Block Attention Module (CBAM), as the feature extractor of Faster-RCNN to extract the deep key points. Finally, the calculated features are utilized to train the Faster-RCNN model to locate and categorize the numerous tomato plant leaf anomalies. We tested the presented work on an accessible standard database, the PlantVillage Kaggle dataset. More specifically, we obtained mAP and accuracy values of 0.981 and 99.97%, respectively, along with a test time of 0.23 s. Both qualitative and quantitative results confirm that the presented solution is robust for the detection of plant leaf disease and can replace manual systems. Moreover, the proposed method offers a low-cost solution to tomato leaf disease classification that is robust to several image transformations, such as variations in the size, color, and orientation of the diseased leaf portion.
Furthermore, the framework can locate the affected plant leaves in the presence of blurring, noise, chrominance, and brightness variations. The reported results confirm that our approach is robust for classifying several tomato leaf diseases under varying image-capturing conditions. In the future, we plan to extend our approach to other parts of plants as well.
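The CBAM used in the feature extractor reweights channels using both average- and max-pooled descriptors passed through a shared bottleneck MLP. A simplified numpy sketch of the channel-attention half (illustrative; the MLP weights here are random stand-ins for learned parameters):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """CBAM-style channel attention for a (C, H, W) feature map.
    Average- and max-pooled channel descriptors share one two-layer
    MLP (w1: C -> C/r, w2: C/r -> C); their summed outputs are
    squashed with a sigmoid and used to rescale each channel."""
    avg = feat.mean(axis=(1, 2))  # (C,) average-pooled descriptor
    mx = feat.max(axis=(1, 2))    # (C,) max-pooled descriptor
    att = sigmoid(w2 @ np.maximum(0, w1 @ avg) +
                  w2 @ np.maximum(0, w1 @ mx))  # (C,) in (0, 1)
    return feat * att[:, None, None]

rng = np.random.default_rng(0)
C, r = 8, 2
feat = rng.standard_normal((C, 4, 4))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
out = channel_attention(feat, w1, w2)
```

Because the attention weights lie in (0, 1), the module can only attenuate channels, emphasizing those most informative for the diseased regions; the full CBAM follows this with an analogous spatial-attention step.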


Subject(s)
Deep Learning , Solanum lycopersicum , Plant Diseases , Plant Leaves
5.
Front Med (Lausanne) ; 9: 1005920, 2022.
Article in English | MEDLINE | ID: mdl-36405585

ABSTRACT

In the last 2 years, we have witnessed multiple waves of coronavirus that affected millions of people around the globe. No definitive cure for COVID-19 has yet been found, as vaccinated people have also become infected with this disease. Precise and timely detection of COVID-19 can save human lives and spare patients complicated treatment procedures. Researchers have employed several medical imaging modalities, such as CT scans and X-rays, for COVID-19 detection; however, little attention has been given to ECG imaging analysis. ECGs are a more readily available imaging modality than CT scans and X-rays, so we use them for diagnosing COVID-19. Efficient and effective detection of COVID-19 from the ECG signal is a complex and time-consuming task, as researchers usually convert the signals into numeric values before applying any method, which ultimately increases the computational burden. In this work, we overcome these challenges by directly employing the ECG images in a deep-learning (DL)-based approach. More specifically, we introduce the Efficient-ECGNet method, an improved version of the EfficientNetV2-B4 model with additional dense layers, capable of accurately classifying ECG images into healthy, COVID-19, myocardial infarction (MI), abnormal heartbeat (AHB), and previous history of myocardial infarction (PMI) classes. Moreover, we introduce a module to measure the similarity of COVID-19-affected ECG images with those of the other diseases. To the best of our knowledge, this is the first effort to approximate the correlation of COVID-19 patients with those having any previous or current history of cardiac or respiratory disease. Further, we generate heatmaps to demonstrate the accurate key-point computation ability of our method.
We performed extensive experimentation on a publicly available dataset to show the robustness of the proposed approach and confirmed that the Efficient-ECGNet framework reliably classifies ECG-based COVID-19 samples.
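A similarity module that compares the COVID-19 class with the other disease classes can be approximated by cosine similarity between mean class feature vectors. A hedged sketch (the embeddings below are toy placeholders, not Efficient-ECGNet features):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# toy mean embeddings standing in for two disease classes
covid_feat = np.array([0.9, 0.1, 0.4])
mi_feat = np.array([0.8, 0.2, 0.5])
sim = cosine_similarity(covid_feat, mi_feat)
```

Computing such a score between the COVID-19 class centroid and each other class centroid quantifies how closely the ECG signatures overlap.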

6.
Comput Math Methods Med ; 2022: 7502504, 2022.
Article in English | MEDLINE | ID: mdl-36276999

ABSTRACT

Melanoma is a dangerous form of skin cancer that is fatal for patients at an advanced stage. Researchers have attempted to develop automated systems for the timely recognition of this deadly disease. However, reliable and precise identification of melanoma moles is a tedious and complex activity, as there are huge differences in the mass, structure, and color of skin lesions. Additionally, the incidence of noise, blurring, and chrominance changes in the suspected images further increases the complexity of the detection procedure. In the proposed work, we overcome the limitations of existing work by presenting a deep learning (DL) model. Descriptively, after accomplishing the preprocessing task, we utilize an object detection approach, namely the CornerNet model, to detect melanoma lesions. The localized moles are then passed as input to a fuzzy k-means (FKM) clustering approach to perform the segmentation task. To assess the segmentation power of the proposed approach, two standard databases, ISIC-2017 and ISIC-2018, are employed. Extensive experimentation has been conducted to demonstrate the robustness of the proposed approach through both numeric and pictorial results. The proposed approach is capable of detecting and segmenting moles of arbitrary shapes and orientations. Furthermore, the presented work can tackle the presence of noise, blurring, and brightness variations as well. We attained segmentation accuracy values of 99.32% and 99.63% over the ISIC-2017 and ISIC-2018 databases, respectively, which clearly depicts the effectiveness of our model for melanoma mole segmentation.
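Fuzzy k-means assigns each pixel a soft membership to every cluster rather than a hard label, which suits lesion boundaries that fade gradually into skin. A compact numpy sketch of the standard fuzzy c-means update on 1-D intensities (illustrative; real use would cluster color/texture features of the localized mole):

```python
import numpy as np

def fuzzy_kmeans(x, k=2, m=2.0, iters=50):
    """Fuzzy c-means on 1-D data: alternate membership and centroid
    updates; m > 1 controls the fuzziness of the memberships."""
    # deterministic init: spread centers across the data range
    centers = np.quantile(x, np.linspace(0, 1, k))
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12  # (n, k) distances
        # u[i, a] = 1 / sum_b (d[i, a] / d[i, b]) ** (2 / (m - 1))
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
        # centers are membership-weighted means
        centers = (u ** m).T @ x / (u ** m).sum(axis=0)
    return centers, u

# bright "lesion" intensities vs. dark "skin" intensities
x = np.array([0.1, 0.15, 0.2, 0.8, 0.85, 0.9])
centers, u = fuzzy_kmeans(x, k=2)
```

Thresholding the lesion-cluster membership map then yields the final segmentation mask.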


Subject(s)
Melanoma , Moles , Skin Neoplasms , Humans , Animals , Image Processing, Computer-Assisted/methods , Algorithms , Melanoma/diagnostic imaging , Cluster Analysis , Skin Neoplasms/diagnostic imaging , Dermoscopy/methods
7.
Front Plant Sci ; 13: 957961, 2022.
Article in English | MEDLINE | ID: mdl-36160977

ABSTRACT

Early recognition of tomato plant leaf diseases is mandatory to improve food yield and save agriculturalists from costly spray procedures. The correct and timely identification of several tomato plant leaf diseases is a complicated task, as the healthy and affected areas of plant leaves are highly similar. Moreover, the incidence of light variation, color and brightness changes, and the occurrence of blurring and noise in the images further increase the complexity of the detection process. In this article, we present a robust deep learning approach for tackling the existing issues of tomato plant leaf disease detection and classification. We propose a novel approach, namely the DenseNet-77-based CornerNet model, for the localization and classification of tomato plant leaf abnormalities. Specifically, we use DenseNet-77 as the backbone network of CornerNet. This assists in computing a more representative set of image features from the suspected samples, which are later categorized into 10 classes by the one-stage detector of the CornerNet model. We evaluated the proposed solution on a standard dataset, named PlantVillage, which is challenging in nature as it contains samples with immense brightness alterations, color variations, and leaf images of different dimensions and shapes. We attained an average accuracy of 99.98% over the employed dataset. We conducted several experiments to confirm the effectiveness of our approach for the timely recognition of tomato plant leaf diseases, which can assist agriculturalists in replacing manual systems.

8.
Front Plant Sci ; 13: 808380, 2022.
Article in English | MEDLINE | ID: mdl-35755664

ABSTRACT

Agricultural development plays a very important role in a country's economy. However, the occurrence of several plant diseases is a major hindrance to the growth rate and quality of crops. The exact determination and categorization of crop leaf diseases is a complex and time-consuming activity due to the low-contrast information in the input samples. Moreover, alterations in the size, location, and structure of the diseased crop portion, and the existence of noise and blurring in the input images, further complicate the classification task. To solve the problems of existing techniques, a robust drone-based deep learning approach is proposed. More specifically, we introduce an improved EfficientNetV2-B4 with additional dense layers added at the end of the architecture. The customized EfficientNetV2-B4 calculates the deep key points and classifies them into their related classes using an end-to-end training architecture. For performance evaluation, a standard dataset, namely PlantVillage Kaggle, along with samples captured using a drone is used, which is challenging because of its varied image samples taken under diverse capturing conditions. We attained average precision, recall, and accuracy values of 99.63%, 99.93%, and 99.99%, respectively. The obtained results confirm the robustness of our approach in comparison to other recent techniques and also show lower time complexity.

9.
Microsc Res Tech ; 85(6): 2313-2330, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35194866

ABSTRACT

The COVID-19 pandemic is spreading at a fast pace around the world and has a high mortality rate. There is no proper treatment for COVID-19, and its multiple variants, for example, Alpha, Beta, Gamma, and Delta, being more infectious in nature, are affecting millions of people and further complicating the detection process, so victims are at risk of death. However, timely and accurate diagnosis of this deadly virus can not only save patients' lives but also spare them complex treatment procedures. Accurate segmentation and classification of COVID-19 is a tedious job due to the extensive variations in its shape and its similarity with other diseases, such as pneumonia. Furthermore, existing techniques have hardly focused on estimating infection growth over time, which can assist doctors in better analyzing the condition of COVID-19-affected patients. In this work, we overcome the shortcomings of existing studies by proposing a model capable of segmenting and classifying COVID-19 from computed tomography images and predicting its behavior over a certain period. The framework comprises four main steps: (i) data preparation, (ii) segmentation, (iii) infection growth estimation, and (iv) classification. After performing the pre-processing step, we introduce a DenseNet-77-based UNET approach. Initially, DenseNet-77 is used in the encoder module of the UNET model to calculate the deep keypoints, which are later segmented to show the coronavirus region. Then, the infection growth of COVID-19 per patient is estimated using blob analysis. Finally, we employ the DenseNet-77 framework as an end-to-end network to classify the input images into three classes, namely healthy, COVID-19-affected, and pneumonia images. We evaluated the proposed model over the COVID-19-20 and COVIDx CT-2A datasets for the segmentation and classification tasks, respectively.
Furthermore, unlike existing techniques, we performed a cross-dataset evaluation to show the generalization ability of our method. The quantitative and qualitative evaluation confirms that our method is robust for both COVID-19 segmentation and classification and can accurately predict infection growth in a certain time frame. RESEARCH HIGHLIGHTS: We present an improved UNET framework with a DenseNet-77-based encoder for deep keypoint extraction to enhance the identification and segmentation performance of the coronavirus while also reducing computational complexity. We propose a computationally robust approach for COVID-19 infection segmentation owing to fewer model parameters. Robust segmentation of COVID-19 is achieved due to the accurate feature computation power of DenseNet-77. A module is introduced to predict the infection growth of COVID-19 for a patient to analyze its severity over time. We present a framework that can effectively classify the samples into several classes, that is, COVID-19, pneumonia, and healthy samples. Rigorous experimentation was performed, including cross-dataset evaluation, to prove the efficacy of the presented technique.
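Blob analysis for infection growth estimation reduces to measuring the connected lesion regions in the segmentation mask at successive time points and comparing their areas. A small pure-Python sketch using 4-connectivity flood fill (an illustration of the idea, not the paper's implementation; the two toy masks stand in for segmentations of the same lung at two scan dates):

```python
def blob_areas(mask):
    """Return the area (pixel count) of each 4-connected blob of 1s."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    areas = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] == 1 and not seen[r][c]:
                # iterative flood fill over this blob
                stack, area = [(r, c)], 0
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                areas.append(area)
    return areas

scan_day1 = [[1, 1, 0, 0],
             [0, 0, 0, 0],
             [0, 0, 1, 0]]
scan_day7 = [[1, 1, 0, 0],
             [1, 1, 0, 0],
             [0, 0, 1, 1]]
growth = sum(blob_areas(scan_day7)) - sum(blob_areas(scan_day1))
```

Converting the pixel counts to physical units via the CT voxel spacing would give the infected volume change per patient over time.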


Subject(s)
COVID-19 , Pneumonia , COVID-19/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Pandemics , Tomography, X-Ray Computed/methods
10.
Sensors (Basel) ; 22(2)2022 Jan 07.
Article in English | MEDLINE | ID: mdl-35062405

ABSTRACT

Glaucoma is an eye disease caused by excessive intraocular pressure, and it leads to complete blindness at an advanced stage, whereas timely screening-based treatment can save the patient from complete vision loss. Accurate screening procedures depend on the availability of human experts who perform manual analysis of retinal samples to identify glaucomatous regions. However, due to complex glaucoma screening procedures and a shortage of human resources, delays are common, which can increase the rate of vision loss around the globe. To cope with the challenges of manual systems, there is an urgent demand for an effective automated framework that can accurately identify Optic Disc (OD) and Optic Cup (OC) lesions at the earliest stage. Efficient and effective identification and classification of glaucomatous regions is a complicated job due to the wide variations in the mass, shade, orientation, and shape of lesions. Furthermore, the extensive similarity between lesion and eye color further complicates the classification process. To overcome these challenges, we present a Deep Learning (DL)-based approach, namely EfficientDet-D0 with EfficientNet-B0 as the backbone. The presented framework comprises three steps for glaucoma localization and classification. Initially, deep features from the suspected samples are computed with the EfficientNet-B0 feature extractor. Then, the Bi-directional Feature Pyramid Network (BiFPN) module of EfficientDet-D0 takes the computed features from EfficientNet-B0 and performs top-down and bottom-up keypoint fusion several times. In the last step, the resultant localized area containing the glaucoma lesion, with its associated class, is predicted. We confirmed the robustness of our work by evaluating it on a challenging dataset, namely the Online Retinal Fundus Image Database for Glaucoma Analysis (ORIGA).
Furthermore, we performed cross-dataset validation on the High-Resolution Fundus (HRF) and Retinal Image Database for Optic Nerve Evaluation (RIM-ONE DL) datasets to show the generalization ability of our work. Both the numeric and visual evaluations confirm that EfficientDet-D0 outperforms the newest frameworks and is more proficient in glaucoma classification.
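The BiFPN's repeated top-down and bottom-up passes combine feature maps from neighboring pyramid levels using learned, normalized, non-negative weights. A simplified numpy sketch of one "fast normalized fusion" node (the weights below are illustrative stand-ins for learned parameters):

```python
import numpy as np

def fused_node(inputs, weights, eps=1e-4):
    """BiFPN fast normalized fusion: a weighted average whose
    non-negative weights are normalized to sum to (almost) 1."""
    w = np.maximum(weights, 0.0)  # keep fusion weights non-negative
    w = w / (w.sum() + eps)       # fast normalization (no softmax)
    return sum(wi * fi for wi, fi in zip(w, inputs))

# two same-resolution feature maps arriving at one fusion node
p_td = np.ones((4, 4))         # feature from the top-down pathway
p_in = 3.0 * np.ones((4, 4))   # lateral input feature
out = fused_node([p_td, p_in], np.array([1.0, 1.0]))
```

Stacking such nodes along both pathways, and repeating the whole pass several times, lets coarse semantic evidence and fine spatial detail reinforce each other before the final OD/OC prediction heads.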


Subject(s)
Deep Learning , Glaucoma , Optic Disk , Diagnostic Techniques, Ophthalmological , Fundus Oculi , Glaucoma/diagnosis , Humans
11.
Microsc Res Tech ; 85(1): 339-351, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34448519

ABSTRACT

Melanoma skin cancer is the most life-threatening and fatal disease among the family of skin cancers. Modern technological developments and research methodologies have made it possible to detect and identify this kind of skin cancer more effectively; however, the automated localization and segmentation of skin lesions at earlier stages is still a challenging task due to the low contrast between melanoma moles and the surrounding skin and the high color similarity between melanoma-affected and non-affected areas. In this paper, we present a fully automated method for segmenting skin melanoma at its earliest stage by employing a deep-learning-based approach, namely faster region-based convolutional neural networks (RCNN) along with fuzzy k-means clustering (FKM). Several clinical images are utilized to test the presented method so that it may help dermatologists diagnose this life-threatening disease at its earliest stage. The presented method first preprocesses the dataset images to remove noise and illumination problems and enhance the visual information before applying the faster-RCNN to obtain a feature vector of fixed length. After that, FKM is employed to segment the melanoma-affected portion of skin with variable size and boundaries. The performance of the presented method is evaluated on three standard datasets, namely ISBI-2016, ISIC-2017, and PH2, and the results show that the presented method outperforms state-of-the-art approaches. The presented method attains an average accuracy of 95.40%, 93.1%, and 95.6% on the ISIC-2016, ISIC-2017, and PH2 datasets, respectively, showing its robustness for skin lesion recognition and segmentation.


Subject(s)
Deep Learning , Melanoma , Skin Neoplasms , Algorithms , Cluster Analysis , Dermoscopy , Humans , Melanoma/diagnostic imaging , Skin Neoplasms/diagnostic imaging
12.
Diagnostics (Basel) ; 11(10)2021 Oct 08.
Article in English | MEDLINE | ID: mdl-34679554

ABSTRACT

A brain tumor is a deadly disease caused by the abnormal growth of brain cells, which affects the human blood vessels and nerves. Timely and precise detection of brain tumors is an important task to avoid complex and painful treatment procedures, as it can assist doctors in surgical planning. Manual brain tumor detection is a time-consuming activity and highly dependent on the availability of area experts. Therefore, there is an urgent need for accurate automated systems for the detection and classification of various types of brain tumors. However, the exact localization and categorization of brain tumors is a challenging job due to extensive variations in their size, position, and structure. To deal with these challenges, we present a novel approach, namely the DenseNet-41-based CornerNet framework. The proposed solution comprises three steps. Initially, we develop annotations to locate the exact region of interest. In the second step, a custom CornerNet with DenseNet-41 as the base network is introduced to extract deep features from the suspected samples. In the last step, the one-stage detector CornerNet is employed to locate and classify several brain tumors. To evaluate the proposed method, we utilized two databases, namely the Figshare and Brain MRI datasets, and attained average accuracies of 98.8% and 98.5%, respectively. Both qualitative and quantitative analysis show that our approach is more proficient and consistent at detecting and classifying various types of brain tumors than other recent techniques.

13.
Sensors (Basel) ; 21(16)2021 Aug 05.
Article in English | MEDLINE | ID: mdl-34450729

ABSTRACT

Diabetic retinopathy (DR) is an eye disease that alters the blood vessels of a person suffering from diabetes. Diabetic macular edema (DME) occurs when DR affects the macula, causing fluid accumulation there. Efficient screening systems require experts to manually analyze images to recognize diseases. However, due to the challenging nature of the screening method and the lack of trained human resources, devising effective screening-oriented treatment is an expensive task. Automated systems attempt to cope with these challenges; however, such methods do not generalize well to multiple diseases and real-world scenarios. To solve these issues, we propose a new method comprising two main steps. The first involves dataset preparation and feature extraction, while the other concerns training a custom deep learning based CenterNet model for eye disease classification. Initially, we generate annotations for suspected samples to locate the precise region of interest, while the other part of the proposed solution trains the CenterNet model over the annotated images. Specifically, we use DenseNet-100 as the feature extraction method, on which the one-stage detector, CenterNet, is employed to localize and classify the disease lesions. We evaluated our method over challenging datasets, namely APTOS-2019 and IDRiD, and attained average accuracies of 97.93% and 98.10%, respectively. We also performed cross-dataset validation with the benchmark EYEPACS and Diaretdb1 datasets. Both qualitative and quantitative results demonstrate that our proposed approach outperforms state-of-the-art methods due to the more effective localization power of CenterNet, as it can easily recognize small lesions and deal with over-fitted training data. Our proposed framework is proficient at correctly locating and classifying disease lesions.
In comparison to existing DR and DME classification approaches, our method can extract representative key points from low-intensity and noisy images and accurately classify them. Hence, our approach can play an important role in the automated detection and recognition of DR and DME lesions.
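CenterNet localizes lesions as peaks on per-class heatmaps, with ground truth rendered as a 2-D Gaussian splat around each object center. A minimal numpy sketch of drawing such a target heatmap (an illustration of the general CenterNet formulation, not this paper's exact code):

```python
import numpy as np

def draw_gaussian(heatmap, center, sigma):
    """Splat a 2-D Gaussian peak onto the heatmap at `center` (x, y),
    keeping the element-wise max so that nearby objects do not erase
    each other's peaks."""
    h, w = heatmap.shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((xs - center[0]) ** 2 + (ys - center[1]) ** 2)
               / (2 * sigma ** 2))
    np.maximum(heatmap, g, out=heatmap)
    return heatmap

hm = np.zeros((8, 8))
draw_gaussian(hm, center=(2, 3), sigma=1.0)  # one lesion center at x=2, y=3
peak = np.unravel_index(np.argmax(hm), hm.shape)
```

At inference the network predicts such heatmaps directly, and local maxima above a threshold are read off as detected lesion centers, which is why small lesions remain recoverable.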


Subject(s)
Deep Learning , Diabetes Mellitus , Diabetic Retinopathy , Macular Edema , Diabetic Retinopathy/diagnostic imaging , Humans , Macular Edema/diagnostic imaging
14.
Diagnostics (Basel) ; 11(5)2021 Apr 21.
Article in English | MEDLINE | ID: mdl-33919358

ABSTRACT

A brain tumor is an abnormal growth of brain cells that causes damage to various blood vessels and nerves in the human body. An early and accurate diagnosis of a brain tumor is of foremost importance to avoid future complications. Precise segmentation of brain tumors provides doctors with a basis for surgical planning and treatment. Manual detection using MRI images is computationally complex in cases where the survival of the patient depends on timely treatment, and its performance relies on domain expertise. Computerized detection of tumors therefore remains a challenging task due to significant variations in their location and structure, i.e., irregular shapes and ambiguous boundaries. In this study, we propose a custom Mask Region-based Convolutional Neural Network (Mask RCNN) with a DenseNet-41 backbone architecture, trained via transfer learning, for precise classification and segmentation of brain tumors. Our method is evaluated on two different benchmark datasets using various quantitative measures. Comparative results show that the custom Mask-RCNN can more precisely detect tumor locations using bounding boxes and return segmentation masks to provide exact tumor regions. Our proposed model achieved accuracies of 96.3% and 98.34% for segmentation and classification, respectively, demonstrating enhanced robustness compared to state-of-the-art approaches.
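The quality of the returned segmentation masks is commonly scored with the Dice coefficient, which measures the overlap between the predicted and ground-truth masks. A minimal numpy sketch of this standard metric (not specific to this paper's evaluation code):

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice coefficient between two binary masks:
    2 * |intersection| / (|pred| + |gt|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

gt = np.zeros((4, 4), dtype=int)
gt[1:3, 1:3] = 1        # ground-truth tumor mask (4 pixels)
pred = np.zeros((4, 4), dtype=int)
pred[1:3, 1:4] = 1      # predicted mask (6 pixels, 4 of them overlapping)
score = dice(pred, gt)  # 2 * 4 / (6 + 4) = 0.8
```

Dice is preferred over plain pixel accuracy for tumors because the tumor region is tiny relative to the image, so a trivial all-background mask would otherwise score deceptively well.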
