ABSTRACT
Accurate detection of COVID-19 is one of the most challenging research topics in today's healthcare sector for controlling the coronavirus pandemic. Automatic, data-driven localization of COVID-19 from a medical imaging modality such as chest CT greatly augments clinical care. In this research, a Contour-aware Attention Decoder CNN is proposed to precisely segment COVID-19-infected tissues. It introduces a novel attention scheme that extracts boundary and shape cues from CT contours and leverages these features to refine the infected areas. For every decoded pixel, the attention module harvests contextual information in its spatial neighborhood from the contour feature maps. By incorporating such rich structural details into decoding via dense attention, the CNN is able to capture even intricate morphological details. The decoder is also augmented with a Cross Context Attention Fusion Upsampling module to robustly reconstruct deep semantic features into a high-resolution segmentation map. It employs a novel pixel-precise attention model that draws on relevant encoder features to aid effective upsampling. The proposed CNN was evaluated on 3D scans from the MosMedData and Jun Ma benchmark datasets. It achieved state-of-the-art performance with a high Dice similarity coefficient of 85.43% and a recall of 88.10%.
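The per-pixel neighborhood attention described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: it assumes single-head dot-product attention of each decoder pixel (query) over a k x k window of the contour feature map (keys/values), with residual fusion; the function names and the exact fusion rule are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def neighborhood_contour_attention(decoder_feat, contour_feat, k=3):
    """For every decoder pixel, attend over the k x k spatial neighborhood
    of the contour feature map and fuse the weighted context back in.
    Both inputs are (H, W, C) float arrays."""
    H, W, C = decoder_feat.shape
    pad = k // 2
    padded = np.pad(contour_feat, ((pad, pad), (pad, pad), (0, 0)))
    out = np.empty_like(decoder_feat)
    for i in range(H):
        for j in range(W):
            keys = padded[i:i + k, j:j + k].reshape(-1, C)  # (k*k, C)
            q = decoder_feat[i, j]                          # query vector
            w = softmax(keys @ q / np.sqrt(C))              # attention weights
            out[i, j] = q + w @ keys                        # residual fusion
    return out
```

In a real network the queries, keys and values would be learned linear projections and the loops vectorized; the sketch only shows how contour context reaches each decoded pixel.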
ABSTRACT
Evaluating patient criticality is the foremost step in administering appropriate COVID-19 treatment protocols. Learning an Artificial Intelligence (AI) model from clinical data for automatic risk-stratification enables accelerated response to patients displaying critical indicators. Chest CT manifestations including ground-glass opacities and consolidations are a reliable indicator for prognostic studies and show variability with patient condition. To this end, we propose a novel attention framework to estimate COVID-19 severity as a regression score from a weakly annotated CT scan dataset. It takes a non-locality approach that correlates features across different parts and spatial scales of the 3D scan. An explicit guidance mechanism from limited infection labeling drives attention refinement and feature modulation. The resulting encoded representation is further enriched through cross-channel attention. The attention model also infuses global contextual awareness into the deep voxel features by querying the base CT scan to mine relevant features. Consequently, it learns to effectively localize its focus region and chisel out the infection precisely. Experimental validation on the MosMed dataset shows that the proposed architecture has significant potential in augmenting existing methods as it achieved a 0.84 R-squared score and 0.133 mean absolute difference.
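The non-locality idea, correlating features across different parts of the scan before regressing a severity score, can be illustrated with a small sketch. This is an assumed simplification of the described architecture: plain self-attention over flattened voxel features, mean pooling, and a linear head; the function and parameter names are illustrative only.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def nonlocal_severity_head(voxel_feat, w_out):
    """Non-local self-attention over flattened voxel features, followed by
    global mean pooling and a linear regression head yielding a scalar
    severity estimate. voxel_feat is (N, C); w_out is a (C,) weight vector."""
    N, C = voxel_feat.shape
    attn = softmax(voxel_feat @ voxel_feat.T / np.sqrt(C))  # (N, N) affinities
    context = attn @ voxel_feat        # every voxel aggregates all others
    pooled = context.mean(axis=0)      # global scan descriptor
    return float(pooled @ w_out)       # scalar severity score
```

The paper's model additionally refines the attention with infection-label guidance and cross-channel attention; those steps are omitted here for brevity.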
ABSTRACT
COVID-19 is a deadly viral infection that has brought a significant threat to human lives. Automatic diagnosis of COVID-19 from medical imaging enables precise medication, helps to control community outbreaks, and reinforces the coronavirus testing methods in place. While there are several challenges in manually inferring traces of this viral infection from X-rays, a Convolutional Neural Network (CNN) can mine data patterns that capture subtle distinctions between infected and normal X-rays. To enable automated learning of such latent features, a custom CNN architecture is proposed in this research. It learns unique convolutional filter patterns for each kind of pneumonia. This is achieved by restricting certain filters in a convolutional layer to respond maximally only to a particular class of pneumonia/COVID-19. The CNN architecture integrates different convolution types to provide better context for learning robust features and to strengthen gradient flow between layers. The proposed work also visualizes the regions of saliency on the X-ray that most influenced the CNN's prediction outcome. To the best of our knowledge, this is the first attempt in deep learning to learn custom filters within a single convolutional layer for identifying specific pneumonia classes. Experimental results demonstrate that the proposed work has significant potential in augmenting current testing methods for COVID-19. It achieves an F1-score of 97.20% and an accuracy of 99.80% on the COVID-19 X-ray set.
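One way to restrict filter groups to particular classes is an auxiliary loss that rewards a class's own filter group for firing and penalizes the others. The formulation below is an assumed sketch, not the paper's actual loss; the group partitioning, hinge margins, and function name are all illustrative.

```python
import numpy as np

def class_filter_loss(activations, labels, n_classes):
    """Class-specific filter constraint (assumed formulation): filters are
    split into equal groups, one per pneumonia class. The group assigned to
    the sample's class is encouraged to activate (hinge toward 1), while the
    other groups are penalised for firing. activations is (B, F)."""
    B, F = activations.shape
    g = F // n_classes                  # filters per class group
    loss = 0.0
    for b in range(B):
        for c in range(n_classes):
            grp = activations[b, c * g:(c + 1) * g]
            if c == labels[b]:
                loss += np.maximum(0.0, 1.0 - grp).mean()  # encourage firing
            else:
                loss += np.maximum(0.0, grp).mean()        # suppress firing
    return loss / B
```

Added to the usual cross-entropy, such a term drives each filter group to respond maximally only to its assigned class.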
ABSTRACT
Osteoporosis is a condition in which bone density is diminished because new bone tissue forms too slowly to offset the removal of old bone tissue. Its diagnosis is supported by medical imaging technologies such as CT scans, dual X-ray, and X-ray images. In practice, various osteoporosis diagnostic methods exist, each operating on a single imaging modality. The proposed study develops a framework to aid in the diagnosis of osteoporosis that accommodates all three of these modalities: CT scan, X-ray, and dual X-ray. The proposed work, CBTCNNOD, integrates three functional modules: a bilinear filter, a grey-level zone length matrix, and a CB-CNN. It is constructed so that it can produce crisp osteoporosis diagnostic reports from the images fed into the system. All three modules work together to improve the performance of the proposed approach, CBTCNNOD, in terms of accuracy by 10.38%, 10.16%, 7.86%, and 14.32%; precision by 11.09%, 9.08%, 10.01%, and 16.51%; sensitivity by 9.77%, 10.74%, 6.20%, and 12.78%; and specificity by 11.01%, 9.52%, 9.5%, and 15.84%, while requiring 33.52%, 17.79%, 23.34%, and 10.86% less processing time, when compared to the existing techniques RCETA, BMCOFA, BACBCT, and XSFCV, respectively.
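The grey-level zone length matrix module counts, for each grey level, the connected zones of each size in the image. A minimal 4-connected NumPy version is sketched below; the abstract does not give the authors' exact connectivity or quantization, so those choices are assumptions.

```python
import numpy as np

def grey_level_zone_matrix(img, n_levels):
    """Grey-level zone (size) matrix: entry [g, s-1] counts the 4-connected
    zones of grey level g containing exactly s pixels. img is a 2D integer
    array with values in [0, n_levels)."""
    H, W = img.shape
    seen = np.zeros((H, W), bool)
    glzm = np.zeros((n_levels, H * W), int)
    for i in range(H):
        for j in range(W):
            if seen[i, j]:
                continue
            g, stack, size = img[i, j], [(i, j)], 0
            seen[i, j] = True
            while stack:                      # flood-fill one zone
                y, x = stack.pop()
                size += 1
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < H and 0 <= nx < W
                            and not seen[ny, nx] and img[ny, nx] == g):
                        seen[ny, nx] = True
                        stack.append((ny, nx))
            glzm[g, size - 1] += 1            # record zone of this size
    return glzm
```

Texture descriptors (zone-size emphasis, grey-level non-uniformity, etc.) are then derived from this matrix and passed to the classifier.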
Subject(s)
Osteoporosis, Humans, X-Rays, Osteoporosis/diagnostic imaging, Tomography, X-Ray Computed/methods, Neural Networks, Computer, Machine Learning
ABSTRACT
Grape cultivation is important globally, contributing to the agricultural economy and providing diverse grape-based products. However, the susceptibility of grapes to disease poses a significant threat to yield and quality. Traditional disease identification methods demand expert knowledge, which limits scalability and efficiency. To address these limitations, our research aims to design an automated deep learning approach for grape leaf disease detection. This research introduces a novel dual-track network for classifying grape leaf diseases, combining a Swin Transformer track and a Group Shuffle Residual DeformNet (GSRDN) track. The Swin Transformer track exploits shifted-window attention to construct hierarchical feature maps, enhancing global feature extraction. Simultaneously, the GSRDN track combines a Group Shuffle Depthwise Residual block and a Deformable Convolution block to extract local features with reduced computational complexity. The features from both tracks are concatenated and processed through Triplet Attention for cross-dimensional interaction. The proposed model achieved an accuracy of 98.6%, with precision, recall, and F1-score of 98.7%, 98.59%, and 98.64%, respectively, as validated on grape leaf disease images from the PlantVillage dataset, demonstrating its potential for efficient grape disease classification.
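The "Group Shuffle" step in the GSRDN track refers to the channel-shuffle operation used with group convolutions: after a grouped convolution, channels are interleaved so information mixes across groups. A minimal sketch, assuming the standard ShuffleNet-style formulation (the block's exact design is not given in the abstract):

```python
import numpy as np

def channel_shuffle(x, groups):
    """Interleave channels across groups after a group convolution.
    x is an (B, C, H, W) array with C divisible by `groups`."""
    B, C, H, W = x.shape
    assert C % groups == 0
    return (x.reshape(B, groups, C // groups, H, W)  # split into groups
             .transpose(0, 2, 1, 3, 4)               # swap group/channel axes
             .reshape(B, C, H, W))                   # flatten back
```

For example, with 4 channels and 2 groups, channel order [0, 1, 2, 3] becomes [0, 2, 1, 3], so the next grouped convolution sees channels from both groups.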
Subject(s)
Plant Diseases, Plant Leaves, Vitis, Vitis/classification, Plant Diseases/parasitology, Deep Learning, Algorithms
ABSTRACT
Background and objective: In recent years, Artificial Intelligence has had an evident impact on the way research addresses challenges in different domains. It has proven to be a huge asset, especially in the medical field, allowing for time-efficient and reliable solutions. This research aims to spotlight the impact of deep learning and machine learning models in the detection of COVID-19 from medical images, by reviewing the state-of-the-art approaches proposed by recent works in this field. Methods: The main focus of this study is the recent development of classification and segmentation approaches to image-based COVID-19 detection. The study reviews 140 research papers published in different academic research databases. These papers were screened and filtered based on specified criteria to acquire insights pertinent to image-based COVID-19 detection. Results: The methods discussed in this review cover different imaging modalities, predominantly X-rays and CT scans, used for both classification and segmentation tasks. This review categorizes and discusses the different deep learning and machine learning architectures employed for these tasks, based on the imaging modality utilized. It also hints at other possible deep learning and machine learning architectures that could be proposed for better results in COVID-19 detection. In addition, a detailed overview of the emerging trends and breakthroughs in Artificial Intelligence-based COVID-19 detection is provided. Conclusion: This work concludes by stipulating the technical and non-technical challenges faced by researchers and illustrates the advantages of image-based COVID-19 detection with Artificial Intelligence techniques.
ABSTRACT
BACKGROUND: Osteoporosis is a term for reduced bone density, caused by insufficient bone tissue production to balance the removal of old bone tissue. Medical imaging procedures such as X-ray, dual X-ray and computed tomography (CT) scans are widely used in osteoporosis diagnosis. Several existing procedures assist osteoporosis diagnosis, each operating on a single imaging method. OBJECTIVE: The purpose of this proposed work is to introduce a framework that assists the diagnosis of osteoporosis by accommodating all of the X-ray, dual X-ray and CT scan imaging techniques. The proposed work, named "Aggregation of Region-based and Boundary-based Knowledge biased Segmentation for Osteoporosis Detection from X-Ray, Dual X-Ray and CT images" (ARBKSOD), is the integration of three functional modules. METHODS: The three modules are the Fuzzy Histogram Medical Image Classifier (FHMIC), Log-Gabor Transform based ANN Training for osteoporosis detection (LGTAT) and the Knowledge biased Osteoporosis Analyzer (KOA). RESULTS: Together, these three modules enabled ARBKSOD to achieve a maximum accuracy of 93.11%, its highest precision of 93.91% while processing the 6th image batch, its highest sensitivity of 92.93%, and its highest specificity of 93.79%, also observed on the 6th image batch. The best average processing time of 10,244 ms was achieved while processing the 7th image batch. CONCLUSION: Together, the three modules enable ARBKSOD to produce a better result.
Subject(s)
Osteoporosis, Bone and Bones, Humans, Osteoporosis/diagnosis, Tomography, X-Ray Computed, X-Rays
ABSTRACT
The first and foremost step in the diagnosis of ischemic stroke is the delineation of the lesion from radiological images for effective treatment planning. Manual delineation of the lesion by radiological experts is generally laborious and time-consuming, and sometimes prone to intra-observer and inter-observer variability. State-of-the-art deep architectures based on Fully Convolutional Networks (FCN) and cascaded CNNs have shown good results in automated lesion segmentation. This work proposes a series of enhancements over the learning paradigm of existing methods, focusing on learning meticulous feature representations through the CNN layers for accurate ischemic lesion segmentation from multimodal MRI. Multiple levels of losses, integration of features from multiple scales, and an ensemble of prediction maps from sub-networks are employed to enable the CNN to correlate features seen from different receptive fields. To allow for progressive refinement of features from block to block, a custom dropout module is proposed that suppresses noisy features. Multi-branch residual connections and attention mechanisms were also included in the CNN blocks to enable the integration of information from multiple receptive fields and to selectively weigh significant features. Also, to tackle data imbalance at both the voxel and sample level, patch-based modeling and a separation of concerns into classification and segmentation functional branches are proposed. By incorporating the above-mentioned architectural enhancements, the proposed deep architecture achieved better segmentation performance than existing models. The proposed approach was evaluated on the ISLES 2015 SISS dataset, where it achieved a mean Dice coefficient of 0.775. By combining sample classification and lesion segmentation into a fully automated framework, the proposed approach yields better results than most existing works.
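One plausible reading of the custom dropout module that "suppresses noisy features" is an activation-based channel gate: channels whose mean activation energy is weakest are treated as noise and zeroed before the next block. The sketch below is a hypothetical interpretation, not the paper's definition; the threshold rule and names are assumptions.

```python
import numpy as np

def noise_suppressing_dropout(feat, keep_frac=0.75):
    """Hypothetical noise-suppressing dropout: keep the keep_frac fraction
    of channels with the highest mean absolute activation and zero the
    rest. feat is a (C, H, W) feature map."""
    C = feat.shape[0]
    energy = np.abs(feat).mean(axis=(1, 2))   # per-channel activation energy
    k = max(1, int(round(keep_frac * C)))
    keep = np.argsort(energy)[-k:]            # indices of strongest channels
    mask = np.zeros(C, bool)
    mask[keep] = True
    return feat * mask[:, None, None]         # suppress weak (noisy) channels
```

Unlike standard random dropout, this gate is deterministic and data-dependent, which matches the stated goal of progressive feature refinement between blocks.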
Subject(s)
Magnetic Resonance Imaging, Neural Networks, Computer, Humans, Image Processing, Computer-Assisted, Observer Variation
ABSTRACT
Thyroid-associated ophthalmopathy (TAO), a cardinal clinical pointer to diagnosing Graves' disease (GD), is seen less frequently in our country than in the West, but can have sight-threatening consequences. Smoking, diabetes, male gender, increasing age and radioactive iodine treatment for thyrotoxicosis are known precipitating factors for TAO. We report four cases of thiazolidinedione (TZD)-precipitated TAO. All were male, had autoimmune thyroid disease (three had Graves' disease and one had Hashimoto's thyroiditis) and type 2 diabetes mellitus (T2DM). They developed eye symptoms three to four months after starting TZDs for glycaemic control. Two of them responded to medical treatment; the other two underwent surgical decompression.
Subject(s)
Diabetes Mellitus, Type 2/drug therapy, Graves Ophthalmopathy/chemically induced, Hypoglycemic Agents/adverse effects, Thiazolidinediones/adverse effects, Decompression, Surgical, Glucocorticoids/administration & dosage, Graves Disease/complications, Graves Ophthalmopathy/therapy, Hashimoto Disease/complications, Humans, Male, Middle Aged, Treatment Outcome
ABSTRACT
Ischemic stroke is a leading cause of mortality and morbidity. Computed tomography (CT) images are used for the immediate diagnosis and treatment planning of ischemic stroke. This paper proposes a novel histogram-bin-based algorithm to segment the ischemic stroke lesion in CT, with optimal feature group selection to classify normal and abnormal regions. The steps followed are pre-processing, segmentation, texture feature extraction, feature ranking, feature grouping, classification, and optimal feature group (FG) selection. First-order features, gray-level run length matrix features, gray-level co-occurrence matrix features, and Hu's moment features are extracted. Classification is done using logistic regression (LR), a support vector machine classifier (SVMC), a random forest classifier (RFC), and a neural network classifier (NNC). The proposed approach effectively detects ischemic stroke lesions, with classification accuracies of 88.77%, 97.86%, 99.79%, and 99.79% obtained by LR, SVMC, RFC, and NNC, respectively, when FG12 is opted, as validated by fourfold cross-validation.
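The gray-level co-occurrence matrix (GLCM) features used in the pipeline can be illustrated with a minimal single-offset implementation; the offset, quantization, and which descriptors enter which feature group are the paper's choices and are not reproduced here, so the sketch below (with contrast and energy as examples) is only an assumption-laden illustration.

```python
import numpy as np

def glcm_features(img, n_levels, dx=1, dy=0):
    """Normalized grey-level co-occurrence matrix for one pixel offset
    (dx, dy), plus two classic texture descriptors: contrast and energy.
    img is a 2D integer array with values in [0, n_levels)."""
    H, W = img.shape
    glcm = np.zeros((n_levels, n_levels))
    for i in range(H - dy):
        for j in range(W - dx):
            glcm[img[i, j], img[i + dy, j + dx]] += 1   # count pixel pairs
    glcm /= glcm.sum()                                  # joint probabilities
    idx = np.arange(n_levels)
    contrast = ((idx[:, None] - idx[None, :]) ** 2 * glcm).sum()
    energy = (glcm ** 2).sum()
    return contrast, energy
```

A perfectly uniform image has zero contrast and energy 1; textured lesion regions yield higher contrast, which is what makes such descriptors useful for separating normal and abnormal regions.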
ABSTRACT
BACKGROUND AND OBJECTIVE: In recent years, deep learning algorithms have created a massive impact on addressing research challenges in different domains. The medical field also benefits greatly from improving deep learning models, which save time and produce accurate results. This research aims to emphasize the impact of deep learning models in brain stroke detection and lesion segmentation, by discussing the state-of-the-art approaches proposed by recent works in this field. METHODS: This study focuses on advancements in stroke lesion detection and segmentation. The survey analyses 113 research papers published in different academic research databases. The research articles were filtered based on specific criteria to obtain the most prominent insights related to stroke lesion detection and segmentation. RESULTS: The features of a stroke lesion vary based on the imaging modality. To develop an effective method for stroke lesion detection, the features need to be carefully extracted from the input images. This review attempts to categorize and discuss the different deep architectures employed for stroke lesion detection and segmentation, based on the underlying imaging modality. This further assists in understanding the relevance of the two deep neural network components in medical image analysis, namely the Convolutional Neural Network (CNN) and the Fully Convolutional Network (FCN). It hints at other possible deep architectures that could be proposed for better results in stroke lesion detection. Also, the emerging trends and breakthroughs in stroke detection are detailed in this evaluation. CONCLUSION: This work concludes by examining the technical and non-technical challenges faced by researchers and indicates the future implications for stroke detection. It could support biomedical researchers in proposing better solutions for stroke lesion detection.
Subject(s)
Deep Learning, Stroke, Brain, Humans, Image Processing, Computer-Assisted, Neuroimaging, Stroke/diagnostic imaging
ABSTRACT
We report a patient who complained of becoming darker after abdominal surgery. The index patient not only had a darker complexion after cholecystectomy, but his glycaemic control also improved after the operation, to the extent that he could stop the insulin he had been taking for five years. He had also lost significant weight after the operation. Later, we found that he had developed primary hypocortisolism due to unrecognized bilateral adrenal haemorrhage in the immediate postoperative period.