Results 1 - 20 of 35
1.
Pak J Med Sci ; 39(6): 1887-1890, 2023.
Article in English | MEDLINE | ID: mdl-37936725

ABSTRACT

Pleomorphic adenoma is a benign tumor of the salivary glands. It commonly occurs in the parotid gland, palate, upper lip and cheek. The authors present a rare case of a pleomorphic adenoma of the lower lip in a 30-year-old female admitted on 20 July 2022 at Akbar Niazi Teaching Hospital, Islamabad, with a complaint of a painless, slightly itchy swelling on the lower lip for the last four months. Careful history and examination revealed a swelling of the lower lip which had gradually increased in size but had been static for the last three months. As the patient complained of cosmetic and social inconvenience, it was surgically managed. Post-operative complications were ruled out and the patient was discharged in good condition. Further research is warranted to establish the exact etiopathogenesis and appropriate management of pleomorphic adenoma of the lower lip.

2.
Sensors (Basel) ; 22(4)2022 Feb 17.
Article in English | MEDLINE | ID: mdl-35214448

ABSTRACT

The lumbar spine plays a very important role in load transfer and mobility. Vertebrae localization and segmentation are useful in detecting spinal deformities and fractures. Automated analysis of medical imagery is therefore important to relieve doctors of time-consuming manual or semi-manual diagnosis. This paper presents methods that help clinicians grade the severity of the disease with confidence, since current manual diagnosis varies considerably between doctors. We discuss lumbar spine localization and segmentation, which support the analysis of lumbar spine deformities. The lumbar spine is localized using YOLOv5, the fifth variant of the YOLO family and one of the fastest and lightest object detectors. A mean average precision (mAP) of 0.975 is achieved by YOLOv5. To diagnose lumbar lordosis, we correlated the angles with the region area computed from the YOLOv5 centroids and obtained 74.5% accuracy. Cropped images from the YOLOv5 bounding boxes are passed through HED U-Net, a combination of segmentation and edge detection frameworks, to obtain the segmented vertebrae and their edges. Lumbar lordotic angles (LLAs) and lumbosacral angles (LSAs) are found after detecting the corners of the vertebrae using a Harris corner detector, with very small mean errors of 0.29° and 0.38°, respectively. This paper compares the different object detectors used to localize the vertebrae and the results of the two methods used to diagnose lumbar deformity, and compares our results with those of other researchers.
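
As an illustration of the angle-measurement step described above, the following minimal Python sketch (not the authors' code) computes an endplate-to-endplate angle from two pairs of corner points such as those returned by a Harris corner detector; the coordinates shown are hypothetical.

```python
import numpy as np

def endplate_angle(corners_upper, corners_lower):
    """Angle (degrees) between two endplate lines, each given by two (x, y) corner points."""
    v1 = np.asarray(corners_upper[1]) - np.asarray(corners_upper[0])
    v2 = np.asarray(corners_lower[1]) - np.asarray(corners_lower[0])
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical corner points (pixels) of two vertebral endplates,
# e.g. as returned by a Harris corner detector on the segmented vertebrae.
lla = endplate_angle([(120, 80), (190, 86)], [(132, 420), (200, 398)])
print(f"Estimated lordotic-type angle: {lla:.1f} deg")
```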


Subjects
Deep Learning; Lumbar Vertebrae/diagnostic imaging; Lumbosacral Region; Spine
3.
Sensors (Basel) ; 22(4)2022 Feb 21.
Article in English | MEDLINE | ID: mdl-35214568

ABSTRACT

Human beings tend to learn incrementally from a rapidly changing environment without compromising or forgetting already learned representations. Although deep learning can also mimic such human behavior to some extent, it suffers from catastrophic forgetting, whereby its performance on already learned tasks decreases drastically while it learns new knowledge. Many researchers have proposed promising solutions to eliminate such catastrophic forgetting during the knowledge distillation process. However, to the best of our knowledge, there is no literature available to date that exploits the complex relationships between these solutions and utilizes them for effective learning that spans multiple datasets and even multiple domains. In this paper, we propose a continual learning objective that encompasses a mutual distillation loss to capture such complex relationships and allows deep learning models to effectively retain prior knowledge while adapting to new classes, new datasets, and even new applications. The proposed objective was rigorously tested on nine publicly available, multi-vendor, and multimodal datasets that span three applications, and it achieved a top-1 accuracy of 0.9863 and an F1-score of 0.9930.
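
The exact form of the mutual distillation loss is not given in the abstract; as a hedged sketch of the general distillation idea it builds on, the PyTorch snippet below combines a cross-entropy term on the new task with a temperature-scaled KL term that keeps the student close to a frozen teacher. The temperature T and weight alpha are illustrative choices, not the paper's values.

```python
import torch
import torch.nn.functional as F

def distillation_objective(student_logits, teacher_logits, targets, T=2.0, alpha=0.5):
    """Cross-entropy on the new task plus a KL distillation term toward the old (teacher) model."""
    ce = F.cross_entropy(student_logits, targets)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1.0 - alpha) * kd

# Toy usage with random logits for a batch of 4 samples and 10 classes.
s, t = torch.randn(4, 10), torch.randn(4, 10)
y = torch.tensor([1, 3, 0, 7])
print(distillation_objective(s, t, y))
```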


Subjects
Neural Networks, Computer; Humans
4.
Sensors (Basel) ; 22(23)2022 Dec 04.
Article in English | MEDLINE | ID: mdl-36502183

ABSTRACT

Emotion charting using multimodal signals is in great demand for stroke-affected patients, for psychiatrists while examining patients, and for neuromarketing applications. Multimodal signals for emotion charting include electrocardiogram (ECG), electroencephalogram (EEG), and galvanic skin response (GSR) signals. EEG, ECG, and GSR are also known as physiological signals, which can be used for the identification of human emotions. Because physiological signals are generated autonomously by the human central nervous system and are therefore unbiased, this field has attracted strong interest in recent research. Researchers have developed multiple methods for the classification of these signals for emotion detection. However, due to the non-linear nature of these signals and the noise introduced during recording, accurate classification of physiological signals remains a challenge for emotion charting. Valence and arousal are two important dimensions for emotion detection; therefore, this paper presents a novel ensemble learning method based on deep learning for the classification of four emotional states: high valence and high arousal (HVHA), low valence and low arousal (LVLA), high valence and low arousal (HVLA), and low valence and high arousal (LVHA). In the proposed method, the multimodal signals (EEG, ECG, and GSR) are preprocessed using bandpass filtering and independent component analysis (ICA) for noise removal in the EEG signals, followed by a discrete wavelet transform for time-domain to frequency-domain conversion. The discrete wavelet transform yields spectrograms of the physiological signals, from which features are extracted using stacked autoencoders. A feature vector obtained from the bottleneck layer of the autoencoder is fed to three classifiers, SVM (support vector machine), RF (random forest), and LSTM (long short-term memory), followed by majority voting for ensemble classification. The proposed system is trained and tested on the AMIGOS dataset with k-fold cross-validation. It obtained a highest accuracy of 94.5%, an improvement over other state-of-the-art methods.
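
To make the ensemble step concrete, here is a minimal sketch of majority voting over three per-sample predictions; the SVM and RF come from scikit-learn, while the LSTM predictions are replaced by a random stand-in and the "bottleneck features" are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

def majority_vote(predictions):
    """Column-wise majority vote over an array of shape (n_classifiers, n_samples)."""
    preds = np.asarray(predictions)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, preds)

# Toy bottleneck features (e.g. from a stacked autoencoder) and 4 emotion classes.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 32)), rng.integers(0, 4, 200)
X_test = rng.normal(size=(20, 32))

svm_pred = SVC().fit(X_train, y_train).predict(X_test)
rf_pred = RandomForestClassifier().fit(X_train, y_train).predict(X_test)
lstm_pred = rng.integers(0, 4, 20)  # stand-in for the LSTM classifier's predictions

print(majority_vote([svm_pred, rf_pred, lstm_pred]))
```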


Assuntos
Nível de Alerta , Emoções , Humanos , Emoções/fisiologia , Nível de Alerta/fisiologia , Análise de Ondaletas , Eletroencefalografia/métodos , Máquina de Vetores de Suporte
5.
J Pak Med Assoc ; 71(11): 2665-2668, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34783757

ABSTRACT

A carbuncle is a painful subcutaneous mass of interconnected infected hair follicles with multiple discharging sinuses. It is predisposed to by conditions such as diabetes, immunocompromised states, and chronic skin diseases. The authors present a case of a 67-year-old diabetic male admitted in July 2020 at Akbar Niazi Teaching Hospital (ANTH), Islamabad, with a giant carbuncle on his back. Due to its large size, systemic co-morbidity, and increased risk of complications in surgical treatment, a multi-disciplinary team approach was employed. Both general and plastic surgeons were involved, performing excision and soft tissue coverage respectively. The aim of the surgical interventions, including wide excision and debridement, application of vacuum-assisted wound closure (VAC), and skin grafting, was to minimise healing time and the risk of post-operative infection. The patient was surgically managed and discharged in good condition.


Assuntos
Carbúnculo , Idoso , Desbridamento , Humanos , Masculino , Pele , Transplante de Pele , Cicatrização
6.
Sensors (Basel) ; 20(16)2020 Aug 14.
Article in English | MEDLINE | ID: mdl-32823807

ABSTRACT

Novel trends in affective computing are based on reliable sources of physiological signals such as the electroencephalogram (EEG), electrocardiogram (ECG), and galvanic skin response (GSR). The use of these signals presents the challenge of improving performance over a broader set of emotion classes in a less constrained, real-world environment. To address this challenge, we propose a computational framework with a 2D Convolutional Neural Network (CNN) architecture for the arrangement of 14 EEG channels, and a combination of Long Short-Term Memory (LSTM) and 1D-CNN architectures for ECG and GSR. Our approach is subject-independent and incorporates two publicly available datasets, DREAMER and AMIGOS, acquired with low-cost, wearable sensors that extract physiological signals suitable for real-world environments. The results outperform state-of-the-art approaches for classification into four classes, namely High Valence-High Arousal, High Valence-Low Arousal, Low Valence-High Arousal, and Low Valence-Low Arousal. For AMIGOS, an average emotion elicitation accuracy of 98.73% is achieved with the right-channel ECG modality, 76.65% with the EEG modality, and 63.67% with the GSR modality. The overall highest accuracy of 99.0% for the AMIGOS dataset and 90.8% for the DREAMER dataset is achieved with multi-modal fusion. A strong correlation of spectral and hidden-layer feature analysis with classification performance suggests the efficacy of the proposed method for significant feature extraction and higher emotion elicitation performance in a broader range of less constrained environments.
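
As a rough sketch of the kind of 1D-CNN + LSTM branch described for ECG/GSR (layer sizes and counts here are illustrative, not the paper's exact architecture), a Keras model might look like this:

```python
from tensorflow.keras import layers, models

def build_ecg_gsr_model(timesteps=512, channels=1, n_classes=4):
    """Small 1D-CNN + LSTM classifier for a single physiological channel."""
    return models.Sequential([
        layers.Input(shape=(timesteps, channels)),
        layers.Conv1D(32, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.LSTM(64),
        layers.Dense(n_classes, activation="softmax"),
    ])

model = build_ecg_gsr_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```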


Assuntos
Eletrocardiografia , Eletroencefalografia , Emoções , Resposta Galvânica da Pele , Redes Neurais de Computação , Nível de Alerta , Humanos
7.
Sensors (Basel) ; 20(18)2020 Sep 16.
Article in English | MEDLINE | ID: mdl-32947977

ABSTRACT

Wavelet transformation is one of the most frequently used procedures for data denoising, smoothing, decomposition, feature extraction, and related tasks. To perform such tasks, we need to select appropriate wavelet settings, including the particular wavelet, the decomposition level, and other parameters that shape the wavelet transformation outputs. The selection of such parameters is challenging due to the absence of versatile recommendation tools for suitable wavelet settings. In this paper, we propose a versatile recommendation system for predicting suitable wavelet settings for data smoothing. The proposed system generates a spatial response matrix for the selected wavelets and decomposition levels. Such a response enables the mapping of selected evaluation parameters, determining the efficacy of the wavelet settings. The proposed system also enables tracking the dynamic influence of noise on wavelet efficacy by using a volumetric response. We provide testing on computed tomography (CT) and magnetic resonance (MR) image data and on EMG signals, mostly of the musculoskeletal system, to objectivise the system's usability for clinical data processing. The experimental testing uses evaluation parameters such as MSE (Mean Squared Error), ED (Euclidean Distance), and Corr (Correlation index). We also provide a statistical analysis of the results based on the Mann-Whitney test, which points to statistically significant differences between individual wavelets for data corrupted with salt-and-pepper and Gaussian noise.
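
A minimal example of comparing wavelet settings for smoothing, scored with the same MSE / ED / Corr style of evaluation parameters; the universal-threshold denoising shown here is a common baseline and is assumed, not taken from the paper.

```python
import numpy as np
import pywt

def smooth_and_score(signal, clean, wavelet="db4", level=3):
    """Wavelet-threshold smoothing plus the MSE / ED / Corr scores used to compare settings."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from finest detail band
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    rec = pywt.waverec(coeffs, wavelet)[: len(signal)]
    mse = np.mean((rec - clean) ** 2)
    ed = np.linalg.norm(rec - clean)
    corr = np.corrcoef(rec, clean)[0, 1]
    return mse, ed, corr

t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * np.random.randn(t.size)
for w in ["db2", "db4", "sym5"]:                            # compare candidate wavelet settings
    print(w, smooth_and_score(noisy, clean, wavelet=w))
```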


Assuntos
Algoritmos , Eletromiografia , Imageamento por Ressonância Magnética , Tomografia Computadorizada por Raios X , Análise de Ondaletas , Humanos , Distribuição Normal
8.
Sensors (Basel) ; 18(12)2018 Nov 25.
Article in English | MEDLINE | ID: mdl-30477277

ABSTRACT

Clustering is the most common method for organizing unlabeled data into its natural groups (called clusters) based on similarity among data objects. The Partitioning Around Medoids (PAM) algorithm belongs to the partitioning-based clustering methods widely used for object categorization, image analysis, bioinformatics, and data compression, but due to its high time complexity, the PAM algorithm cannot be used with large datasets or in embedded or real-time applications. In this work, we propose a simple and scalable parallel architecture for the PAM algorithm to reduce its running time. This architecture can easily be implemented either on a multi-core processor system to deal with big data or on a reconfigurable hardware platform, such as FPGAs and MPSoCs, which makes it suitable for real-time clustering applications. Our proposed model partitions the data equally among multiple processing cores. Each core executes the same sequence of tasks simultaneously on its respective data subset and shares intermediate results with the other cores to produce the final result. Experiments show that the computation time of the PAM algorithm is reduced significantly as we increase the number of cores working in parallel. It is also observed that the speedup of our proposed model becomes more linear with an increase in the number of data points and as the clusters become more uniform. The results also demonstrate that the proposed architecture produces the same results as the original PAM algorithm, but with reduced computation time.
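
A sketch of the data-partitioning idea, assuming a shared-memory multiprocessing setup: each core receives an equal chunk of the data, computes the partial clustering cost against the current medoids, and the partial results are combined. This illustrates the principle only, not the full PAM swap loop.

```python
import numpy as np
from multiprocessing import Pool

def chunk_cost(args):
    """Total dissimilarity of one data chunk to its nearest medoid (one worker's share)."""
    chunk, medoids = args
    d = np.linalg.norm(chunk[:, None, :] - medoids[None, :, :], axis=2)
    return d.min(axis=1).sum()

def parallel_pam_cost(data, medoids, n_cores=4):
    """Split the data equally across cores and combine the partial costs."""
    chunks = np.array_split(data, n_cores)
    with Pool(n_cores) as pool:
        partial = pool.map(chunk_cost, [(c, medoids) for c in chunks])
    return sum(partial)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(10_000, 8))
    medoids = X[rng.choice(len(X), size=5, replace=False)]
    print(parallel_pam_cost(X, medoids))
```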


Assuntos
Algoritmos , Análise por Conglomerados , Biologia Computacional/estatística & dados numéricos , Processamento de Imagem Assistida por Computador/estatística & dados numéricos , Computadores
9.
J Med Syst ; 41(4): 66, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28283997

ABSTRACT

Papilledema is a condition in which the optic nerve head swells due to increased intracranial pressure. The abnormalities caused by papilledema, such as opacification of the Retinal Nerve Fiber Layer (RNFL), dilated optic disc capillaries, blurred disc margins, absence of venous pulsations, elevation of the optic disc, obscuration of optic disc vessels, dilation of optic disc veins, optic disc splinter hemorrhages, cotton wool spots, and hard exudates, may result in complete vision loss. Ophthalmologists detect papilledema by means of an ophthalmoscope, Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and ultrasound. Computer-aided diagnostic systems have developed rapidly, and there is a need for a system that automatically detects papilledema. In this paper, an automated system is presented that detects and grades papilledema through analysis of fundus retinal images. The proposed system extracts 23 features: six textural features from the Gray-Level Co-occurrence Matrix (GLCM), eight features from optic disc margin obscuration, three color-based features, and seven vascular features. A feature vector consisting of these features is used for classification of normal and papilledema images using a Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel. The variations in retinal blood vessels, color properties, texture deviation of the optic disc and its peripapillary region, and fluctuation of the obscured disc margin are effectively identified and used by the proposed system for the detection and grading of papilledema. A dataset of 160 fundus retinal images is used, drawn from the publicly available STARE database and a local dataset collected from the Armed Forces Institute of Ophthalmology (AFIO), Pakistan. The proposed system shows an average accuracy of 92.86% for classification of papilledema and normal images. It also shows an average accuracy of 97.85% for grading papilledema images into mild and severe papilledema. The proposed system is a novel step towards automated detection and grading of papilledema. The results show that the technique is reliable and can be used as a clinical decision support system.
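
For the textural part of the feature vector, a minimal sketch using scikit-image's GLCM utilities and an RBF-kernel SVM is shown below; the patches and labels are synthetic, and the distances/angles used are illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(gray_patch):
    """Six GLCM texture descriptors from an 8-bit grayscale optic-disc patch."""
    glcm = graycomatrix(gray_patch, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation", "ASM"]
    return np.array([graycoprops(glcm, p)[0, 0] for p in props])

# Toy patches standing in for optic-disc regions; real features would be
# concatenated with the color, margin-obscuration and vascular features.
rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 2, 40)                      # 0 = normal, 1 = papilledema
X = np.array([glcm_features(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```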


Assuntos
Fundo de Olho , Interpretação de Imagem Assistida por Computador/métodos , Papiledema/diagnóstico por imagem , Papiledema/diagnóstico , Máquina de Vetores de Suporte , Humanos , Paquistão
10.
Data Brief ; 52: 110069, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38304386

ABSTRACT

Unmanned aerial vehicles (UAVs) rely on a variety of sensors to perceive and navigate their airborne environment with precision. The autopilot software interprets this sensory data, acting as the control mechanism for autonomous flights. Because UAVs are exposed to the physical environment, they are vulnerable to impairments of their sensory mechanisms. Their real-time interactions with the atmosphere also make them susceptible to cyber exploitation, where sensory data alteration through counterfeit wireless signals poses a significant threat. In this context, sensor failures can result in unsafe flight conditions, as the fault-handling logic may fail to anticipate the context of the issue, allowing the autopilot to execute operations without the necessary adjustments. Untimely handling of sensor failures can result in mid-air collisions or crashes. To address these challenges, we created the Biomisa Arducopter Sensory Critique (BASiC) dataset, a state-of-the-art resource for UAV sensor failure analysis. The BASiC dataset comprises data from 70 autonomous flights, spanning over 7 hours. It encompasses 3+ hours each of pre-failure and post-failure data, along with 1+ hour of no-failure data. We selected the ArduPilot platform as our demonstration aerial vehicle to conduct the experiments. By engineering Software-in-the-Loop (SITL) parameters, we effectively executed sensor failure test simulations. Our dataset incorporates failures of six representative sensors critical to UAV operations: the global positioning system (GPS) for precise aerial positioning, the remote control for communication with the ground control station (GCS), the accelerometer for measuring linear acceleration, the gyroscope for rotational acceleration measurement, the compass for heading information, and the barometer for maintaining flight height based on atmospheric pressure data. The availability of the BASiC dataset will benefit the research community, empowering researchers to explore and experiment with state-of-the-art deep learning models by tailoring them for time-series signal analysis. It may also contribute to enhancing the safety and reliability of mission-critical autonomous UAV flights.
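
A hedged sketch of how such flight logs might be windowed for time-series models; the column names and failure flag used here are assumptions for illustration, not the dataset's documented schema.

```python
import numpy as np
import pandas as pd

def make_windows(df, columns, window=200, step=100):
    """Slice a flight log into fixed-length windows for time-series models."""
    values = df[columns].to_numpy()
    return np.stack([values[s:s + window]
                     for s in range(0, len(values) - window + 1, step)])

# Synthetic stand-in for one flight log; real logs would be loaded from file,
# and the column names used here ('gyro_x', ..., 'failure') are assumed.
n = 2_000
rng = np.random.default_rng(0)
log = pd.DataFrame({c: rng.normal(size=n) for c in ["gyro_x", "gyro_y", "gyro_z"]})
log["failure"] = (np.arange(n) > 1_200).astype(int)          # failure injected late in the flight

X = make_windows(log, ["gyro_x", "gyro_y", "gyro_z"])
y = make_windows(log, ["failure"]).max(axis=(1, 2))          # window label: 1 if failure occurs in it
print(X.shape, y.shape)
```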

11.
Front Med (Lausanne) ; 11: 1380405, 2024.
Article in English | MEDLINE | ID: mdl-38741771

ABSTRACT

Introduction: Non-melanoma skin cancer, comprising basal cell carcinoma (BCC), squamous cell carcinoma (SCC), and intraepidermal carcinoma (IEC), has the highest incidence rate among skin cancers. Intelligent decision support systems may address the issue of the limited number of subject experts and help mitigate the disparity in health services between urban centers and remote areas. Method: In this research, we propose a transformer-based model for the segmentation of histopathology images not only into inflammation and cancers such as BCC, SCC, and IEC, but also into the skin tissues and boundaries that are important in decision-making. Accurate segmentation of these tissue types will eventually lead to accurate detection and classification of non-melanoma skin cancer. Segmentation according to tissue types, and its visual representation before classification, enhances the trust of pathologists and doctors, as it reflects how most pathologists approach this problem. The visualization of the model's confidence in its predictions through uncertainty maps also distinguishes this study from most deep learning methods. Results: The evaluation of the proposed system is carried out using a publicly available dataset. The proposed segmentation system demonstrated good performance with an F1 score of 0.908, a mean intersection over union (mIoU) of 0.653, and an average accuracy of 83.1%, indicating that the system can be used successfully as a decision support system and has the potential to mature into a fully automated system. Discussion: This study is an attempt to automate the segmentation of the most commonly occurring non-melanoma skin cancers using a transformer-based deep learning technique applied to histopathology skin images. The highly accurate segmentation and visual representation of histopathology images according to tissue types by the proposed system implies that it can be used for routine skin pathology tasks, including cancer and other anomaly detection, their classification, and the measurement of surgical margins in cancer cases.
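
The paper's exact uncertainty-estimation procedure is not detailed in the abstract; one common way to render a per-pixel uncertainty map from a segmentation model's softmax output is the normalised predictive entropy, sketched below on synthetic data.

```python
import numpy as np

def entropy_uncertainty_map(softmax_probs, eps=1e-8):
    """Per-pixel predictive entropy from a (H, W, C) softmax output, normalised to [0, 1]."""
    entropy = -np.sum(softmax_probs * np.log(softmax_probs + eps), axis=-1)
    return entropy / np.log(softmax_probs.shape[-1])

# Toy 4-class segmentation output for a 256x256 patch.
rng = np.random.default_rng(0)
logits = rng.normal(size=(256, 256, 4))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
uncertainty = entropy_uncertainty_map(probs)
print(uncertainty.min(), uncertainty.max())
```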

12.
Sci Rep ; 14(1): 17080, 2024 07 24.
Article in English | MEDLINE | ID: mdl-39048599

ABSTRACT

Affect recognition in a real-world, less constrained environment is the principal prerequisite for the industrial-level usefulness of this technology. Monitoring the psychological profile using smart, wearable electroencephalogram (EEG) sensors during daily activities without external stimuli, as with memory-induced emotions, is a challenging research gap in emotion recognition. This paper proposes a deep learning framework for improved memory-induced emotion recognition, leveraging a combination of 1D-CNN and LSTM as feature extractors integrated with an Extreme Learning Machine (ELM) classifier. The proposed deep learning architecture, combined with EEG preprocessing such as the removal of the average baseline signal from each sample and the extraction of EEG rhythms (delta, theta, alpha, beta, and gamma), aims to capture the repetitive and continuous patterns of memory-induced emotions, which are underexplored with deep learning techniques. This work analyzed EEG signals recorded with a wearable, ultra-mobile sports cap while participants recalled autobiographical emotional memories evoked by affect-denoting words, with self-annotation on the scales of valence and arousal. With extensive experimentation on the same dataset, the proposed framework empirically outperforms existing techniques for the emerging area of memory-induced emotion recognition, with an accuracy of 65.6%. Analysis of the individual EEG rhythms (delta, theta, alpha, beta, and gamma) achieved accuracies of 65.5%, 52.1%, 65.1%, 64.6%, and 65.0%, respectively, for classification into the four quadrants of valence and arousal. These results underscore the significant advancement achieved by our proposed method for the real-world setting of memory-induced emotion recognition.
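
A minimal sketch of the rhythm-extraction preprocessing step, splitting an EEG trace into the standard delta/theta/alpha/beta/gamma bands with Butterworth band-pass filters; the band edges, filter order, and sampling rate are typical values, not necessarily those used in the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def extract_rhythms(eeg, fs=128, order=4):
    """Return a dict of band-pass filtered copies of a 1-D EEG signal, one per rhythm."""
    rhythms = {}
    for name, (lo, hi) in BANDS.items():
        b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
        rhythms[name] = filtfilt(b, a, eeg)
    return rhythms

# Toy 10-second single-channel EEG trace.
fs = 128
eeg = np.random.randn(10 * fs)
eeg -= eeg.mean()                          # crude stand-in for average-baseline removal
print({k: v.shape for k, v in extract_rhythms(eeg, fs).items()})
```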


Assuntos
Aprendizado Profundo , Eletroencefalografia , Emoções , Rememoração Mental , Humanos , Eletroencefalografia/métodos , Emoções/fisiologia , Rememoração Mental/fisiologia , Masculino , Feminino , Adulto , Adulto Jovem
13.
Sci Rep ; 14(1): 2335, 2024 01 28.
Article in English | MEDLINE | ID: mdl-38282056

ABSTRACT

Staining is a crucial step in histopathology that prepares tissue sections for microscopic examination. Hematoxylin and eosin (H&E) staining, also known as basic or routine staining, is used in 80% of histopathology slides worldwide. To enhance the histopathology workflow, recent research has focused on integrating generative artificial intelligence and deep learning models. These models have the potential to improve staining accuracy, reduce staining time, and minimize the use of hazardous chemicals, making histopathology a safer and more efficient field. In this study, we introduce a novel three-stage, dual contrastive learning-based, image-to-image generative (DCLGAN) model for virtually applying an "H&E stain" to unstained skin tissue images. The proposed model uses a unique learning setting comprising two pairs of generators and discriminators. By employing contrastive learning, our model maximizes the mutual information between traditional H&E-stained and virtually stained H&E patches. Our dataset consists of pairs of unstained and H&E-stained images, scanned with a brightfield microscope at 20× magnification, providing a comprehensive set of training and testing images for evaluating the efficacy of our proposed model. Two metrics, the Fréchet Inception Distance (FID) and the Kernel Inception Distance (KID), were used to quantitatively evaluate the virtually stained slides. Our analysis revealed that the average FID score between virtually stained and H&E-stained images (80.47) was considerably lower than that between unstained and virtually stained slides (342.01) or between unstained and H&E-stained slides (320.4), indicating similarity between the virtual and H&E stains. Similarly, the mean KID score between H&E-stained and virtually stained images (0.022) was significantly lower than the mean KID score between unstained and H&E-stained (0.28) or unstained and virtually stained (0.31) images. In addition, a group of experienced dermatopathologists evaluated traditional and virtually stained images and demonstrated an average agreement of 78.8% and 90.2% for paired and single virtually stained image evaluations, respectively. Our study demonstrates that the proposed three-stage, dual contrastive learning-based, image-to-image generative model is effective in generating virtually stained images, as indicated by the quantitative metrics and grader evaluations. Our findings also suggest that GAN models have the potential to replace traditional H&E staining, which could reduce both time and environmental impact. This study highlights the promise of virtual staining as a viable alternative to traditional staining techniques in histopathology.
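
For reference, the FID between two sets of images reduces to the Fréchet distance between Gaussians fitted to their Inception activations; a minimal sketch with synthetic activation statistics is shown below (real use would take Inception-v3 features of the stained and virtually stained patches).

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(mu1, cov1, mu2, cov2):
    """FID between two Gaussians fitted to Inception activations of two image sets."""
    covmean = sqrtm(cov1 @ cov2)
    if np.iscomplexobj(covmean):           # discard numerical imaginary residue
        covmean = covmean.real
    diff = mu1 - mu2
    return diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean)

# Toy activation statistics standing in for real Inception-v3 feature sets.
rng = np.random.default_rng(0)
a, b = rng.normal(size=(500, 64)), rng.normal(loc=0.1, size=(500, 64))
fid = frechet_distance(a.mean(0), np.cov(a, rowvar=False), b.mean(0), np.cov(b, rowvar=False))
print(f"FID: {fid:.2f}")
```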


Assuntos
Inteligência Artificial , Benchmarking , Amarelo de Eosina-(YS) , Substâncias Perigosas , Microscopia
14.
PLoS One ; 18(1): e0280352, 2023.
Article in English | MEDLINE | ID: mdl-36649367

ABSTRACT

Following its initial identification on December 31, 2019, COVID-19 quickly spread around the world as a pandemic, claiming more than six million lives. Early diagnosis with appropriate intervention can help prevent deaths and serious illness, as the distinguishing symptoms that set COVID-19 apart from pneumonia and influenza frequently do not show up until the patient has already suffered significant damage. A chest X-ray (CXR), one of the most widely used imaging modalities, offers a non-invasive method of detection. CXR image analysis can also reveal other disorders, such as pneumonia, which show up as anomalies in the lungs. Thus, CXRs can be used for automated grading, aiding doctors in making a better diagnosis. To classify a CXR image as Negative for Pneumonia, Typical, Indeterminate, or Atypical, we used the publicly available CXR image competition dataset SIIM-FISABIO-RSNA COVID-19 from Kaggle. The proposed architecture employed an ensemble of EfficientNetV2-L models for classification, trained via transfer learning from ImageNet-21K initialised weights on various subsets of the data (code for the proposed methodology is available at: https://github.com/asadkhan1221/siim-covid19.git). To identify and localise opacities, an ensemble of YOLO detectors was combined using Weighted Boxes Fusion (WBF). Significant generalisability gains were made possible by adding classification auxiliary heads to the CNN backbone. The method was improved further by utilising test-time augmentation for both classifiers and localizers. The Mean Average Precision results show that the proposed deep learning model achieves 0.617 and 0.609 on the public and private sets, respectively, which is comparable to other techniques on the Kaggle dataset.
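
A minimal sketch of the test-time-augmentation idea for the classifier: average the class probabilities of the original and horizontally flipped images. The DummyModel stands in for the EfficientNetV2 ensemble and is purely illustrative.

```python
import numpy as np

def tta_predict(model, images):
    """Average class probabilities over the original and horizontally flipped images."""
    probs = model.predict(images)
    probs_flipped = model.predict(images[:, :, ::-1, :])   # flip the width axis of (N, H, W, C)
    return (probs + probs_flipped) / 2.0

class DummyModel:
    """Stand-in for an EfficientNetV2 classifier returning 4-class probabilities."""
    def predict(self, x):
        proj = np.linspace(0.0, 1.0, 12).reshape(3, 4)      # fixed toy projection to 4 "classes"
        logits = x.mean(axis=(1, 2)) @ proj
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

batch = np.random.rand(8, 224, 224, 3).astype(np.float32)
print(tta_predict(DummyModel(), batch).argmax(axis=1))
```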


Assuntos
COVID-19 , Pneumonia Viral , Humanos , COVID-19/diagnóstico por imagem , Raios X , Pneumonia Viral/diagnóstico por imagem , Tórax/diagnóstico por imagem , Redes Neurais de Computação
15.
Comput Biol Med ; 156: 106668, 2023 04.
Article in English | MEDLINE | ID: mdl-36863192

ABSTRACT

Artificial Intelligence (AI) techniques based on deep learning have revolutionized disease diagnosis with their outstanding image classification performance. In spite of these outstanding results, the widespread adoption of these techniques in clinical practice is still taking place at a moderate pace. One of the major hindrances is that a trained Deep Neural Network (DNN) model provides a prediction, but questions about why and how that prediction was made remain unanswered. This linkage is of utmost importance in the regulated healthcare domain to increase trust in automated diagnosis systems among practitioners, patients, and other stakeholders. The application of deep learning to medical imaging has to be treated with caution due to health and safety concerns, similar to blame attribution in an accident involving an autonomous car. The consequences of both false positive and false negative cases are far-reaching for patients' welfare and cannot be ignored. This is exacerbated by the fact that state-of-the-art deep learning algorithms comprise complex interconnected structures and millions of parameters and have a 'black box' nature, offering little understanding of their inner workings, unlike traditional machine learning algorithms. Explainable AI (XAI) techniques help to understand model predictions, which helps develop trust in the system, accelerate disease diagnosis, and meet regulatory requirements. This survey provides a comprehensive review of the promising field of XAI for biomedical imaging diagnostics. We also provide a categorization of XAI techniques, discuss the open challenges, and provide future directions for XAI that will be of interest to clinicians, regulators, and model developers.


Assuntos
Inteligência Artificial , Redes Neurais de Computação , Humanos , Diagnóstico por Imagem , Algoritmos , Aprendizado de Máquina
16.
Diagnostics (Basel) ; 13(13)2023 Jun 30.
Article in English | MEDLINE | ID: mdl-37443625

ABSTRACT

Diabetic retinopathy is an abnormality of the retina in which a diabetic patient suffers severe vision loss due to the affected retina. Proliferative diabetic retinopathy (PDR) is the final and most critical stage of diabetic retinopathy, in which abnormal and fragile blood vessels start to grow on the surface of the retina. It causes retinal detachment, which may lead to complete blindness in severe cases. In this paper, a novel method is proposed for the detection and grading of neovascularization. The proposed system first performs pre-processing on input retinal images to enhance the vascular pattern, followed by blood vessel segmentation and optic disc localization. Various features are then tested on the candidate regions with different thresholds, separating positive and negative advanced diabetic retinopathy cases. The optic disc coordinates are used to grade neovascularization as NVD (neovascularization at the disc) or NVE (neovascularization elsewhere). The proposed algorithm improves the quality of automated diagnostic systems by eliminating normal blood vessels and exudates that might hinder accurate disease detection, resulting in more accurate detection of abnormal blood vessels. The evaluation of the proposed system has been carried out using performance parameters such as sensitivity, specificity, accuracy, and positive predictive value (PPV) on a publicly available standard retinal image database and a locally available database. The proposed algorithm gives an accuracy of 98.5% and a PPV of 99.8% on MESSIDOR, and an accuracy of 96.5% and a PPV of 100% on the local database.
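
A rough sketch of the vascular-pattern enhancement step, assuming a common green-channel + CLAHE + black-hat pipeline; the kernel sizes and thresholding are illustrative choices, not the paper's exact parameters.

```python
import cv2
import numpy as np

def enhance_vessels(rgb_fundus):
    """Enhance the retinal vascular pattern: green channel + CLAHE + black-hat filtering."""
    green = rgb_fundus[:, :, 1]
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    blackhat = cv2.morphologyEx(enhanced, cv2.MORPH_BLACKHAT, kernel)   # dark vessels on bright background
    _, vessels = cv2.threshold(blackhat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return vessels

# Toy image standing in for a fundus photograph.
img = (np.random.rand(512, 512, 3) * 255).astype(np.uint8)
mask = enhance_vessels(img)
print(mask.shape, mask.dtype)
```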

17.
Biomed Signal Process Control ; 85: 104855, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36987448

ABSTRACT

Chest X-rays (CXRs) are the most commonly used imaging modality in radiology for diagnosing pulmonary diseases, with close to 2 billion CXRs taken every year. The recent upsurge of COVID-19 and its variants, accompanied by pneumonia and tuberculosis, can be fatal in some cases, and lives could be saved through early detection and appropriate intervention in advanced cases. Thus, CXRs can be used for automated severity grading of pulmonary diseases to aid radiologists in making better-informed diagnoses. In this article, we propose a single framework for disease classification and severity scoring produced by segmenting the lungs into six regions. We present a modified progressive learning technique in which the amount of augmentation at each step is capped. The base network in the framework is first trained using modified progressive learning and can then be tweaked for new data sets. Furthermore, the segmentation task makes use of an attention map generated within, and by, the network itself. This attention mechanism allows us to achieve segmentation results on par with networks having an order of magnitude more parameters. We also propose severity score grading for four thoracic diseases, providing a single-digit score corresponding to the spread of opacity across the lung segments, developed with the help of radiologists. The proposed framework is evaluated using the BRAX data set for segmentation and for classification into six classes, with severity grading for a subset of the classes. On the BRAX validation data set, we achieve F1 scores of 0.924 and 0.939 without and with fine-tuning, respectively. A mean matching score of 80.8% is obtained for severity score grading, while an average area under the receiver operating characteristic curve of 0.88 is achieved for classification.
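
As an illustration of how a single-digit severity score could be aggregated from the six segmented lung regions (the actual scoring rule and threshold used in the paper may differ):

```python
def severity_score(opacity_fractions, threshold=0.1):
    """Single-digit severity score: number of lung regions (out of six) whose
    opacity coverage exceeds a threshold (rule and threshold are illustrative)."""
    assert len(opacity_fractions) == 6
    return sum(f > threshold for f in opacity_fractions)

# Fraction of each of the six segmented lung regions covered by opacity,
# e.g. derived from the segmentation masks (values here are hypothetical).
regions = [0.02, 0.15, 0.40, 0.00, 0.22, 0.05]
print(severity_score(regions))   # -> 3
```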

18.
Chemosphere ; 313: 137332, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36427576

ABSTRACT

Conventional chemotherapy has toxic effects on healthy tissues. A therapeutic system is therefore required that can administer, distribute, metabolize, and excrete medicine from the human body without damaging healthy cells. This is possible by designing a therapeutic system that releases the drug at the specific target tissue. In the current work, novel chitosan (CS)-based polymeric nanoparticles (PNPs) containing N-isopropyl acrylamide (NIPAAM) and 2-(di-isopropyl amino) ethyl methacrylate (DPA) are designed. The presence of the available functional groups, i.e. -OH (3262 cm-1), -NH2 (1542 cm-1), and C=O (1642 cm-1), was confirmed by Fourier Transform Infrared Spectroscopy (FTIR). The surface morphology and average particle size (175 nm) were determined by Scanning Electron Microscopy (SEM). X-Ray Diffractometry (XRD) studies confirmed the amorphous nature of the PNPs, and excellent thermal stability up to 100 °C with only 2.69% mass loss was confirmed by Thermogravimetric Analysis (TGA). The pH sensitivity of the PNPs for release of encapsulated doxorubicin at the malignant site was investigated. The encapsulation efficiency of the PNPs for doxorubicin (a chemotherapeutic) was 89% (4.45 mg/5 mg), measured using a UV-Vis spectrophotometer. The drug release from the loaded PNPs was 88% (3.92 mg/4.45 mg) at pH 5.3 in 96 h. PNPs with varying DPA concentrations can thus be used effectively to deliver chemotherapeutic agents with high efficacy.
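
For clarity, the reported encapsulation efficiency and cumulative release percentages follow directly from the quoted masses:

```python
def percent(part_mg, total_mg):
    """Simple percentage helper for the reported encapsulation and release figures."""
    return 100.0 * part_mg / total_mg

print(f"Encapsulation efficiency: {percent(4.45, 5.0):.0f}%")            # 4.45 mg loaded of 5 mg -> 89%
print(f"Cumulative release at pH 5.3 / 96 h: {percent(3.92, 4.45):.0f}%")  # 3.92 mg of 4.45 mg -> 88%
```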


Assuntos
Quitosana , Nanopartículas , Neoplasias , Humanos , Polímeros , Doxorrubicina , Liberação Controlada de Fármacos , Portadores de Fármacos , Tamanho da Partícula , Espectroscopia de Infravermelho com Transformada de Fourier , Microambiente Tumoral
19.
ACS Omega ; 8(7): 6638-6649, 2023 Feb 21.
Article in English | MEDLINE | ID: mdl-36844569

ABSTRACT

Acyl-amide is extensively used as a functional group and is a superior contender for the design of MOFs with guest-accessible functional organic sites. A novel acyl-amide-containing tetracarboxylate ligand, bis(3,5-dicarboxy-phenyl)terephthalamide, has been successfully synthesized. The H4L linker has some fascinating attributes: (i) four carboxylate moieties as coordination sites allow rich coordination modes and give access to a diversity of structures; (ii) two acyl-amide groups as guest-interaction sites can allow guest molecules to be integrated into the MOF networks through H-bonding interfaces and can potentially act as functional organic sites for the condensation reaction. A mesoporous MOF ([Cu2(L)(H2O)3]·4DMF·6H2O) has been prepared in order to produce amide FOS within the MOF, which act as guest-accessible sites. The prepared MOF was characterized by CHN analysis, PXRD, FTIR spectroscopy, and SEM analysis. The MOF showed superior catalytic activity for the Knoevenagel condensation. The catalytic system tolerates a broad variety of functional groups and gives high to modest yields; aldehydes containing electron-withdrawing groups (4-chloro, 4-fluoro, 4-nitro) offer yields > 98% in shorter reaction times compared with aldehydes bearing electron-donating groups (4-methyl). The amide-decorated MOF (LOCOM-1-) can, as a heterogeneous catalyst, be easily recovered by centrifugation and recycled without appreciable loss of its catalytic efficiency.

20.
Diagnostics (Basel) ; 12(12)2022 Dec 07.
Article in English | MEDLINE | ID: mdl-36553091

ABSTRACT

Diabetic retinopathy affects one-third of all diabetic patients and may cause vision impairment. It has four stages of progression: mild non-proliferative, moderate non-proliferative, severe non-proliferative, and proliferative diabetic retinopathy. The disease has no noticeable symptoms at early stages and may lead to chronic damage, causing permanent blindness if not detected early. The proposed research provides deep learning frameworks for autonomous detection of diabetic retinopathy at an early stage using fundus images. The first framework consists of cascaded neural networks spanning three layers, where each layer classifies data into two classes: one is the desired stage and the other is passed to the next classifier, until the input image is assigned to one of the stages. The second framework takes normalized, HSV, and RGB fundus images as input to three Convolutional Neural Networks, and the resulting probability vectors are averaged to obtain the final output for the input image. The third framework uses a Long Short-Term Memory (LSTM) module within the CNN to help the network remember information over a long time span. The proposed frameworks were tested and compared on the large-scale Kaggle fundus image dataset EYEPAC. The evaluations show that the second framework outperformed the others, achieving an accuracy of 78.06% and 83.78% without and with augmentation, respectively.
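
A minimal sketch of the probability-averaging fusion used in the second framework; the probability vectors below are made up for illustration.

```python
import numpy as np

def fuse_predictions(prob_normalized, prob_hsv, prob_rgb):
    """Average the probability vectors of the three colour-space CNNs (second framework)."""
    return (np.asarray(prob_normalized) + np.asarray(prob_hsv) + np.asarray(prob_rgb)) / 3.0

# Toy 5-class probability vectors for one fundus image from the three CNNs.
p_norm = [0.10, 0.60, 0.15, 0.10, 0.05]
p_hsv  = [0.05, 0.55, 0.20, 0.15, 0.05]
p_rgb  = [0.10, 0.50, 0.25, 0.10, 0.05]
fused = fuse_predictions(p_norm, p_hsv, p_rgb)
print(fused, "-> predicted stage:", int(fused.argmax()))
```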
