Results 1 - 13 of 13
1.
Sensors (Basel) ; 23(18)2023 Sep 06.
Article in English | MEDLINE | ID: mdl-37765768

ABSTRACT

Adaptive equalization is crucial in mitigating distortions and compensating for frequency response variations in communication systems. It aims to enhance signal quality by adjusting the characteristics of the received signal. Particle swarm optimization (PSO) algorithms have shown promise in optimizing the tap weights of the equalizer; however, PSO's optimization capabilities need further enhancement to improve equalization performance. This paper provides a comprehensive study of the issues and challenges of adaptive filtering by comparing different variants of PSO and analyzing the performance of PSO combined with other optimization algorithms in terms of convergence, accuracy, and adaptability. Traditional PSO algorithms often suffer from high computational complexity and slow convergence rates, limiting their effectiveness on complex optimization problems. To address these limitations, this paper proposes a set of techniques aimed at reducing the complexity and accelerating the convergence of PSO.
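For intuition, the following is a minimal sketch of PSO tuning FIR equalizer tap weights against a known training sequence; the toy channel, swarm size, and coefficients are illustrative assumptions, not the variants studied in the paper.

```python
import numpy as np

# Minimal PSO for FIR equalizer tap weights (illustrative; not the paper's variant).
rng = np.random.default_rng(0)
train = rng.choice([-1.0, 1.0], size=500)             # known training symbols
received = np.convolve(train, [1.0, 0.4, 0.2])[:500]  # toy dispersive channel
received += 0.01 * rng.standard_normal(500)

def mse(taps):
    equalized = np.convolve(received, taps)[:500]
    return np.mean((equalized - train) ** 2)

n_taps, n_particles = 5, 30
pos = rng.uniform(-1, 1, (n_particles, n_taps))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    cost = np.array([mse(p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best taps:", gbest, "MSE:", pbest_cost.min())
```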

2.
Sensors (Basel) ; 20(16)2020 Aug 14.
Article in English | MEDLINE | ID: mdl-32823807

ABSTRACT

Novel trends in affective computing are based on reliable sources of physiological signals such as the electroencephalogram (EEG), electrocardiogram (ECG), and galvanic skin response (GSR). These signals pose the challenge of improving performance over a broader set of emotion classes in a less constrained, real-world environment. To address this challenge, we propose a computational framework comprising a 2D convolutional neural network (CNN) architecture for an arrangement of 14 EEG channels, and a combined long short-term memory (LSTM) and 1D-CNN architecture for ECG and GSR. Our approach is subject-independent and incorporates two publicly available datasets, DREAMER and AMIGOS, acquired with low-cost, wearable sensors that make the physiological signals suitable for real-world environments. The results outperform state-of-the-art approaches for classification into four classes, namely High Valence-High Arousal, High Valence-Low Arousal, Low Valence-High Arousal, and Low Valence-Low Arousal. On AMIGOS, average emotion elicitation accuracy of 98.73% is achieved with the ECG right-channel modality, 76.65% with EEG, and 63.67% with GSR. Multi-modal fusion yields the overall highest accuracies of 99.0% on the AMIGOS dataset and 90.8% on the DREAMER dataset. A strong correlation of spectral- and hidden-layer feature analyses with classification performance suggests the efficacy of the proposed method for significant feature extraction and higher emotion elicitation performance in a broader context of less constrained environments.
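As a concrete illustration of the kind of architecture described above, the sketch below builds a 1D-CNN + LSTM classifier for a single ECG channel; the layer sizes, input length, and sampling rate are assumptions for illustration, not the paper's exact configuration.

```python
import tensorflow as tf

# Illustrative 1D-CNN + LSTM for one ECG channel (layer sizes are assumptions,
# not the paper's exact architecture); output is one of four valence-arousal classes.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2560, 1)),          # e.g. 20 s of ECG at 128 Hz
    tf.keras.layers.Conv1D(32, 7, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(4, activation="softmax"),  # HVHA, HVLA, LVHA, LVLA
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```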


Subjects
Electrocardiography , Electroencephalography , Emotions , Galvanic Skin Response , Neural Networks, Computer , Arousal , Humans
3.
J Digit Imaging ; 33(6): 1428-1442, 2020 12.
Article in English | MEDLINE | ID: mdl-32968881

ABSTRACT

Glaucoma is a progressive and deteriorating optic neuropathy that leads to visual field defects. The damage caused by glaucoma is irreversible, so early and timely diagnosis is of significant importance. The proposed system employs a convolutional neural network (CNN) for automatic segmentation of the retinal layers. The inner limiting membrane (ILM) and retinal pigmented epithelium (RPE) are used to calculate the cup-to-disc ratio (CDR) for glaucoma diagnosis. The proposed system uses structure tensors to extract candidate layer pixels; a patch around each candidate layer pixel is extracted and classified using the CNN. The framework is based on the VGG-16 architecture for feature extraction and classification of retinal layer pixels. The output feature map is fed into a softmax layer, which produces a probability map for the central pixel of each patch, deciding whether it belongs to the ILM, the RPE, or the background. Graph search refines the extracted layers by interpolating missing points, and the extracted ILM and RPE are finally used to compute the CDR value and diagnose glaucoma. The proposed system is validated on a local dataset of optical coherence tomography images from 196 patients, including normal and glaucoma subjects. The dataset contains manually annotated ILM and RPE layers; manually extracted patches for ILM, RPE, and background pixels; CDR values; and the final finding regarding glaucoma. The proposed system extracts the ILM and RPE with small mean absolute errors of 6.03 and 5.56, respectively, and finds the CDR value within an average margin of ±0.09 of the value given by a glaucoma expert. The proposed system achieves average sensitivity, specificity, and accuracy of 94.6%, 94.07%, and 94.68%, respectively.
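To make the final measurement step concrete, here is a hypothetical sketch of computing the CDR from extracted layer boundaries; the array conventions, disc-margin logic, and reference-plane offset are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical sketch of the final CDR step: given per-column ILM and RPE depths
# (row indices, increasing with depth) across an ONH-centred B-scan, estimate
# cup and disc diameters. The offset and conventions are assumptions.
def cup_to_disc_ratio(ilm, rpe, offset_px=40):
    valid = ~np.isnan(rpe)                  # RPE is absent across the disc opening
    cols = np.where(valid)[0]
    mid = len(rpe) // 2
    disc_left = cols[cols < mid].max()      # last RPE column on the left side
    disc_right = cols[cols > mid].min()     # first RPE column on the right side
    disc_diameter = disc_right - disc_left
    # Reference plane: a fixed offset above the RPE tips; the cup is where the
    # ILM dips deeper (larger row index) than that plane.
    ref = np.nanmean([rpe[disc_left], rpe[disc_right]]) - offset_px
    cup_cols = np.where(ilm > ref)[0]
    cup_diameter = cup_cols.max() - cup_cols.min() if cup_cols.size else 0
    return cup_diameter / disc_diameter
```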


Subjects
Glaucoma , Glaucoma/diagnostic imaging , Humans , Neural Networks, Computer , Optic Disk , Retina/diagnostic imaging , Tomography, Optical Coherence
4.
Sensors (Basel) ; 18(12)2018 Nov 25.
Article in English | MEDLINE | ID: mdl-30477277

ABSTRACT

Clustering is the most common method for organizing unlabeled data into its natural groups (called clusters), based on some notion of similarity among data objects. The Partitioning Around Medoids (PAM) algorithm belongs to the partitioning-based clustering methods widely used for object categorization, image analysis, bioinformatics, and data compression, but due to its high time complexity, the PAM algorithm cannot be used with large datasets or in embedded or real-time applications. In this work, we propose a simple and scalable parallel architecture for the PAM algorithm to reduce its running time. This architecture can easily be implemented either on a multi-core processor system to deal with big data or on a reconfigurable hardware platform, such as FPGAs and MPSoCs, which makes it suitable for real-time clustering applications. Our proposed model partitions data equally among multiple processing cores. Each core executes the same sequence of tasks simultaneously on its respective data subset and shares intermediate results with the other cores to produce the final result. Experiments show that the running time of the PAM algorithm drops sharply as the number of cores working in parallel increases. The speedup of our proposed model also becomes more linear as the number of data points grows and as the clusters become more uniform. The results further demonstrate that the proposed architecture produces the same results as the original PAM algorithm, but with reduced running time.
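As an illustration of the partitioning idea, the sketch below distributes the cost evaluation of a candidate medoid set across cores with Python's multiprocessing; the data, chunking, and medoid set are illustrative assumptions rather than the paper's FPGA/MPSoC implementation.

```python
import numpy as np
from multiprocessing import Pool

# Sketch of the core idea (details assumed): each worker holds an equal slice of
# the data and returns the partial clustering cost for a candidate medoid set;
# the partial sums are combined, so only medoids and scalars cross core boundaries.
def partial_cost(args):
    chunk, medoids = args
    d = np.linalg.norm(chunk[:, None, :] - medoids[None, :, :], axis=2)
    return d.min(axis=1).sum()   # each point pays the distance to its nearest medoid

def total_cost(data, medoids, n_cores=4):
    chunks = np.array_split(data, n_cores)
    with Pool(n_cores) as pool:
        return sum(pool.map(partial_cost, [(c, medoids) for c in chunks]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.standard_normal((10000, 2))
    medoids = data[rng.choice(len(data), 3, replace=False)]
    print(total_cost(data, medoids))
```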


Subjects
Algorithms , Cluster Analysis , Computational Biology/statistics & numerical data , Image Processing, Computer-Assisted/statistics & numerical data , Computers
6.
Sci Rep ; 14(1): 2335, 2024 01 28.
Article in English | MEDLINE | ID: mdl-38282056

ABSTRACT

Staining is a crucial step in histopathology that prepares tissue sections for microscopic examination. Hematoxylin and eosin (H&E) staining, also known as basic or routine staining, is used in 80% of histopathology slides worldwide. To enhance the histopathology workflow, recent research has focused on integrating generative artificial intelligence and deep learning models. These models have the potential to improve staining accuracy, reduce staining time, and minimize the use of hazardous chemicals, making histopathology a safer and more efficient field. In this study, we introduce a novel three-stage, dual contrastive learning-based, image-to-image generative (DCLGAN) model for virtually applying an H&E stain to unstained skin tissue images. The proposed model uses a learning setting comprising two pairs of generators and discriminators. By employing contrastive learning, our model maximizes the mutual information between traditional H&E-stained and virtually stained H&E patches. Our dataset consists of pairs of unstained and H&E-stained images, scanned with a brightfield microscope at 20× magnification, providing a comprehensive set of training and testing images for evaluating the efficacy of the proposed model. Two metrics, Fréchet Inception Distance (FID) and Kernel Inception Distance (KID), were used to quantitatively evaluate the virtually stained slides. Our analysis revealed that the average FID score between virtually stained and H&E-stained images (80.47) was considerably lower than that between unstained and virtually stained slides (342.01) or between unstained and H&E-stained slides (320.4), indicating similarity between the virtual and H&E stains. Similarly, the mean KID score between H&E-stained and virtually stained images (0.022) was significantly lower than the mean KID score between unstained and H&E-stained (0.28) or unstained and virtually stained (0.31) images. In addition, a group of experienced dermatopathologists evaluated traditional and virtually stained images and showed average agreement of 78.8% and 90.2% for paired and single virtually stained image evaluations, respectively. Our study demonstrates that the proposed three-stage dual contrastive learning-based image-to-image generative model is effective in generating virtually stained images, as indicated by the quantitative metrics and grader evaluations. Our findings also suggest that GAN models have the potential to replace traditional H&E staining, reducing both time and environmental impact. This study highlights the promise of virtual staining as a viable alternative to traditional staining techniques in histopathology.
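For reference, the standard FID formula used for such comparisons can be computed as follows, given two sets of Inception feature vectors (feature extraction itself omitted); this is the textbook metric, not code from the study.

```python
import numpy as np
from scipy.linalg import sqrtm

# Fréchet Inception Distance between two sets of Inception feature vectors,
# as used to compare virtually stained and H&E-stained patches.
def fid(feats_a, feats_b):
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):   # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    return float(np.sum((mu_a - mu_b) ** 2) + np.trace(cov_a + cov_b - 2 * covmean))
```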


Subjects
Artificial Intelligence , Benchmarking , Eosine Yellowish-(YS) , Hazardous Substances , Microscopy
7.
Biomed Signal Process Control ; 85: 104855, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36987448

ABSTRACT

Chest X-rays (CXRs) are the most commonly used imaging modality in radiology for diagnosing pulmonary diseases, with close to 2 billion CXRs taken every year. The recent upsurge of COVID-19 and its variants, along with pneumonia and tuberculosis, can be fatal in some cases, and lives could be saved through early detection and appropriate intervention in advanced cases. CXRs can thus be used for automated severity grading of pulmonary diseases to aid radiologists in making better-informed diagnoses. In this article, we propose a single framework for disease classification and severity scoring based on segmenting the lungs into six regions. We present a modified progressive learning technique in which the amount of augmentation at each step is capped. The base network in the framework is first trained using modified progressive learning and can then be tweaked for new data sets. Furthermore, the segmentation task makes use of an attention map generated within, and by, the network itself. This attention mechanism allows the network to achieve segmentation results on par with networks having an order of magnitude or more parameters. We also propose severity score grading for four thoracic diseases, developed with the help of radiologists, that provides a single-digit score corresponding to the spread of opacity across the different lung segments. The proposed framework is evaluated using the BRAX data set for segmentation and classification into six classes, with severity grading for a subset of the classes. On the BRAX validation data set, we achieve F1 scores of 0.924 and 0.939 without and with fine-tuning, respectively. A mean matching score of 80.8% is obtained for severity score grading, while an average area under the receiver operating characteristic curve of 0.88 is achieved for classification.
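As a sketch of how a six-region segmentation can yield a single-digit severity score, the snippet below counts opacity-positive regions; the thresholding scheme is an assumption for illustration, not the paper's exact grading protocol.

```python
# Illustrative severity scoring (the exact scheme is an assumption): with the
# lungs segmented into six regions, count the regions in which the model
# predicts opacity to obtain a single-digit score per disease.
def severity_score(opacity_probs, threshold=0.5):
    """opacity_probs: six per-region opacity probabilities for one disease."""
    assert len(opacity_probs) == 6
    return sum(p >= threshold for p in opacity_probs)  # 0 (clear) .. 6 (widespread)

print(severity_score([0.9, 0.7, 0.2, 0.1, 0.6, 0.05]))  # -> 3
```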

8.
Data Brief ; 33: 106543, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33304953

ABSTRACT

In this paper, we present a dataset of surface electromyography (sEMG) and inertial measurement unit (IMU) recordings of human muscle activity during routine activities. A Myo armband (Thalmic Labs) is used to acquire the signals from the muscles below the elbow. The dataset comprises raw sEMG, accelerometer, gyroscope, and derived orientation signals for four activities: resting, typing, push-up exercise, and lifting a heavy object. There are five associated files for each activity. The IMU data can be fused with the sEMG data for better classification of activities, especially for separating aggressive from normal activities. The data are valuable for researchers working on assistive, computer-aided support systems for subjects with disabilities due to physical or mental disorders.

9.
Data Brief ; 29: 105342, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32181304

ABSTRACT

This paper presents a dataset of optical coherence tomography (OCT) and fundus images of the human eye. A TOPCON 3D OCT-1000 machine was used to acquire the images. The dataset comprises 50 images, including control and glaucomatous cases. For each OCT image there is a corresponding annotated fundus image. Cup-to-disc ratio (CDR) values annotated by glaucoma specialists from the fundus images are provided in an Excel file. The OCT images are optic nerve head (ONH) centred. Manual annotation delineating the inner limiting membrane (ILM) and retinal pigmented epithelium (RPE) layers was performed with the help of an ophthalmologist. The data are valuable for the development of automated algorithms for glaucoma diagnosis.

10.
Data Brief ; 29: 105282, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32154339

ABSTRACT

This paper presents a dataset of 100 high-quality fundus images acquired from the Armed Forces Institute of Ophthalmology (AFIO), Rawalpindi, Pakistan. The dataset has been marked by four expert ophthalmologists to aid clinicians and researchers in screening hypertensive retinopathy, diabetic retinopathy, and papilledema cases. Moreover, it contains highly detailed annotations of retinal blood vascular patterns, arteries, and veins for calculating the arteriovenous ratio (AVR), the optic nerve head (ONH) region, and other retinal anomalies such as hard exudates and cotton wool spots. The dataset is extremely useful for researchers working in ophthalmic image analysis.

11.
Data Brief ; 33: 106433, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33209967

ABSTRACT

This paper describes a dataset entitled the Retina Identification Database (RIDB). The dataset contains retinal fundus images acquired using a TOPCON TRC-50EX fundus camera, and it holds a significant position in retinal recognition and identification. Retinal recognition is considered one of the most reliable biometric recognition modalities. Biometric recognition has become an integral part of any organization's security department. Before biometrics, information was secured through passwords, PIN keys, and the like; however, the risk of decryption and hacking remained. Biometric verification includes behavioural (voice, signature, gait), morphological (fingerprint, face, palm print, retina), and biological (odour, saliva, DNA) features [1]. Among them, retina-based identification is considered spoof-proof and the most accurate identification system: since the retina is embedded inside the eye, it is least affected by the outer environment and retains its original state. Moreover, the vascular pattern in the retina is unique and remains unchanged over the entire life span. The data presented in the paper comprise 100 retinal images of 20 individuals (5 images captured from each subject). The dataset is supported by the research works [2] and [7], which propose retinal recognition algorithms for biometric verification and identification. The proposed methods utilize vascular and non-vascular features for identification and yield recognition rates of 100% and 92.5%, respectively.

12.
Comput Methods Programs Biomed ; 164: 143-157, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30195422

ABSTRACT

BACKGROUND AND OBJECTIVE: Accurate localization of heartbeats in the phonocardiogram (PCG) signal is crucial for correct segmentation and classification of heart sounds into S1 and S2. This task becomes challenging due to noise introduced in the acquisition process by a number of factors. In this paper we propose a system for heart sound localization and classification into S1 and S2. The proposed system introduces the concept of quality assessment before localization, feature extraction, and classification of heart sounds. METHODS: Signal quality is assessed by predefined criteria based on the number of peaks and zero crossings of the PCG signal. Once quality assessment is performed, heartbeats within the PCG signal are localized by extracting the envelope using a homomorphic envelogram and finding prominent peaks. To classify the localized peaks into S1 and S2, temporal and time-frequency-based statistical features are used. A support vector machine with a radial basis function kernel classifies the heartbeats into S1 and S2 based on the extracted features. The performance of the proposed system is evaluated using accuracy, sensitivity, specificity, F-measure, and total error. The dataset provided by the PASCAL Classifying Heart Sounds Challenge is used for testing. RESULTS: Performance of the system is significantly improved by quality assessment. Results show that the proposed localization algorithm achieves accuracy of up to 97% and yields the smallest total average error among the top three challenge participants. The classification algorithm achieves accuracy of up to 91%. CONCLUSION: The system provides a firm foundation for the detection of normal and abnormal heart sounds for cardiovascular disease detection.
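A minimal sketch of the localization step described above, assuming a homomorphic envelogram implemented as low-pass filtering of the log-magnitude followed by exponentiation; the sampling rate, cutoff, and peak-picking parameters are illustrative values, not the paper's settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

# Homomorphic envelogram: low-pass filter the log-magnitude, then exponentiate.
def homomorphic_envelope(pcg, fs=2000, cutoff=8.0):
    b, a = butter(2, cutoff / (fs / 2), btype="low")
    return np.exp(filtfilt(b, a, np.log(np.abs(pcg) + 1e-10)))

# Localize heartbeats as prominent, well-separated peaks of the envelope.
def localize_beats(pcg, fs=2000):
    env = homomorphic_envelope(pcg, fs)
    peaks, _ = find_peaks(env, distance=int(0.2 * fs),   # >= 200 ms apart
                          prominence=0.5 * env.std())
    return peaks                                          # candidate S1/S2 locations
```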


Subjects
Heart Sounds , Phonocardiography/statistics & numerical data , Algorithms , Cardiovascular Diseases/diagnosis , Cardiovascular Diseases/physiopathology , Databases, Factual/statistics & numerical data , Diagnosis, Computer-Assisted/statistics & numerical data , Heart Rate , Humans , Phonocardiography/standards , Quality Control , Signal Processing, Computer-Assisted , Signal-To-Noise Ratio
13.
PLoS One ; 10(4): e0125230, 2015.
Article in English | MEDLINE | ID: mdl-25898016

ABSTRACT

With increasing transistor density, the popularity of the System on Chip (SoC) has grown exponentially, and the Network on Chip (NoC) framework has been adopted as its communication backbone. In this paper, we propose a methodology for designing area-optimized, application-specific NoCs while providing hard Quality of Service (QoS) guarantees for real-time flows. The novelty of the proposed system lies in the derivation of a Mixed Integer Linear Programming (MILP) model, which is then used to generate a resource-optimal NoC topology and architecture that satisfies the traffic and QoS requirements. We also present the micro-architectural design features used to enable traffic and latency guarantees, and discuss how the solution adapts to dynamic variations in the application traffic. The paper highlights the effectiveness of the proposed method by generating resource-efficient NoC solutions for both industrial and benchmark applications. The area-optimized results are generated in a few seconds by the proposed technique, without resorting to heuristics, even for an application with 48 traffic flows.
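In the same spirit, here is a toy MILP (using the PuLP library) that selects one candidate path per flow subject to link capacities while minimizing the number of instantiated links; the flows, paths, and capacities are made-up values, and the paper's actual model is considerably richer.

```python
import pulp

# Toy MILP in the same spirit (not the paper's model): choose one candidate path
# per flow so that link capacities hold, minimizing the number of links built.
paths = {                       # flow -> candidate paths (lists of links)
    "f1": [["a-b", "b-c"], ["a-c"]],
    "f2": [["a-b", "b-d"], ["a-d"]],
}
bw = {"f1": 300, "f2": 200}     # required bandwidth (MB/s), illustrative
cap = 400                       # per-link capacity (MB/s), illustrative
links = sorted({l for ps in paths.values() for p in ps for l in p})

prob = pulp.LpProblem("noc_synthesis", pulp.LpMinimize)
x = {(f, i): pulp.LpVariable(f"x_{f}_{i}", cat="Binary")
     for f, ps in paths.items() for i in range(len(ps))}
y = {l: pulp.LpVariable(f"y_{l}", cat="Binary") for l in links}

prob += pulp.lpSum(y.values())                               # minimize links
for f, ps in paths.items():
    prob += pulp.lpSum(x[f, i] for i in range(len(ps))) == 1  # pick one path
for l in links:
    load = pulp.lpSum(bw[f] * x[f, i]
                      for f, ps in paths.items()
                      for i, p in enumerate(ps) if l in p)
    prob += load <= cap * y[l]                # capacity only if link instantiated

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({l: int(y[l].value()) for l in links})
```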


Subjects
Algorithms , Computer Communication Networks , Linear Programming , Computer Simulation , Humans , Multimedia , Quality Control , Signal Processing, Computer-Assisted , Wireless Technology