Results 1 - 20 of 95
1.
Sensors (Basel) ; 21(12), 2021 Jun 10.
Article in English | MEDLINE | ID: mdl-34200635

ABSTRACT

An annotated photoplethysmogram (PPG) is required when evaluating PPG algorithms that have been developed to detect the onsets and systolic peaks of PPG waveforms. However, few publicly accessible PPG datasets exist in which the onsets and systolic peaks of the waveforms are annotated. Therefore, this study developed a MATLAB toolbox that stitches predetermined annotated PPGs together in a random manner to generate a long, annotated PPG signal. With this toolbox, any combination of four annotated PPG templates, representing regular, irregular, fast-rhythm, and noisy PPG waveforms, can be stitched together to generate a long, annotated PPG. Furthermore, the toolbox can simulate real-life PPG signals by introducing different noise levels and waveform types. The toolbox implements two stitching methods: one based on the systolic peak and the other on the onset. Additionally, cubic spline interpolation is used to smooth the waveform around the stitching point, and a skewness index serves as a signal quality index for selecting the final output signal under the chosen stitching method. The toolbox is free and open-source software, and a graphical user interface is provided. The stitching-based synthesis method introduced in this paper is a data augmentation strategy that can help researchers significantly increase the size and diversity of annotated PPG signals available for training and testing feature extraction algorithms.
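To make the stitching idea concrete, here is a minimal Python sketch of peak-based stitching with cubic-spline smoothing and a skewness quality index (the toolbox itself is MATLAB; the function names and the smoothing-window size here are illustrative assumptions, not the toolbox's API):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.stats import skew

def stitch_at_peak(seg_a, seg_b, peak_a, peak_b, win=5):
    """Join segment A up to its systolic peak with segment B from its
    systolic peak onward, then re-fit a small window around the junction
    with a cubic spline to smooth the transition."""
    stitched = np.concatenate([seg_a[:peak_a], seg_b[peak_b:]]).astype(float)
    lo, hi = max(peak_a - win, 0), min(peak_a + win, len(stitched) - 1)
    keep = np.r_[0:lo, hi:len(stitched)]       # samples outside the window
    spline = CubicSpline(keep, stitched[keep])
    stitched[lo:hi] = spline(np.arange(lo, hi))
    return stitched

def skewness_sqi(sig):
    """Skewness-based signal quality index used to rank candidate outputs."""
    return skew(sig)
```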


Subjects
Algorithms; Photoplethysmography; Heart Rate; Signal Processing, Computer-Assisted; Software
2.
BMC Med Imaging ; 16: 11, 2016 Jan 22.
Article in English | MEDLINE | ID: mdl-26800667

ABSTRACT

BACKGROUND: From the viewpoint of patients' health, reducing the radiation dose in computed tomography (CT) is highly desirable. However, projection measurements acquired under low-dose conditions contain substantial noise. Therefore, reconstructing high-quality images from low-dose scans requires effective denoising of the projection measurements. METHODS: We propose a denoising algorithm based on maximizing the data likelihood and sparsity in the gradient domain. For Poisson noise, this formulation automatically leads to a locally adaptive denoising scheme. Because the resulting optimization problem is hard to solve and may also lead to artifacts, we suggest an explicitly local denoising method that adapts an existing algorithm for normally distributed noise. We apply the proposed method to sets of simulated and real cone-beam projections and compare its performance with two other algorithms. RESULTS: The proposed algorithm effectively suppresses the noise in simulated and real CT projections. Denoising the projections with the proposed algorithm leads to a substantial improvement of the reconstructed image in terms of noise level, spatial resolution, and visual quality. CONCLUSION: The proposed algorithm can suppress very strong quantum noise in CT projections. Therefore, it can be used as an effective tool in low-dose CT.
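A toy Python sketch of the underlying idea, gradient descent on a Poisson negative log-likelihood plus a smoothed gradient-sparsity penalty, is given below; the paper's locally adaptive scheme is more refined than this generic version:

```python
import numpy as np

def poisson_tv_denoise(y, lam=0.1, step=0.05, n_iter=200, eps=1e-3):
    """Denoise a 2-D array of Poisson counts y by gradient descent on
    sum(x - y*log x) + lam * (smoothed) l1 norm of the image gradient."""
    x = y.astype(float).clip(min=1.0)
    for _ in range(n_iter):
        grad = 1.0 - y / x                    # d/dx of (x - y*log x)
        for ax in (0, 1):                     # smoothed TV term, both axes
            d = np.diff(x, axis=ax)
            t = d / np.sqrt(d * d + eps)      # smooth surrogate for sign(d)
            pad = [(1, 1) if a == ax else (0, 0) for a in (0, 1)]
            grad -= lam * np.diff(np.pad(t, pad), axis=ax)
        x = np.clip(x - step * grad, 1.0, None)
    return x
```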


Subjects
Algorithms; Tomography, X-Ray Computed/methods; Computer Simulation; Humans; Poisson Distribution; Radiation Doses; Signal-To-Noise Ratio
3.
Sensors (Basel) ; 16(2): 201, 2016 Feb 05.
Article in English | MEDLINE | ID: mdl-26861335

ABSTRACT

This paper proposes a compressive sensing (CS) method for multi-channel electroencephalogram (EEG) signals in Wireless Body Area Network (WBAN) applications, where the battery life of sensors is limited. For the single-EEG-channel case, known as the single measurement vector (SMV) problem, the Block Sparse Bayesian Learning with Bound Optimization (BSBL-BO) method has been shown to yield good results. This method exploits the block sparsity and the intra-correlation (i.e., the linear dependency) within the measurement vector of a single channel. For the multi-channel case, known as the multiple measurement vector (MMV) problem, the Spatio-Temporal Sparse Bayesian Learning (STSBL-EM) method has been proposed. This method learns the joint correlation structure in the multi-channel signals by whitening the model in the temporal and spatial domains. Our proposed method represents the multi-channel signal data as a vector that is constructed in a specific way, so that it has a better block-sparsity structure than the conventional representation obtained by stacking the measurement vectors of the different channels. To reconstruct the multi-channel EEG signals, we modify the parameters of the BSBL-BO algorithm so that it can exploit not only the linear but also the non-linear dependency structures in a vector. The modified BSBL-BO is then applied to the vector with the better sparsity structure. The proposed method is shown to significantly outperform existing SMV and MMV methods. It also shows significantly lower compression errors, even at high compression ratios such as 10:1, on three different datasets.
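A small sketch of the data-restructuring step under one plausible reading (the paper's exact construction may differ): interleaving samples across channels places the highly correlated per-instant values in contiguous blocks:

```python
import numpy as np

def stacked(X):
    """Conventional representation: channel vectors concatenated end to end.
    X has shape (channels, samples)."""
    return X.reshape(-1)

def interleaved(X):
    """Interleaved representation: all channel values for sample 0, then
    sample 1, ... The correlated per-instant values form blocks of length
    equal to the number of channels."""
    return X.T.reshape(-1)
```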

4.
Biomed Eng Online ; 14: 96, 2015 Oct 24.
Article in English | MEDLINE | ID: mdl-26499452

ABSTRACT

BACKGROUND: Cervical cancer remains a major health problem, especially in developing countries. Colposcopic examination is used to detect high-grade lesions in patients with a history of abnormal Pap smears. New technologies are needed to improve the sensitivity and specificity of this technique. We propose to test the potential of fluorescence confocal microscopy to identify high-grade lesions. METHODS: We examined the quantification of ex vivo confocal fluorescence microscopy to differentiate among normal cervical tissue, low-grade Cervical Intraepithelial Neoplasia (CIN), and high-grade CIN. We sought to (1) quantify nuclear morphology and tissue architecture features by analyzing images of cervical biopsies; and (2) determine the accuracy of high-grade CIN detection via confocal microscopy relative to the accuracy of detection by colposcopic impression. Forty-six biopsies obtained from colposcopically normal and abnormal cervical sites were evaluated. Confocal images were acquired at different depths from the epithelial surface, and histological images were analyzed using in-house software. RESULTS: The features calculated from the confocal images compared well with those obtained from the histological images and the histopathological reviews of the specimens (performed by a gynecologic pathologist). The correlations between two of these features (the nuclear-cytoplasmic ratio and the average distance to the three nearest Delaunay neighbors) and the grade of dysplasia were higher than that of colposcopic impression. The sensitivities of detecting high-grade dysplasia by analyzing images collected at the epithelial surface, and at 15 and 30 µm below it, were 100%, 100%, and 92%, respectively. CONCLUSIONS: Quantitative analysis of confocal fluorescence images demonstrated the capacity to discriminate high-grade CIN lesions from low-grade CIN lesions and normal tissue at different imaging depths. This approach could help clinicians identify high-grade CIN in clinical settings.
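As an illustration, the Delaunay-neighbor feature can be computed from nucleus centroids with a few lines of Python (the exact feature definition in the paper's in-house software is assumed here):

```python
import numpy as np
from scipy.spatial import Delaunay

def mean_three_nearest_delaunay(centroids):
    """For each nucleus centroid (rows of an (n, 2) array), average the
    distances to its three nearest Delaunay neighbours."""
    tri = Delaunay(centroids)
    indptr, indices = tri.vertex_neighbor_vertices
    feats = []
    for i in range(len(centroids)):
        nbrs = indices[indptr[i]:indptr[i + 1]]
        dists = np.sort(np.linalg.norm(centroids[nbrs] - centroids[i], axis=1))
        feats.append(dists[:3].mean())
    return np.asarray(feats)
```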


Subjects
Microscopy, Confocal/methods; Microscopy, Fluorescence/methods; Uterine Cervical Dysplasia/diagnosis; Uterine Cervical Neoplasms/diagnosis; Adult; Colposcopy; Female; Humans; Middle Aged; Neoplasm Grading; Phenotype; Uterine Cervical Neoplasms/pathology; Young Adult; Uterine Cervical Dysplasia/pathology
5.
Sensors (Basel) ; 14(9): 15729-48, 2014 Aug 25.
Article in English | MEDLINE | ID: mdl-25157551

ABSTRACT

We address the problem of acquiring and transmitting EEG signals in Wireless Body Area Networks (WBANs) in an energy-efficient fashion. In WBANs, energy is consumed by three operations: sensing (sampling), processing, and transmission. Previous studies only addressed the problem of reducing the transmission energy. In this work, we propose, for the first time, a technique that reduces the sensing and processing energy as well: this is achieved by randomly under-sampling the EEG signal. We depart from previous compressed-sensing-based approaches and formulate signal recovery (from under-sampled measurements) as a matrix completion problem. A new algorithm to solve the matrix completion problem is derived here. We test the proposed method and find that its reconstruction accuracy is significantly better than that of state-of-the-art techniques, while saving sensing, processing, and transmission energy. A simple power analysis shows that the proposed methodology consumes considerably less power than previous CS-based techniques.
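A generic Python sketch of matrix completion by iterative singular-value thresholding is shown below; the paper derives its own, different algorithm, so this is only a stand-in for the recovery step:

```python
import numpy as np

def svt_complete(Y, mask, tau=5.0, n_iter=100):
    """Recover a low-rank matrix from the observed entries Y[mask].
    mask is a 0/1 array marking which samples were actually sensed."""
    X = np.zeros(Y.shape)
    for _ in range(n_iter):
        # Re-impose the observed entries, then shrink singular values.
        U, s, Vt = np.linalg.svd(X + mask * (Y - X), full_matrices=False)
        s = np.maximum(s - tau, 0.0)
        X = (U * s) @ Vt
    return X
```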


Subjects
Algorithms; Computer Communication Networks/instrumentation; Data Compression/methods; Electric Power Supplies; Electroencephalography/instrumentation; Monitoring, Ambulatory/instrumentation; Wireless Technology/instrumentation; Electroencephalography/methods; Energy Transfer; Equipment Design; Equipment Failure Analysis
6.
Sensors (Basel) ; 14(2): 2036-51, 2014 Jan 24.
Article in English | MEDLINE | ID: mdl-24469356

ABSTRACT

The emergence of wireless sensor networks (WSNs) has motivated a paradigm shift in patient monitoring and disease control. Epilepsy management is one of the areas that could especially benefit from the use of WSNs. By using miniaturized wireless electroencephalogram (EEG) sensors, it is possible to perform ambulatory EEG recording and real-time seizure detection outside clinical settings. One major consideration in using such a wireless EEG-based system is the stringent battery energy constraint at the sensor side. Different solutions to reduce the power consumption at this side are therefore highly desired. The conventional approach incurs a high power consumption, as it transmits the entire EEG signal wirelessly to an external data server (where seizure detection is carried out). This paper examines the use of data reduction techniques for reducing the amount of data that has to be transmitted and, thereby, the required power consumption at the sensor side. Two data reduction approaches are examined: compressive sensing-based EEG compression and low-complexity feature extraction. Their performance is evaluated in terms of seizure detection effectiveness and power consumption. Experimental results show that performing low-complexity feature extraction at the sensor side and transmitting only the features that are pertinent to seizure detection to the server yields a considerable overall power saving. The battery life of the system is increased 14-fold, while the same seizure detection rate as the conventional approach (95%) is maintained.
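A sketch of the sensor-side idea: compute a handful of cheap features per EEG epoch and transmit only those numbers instead of the raw samples. The feature set shown is illustrative, not the paper's exact choice:

```python
import numpy as np

def sensor_side_features(epoch):
    """Low-complexity features computed on the sensor for one EEG epoch;
    only these few values are sent to the server for seizure detection."""
    return np.array([
        np.sum(np.abs(np.diff(epoch))),   # line length
        np.mean(epoch ** 2),              # average energy
        np.var(epoch),                    # variance
    ])
```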


Subjects
Seizures/diagnosis; Ambulatory Care; Electroencephalography; Humans; Miniaturization; Seizures/prevention & control; Wireless Technology
7.
Sensors (Basel) ; 14(1): 1474-96, 2014 Jan 15.
Article in English | MEDLINE | ID: mdl-24434840

ABSTRACT

The use of wireless body sensor networks is gaining popularity for monitoring and communicating information about a person's health. In such applications, the amount of data transmitted by the sensor node should be minimized, because the energy available in these battery-powered sensors is limited. In this paper, we study the wireless transmission of electroencephalogram (EEG) signals. We propose a compressed sensing (CS) framework to efficiently compress these signals at the sensor node. Our framework exploits both the temporal correlation within EEG signals and the spatial correlations amongst the EEG channels. We show that our framework is up to eight times more energy-efficient than the typical wavelet compression method in terms of compression and encoding computations and wireless transmission. We also show that, for a fixed compression ratio, our method achieves a better reconstruction quality than the state-of-the-art CS-based method. We finally demonstrate that our method is robust to measurement noise and packet loss, and that it is applicable to a wide range of EEG signal types.
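For illustration, a sparse binary sensing matrix keeps the sensor-side encoding down to a few additions per compressed sample; this sketch shows a generic CS encoding step, not the paper's specific framework:

```python
import numpy as np

def sparse_sensing_matrix(m, n, d=4, seed=0):
    """Sparse binary sensing matrix with d ones per column, so computing
    y = phi @ x costs only a few additions per compressed sample."""
    rng = np.random.default_rng(seed)
    phi = np.zeros((m, n))
    for j in range(n):
        phi[rng.choice(m, size=d, replace=False), j] = 1.0
    return phi

# Example: compress a 256-sample EEG block x to 64 measurements.
# y = sparse_sensing_matrix(64, 256) @ x
```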


Subjects
Data Compression/methods; Electroencephalography/methods; Algorithms; Signal Processing, Computer-Assisted
8.
Sensors (Basel) ; 14(10): 18370-89, 2014 Oct 01.
Article in English | MEDLINE | ID: mdl-25275348

ABSTRACT

Electroencephalogram (EEG) recordings are often contaminated with muscular artifacts that strongly obscure the EEG signals and complicate their analysis. For the conventional case, where the EEG recordings are obtained simultaneously over many EEG channels, a considerable range of methods exists for removing muscular artifacts. In recent years, there has been an increasing trend to use EEG information in ambulatory healthcare and related physiological signal monitoring systems. For practical reasons, a single-EEG-channel system must be used in these situations. Unfortunately, few studies address muscular artifact cancellation in single-channel EEG recordings. To address this issue, in this preliminary study, we propose a simple yet effective method for muscular artifact cancellation in the single-channel EEG case. This method combines the ensemble empirical mode decomposition (EEMD) and joint blind source separation (JBSS) techniques. We also conduct a study that compares and investigates all possible single-channel solutions, and we demonstrate the performance of these methods using numerical simulations and real-life applications. The proposed method is shown to significantly outperform all other methods. It can successfully remove muscular artifacts without altering the underlying EEG activity. It is thus a promising tool for use in ambulatory healthcare systems.
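A hedged Python sketch of the pipeline, with scikit-learn's FastICA standing in for the JBSS step and a placeholder rule for flagging muscular components (the paper's component-selection criterion is not reproduced here):

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA
from PyEMD import EEMD                     # pip install EMD-signal

def remove_muscle_artifact(eeg):
    """Single-channel sketch: expand the channel into IMFs with EEMD,
    unmix them with ICA (a simple stand-in for JBSS), zero the
    artifact-like components, and reconstruct the cleaned signal."""
    imfs = EEMD().eemd(eeg)                          # (n_imfs, n_samples)
    ica = FastICA(n_components=imfs.shape[0], random_state=0)
    srcs = ica.fit_transform(imfs.T)                 # (n_samples, n_comp)
    bad = kurtosis(srcs, axis=0) > 5                 # placeholder heuristic
    srcs[:, bad] = 0.0
    return ica.inverse_transform(srcs).T.sum(axis=0)
```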


Subjects
Artifacts; Electroencephalography/methods; Signal Processing, Computer-Assisted; Humans; Muscles/physiology
9.
Sensors (Basel) ; 13(3): 3902-21, 2013 Mar 20.
Article in English | MEDLINE | ID: mdl-23519348

ABSTRACT

This work addresses the problem of recovering multi-echo T1- or T2-weighted images from their partial K-space scans. Recent studies have shown that the best results are obtained when all the multi-echo images are reconstructed by simultaneously exploiting their intra-image spatial redundancy and inter-echo correlation. The aforesaid studies either stack the vectorised images (formed by row or column concatenation) as columns of a Multiple Measurement Vector (MMV) matrix or concatenate them as one long vector. Owing to the inter-image correlation, the resulting MMV matrix or long concatenated vector is row-sparse or group-sparse, respectively, in a transform domain (wavelets). Consequently, the reconstruction problem was formulated as a row-sparse MMV recovery or a group-sparse vector recovery. In this work, we show that when the multi-echo images are arranged in the MMV form, the resulting matrix is also low-rank. We show that better reconstruction accuracy can be obtained when the information about rank deficiency is incorporated into the row-/group-sparse recovery problem. Mathematically, this leads to a constrained optimization problem in which the objective function promotes the signal's group-sparsity as well as its rank deficiency, and is minimized subject to data fidelity constraints. The experiments were carried out on ex vivo and in vivo T2-weighted images of a rat's spinal cord. Results show that this method yields considerably better results than state-of-the-art reconstruction techniques.
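The two penalties correspond to two standard proximal operators, sketched below; the paper's constrained solver interleaves such steps with data-fidelity projections, so this is only an outline of the building blocks:

```python
import numpy as np

def prox_row_sparse(W, lam):
    """Row-wise soft thresholding: promotes row-sparsity of the MMV
    matrix of wavelet coefficients (each row shared across echoes)."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return W * scale

def prox_low_rank(X, tau):
    """Singular-value soft thresholding: promotes rank deficiency of the
    multi-echo image matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```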


Subjects
Diagnostic Imaging; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Models, Theoretical; Algorithms; Animals; Brain/diagnostic imaging; Humans; Image Enhancement; Radiography; Rats; Rats, Sprague-Dawley
10.
Sensors (Basel) ; 13(12): 16714-35, 2013 Dec 05.
Article in English | MEDLINE | ID: mdl-24316569

ABSTRACT

State-of-the-art parallel MRI techniques either explicitly or implicitly require certain parameters to be estimated, e.g., the sensitivity maps for SENSE and SMASH, or interpolation weights for GRAPPA and SPIRiT. All of these techniques are therefore sensitive to the calibration (parameter estimation) stage. In this work, we propose a parallel MRI technique that does not require any calibration but yields reconstruction results that are on par with (or even better than) state-of-the-art methods in parallel MRI. Our proposed method requires solving non-convex analysis- and synthesis-prior joint-sparsity problems, and this work also derives the algorithms for solving them. Experimental validation was carried out on two datasets: an eight-channel brain scan and an eight-channel Shepp-Logan phantom. Two sampling schemes were used: variable-density random sampling and non-Cartesian radial sampling. An acceleration factor of 4 was used for the brain data and a factor of 6 for the phantom. The reconstruction results were quantitatively evaluated using the Normalised Mean Squared Error (NMSE) between the reconstructed image and the original; the qualitative evaluation was based on the reconstructed images themselves. We compared our work with four state-of-the-art parallel imaging techniques: two calibrated methods (CS SENSE and l1SPIRiT) and two calibration-free techniques (Distributed CS and SAKE). Our method yields better reconstruction results than all of them.
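For reference, the quantitative criterion can be computed as follows (a plain NMSE; the paper's exact normalization is assumed):

```python
import numpy as np

def nmse(recon, ref):
    """Normalised mean squared error between a reconstruction and its
    reference, as used for the quantitative comparison."""
    return np.sum(np.abs(recon - ref) ** 2) / np.sum(np.abs(ref) ** 2)
```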


Subjects
Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Brain/physiology; Calibration; Models, Theoretical
11.
Bioengineering (Basel) ; 10(6), 2023 May 23.
Article in English | MEDLINE | ID: mdl-37370561

ABSTRACT

Electrocardiograms (ECGs) provide crucial information for evaluating a patient's cardiovascular health; however, they are not always easily accessible. Photoplethysmography (PPG), a technology commonly used in wearable devices such as smartwatches, has shown promise for constructing ECGs. Several methods have been proposed for ECG reconstruction using PPG signals, but some require signal alignment during the training phase, which is not feasible in real-life settings where ECG signals are not collected at the same time as PPG signals. To address this challenge, we introduce PPG2ECGps, an end-to-end, patient-specific deep-learning neural network based on the W-Net architecture. This novel approach enables direct ECG signal reconstruction from PPG signals, eliminating the need for signal alignment. Our experiments show that the proposed model achieves a mean Pearson's correlation coefficient of 0.977, a mean root mean square error of 0.037 mV, and a mean normalized dynamic time-warped distance of 0.010 when comparing reconstructed ECGs to reference ECGs from a dataset of 500 records. As PPG signals are more accessible than ECG signals, our proposed model has significant potential to improve patient monitoring and diagnosis in healthcare settings via wearable devices.
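For illustration, the three reported similarity measures can be computed as follows (a plain, unnormalized DTW is shown; the paper reports a normalized variant):

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two equal-length signals."""
    return np.sqrt(np.mean((a - b) ** 2))

def pearson(a, b):
    """Pearson's correlation coefficient (dimensionless)."""
    return np.corrcoef(a, b)[0, 1]

def dtw_distance(a, b):
    """Plain O(len(a)*len(b)) dynamic-time-warping distance."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            c = abs(a[i - 1] - b[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]
```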

12.
Med Image Anal ; 89: 102871, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37480795

ABSTRACT

Motor dysfunction in Parkinson's Disease (PD) patients is typically assessed by clinicians using the Movement Disorder Society's Unified Parkinson's Disease Rating Scale (MDS-UPDRS). Such comprehensive clinical assessments are time-consuming, expensive, semi-subjective, and may result in conflicting labels across different raters. To address this problem, we propose an automatic, objective, and weakly-supervised method for labeling PD patients' gait videos. The proposed method accepts videos of patients and classifies their gait scores as normal (MDS-UPDRS gait score = 0) or PD (MDS-UPDRS gait score ≥ 1). Unlike previous work, the proposed method does not require a priori MDS-UPDRS ratings for training, utilizing only domain-specific knowledge obtained from neurologists. We propose several labeling functions that classify patients' gait and use a generative model to learn the accuracy of each labeling function in a self-supervised manner. Since the results depend on the estimated values of the patients' 3D poses, and existing pre-trained 3D pose estimators did not yield accurate results, we also propose a weakly-supervised 3D human pose estimation method for fine-tuning pre-trained models in a clinical setting. Using leave-one-out evaluation, the proposed method obtains an accuracy of 89% on a dataset of 29 PD subjects, an improvement of 7-10 percentage points over previous work, depending on the dataset. The method also obtained state-of-the-art results on the Human3.6M dataset. Our results suggest that labeling functions may provide a robust means to interpret and classify patient-oriented videos involving motor tasks.
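A minimal sketch of the labeling-function idea in Python, with hypothetical gait features and thresholds, and a majority vote standing in for the learned generative model:

```python
ABSTAIN, NORMAL, PD = -1, 0, 1

def lf_step_length(gait):
    """Short steps suggest parkinsonian gait. `gait` is a dict of
    hypothetical features derived from the 3D poses; the 0.35 m
    threshold is illustrative, not the paper's."""
    return PD if gait["step_len_m"] < 0.35 else ABSTAIN

def lf_arm_swing(gait):
    """Reduced arm-swing amplitude suggests PD (threshold illustrative)."""
    return PD if gait["arm_swing_deg"] < 10 else NORMAL

def majority_vote(lfs, gait):
    """Stand-in for the generative model that learns each labeling
    function's accuracy: here, a plain majority vote over non-abstentions."""
    votes = [v for v in (lf(gait) for lf in lfs) if v != ABSTAIN]
    return PD if sum(votes) > len(votes) / 2 else NORMAL
```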


Subjects
Parkinson Disease; Humans; Gait; Learning
13.
J Clin Med ; 12(14), 2023 Jul 19.
Article in English | MEDLINE | ID: mdl-37510889

ABSTRACT

Aortic valve defects are among the most prevalent clinical conditions. A severely damaged or non-functioning aortic valve is commonly replaced with a bioprosthetic heart valve (BHV) via the transcatheter aortic valve replacement (TAVR) procedure. Accurate pre-operative planning is crucial for a successful TAVR outcome. Computational fluid dynamics (CFD), finite element analysis (FEA), and fluid-structure interaction (FSI) analysis offer a solution that has been increasingly utilized to evaluate BHV mechanics and dynamics. However, the high computational costs and complex operation of computational modeling hinder its application. Recent advancements in the deep learning (DL) domain can offer a real-time surrogate that can render hemodynamic parameters in a few seconds, thus guiding clinicians toward the optimal treatment option. Herein, we provide a comprehensive review of classical computational modeling approaches, medical imaging, and DL approaches for planning and outcome assessment of TAVR. In particular, we focus on DL approaches in previous studies, highlighting the datasets utilized, the DL models deployed, and the results achieved. We emphasize the critical challenges and recommend several future directions for innovative researchers to tackle. Finally, an end-to-end smart DL framework is outlined for real-time assessment and recommendation of the best BHV design for TAVR. Ultimately, deploying such a framework in future studies will support clinicians in minimizing risks during TAVR therapy planning and help improve patient care.

14.
J Neuroeng Rehabil ; 9: 50, 2012 Jul 27.
Article in English | MEDLINE | ID: mdl-22838499

ABSTRACT

BACKGROUND: A novel artefact removal algorithm is proposed for a self-paced hybrid brain-computer interface (BCI) system. This hybrid system combines a self-paced BCI with an eye-tracker to operate a virtual keyboard. To select a letter, the user must gaze at the target for at least a specific period of time (the dwell time) and then activate the BCI by performing a mental task. Unfortunately, electroencephalogram (EEG) signals are often contaminated with artefacts, which degrade the quality of the EEG signals and, subsequently, the BCI's performance. METHODS: To remove artefacts from the EEG signals, the proposed algorithm uses the stationary wavelet transform combined with a new adaptive thresholding mechanism. To evaluate the performance of the proposed algorithm and other artefact handling/removal methods, semi-simulated EEG signals (i.e., real EEG signals mixed with simulated artefacts) and real EEG signals obtained from seven participants are used. For real EEG signals, the hybrid BCI system's performance is evaluated in an online-like manner, i.e., using the continuous data from the last session as in a real-time environment. RESULTS: With semi-simulated EEG signals, we show that the proposed algorithm achieves lower signal distortion in both the time and frequency domains. With real EEG signals, we demonstrate that for a dwell time of 0.0 s, the number of false positives per minute is 2 and the true positive rate (TPR) achieved by the proposed algorithm is 44.7%, more than 15.0% higher than that of other state-of-the-art artefact handling methods. As the dwell time increases to 1.0 s, the TPR increases to 73.1%. CONCLUSIONS: The proposed artefact removal algorithm greatly improves the BCI's performance. It also has the following advantages: (a) it does not require additional electrooculogram/electromyogram channels, long data segments, or a large number of EEG channels; (b) it allows real-time processing; and (c) it reduces signal distortion.
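A simplified Python sketch of the SWT-plus-thresholding step using PyWavelets; the universal threshold below is a placeholder for the paper's adaptive mechanism:

```python
import numpy as np
import pywt

def swt_denoise(eeg, wavelet="db4", level=4):
    """Suppress artefacts with the stationary wavelet transform and
    per-level soft thresholding, then invert the transform."""
    n = len(eeg) - len(eeg) % (2 ** level)     # swt needs a multiple of 2^level
    coeffs = pywt.swt(eeg[:n], wavelet, level=level)
    out = []
    for cA, cD in coeffs:
        sigma = np.median(np.abs(cD)) / 0.6745          # robust noise estimate
        thr = sigma * np.sqrt(2 * np.log(len(cD)))      # universal threshold
        out.append((cA, pywt.threshold(cD, thr, mode="soft")))
    return pywt.iswt(out, wavelet)
```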


Subjects
Algorithms; Artifacts; Brain-Computer Interfaces; Data Interpretation, Statistical; Electroencephalography/instrumentation; Electroencephalography/methods; Electromyography; Electrooculography; Equipment Design; Eye Movements/physiology; Female; Humans; Male; Regression Analysis; Reproducibility of Results; Signal Processing, Computer-Assisted; User-Computer Interface; Wavelet Analysis; Young Adult
15.
Biomedicines ; 10(7), 2022 Jun 29.
Article in English | MEDLINE | ID: mdl-35884859

ABSTRACT

Epilepsy is a neurological disorder that causes recurrent seizures and sometimes loss of awareness. Around 30% of epileptic patients continue to have seizures despite taking anti-seizure medication. The ability to predict the future occurrence of seizures would enable patients to take precautions against probable injuries and to receive timely treatment to abort or control impending seizures. In this study, we introduce a Transformer-based approach called the Multi-channel Vision Transformer (MViT) for automated and simultaneous learning of the spatio-temporal-spectral features in multi-channel EEG data. The continuous wavelet transform, a simple yet efficient pre-processing approach, is first used to turn the time-series EEG signals into image-like time-frequency representations known as scalograms. Each scalogram is split into a sequence of fixed-size, non-overlapping patches, which are then fed as inputs to the MViT for EEG classification. Extensive experiments on three benchmark EEG datasets demonstrate the superiority of the proposed MViT algorithm over state-of-the-art seizure prediction methods, achieving an average prediction sensitivity of 99.80% for surface EEG and 90.28-91.15% for invasive EEG data.
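A short Python sketch of the pre-processing pipeline, from one EEG channel to a sequence of scalogram patches (the wavelet choice, scale range, and patch size are illustrative assumptions):

```python
import numpy as np
import pywt

def scalogram_patches(eeg, fs, patch=16):
    """Turn one EEG channel into a scalogram via the continuous wavelet
    transform, then split it into fixed-size non-overlapping patches,
    as fed to a vision transformer."""
    scales = np.arange(1, 129)
    coefs, _ = pywt.cwt(eeg, scales, "morl", sampling_period=1.0 / fs)
    S = np.abs(coefs)                              # (scales, time) scalogram
    H = (S.shape[0] // patch) * patch              # crop to patch multiples
    W = (S.shape[1] // patch) * patch
    S = S[:H, :W]
    return (S.reshape(H // patch, patch, W // patch, patch)
             .transpose(0, 2, 1, 3)
             .reshape(-1, patch, patch))           # sequence of patches
```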

16.
Bioengineering (Basel) ; 9(8), 2022 Aug 18.
Article in English | MEDLINE | ID: mdl-36004927

ABSTRACT

The continuous prediction of arterial blood pressure (ABP) waveforms via non-invasive methods is of great significance for the prevention and treatment of cardiovascular disease. Photoplethysmography (PPG) can be used to reconstruct ABP signals because the two signals share the same excitation source and are highly similar. Existing methods of reconstructing ABP signals from PPG focus only on the similarities between the systolic, diastolic, and mean arterial pressures, without evaluating the global similarity of the waveforms. This paper proposes a deep learning model with a W-Net architecture to reconstruct ABP signals from PPG. The W-Net consists of two concatenated U-Net architectures, the first acting as an encoder and the second as a decoder that reconstructs ABP from PPG. Five hundred records of different lengths were used for training and testing. The experimental results yielded high values for the similarity measures between the reconstructed ABP signals and their reference ABP signals: the Pearson correlation, root mean square error, and normalized dynamic time warping distance were, on average, 0.995, 2.236 mmHg, and 0.612 mmHg, respectively. The mean absolute errors of the SBP and DBP were 2.602 mmHg and 1.450 mmHg on average, respectively. The model can therefore reconstruct ABP signals that are highly similar to the reference ABP signals.
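A compact PyTorch sketch of the W-Net idea, two concatenated U-Nets applied end to end (the layer sizes and depths here are placeholders, much smaller than the paper's networks):

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """A deliberately small 1-D U-Net with a single skip connection."""
    def __init__(self, ch=1):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv1d(ch, 16, 9, padding=4), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv1d(16, 32, 9, padding=4), nn.ReLU())
        self.pool = nn.MaxPool1d(2)
        self.up = nn.Upsample(scale_factor=2)
        self.dec = nn.Sequential(nn.Conv1d(32 + 16, 16, 9, padding=4), nn.ReLU(),
                                 nn.Conv1d(16, ch, 9, padding=4))

    def forward(self, x):                     # x: (batch, ch, length), even length
        e1 = self.enc1(x)                     # skip-connection source
        e2 = self.enc2(self.pool(e1))
        d = self.up(e2)
        return self.dec(torch.cat([d, e1], dim=1))

class WNet(nn.Module):
    """W-Net: two concatenated U-Nets, the first acting as an encoder and
    the second as a decoder, mapping a PPG window to an ABP window."""
    def __init__(self):
        super().__init__()
        self.u1, self.u2 = TinyUNet(), TinyUNet()

    def forward(self, ppg):
        return self.u2(self.u1(ppg))
```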

17.
Commun Med (Lond) ; 2: 59, 2022.
Article in English | MEDLINE | ID: mdl-35637660

ABSTRACT

Inaccuracies have been reported in pulse oximetry measurements taken from people who identified as Black. Here, we identify substantial ethnic disparities in the population numbers within 12 pulse oximetry databases, which may affect the testing of new oximetry devices and impact patient outcomes.

18.
Front Physiol ; 13: 859763, 2022.
Article in English | MEDLINE | ID: mdl-35547575

ABSTRACT

Electrocardiography and photoplethysmography are non-invasive techniques that measure signals from the cardiovascular system. While the cycles of the two measurements are highly correlated, the correlation between the waveforms has rarely been studied. Measuring the photoplethysmogram (PPG) is much easier and more convenient than measuring the electrocardiogram (ECG). Recent research has shown that PPG can be used to reconstruct the ECG, indicating that practitioners can gain a deep understanding of a patient's cardiovascular health using two physiological signals (PPG and ECG) while measuring only PPG. This study proposes a subject-based deep learning model, built on the bidirectional long short-term memory architecture, that reconstructs an ECG from a PPG. Because the ECG waveform may vary from subject to subject, the model is subject-specific. It was tested using 100 records from the MIMIC III database, 50 of which were from patients with a circulatory disease. The results show that a long ECG signal can be effectively reconstructed from PPG, which is, to our knowledge, the first attempt in this field. A length of 228 s of ECG was constructed by the model, which was trained and validated using 60 s of PPG and ECG signals. To segment the data, a different approach was investigated: splitting the data into short time segments of equal length that do not rely on beats or beat detection. Segmenting the PPG and ECG time series into equal segments of 1-min width gave the optimal results: a high Pearson's correlation coefficient of 0.818 between the reconstructed 228 s of ECG and the reference ECG, a root mean square error of only 0.083 mV, and a dynamic time warping distance of 2.12 mV per second on average.
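The beat-free segmentation step is simple to sketch (assuming aligned, equally sampled PPG and ECG at sampling rate fs):

```python
import numpy as np

def one_minute_segments(ppg, ecg, fs):
    """Split aligned PPG/ECG series into non-overlapping 1-minute windows
    of equal length, with no beat detection involved."""
    w = 60 * fs                        # samples per 1-min window
    n = (len(ppg) // w) * w            # drop the incomplete tail
    return ppg[:n].reshape(-1, w), ecg[:n].reshape(-1, w)
```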

19.
J Healthc Eng ; 2022: 1573076, 2022.
Article in English | MEDLINE | ID: mdl-35126902

ABSTRACT

Early prediction of epileptic seizures can warn patients to take precautions and improve their lives significantly. In recent years, deep learning has become increasingly predominant in seizure prediction. However, existing deep learning-based approaches in this field require a great deal of labeled data to guarantee performance, while labeling EEG signals requires the expertise of an experienced pathologist and is incredibly time-consuming. To address this issue, we propose a novel Consistency-based Semi-supervised Seizure Prediction Model (CSSPM), in which only a fraction of the training data is labeled. Our method is based on the principle of consistency regularization, which holds that a robust model should produce consistent results for the same input under additional perturbations. Specifically, by using stochastic augmentation and dropout, we treat the entire neural network as a stochastic model and apply a consistency constraint to penalize the difference between the current prediction and previous predictions. In this way, unlabeled data can be fully utilized to improve the decision boundary and enhance prediction performance. Compared with existing studies requiring all training data to be labeled, the proposed method needs only a small portion of the data to be labeled while still achieving satisfactory results, providing a promising solution for alleviating the labeling cost in real-world applications.
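A simplified PyTorch sketch of the consistency term: here the same unlabeled batch is passed twice through the stochastic network, whereas the paper compares the current prediction against stored previous predictions:

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x_unlabeled, noise_std=0.1):
    """Penalize disagreement between two stochastic forward passes over
    the same unlabeled EEG batch (dropout active, plus input noise)."""
    model.train()                                  # keep dropout stochastic
    p1 = model(x_unlabeled + noise_std * torch.randn_like(x_unlabeled))
    with torch.no_grad():                          # second pass as the target
        p2 = model(x_unlabeled + noise_std * torch.randn_like(x_unlabeled))
    return F.mse_loss(torch.softmax(p1, dim=1), torch.softmax(p2, dim=1))
```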


Subjects
Epilepsy; Scalp; Electroencephalography/methods; Humans; Neural Networks, Computer; Seizures/diagnosis
20.
IEEE Trans Image Process ; 30: 6701-6714, 2021.
Article in English | MEDLINE | ID: mdl-34283715

ABSTRACT

With the great success of convolutional neural networks (CNNs), interpreting their internal mechanisms has become increasingly important, yet the network decision-making logic remains an open issue. In the bottom-up hierarchical logic of neuroscience, a decision-making process can be deduced from a series of sub-decision-making processes from low to high levels. Inspired by this, we propose the Concept-harmonized HierArchical INference (CHAIN) interpretation scheme. In CHAIN, a network's decision-making process from shallow to deep layers is interpreted by hierarchical backward inference based on visual concepts from high to low semantic levels. First, we learn a general hierarchical visual-concept representation in the CNN's layered feature space using a concept-harmonizing model trained on a large concept dataset. Second, to interpret a specific network decision-making process, we conduct the concept-harmonized hierarchical inference backward from the highest to the lowest semantic level: the network's learning of a target concept at a deeper layer is disassembled into its learning of concepts at shallower layers. Finally, a specific network decision-making process is explained as a form of concept-harmonized hierarchical inference, which is intuitively comparable to bottom-up hierarchical visual recognition. Quantitative and qualitative experiments demonstrate the effectiveness of the proposed CHAIN at both the instance and class levels.
