Results 1 - 20 of 50,102
1.
Ultrasonics ; 128: 106882, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36402116

ABSTRACT

We investigate the role of leaky guided waves in transcranial ultrasound transmission in temporal and parietal bones at large incidence angles. Our numerical and experimental results show that the dispersion characteristics of the fundamental leaky guided wave mode with longitudinal polarization can be leveraged to estimate the critical angle above which efficient shear mode conversion takes place, and below which major transmission drops can be expected. Simulations that employ a numerical propagator matrix and a Semi-Analytical approach establish the transcranial dispersion characteristics and transmission coefficients at different incident angles. Experimental transmission tests conducted at 500 kHz and radiation tests performed in the 200-800 kHz range confirm the numerical findings in terms of transmitted peak pressure and frequency-radiation angle spectra, based on which the connection between critical angles, dispersion and transmission is demonstrated. Our results support the identification of transcranial ultrasound strategies that leverage shear mode conversion, which is less sensitive to phase aberrations compared to normal incidence ultrasound. These findings can also enable higher transmission rates in cranial bones with low porosity by leveraging dispersion information extracted through signal processing, without requiring measurement of geometric and mechanical properties of the cranial bone.
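As a rough illustration of the critical-angle relation discussed above (and not the authors' propagator-matrix or semi-analytical code), the sketch below applies Snell's law to placeholder wave speeds; all numerical values are assumptions.

```python
import numpy as np

# Minimal critical-angle sketch based on Snell's law (assumed placeholder speeds, not the paper's values).
c_water = 1480.0        # m/s, speed of sound in the coupling fluid (assumed)
c_bone_long = 2800.0    # m/s, longitudinal speed in cranial bone (assumed)
c_bone_shear = 1700.0   # m/s, shear speed in cranial bone (assumed)

def critical_angle_deg(c_incident, c_refracted):
    """Incidence angle above which the refracted wave becomes evanescent."""
    ratio = c_incident / c_refracted
    return np.degrees(np.arcsin(ratio)) if ratio < 1.0 else None

print("Longitudinal critical angle:", critical_angle_deg(c_water, c_bone_long))
print("Shear critical angle:", critical_angle_deg(c_water, c_bone_shear))
```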


Subjects
Signal Processing, Computer-Assisted; Skull; Skull/diagnostic imaging; Ultrasonography; Porosity
2.
Brain Res ; 1798: 148131, 2023 Jan 01.
Article in English | MEDLINE | ID: mdl-36328069

ABSTRACT

Epilepsy detection is essential for patients with epilepsy and their families, as well as for researchers and medical staff. The electroencephalogram (EEG) is a fundamental tool for supporting the diagnosis of patients with epilepsy. Today, machine learning (ML) techniques are widely applied in neuroscience. The main objective of our study is to differentiate patients with idiopathic generalized epilepsy (IGE) from healthy controls by applying machine learning techniques to interictal electroencephalographic recordings. Our research predicts which patients have idiopathic generalized epilepsy from a scalp EEG study. In addition, this study focuses on applying the extreme gradient boosting (XGB) method to scalp EEG. XGB is a variant of gradient boosting and a supervised learning algorithm, developed to increase performance and processing speed. The proposed method attempts to recognize patterns in scalp EEG recordings that allow IGE to be detected with high accuracy and IGE patients to be differentiated from healthy controls, creating an additional tool to support clinicians in their decision-making. Among the ML methods applied, the proposed XGB method best captures the distinctive features in EEG signals from patients with IGE. XGB was 6.26% more accurate than the k-Nearest Neighbours method, and more accurate than the support vector machine (by 10.61%), decision tree (9.71%), and Gaussian Naïve Bayes (11.83%). The proposed XGB method also showed the highest area under the curve (AUC, 98%) and balanced accuracy (98.13%) of all methods tested. The application of ML techniques to the EEG of patients with epilepsy is very recent and is emerging with promising results. In this work, we showed the usefulness of ML techniques for identifying and predicting generalized epilepsy from healthy controls in scalp EEG studies. These findings could help develop automated tools that integrate these ML techniques to assist clinicians in differentiating patients with IGE from healthy controls in daily practice.
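As a hedged illustration of the kind of XGBoost-versus-baseline comparison described above, the following sketch trains an XGBClassifier on placeholder feature vectors; the synthetic data, feature dimensionality, and hyperparameters are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score, roc_auc_score
from xgboost import XGBClassifier

# Assumed placeholders: X holds per-recording interictal EEG features, y marks IGE vs. control.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 2, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05)
clf.fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)[:, 1]
print("Balanced accuracy:", balanced_accuracy_score(y_te, (proba > 0.5).astype(int)))
print("AUC:", roc_auc_score(y_te, proba))
```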


Subjects
Epilepsy, Generalized; Epilepsy; Humans; Signal Processing, Computer-Assisted; Scalp; Bayes Theorem; Electroencephalography/methods; Epilepsy, Generalized/diagnosis; Epilepsy/diagnosis; Machine Learning; Immunoglobulin E
3.
J Neurosci Methods ; 383: 109736, 2023 Jan 01.
Article in English | MEDLINE | ID: mdl-36349568

ABSTRACT

Brain-computer interfaces (BCIs) have achieved significant success in controlling external devices through electroencephalogram (EEG) signal processing. Motor imagery (MI) based BCI systems bridge the brain and external devices, serving as communication tools to control, for example, wheelchairs for people with disabilities, robots, and exoskeletons. This success largely depends on machine learning (ML) approaches such as deep learning (DL) models. DL algorithms provide effective and powerful models for optimally analyzing compact and complex EEG data in MI-BCI applications. DL models based on convolutional neural networks (CNNs) have revolutionized computer vision through end-to-end learning from raw data, while recurrent neural networks (RNNs) have been able to decode EEG signals by processing sequences of time-series data. However, many challenges in the MI-BCI field affect the performance of DL models. A major challenge is the individual differences in the EEG signals of different subjects: the model must be retrained from scratch for each new subject, which increases computational cost. Analyzing EEG signals is also challenging because of their low signal-to-noise ratio and non-stationary nature. Additionally, the limited size of existing datasets can lead to overfitting, which can be mitigated with transfer learning (TL) approaches. The main contributions of this study are to identify the major challenges in the MI-BCI field by reviewing state-of-the-art machine learning models and to suggest solutions to these challenges, focusing on feature selection, feature extraction, and classification methods.


Subjects
Brain-Computer Interfaces; Deep Learning; Humans; Electroencephalography/methods; Signal Processing, Computer-Assisted; Algorithms; Imagination
4.
Biosens Bioelectron ; 220: 114865, 2023 Jan 15.
Article in English | MEDLINE | ID: mdl-36368140

ABSTRACT

Classification and sorting of cells using image-activated cell sorting (IACS) systems can bring significant insight to the biomedical sciences. Incorporating deep learning algorithms into IACS enables cell classification and isolation based on complex morphological features, uninterpretable by human vision, within a heterogeneous cell population. However, the limited capabilities and complicated implementation of the deep learning-assisted IACS systems reported to date hinder their adoption for a wide range of biomedical research. Here, we present label-free image-activated cell sorting driven by fast deep learning algorithms. The overall sorting latency, including signal processing and AI inference, is less than 3 ms, and the training time for the deep learning model is less than 30 min with a training dataset of 20,000 images. Both values set the record for IACS with sorting by AI inference. We demonstrate the system performance through a two-part polystyrene bead sorting experiment with 96.6% sorting purity, and a three-part human leukocyte sorting experiment with 89.05% sorting purity for monocytes, 92.00% for lymphocytes, and 98.24% for granulocytes. This performance was achieved with simple hardware containing only one FPGA and one PC with a GPU, thanks to an optimized custom CNN UNet and efficient use of computing power. The system provides a compact, sterile, low-cost, label-free, and low-latency cell sorting solution based on real-time AI inference and fast training of the deep learning model.
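A minimal sketch, assuming a PyTorch environment, of how per-image inference latency might be measured for a small CNN classifier; the toy network, image size, and class count are placeholders and not the authors' custom CNN UNet or FPGA pipeline.

```python
import time
import torch
import torch.nn as nn

# Tiny stand-in classifier (assumed architecture, not the paper's custom CNN UNet).
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 3),              # e.g. monocyte / lymphocyte / granulocyte
).eval()

img = torch.randn(1, 1, 64, 64)    # one assumed 64x64 single-channel cell image

with torch.no_grad():
    model(img)                     # warm-up
    t0 = time.perf_counter()
    for _ in range(100):
        model(img)
    latency_ms = (time.perf_counter() - t0) / 100 * 1e3

print(f"Mean inference latency: {latency_ms:.3f} ms per image")
```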


Subjects
Biosensing Techniques; Deep Learning; Humans; Image Processing, Computer-Assisted/methods; Algorithms; Signal Processing, Computer-Assisted
5.
Rev Sci Instrum ; 93(11): 114712, 2022 Nov 01.
Article in English | MEDLINE | ID: mdl-36461500

ABSTRACT

We describe a custom and open source field-programmable gate array (FPGA)-based data acquisition (DAQ) system developed for electrophysiology and generally useful for closed-loop feedback experiments. FPGA acquisition and processing are combined with high-speed analog and digital converters to enable real-time feedback. The digital approach eases experimental setup and repeatability by allowing for system identification and in situ tuning of filter bandwidths. The FPGA system includes I2C and serial peripheral interface controllers, 1 GiB dynamic RAM for data buffering, and a USB3 interface to Python software. The DAQ system uses common HDMI connectors to support daughtercards that can be customized for a given experiment to make the system modular and expandable. The FPGA-based digital signal processing (DSP) is used to generate fourth-order digital infinite impulse response filters and feedback with microsecond latency. The FPGA-based DSP and an analog inner-loop are demonstrated via an experiment that rapidly steps the voltage of a capacitor isolated from the system by a considerable resistance using a feedback approach that adjusts the driving voltage based on the digitized capacitor current.
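A minimal software sketch of designing and applying a fourth-order IIR filter of the kind the FPGA DSP implements, using SciPy second-order sections; the sample rate and cutoff are assumed values, not the instrument's settings.

```python
import numpy as np
from scipy import signal

fs = 1_000_000.0                 # assumed sample rate, Hz
fc = 10_000.0                    # assumed low-pass cutoff, Hz

# Fourth-order Butterworth low-pass, expressed as second-order sections (biquads),
# the usual form for a fixed-point hardware implementation.
sos = signal.butter(4, fc, btype="low", fs=fs, output="sos")

t = np.arange(0, 0.01, 1 / fs)
x = np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.sin(2 * np.pi * 80_000 * t)  # tone + out-of-band noise
y = signal.sosfilt(sos, x)

print("SOS coefficients per biquad:\n", sos)
print("Output RMS:", np.sqrt(np.mean(y**2)))
```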


Subjects
Automobile Driving; Benzoquinones; Signal Processing, Computer-Assisted; Software
6.
Sci Rep ; 12(1): 18396, 2022 Nov 01.
Article in English | MEDLINE | ID: mdl-36319659

ABSTRACT

Artifacts in the electrocardiogram (ECG) degrade the quality of the recorded signal and are not conducive to heart rate variability (HRV) analysis. The two types of noise most often found in ECG recordings are technical and physiological artifacts. Current preprocessing methods primarily attend to ectopic beats but do not consider the technical issues that affect the ECG. A two-step preprocessing approach for denoising HRV is introduced that targets each type of noise separately. First, the technical artifacts in the ECG are eliminated by applying complete ensemble empirical mode decomposition with adaptive noise. The second step removes physiological artifacts from the HRV signal using a combination filter of single dependent rank order mean and an adaptive filtering algorithm. A secondary aim of this study was to investigate the effect of increasing increments of artifacts on 24 of the most commonly used HRV measures. The performance of the two-step preprocessing tool showed a high correlation coefficient of 0.846 and an RMSE of 7.69 × 10⁻⁵ for 6% added ectopic beats and 6 dB Gaussian noise. All HRV measures studied, except HF peak and LF peak, are significantly affected by both types of noise. The frequency measures total power, HF power, and LF power, and the fragmentation measures PAS, PIP, and PSS, are the most sensitive to both types of noise.
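A minimal sketch of a few standard time-domain HRV measures of the kind examined above (SDNN, RMSSD, pNN50), computed from an RR-interval series with NumPy; the synthetic RR series is an assumption.

```python
import numpy as np

# Assumed synthetic RR-interval series in milliseconds (roughly 75 bpm with variability).
rng = np.random.default_rng(1)
rr_ms = 800 + 40 * rng.standard_normal(300)

diff = np.diff(rr_ms)
sdnn = np.std(rr_ms, ddof=1)                  # overall variability
rmssd = np.sqrt(np.mean(diff**2))             # short-term variability
pnn50 = 100 * np.mean(np.abs(diff) > 50)      # % of successive differences > 50 ms

print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms, pNN50 = {pnn50:.1f} %")
```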


Subjects
Electrocardiography; Signal Processing, Computer-Assisted; Heart Rate; Electrocardiography/methods; Artifacts; Normal Distribution; Algorithms
7.
PLoS One ; 17(11): e0277555, 2022.
Article in English | MEDLINE | ID: mdl-36374850

ABSTRACT

The diagnosis of neurological diseases is one of the biggest challenges in modern medicine. Electroencephalography (EEG) recordings are usually used to identify various neurological diseases. EEG produces a large volume of multi-channel time-series data that neurologists visually analyze to identify and understand abnormalities within the brain and how they propagate, which is a time-consuming, error-prone, subjective, and exhausting process. Moreover, recent advances in EEG classification have mostly focused on distinguishing patients with a specific disease from healthy subjects, which is not cost-effective because it requires a separate system to check a subject's EEG data for each neurological disorder. This motivates a single, unified classification framework for identifying various neurological diseases from EEG signal data. Hence, this study develops a machine learning (ML) based data mining technique for categorizing multiple abnormalities from EEG data. Textural feature extractors and ML-based classifiers are applied to time-frequency spectrogram images to build the classification system. Initially, noise and artifacts are removed from the signal using filtering techniques, and the signal is normalized to reduce computational complexity. The normalized signals are then segmented into short time segments, and spectrogram images are generated from those segments using the short-time Fourier transform. Two histogram-based textural feature extractors are used to calculate features separately, and principal component analysis is used to select significant features from the extracted features. Finally, four different ML-based classifiers categorize the selected features into different disease classes. The developed method is tested on four real-time EEG datasets. The obtained results show potential in classifying various abnormality types, indicating that the method can be used to identify various neurological abnormalities from brain signal data.
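A minimal sketch of a spectrogram-feature, PCA, and classifier pipeline in the spirit of the one described above, using SciPy and scikit-learn; the synthetic segments, window length, and SVM classifier are illustrative assumptions rather than the study's exact settings.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

fs = 256                                   # assumed EEG sampling rate, Hz
rng = np.random.default_rng(2)

def segment_features(x):
    """Flatten a log-spectrogram of one EEG segment into a feature vector."""
    f, t, Sxx = spectrogram(x, fs=fs, nperseg=64, noverlap=32)
    return np.log(Sxx + 1e-12).ravel()

# Assumed toy data: 100 two-second segments per class (surrogate normal vs. abnormal).
X = np.array([segment_features(rng.standard_normal(2 * fs) * (1 + label))
              for label in (0, 1) for _ in range(100)])
y = np.repeat([0, 1], 100)

clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
print("5-fold accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```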


Subjects
Algorithms; Electroencephalography; Humans; Electroencephalography/methods; Brain; Principal Component Analysis; Machine Learning; Signal Processing, Computer-Assisted; Support Vector Machine
8.
Sci Rep ; 12(1): 19932, 2022 Nov 19.
Article in English | MEDLINE | ID: mdl-36402901

ABSTRACT

This paper is devoted to the synthesis of new signal processing algorithms based on the methodology of complete sufficient statistics and the possibility of using the Lehmann-Scheffé theorem. Using the example of a sequence of quasi-rectangular pulses, an approach to estimating their period is illustrated, taking into account the duty-off factor and the pulse squareness coefficient. A mathematical model was developed, on the basis of which the potential accuracy of the methods was estimated. It is established that for sample sizes n > 8, the relative root-mean-square error of estimating the repetition period using the methodology of complete sufficient statistics is lower than that of the traditional estimate. In addition to the theoretical calculations, simulation results confirming the achieved effect are presented. The results have a wide range of applicability and can be used in the design of control and measuring equipment in the oil and gas industry, in the development of medical equipment, in telecommunications, in the design of pulse-Doppler radars, and elsewhere.


Subjects
Algorithms; Telecommunications; Computer Simulation; Signal Processing, Computer-Assisted; Models, Theoretical
9.
Comput Intell Neurosci ; 2022: 8125186, 2022.
Article in English | MEDLINE | ID: mdl-36397787

ABSTRACT

As an input method for sign language, hand movement classification has developed into one approach to natural human-computer interaction. The surface electromyogram (sEMG) signal contains abundant information about human movement and has significant advantages as an input signal for human-computer interaction. However, effectively extracting components from sEMG signals to improve the accuracy of hand motion classification is a difficult problem. This work therefore proposes a novel method based on the wavelet packet transform (WPT) and principal component analysis (PCA) to classify six kinds of hand motions. The method applies WPT to decompose the sEMG signal into multiple sub-band signals. To efficiently extract the intrinsic components of the sEMG signal, the classification performance of different wavelet packet basis functions is evaluated. The PCA algorithm is used to reduce the dimension of the feature space composed of the motion-related features extracted from each sub-band signal. To ensure high classification performance while reducing the dimension of the feature space, the classification performance of different feature-space dimensions is compared. In addition, the effects of sEMG signal variability and window size on the proposed method are analyzed. The proposed method was tested on the sEMG for Basic Hand Movements Data Set and achieved an average accuracy of 96.03%. Compared with existing research, the proposed method has better classification performance, indicating that the results can be applied to exoskeleton robots, rehabilitation training, and intelligent prostheses.
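A minimal sketch of the WPT-plus-PCA feature step, assuming PyWavelets is available; the wavelet, decomposition level, feature choice, and synthetic sEMG windows are illustrative assumptions, not necessarily those used in the study.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

fs = 1000                                   # assumed sEMG sampling rate, Hz
rng = np.random.default_rng(3)
windows = rng.standard_normal((60, fs))     # 60 assumed one-second sEMG windows

def wpt_features(x, wavelet="db4", level=3):
    """Mean absolute value of each terminal wavelet-packet node."""
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, mode="symmetric", maxlevel=level)
    return np.array([np.mean(np.abs(node.data))
                     for node in wp.get_level(level, order="freq")])

X = np.vstack([wpt_features(w) for w in windows])   # 60 x 8 feature matrix
X_reduced = PCA(n_components=4).fit_transform(X)    # keep the 4 leading components
print(X.shape, "->", X_reduced.shape)
```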


Subjects
Hand; Signal Processing, Computer-Assisted; Humans; Electromyography/methods; Principal Component Analysis; Movement
10.
Sensors (Basel) ; 22(22)2022 Nov 08.
Article in English | MEDLINE | ID: mdl-36433186

ABSTRACT

Atrial fibrillation (AF) is the most common cardiac arrhythmia in the world. The arrhythmia and the methods developed to cure it have been studied for several decades, yet professionals worldwide are still working to improve treatment quality. One novel technology that can help is the wearable device. The two recordings most commonly obtained from these devices are photoplethysmogram (PPG) and electrocardiogram (ECG) signals. As prices fall, these devices will become a significant technology for increasing detection sensitivity, for monitoring, and for supporting treatment quality. This is important because AF can be challenging to detect in advance, especially during home monitoring. Modern artificial intelligence (AI) has the potential to address this challenge and has already achieved state-of-the-art results in many applications, including bioengineering. In this perspective, we discuss wearable devices combined with AI for AF detection, an approach that enables a new era of possibilities.


Subjects
Atrial Fibrillation; Wearable Electronic Devices; Humans; Atrial Fibrillation/diagnosis; Artificial Intelligence; Photoplethysmography; Signal Processing, Computer-Assisted; Technology
11.
Comput Intell Neurosci ; 2022: 3987480, 2022.
Article in English | MEDLINE | ID: mdl-36345476

ABSTRACT

To address the difficulty of feature extraction from multicomponent signals with strong noise in traditional rolling bearing fault diagnosis, this paper proposes a bearing fault diagnosis network with a double attention mechanism. The original noisy signal is decomposed into a series of intrinsic mode functions (IMFs) by the empirical mode decomposition method. The Pearson correlation coefficient is used to select IMF components for signal reconstruction. The spatial features of the reconstructed signal are extracted by attention convolutional networks, and time-series features are then extracted with a long short-term memory network. The importance of the temporal features is weighted through a temporal attention mechanism, and the Softmax layer of the constructed network serves as the classifier for fault diagnosis. Compared with existing methods in experiments, the proposed method achieves not only better diagnostic accuracy but also stronger anti-interference and generalization ability, and it accurately diagnoses and classifies bearing fault types. The fault diagnosis accuracy for each load is above 99%.
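A minimal sketch of the EMD decomposition and Pearson-correlation IMF selection step, assuming the PyEMD (EMD-signal) package is available; the surrogate vibration signal and the 0.2 correlation threshold are assumptions.

```python
import numpy as np
from PyEMD import EMD   # from the EMD-signal package (assumed installed)

fs = 12_000                                    # assumed vibration sampling rate, Hz
t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(4)
# Assumed surrogate bearing signal: a fault-related tone plus broadband noise.
x = np.sin(2 * np.pi * 157 * t) + 0.8 * rng.standard_normal(t.size)

imfs = EMD()(x)                                # intrinsic mode functions, shape (n_imfs, n_samples)

# Keep IMFs whose Pearson correlation with the raw signal exceeds a threshold.
corr = np.array([np.corrcoef(imf, x)[0, 1] for imf in imfs])
selected = imfs[np.abs(corr) > 0.2]
reconstructed = selected.sum(axis=0)

print("IMF correlations:", np.round(corr, 3))
print("Kept", len(selected), "of", len(imfs), "IMFs")
```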


Subjects
Algorithms; Signal Processing, Computer-Assisted; Noise; Correlation of Data
12.
Artif Intell Med ; 133: 102417, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36328670

ABSTRACT

Cardiac auscultation is an essential point-of-care method for the early diagnosis of heart disease. Automatic analysis of heart sounds for abnormality detection faces the challenges of additive noise and sensor-dependent degradation. This paper develops methods to address cardiac abnormality detection when both of these components are present in the cardiac auscultation sound. We first mathematically analyze the effect of additive noise and convolutional distortion on short-term mel-filterbank energy-based features and a convolutional neural network (CNN) layer. Based on this analysis, we propose a combination of linear and logarithmic spectrogram-image features. These 2D features are provided as input to a residual CNN (ResNet) for heart sound abnormality detection. Experimental validation is performed first on an open-access, multiclass heart sound dataset in which we analyze the effect of additive noise by mixing lung sound noise with the recordings. In noisy conditions, the proposed method outperforms one of the best-performing methods in the literature, achieving a Macc (mean of sensitivity and specificity) of 89.55% and an average F1 score of 82.96% when averaged over all noise levels. Next, we perform heart sound abnormality detection (binary classification) experiments on the 2016 PhysioNet/CinC Challenge dataset, which involves noisy recordings obtained from multiple stethoscope sensors. The proposed method achieves significantly improved results compared with conventional approaches on this dataset, in the presence of both additive noise and channel distortion, with an area under the receiver operating characteristic curve (AUC) of 91.36%, an F1 score of 84.09%, and a Macc of 85.08%. We also show that the proposed method has the best mean accuracy across different source domains, including stethoscope and noise variability, demonstrating its effectiveness in different recording conditions. The proposed combination of linear and logarithmic features along with the ResNet classifier effectively minimizes the impact of background noise and sensor variability when classifying phonocardiogram (PCG) signals. The method thus paves the way toward computer-aided cardiac auscultation systems that work in noisy environments with low-cost stethoscopes.
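A minimal sketch of forming a linear and logarithmic spectrogram-feature pair of the kind described above, assuming librosa is available; the surrogate PCG signal, sampling rate, and mel settings are placeholders.

```python
import numpy as np
import librosa

# Assumed placeholder: a one-second synthetic PCG-like signal at 2 kHz instead of a real recording.
sr = 2000
y = np.sin(2 * np.pi * 50 * np.arange(sr) / sr) * np.exp(-np.linspace(0, 5, sr))

# Mel-filterbank energies (power spectrogram projected onto mel bands).
S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=256, hop_length=64, n_mels=40)

linear_feat = S                      # linear-scale mel energies
log_feat = librosa.power_to_db(S)    # logarithmic (dB) mel energies

# Stack the two views as a 2-channel "image" for a CNN such as a ResNet.
features = np.stack([linear_feat, log_feat], axis=0)
print(features.shape)                # (2, 40, n_frames)
```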


Subjects
Heart Sounds; Signal Processing, Computer-Assisted; Sound Recordings; Neural Networks, Computer; Auscultation
13.
PLoS One ; 17(11): e0277081, 2022.
Article in English | MEDLINE | ID: mdl-36331942

ABSTRACT

The COVID-19 pandemic has exposed the vulnerability of healthcare services worldwide, raising the need for novel tools that provide rapid and cost-effective screening and diagnosis. Clinical reports indicate that COVID-19 infection may cause cardiac injury, so electrocardiograms (ECG) may serve as a diagnostic biomarker for COVID-19. This study aims to detect COVID-19 automatically from ECG signals. We propose a novel method to extract ECG signals from paper ECG records, which are then fed into a one-dimensional convolutional neural network (1D-CNN) to learn and diagnose the disease. To evaluate the quality of the digitized signals, R peaks in the paper-based ECG images are labeled, and the RR intervals calculated from each image are compared with the RR intervals of the corresponding digitized signal. Experiments on the COVID-19 ECG images dataset demonstrate that the proposed digitization method correctly captures the original signals, with a mean absolute error of 28.11 ms. The 1D-CNN model (SEResNet18), trained on the digitized ECG signals, accurately distinguishes individuals with COVID-19 from other subjects, with classification accuracies of 98.42% and 98.50% for COVID-19 vs. normal and COVID-19 vs. other classes, respectively. The proposed method also achieves a high level of performance on the multi-class classification task. Our findings indicate that a deep learning system trained on digitized ECG signals can serve as a potential tool for diagnosing COVID-19.


Subjects
COVID-19; Humans; COVID-19/diagnosis; Signal Processing, Computer-Assisted; Pandemics; Algorithms; Neural Networks, Computer; Electrocardiography
14.
Sci Rep ; 12(1): 20159, 2022 Nov 23.
Article in English | MEDLINE | ID: mdl-36418487

ABSTRACT

This paper introduces a novel algorithm for effective and accurate extraction of the non-invasive fetal electrocardiogram (NI-fECG). In NI-fECG based monitoring, the useful signal is measured together with other signals generated by the pregnant woman's body, especially the maternal electrocardiogram (mECG). These signals differ in magnitude and overlap in the time and frequency domains, making fECG extraction extremely challenging. The proposed extraction method combines the grey wolf optimizer (GWO) with sequential analysis (SA). This combination, forming the GWO-SA method, optimizes the parameters required to create a template that matches the mECG, leading to accurate elimination of that signal from the input composite signal. The extraction system was tested on two databases of real signals, Labour and Pregnancy, which are available from a generalist repository (figshare) associated with Matonia et al. (Sci Data 7(1):1-14, 2020). The results show that the proposed method extracts the fetal ECG signal with outstanding efficacy, evaluated in terms of accurate detection of the fQRS complexes using the following parameters: accuracy (ACC), sensitivity (SE), positive predictive value (PPV), and F1 score. Because of the stochastic nature of the GWO algorithm, ten individual runs were performed for each record in the two databases to ensure stability and repeatability. For the Labour dataset, we achieved an average ACC of 94.60%, F1 of 96.82%, SE of 97.49%, and PPV of 98.96%; for the Pregnancy database, an average ACC of 95.66%, F1 of 97.44%, SE of 98.07%, and PPV of 97.44%. The fHR-related parameters were determined accurately for most of the records, outperforming other state-of-the-art approaches, although the poorer quality of certain signals caused deviations from the estimated fHR for some records. The proposed algorithm is compared with several well-established algorithms and proves to be accurate in its fECG extraction.
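A minimal sketch of the detection statistics reported above (ACC, SE, PPV, F1) computed from counts of true-positive, false-positive, and false-negative fQRS detections; the example counts are assumed, not taken from the paper.

```python
def fqrs_metrics(tp: int, fp: int, fn: int):
    """Standard beat-detection statistics commonly used to score fQRS detection."""
    se = tp / (tp + fn)                 # sensitivity
    ppv = tp / (tp + fp)                # positive predictive value
    acc = tp / (tp + fp + fn)           # detection accuracy (no true negatives for beats)
    f1 = 2 * ppv * se / (ppv + se)
    return acc, se, ppv, f1

# Assumed example counts for one record.
acc, se, ppv, f1 = fqrs_metrics(tp=950, fp=10, fn=25)
print(f"ACC={acc:.2%}  SE={se:.2%}  PPV={ppv:.2%}  F1={f1:.2%}")
```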


Subjects
Fetal Monitoring; Signal Processing, Computer-Assisted; Female; Pregnancy; Humans; Fetal Monitoring/methods; Electrocardiography/methods; Algorithms; Databases, Factual
15.
Sensors (Basel) ; 22(21)2022 Oct 22.
Article in English | MEDLINE | ID: mdl-36365802

ABSTRACT

A new approach to the estimation and classification of nonlinear frequency modulated (NLFM) signals is presented. These problems are crucial in electronic reconnaissance systems, whose role is to indicate which signals are being received and recognized by the intercepting receiver. NLFM signals offer a variety of useful properties not available with linear frequency modulation (LFM); in particular, they can provide the desired reduction of the sidelobes of the autocorrelation (AC) function and the desired power spectral density (PSD), and are therefore increasingly used in modern radar and echolocation systems. Because of their nonlinear properties, these signals are difficult to recognize and require sophisticated methods of analysis, estimation, and classification. NLFM signals, whose frequency content varies with time, are mainly analyzed with time-frequency algorithms; the methods presented in this paper, however, belong to the time-chirp domain, which is relatively rarely cited in the literature. We propose to use polynomial approximations of the nonlinear frequency and phase functions describing the signals, which allows the cubic phase function (CPF) to be applied as an estimator of the phase polynomial coefficients. Originally, the CPF involved only third-order nonlinearities of the phase function; an extension of the CPF using nonuniform sampling is used here to analyze higher-order polynomial phases, and a sixth-order polynomial is considered. The instantaneous frequency is estimated using a polynomial whose coefficients are calculated from the coefficients of the phase polynomial obtained by the CPF. The determined coefficients also constitute the set of distinctive features for the classification task. The proposed CPF-based classification method was examined for three common NLFM signals and one LFM signal. Two types of neural network classifiers, learning vector quantization (LVQ) and the multilayer perceptron (MLP), are considered for this classification problem. The performance of both the estimation and classification processes was analyzed in Monte Carlo simulation studies for different SNRs. The simulation results revealed good estimation performance and error-free classification for the SNR range encountered in practical applications.
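A minimal sketch of polynomial phase and instantaneous-frequency estimation for an NLFM test signal; for brevity it uses a least-squares fit to the unwrapped phase of the analytic signal rather than the cubic phase function used in the paper, and the signal parameters are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1, 1 / fs)

# Assumed NLFM test signal with a cubic phase law (coefficients are illustrative).
phase_true = 2 * np.pi * (50 * t + 40 * t**2 + 20 * t**3)
x = np.cos(phase_true) + 0.05 * np.random.default_rng(5).standard_normal(t.size)

# Least-squares polynomial fit to the unwrapped phase (a simple stand-in for CPF estimation).
phase_meas = np.unwrap(np.angle(hilbert(x)))
coeffs = np.polyfit(t, phase_meas, deg=3)            # highest order first

# Instantaneous frequency is the derivative of the phase polynomial, in Hz.
inst_freq = np.polyval(np.polyder(coeffs), t) / (2 * np.pi)
print("Phase polynomial coefficients:", np.round(coeffs, 1))
print("Estimated IF at t = 0.5 s:", round(float(np.interp(0.5, t, inst_freq)), 1), "Hz")
```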


Subjects
Algorithms; Signal Processing, Computer-Assisted; Animals; Neural Networks, Computer; Computer Simulation; Monte Carlo Method
16.
Sensors (Basel) ; 22(21)2022 Oct 24.
Article in English | MEDLINE | ID: mdl-36365824

ABSTRACT

Classification of motor imagery (MI) tasks provides a robust way for people with disabilities to interact with their environment through a brain-computer interface. Precise selection of uniform tuning parameters of the tunable Q wavelet transform (TQWT) for electroencephalography (EEG) signals is arduous. This paper therefore proposes a robust TQWT that automatically selects optimal tuning parameters to accurately decompose non-stationary EEG signals. Three evolutionary optimization algorithms are explored for automating the tuning parameters of the robust TQWT, with the mean square error of the decomposition used as the fitness function. Channel selection based on the Laplacian score is also exploited to select dominant channels. Important features extracted from the sub-bands of the robust TQWT are classified using different kernels of the least-squares support vector machine classifier. The radial basis function kernel provided the highest accuracy of 99.78%, showing that the proposed method is superior to other state-of-the-art methods on the same database.


Subjects
Brain-Computer Interfaces; Electroencephalography; Humans; Electroencephalography/methods; Wavelet Analysis; Imagery, Psychotherapy; Algorithms; Support Vector Machine; Signal Processing, Computer-Assisted
17.
Sensors (Basel) ; 22(21)2022 Oct 25.
Article in English | MEDLINE | ID: mdl-36365862

ABSTRACT

Respiration and heart rates are important information during surgery. When the vital signs of a patient lying prone are monitored using radar installed on the back of the surgical bed, the surgeon's movements reduce the accuracy of the monitored vital signs. This study proposes a method for enhancing the vital sign monitoring accuracy for a patient lying on a surgical bed using a 60 GHz frequency modulated continuous wave (FMCW) radar system with beamforming. The accuracy is enhanced by applying a fast Fourier transform (FFT) over range together with beamforming, which suppresses noise generated at ranges and angles away from the patient's position. The experiment was performed with a patient lying on a surgical bed, with and without a surgeon present. Compared with a continuous-wave (CW) Doppler radar, the FMCW radar with beamforming improved the signal-to-interference-and-noise ratio (SINR) of the vital signals by almost 22 dB. More than 90% accuracy in monitoring respiration and heart rates was achieved even when the surgeon was located next to the patient as an interferer. The results were analyzed using a proposed vital-signal model incorporated into the radar IF equation.
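A minimal sketch of the two operations highlighted above, a range FFT over fast-time samples followed by delay-and-sum beamforming across a uniform linear array; the array geometry, target parameters, and noise level are assumptions.

```python
import numpy as np

# Assumed uniform linear array and FMCW-like beat signal for a single stationary target.
n_ant, n_samples = 8, 256
d_over_lambda = 0.5                       # half-wavelength element spacing
theta_target = np.deg2rad(10.0)           # assumed target angle
beat_bin = 32                             # assumed range bin of the target

rng = np.random.default_rng(6)
n = np.arange(n_samples)
steer = np.exp(1j * 2 * np.pi * d_over_lambda * np.arange(n_ant) * np.sin(theta_target))
tone = np.exp(1j * 2 * np.pi * beat_bin * n / n_samples)         # beat tone -> one range bin
cube = steer[:, None] * tone[None, :] + 0.1 * (rng.standard_normal((n_ant, n_samples))
                                               + 1j * rng.standard_normal((n_ant, n_samples)))

# Range FFT over fast time, one range profile per antenna.
range_profiles = np.fft.fft(cube, axis=1)

# Delay-and-sum beamforming at the target range bin over a grid of look angles.
angles = np.deg2rad(np.arange(-60, 61))
steering = np.exp(1j * 2 * np.pi * d_over_lambda * np.outer(np.arange(n_ant), np.sin(angles)))
spectrum = np.abs(steering.conj().T @ range_profiles[:, beat_bin])
print("Estimated angle:", np.rad2deg(angles[np.argmax(spectrum)]), "deg")
```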


Assuntos
Radar , Processamento de Sinais Assistido por Computador , Humanos , Sinais Vitais , Monitorização Fisiológica/métodos , Respiração , Frequência Cardíaca , Algoritmos
18.
Sensors (Basel) ; 22(21)2022 Oct 26.
Article in English | MEDLINE | ID: mdl-36365890

ABSTRACT

Very low frequency (VLF) signals are considered an important tool for studying ionospheric disturbances. We have studied variations in the signal amplitude of the Japanese JJI transmitter received by a network of eight stations in Japan, and we consider the distinctions between the characteristics of daytime and nighttime disturbances. Signal processing based on spectral analysis is used to evaluate typical periodicities in the VLF signals on time scales from minutes to hours. In particular, we retrieved quasi-wave oscillations of the received signal with periods of 4-10 and 20-25 min, which can be associated with atmospheric gravity waves excited by the solar terminator, earthquakes, or other causes. In addition, oscillations with periods of 3-4 h are observed, probably caused by long-period gravity waves. We also calculate the information entropy to identify the main features of daily VLF variations and the influence of solar flares. It is shown that the information entropy increases near sunrise and sunset, with seasonal variation, and that solar flares also lead to growth in the information entropy. A theoretical interpretation is given for the typical features of the ultra-low-frequency modulation of VLF electromagnetic wave spectra in the Earth-ionosphere waveguide found by processing the experimental data.
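A minimal sketch of one way to compute an information (Shannon) entropy for an amplitude record, here the spectral entropy of a Welch power spectrum; the surrogate VLF amplitude series and window settings are assumptions.

```python
import numpy as np
from scipy.signal import welch

fs = 1.0                                   # assumed 1 sample per second amplitude record
t = np.arange(0, 6 * 3600, 1 / fs)         # six assumed hours of data
rng = np.random.default_rng(7)
# Assumed surrogate VLF amplitude: slow oscillations (periods ~25 min and ~3.5 h) plus noise.
x = (np.sin(2 * np.pi * t / (25 * 60)) + 0.5 * np.sin(2 * np.pi * t / (3.5 * 3600))
     + 0.3 * rng.standard_normal(t.size))

f, pxx = welch(x, fs=fs, nperseg=4096)
p = pxx / pxx.sum()                        # normalize the spectrum to a probability distribution
spectral_entropy = -np.sum(p * np.log2(p + 1e-15))

print(f"Spectral (information) entropy: {spectral_entropy:.2f} bits")
print("Dominant period: %.1f min" % (1 / f[np.argmax(pxx)] / 60))
```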


Subjects
Periodicity; Signal Processing, Computer-Assisted; Entropy; Earth, Planet; Japan
19.
Sensors (Basel) ; 22(21)2022 Oct 29.
Article in English | MEDLINE | ID: mdl-36366015

ABSTRACT

With the standardization and commercialization of 5G, research on 6G technology has begun. In this paper, a new low-complexity soft-input soft-output (SISO) adaptive detection algorithm for short continuous phase modulation (CPM) bursts is proposed for low-power, massive Internet of Things (IoT) connectivity in 6G. First, a time-invariant trellis is constructed on the basis of truncation in order to reduce the number of states. Then, adaptive channel estimators, either recursive least squares (RLS) or least mean squares (LMS), are assigned to each hypothesized sequence by using the recursive structure of the trellis, and per-survivor processing (PSP) is used to improve the quality of channel estimation and reduce the number of searched paths. This yields the RLS adaptive symbol detector (RLS-ASD) and the LMS adaptive symbol detector (LMS-ASD). Compared with a least-squares estimator, the RLS-ASD avoids matrix inversion when computing branch metrics, while the LMS-ASD further reduces the computation of the RLS-ASD at the cost of some performance. Lastly, a soft-information iteration process further improves performance via turbo equalization. Simulation results and analysis show that the RLS-ASD improves performance by about 1 dB over the state-of-the-art approach in time-variant environments while keeping similar complexity. In addition, the LMS-ASD can further reduce complexity significantly with a power loss of approximately 1 dB. A flexible choice of detectors can thus achieve a trade-off between performance and complexity.
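A minimal sketch of the complex LMS update that underlies an LMS channel estimator of the kind mentioned above, applied to a toy linear channel with known training symbols; the channel taps, step size, and QPSK symbols are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
L = 3                                               # assumed number of channel taps
h_true = np.array([0.9 + 0.1j, 0.4 - 0.2j, 0.1 + 0.05j])

# Assumed known training symbols (QPSK) passed through the channel plus noise.
symbols = (rng.choice([-1, 1], 500) + 1j * rng.choice([-1, 1], 500)) / np.sqrt(2)
received = (np.convolve(symbols, h_true)[: symbols.size]
            + 0.05 * (rng.standard_normal(symbols.size) + 1j * rng.standard_normal(symbols.size)))

# LMS adaptation: h <- h + mu * e * conj(x), with x the most recent L symbols.
h_est = np.zeros(L, dtype=complex)
mu = 0.05
for n in range(L, symbols.size):
    x = symbols[n - L + 1:n + 1][::-1]              # [s[n], s[n-1], s[n-2]]
    e = received[n] - h_est @ x
    h_est = h_est + mu * e * np.conj(x)

print("True taps:     ", np.round(h_true, 3))
print("Estimated taps:", np.round(h_est, 3))
```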


Assuntos
Internet das Coisas , Processamento de Sinais Assistido por Computador , Análise dos Mínimos Quadrados , Algoritmos , Simulação por Computador
20.
Sensors (Basel) ; 22(21)2022 Oct 30.
Article in English | MEDLINE | ID: mdl-36366039

ABSTRACT

Healthcare has been an important topic in recent years. In this study, we propose an intelligent healthcare system that uses an inequality-type optimization model based on the signal-to-noise ratio (SNR), together with wavelet-domain low-frequency amplitude adjustment, to hide patients' confidential data in their electrocardiogram (ECG) signals. Extraction of the hidden patient information also uses the low-frequency amplitude adjustment. The detailed steps for building the system are as follows. To embed confidential patient data in ECG signals, we first propose a nonlinear model that optimizes the quality of ECG signals carrying the embedded confidential data, including the patient name, birthdate, date of medical treatment, and medical history. We then apply simulated annealing (SA) to solve the nonlinear model so that the ECG signals with embedded confidential data have a good SNR, a low root-mean-square error (RMSE), and high similarity to the originals. In other words, the distortion of the PQRST complexes and the ECG shape caused by the embedded data is very small, and the quality of the embedded ECG signals meets the requirements of physiological diagnostics. At the receiving terminals, one can receive the ECG signals with the embedded confidential data, and the embedded data can be extracted without the original ECG signals. The experimental results confirm that our method maintains high quality for each ECG signal carrying embedded confidential patient data. Moreover, the embedded confidential data show good robustness against common attacks.
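A minimal sketch of the SNR and RMSE quality measures used above to judge the embedded ECG, computed between an original and a watermarked signal; the surrogate ECG and the small perturbation standing in for the embedded data are assumptions.

```python
import numpy as np

def snr_db(original, modified):
    """SNR of the embedding, treating the embedding-induced change as noise."""
    noise = original - modified
    return 10 * np.log10(np.sum(original**2) / np.sum(noise**2))

def rmse(original, modified):
    return np.sqrt(np.mean((original - modified) ** 2))

# Assumed surrogate ECG: a periodic spike train standing in for QRS complexes.
fs, seconds = 360, 10
t = np.arange(fs * seconds) / fs
ecg = np.exp(-((t % 1.0 - 0.5) ** 2) / 0.001)               # one "beat" per second

rng = np.random.default_rng(9)
watermarked = ecg + 0.002 * rng.standard_normal(ecg.size)   # stand-in for embedded data

print(f"SNR  = {snr_db(ecg, watermarked):.1f} dB")
print(f"RMSE = {rmse(ecg, watermarked):.5f}")
```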


Subjects
Electrocardiography; Signal Processing, Computer-Assisted; Humans; Electrocardiography/methods; Signal-To-Noise Ratio; Models, Theoretical; Delivery of Health Care; Algorithms