Results 1 - 20 of 21
1.
J Xray Sci Technol ; 32(4): 1137-1150, 2024.
Article in English | MEDLINE | ID: mdl-38875073

ABSTRACT

BACKGROUND: The polychromatic X-rays generated by a linear accelerator (Linac) often result in noticeable hardening artifacts in images, posing a significant challenge to accurate defect identification. To address this issue, a simple yet effective approach is to introduce filters at the radiation source outlet. However, current methods are often empirical, lacking scientifically sound metrics. OBJECTIVE: This study introduces an innovative filter design method that optimizes filter performance by balancing the impact of ray intensity and energy on image quality. MATERIALS AND METHODS: First, different spectra under various filter materials and thicknesses were obtained using GEometry ANd Tracking (Geant4) simulation. Subsequently, these spectra and their corresponding incident photon counts were used as input sources to generate different reconstructed images. By comprehensively comparing the intensity differences and noise in images of defective and non-defective regions, along with hardening indicators, the optimal filter was determined. RESULTS: The optimized filter was applied to a Linac-based X-ray computed tomography (CT) detection system designed for identifying defects of 2 mm in graphite materials within a high-temperature gas-cooled reactor (HTR). After adding the filter, the hardening effect was reduced by 22%, and the Defect Contrast Index (DCI) reached 3.226. CONCLUSION: A filter designed based on the parameters of Average Difference (AD) and Defect Contrast Index (DCI) can effectively improve the quality of defect images.
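The abstract does not give closed-form definitions of AD or DCI. The sketch below shows one plausible reading of a Defect Contrast Index — the intensity gap between defect and non-defect regions, normalized by background noise. The synthetic image, mask, and normalization are illustrative assumptions, not the authors' exact metric.

```python
import numpy as np

def defect_contrast_index(image, defect_mask):
    # Assumed definition: intensity gap between defect and background
    # regions, normalized by the background noise level.
    defect = image[defect_mask]
    background = image[~defect_mask]
    return abs(defect.mean() - background.mean()) / background.std()

# Synthetic reconstructed slice: uniform graphite-like background plus a
# small low-density defect region (all values are made up).
rng = np.random.default_rng(0)
img = rng.normal(1.0, 0.05, (64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[28:36, 28:36] = True
img[mask] -= 0.3
dci = defect_contrast_index(img, mask)
```

Under this reading, a larger DCI means the defect stands further above the reconstruction noise; the reported DCI of 3.226 would sit on the same kind of scale.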


Subjects
Equipment Design, Particle Accelerators, X-Ray Computed Tomography, X-Ray Computed Tomography/methods, X-Ray Computed Tomography/instrumentation, Computer-Assisted Image Processing/methods, Imaging Phantoms, Artifacts
2.
Sensors (Basel) ; 22(22)2022 Nov 14.
Article in English | MEDLINE | ID: mdl-36433373

ABSTRACT

Deformation-rate distributed acoustic sensing (DAS), made available by the unique designs of certain interrogator units, acquires seismic data that are theoretically equivalent to the along-fiber particle velocity motion recorded by geophones for scenarios involving elastic ground-fiber coupling. While near-elastic coupling can be achieved in cemented downhole installations, it is less obvious how to do so in lower-cost horizontal deployments. This investigation addresses this challenge by installing and freezing fiber in shallow backfilled trenches (to 0.1 m depth) to achieve improved coupling. This acquisition allows for a reinterpretation of processed deformation-rate DAS waveforms as a "filtered particle velocity" rather than the conventional strain-rate quantity. We present 1D and 2D filtering experiments that suggest 2D velocity-dip filtering can recover improved DAS data panels that exhibit clear surface and refracted arrivals. Data acquired on DAS fibers deployed in backfilled, frozen trenches were more repeatable over a day of acquisition compared to those acquired on a surface-deployed DAS fiber, which exhibited more significant amplitude variations and lower signal-to-noise ratios. These observations suggest that deploying fiber in backfilled, frozen trenches can help limit the impact of environmental factors that would adversely affect interpretations of time-lapse DAS observations.


Subjects
Acoustics, Motion (Physics), Signal-to-Noise Ratio
3.
Sensors (Basel) ; 21(4)2021 Feb 20.
Article in English | MEDLINE | ID: mdl-33672638

ABSTRACT

WiFi backscatter communication has emerged as a promising enabler of ultralow-power connectivity for the Internet of Things, wireless sensor networks, and smart energy. In this paper, we propose a multi-filter design for effective decoding of WiFi backscattered signals. Backscattered signals are relatively weak compared to carrier WiFi signals and therefore require algorithms that filter out the original WiFi signals without affecting the backscattered signals. Two multi-filter designs for WiFi backscatter decoding are presented: the summation and delimiter approaches. Both implementations employ additional filters with different window sizes to efficiently cut off undesired noise/interference, thus enhancing frame detection and decoding performance, and can be coupled with a wide range of decoding algorithms. The designs are particularly effective in frequency-shift WiFi backscatter communication. We demonstrate via prototyping and testbed experiments that the proposed design enhances the performance of various decoding algorithms in real environments.
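The summation approach is only named in this abstract; a minimal sketch of the general idea follows, with all window sizes, signal shapes, and amplitudes invented for illustration. Several smoothing filters of different window sizes are applied to the signal energy and their outputs summed, so bursts of different lengths stand out against noise.

```python
import numpy as np

def multi_window_energy(signal, windows=(8, 16, 32)):
    # Summation approach (sketch): smooth the instantaneous energy with
    # several window sizes and sum the results, so both short and long
    # backscatter bursts are emphasized.
    energy = np.abs(signal) ** 2
    out = np.zeros_like(energy)
    for w in windows:
        kernel = np.ones(w) / w
        out += np.convolve(energy, kernel, mode="same")
    return out

rng = np.random.default_rng(1)
n = 512
sig = rng.normal(0, 0.1, n)
sig[200:300] += 1.0          # weak backscattered burst riding on noise
score = multi_window_energy(sig)
detected = int(score.argmax())  # index of the strongest combined response
```

A real decoder would apply a threshold to `score` for frame detection rather than a single argmax; the point here is only that the summed multi-window response peaks inside the burst.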

4.
Sensors (Basel) ; 20(23)2020 Dec 02.
Article in English | MEDLINE | ID: mdl-33276453

ABSTRACT

By placing a color filter in front of a camera, we create new spectral sensitivities. The Luther-condition optimization solves for a color filter so that the camera's filtered sensitivities are as close to being linearly related to the XYZ color matching functions (CMFs) as possible; that is, a filter is found that makes the camera more colorimetric. Arguably, the more general Vora-Value approach solves for the filter that best matches all possible target spectral sensitivity sets (e.g., any linear combination of the XYZ CMFs). A concern that we investigate here is that the filters found by the Luther and Vora-Value optimizations are different from one another. In this paper, we unify the Luther and Vora-Value approaches to prefilter design. We prove that if the target of the Luther-condition optimization is an orthonormal basis (a special linear combination of the XYZ CMFs whose components are orthogonal and of unit length), the discovered Luther filter is also the filter that maximizes the Vora-Value. A key advantage of using the Luther-condition formulation to maximize the Vora-Value is that it is both simpler to implement and converges to its optimal answer more quickly. Experiments validate our method.
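A sketch of the Luther-condition idea as a plain alternating least-squares problem: find a spectral filter f and a linear map M so that diag(f)·Q·M approximates target sensitivities X. The Gaussian camera sensitivities and targets below are stand-ins for measured data, and physical constraints on f (non-negativity, transmittance ≤ 1) are ignored; this is not the authors' exact algorithm.

```python
import numpy as np

# Hypothetical smooth camera sensitivities Q and target curves X,
# both n_wavelengths x 3 (stand-ins for real measurements).
wl = np.linspace(400, 700, 31)
def gauss(mu, s):
    return np.exp(-0.5 * ((wl - mu) / s) ** 2)
Q = np.stack([gauss(460, 40), gauss(540, 45), gauss(610, 40)], axis=1)
X = np.stack([gauss(600, 35) + 0.4 * gauss(450, 20),
              gauss(555, 40), gauss(450, 25)], axis=1)

# Alternating least squares for min_{f,M} ||diag(f) Q M - X||_F:
# given f, M has a closed-form least-squares solution; given M,
# each f_i is a scalar row-wise least-squares fit.
f = np.ones(len(wl))
for _ in range(200):
    M, *_ = np.linalg.lstsq(f[:, None] * Q, X, rcond=None)
    R = Q @ M
    f = np.sum(R * X, axis=1) / np.maximum(np.sum(R * R, axis=1), 1e-12)

err = np.linalg.norm((f[:, None] * Q) @ M - X)
# Baseline: best linear fit with no filter at all (f = 1 everywhere).
base = np.linalg.norm(Q @ np.linalg.lstsq(Q, X, rcond=None)[0] - X)
```

Each half-step solves its subproblem exactly, so the objective is non-increasing and the filtered fit can never be worse than the unfiltered baseline.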

5.
Sensors (Basel) ; 18(5)2018 May 07.
Article in English | MEDLINE | ID: mdl-29735948

ABSTRACT

Previous research has shown the effectiveness of selecting filter sets from among a large set of commercial broadband filters by a vector analysis method based on maximum linear independence (MLI). However, the traditional MLI approach is suboptimal because it must predefine the first filter of the selected set to be the one with the maximum ℓ2 norm among all available filters. An exhaustive imaging simulation with every single filter serving as the first filter was conducted to investigate the features of the most competent filter set. From the simulation, the characteristics of the most competent filter set were discovered. Besides minimizing the condition number, the geometric features of the best-performing filter set comprise a distinct transmittance peak along the wavelength axis for the first filter, a generally uniform distribution of the filters' peaks, and substantial overlap of the transmittance curves of adjacent filters. Therefore, the best-performing filter sets can be recognized intuitively by simple vector analysis and just a few experimental verifications. A practical two-step framework for selecting an optimal filter set is recommended, which guarantees a significant enhancement of system performance. This work should be useful for optimizing the spectral sensitivity of broadband multispectral imaging sensors.
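The condition-number criterion mentioned above can be checked by brute force on a small synthetic filter bank (the Gaussian transmittances below are illustrative, not measured curves): the best three-filter subset by condition number is markedly better conditioned than three adjacent, strongly overlapping filters.

```python
import numpy as np
from itertools import combinations

# Hypothetical transmittance curves for 8 broadband filters with
# evenly spaced peaks (stand-ins for commercial filter data).
wl = np.linspace(400, 700, 61)
filters = [np.exp(-0.5 * ((wl - mu) / 45) ** 2)
           for mu in np.linspace(410, 690, 8)]

def cond_of(idx):
    # Condition number of the matrix whose columns are the chosen filters.
    return np.linalg.cond(np.stack([filters[i] for i in idx], axis=1))

best = min(combinations(range(8), 3), key=cond_of)
cond_best = cond_of(best)
cond_adjacent = cond_of((0, 1, 2))  # three heavily overlapping neighbors
```

Spread-out peaks give nearly independent columns and a small condition number, matching the abstract's observation that good sets have roughly uniformly distributed peaks.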

6.
Sensors (Basel) ; 17(11)2017 Nov 18.
Article in English | MEDLINE | ID: mdl-29156581

ABSTRACT

Tracking error estimation is of great importance in global navigation satellite system (GNSS) receivers. Any inaccurate estimation of tracking error will decrease the signal tracking ability of signal tracking loops and the accuracies of position fixing, velocity determination, and timing. Tracking error estimation can be done by a traditional discriminator or a Kalman filter-based pre-filter. Pre-filters can be divided into two categories: coherent and non-coherent. This paper focuses on performance improvements of the non-coherent pre-filter. Firstly, the signal characteristics of coherent and non-coherent integration, which are the basis of tracking error estimation, are analyzed in detail. After that, the probability distribution of the estimation noise of the four-quadrant arctangent (ATAN2) discriminator is derived according to the mathematical model of coherent integration. Secondly, the statistical property of the observation noise of the non-coherent pre-filter is studied through Monte Carlo simulation in order to set the observation noise variance matrix correctly. Thirdly, a simple fault detection and exclusion (FDE) structure is introduced into the non-coherent pre-filter design, extending its effective working range for carrier phase error estimation from (-0.25 cycle, 0.25 cycle) to (-0.5 cycle, 0.5 cycle). Finally, the estimation accuracies of the discriminator, the coherent pre-filter, and the enhanced non-coherent pre-filter are evaluated comprehensively through a carefully designed experimental scenario. The pre-filter outperforms the traditional discriminator in estimation accuracy. In a highly dynamic scenario, the enhanced non-coherent pre-filter provides accuracy improvements of 41.6%, 46.4%, and 50.36% for carrier phase error, carrier frequency error, and code phase error estimation, respectively, when compared with the coherent pre-filter. The enhanced non-coherent pre-filter outperforms the coherent pre-filter in code phase error estimation when the carrier-to-noise density ratio is less than 28.8 dB-Hz, in carrier frequency error estimation when the carrier-to-noise density ratio is less than 20 dB-Hz, and in carrier phase error estimation when the carrier-to-noise density ratio belongs to (15, 23) dB-Hz ∪ (26, 50) dB-Hz.
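The ATAN2 discriminator referenced above has a standard form; a minimal sketch (the correlator amplitudes are made up) showing that it recovers a phase error of 0.3 cycle, which lies outside the (-0.25, 0.25) cycle range of the two-quadrant arctangent discriminator:

```python
import math

def atan2_phase_discriminator(i_p, q_p):
    # Four-quadrant arctangent (ATAN2) carrier phase discriminator:
    # estimates phase error in cycles from prompt I/Q correlator outputs.
    # Unambiguous over (-0.5, 0.5] cycle.
    return math.atan2(q_p, i_p) / (2 * math.pi)

# Noise-free prompt correlator outputs for a true error of 0.3 cycle
# (amplitude is an arbitrary illustrative value).
true_err = 0.3
amp = 100.0
i_p = amp * math.cos(2 * math.pi * true_err)
q_p = amp * math.sin(2 * math.pi * true_err)
est = atan2_phase_discriminator(i_p, q_p)
```

In a real receiver the I/Q outputs are noisy, which is exactly why the paper studies the discriminator's noise distribution and feeds it into a pre-filter.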

7.
J Clin Monit Comput ; 29(5): 659-69, 2015 Oct.
Article in English | MEDLINE | ID: mdl-25516162

ABSTRACT

The methods for evaluating noninvasive blood pressure (NIBP) monitors using an intra-arterial reference are detailed in the ANSI/AAMI/ISO 81060-2:2009 standard. In a recent study, GE Healthcare obtained invasive radial arterial blood pressure waveforms. The work presented here describes the development of filtering strategies for obtaining high-fidelity intra-arterial pressure waveforms for NIBP accuracy testing under the 81060-2 standard. The natural frequency and damping factor of each subject-catheter-transducer system were computed from fast-flush transients. These parameters were used to construct filters for removing or reducing resonance artifacts. Additionally, new optimal damping factors were evaluated for designing compensation filters. Theoretical measurement systems using actual damping factors (< 0.4) and natural frequencies were found capable of generating significant systolic resonance artifacts (≥ 8 mmHg). Typical filters available as standard in monitoring equipment were observed to be potentially inadequate for removing resonance artifacts. Filters with particular optimal damping factors (0.6-0.7) were effective in removing resonance artifacts. Clinicians need to understand that resonance artifacts may exist in intra-arterial waveforms and that the adjustments available in monitoring systems may not be adequate. Optimal filters for obtaining intra-arterial waveforms should take into account the damping factor and natural frequency of the measuring system. In research and device evaluation studies, optimal filtering is necessary to minimize the effects of under-damping.
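The resonance behaviour described above follows from the standard second-order model of a catheter-transducer system. The sketch below computes its peak frequency-response gain for an under-damped and for an "optimal" damping factor; the specific zeta values are illustrative choices within the ranges quoted in the abstract.

```python
import math

def peak_amplification(zeta):
    # Maximum gain of a second-order measurement system with damping
    # factor zeta (valid for zeta < 1/sqrt(2)); values well above 1
    # mean the recorded waveform overshoots the true pressure.
    return 1.0 / (2 * zeta * math.sqrt(1 - zeta ** 2))

under = peak_amplification(0.2)     # an under-damped system (< 0.4)
optimal = peak_amplification(0.65)  # within the optimal 0.6-0.7 range
```

An under-damped system amplifies content near resonance by more than a factor of two, which is consistent with the ≥ 8 mmHg systolic artifacts reported; at a damping factor around 0.65 the peak gain is close to unity, so the resonance artifact essentially disappears.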


Subjects
Algorithms, Artifacts, Blood Pressure Determination/methods, Blood Pressure/physiology, Radial Artery/physiology, Computer-Assisted Signal Processing, Computer-Aided Diagnosis/methods, Female, Humans, Male, Reproducibility of Results, Sensitivity and Specificity
8.
J Supercomput ; 79(4): 3643-3665, 2023.
Article in English | MEDLINE | ID: mdl-36093387

ABSTRACT

This paper presents a prototype filter design using the orthant optimization technique to assist a filter bank multicarrier (FBMC) modulation scheme for a NextG smart e-healthcare network framework. Low latency and very high reliability are among the main requirements of a real-time e-healthcare system. In recent times, FBMC modulation has received more attention due to its spectral efficiency. The characteristics of a filter bank are determined by its prototype filter. A prototype filter cannot be designed to achieve arbitrary time localization (for low latency) and frequency localization (for spectral efficiency), as time and frequency spreading are conflicting goals; hence, an optimum trade-off needs to be achieved. In this paper, a constraint for perfect or nearly perfect reconstruction is formulated for the prototype filter design, and an orthant-based enriched sparse ℓ1-optimization method is applied to achieve optimum performance in terms of higher availability of subcarrier spacing for a given signal-to-interference ratio requirement. Larger subcarrier spacing ensures lower latency and better performance in real-time applications. The proposed FBMC system, based on an optimum design of the prototype filter, also supports a higher data rate compared to traditional FBMC and OFDM systems, which is another requirement of real-time communication. In this paper, solutions for the different technical issues of physical layer design are provided. The modulation scheme built on the proposed prototype filter-based FBMC can suppress the side-lobe energy of the constituent filters to a large extent without compromising signal recovery at the receiver end. The proposed system provides very high spectral efficiency; it can sacrifice large guard-band frequencies to increase the subcarrier spacing and provide low-latency communication to support the real-time e-healthcare network.

9.
Article in English | MEDLINE | ID: mdl-36673775

ABSTRACT

Safe drinking water remains a major global challenge, especially in rural areas where, according to UNICEF, 80% of those without access to improved water systems reside. While water, sanitation, and hygiene (WASH)-related diseases and deaths are common outcomes of unsafe water, there is also an economic burden associated with unsafe water. These burdens are most prominent in rural areas in less-developed nations. Slow sand filters (SSFs), or biological sand filters (BSFs), are ideal water treatment solutions for these low-resource regions. SSFs are the oldest municipal drinking water treatment systems and improve water quality by removing suspended particles, dissolved organic chemicals, and other contaminants, effectively reducing turbidity and associated taste and odor problems. The removal of turbidity and dissolved organic compounds from the water enables the use of low-cost disinfection methods, such as chlorination. While the working principles of slow sand filtration have remained the same for over two centuries, the design, sizes, and application of slow sand filters have been customized over the years. This paper reviews these adaptations and recent reports on performance regarding contaminant removal. We specifically address the removal of turbidity and microbial contaminants, which are of great concern to rural populations in developing countries.


Subjects
Drinking Water, Water Purification, Humans, Silicon Dioxide/chemistry, Water Quality, Water Purification/methods, Filtration/methods
10.
Environ Technol ; : 1-8, 2023 Mar 14.
Article in English | MEDLINE | ID: mdl-36848050

ABSTRACT

Providing the population with high-quality drinking water is one of the main responsibilities of the state. Rural water supply systems and the water supply systems of small settlements in the region require special attention, namely the development of technologies for individual, small-sized water treatment equipment, as well as equipment for collective use, designed to purify groundwater for drinking purposes. In many areas, groundwater contains excess levels of several pollutants, which makes purification much more difficult. The shortcomings of known methods of iron removal from water can be eliminated by reconstructing existing water supply systems fed from underground sources in small settlements. A rational solution is to search for groundwater treatment technologies that make it possible to provide the population with high-quality drinking water at a lower cost. The oxygen concentration in the water was increased by modifying the filter's excess-air exhaust system, redesigned as a perforated pipeline located in the lower half of the granular filter layer and connected to the upper branch pipe. At the same time, high-quality groundwater treatment and sufficient simplicity and reliability in operation are ensured, and local conditions and the inaccessibility of many objects and settlements in the region are taken into account as much as possible. After the filter was upgraded, the concentration of iron decreased from 4.4 to 0.27 mg/L and that of ammonium nitrogen from 3.5 to 1.5 mg/L.

11.
Sensors (Basel) ; 12(8): 10326-38, 2012.
Article in English | MEDLINE | ID: mdl-23112602

ABSTRACT

Electro-optic (EO) image sensors exhibit high resolution and low noise levels in daytime, but they do not work in dark environments. Infrared (IR) image sensors exhibit poor resolution and cannot separate objects with similar temperatures. Therefore, we propose a novel framework for IR image enhancement based on information (e.g., edges) from EO images, which improves the resolution of IR images and helps us distinguish objects at night. Our framework superimposes/blends the edges of the EO image onto the corresponding transformed IR image to improve resolution. In this framework, we adopt the theoretical point spread function (PSF) proposed by Hardie et al. for the IR image, which has the modulation transfer function (MTF) of a uniform detector array and the incoherent optical transfer function (OTF) of diffraction-limited optics. In addition, we design an inverse filter for the proposed PSF and use it for the IR image transformation. The framework requires four main steps: (1) inverse filter-based IR image transformation; (2) EO image edge detection; (3) registration; and (4) blending/superimposing of the obtained image pair. Simulation results show both blended and superimposed IR images, and demonstrate that blended IR images have better quality than the superimposed images. Additionally, based on the same steps, simulation results show a blended IR image of better quality when only the original IR image is available.
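Steps (2) and (4) can be miniaturized as follows, with a Sobel operator standing in for the edge-detection step and a synthetic, already-registered EO/IR pair; the inverse-PSF filtering and registration steps (1) and (3) are omitted, and the blending weight is an arbitrary illustrative choice.

```python
import numpy as np

def sobel_edges(img):
    # Sobel gradient magnitude (stand-in for the EO edge-detection step).
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    gx = sum(kx[i, j] * pad[i:i + h, j:j + w]
             for i in range(3) for j in range(3))
    gy = sum(ky[i, j] * pad[i:i + h, j:j + w]
             for i in range(3) for j in range(3))
    return np.hypot(gx, gy)

# Synthetic registered pair: EO has a sharp square, IR a blurrier blob.
eo = np.zeros((32, 32)); eo[8:24, 8:24] = 1.0
ir = np.full((32, 32), 0.3); ir[10:22, 10:22] = 0.7

edges = sobel_edges(eo)
alpha = 0.5  # blending weight (illustrative)
blended = (1 - alpha) * ir + alpha * edges / edges.max()
```

The blended image keeps the IR intensity field but inherits the EO edge contour, which is the essence of step (4).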


Subjects
Darkness, Computer-Assisted Image Processing/methods, Infrared Rays, Optics and Photonics/methods, Computer-Assisted Signal Processing, Thermography
12.
J Neural Eng ; 19(1)2022 02 09.
Article in English | MEDLINE | ID: mdl-35078163

ABSTRACT

Objective. We present a framework to objectively test and compare stimulation artefact removal techniques in the context of neural spike sorting. Approach. To this end, we used realistic hybrid ground-truth spiking data, with superimposed artefacts from in vivo recordings. We used the framework to evaluate and compare several techniques: blanking, template subtraction by averaging, linear regression, and a multi-channel Wiener filter (MWF). Main results. Our study demonstrates that blanking and template subtraction result in a poorer spike sorting performance than linear regression and MWF, while the latter two perform similarly. Finally, to validate the conclusions found from the hybrid evaluation framework, we also performed a qualitative analysis on in vivo recordings without artificial manipulations. Significance. Our framework allows direct quantification of the impact of the residual artefact on the spike sorting accuracy, thereby allowing for a more objective and more relevant comparison compared to indirect signal quality metrics that are estimated from the signal statistics. Furthermore, the availability of a ground truth in the form of single-unit spiking activity also facilitates a better estimation of such signal quality metrics.
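Template subtraction by averaging, one of the baselines compared above, is simple to sketch; the artefact shape, amplitudes, and trial count below are synthetic stand-ins, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_samp = 50, 40

# Stereotyped stimulation artefact (synthetic decaying transient),
# identical across trials, buried under unit-variance "neural" activity.
template = 80.0 * np.exp(-np.arange(n_samp) / 5.0)
trials = template + rng.normal(0, 1.0, (n_trials, n_samp))

# Template subtraction by averaging: estimate the artefact as the
# across-trial mean, then subtract it from every trial.
est_template = trials.mean(axis=0)
cleaned = trials - est_template

raw_rms = np.sqrt((trials ** 2).mean())
resid_rms = np.sqrt((cleaned ** 2).mean())
```

This works only because the artefact here is perfectly stereotyped; trial-to-trial artefact variability is exactly what motivates the regression- and Wiener-filter-based alternatives the framework evaluates.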


Subjects
Artifacts, Neurological Models, Action Potentials/physiology, Algorithms, Neurons/physiology, Computer-Assisted Signal Processing
13.
ISA Trans ; 108: 196-206, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32917373

ABSTRACT

The idea behind designing digital filters is to compute the optimal filter coefficients such that the magnitude response of the designed filter matches the ideal frequency response, using optimization algorithms. The proposed work employs a recently proposed swarm-based optimization technique, namely the grasshopper optimization algorithm (GOA), to design linear-phase finite impulse response (FIR) low-pass, high-pass, band-pass, and band-stop filters. The algorithm models the behaviour of grasshoppers while seeking food sources to solve optimization problems. For the design of the FIR filter, an absolute-error-difference fitness function is used, which is minimized using GOA to obtain optimal filter coefficients. The performance of the proposed work is compared with existing algorithms such as cuckoo search, particle swarm optimization, and artificial bee colony to prove its superiority and consistency. It is found that the GOA-based filter meets the objective efficiently, with reduced ripples in the pass band and higher attenuation in the stop band, in the least execution time.
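The absolute-error fitness function can be sketched directly; the GOA itself is omitted, but any optimizer would move candidate coefficient vectors toward lower values of this fitness. The cutoff, tap count, grid density, and the windowed-sinc reference design below are illustrative assumptions.

```python
import numpy as np

def fir_fitness(h, cutoff=0.3, n_grid=512):
    # Absolute-error fitness: deviation of |H(e^jw)| from an ideal
    # low-pass brick wall, summed over a dense frequency grid.
    w = np.linspace(0, np.pi, n_grid)
    H = np.exp(-1j * np.outer(w, np.arange(len(h)))) @ h
    ideal = (w <= cutoff * np.pi).astype(float)
    return float(np.abs(np.abs(H) - ideal).sum())

def lowpass_sinc(n_taps, cutoff):
    # Hamming-windowed sinc: a classical (non-optimized) reference design.
    n = np.arange(n_taps) - (n_taps - 1) / 2
    return cutoff * np.sinc(cutoff * n) * np.hamming(n_taps)

rng = np.random.default_rng(7)
good = lowpass_sinc(31, 0.3)
bad = rng.normal(size=31)   # a random point in the 31-tap search space
```

A sensible design has a far lower fitness than a random coefficient vector, which is what gives the swarm a usable gradient of quality to descend.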

14.
ISA Trans ; 114: 455-469, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33423766

ABSTRACT

Deconvolution methods have proven to be effective tools for extracting excitation sources from noisy measured signals. However, their application is limited by the extraction of incomplete information. To tackle this problem, a new deconvolution method, named period-oriented multi-hierarchy deconvolution (POMHD), is proposed in this paper. Various filters are designed adaptively by an iterative algorithm that updates the filter coefficients using the harmonic-to-noise ratio as the deconvolution orientation. Additionally, a novel index, called the normalized proportion of harmonics, is proposed as the evaluation criterion for the fault feature. Based on this index, a harmonics proportion diagram is constructed for diagnostic decisions. The new deconvolution method overcomes the disadvantages of the traditional methods. More importantly, without an accurate fault period as prior knowledge, the proposed POMHD can simultaneously extract multiple latent fault components by using the adaptive filter and intuitively present different fault information in one diagram. Finally, simulated and experimental data, including signals collected from bearings with both single and compound faults, are used to evaluate the new method. The results validate the feasibility and robustness of the proposed POMHD.

15.
ISA Trans ; 110: 225-237, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33246645

ABSTRACT

This paper investigates the robust filtering problem for linear uncertain systems affected by noises in restricted frequency intervals. Different from traditional filter schemes, a finite-frequency memory filter is designed to generalize conventional memoryless ones in such a way that a sequence of latest output measurements are employed for current estimation. To be specific, a memory filter is sought which ensures that for all admissible uncertainties, the filtering error system is asymptotically stable with a prescribed noise-attenuation level in the restricted frequency range. To accomplish this, the finite-frequency specification is characterized by the generalized Kalman-Yakubovich-Popov (KYP) lemma, aiming at improving the capability of noise-attenuation over the given frequency range. Moreover, the homogeneous polynomially parameter-dependent technique is adopted to facilitate filter design and reduce conservatism further. Based on the proposed scheme, we prove rigorously that additional past output measurements contribute to less conservative results. Finally, a quarter-car model with an active-suspension system is used to validate the proposed scheme.

16.
ISA Trans ; 104: 130-137, 2020 Sep.
Article in English | MEDLINE | ID: mdl-30902498

ABSTRACT

This paper is concerned with an event-triggered filter design for fuzzy-model-based cyber-physical systems subject to cyber-attacks. Spurious events may be triggered under the conventional event-triggered mechanism (ETM) when the sampled data change rapidly owing to unpredicted external disturbances. To avoid spurious decisions on data releasing, a new ETM is proposed. Furthermore, the communication network is vulnerable to attacks by malicious attackers; under this scenario, a new resilient filter is designed to ensure security. Sufficient conditions are established to make the filtering error system asymptotically stable. A numerical example is provided to show the effectiveness of the proposed results.
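A conventional relative-threshold ETM, of the kind whose spurious triggering motivates the new design, can be sketched as follows; the threshold and the sample sequence are illustrative. Note how a single disturbance spike forces two releases in a row: one when the spike arrives and one when the signal returns to its previous level.

```python
def event_triggered(samples, sigma=0.1):
    # Conventional relative-threshold ETM (sketch): release a sample when
    # it deviates from the last released sample by more than
    # sigma * |current sample|.
    released = [0]
    last = samples[0]
    for k, x in enumerate(samples[1:], start=1):
        if abs(x - last) > sigma * max(abs(x), 1e-9):
            released.append(k)
            last = x
    return released

# Flat signal with one disturbance spike at index 5.
samples = [1.0, 1.0, 1.0, 1.0, 1.0, 5.0, 1.0, 1.0, 1.0, 1.0, 1.0]
released = event_triggered(samples)
```

Both releases at indices 5 and 6 are "spurious" in the sense of the abstract: the underlying state never changed, only a transient disturbance did.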

17.
J Biomed Phys Eng ; 10(5): 613-622, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33134221

ABSTRACT

BACKGROUND: The intensity-modulated radiation therapy (IMRT) technique is an advanced method of radiotherapy that led to the development of flattening filter-free (FFF) medical linear accelerators (linacs). Monte Carlo simulation has been a standard method for calculating particle transport owing to its precise geometry and material specifications. OBJECTIVE: The aim of this study is to optimize the flattening filter-free (FFF) design for a 6 MV linac. MATERIAL AND METHODS: In this simulation study, the EGSnrc user code was used to simulate particles emitted from the head of a 6 MV Varian linac to find the most suitable filter for the FFF linac design. Monte Carlo simulation results for the PDD and profile, on the 10 × 10 cm2 field, were compared with measurements. Differences in small profile beams from the Monte Carlo simulation were also evaluated between the FF and FFF linacs. RESULTS: The spectrum at the isocenter from the Monte Carlo simulation was compared with the Treatment Planning System (TPS) for each filter variation. The average spectrum differences were -1.52 ± 3.82% for the 2 mm copper filter and -3.13 ± 3.61% for the FakeBeam filter, whereas for the PDD and profiles the average differences were 7.10 ± 0.70% and -5.99 ± 1.39%, respectively. CONCLUSION: The FakeBeam filter is a suitable filter for the 6 MV Varian FFF linac design. It is necessary to decrease the kinetic energy of the electrons to perform MC simulations of the FFF linac.

18.
Article in English | MEDLINE | ID: mdl-31652712

ABSTRACT

Hypertension (HT) is an extreme increase in blood pressure that can lead to stroke, kidney disease, and heart attack. HT does not show any symptoms at the early stage but can lead to various cardiovascular diseases; hence, it is essential to identify it at the beginning stages. It is tedious to analyze electrocardiogram (ECG) signals visually due to their low amplitude and small bandwidth. Hence, to avoid possible human errors in the diagnosis of HT patients, an automated ECG-based system is developed. This paper proposes the computerized segregation of low-risk hypertension (LRHT) and high-risk hypertension (HRHT) using ECG signals with an optimal orthogonal wavelet filter bank (OWFB) system. The HRHT class comprises patients with myocardial infarction, stroke, and syncope ECG signals. The ECG data are acquired from PhysioNet's Smart Health for Accessing Risk via ECG Event (SHAREE) database, which contains recordings of a total of 139 subjects. First, the ECG signals are segmented into epochs of 5 min. The segmented epochs are then decomposed into six wavelet sub-bands (WSBs) using the OWFB. We extract signal fractional dimension (SFD) and log-energy (LOGE) features from all six WSBs. Using Student's t-test ranking, we choose the highest-ranked WSBs of the LOGE and SFD features. We develop a novel hypertension diagnosis index (HDI) using the two features (SFD and LOGE) to discriminate the LRHT and HRHT classes using a single numeric value. The performance of our developed system is found to be encouraging, and we believe that it can be employed in intensive care units to monitor the abrupt rise in blood pressure while screening ECG signals, provided it is tested with an extensive independent database.
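The log-energy (LOGE) feature per wavelet sub-band is straightforward to sketch. A plain Haar decomposition stands in for the paper's optimal orthogonal wavelet filter bank, and the test signal is synthetic; only the feature definition (log of sub-band energy) is the point here.

```python
import numpy as np

def haar_subbands(x, levels=5):
    # Orthonormal Haar wavelet decomposition (stand-in for the paper's
    # OWFB): returns the detail sub-bands plus the final approximation.
    bands = []
    a = np.asarray(x, float)
    for _ in range(levels):
        pairs = a[: len(a) // 2 * 2].reshape(-1, 2)
        bands.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2))  # detail
        a = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)            # approx
    bands.append(a)
    return bands

def log_energy(band):
    # LOGE feature: log of the sub-band energy.
    return float(np.log(np.sum(band ** 2) + 1e-12))

rng = np.random.default_rng(5)
ecg_like = (np.sin(2 * np.pi * 1.1 * np.arange(1024) / 128)
            + 0.05 * rng.normal(size=1024))   # synthetic quasi-periodic signal
loge = [log_energy(b) for b in haar_subbands(ecg_like, levels=5)]
```

Because the Haar transform is orthonormal, the sub-band energies sum to the total signal energy, so the six LOGE values form a complete, non-redundant energy profile of the epoch.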


Subjects
Electrocardiography/methods, Hypertension/diagnosis, Wavelet Analysis, Adult, Aged, Aged 80 and Over, Algorithms, Female, Humans, Hypertension/physiopathology, Male, Middle Aged, Risk Assessment
19.
Comput Biol Med ; 115: 103446, 2019 12.
Article in English | MEDLINE | ID: mdl-31627019

ABSTRACT

Malignant arrhythmia can lead to sudden cardiac death (SCD). Shockable arrhythmia can be terminated with device-based electrical shock therapies. Ventricular tachycardia (VT) and ventricular fibrillation (VF) are responsive to electrical anti-tachycardia pacing therapy and defibrillation, which help to restore the normal electrical and mechanical function of the heart. In contrast, non-shockable arrhythmia such as asystole and bradycardia are not responsive to electric shock therapy. Distinguishing between shockable and non-shockable arrhythmia is an important diagnostic challenge with practical clinical relevance. It is difficult to accurately differentiate between these two types of arrhythmia by manual inspection of electrocardiogram (ECG) segments within the short time available before triggering the device for electrical therapy. Automated defibrillators are equipped with automatic shockable arrhythmia detection algorithms based on ECG morphological features, whose diagnostic performance may vary between machine models. In our work, we have designed a robust system using wavelet decomposition filter banks for extracting features from the ECG signal and then classifying them. We believe this method will improve the accuracy of discriminating between shockable and non-shockable arrhythmia compared with existing conventional algorithms. We used a novel three-channel orthogonal wavelet filter bank, which extracted features from ECG epochs of 2 s duration to distinguish between shockable and non-shockable arrhythmia. The fuzzy, Renyi, and sample entropies are extracted from the various wavelet coefficients and fed to a support vector machine (SVM) classifier for automated classification. We obtained an accuracy of 98.9%, and sensitivity and specificity of 99.08% and 97.11%, respectively, using 10-fold cross-validation. The area under the receiver operating characteristic curve was found to be 0.99, with an F1-score of 0.994. The system developed is more accurate than existing algorithms. Hence, the proposed system can be employed in automated defibrillators inside and outside hospitals for the emergency revival of patients suffering from SCD. These automated defibrillators can also be implanted inside the human body for automatic detection of potentially fatal shockable arrhythmia and delivery of an appropriate electric shock to the heart.
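Of the three entropies fed to the SVM, the Renyi entropy is the easiest to sketch. The histogram estimator, bin count, and the two synthetic segments below are illustrative, not the paper's exact pipeline; the point is that entropy separates a flat segment from a disorganized, spread-out one.

```python
import numpy as np

def renyi_entropy(x, alpha=2, bins=16):
    # Histogram-based Renyi entropy of order alpha (sketch):
    # H_a = log(sum p^a) / (1 - a), with p estimated from bin counts.
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / len(x)
    return float(np.log(np.sum(p ** alpha)) / (1 - alpha))

rng = np.random.default_rng(6)
flat_segment = np.zeros(1000)               # asystole-like: no variation
chaotic_segment = rng.uniform(-1, 1, 1000)  # disorganized amplitude spread

h_flat = renyi_entropy(flat_segment)
h_chaotic = renyi_entropy(chaotic_segment)
```

A constant segment concentrates all probability mass in one bin and scores zero entropy, while a widely spread segment approaches the maximum of log(bins); features of this kind are what the wavelet-domain classifier discriminates on.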


Subjects
Algorithms, Cardiac Arrhythmias/diagnosis, Cardiac Arrhythmias/physiopathology, Electrocardiography, Computer-Assisted Signal Processing, Humans
20.
Comput Biol Med ; 100: 100-113, 2018 09 01.
Article in English | MEDLINE | ID: mdl-29990643

ABSTRACT

Obstructive sleep apnea (OSA) is a sleep disorder caused by interruption of breathing that results in insufficient oxygen to the human body and brain. If OSA is detected and treated at an early stage, the possibility of severe health impairment can be mitigated. Therefore, an accurate automated OSA detection system is indispensable. Generally, OSA-based computer-aided diagnosis (CAD) systems employ multi-channel, multi-signal physiological signals. However, there is a great need for a low-power, portable OSA-CAD system based on a single-channel bio-signal which can be used at home. In this study, we propose a single-channel electrocardiogram (ECG) based OSA-CAD system using a new class of optimal biorthogonal antisymmetric wavelet filter banks (BAWFBs). In this class of filter bank, all filters are of even length. The filter bank design problem is transformed into a constrained optimization problem wherein the objective is to minimize either the frequency spread for a given time spread or the time spread for a given frequency spread. The optimization problem is formulated as a semi-definite programming (SDP) problem, in which the objective function (time spread or frequency spread) and the constraints of perfect reconstruction (PR) and zero moments (ZM) are incorporated in their time-domain matrix formulations. The global solution of the SDP is obtained using an interior-point algorithm. The newly designed BAWFB is used for the classification of OSA using ECG signals taken from PhysioNet's Apnea-ECG database. ECG segments of 1 min duration are decomposed into six wavelet sub-bands (WSBs) by employing the proposed BAWFB. Then, fuzzy entropy (FE) and log-energy (LE) features are computed from all six WSBs. The FE and LE features are classified into normal and OSA groups using a least squares support vector machine (LS-SVM) with a 35-fold cross-validation strategy. The proposed OSA detection model achieved average classification accuracy, sensitivity, specificity, and F-score of 90.11%, 90.87%, 88.88%, and 0.92, respectively. The performance of the model is found to be better than existing works detecting OSA using the same database. Thus, the proposed automated OSA detection system is accurate, cost-effective, and ready to be tested with a larger database.


Subjects
Computer-Aided Diagnosis, Electrocardiography, Computer-Assisted Signal Processing, Obstructive Sleep Apnea, Support Vector Machine, Humans, Obstructive Sleep Apnea/diagnosis, Obstructive Sleep Apnea/physiopathology