ABSTRACT
The most critical aspect of panorama generation is maintaining local semantic consistency. Objects in the captured images may be projected from different depths, so when the images are warped onto a unified canvas, pixels at the semantic boundaries of the different views become significantly misaligned. We propose two lightweight strategies to address this challenge efficiently. First, the original image is segmented into superpixels rather than regular grids so that the structure of each cell is preserved, and we propose effective cost functions to generate a warp matrix for each superpixel. The warp matrix varies progressively to yield a smooth projection, which contributes to a more faithful reconstruction of object structures. Second, to deal with artifacts introduced by stitching, we use a seam-line method tailored to superpixels. The algorithm takes into account the feature similarity of neighboring superpixels, including color difference, structure, and entropy, and it incorporates semantic information to avoid semantic misalignment. The optimal seam constrained by these cost functions is obtained under a graph model, and the resulting stitched images exhibit improved naturalness. We evaluate the algorithm extensively on common panorama-stitching datasets. Experimental results show that it effectively mitigates artifacts, preserves semantic completeness, and produces panoramic images whose subjective quality is superior to that of alternative methods.
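The abstract does not give the exact cost functions, but a minimal Python sketch of a per-superpixel seam cost of this kind, combining color difference, structure (gradient) difference, and entropy, might look as follows (all names and weights are illustrative assumptions, not the authors' implementation):

import numpy as np

def _entropy(gray, bins=32):
    # Shannon entropy of a gray-level histogram (values assumed in [0, 1]).
    hist, _ = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def superpixel_seam_cost(img_a, img_b, labels, sp_id, w=(1.0, 0.5, 0.25)):
    # Cost of cutting through superpixel sp_id in the overlap of two views.
    # img_a, img_b: HxWx3 float images in [0, 1]; labels: HxW superpixel map.
    mask = labels == sp_id
    a, b = img_a[mask], img_b[mask]
    color_cost = np.linalg.norm(a.mean(0) - b.mean(0))            # mean color difference
    ga = np.linalg.norm(np.gradient(img_a.mean(2)), axis=0)[mask]
    gb = np.linalg.norm(np.gradient(img_b.mean(2)), axis=0)[mask]
    struct_cost = np.abs(ga - gb).mean()                          # structure difference
    ent_cost = abs(_entropy(a.mean(1)) - _entropy(b.mean(1)))     # texture complexity
    return w[0] * color_cost + w[1] * struct_cost + w[2] * ent_cost

In a graph model such as the one described, a cost of this form would weight the edges between neighboring superpixels, and the optimal seam would then be found with a shortest-path or graph-cut solver.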
ABSTRACT
This study introduces a novel framework for applying the artifact subspace reconstruction (ASR) algorithm to single-channel electroencephalogram (EEG) data. ASR is known for its ability to remove artifacts such as eye blinks and movement but traditionally relies on multiple channels. Embedded ASR (E-ASR) addresses this by incorporating a dynamical embedding approach: an embedded matrix is created from the single-channel EEG data using delay vectors, ASR is applied to it, and the cleaned signal is then reconstructed. Data from four subjects with eyes open were collected using Fp1 and Fp2 electrodes via the CameraEEG android app. The E-ASR algorithm was evaluated using metrics such as relative root mean square error (RRMSE), correlation coefficient (CC), and average power ratio; the number of eye blinks with and without E-ASR was also estimated. On semi-simulated data, E-ASR achieved an RRMSE of 43.87% and a CC of 0.91, and it effectively reduced artifacts in real EEG data, with eye-blink counts validated against ground-truth video data. This framework shows potential for smartphone-based EEG applications in natural environments with minimal electrodes.
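The delay-embedding step is the part of E-ASR that can be sketched compactly. Below is a minimal Python illustration of building an embedded matrix from one channel and reconstructing the signal by averaging the delayed copies; the ASR call itself is a placeholder, not the authors' code:

import numpy as np

def embed(x, m=8, tau=1):
    # Stack m delayed copies of 1-D signal x into an (m, N-(m-1)*tau) matrix.
    n = len(x) - (m - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(m)])

def reconstruct(X, tau=1):
    # Average the delayed copies back into one channel (inverse of embed).
    m, n = X.shape
    out = np.zeros(n + (m - 1) * tau)
    cnt = np.zeros_like(out)
    for i in range(m):
        out[i * tau : i * tau + n] += X[i]
        cnt[i * tau : i * tau + n] += 1
    return out / cnt

x = np.random.randn(1000)        # stand-in single-channel EEG
X = embed(x, m=8)
X_clean = X                      # placeholder: apply ASR to X here
x_clean = reconstruct(X_clean)
assert np.allclose(x_clean, x)   # embed/reconstruct round-trip is exact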
Subjects
Algorithms, Artifacts, Blinking, Electroencephalography, Signal Processing, Computer-Assisted, Humans, Electroencephalography/methods, Blinking/physiology, Electrodes, Smartphone
ABSTRACT
Recent advances in image-sensor technology have paved the way for the proliferation of high-dynamic-range television (HDRTV). Consequently, demand has surged for converting standard-dynamic-range television (SDRTV) content to HDRTV, especially given the dearth of native HDRTV content. However, because SDRTV often carries video-encoding artifacts, SDRTV-to-HDRTV conversion tends to amplify them, reducing the visual quality of the output video. To solve this problem, this paper proposes a multi-frame content-aware mapping network (MCMN) that improves the conversion of low-quality SDRTV to high-quality HDRTV. Specifically, we exploit the temporal-spatial characteristics of video to design a content-aware temporal-spatial alignment module for the initial alignment of video features. In the feature-prior extraction stage, we propose a novel hybrid prior-extraction module covering cross-temporal priors as well as local and global spatial priors. Finally, we design a temporal-spatial transformation module to generate an improved tone-mapping result. From time to space and from local to global, our method makes full use of multi-frame information to perform inverse tone mapping of single-frame images while also better repairing coding artifacts.
ABSTRACT
Visualization of low-density tissue scaffolds made from hydrogels is important yet challenging in tissue engineering and regenerative medicine (TERM). For this, synchrotron radiation propagation-based imaging computed tomography (SR-PBI-CT) has great potential, but is limited due to the ring artifacts commonly observed in SR-PBI-CT images. To address this issue, this study focuses on the integration of SR-PBI-CT and helical acquisition mode (i.e. SR-PBI-HCT) to visualize hydrogel scaffolds. The influence of key imaging parameters on the image quality of hydrogel scaffolds was investigated, including the helical pitch (p), photon energy (E) and the number of acquisition projections per rotation/revolution (Np), and, on this basis, those parameters were optimized to improve image quality and to reduce noise level and artifacts. The results illustrate that SR-PBI-HCT imaging shows impressive advantages in avoiding ring artifacts with p = 1.5, E = 30 keV and Np = 500 for the visualization of hydrogel scaffolds in vitro. Furthermore, the results also demonstrate that hydrogel scaffolds can be visualized using SR-PBI-HCT with good contrast while at a low radiation dose, i.e. 342 mGy (voxel size of 26 µm, suitable for in vivo imaging). This paper presents a systematic study on hydrogel scaffold imaging using SR-PBI-HCT and the results reveal that SR-PBI-HCT is a powerful tool for visualizing and characterizing low-density scaffolds with a high image quality in vitro. This work represents a significant advance toward the non-invasive in vivo visualization and characterization of hydrogel scaffolds at a suitable radiation dose.
Subjects
Synchrotrons, Tissue Scaffolds, Tomography, X-Ray Computed/methods, Tissue Engineering/methods, Hydrogels
ABSTRACT
The goal of this study was to test a novel approach (iCanClean) to remove non-brain sources from scalp EEG data recorded in mobile conditions. We created an electrically conductive phantom head with 10 brain sources, 10 contaminating sources, scalp, and hair. We tested the ability of iCanClean to remove artifacts while preserving brain activity under six conditions: Brain, Brain + Eyes, Brain + Neck Muscles, Brain + Facial Muscles, Brain + Walking Motion, and Brain + All Artifacts. We compared iCanClean to three other methods: Artifact Subspace Reconstruction (ASR), Auto-CCA, and Adaptive Filtering. Before and after cleaning, we calculated a Data Quality Score (0-100%) based on the average correlation between brain sources and EEG channels. iCanClean consistently outperformed the other three methods, regardless of the type or number of artifacts present. The most striking result was for the condition with all artifacts simultaneously present: starting from a Data Quality Score of 15.7% before cleaning, the Brain + All Artifacts condition improved to 55.9% after iCanClean, but only to 27.6%, 27.2%, and 32.9% after ASR, Auto-CCA, and Adaptive Filtering, respectively. For context, the Brain condition scored 57.2% without cleaning (a reasonable target). We conclude that iCanClean can remove multiple artifact sources in real time and could facilitate human mobile brain-imaging studies with EEG.
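The Data Quality Score is described only as correlation-based; a plausible sketch (an assumption, since the exact definition is not given here) is the mean best absolute correlation between each ground-truth phantom source and the recorded channels:

import numpy as np

def data_quality_score(brain_sources, eeg):
    # brain_sources: (S, T) ground-truth source signals; eeg: (C, T) channels.
    scores = []
    for s in brain_sources:
        r = [abs(np.corrcoef(s, ch)[0, 1]) for ch in eeg]  # |r| against every channel
        scores.append(max(r))                              # best-matching channel
    return 100.0 * float(np.mean(scores))

sources = np.random.randn(10, 5000)
eeg = np.random.randn(32, 10) @ sources + 0.1 * np.random.randn(32, 5000)
print(f"DQS: {data_quality_score(sources, eeg):.1f}%")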
Subjects
Artifacts, Brain, Humans, Brain/diagnostic imaging, Brain/physiology, Electroencephalography/methods, Scalp, Algorithms, Facial Muscles, Signal Processing, Computer-Assisted
ABSTRACT
Motion artifacts hinder source-level analysis of mobile electroencephalography (EEG) data using independent component analysis (ICA). iCanClean is a novel cleaning algorithm that uses reference noise recordings to remove noisy EEG subspaces, but it has not been formally tested in a parameter sweep. The goal of this study was to test iCanClean's ability to improve the ICA decomposition of EEG data corrupted by walking motion artifacts. Our primary objective was to determine optimal settings and performance in a parameter sweep varying the window length and the r² cleaning aggressiveness. High-density EEG was recorded with 120 + 120 (dual-layer) EEG electrodes in young adults, high-functioning older adults, and low-functioning older adults. EEG data were decomposed by ICA after basic preprocessing and iCanClean. Components well localized as dipoles (residual variance < 15%) and with high brain probability (ICLabel > 50%) were marked as 'good'. We determined iCanClean's optimal window length and cleaning aggressiveness to be 4 s and r² = 0.65 for our data. At these settings, iCanClean improved the average number of good components from 8.4 to 13.2 (+57%). Good performance was maintained with reduced sets of noise channels (12.7, 12.2, and 12.0 good components for 64, 32, and 16 noise channels, respectively). Overall, iCanClean shows promise as an effective method for cleaning mobile EEG data.
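A parameter sweep of this shape is easy to express; the sketch below (Python, with stubs standing in for iCanClean, ICA, and the dipole/ICLabel metrics) grids over window length and r² and keeps the setting with the most 'good' components, using the paper's stated criteria:

import itertools
import numpy as np

def count_good_ics(residual_variance, brain_prob):
    # Criteria from the study: RV < 15% and ICLabel brain probability > 50%.
    rv, bp = np.asarray(residual_variance), np.asarray(brain_prob)
    return int(np.sum((rv < 0.15) & (bp > 0.50)))

def run_pipeline(eeg, noise, window_s, r2):
    # Stub for iCanClean + ICA + per-component metrics; returns fake RV / brain prob.
    rng = np.random.default_rng(int(window_s * 100 + r2 * 10))
    return rng.uniform(0, 0.4, 30), rng.uniform(0, 1, 30)

eeg, noise = None, None   # stand-ins for the real recordings
best = max(
    ((w, r2, count_good_ics(*run_pipeline(eeg, noise, w, r2)))
     for w, r2 in itertools.product([1, 2, 4, 8], [0.5, 0.65, 0.8])),
    key=lambda t: t[2],
)
print("best window (s), r2, good ICs:", best)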
Subjects
Brain, Electroencephalography, Young Adult, Humans, Aged, Electroencephalography/methods, Brain/diagnostic imaging, Head, Algorithms, Neuroimaging, Artifacts, Signal Processing, Computer-Assisted
ABSTRACT
Large differences in density among the components of scanned objects produce divergent strip artifacts or dark stripe artifacts in Industrial Computed Tomography (ICT) images, which can significantly distort the actual structure of the scanned objects. The presence of such artifacts seriously limits the practical effectiveness of ICT in defect detection and dimensional measurement. In this paper, a series of convolutional neural network models are designed and implemented on ICT image artifact-removal datasets prepared for this purpose. Our findings indicate that the receptive field (RF) and the spatial resolution of the network significantly impact the effectiveness of artifact removal. We therefore propose a dilated residual network for turbine-blade ICT image artifact removal (DRAR), which enlarges the RF of the network while maintaining spatial resolution, at only a slight increase in computational load. Extensive experiments demonstrate that DRAR achieves exceptional performance in artifact removal.
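As a hedged sketch (not the authors' DRAR code), a dilated residual block in PyTorch shows the core idea: dilation enlarges the receptive field while spatial resolution is preserved:

import torch
import torch.nn as nn

class DilatedResBlock(nn.Module):
    def __init__(self, ch=64, dilation=2):
        super().__init__()
        pad = dilation  # keeps H x W unchanged for 3x3 kernels
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=pad, dilation=dilation),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=pad, dilation=dilation),
        )

    def forward(self, x):
        return x + self.body(x)  # residual connection

x = torch.randn(1, 64, 128, 128)           # e.g. features of an ICT slice
y = DilatedResBlock(ch=64, dilation=2)(x)
assert y.shape == x.shape                   # resolution preserved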
Subjects
Artifacts, Image Processing, Computer-Assisted, Image Processing, Computer-Assisted/methods, Tomography, X-Ray Computed, Neural Networks, Computer
ABSTRACT
Optical Coherence Tomography Angiography (OCTA) has revolutionized non-invasive, high-resolution imaging of blood vessels. However, the challenge of tail artifacts in OCTA images persists. In response, we present the Tail Artifact Removal via Transmittance Effect Subtraction (TAR-TES) algorithm, which effectively mitigates these artifacts. Through a simple physics-based model, TAR-TES accounts for variations in transmittance within the shallow layers containing the vasculature, removing the tail artifacts that appear in deeper layers below the vessels. Comparative evaluations with alternative correction methods demonstrate that TAR-TES excels at eliminating these artifacts while preserving the essential integrity of the vasculature images. Crucially, the success of TAR-TES is closely linked to the precise adjustment of a weight constant, underlining the importance of per-dataset parameter optimization. In conclusion, TAR-TES is a powerful tool for enhancing OCTA image quality and reliability in both clinical and research settings, promising to reshape the way we visualize and analyze intricate vascular networks within biological tissues. Further validation across diverse datasets is essential to unlock the full potential of this physics-based solution.
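The exact TAR-TES model is not reproduced here, but the subtraction idea can be sketched: estimate the shadow cast by superficial vessels and remove a weighted version of it from the deeper en-face slab, with w playing the role of the per-dataset weight constant (everything below is an illustrative assumption):

import numpy as np

def tar_tes_like(deep_slab, superficial_slab, w=0.6):
    # deep_slab, superficial_slab: 2-D en-face angiograms (float, >= 0).
    shadow = superficial_slab / (superficial_slab.max() + 1e-9)  # crude transmittance proxy
    corrected = deep_slab - w * shadow * deep_slab               # suppress tails under vessels
    return np.clip(corrected, 0, None)

deep, sup = np.random.rand(64, 64), np.random.rand(64, 64)
print(tar_tes_like(deep, sup, w=0.6).shape)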
Subjects
Artifacts, Tomography, Optical Coherence, Reproducibility of Results, Tomography, Optical Coherence/methods, Algorithms
ABSTRACT
Simultaneous multi-slice (multiband) accelerated functional magnetic resonance imaging (fMRI) provides dramatically improved temporal and spatial resolution for resting-state functional connectivity (RSFC) studies of the human brain in health and disease. However, multiband acceleration also poses unique challenges for denoising subject-motion-induced data artifacts, whose presence is a major confound in RSFC research that substantively diminishes reliability and reproducibility. We comprehensively evaluated existing and novel approaches to volume-censoring-based motion denoising in the Human Connectome Project (HCP) dataset. We show that the assumptions underlying common metrics for evaluating motion-denoising pipelines, especially those based on quality control-functional connectivity (QC-FC) correlations and on differences between high- and low-motion participants, are problematic: these metrics appear inappropriate in their current widespread use as indicators of comparative pipeline performance and as targets for investigators tuning pipelines for their own datasets. We further develop two new quantitative metrics that are agnostic to QC-FC correlations and to other measures that rely on the null assumption that no true relationships exist between trait measures of subject motion and functional connectivity, and we demonstrate their use as benchmarks for comparing volume-censoring methods. Finally, we develop and validate quantitative methods for determining dataset-specific optimal volume-censoring parameters prior to the final analysis of a dataset, and we provide straightforward recommendations and code for investigators to apply this optimized approach to their own RSFC datasets.
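For readers unfamiliar with volume censoring, the basic operation is simple: compute framewise displacement (FD) from the realignment parameters and drop volumes above a threshold before computing connectivity. The sketch below uses the standard FD definition; the 0.2 mm threshold is only a common illustrative value, and the paper's point is precisely that such parameters should be optimized per dataset:

import numpy as np

def framewise_displacement(motion_params, head_radius_mm=50.0):
    # motion_params: (T, 6) = 3 translations (mm) + 3 rotations (rad).
    d = np.abs(np.diff(motion_params, axis=0))
    d[:, 3:] *= head_radius_mm                 # rotations -> arc length in mm
    return np.concatenate([[0.0], d.sum(axis=1)])

def censor(timeseries, motion_params, fd_thresh_mm=0.2):
    # timeseries: (T, regions); returns data with high-motion volumes removed.
    keep = framewise_displacement(motion_params) <= fd_thresh_mm
    return timeseries[keep], keep

ts = np.random.randn(200, 100)
mp = np.cumsum(np.random.randn(200, 6) * 0.01, axis=0)
clean_ts, keep = censor(ts, mp)
print(f"kept {keep.sum()} of {len(keep)} volumes")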
Subjects
Brain/diagnostic imaging, Brain/physiology, Connectome/methods, Magnetic Resonance Imaging/methods, Adult, Artifacts, Connectome/standards, Head Movements/physiology, Humans, Magnetic Resonance Imaging/standards
ABSTRACT
Electroencephalography (EEG) signals are often contaminated with artifacts. It is imperative to develop a practical and reliable artifact removal method to prevent the misinterpretation of neural signals and the underperformance of brain-computer interfaces. Based on the U-Net architecture, we developed a new artifact removal model, IC-U-Net, for removing pervasive EEG artifacts and reconstructing brain signals. IC-U-Net was trained using mixtures of brain and non-brain components decomposed by independent component analysis. It uses an ensemble of loss functions to model complex signal fluctuations in EEG recordings. The effectiveness of the proposed method in recovering brain activities and removing various artifacts (e.g., eye blinks/movements, muscle activities, and line/channel noise) was demonstrated in a simulation study and four real-world EEG experiments. IC-U-Net can reconstruct a multi-channel EEG signal and is applicable to most artifact types, offering a promising end-to-end solution for automatically removing artifacts from EEG recordings. It also meets the increasing need to image natural brain dynamics in a mobile setting. The code and pre-trained IC-U-Net model are available at https://github.com/roseDwayane/AIEEG.
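The published model and weights are in the linked repository; as a hedged illustration of what an "ensemble of loss functions" can look like for signal reconstruction, the sketch below sums time-domain, spectral, and correlation terms (the specific terms and weights are assumptions, not the IC-U-Net recipe):

import torch

def ensemble_loss(pred, target, w=(1.0, 0.5, 0.5)):
    # pred, target: (batch, channels, time) tensors.
    mse = torch.mean((pred - target) ** 2)                    # time-domain fit
    spec = torch.mean((torch.fft.rfft(pred).abs()
                       - torch.fft.rfft(target).abs()) ** 2)  # spectral fit
    p = pred - pred.mean(-1, keepdim=True)
    t = target - target.mean(-1, keepdim=True)
    cc = (p * t).sum(-1) / (p.norm(dim=-1) * t.norm(dim=-1) + 1e-8)
    return w[0] * mse + w[1] * spec + w[2] * (1 - cc).mean()  # reward correlation

x = torch.randn(2, 30, 512)
print(ensemble_loss(x + 0.1 * torch.randn_like(x), x).item())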
Subjects
Artifacts, Signal Processing, Computer-Assisted, Humans, Eye Movements, Blinking, Electroencephalography/methods, Algorithms
ABSTRACT
Removing power-line noise and other frequency-specific artifacts from electrophysiological data without affecting neural signals remains a challenging task. Recently, an approach was introduced that combines spectral and spatial filtering to effectively remove line noise: Zapline. This algorithm, however, requires manual selection of the noise frequency and of the number of spatial components to remove during spatial filtering. Moreover, it assumes that the noise frequency and spatial topography are stable over time, which is often not warranted. To overcome these issues, we introduce Zapline-plus, which allows adaptive and automatic removal of frequency-specific noise artifacts from M/EEG (magneto-/electroencephalography) and LFP data. Our extension first segments the data into periods (chunks) in which the noise is spatially stable. Then, for each chunk, it searches for peaks in the power spectrum and applies Zapline. The exact noise frequency around the detected target frequency is also determined separately for every chunk, to accommodate fluctuations of the peak noise frequency over time. The number of components to be removed by Zapline is determined automatically using an outlier-detection algorithm. Finally, the frequency spectrum after cleaning is analyzed for suboptimal cleaning, and parameters are adapted accordingly, if necessary, before the process is re-run. The software creates a detailed plot for monitoring the cleaning. We highlight the efficacy of the different features of our algorithm by applying it to four openly available data sets: two EEG sets containing both stationary and mobile task conditions, and two magnetoencephalography sets containing strong line noise.
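The adaptive flow can be sketched end to end in a few lines. In the sketch below (Python/SciPy), the per-chunk spectral peak search mirrors the description above, while a simple IIR notch stands in for Zapline's spatial-filtering step so that the example runs; it is not the Zapline-plus implementation:

import numpy as np
from scipy.signal import welch, iirnotch, filtfilt

def detect_peak_hz(x, fs, lo=45, hi=65):
    # Strongest line-noise candidate in the channel-averaged spectrum.
    f, p = welch(x.mean(0), fs=fs, nperseg=min(x.shape[1], fs * 4))
    band = (f >= lo) & (f <= hi)
    return f[band][np.argmax(p[band])]

def clean_chunk(x, fs, f0):
    b, a = iirnotch(w0=f0, Q=30, fs=fs)   # stand-in for the DSS spatial filter
    return filtfilt(b, a, x, axis=1)

fs, n_ch = 250, 32
eeg = np.random.randn(n_ch, fs * 60)
eeg += 0.5 * np.sin(2 * np.pi * 50.2 * np.arange(eeg.shape[1]) / fs)  # off-nominal line noise
chunks = np.array_split(eeg, 6, axis=1)   # "spatially stable" periods (simplified)
cleaned = np.hstack([clean_chunk(c, fs, detect_peak_hz(c, fs)) for c in chunks])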
Subjects
Artifacts, Signal Processing, Computer-Assisted, Algorithms, Electroencephalography, Humans, Magnetoencephalography
ABSTRACT
Tofu is a toolkit for processing large amounts of images and for tomographic reconstruction. Complex image-processing tasks are organized as workflows of individual processing steps. The toolkit can reconstruct parallel- and cone-beam as well as tomographic and laminographic geometries. Many pre- and post-processing algorithms needed for high-quality 3D reconstruction are available, e.g. phase retrieval, ring removal, and de-noising. Tofu is optimized for stand-alone GPU workstations, on which it achieves reconstruction speed comparable to that of costly CPU clusters. It automatically utilizes all GPUs in the system and generates 3D reconstruction code with a minimal number of instructions for the given input geometry (parallel/cone beam, tomography/laminography), hence yielding optimal run-time performance. To improve accessibility for researchers with no previous programming knowledge, tofu contains graphical user interfaces for both the optimization of 3D reconstruction parameters and the batch processing of data with pre-configured workflows for typical computed-tomography reconstruction. The toolkit is open source, and extensive documentation is available for both end-users and developers. Thanks to these features, tofu is suitable both for expert users with specialized image-processing needs (e.g. when dealing with data from custom-built computed-tomography scanners) and for application-specific end-users who just need to reconstruct their data on off-the-shelf hardware.
Subjects
Soy Foods, Algorithms, Image Processing, Computer-Assisted/methods, Phantoms, Imaging, Tomography, Tomography, X-Ray Computed
ABSTRACT
BACKGROUND: Electrocardiogram (ECG) signal conditioning is a vital step in the ECG signal-processing chain that ensures effective noise removal and accurate feature extraction. OBJECTIVE: This study evaluates the performance of the FDA 510(k)-cleared HeartKey Signal Conditioning and QRS peak-detection algorithms on a range of annotated public and proprietary ECG databases (HeartKey is a UK Registered Trademark of B-Secur Ltd). METHODS: Seven hundred fifty-one raw ECG files from a broad range of use cases were individually passed through the HeartKey signal-processing engine. The algorithms include several advanced filtering steps that enable significant noise removal and accurate identification of the QRS complex. QRS detection statistics were generated against the annotated ECG files. RESULTS: HeartKey displayed robust performance across 14 ECG databases (seven public, seven proprietary) covering a range of healthy and unhealthy patient data, wet and dry electrode types, various lead configurations, hardware sources, and stationary/ambulatory recordings from clinical and non-clinical settings. Over the NSR, MIT-BIH, AHA, and MIT-AF public databases, average QRS Se and PPV values of 98.90% and 99.08% were achieved. Adaptable performance (Se 93.26%, PPV 90.53%) was similarly observed on the challenging NST database. Crucially, HeartKey's performance translated effectively to the dry-electrode space, with an average QRS Se of 99.22% and PPV of 99.00% observed over eight dry-electrode databases representing various use cases, including two challenging motion-based collection protocols. CONCLUSION: HeartKey demonstrated robust signal conditioning and QRS detection performance across the broad range of tested ECG signals. It should be emphasized that the algorithms were in no way altered or trained to optimize performance on a given database, meaning that HeartKey is potentially a universal solution capable of maintaining a high level of performance across a broad range of clinical and everyday use cases.
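Se and PPV against beat annotations are conventionally scored with a tolerance window; the sketch below uses a common +/- 75 ms rule (an assumption about the evaluation convention, not HeartKey internals):

import numpy as np

def qrs_se_ppv(ann_s, det_s, tol_s=0.075):
    # ann_s: annotated beat times (s); det_s: detected beat times (s).
    ann, det = np.asarray(ann_s, float), np.asarray(det_s, float)
    used = np.zeros(len(det), bool)
    tp = 0
    for t in ann:
        j = np.argmin(np.abs(det - t)) if len(det) else -1
        if j >= 0 and not used[j] and abs(det[j] - t) <= tol_s:
            used[j] = True
            tp += 1
    se = tp / len(ann) if len(ann) else 0.0    # sensitivity
    ppv = tp / len(det) if len(det) else 0.0   # positive predictive value
    return se, ppv

print(qrs_se_ppv([1.0, 2.0, 3.0], [1.01, 2.2, 3.0, 3.5]))  # (0.667, 0.5)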
Subjects
Electrocardiography, Signal Processing, Computer-Assisted, Algorithms, Databases, Factual, Electrocardiography/methods, Humans
ABSTRACT
BACKGROUND: The Medtronic "Percept" is the first FDA-approved deep brain stimulation (DBS) device with sensing capabilities during active stimulation. Its real-world signal-recording properties have yet to be fully described. OBJECTIVE: This study details three sources of artifact (and potential mitigations) in local field potential (LFP) signals collected by the Percept and assesses the potential impact of artifact on the future development of adaptive DBS (aDBS) using this device. METHODS: LFP signals were collected from 7 subjects in both experimental and clinical settings. The presence of artifacts and their effect on the spectral content of neural signals were evaluated in both the stimulation ON and OFF states using three distinct offline artifact removal techniques. RESULTS: Template subtraction successfully removed multiple sources of artifact, including (1) electrocardiogram (ECG), (2) nonphysiologic polyphasic artifacts, and (3) ramping-related artifacts seen when changing stimulation amplitudes. ECG removal from stimulation ON (at 0 mA) signals resulted in spectral shapes similar to OFF stimulation spectra (averaged difference in normalized power in theta, alpha, and beta bands ≤3.5%). ECG removal using singular value decomposition was similarly successful, though required subjective researcher input. QRS interpolation produced similar recovery of beta-band signal but resulted in residual low-frequency artifact. CONCLUSIONS: Artifacts present when stimulation is enabled notably affected the spectral properties of sensed signals using the Percept. Multiple discrete artifacts could be successfully removed offline using an automated template subtraction method. The presence of unrejected artifact likely influences online power estimates, with the potential to affect aDBS algorithm performance.
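Template subtraction for cardiac artifact is a standard technique; a minimal sketch (peak detection and window length are simplified assumptions, not the Percept-specific method) averages a window around each detected R peak and subtracts it at every occurrence:

import numpy as np
from scipy.signal import find_peaks

def remove_ecg_template(lfp, fs, half_win_s=0.3):
    hw = int(half_win_s * fs)
    peaks, _ = find_peaks(lfp, distance=int(0.4 * fs),
                          height=3 * np.std(lfp))   # crude R-peak proxy
    peaks = peaks[(peaks > hw) & (peaks < len(lfp) - hw)]
    if len(peaks) == 0:
        return lfp.copy()
    template = np.mean([lfp[p - hw : p + hw] for p in peaks], axis=0)
    out = lfp.copy()
    for p in peaks:
        out[p - hw : p + hw] -= template            # subtract at each beat
    return out

fs = 250
lfp = np.random.randn(fs * 10) * 0.1
lfp[::fs] += 2.0                                    # fake once-per-second beats
cleaned = remove_ecg_template(lfp, fs)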
Subjects
Artifacts, Deep Brain Stimulation, Algorithms, Brain/physiology, Deep Brain Stimulation/methods, Humans
ABSTRACT
The electroencephalography (EEG)-based motor imagery (MI) paradigm is one of the most studied technologies for Brain-Computer Interface (BCI) development. Still, the low Signal-to-Noise Ratio (SNR) poses a challenge when constructing EEG-based BCI systems. Moreover, non-stationary and nonlinear signal behavior, low spatial resolution, and inter- and intra-subject variability hamper the extraction of discriminant features, and subjects with poor motor skills have particular difficulty practicing MI tasks in low-SNR scenarios. Here, we propose a subject-dependent preprocessing approach that applies the well-known Surface Laplacian Filtering and Independent Component Analysis algorithms to remove signal artifacts based on MI performance. In addition, power- and phase-based functional-connectivity measures are studied to extract relevant and interpretable patterns and to identify subjects exhibiting BCI inefficiency. As a result, our proposal, Subject-dependent Artifact Removal (SD-AR), improves MI classification performance in subjects with poor motor skills: electrooculography and volume-conduction EEG artifacts are mitigated within a functional-connectivity feature-extraction strategy, which favors the classification performance of a straightforward linear classifier.
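Among the phase-based functional-connectivity measures mentioned, the phase-locking value (PLV) is a representative example and can be sketched directly (this is the generic PLV, not the paper's full SD-AR pipeline):

import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    # Phase-locking value between two equal-length 1-D signals, in [0, 1].
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

fs = 250
t = np.arange(0, 4, 1 / fs)
a = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(len(t))
b = np.sin(2 * np.pi * 10 * t + 0.5) + 0.3 * np.random.randn(len(t))
print(f"PLV: {plv(a, b):.2f}")   # near 1 for phase-locked 10 Hz components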
Subjects
Artifacts, Brain-Computer Interfaces, Algorithms, Electroencephalography, Humans, Imagery, Psychotherapy, Signal Processing, Computer-Assisted
ABSTRACT
With the development of portable EEG acquisition systems, collected EEG has gradually shifted from multi-channel to few-channel or single-channel recordings, making artifact removal from single-channel EEG signals especially important. For single-channel EEG, the current mainstream approach combines a signal-decomposition method with a blind source separation (BSS) method. Combinations of empirical mode decomposition (EMD), or its derivatives, with ICA have been used for single-channel EEG artifact removal. However, EMD is prone to mode mixing and lacks a rigorous theoretical basis, so its decomposition quality falls short of variational mode decomposition (VMD). Within ICA, implementations based on higher-order statistics are widely used, but they are less effective than implementations based on second-order statistics at handling EMG artifacts. Therefore, targeting the main artifacts in single-channel EEG signals, namely EOG and EMG artifacts, this paper proposes an artifact-removal method combining VMD and second-order blind identification (SOBI). Semi-simulated experiments show that, compared with the existing EEMD-SOBI method, this method removes EOG and EMG artifacts more effectively while preserving useful information to the greatest extent.
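The pipeline shape can be sketched as follows, assuming the third-party vmdpy package for VMD; SOBI has no standard SciPy implementation, so an identity unmixing matrix marks where it would run (this is a structural sketch, not the authors' code):

import numpy as np
from vmdpy import VMD   # pip install vmdpy (assumed VMD implementation)

fs = 250
x = np.random.randn(fs * 10)   # stand-in single-channel EEG (even length)

# 1) VMD: split the channel into K narrow-band modes.
K = 6
u, u_hat, omega = VMD(x, 2000, 0.0, K, 0, 1, 1e-7)  # alpha, tau, K, DC, init, tol

# 2) BSS on the K modes: estimate sources, zero the rows flagged as EOG/EMG,
#    then invert and re-mix the modes into one cleaned channel.
W = np.eye(K)                   # placeholder for the SOBI unmixing matrix
S = W @ u
artifact_rows = []              # indices flagged as artifact sources
S[artifact_rows] = 0.0
x_clean = (np.linalg.inv(W) @ S).sum(axis=0)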
Subjects
Artifacts, Signal Processing, Computer-Assisted, Algorithms, Computer Simulation, Electroencephalography/methods
ABSTRACT
In this paper, we propose a relatively noninvasive system that can automatically assess the impact of traffic conditions on drivers. We analyze physiological signals recorded from a set of individuals while driving in a simulated urban scenario under two different traffic conditions, i.e., with and without traffic. The experiments were carried out in a laboratory at the University of Udine, employing a driving simulator equipped with a moving platform. We acquired two Skin Potential Response (SPR) signals from the hands of the drivers and an electrocardiogram (ECG) signal from their chest. In the proposed scheme, the SPR signals are processed through a Motion Artifact (MA) removal algorithm so that motion artifacts arising during the drive are reduced. An analysis considering the scalogram of the single cleaned SPR signal is presented. This signal, along with the ECG, is then fed to various Machine Learning (ML) algorithms: statistical features are extracted from each signal segment and, after being analyzed by a binary ML model, are labeled as corresponding to a stressful situation or not. Our results confirm the applicability of the proposed approach to identifying stress in the two scenarios, in accordance with our findings from the SPR signal scalograms.
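The final stage (statistical features per segment plus a binary classifier) can be sketched with scikit-learn; the feature set, classifier, and synthetic data below are illustrative assumptions:

import numpy as np
from sklearn.linear_model import LogisticRegression

def segment_features(seg):
    # A few statistics of one SPR/ECG segment (1-D array).
    return [seg.mean(), seg.std(), seg.min(), seg.max(),
            np.mean(np.abs(np.diff(seg)))]   # mean absolute first difference

rng = np.random.default_rng(0)
calm = [rng.normal(0, 1, 500) for _ in range(40)]       # "without traffic"
stress = [rng.normal(0.5, 2, 500) for _ in range(40)]   # "with traffic"
X = np.array([segment_features(s) for s in calm + stress])
y = np.array([0] * 40 + [1] * 40)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))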
Subjects
Automobile Driving, Algorithms, Artifacts, Electrocardiography, Humans, Machine Learning
ABSTRACT
Breathing rate is considered one of the fundamental vital signs and a highly informative indicator of physiological state. Given that monitoring heart activity is less complex than monitoring breathing, a variety of algorithms have been developed to estimate breathing activity from heart activity. However, estimating breathing rate from heart activity outside laboratory conditions remains a challenge, and the challenge is even greater when new wearable devices with novel sensor placements are used. In this paper, we present a novel algorithm for breathing-rate estimation from photoplethysmography (PPG) data acquired from a head-worn virtual-reality mask with a PPG sensor placed on the forehead. The algorithm is based on advanced signal processing and machine learning techniques and includes a novel quality-assessment and motion-artifact removal procedure. The proposed algorithm is evaluated and compared to existing approaches from the related work using two separate datasets containing data from a total of 37 subjects. Numerous experiments show that the proposed algorithm outperforms the compared algorithms, achieving a mean absolute error of 1.38 breaths per minute and a Pearson's correlation coefficient of 0.86. These results indicate that reliable estimation of breathing rate is possible from PPG data acquired with a head-worn device.
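A classic baseline against which such algorithms are compared is band-pass filtering to the respiratory band and counting oscillations; the sketch below shows that baseline only (the published algorithm adds the quality-assessment and motion-artifact removal steps, which are omitted here):

import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def breathing_rate_bpm(ppg, fs):
    b, a = butter(2, [0.1, 0.5], btype="band", fs=fs)    # respiratory band
    resp = filtfilt(b, a, ppg)
    peaks, _ = find_peaks(resp, distance=int(fs * 1.5))  # >= 1.5 s between breaths
    return 60.0 * len(peaks) / (len(ppg) / fs)

fs = 64
t = np.arange(0, 60, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.4 * np.sin(2 * np.pi * 0.25 * t)  # pulse + breathing
print(f"{breathing_rate_bpm(ppg, fs):.1f} breaths/min")  # ~15 expected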
Subjects
Photoplethysmography, Respiratory Rate, Heart Rate/physiology, Humans, Machine Learning, Photoplethysmography/methods, Signal Processing, Computer-Assisted
ABSTRACT
BACKGROUND AND OBJECTIVE: Since low-dose computed tomography (LDCT) images typically have higher noise that may affect the accuracy of disease diagnosis, the objective of this study is to develop and evaluate a new artifact-assisted feature fusion attention (AAFFA) network to extract and reduce image artifacts and noise in LDCT images. METHODS: In the AAFFA network, a feature fusion attention block is constructed for local multi-scale artifact-feature extraction and progressive coarse-to-fine fusion. A multi-level fusion architecture based on skip connections and attention modules is also introduced for artifact-feature extraction. Specifically, long-range skip connections enhance and fuse artifact features at different depth levels; the fused shallower features then enter channel attention for better extraction of artifact features, while the fused deeper features are sent to pixel attention to focus on artifact pixel information. In addition, an artifact channel is designed to provide rich artifact features and to guide the extraction of noise and artifact features. The AAPM LDCT Challenge dataset is used to train and test the network, and performance is evaluated by visual observation and by quantitative metrics including peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and visual information fidelity (VIF). RESULTS: The AAFFA network improves the average PSNR/SSIM/VIF values on AAPM LDCT images from 43.4961/0.9595/0.3926 to 48.2513/0.9859/0.4589, respectively. CONCLUSIONS: The proposed AAFFA network effectively reduces noise and artifacts while preserving object edges; assessments of visual quality and quantitative indices demonstrate significant improvement over other image-denoising methods.
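As a hedged illustration of the channel-attention step (not the authors' code), a squeeze-and-excitation style module re-weights feature channels using global context:

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                  # squeeze: global context per channel
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)                         # re-weight channels

x = torch.randn(1, 64, 64, 64)                        # LDCT feature map
print(ChannelAttention(64)(x).shape)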
Subjects
Artifacts, Image Processing, Computer-Assisted, Tomography, X-Ray Computed, Humans, Image Processing, Computer-Assisted/methods, Signal-To-Noise Ratio, Tomography, X-Ray Computed/methods
ABSTRACT
The non-invasive brain-computer interface (BCI) has gradually become a focus of current research and has been applied in many fields, such as mental-disorder detection and physiological monitoring. However, the electroencephalography (EEG) signals required by non-invasive BCIs are easily contaminated by electrooculographic (EOG) artifacts, which seriously hampers the analysis of EEG signals. This paper therefore proposes an improved independent component analysis method combined with a frequency filter, which automatically recognizes artifact components based on a dual threshold of correlation coefficient and kurtosis. In this method, the frequency difference between EOG and EEG is exploited to remove the EOG content of each artifact component through a frequency filter, so that more EEG information is retained. Experimental results on public datasets and our laboratory data show that the method effectively improves EOG artifact removal while reducing the loss of EEG information, which should help the adoption of non-invasive BCIs.
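The dual-threshold idea can be sketched directly: flag components whose kurtosis is high (spiky, blink-like) or whose correlation with an EOG reference is strong; the threshold values below are illustrative, not the paper's tuned settings:

import numpy as np
from scipy.stats import kurtosis, pearsonr

def flag_artifact_ics(ics, eog_ref, k_thresh=5.0, r_thresh=0.7):
    # ics: (n_components, T) ICA activations; eog_ref: (T,) EOG/frontal channel.
    flags = []
    for i, ic in enumerate(ics):
        k = kurtosis(ic)                    # excess kurtosis
        r = abs(pearsonr(ic, eog_ref)[0])
        if k > k_thresh or r > r_thresh:
            flags.append(i)
    return flags

rng = np.random.default_rng(1)
blink = np.zeros(1000)
blink[::100] = 5.0                          # spiky blink-like component
ics = np.vstack([blink + 0.1 * rng.standard_normal(1000),
                 rng.standard_normal(1000)])
print(flag_artifact_ics(ics, eog_ref=blink))  # -> [0]; that component would then be
                                              # frequency-filtered rather than discarded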