ABSTRACT
Spectral computed tomography (CT) is a versatile imaging technique widely used in industry, medicine, and scientific research. Using photon counting detector (PCD) technology, it captures the energy-dependent X-ray attenuation throughout an object. However, a major drawback of spectral CT is increased noise, because the achievable photon count per channel drops as more energy channels are used. This often complicates quantitative material identification, a major application of the technology. In this study, we investigate the Noise2Inverse image denoising approach for noise removal in spectral CT. Our unsupervised deep learning-based model uses a multi-dimensional U-Net paired with a block-based training approach modified to add energy-channel regularization. We conducted experiments on two simulated spectral CT phantoms, each with a unique shape and material composition, and on a real scan of a biological sample containing a characteristic K-edge. Measured by peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) on the simulated data and contrast-to-noise ratio (CNR) on the real data, our approach not only outperforms previously used methods, namely the unsupervised Low2High method and total variation-constrained iterative reconstruction, but also requires no complex parameter tuning.
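To illustrate the core idea, the following is a minimal sketch of a Noise2Inverse-style training step, assuming PyTorch, an arbitrary image-to-image network `unet`, and hypothetical tensors `recon_split_a` / `recon_split_b` holding reconstructions of the same slice from two disjoint halves of the projection data; the multi-dimensional U-Net, block-based training, and energy-channel regularization of the paper are not reproduced here.

```python
# Minimal Noise2Inverse-style training step (illustrative sketch, not the
# paper's implementation). Assumes PyTorch; `unet` is any image-to-image
# network; the two inputs are reconstructions of the same slice computed
# from disjoint halves of the projections, shape (N, C, H, W) with C
# energy channels, so their noise realizations are independent.
import torch
import torch.nn.functional as F

def noise2inverse_step(unet, recon_split_a, recon_split_b, optimizer):
    unet.train()
    optimizer.zero_grad()
    pred = unet(recon_split_a)                 # denoise the first split
    loss = F.mse_loss(pred, recon_split_b)     # target: the complementary split
    loss.backward()
    optimizer.step()
    return loss.item()

# At inference time the trained network is applied to reconstructions of the
# individual splits (or the full data) and the outputs are averaged.
```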
ABSTRACT
Impedance cardiography (ICG) is essential for evaluating cardiac function in patients with cardiovascular diseases. Because ICG measurements are easily disturbed by motion artifacts, this paper introduces a denoising method based on two-step spectral ensemble empirical mode decomposition (EEMD) and canonical correlation analysis (CCA). First, spectral EEMD-CCA is performed between the ICG and motion signals and between the electrocardiogram (ECG) and motion signals; in each case, the component with the strongest correlation coefficient is set to zero to suppress the main motion artifacts. Second, the resulting ECG and ICG signals undergo a second spectral EEMD-CCA stage for further denoising. Finally, the ICG signal is reconstructed from the shared components. The method was tested on 30 subjects, and the results show that the quality of the ICG signal is greatly improved after the proposed denoising, which could support the subsequent diagnosis and analysis of cardiovascular diseases.
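As an illustration of one EEMD-CCA stage, here is a simplified time-domain sketch, assuming the PyEMD (EMD-signal) package and scikit-learn's CCA; the spectral-domain processing and the exact selection rule used in the paper are only approximated by zeroing the IMF most correlated with the motion reference.

```python
# Simplified sketch of one EEMD-CCA denoising stage (illustration only; the
# paper works in the spectral domain and uses two such stages). Assumes the
# PyEMD (EMD-signal) package and scikit-learn. `signal` and `reference` are
# 1-D numpy arrays of equal length (e.g. ICG and the motion signal).
import numpy as np
from PyEMD import EEMD
from sklearn.cross_decomposition import CCA

def eemd_cca_stage(signal, reference):
    imfs = EEMD().eemd(signal)                       # intrinsic mode functions
    corrs = []
    for imf in imfs:
        u, v = CCA(n_components=1).fit_transform(imf.reshape(-1, 1),
                                                 reference.reshape(-1, 1))
        corrs.append(abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1]))
    imfs[int(np.argmax(corrs))] = 0                  # drop the artifact-dominated IMF
    return imfs.sum(axis=0)                          # reconstruct the cleaned signal
```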
Subject(s)
Algorithms, Artifacts, Impedance Cardiography, Cardiovascular Diseases, Electrocardiography, Computer-Assisted Signal Processing, Humans, Impedance Cardiography/methods, Electrocardiography/methods, Cardiovascular Diseases/physiopathology, Cardiovascular Diseases/diagnosis, Motion (Physics)
ABSTRACT
Congenital heart defects (CHD) are among the serious problems that can arise during pregnancy. Early CHD detection reduces mortality and morbidity but is hampered by the relatively low detection rate (around 60%) of current screening technology. The detection rate could be increased by supplementing ultrasound imaging with fetal ultrasound image evaluation (FUSI) using deep learning techniques. Non-invasive fetal ultrasound imaging therefore has clear potential in the diagnosis of CHD and should be considered in addition to fetal echocardiography. This review highlights cutting-edge technologies for detecting CHD in ultrasound images, covering pre-processing, localization, segmentation, and classification. Existing pre-processing techniques include spatial-domain filters, non-linear mean filters, transform-domain filters, and denoising methods based on convolutional neural networks (CNN); segmentation techniques include thresholding-based, region-growing-based, and edge-detection methods, artificial neural network (ANN)-based segmentation, and other non-deep-learning and deep-learning approaches. The paper also suggests future research directions for improving current methodologies.
Subject(s)
Deep Learning, Congenital Heart Defects, Neural Networks (Computer), Prenatal Ultrasonography, Humans, Congenital Heart Defects/diagnostic imaging, Prenatal Ultrasonography/methods, Pregnancy, Female, Computer-Assisted Image Processing/methods, Echocardiography/methods, Algorithms, Fetal Heart/diagnostic imaging, Fetus/diagnostic imaging
ABSTRACT
PURPOSE: This work proposes a convolutional neural network (CNN)-based method, trained with images acquired from electron density phantoms, to reduce quantum noise in coronary artery calcium (CAC) scans reconstructed with slice thicknesses below 3 mm. METHODS: A DenseNet model was used to estimate quantum noise for CAC scans reconstructed with slice thicknesses of 0.5, 1.0 and 1.5 mm. Training data were acquired using electron density phantoms of three different sizes. The label images of the CNN model were real noise maps, while the input images were pseudo noise maps. Denoising was performed by subtracting the CNN output images from the thin-slice CAC scans. The efficacy of the proposed method was verified through both a phantom study and a patient study. RESULTS: In the phantom study, the proposed method effectively reduced quantum noise in CAC scans reconstructed with 1.5-mm slice thickness without causing significant texture change or variation in HU values. In the patient study, calcifications were clearer on the denoised CAC scans reconstructed with slice thicknesses of 0.5, 1.0 and 1.5 mm than on 3-mm slice images, and no over-smoothing was observed in the denoised 1.5-mm scans. CONCLUSION: Our results demonstrate that electron density phantoms can be used to generate training data for the proposed CNN-based denoising method to reduce quantum noise in CAC scans reconstructed with 1.5-mm slice thickness. Because an anthropomorphic phantom is not required, the method could make image denoising more practical in routine clinical practice.
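A minimal sketch of the subtraction step is shown below, assuming PyTorch, an already trained DenseNet-style model `noise_cnn`, and a hypothetical pseudo-noise-map input prepared as in the paper; the exact pre-processing is not reproduced.

```python
# Sketch of the denoising step described above (illustration only). Assumes
# PyTorch; `noise_cnn` is the trained model mapping a pseudo noise map to an
# estimated real (quantum) noise map; `thin_slice` and `pseudo_noise_map` are
# 2-D tensors (H, W) in HU for a CAC scan reconstructed at 0.5-1.5 mm.
import torch

@torch.no_grad()
def denoise_cac_slice(noise_cnn, thin_slice, pseudo_noise_map):
    noise_cnn.eval()
    est_noise = noise_cnn(pseudo_noise_map[None, None, ...])  # (1, 1, H, W)
    return thin_slice - est_noise[0, 0]                       # subtract estimated noise
```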
Subject(s)
Calcium, X-Ray Computed Tomography, Humans, X-Ray Computed Tomography/methods, Coronary Vessels/diagnostic imaging, Electrons, Computer-Assisted Image Processing/methods, Neural Networks (Computer), Imaging Phantoms
ABSTRACT
Magnetic resonance imaging (MRI) is a powerful tool for diagnosing tumors in the human brain. Here, MRI images are used to detect brain tumors and classify the regions as meningioma, glioma, pituitary or normal. Numerous existing methods for brain tumor detection have been proposed, but none of them categorizes the brain tumor accurately, and they require long computation times. To address these problems, an Evolutionary Gravitational Neocognitron Neural Network optimized with the Marine Predators Algorithm is proposed in this article for MRI brain tumor classification (EGNNN-VGG16-MPA-MRI-BTC). Initially, the brain MRI images are collected from the BraTS MRI dataset. These images are pre-processed using a Savitzky-Golay denoising approach. Features are then extracted with the Visual Geometry Group network (VGG16), which provides grey-level and Haralick texture features. These extracted features are fed to the EGNNN classifier, which categorizes the brain tumor as glioma, meningioma, pituitary gland or normal. The Batch Normalization (BN) layer of the EGNNN is removed and incorporated into the VGG16 layers. The Marine Predators Algorithm (MPA) optimizes the weight parameters of the EGNNN. The simulation is implemented in MATLAB. Finally, the EGNNN-VGG16-MPA-MRI-BTC method attains 38.98%, 46.74% and 23.27% higher accuracy, 24.24%, 37.82% and 13.92% higher precision, and 26.94%, 47.04% and 38.94% higher sensitivity compared with the existing AlexNet-SVM-MRI-BTC, RESNET-SGD-MRI-BTC and MobileNet-V2-MRI-BTC models, respectively.
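For orientation, below is a minimal sketch of the VGG16 feature-extraction stage feeding a classifier head, assuming torchvision; the EGNNN classifier, the BN-layer rearrangement and the MPA weight optimization are specific to the paper and are replaced here by a plain linear head for illustration only.

```python
# Sketch of VGG16 feature extraction feeding a 4-class head (illustration
# only; the paper's EGNNN classifier and Marine Predators Algorithm are not
# reproduced). Assumes torchvision >= 0.13.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class Vgg16TumorClassifier(nn.Module):
    def __init__(self, num_classes=4):                 # glioma, meningioma, pituitary, normal
        super().__init__()
        self.backbone = vgg16(weights=None).features   # convolutional feature extractor
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(512, num_classes)        # placeholder for the EGNNN

    def forward(self, x):                              # x: (N, 3, H, W) pre-processed MRI slices
        feats = self.pool(self.backbone(x)).flatten(1)
        return self.head(feats)
```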
Subject(s)
Algorithms, Brain Neoplasms, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Neural Networks (Computer), Magnetic Resonance Imaging/methods, Brain Neoplasms/diagnostic imaging, Humans, Computer-Assisted Image Processing/methods, Gravitation, Biological Evolution
ABSTRACT
BACKGROUND: The wrist pulse wave measured under the optimal pulse pressure carries important physiological and pathological information about the human body. Wavelet threshold filtering is a common method for pulse wave denoising. However, traditional filtering methods cannot both smooth the whole pulse wave and preserve its details. OBJECTIVE: In view of this, this paper proposes a denoising algorithm for the pulse wave under optimal pulse pressure based on the translation-invariant wavelet transform (TIWT) and a new threshold function. METHODS: First, the new threshold function is derived from a hyperbolic tangent curve, combining the advantages of the soft and hard threshold functions. Second, the TIWT is used to suppress the pseudo-Gibbs phenomenon. RESULTS: Experiments show that, compared with traditional wavelet filtering algorithms, the new algorithm better preserves the geometric characteristics of the pulse wave and achieves a higher signal-to-noise ratio (SNR). CONCLUSION: The TIWT with the improved threshold function compensates for the shortcomings of traditional wavelet threshold denoising methods and lays a foundation for extracting time-domain characteristics of the pulse wave.
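The abstract does not give the explicit form of the new threshold function, so the following is only an illustrative hyperbolic-tangent blend of soft and hard thresholding, applied with PyWavelets; a translation-invariant version would use the stationary wavelet transform (pywt.swt/iswt) or cycle spinning instead of the plain DWT.

```python
# Illustrative tanh-based threshold blending soft and hard thresholding
# (the paper's exact threshold function is not given in the abstract).
# Assumes PyWavelets; for translation invariance, replace wavedec/waverec
# with the stationary wavelet transform (pywt.swt / pywt.iswt) or cycle spinning.
import numpy as np
import pywt

def tanh_threshold(coeffs, thr, alpha=2.0):
    # Small coefficients are removed; coefficients just above thr are shrunk
    # (soft-like); large coefficients are kept almost unchanged (hard-like).
    gain = np.tanh(alpha * (np.abs(coeffs) - thr) / thr)
    return np.where(np.abs(coeffs) > thr, coeffs * gain, 0.0)

def denoise_pulse_wave(signal, wavelet="db4", level=5, thr=0.1):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs = [coeffs[0]] + [tanh_threshold(c, thr) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)
```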
Subject(s)
Algorithms, Heart Rate, Wavelet Analysis, Humans, Blood Pressure, Signal-to-Noise Ratio, Wrist/physiology
ABSTRACT
In this article, an adaptive denoising method is proposed to accurately investigate the optical and structural features of polymeric fibers from noisy phase-shifting microinterferograms. A mixed class of noise that can arise in phase-shifting interferometric techniques is established; to our knowledge, this is one of the first studies to consider the mixed noise that may occur in microinterferograms. The proposed method uses convolutional neural networks to detect the noise class and then denoises the microinterferogram according to that class. Four convolutional neural networks (GoogLeNet, VGG-19, AlexNet, and AlexNet-SVM) are fine-tuned to automatically classify the noise class on the established data set. The network with the highest validation and testing accuracy is then used to apply the proposed method to realistic noisy microinterferograms of polymeric fibers, polypropylene and antimicrobial poly(ethylene terephthalate)/titanium dioxide, recorded using an interference microscope. The method is also applied to noisy microinterferograms that include crazing and nanocomposite material. The demodulated phase maps and three-dimensional birefringence profiles are calculated for the tested fibers according to the proposed method. The obtained results are compared with published data for these fibers and found to be in good agreement.
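A schematic of the classify-then-denoise pipeline might look like the sketch below, assuming PyTorch for the fine-tuned classifier; the per-class denoisers shown are generic scipy placeholders, not the filters used in the article.

```python
# Schematic classify-then-denoise pipeline (illustration only). Assumes
# PyTorch; `classifier` is the fine-tuned CNN with the best validation
# accuracy; the per-class denoisers are generic placeholders, not the
# article's filters.
import torch
from scipy.ndimage import median_filter, gaussian_filter

DENOISERS = {
    0: lambda img: median_filter(img, size=3),       # e.g. impulsive noise class
    1: lambda img: gaussian_filter(img, sigma=1.0),  # e.g. Gaussian-like noise class
    # further noise classes map to further class-specific filters
}

@torch.no_grad()
def classify_and_denoise(classifier, tensor_image, array_image):
    classifier.eval()
    noise_class = int(classifier(tensor_image.unsqueeze(0)).argmax(dim=1))
    return DENOISERS.get(noise_class, lambda img: img)(array_image)
```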
Subject(s)
Deep Learning, Algorithms, Neural Networks (Computer), Polymers, Signal-to-Noise Ratio
ABSTRACT
Web-core sandwich panels are a typical lightweight structure used in a variety of fields, such as naval, aviation, and aerospace. Welding is considered an effective process for joining the face panel to the core panel from the face panel side. However, it is difficult to locate the joint position (i.e., the position of the core panel) because the face panel shields it. This paper studies an X-ray-based weld position detection method applied from the face panel side for aluminum web-core sandwich panels used in aviation and naval structures. First, an experimental system was designed for weld position detection, able to quickly acquire the X-ray intensity signal backscattered by the specimen, and an effective signal processing method was developed to accurately extract the characteristic value of the X-ray intensity signal representing the center of the joint. Second, an analytical model was established to calculate and optimize the detection parameters required to detect the weld position of a given specimen by analyzing the relationship between the backscattered X-ray intensity signal recorded by the detector and the parameters of the detection system and specimen. Finally, several experiments were carried out on a 6061 aluminum alloy specimen with a thickness of 3 mm. The results demonstrate that the maximum absolute error of the detection was 0.340 mm, which is sufficiently accurate for locating the position of the joint. This work aims to provide the technical basis for automatic tracking of weld joints from the face panel side, as required for the high-reliability manufacturing of curved sandwich structures.
ABSTRACT
Noise in ECG signals will affect the results of post-processing if left untreated. Since ECG signals are highly subject-specific, a linear denoising method with a specific threshold that works well on one subject could fail on another. Therefore, in this Letter a sparsity-based method, which represents every signal segment as a different linear combination of atoms from a dictionary, is used to denoise ECG signals, with particular attention to the myoelectric interference present in ECG signals. First, a denoising model for ECG signals is constructed; the model is then solved with the matching pursuit algorithm. To obtain better results, four kinds of dictionaries are investigated on ECG signals from the MIT-BIH arrhythmia database and compared with a wavelet transform (WT)-based method. The signal-to-noise ratio (SNR) and mean square error (MSE) between the estimated and original signals are used as performance indicators. The results show that, with the present method, the SNR is higher and the MSE between the estimated and original signals is smaller.
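As a concrete reference for the reconstruction step, here is a minimal matching pursuit routine, assuming NumPy and a dictionary with unit-norm atoms as its columns; the four dictionaries compared in the Letter are not reproduced here.

```python
# Minimal matching pursuit for sparse denoising of one ECG segment
# (illustration only; the Letter's four dictionaries are not reproduced).
# Assumes NumPy; `dictionary` has unit-norm atoms as columns.
import numpy as np

def matching_pursuit(segment, dictionary, n_atoms=20):
    residual = segment.astype(float).copy()
    approx = np.zeros_like(residual)
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual        # inner products with all atoms
        k = int(np.argmax(np.abs(correlations)))      # best-matching atom
        approx += correlations[k] * dictionary[:, k]  # add its contribution
        residual -= correlations[k] * dictionary[:, k]
    return approx                                     # sparse (denoised) estimate
```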