Results 1 - 20 of 1,383
1.
Microsc Res Tech ; 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39145424

ABSTRACT

Ultrasound images are susceptible to various forms of quality degradation that negatively impact diagnosis. Common degradations include speckle noise, Gaussian noise, salt-and-pepper noise, and blurring. This research proposes an accurate ultrasound image denoising strategy that first detects the noise type so that a suitable denoising method can then be applied for each corruption. The technique relies on convolutional neural networks to categorize the type of noise affecting an input ultrasound image. Pre-trained convolutional neural network models, including GoogleNet, VGG-19, AlexNet, and AlexNet-support vector machine (SVM), are developed and trained to perform this classification. A dataset of 782 numerically generated ultrasound images across different diseases and noise types is utilized for model training and evaluation. Results show that AlexNet-SVM achieves the highest accuracy of 99.2% in classifying noise types. The top-performing model is then applied to real ultrasound images with different noise corruptions to demonstrate the efficacy of the proposed detect-then-denoise system. RESEARCH HIGHLIGHTS: Proposes an accurate ultrasound image denoising strategy based on detecting the noise type first. Uses pre-trained convolutional neural networks to categorize the noise type in input images. Evaluates GoogleNet, VGG-19, AlexNet, and AlexNet-SVM models on a dataset of 782 synthetic ultrasound images. AlexNet-SVM achieves the highest accuracy of 99.2% in classifying noise types. Demonstrates the efficacy of the proposed detect-then-denoise system on real ultrasound images.
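
A minimal sketch of the kind of detect-then-denoise front end the abstract describes: deep features from a pretrained AlexNet feed a support vector machine that labels the noise type. The class names, feature layer, and preprocessing below are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical noise-type classifier: AlexNet features + SVM (sketch only).
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC
from PIL import Image

NOISE_CLASSES = ["speckle", "gaussian", "salt_pepper", "blur"]  # assumed labels

alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
preprocess = T.Compose([
    T.Grayscale(num_output_channels=3),   # ultrasound frames are single-channel
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def cnn_features(path: str) -> torch.Tensor:
    """4096-d deep features from AlexNet, stopping before the final fc layer."""
    x = preprocess(Image.open(path)).unsqueeze(0)
    with torch.no_grad():
        x = alexnet.avgpool(alexnet.features(x)).flatten(1)
        x = alexnet.classifier[:5](x)
    return x.squeeze(0)

# feats = torch.stack([cnn_features(p) for p in train_paths]).numpy()
# clf = SVC(kernel="rbf").fit(feats, train_labels)
# Each image can then be routed to a noise-specific denoiser.
```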

2.
Physiol Meas ; 45(5)2024 May 28.
Article in English | MEDLINE | ID: mdl-39150768

ABSTRACT

Objective. Cardiovascular diseases are a major cause of mortality globally, and electrocardiograms (ECGs) are crucial for diagnosing them. Traditionally, ECGs are stored in printed formats. However, these printouts, even when scanned, are incompatible with advanced ECG diagnosis software that requires time-series data. Digitizing ECG images is vital for training machine learning models in ECG diagnosis, leveraging the extensive global archives collected over decades. Deep learning models for image processing are promising in this regard, although the lack of clinical ECG archives with reference time-series data is challenging. Data augmentation techniques using realistic generative data models provide a solution. Approach. We introduce ECG-Image-Kit, an open-source toolbox for generating synthetic multi-lead ECG images with realistic artifacts from time-series data, aimed at automating the conversion of scanned ECG images to ECG data points. The tool synthesizes ECG images from real time-series data, applying distortions like text artifacts, wrinkles, and creases on a standard ECG paper background. Main results. As a case study, we used ECG-Image-Kit to create a dataset of 21,801 ECG images from the PhysioNet QT database. We developed and trained a combination of a traditional computer vision and deep neural network model on this dataset to convert synthetic images into time-series data for evaluation. We assessed digitization quality by calculating the signal-to-noise ratio and compared clinical parameters like QRS width, RR, and QT intervals recovered from this pipeline with the ground truth extracted from the ECG time series. The results show that this deep learning pipeline accurately digitizes paper ECGs, maintaining clinical parameters, and highlights a generative approach to digitization. Significance. The toolbox has broad applications, including model development for ECG image digitization and classification. The toolbox currently supports data augmentation for the 2024 PhysioNet Challenge, focusing on digitizing and classifying paper ECG images.
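
The core rendering idea is easy to sketch: draw a calibrated time series onto an ECG-paper-style grid and then add distortions. The grid spacing, figure size, and noise below are illustrative assumptions and not the ECG-Image-Kit API.

```python
# Toy ECG-image synthesis (sketch): signal rendered on a red ECG-paper grid.
import numpy as np
import matplotlib.pyplot as plt

def render_ecg_image(signal_mv: np.ndarray, fs: float, out_path: str) -> None:
    t = np.arange(len(signal_mv)) / fs
    fig, ax = plt.subplots(figsize=(10, 2), dpi=150)
    # Standard paper: large boxes every 0.2 s horizontally, 0.5 mV vertically.
    ax.set_xticks(np.arange(0, t[-1], 0.2))
    ax.set_yticks(np.arange(-2.0, 2.0, 0.5))
    ax.grid(True, color="salmon", linewidth=0.6)
    ax.plot(t, signal_mv, color="black", linewidth=0.8)
    fig.savefig(out_path)
    plt.close(fig)

# A crude "scanning" artifact: additive texture noise on the rendered image.
# img = plt.imread("lead_II.png"); img = img + np.random.normal(0, 0.01, img.shape)
```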


Subject(s)
Deep Learning , Electrocardiography , Image Processing, Computer-Assisted , Image Processing, Computer-Assisted/methods , Humans , Signal Processing, Computer-Assisted , Artifacts , Software
3.
Phys Med Biol ; 2024 Aug 16.
Article in English | MEDLINE | ID: mdl-39151463

ABSTRACT

OBJECTIVE: Optical coherence tomography (OCT) is widely used in clinical practice for its non-invasive, high-resolution imaging capabilities. However, speckle noise inherent to its low coherence principle can degrade image quality and compromise diagnostic accuracy. While deep learning methods have shown promise in reducing speckle noise, obtaining well-registered image pairs remains challenging, leading to the development of unpaired methods. Despite their potential, existing unpaired methods suffer from redundancy in network structures or interaction mechanisms. Therefore, a more streamlined method for unpaired OCT denoising is essential. APPROACH: In this work, we propose a novel unpaired method for OCT image denoising, referred to as noise-imitation learning (NIL). NIL comprises three primary modules: the noise extraction module, which extracts noise features by denoising noisy images; the noise imitation module, which synthesizes noisy images and generates fake clean images; and the adversarial learning module, which differentiates between real and fake clean images through adversarial training. The complexity of NIL is significantly lower than that of previous unpaired methods, utilizing only one generator and one discriminator for training. MAIN RESULTS: By efficiently fusing unpaired images and employing adversarial training, NIL can extract more speckle noise information to enhance denoising performance. Building on NIL, we propose an OCT image denoising pipeline, NIL-NAFNet. This pipeline achieved PSNR, SSIM, and RMSE values of 31.27 dB, 0.865, and 7.00, respectively, on the PKU37 dataset. Extensive experiments suggest that our method outperforms state-of-the-art unpaired methods both qualitatively and quantitatively. SIGNIFICANCE: These findings indicate that the proposed NIL is a simple yet effective method for unpaired OCT speckle noise reduction. The OCT denoising pipeline based on NIL demonstrates exceptional performance and efficiency. By addressing speckle noise without requiring well-registered image pairs, this method can enhance image quality and diagnostic accuracy in clinical practice.
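
For readers unfamiliar with the setup, the single-generator/single-discriminator training loop that keeps NIL lightweight looks roughly like the generic unpaired adversarial step below. The networks and data batches are placeholders; NIL's specific noise extraction and imitation modules are not reproduced here.

```python
# Generic unpaired adversarial training step (sketch), PyTorch-style.
import torch
import torch.nn.functional as F

def train_step(G, D, opt_G, opt_D, noisy, clean_unpaired):
    fake_clean = G(noisy)                # the generator acts as the denoiser
    # Discriminator update: real clean images vs. generated "fake clean" ones.
    real_logits = D(clean_unpaired)
    fake_logits = D(fake_clean.detach())
    d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()
    # Generator update: make denoised outputs indistinguishable from clean images.
    fake_logits = D(fake_clean)
    g_loss = F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()
```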

4.
Syst Biol Reprod Med ; 70(1): 228-239, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39150884

ABSTRACT

Recurrent spontaneous miscarriage refers to the repeated loss of two or more clinically detected pregnancies occurring within 24 weeks of gestation. No cause can be identified for nearly 50% of these cases; this group is referred to as idiopathic recurrent spontaneous miscarriage (IRSM) or miscarriage of unknown origin. Due to a lack of robust scientific evidence, guidelines on the diagnosis and management of IRSM are not well defined and are often contradictory. This motivated us to explore the vibrational fingerprints of endometrial tissue in these women. Endometrial tissues corresponding to the window of implantation were collected from women with IRSM (n = 20) and controls (n = 20). Attenuated total reflectance-Fourier transform infrared (ATR-FTIR) spectra were obtained within the range of 400-4000 cm-1 using an Agilent Cary 630 FTIR spectrometer. Raman spectra were also generated within the spectral window of 400-4000 cm-1 using a Thermo Fisher Scientific DXR Raman spectrophotometer. Because a single spectroscopic tool provides only limited molecular information, a fusion strategy combining the Raman and ATR-FTIR spectroscopic data of IRSM is proposed. Significant features were extracted by applying principal component analysis (PCA) and wavelet threshold denoising (WTD), and the fused spectral data were used as input to support vector machine (SVM), adaptive boosting (AdaBoost), and decision tree (DT) models. Altered molecular vibrations associated with proteins, glutamate, and lipid metabolism were observed in IRSM using Raman spectroscopy. FTIR analysis indicated changes in the molecular vibrations of lipids and proteins, collagen dysregulation, and impaired glucose metabolism. Combining both spectroscopic datasets using mid-level fusion (MLF: 92% with AdaBoost and DT models) and high-level fusion (HLF: 92% with SVM models) improved IRSM classification accuracy compared with the individual spectral models. Our results indicate that spectral fusion technology holds promise for enhancing the diagnostic accuracy of IRSM in clinical settings. Validation of these findings in a larger patient population is underway.
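
The feature-level (mid-level) fusion described above can be sketched as: denoise each spectrum, reduce each modality with PCA, concatenate the scores, and classify. The wavelet choice, decomposition level, and component counts below are assumptions, not the study's settings.

```python
# Sketch of WTD + PCA + fused-SVM classification for paired spectra.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def wavelet_threshold_denoise(spectrum: np.ndarray, wavelet: str = "db4") -> np.ndarray:
    coeffs = pywt.wavedec(spectrum, wavelet, level=4)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(spectrum)))      # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(spectrum)]

def fused_svm(raman: np.ndarray, ftir: np.ndarray, y: np.ndarray) -> SVC:
    """raman, ftir: (n_samples, n_wavenumbers); y: IRSM vs. control labels."""
    raman_d = np.apply_along_axis(wavelet_threshold_denoise, 1, raman)
    ftir_d = np.apply_along_axis(wavelet_threshold_denoise, 1, ftir)
    fused = np.hstack([PCA(n_components=10).fit_transform(raman_d),
                       PCA(n_components=10).fit_transform(ftir_d)])
    return SVC(kernel="rbf").fit(fused, y)
```

High-level fusion would instead combine the per-modality model decisions; only the feature-level variant is sketched here.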


Subject(s)
Abortion, Habitual , Spectrum Analysis, Raman , Humans , Spectroscopy, Fourier Transform Infrared , Female , Abortion, Habitual/diagnosis , Adult , Support Vector Machine , Pregnancy , Endometrium/metabolism , Endometrium/pathology , Endometrium/chemistry , Principal Component Analysis , Case-Control Studies , Decision Trees
5.
J Med Imaging (Bellingham) ; 11(4): 044005, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39099642

ABSTRACT

Purpose: The trend towards lower radiation doses and advances in computed tomography (CT) reconstruction may impair the operation of pretrained segmentation models, giving rise to the problem of estimating the dose robustness of existing segmentation models. Previous studies addressing the issue suffer either from a lack of registered low- and full-dose CT images or from simplified simulations. Approach: We employed raw data from full-dose acquisitions to simulate low-dose CT scans, avoiding the need to rescan a patient. The accuracy of the simulation is validated using a real CT scan of a phantom. We consider dose levels down to 20% of the full radiation dose, for which we measure the deviations of several pretrained segmentation models from the full-dose prediction. In addition, compatibility with existing denoising methods is considered. Results: The results reveal the surprising robustness of the TotalSegmentator approach, showing minimal differences at the pixel level even without denoising. Less robust models show good compatibility with the denoising methods, which help to improve robustness in almost all cases. With denoising based on a convolutional neural network (CNN), the median Dice between low- and full-dose data does not fall below 0.9 (12 for the Hausdorff distance) for all but one model. We observe volatile results for labels with effective radii of less than 19 mm and improved results for contrasted CT acquisitions. Conclusion: The proposed approach facilitates clinically relevant analysis of dose robustness for human organ segmentation models. The results outline the robustness properties of a diverse set of models. Further studies are needed to identify the robustness of approaches for lesion segmentation and to rank the factors contributing to dose robustness.
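
Simulating a reduced-dose scan from full-dose data is commonly done by rescaling expected photon counts in the projection domain and resampling with Poisson noise; the sketch below illustrates that idea under a mono-energetic Beer-Lambert assumption with an assumed incident flux, whereas the study itself works on vendor raw data.

```python
# Idealized low-dose sinogram simulation (sketch, not the paper's pipeline).
import numpy as np

def simulate_low_dose(sinogram: np.ndarray, dose_fraction: float,
                      I0: float = 1e5, seed: int = 0) -> np.ndarray:
    """sinogram: line integrals p = -ln(I / I0); dose_fraction: e.g. 0.2."""
    rng = np.random.default_rng(seed)
    counts = dose_fraction * I0 * np.exp(-sinogram)   # expected photons per ray
    noisy_counts = rng.poisson(counts).clip(min=1)    # quantum noise, avoid log(0)
    return -np.log(noisy_counts / (dose_fraction * I0))
```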

6.
J Voice ; 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39107213

ABSTRACT

Loss of the larynx significantly alters natural voice production, requiring alternative communication modalities and rehabilitation methods to restore speech intelligibility and improve the quality of life of affected individuals. This paper explores advances in alaryngeal speech enhancement to improve signal quality and reduce background noise, focusing on individuals who have undergone laryngectomy. In this study, speech samples were obtained from 23 Lithuanian males who had undergone laryngectomy with secondary implantation of a tracheoesophageal prosthesis (TEP). A Pareto-optimized gated long short-term memory network was trained on tracheoesophageal speech data to recognize complex temporal connections and contextual information in speech signals. The system was able to distinguish between actual speech and various forms of noise and artifacts, resulting in a 25% drop in the mean signal-to-noise ratio compared to other approaches. According to the acoustic analysis, the system significantly decreased the proportion of unvoiced frames from 40% to 10% while maintaining stable proportions of voiced speech frames and average voicing evidence in voiced frames, indicating the accuracy of the approach in selectively attenuating noise and undesired speech artifacts while preserving important speech information.

7.
BMC Med Imaging ; 24(1): 207, 2024 Aug 09.
Article in English | MEDLINE | ID: mdl-39123136

ABSTRACT

BACKGROUND: The quality of low-light endoscopic images matters for applications in medical disciplines such as physiology and anatomy, where tissue structures must be identified and judged. Due to the use of point light sources and the constraints of narrow physiological structures, medical endoscopic images display uneven brightness, low contrast, and a lack of texture information, presenting diagnostic challenges for physicians. METHODS: In this paper, a nonlinear brightness enhancement and denoising network based on Retinex theory is designed to improve the brightness and details of low-light endoscopic images. The nonlinear luminance enhancement module uses higher-order curve functions to improve overall brightness; the dual-attention denoising module captures detailed features of anatomical structures; and the color loss function mitigates color distortion. RESULTS: Experimental results on the Endo4IE dataset demonstrate that the proposed method outperforms existing state-of-the-art methods in terms of Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), and Learned Perceptual Image Patch Similarity (LPIPS): the PSNR is 27.2202, the SSIM is 0.8342, and the LPIPS is 0.1492. CONCLUSIONS: The method offers an efficient way to enhance images captured by endoscopes and provides valuable insight into intricate human physiological structures, which can effectively assist clinical diagnosis and treatment.
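
Higher-order curve enhancement of the kind the luminance module uses can be illustrated with the well-known iterated quadratic curve from Zero-DCE, shown here as an assumed stand-in rather than the paper's exact function.

```python
# Iterated quadratic brightness curve (sketch): LE(x) = x + alpha * x * (1 - x).
import numpy as np

def curve_enhance(img: np.ndarray, alpha: np.ndarray, n_iter: int = 4) -> np.ndarray:
    """img in [0, 1]; alpha in [-1, 1] holds per-pixel curve parameters."""
    x = img.astype(np.float32)
    for _ in range(n_iter):
        x = x + alpha * x * (1.0 - x)     # fixes 0 and 1, lifts mid-tones
    return np.clip(x, 0.0, 1.0)

# e.g. enhanced = curve_enhance(frame, alpha=np.full_like(frame, 0.6))
```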


Subject(s)
Signal-To-Noise Ratio , Humans , Endoscopy/methods , Image Enhancement/methods , Algorithms , Nonlinear Dynamics , Image Processing, Computer-Assisted/methods
8.
Sensors (Basel) ; 24(15)2024 Jul 27.
Article in English | MEDLINE | ID: mdl-39123933

ABSTRACT

With the development of precision sensing instruments and data storage devices, the fusion of multi-sensor data in gearbox fault diagnosis has attracted much attention. However, existing methods have difficulty capturing the local temporal dependencies of multi-sensor monitoring information, and unavoidable noise severely decreases the accuracy of multi-sensor information fusion diagnosis. To address these issues, this paper proposes a fault diagnosis method based on dynamic graph convolutional neural networks and hard threshold denoising. First, considering that the relationships between monitoring data from different sensors change over time, a dynamic graph structure is adopted to model the temporal dependencies of multi-sensor data, and a graph convolutional neural network is constructed to achieve the interaction and feature extraction of temporal information from multi-sensor data. Second, to mitigate the influence of noise in practical engineering, a hard threshold denoising strategy is designed, and a learnable hard threshold denoising layer is embedded into the graph neural network. Experimental fault datasets from two typical gearbox fault test benches under environmental noise are used to verify the effectiveness of the proposed method in gearbox fault diagnosis. The experimental results show that the proposed DDGCN method achieves an average diagnostic accuracy of up to 99.7% under different levels of environmental noise, demonstrating good noise resistance.
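
A learnable hard-threshold layer can be sketched as below; because a true hard threshold gives no gradient for the threshold itself, this assumed variant uses a steep sigmoid gate as a differentiable surrogate, which may differ from the paper's exact layer.

```python
# Differentiable surrogate for a learnable hard-threshold denoising layer.
import torch
import torch.nn as nn

class LearnableHardThreshold(nn.Module):
    def __init__(self, init_tau: float = 0.1, sharpness: float = 50.0):
        super().__init__()
        self.tau = nn.Parameter(torch.tensor(init_tau))  # learned during training
        self.sharpness = sharpness

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate = torch.sigmoid(self.sharpness * (x.abs() - self.tau))
        return x * gate   # ~x where |x| > tau, ~0 elsewhere

# Embedded after a graph convolution, the layer suppresses small, noise-like
# activations while passing salient fault features through.
```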

9.
Curr Med Imaging ; 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39177128

ABSTRACT

Background: Classifying brain tumors from images with extraordinary precision is critical for prognosis and treatment planning. Brain tumors are characterized by the aberrant proliferation of brain cells, and variations in neuronal development may occur among individuals. The classification of tumors as benign or malignant is contingent upon their rate of growth: a benign tumor remains localized at its site of origin, whereas one that has spread to distant sites is malignant. Brain tumor identification may be difficult due to the unique characteristics of brain tumor cells. Objective: This study presents a method that methodically improves the identification of brain tumor cells and the analysis of functional structures through sample training that incorporates features extracted from Magnetic Resonance Imaging (MRI) images. In the image enhancement phase, the color information of the MRI image is converted to greyscale, and its margins are sharpened to facilitate the detection of finer details. Specialists and general practitioners require high-quality medical images to accurately diagnose life-threatening conditions such as brain tumors. Image denoising has been identified in recent research as a potentially fruitful area of study; it is critical to clean up images while preserving the sharpness of boundaries. Methods: In this research, a Prompt Multi-Level Segmentation Denoising with Fragile Correlated Feature Subset (PMLSD-FCFS) model is proposed for accurate denoising of MRI images and for extracting the most relevant feature set by applying a feature dimensionality reduction model for better brain tumor predictions. Results: The proposed model achieves 98.2% accuracy in multi-level image segmentation and 98.4% accuracy in fragile correlated feature subset generation. Conclusion: The experimental findings indicated that the proposed model exhibits superior performance compared to traditional algorithms. Furthermore, it successfully eliminates noise from the MRI images, and only the most relevant features are considered for brain tumor detection, thereby enhancing classification accuracy.

10.
NMR Biomed ; : e5228, 2024 Aug 21.
Article in English | MEDLINE | ID: mdl-39169274

ABSTRACT

Quantitative maps of rotating frame relaxation (RFR) time constants are sensitive and useful magnetic resonance imaging tools with which to evaluate tissue integrity in vivo. However, to date, only moderate image resolutions of 1.6 × 1.6 × 3.6 mm³ have been used for whole-brain coverage RFR mapping in humans at 3 T. For more precise morphometrical examinations, higher spatial resolutions are desirable. Towards achieving the long-term goal of increasing the spatial resolution of RFR mapping without increasing scan times, we explore the use of the recently introduced Transform domain NOise Reduction with DIstribution Corrected principal component analysis (T-NORDIC) algorithm for thermal noise reduction. RFR acquisitions at 3 T were obtained from eight healthy participants (seven males and one female) aged 52 ± 20 years, including adiabatic T1ρ, T2ρ, and nonadiabatic Relaxation Along a Fictitious Field (RAFF) in the rotating frame of rank n = 4 (RAFF4), with both 1.6 × 1.6 × 3.6 mm³ and 1.25 × 1.25 × 2 mm³ image resolutions. We compared RFR values and their confidence intervals (CIs) obtained from fitting the denoised versus nondenoised images, at both voxel and regional levels, separately for each resolution and RFR metric. The comparison of metrics obtained from denoised versus nondenoised images was performed with a two-sample paired t-test, and statistical significance was set at p < 0.05 after Bonferroni correction for multiple comparisons. The use of T-NORDIC on the RFR images prior to the fitting procedure decreases the uncertainty of parameter estimation (lower CIs) at both spatial resolutions. The effect was particularly prominent at high spatial resolution for RAFF4. Moreover, T-NORDIC did not degrade map quality, and it had minimal impact on the RFR values. Denoising RFR images with T-NORDIC improves parameter estimation while preserving the image quality and accuracy of all RFR maps, ultimately enabling high-resolution RFR mapping in scan times that are suitable for clinical settings.
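
As background for readers, patch-wise PCA denoising of the kind NORDIC-style methods build on can be sketched as follows: form a Casorati matrix from a local patch across the image series, suppress singular values near the noise floor, and reconstruct. T-NORDIC's transform-domain and distribution-correction steps are omitted, and the threshold here is an assumed heuristic.

```python
# Heavily simplified patch-PCA thermal-noise suppression (sketch only).
import numpy as np

def denoise_patch(casorati: np.ndarray) -> np.ndarray:
    """casorati: (n_voxels_in_patch, n_images) matrix from one local patch."""
    U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
    noise_floor = np.median(s[len(s) // 2:])     # assumed noise-level estimate
    s_kept = np.where(s > 2.0 * noise_floor, s, 0.0)
    return (U * s_kept) @ Vt
```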

11.
Article in English | MEDLINE | ID: mdl-39188162

ABSTRACT

In Gaussian filtering of images, convolution with a Gaussian matrix is essential and involves numerous arithmetic computations, predominantly multiplications and additions. This can heavily tax the system's memory, particularly with frequent use. To address this issue, a W/Ta2O5/Ag memristor was employed to substantially mitigate the computational overhead associated with convolution operations. An interlayer of ZnO was subsequently introduced into the memristor. The resulting Ta2O5/ZnO heterostructure layer exhibited improved linearity in the pulse response; this enhanced linearity facilitates easy adjustment of the conductance magnitude through a linear mapping between the number of pulses and the conductance. Subsequently, the conductance of the W/Ta2O5/ZnO/Ag bilayer memristor was employed as the weights of the convolution kernel in convolution operations. Gaussian noise removal in image processing was achieved by assembling a 5 × 5 memristor array as the kernel. When denoising was performed using the memristor arrays, an average loss of less than 5% was observed compared to denoising achieved through Gaussian matrix convolution. The provided memristors demonstrate significant potential in convolutional computations, particularly for subsequent applications in convolutional neural networks (CNNs).
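
The weight-programming idea is simple to illustrate: quantize a 5 × 5 Gaussian kernel onto a linear pulse-to-conductance scale and convolve with the resulting weights. The pulse step and conductance range below are arbitrary assumptions, not measured device values.

```python
# Sketch: Gaussian kernel programmed as quantized "conductance" weights.
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size: int = 5, sigma: float = 1.0) -> np.ndarray:
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def program_conductances(kernel: np.ndarray, g_step: float = 1e-6,
                         g_max: float = 64e-6) -> np.ndarray:
    """Assumed linear mapping: conductance = n_pulses * g_step."""
    n_pulses = np.round(kernel / kernel.max() * (g_max / g_step))
    return n_pulses * g_step

weights = program_conductances(gaussian_kernel())
weights /= weights.sum()                     # renormalize after quantization
# denoised = convolve2d(noisy_image, weights, mode="same", boundary="symm")
```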

12.
Comput Biol Med ; 180: 108981, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39146839

ABSTRACT

Early detection of polyps is essential to decrease colorectal cancer (CRC) incidence. Therefore, developing an efficient and accurate polyp segmentation technique is crucial for clinical CRC prevention. In this paper, we propose an end-to-end training approach for polyp segmentation that employs a diffusion model. The images are treated as priors, and segmentation is formulated as a mask generation process. In the sampling process, multiple predictions are generated for each input image using the trained model, and significant performance enhancements are achieved through a majority-vote strategy. Four public datasets and one in-house dataset are used to train and test the model's performance. The proposed method achieves mDice scores of 0.934 and 0.967 on the Kvasir-SEG and CVC-ClinicDB datasets, respectively. Furthermore, cross-validation is applied to test the generalization of the proposed model, and to the best of our knowledge the proposed method outperforms previous state-of-the-art (SOTA) models. The proposed method also significantly improves segmentation accuracy and has strong generalization capability.
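
The majority-vote step is straightforward to sketch: sample several binary masks per input from the trained diffusion model and keep the pixels predicted by most of them. The sample count below is an assumption.

```python
# Pixel-wise majority vote over repeated diffusion samples (sketch).
import numpy as np

def majority_vote(masks: np.ndarray) -> np.ndarray:
    """masks: (n_samples, H, W) binary predictions for one input image."""
    votes = masks.sum(axis=0)
    return (votes > masks.shape[0] / 2).astype(np.uint8)

# masks = np.stack([sample_mask(model, image) for _ in range(9)])  # odd n avoids ties
# final = majority_vote(masks)
```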


Subject(s)
Colonic Polyps , Colorectal Neoplasms , Humans , Colonic Polyps/diagnostic imaging , Colorectal Neoplasms/diagnostic imaging , Models, Statistical , Image Interpretation, Computer-Assisted/methods , Algorithms
13.
Comput Biol Med ; 180: 108933, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39096612

ABSTRACT

Medical image segmentation demands precise accuracy and the capability to assess segmentation uncertainty for informed clinical decision-making. Denoising Diffusion Probabilistic Models (DDPMs), with their advancements in image generation, can treat segmentation as a conditional generation task, providing accurate segmentation and uncertainty estimation. However, current DDPMs used in medical image segmentation suffer from low inference efficiency and prediction errors caused by excessive noise at the end of the forward process. To address this issue, we propose an accelerated denoising diffusion probabilistic model via truncated inverse processes (ADDPM) that is specifically designed for medical image segmentation. The inverse process of ADDPM starts from a non-Gaussian distribution and terminates early once a prediction with relatively low noise is obtained after multiple iterations of denoising. We employ a separate, powerful segmentation network to obtain a pre-segmentation and construct the non-Gaussian distribution of the segmentation based on the forward diffusion rule. By further adopting a separate denoising network, the final segmentation can be obtained with just one denoising step from the low-noise predictions. ADDPM reduces the number of denoising steps to approximately one-tenth of that in vanilla DDPMs. Our experiments on four segmentation tasks demonstrate that ADDPM outperforms both vanilla DDPMs and existing representative accelerated DDPM methods. Moreover, ADDPM can be easily integrated with existing advanced segmentation models to improve segmentation performance and provide uncertainty estimation. Implementation code: https://github.com/Guoxt/ADDPM.
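
In standard DDPM notation, the truncation idea can be sketched in a few lines: diffuse the pre-segmentation forward to a low-noise step t_trunc (instead of starting from pure Gaussian noise at step T) and recover the mask with a single denoising jump. Module names are placeholders, and the one-step inversion below is the standard x0-from-epsilon algebra rather than ADDPM's exact procedure.

```python
# Truncated diffusion segmentation (sketch), PyTorch-style.
import torch

def addpm_style_segment(denoiser, pre_seg_net, image, alphas_cumprod, t_trunc):
    x0_hat = pre_seg_net(image)              # pre-segmentation network
    a_bar = alphas_cumprod[t_trunc]          # low-noise step, t_trunc << T
    # Forward-diffuse the pre-segmentation: the non-Gaussian starting point.
    x_t = a_bar.sqrt() * x0_hat + (1 - a_bar).sqrt() * torch.randn_like(x0_hat)
    eps = denoiser(x_t, image, t_trunc)      # predicted residual noise
    # One denoising step back to a clean mask estimate.
    return (x_t - (1 - a_bar).sqrt() * eps) / a_bar.sqrt()
```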


Subject(s)
Image Processing, Computer-Assisted , Models, Statistical , Humans , Image Processing, Computer-Assisted/methods , Algorithms , Signal-To-Noise Ratio
14.
Int J Numer Method Biomed Eng ; : e3858, 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39196308

ABSTRACT

Experimental blood flow measurement techniques are invaluable for a better understanding of cardiovascular disease formation, progression, and treatment. One of the emerging methods is time-resolved three-dimensional phase-contrast magnetic resonance imaging (4D flow MRI), which enables noninvasive time-dependent velocity measurements within large vessels. However, several limitations hinder the usability of 4D flow MRI and other experimental methods for quantitative hemodynamics analysis. These mainly include measurement noise, corrupt or missing data, low spatiotemporal resolution, and other artifacts. Traditional filtering is routinely applied for denoising experimental blood flow data without any detailed discussion of why it is preferred over other methods. In this study, filtering is compared to different singular value decomposition (SVD)-based machine learning and autoencoder-type deep learning methods for denoising and filling in missing data (imputation). An artificially corrupted and voxelized computational fluid dynamics (CFD) simulation as well as in vitro 4D flow MRI data are used to test the methods. SVD-based algorithms achieve excellent results for the idealized case but severely struggle when applied to in vitro data. The autoencoders are shown to be versatile and applicable to all investigated cases. For denoising the in vitro 4D flow MRI data, the denoising autoencoder (DAE) and the Noise2Noise (N2N) autoencoder produced better reconstructions than filtering, both qualitatively and quantitatively. Deep learning methods such as N2N can result in noise-free velocity fields even though they did not use clean data during training. This work presents one of the first comprehensive assessments and comparisons of various classical and modern machine-learning methods for enhancing corrupt cardiovascular flow data in diseased arteries for both synthetic and experimental test cases.
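
The Noise2Noise ingredient mentioned above is worth a sketch, since it explains how a denoiser can be trained without clean data: the network maps one noisy realization of a velocity field to another, and an MSE loss drives it toward the noise-free mean. The model and loader are placeholders.

```python
# One Noise2Noise training epoch (sketch), PyTorch-style.
import torch.nn as nn

def n2n_epoch(model, loader, optimizer):
    loss_fn = nn.MSELoss()
    for noisy_a, noisy_b in loader:     # two noisy copies of the same flow field
        optimizer.zero_grad()
        loss = loss_fn(model(noisy_a), noisy_b)   # no clean target needed
        loss.backward()
        optimizer.step()
```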

15.
Quant Imaging Med Surg ; 14(8): 5571-5590, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39144020

ABSTRACT

Background: Low-dose computed tomography (LDCT) is a diagnostic imaging technique designed to minimize radiation exposure to the patient. However, this reduction in radiation may compromise computed tomography (CT) image quality, adversely impacting clinical diagnoses. Various advanced LDCT methods have emerged to mitigate this challenge, relying on well-matched LDCT and normal-dose CT (NDCT) image pairs for training. Nevertheless, these methods often face difficulties in distinguishing image details from nonuniformly distributed noise, limiting their denoising efficacy. Additionally, acquiring suitably paired datasets in the medical domain poses challenges, further constraining their applicability. Hence, the objective of this study was to develop an innovative denoising framework for LDCT images employing unpaired data. Methods: In this paper, we propose an LDCT denoising network (DNCNN) that alleviates the need for aligning LDCT and NDCT images. Our approach employs generative adversarial networks (GANs) to learn and model the noise present in LDCT images, establishing a mapping from the pseudo-LDCT to the actual NDCT domain without the need for paired CT images. Results: Within the domain of weakly supervised methods, our proposed model exhibited superior objective metrics on the simulated dataset when compared to CycleGAN and selective kernel-based cycle-consistent GAN (SKFCycleGAN): the peak signal-to-noise ratio (PSNR) was 43.9441, the structural similarity index measure (SSIM) was 0.9660, and the visual information fidelity (VIF) was 0.7707. In the clinical dataset, we conducted a visual-effect analysis by observing various tissues through different observation windows. Our proposed method achieved a no-reference structural sharpness (NRSS) value of 0.6171, which was closest to that of the NDCT images (NRSS = 0.6049), demonstrating its superiority over other denoising techniques in preserving details, maintaining structural integrity, and enhancing edge contrast. Conclusions: Through extensive experiments on both simulated and clinical datasets, we demonstrated the superior qualitative and quantitative efficacy of our proposed method. Our method exhibits superiority over supervised techniques, including block-matching and 3D filtering (BM3D), residual encoder-decoder convolutional neural network (RED-CNN), and Wasserstein generative adversarial network-VGG (WGAN-VGG), as well as over weakly supervised approaches, including CycleGAN and SKFCycleGAN.

16.
Med Image Anal ; 98: 103306, 2024 Aug 17.
Article in English | MEDLINE | ID: mdl-39163786

ABSTRACT

Positron emission tomography (PET) imaging is widely used in medical imaging for analyzing neurological disorders and related brain diseases. Usually, full-dose imaging for PET ensures image quality but raises concerns about the potential health risks of radiation exposure. The contradiction between reducing radiation exposure and maintaining diagnostic performance can be effectively addressed by reconstructing low-dose PET (L-PET) images to the same high quality as full-dose (F-PET) images. This paper introduces the Multi Pareto Generative Adversarial Network (MPGAN) to achieve 3D end-to-end denoising for L-PET images of the human brain. MPGAN consists of two key modules: the diffused multi-round cascade generator (GDmc) and the dynamic Pareto-efficient discriminator (DPed), both of which play a zero-sum game for n (n ∈ {1, 2, 3}) rounds to ensure the quality of synthesized F-PET images. The Pareto-efficient dynamic discrimination process is introduced in DPed to adaptively adjust the weights of sub-discriminators for improved discrimination output. We validated the performance of MPGAN using three datasets, including two independent datasets and one mixed dataset, and compared it with 12 recent competing models. Experimental results indicate that the proposed MPGAN provides an effective solution for 3D end-to-end denoising of L-PET images of the human brain, meets clinical standards, and achieves state-of-the-art performance on commonly used metrics.

17.
Phys Med Biol ; 69(17)2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39137818

ABSTRACT

Objective. Magnetic particle imaging (MPI) is an emerging tracer-based in vivo imaging technology. The use of MPI at low superparamagnetic iron oxide nanoparticle concentrations has the potential to be a promising area of clinical application due to its inherent safety for humans. However, low tracer concentrations reduce the signal-to-noise ratio of the magnetization signal, leading to severe noise artifacts in the reconstructed MPI images. Hardware improvements have high complexity, while traditional methods lack robustness to different noise levels, making it difficult to improve the quality of low-concentration MPI images. Approach. Here, we propose a novel deep learning method for MPI image denoising and quality enhancement based on a sparse lightweight transformer model. The proposed residual-local transformer structure reduces model complexity to avoid overfitting, and an information retention block facilitates feature extraction for image details. In addition, we design a noisy concentration dataset to train our model. We then evaluate our method with both simulated and real MPI image data. Main results. Simulation experiments show that our method achieves the best performance compared with existing deep learning methods for MPI image denoising. More importantly, our method performs effectively on real MPI images of samples with an Fe concentration down to 67 µg Fe ml-1. Significance. Our method provides great potential for obtaining high-quality MPI images at low concentrations.


Subject(s)
Image Processing, Computer-Assisted , Signal-To-Noise Ratio , Image Processing, Computer-Assisted/methods , Deep Learning , Magnetite Nanoparticles/chemistry
18.
Phys Eng Sci Med ; 2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39115738

ABSTRACT

Impedance cardiography (ICG) plays a crucial role in the clinical evaluation of cardiac systolic and diastolic function, along with various other cardiac parameters. However, its accuracy heavily depends on precisely identifying the feature points that reflect cardiac function. Moreover, traditional signal processing techniques used to mitigate random noise and breathing artifacts may inadvertently distort the amplitude and temporal characteristics of ICG signals. To address this issue, this study investigates a noise and artifact elimination method based on Improved Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (ICEEMDAN) and a Particle Swarm Optimization-based Variational Mode Decomposition algorithm (PSO-VMD). The goal is to preserve the amplitude and temporal features of ICG signals to ensure accurate feature-point extraction and computation of the associated cardiac parameters. Comparative analysis with methods employing various wavelet families and Ensemble Empirical Mode Decomposition (EEMD) for ICG signal processing reveals that the proposed method achieves a superior signal-to-noise ratio (SNR) and a lower root-mean-square error (RMSE), while demonstrating enhanced correlation and waveform consistency with the original signal.
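
For reference, the two headline metrics are simple to compute when a clean signal is available, as in simulation-based comparisons; the definitions below are the conventional ones and may differ in detail from the paper's.

```python
# Conventional SNR (dB) and RMSE between a clean reference and a denoised signal.
import numpy as np

def snr_db(clean: np.ndarray, denoised: np.ndarray) -> float:
    residual = clean - denoised
    return 10.0 * np.log10(np.sum(clean**2) / np.sum(residual**2))

def rmse(clean: np.ndarray, denoised: np.ndarray) -> float:
    return float(np.sqrt(np.mean((clean - denoised) ** 2)))
```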

19.
Magn Reson Med ; 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39030953

ABSTRACT

PURPOSE: To develop an SNR enhancement method for CEST imaging using a denoising convolutional autoencoder (DCAE) and compare its performance with state-of-the-art denoising methods. METHODS: The DCAE-CEST model encompasses an encoder and a decoder network. The encoder learns features from the input CEST Z-spectrum via a series of one-dimensional convolutions, nonlinearity applications, and pooling. Subsequently, the decoder reconstructs a denoised Z-spectrum using a series of up-sampling and convolution layers. The DCAE-CEST model underwent multistage training in an environment constrained by Kullback-Leibler divergence, while ensuring data adaptability through context learning, using a Principal Component Analysis-processed Z-spectrum as a reference. The model was trained using simulated Z-spectra, and its performance was evaluated using both simulated data and in vivo data from an animal tumor model. Maps of amide proton transfer (APT) and nuclear Overhauser enhancement (NOE) effects were quantified using a multiple-pool Lorentzian fit, along with an apparent exchange-dependent relaxation metric. RESULTS: In digital phantom experiments, the DCAE-CEST method exhibited superior performance, surpassing existing denoising techniques as indicated by the peak SNR and the Structural Similarity Index. In vivo data further confirmed the effectiveness of DCAE-CEST in denoising the APT and NOE maps compared with other methods. Although no significant difference was observed in APT between tumors and normal tissues, there was a significant difference in NOE, consistent with previous findings. CONCLUSION: The DCAE-CEST can learn the most important features of the CEST Z-spectrum and provides the most effective denoising solution compared with other methods.
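
A 1D denoising convolutional autoencoder matching the encoder (convolution, nonlinearity, pooling) and decoder (up-sampling, convolution) description can be sketched as below; channel widths and kernel sizes are assumptions, not the DCAE-CEST design.

```python
# Minimal 1D convolutional autoencoder for Z-spectra (sketch).
import torch.nn as nn

class ZSpectrumDCAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.Conv1d(32, 16, 5, padding=2), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv1d(16, 1, 5, padding=2),
        )

    def forward(self, z):            # z: (batch, 1, n_offsets), n_offsets % 4 == 0
        return self.decoder(self.encoder(z))
```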

20.
ArXiv ; 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38947935

ABSTRACT

Background noise in many fields, such as medical imaging, poses significant challenges for accurate diagnosis, prompting the development of denoising algorithms. Traditional methodologies, however, often struggle to address the complexities of noisy environments in high-dimensional imaging systems. This paper introduces a novel quantum-inspired approach for image denoising, drawing upon principles of quantum and condensed matter physics. Our approach views medical images as amorphous structures akin to those found in condensed matter physics, and we propose an algorithm that incorporates the concept of mode-resolved localization directly into the denoising process. Notably, our approach eliminates the need for hyperparameter tuning. The proposed method is a standalone algorithm requiring minimal manual intervention, demonstrating the potential of quantum-based techniques in classical signal denoising. Through numerical validation, we showcase the effectiveness of our approach in addressing noise-related challenges in imaging, especially medical imaging, underscoring its relevance for possible quantum computing applications.
