Results 1 - 20 of 99
1.
Sensors (Basel) ; 24(19)2024 Sep 27.
Article in English | MEDLINE | ID: mdl-39409299

ABSTRACT

Multispectral remote sensing images contain abundant information about the distribution and reflectance of ground objects, playing a crucial role in target detection, environmental monitoring, and resource exploration. However, due to the complexity of the imaging process in multispectral remote sensing, image blur is inevitable, and the blur kernel is typically unknown. In recent years, many researchers have focused on blind image deblurring, but most of these methods are based on single-band images; when applied to CASEarth satellite multispectral images, they leave the inter-band spectral correlation unexploited. To address this limitation, this paper proposes a novel approach that leverages the characteristics of multispectral data more effectively. We introduce an inter-band gradient similarity prior and incorporate it into the patch-wise minimal pixel (PMP)-based deblurring model. This approach exploits the spectral correlation across bands to improve deblurring performance. A solution algorithm is established by combining the half-quadratic splitting method with alternating minimization. Subjectively, experiments on CASEarth multispectral images demonstrate that the proposed method offers good visual quality while enhancing edge sharpness. Objectively, our method improves point sharpness by an average factor of 1.6, edge strength by a factor of 1.17, and RMS contrast by a factor of 1.11.
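
The abstract does not give the full objective function, so the following is only a minimal sketch of the general recipe it names: half-quadratic splitting with alternating minimization for non-blind deblurring of a single band under an L1 gradient prior. The PMP term and the inter-band gradient similarity prior, which are the paper's actual contributions, are omitted, and all function names and parameter values are illustrative assumptions.

```python
import numpy as np

def psf2otf(psf, shape):
    """Zero-pad the PSF to `shape`, centre it at the origin, and take its FFT."""
    pad = np.zeros(shape)
    kh, kw = psf.shape
    pad[:kh, :kw] = psf
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(pad)

def hqs_deblur(y, psf, lam=2e-3, beta0=1.0, beta_max=256.0):
    """Non-blind deblurring of one band by half-quadratic splitting (sketch)."""
    K = psf2otf(psf, y.shape)
    Dx = psf2otf(np.array([[1.0, -1.0]]), y.shape)   # horizontal gradient filter
    Dy = psf2otf(np.array([[1.0], [-1.0]]), y.shape)  # vertical gradient filter
    denom_k = np.abs(K) ** 2
    denom_d = np.abs(Dx) ** 2 + np.abs(Dy) ** 2
    KtY = np.conj(K) * np.fft.fft2(y)
    x, beta = y.copy(), beta0
    while beta <= beta_max:
        # z-step: soft-threshold the image gradients (L1 prior)
        gx = np.real(np.fft.ifft2(Dx * np.fft.fft2(x)))
        gy = np.real(np.fft.ifft2(Dy * np.fft.fft2(x)))
        thr = lam / beta
        zx = np.sign(gx) * np.maximum(np.abs(gx) - thr, 0)
        zy = np.sign(gy) * np.maximum(np.abs(gy) - thr, 0)
        # x-step: closed-form quadratic solve in the Fourier domain
        num = KtY + beta * (np.conj(Dx) * np.fft.fft2(zx) +
                            np.conj(Dy) * np.fft.fft2(zy))
        x = np.real(np.fft.ifft2(num / (denom_k + beta * denom_d + 1e-8)))
        beta *= 2.0                                   # standard HQS schedule
    return x
```

The z-step and x-step alternate while the penalty weight doubles per outer iteration, which is the usual way half-quadratic splitting interleaves the prior and the data term.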

2.
Sensors (Basel) ; 24(20)2024 Oct 10.
Article in English | MEDLINE | ID: mdl-39460026

ABSTRACT

Motion of the object or of the camera platform blurs the acquired image, and this degradation is a major cause of poor-quality images from imaging sensors. Developing an efficient deep-learning-based image processing method to remove the blur artifact is therefore desirable. Deep learning has recently demonstrated significant efficacy in image deblurring, primarily through convolutional neural networks (CNNs) and Transformers. However, the limited receptive fields of CNNs restrict their ability to capture long-range structural dependencies, whereas Transformers excel at modeling these dependencies but are computationally expensive for high-resolution inputs and lack the appropriate inductive bias. To overcome these challenges, we propose an Efficient Hybrid Network (EHNet) that employs CNN encoders for local feature extraction and Transformer decoders with a dual-attention module to capture spatial and channel-wise dependencies. This synergy facilitates the acquisition of rich contextual information for high-quality image deblurring. Additionally, we introduce the Simple Feature-Embedding Module (SFEM), which replaces the pointwise and depthwise convolutions to generate simplified embedding features in the self-attention mechanism, substantially reducing computational complexity and memory usage while maintaining overall performance. Finally, comprehensive experiments show that our compact model yields promising quantitative and qualitative results for image deblurring on various benchmark datasets.
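
The dual-attention module is only named in the abstract, so the block below is a hedged stand-in: a generic channel-plus-spatial attention layer in PyTorch. The class name `DualAttention`, the reduction ratio, and the 7x7 spatial gate are assumptions, not EHNet's actual design.

```python
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    """Minimal channel + spatial attention block (illustrative placeholder)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        # Channel attention: pool spatial dims, produce one weight per channel
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: one weight per pixel from pooled channel statistics
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_gate(x)
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * self.spatial_gate(pooled)

feat = torch.randn(1, 64, 32, 32)
print(DualAttention(64)(feat).shape)   # torch.Size([1, 64, 32, 32])
```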

3.
Neural Netw ; 179: 106591, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39111162

ABSTRACT

Most existing model-based and learning-based image deblurring methods use synthetic blur-sharp training pairs to remove blur. However, these approaches do not perform well in real-world applications because blur-sharp training pairs are difficult to obtain and the blur in real-world scenarios is spatially variant. In this paper, we propose a self-supervised learning-based image deblurring method that can handle both uniform and spatially variant blur distributions. Moreover, our method does not require blur-sharp pairs for training. In the proposed method, we design a Deblurring Network (D-Net) and a Spatial Degradation Network (SD-Net): the D-Net performs image deblurring, while the SD-Net simulates the spatially variant degradation. Furthermore, an off-the-shelf pre-trained model is employed as a prior, which facilitates image deblurring. We also design a recursive optimization strategy to accelerate the convergence of the model. Extensive experiments demonstrate that our proposed model achieves favorable performance against existing image deblurring methods.
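
No details of SD-Net's degradation model are given in the abstract, so here is one common, simple way to simulate spatially variant blur for experimentation: blend several uniformly blurred copies of a sharp image using smooth random per-pixel weights. The function name, sigmas, and smoothing scale are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spatially_variant_blur(img, sigmas=(0.5, 2.0, 4.0), seed=0):
    """Blend uniformly blurred copies of `img` with smooth per-pixel weights."""
    rng = np.random.default_rng(seed)
    stack = np.stack([gaussian_filter(img, s) for s in sigmas])      # (K, H, W)
    # Smooth random fields -> softmax -> mixing weights that vary across space
    fields = np.stack([gaussian_filter(rng.standard_normal(img.shape), 20)
                       for _ in sigmas])
    fields = (fields - fields.mean()) / (fields.std() + 1e-12)
    weights = np.exp(2.0 * fields)
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights * stack).sum(axis=0)

sharp = np.random.rand(128, 128)
blurred = spatially_variant_blur(sharp)   # blur strength now varies per region
```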


Subject(s)
Image Processing, Computer-Assisted ; Neural Networks, Computer ; Supervised Machine Learning ; Image Processing, Computer-Assisted/methods ; Deep Learning ; Algorithms ; Humans
4.
Sensors (Basel) ; 24(15)2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39123847

ABSTRACT

Recent studies have proposed methods for extracting latent sharp frames from a single blurred image. However, these methods still fall short of restoring satisfactory images, and most existing methods can only decompose a blurred image into sharp frames at a fixed frame rate. To address these problems, we present an Arbitrary Time Blur Decomposition Triple Generative Adversarial Network (ABDGAN) that restores sharp frames at flexible frame rates. Our framework plays a min-max game among a generator, a discriminator, and a time-code predictor. The generator serves as a time-conditional deblurring network, while the discriminator and the time-code predictor give the generator feedback on producing realistic, sharp images for a given time code. To provide adequate feedback for the generator, we propose a critic-guided (CG) loss based on the collaboration of the discriminator and the time-code predictor. We also propose a pairwise order-consistency (POC) loss to ensure that each pixel in a predicted image consistently corresponds to the same ground-truth frame. Extensive experiments show that our method outperforms previously reported methods in both qualitative and quantitative evaluations. Compared to the best competitor, the proposed ABDGAN improves PSNR, SSIM, and LPIPS on the GoPro test set by 16.67%, 9.16%, and 36.61%, respectively. On the B-Aist++ test set, our method improves PSNR, SSIM, and LPIPS by 6.99%, 2.38%, and 17.05%, respectively, over the best competing method.

5.
Front Ophthalmol (Lausanne) ; 4: 1332197, 2024.
Article in English | MEDLINE | ID: mdl-38984141

ABSTRACT

Fundus cameras are widely used by ophthalmologists for monitoring and diagnosing retinal pathologies. Unfortunately, no optical system is perfect, and the visibility of retinal images can be greatly degraded due to the presence of problematic illumination, intraocular scattering, or blurriness caused by sudden movements. To improve image quality, different retinal image restoration/enhancement techniques have been developed, which play an important role in improving the performance of various clinical and computer-assisted applications. This paper gives a comprehensive review of these restoration/enhancement techniques, discusses their underlying mathematical models, and shows how they may be effectively applied in real-life practice to increase the visual quality of retinal images for potential clinical applications including diagnosis and retinal structure recognition. All three main topics of retinal image restoration/enhancement techniques, i.e., illumination correction, dehazing, and deblurring, are addressed. Finally, some considerations about challenges and the future scope of retinal image restoration/enhancement techniques will be discussed.

6.
Med Image Anal ; 97: 103256, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39047605

ABSTRACT

Recently, large pretrained vision foundation models based on masked image modeling (MIM) have attracted unprecedented attention and achieved remarkable performance across various tasks. However, the study of MIM for ultrasound imaging remains relatively unexplored, and most importantly, current MIM approaches fail to account for the gap between natural images and ultrasound, as well as the intrinsic imaging characteristics of the ultrasound modality, such as the high noise-to-signal ratio. In this paper, motivated by the unique high noise-to-signal ratio property in ultrasound, we propose a deblurring MIM approach specialized to ultrasound, which incorporates a deblurring task into the pretraining proxy task. The incorporation of deblurring facilitates the pretraining to better recover the subtle details within ultrasound images that are vital for subsequent downstream analysis. Furthermore, we employ a multi-scale hierarchical encoder to extract both local and global contextual cues for improved performance, especially on pixel-wise tasks such as segmentation. We conduct extensive experiments involving 280,000 ultrasound images for the pretraining and evaluate the downstream transfer performance of the pretrained model on various disease diagnoses (nodule, Hashimoto's thyroiditis) and task types (classification, segmentation). The experimental results demonstrate the efficacy of the proposed deblurring MIM, achieving state-of-the-art performance across a wide range of downstream tasks and datasets. Overall, our work highlights the potential of deblurring MIM for ultrasound image analysis, presenting an ultrasound-specific vision foundation model.


Subject(s)
Ultrasonography ; Ultrasonography/methods ; Humans ; Algorithms ; Image Interpretation, Computer-Assisted/methods ; Signal-To-Noise Ratio ; Image Processing, Computer-Assisted/methods
7.
Comput Biol Med ; 178: 108783, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38909446

ABSTRACT

Magnetic particle imaging (MPI) is an emerging non-invasive medical imaging tomography technology based on magnetic particles, with excellent imaging depth penetration and high sensitivity and contrast. Spatial resolution and signal-to-noise ratio (SNR) are key performance metrics of MPI and are directly influenced by the gradient of the selection field (SF). Increasing the SF gradient can improve the spatial resolution of MPI but leads to a decrease in SNR. Deep learning (DL) methods may enable obtaining high-resolution images from low-resolution images to improve MPI resolution under low-gradient conditions. However, existing DL methods overlook the physical processes contributing to the blurring of MPI images, resulting in low interpretability and hindering breakthroughs in resolution. To address this issue, we propose a dual-channel end-to-end network with prior knowledge embedding for MPI (DENPK-MPI) to effectively establish a latent mapping between low-gradient and high-gradient images, thus improving MPI resolution without compromising SNR. By seamlessly integrating the MPI PSF with the DL paradigm, DENPK-MPI achieves a significant improvement in spatial resolution. Simulation, phantom, and in vivo MPI experiments collectively confirm that our method improves the resolution of low-gradient MPI images without sacrificing SNR, decreasing the full width at half maximum by 14.8%-23.8% and achieving 18.2%-27.3% higher image reconstruction accuracy than other DL methods. In conclusion, we propose a DL method that incorporates MPI prior knowledge, improving the spatial resolution of MPI without compromising SNR and offering improved potential for biomedical applications.


Subject(s)
Image Processing, Computer-Assisted ; Phantoms, Imaging ; Signal-To-Noise Ratio ; Image Processing, Computer-Assisted/methods ; Animals ; Mice ; Deep Learning ; Humans ; Magnetite Nanoparticles/chemistry ; Tomography/methods
8.
Sensors (Basel) ; 24(12)2024 Jun 08.
Article in English | MEDLINE | ID: mdl-38931524

ABSTRACT

Building occupancy information is significant for a variety of reasons, from allocation of resources in smart buildings to responding during emergency situations. As most people spend more than 90% of their time indoors, a comfortable indoor environment is crucial. To ensure comfort, traditional HVAC systems condition rooms assuming maximum occupancy, accounting for more than 50% of buildings' energy budgets in the US. Occupancy level is a key factor in ensuring energy efficiency, as occupancy-controlled HVAC systems can reduce energy waste by conditioning rooms based on actual usage. Numerous studies have focused on developing occupancy estimation models leveraging existing sensors, with camera-based methods gaining popularity due to their high precision and widespread availability. However, the main concern with using cameras for occupancy estimation is the potential violation of occupants' privacy. Unlike previous video-/image-based occupancy estimation methods, we addressed the issue of occupants' privacy in this work by proposing and investigating both motion-based and motion-independent occupancy counting methods on intentionally blurred video frames. Our proposed approach included the development of a motion-based technique that inherently preserves privacy, as well as motion-independent techniques such as detection-based and density-estimation-based methods. To improve the accuracy of the motion-independent approaches, we utilized deblurring methods: an iterative statistical technique and a deep-learning-based method. Furthermore, we conducted an analysis of the privacy implications of our motion-independent occupancy counting system by comparing the original, blurred, and deblurred frames using different image quality assessment metrics. This analysis provided insights into the trade-off between occupancy estimation accuracy and the preservation of occupants' visual privacy. The combination of iterative statistical deblurring and density estimation achieved a 16.29% counting error, outperforming our other proposed approaches while preserving occupants' visual privacy to a certain extent. Our multifaceted approach aims to contribute to the field of occupancy estimation by proposing a solution that seeks to balance the trade-off between accuracy and privacy. While further research is needed to fully address this complex issue, our work provides insights and a step towards a more privacy-aware occupancy estimation system.
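
The "iterative statistical technique" is not named in the abstract; the classic member of that family is Richardson-Lucy deconvolution, sketched by hand below (the paper's exact variant may differ). The box PSF used for the privacy blur and the frame size are placeholders.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    """Richardson-Lucy deconvolution: iterative statistical deblurring sketch."""
    estimate = np.full_like(blurred, blurred.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)          # multiplicative correction
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Example: privacy blur of a video frame followed by deblurring for counting
psf = np.ones((9, 9)) / 81.0                  # box blur used as the privacy blur
frame = np.random.rand(120, 160)              # stand-in for a camera frame
blurred_frame = fftconvolve(frame, psf, mode="same")
recovered = richardson_lucy(blurred_frame, psf)
```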

9.
Telemed J E Health ; 30(9): 2477-2482, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38934135

ABSTRACT

Background: Blurry images in teledermatology and consultation increase diagnostic difficulty for both deep learning models and physicians. We aim to determine the extent to which diagnostic accuracy is restored after blurry images are deblurred by deep learning models. Methods: We used 19,191 skin images from a public skin image dataset covering 23 skin disease categories, 54 skin images from a public dataset of blurry skin images, and 53 blurry dermatology consultation photos from a medical center to compare the diagnostic accuracy of trained diagnostic deep learning models and subjective sharpness between blurry and deblurred images. We evaluated five deblurring models, covering motion blur, Gaussian blur, Bokeh blur, mixed slight blur, and mixed strong blur. Main Outcomes and Measures: Diagnostic accuracy was measured as the sensitivity and precision of correct model prediction of the skin disease category. Sharpness was rated by board-certified dermatologists on a 4-point scale, with 4 being the highest image clarity. Results: The sensitivity of the diagnostic models dropped by 0.15 and 0.22 on slightly and strongly blurred images, respectively, and deblurring restored 0.14 and 0.17 for each group. Sharpness ratings by dermatologists improved from 1.87 to 2.51 after deblurring. Activation maps showed that the focus of the diagnostic models was compromised by blurriness but was restored after deblurring. Conclusions: Deblurring models can restore the diagnostic accuracy of diagnostic models on blurry images and increase the image sharpness perceived by dermatologists. Such models can be incorporated into teledermatology to aid the diagnosis of blurry images.


Subject(s)
Deep Learning ; Dermatology ; Skin Diseases ; Telemedicine ; Humans ; Skin Diseases/diagnosis ; Skin Diseases/diagnostic imaging ; Dermatology/methods ; Photography
10.
Phys Med Biol ; 69(11)2024 May 14.
Article in English | MEDLINE | ID: mdl-38640913

ABSTRACT

Objective. Digital breast tomosynthesis (DBT) has significantly improved the diagnosis of breast cancer due to its high sensitivity and specificity in detecting breast lesions compared to two-dimensional mammography. However, one of the primary challenges in DBT is the image blur resulting from x-ray source motion, particularly in DBT systems with a source in continuous-motion mode. This motion-induced blur can degrade the spatial resolution of DBT images, potentially affecting the visibility of subtle lesions such as microcalcifications. Approach. We addressed this issue by deriving an analytical in-plane source blur kernel for DBT images based on imaging geometry and proposing a post-processing image deblurring method with a generative diffusion model as an image prior. Main results. We showed that the source blur could be approximated by a shift-invariant kernel over the DBT slice at a given height above the detector, and we validated the accuracy of our blur kernel modeling through simulation. We also demonstrated the ability of the diffusion model to generate realistic DBT images. The proposed deblurring method successfully enhanced spatial resolution when applied to DBT images reconstructed with detector blur and correlated noise modeling. Significance. Our study demonstrated the advantages of modeling the imaging system components such as source motion blur for improving DBT image quality.
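
The diffusion-prior deblurring itself is beyond a short snippet, so the sketch below only illustrates the shift-invariant-kernel idea: blur a reconstructed slice with a known in-plane kernel and invert it with plain Wiener filtering as a stand-in restorer. The 1-D kernel and the noise-to-signal ratio are illustrative assumptions, not the kernel the paper derives from imaging geometry.

```python
import numpy as np
from scipy.signal import fftconvolve

def wiener_deblur(slice_img, kernel, nsr=1e-2):
    """Deblur a slice with a known shift-invariant kernel via Wiener filtering."""
    H, W = slice_img.shape
    pad = np.zeros((H, W))
    kh, kw = kernel.shape
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))   # centre at origin
    K = np.fft.fft2(pad)
    X = np.conj(K) * np.fft.fft2(slice_img) / (np.abs(K) ** 2 + nsr)
    return np.real(np.fft.ifft2(X))

kernel = np.ones((1, 7)) / 7.0          # illustrative 1-D blur along the sweep
slice_img = np.random.rand(256, 256)    # stand-in for a reconstructed DBT slice
blurred = fftconvolve(slice_img, kernel, mode="same")
restored = wiener_deblur(blurred, kernel)
```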


Subject(s)
Mammography ; Mammography/methods ; Humans ; Diffusion ; Image Processing, Computer-Assisted/methods ; Breast/diagnostic imaging ; Breast Neoplasms/diagnostic imaging ; Breast Neoplasms/physiopathology ; X-Rays ; Movement ; Female ; Motion
11.
Phys Med Biol ; 69(10)2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38604177

ABSTRACT

Objective. To improve intravoxel incoherent motion (IVIM) magnetic resonance imaging quality using a new image denoising technique and a model-independent parameterization of the signal versus b-value curve. Approach. IVIM images were acquired for 13 head-and-neck patients prior to radiotherapy. Post-radiotherapy scans were also acquired for five of these patients. Images were denoised prior to parameter fitting using neural blind deconvolution, a method of solving the ill-posed mathematical problem of blind deconvolution using neural networks. The signal decay curve was then quantified in terms of several area under the curve (AUC) parameters. Improvements in image quality were assessed using blind image quality metrics, total variation (TV), and the correlations between parameter changes in parotid glands and radiotherapy dose levels. The validity of blur kernel predictions was assessed by testing the method's ability to recover artificial 'pseudokernels'. AUC parameters were compared with monoexponential, biexponential, and triexponential model parameters in terms of their correlations with dose, contrast-to-noise ratio (CNR) around parotid glands, and relative importance via principal component analysis. Main results. Image denoising improved blind image quality metrics, smoothed the signal versus b-value curve, and strengthened correlations between IVIM parameters and dose levels. Image TV was reduced and parameter CNRs generally increased following denoising. AUC parameters were more correlated with dose and had higher relative importance than exponential model parameters. Significance. IVIM parameters have high variability in the literature and perfusion-related parameters are difficult to interpret. Describing the signal versus b-value curve with model-independent parameters like the AUC and preprocessing images with denoising techniques could potentially benefit IVIM image parameterization in terms of reproducibility and functional utility.
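
The abstract defines the AUC parameters only as integrals of the signal versus b-value curve, so the helper below computes them with the trapezoidal rule over assumed b-value sub-intervals; the cutoff at b = 200 s/mm2 and the example data are illustrative, not the paper's protocol.

```python
import numpy as np

def ivim_auc_params(b_values, signal, cutoffs=(200.0,)):
    """Model-independent AUC parameters of the normalized signal vs b-value curve."""
    b = np.asarray(b_values, dtype=float)
    s = np.asarray(signal, dtype=float) / signal[0]      # normalize to S(b=0)
    edges = (b[0], *cutoffs, b[-1])                      # assumed sub-intervals
    segments = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (b >= lo) & (b <= hi)
        segments.append(np.trapz(s[mask], b[mask]))
    return {"auc_total": np.trapz(s, b), "auc_segments": segments}

b = np.array([0, 20, 50, 100, 200, 400, 800])            # example b-values (s/mm2)
sig = np.array([1.00, 0.93, 0.88, 0.82, 0.74, 0.62, 0.45])
print(ivim_auc_params(b, sig))
```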


Subject(s)
Image Processing, Computer-Assisted ; Magnetic Resonance Imaging ; Signal-To-Noise Ratio ; Humans ; Magnetic Resonance Imaging/methods ; Image Processing, Computer-Assisted/methods ; Movement ; Head and Neck Neoplasms/diagnostic imaging ; Head and Neck Neoplasms/radiotherapy
12.
Bioengineering (Basel) ; 11(4)2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38671752

ABSTRACT

In recent years, indirect digital radiography detectors have been actively studied to improve radiographic image performance with low radiation exposure. This study aimed to achieve low-dose radiation imaging with a thick scintillation detector while simultaneously obtaining the resolution of a thin scintillation detector. The proposed method was used to predict the optimal point spread function (PSF) between thin and thick scintillation detectors by considering image quality assessment (IQA). The process of identifying the optimal PSF was performed on each sub-band in the wavelet domain to improve restoration accuracy. In the experiments, the edge preservation index (EPI) values of the non-blind deblurred image with a blurring sigma of σ = 5.13 pixels and the image obtained with optimal parameters from the thick scintillator using the proposed method were approximately 0.62 and 0.76, respectively. The coefficient of variation (COV) values for the two images were approximately 1.02 and 0.63, respectively. The proposed method was validated through simulations and experimental results, and its viability is expected to be verified on various radiological imaging systems.
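
The abstract reports EPI and COV without spelling out the formulas; the helpers below use common definitions (EPI as the correlation of Laplacian-filtered images, COV as standard deviation over mean in a nominally uniform region), which may differ in detail from the paper's implementation.

```python
import numpy as np
from scipy.ndimage import laplace

def edge_preservation_index(reference, restored):
    """EPI as the zero-mean correlation of Laplacian-filtered images (common form)."""
    dr = laplace(np.asarray(reference, dtype=float))
    dt = laplace(np.asarray(restored, dtype=float))
    dr -= dr.mean()
    dt -= dt.mean()
    return float((dr * dt).sum() / np.sqrt((dr ** 2).sum() * (dt ** 2).sum()))

def coefficient_of_variation(region):
    """COV = standard deviation / mean over a nominally uniform region."""
    region = np.asarray(region, dtype=float)
    return float(region.std() / region.mean())
```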

13.
Phys Med Biol ; 69(8)2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38513292

ABSTRACT

Objective. To simultaneously deblur and supersample prostate specific membrane antigen (PSMA) positron emission tomography (PET) images using neural blind deconvolution. Approach. Blind deconvolution is a method of estimating the hypothetical 'deblurred' image along with the blur kernel (related to the point spread function) simultaneously. Traditional maximum a posteriori blind deconvolution methods require stringent assumptions and suffer from convergence to a trivial solution. A method of modelling the deblurred image and kernel with independent neural networks, called 'neural blind deconvolution', demonstrated success for deblurring 2D natural images in 2020. In this work, we adapt neural blind deconvolution to deblur PSMA PET images while simultaneously supersampling to double the original resolution. We compare this methodology with several interpolation methods in terms of the resulting blind image quality metrics and test the model's ability to predict accurate kernels by re-running the model after applying artificial 'pseudokernels' to deblurred images. The methodology was tested on a retrospective set of 30 prostate patients as well as phantom images containing spherical lesions of various volumes. Main results. Neural blind deconvolution led to improvements in image quality over other interpolation methods in terms of blind image quality metrics, recovery coefficients, and visual assessment. Predicted kernels were similar between patients, and the model accurately predicted several artificially applied pseudokernels. Localization of activity in phantom spheres was improved after deblurring, allowing small lesions to be more accurately defined. Significance. The intrinsically low spatial resolution of PSMA PET leads to partial volume effects (PVEs) which negatively impact uptake quantification in small regions. The proposed method can be used to mitigate this issue and can be straightforwardly adapted to other imaging modalities.
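
As a rough illustration of neural blind deconvolution, the sketch below jointly optimizes two small generators, one producing the latent image and one producing a normalized kernel, so that their convolution reproduces the blurred input. The architectures, step count, and kernel size are placeholders, and the paper's supersampling step is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageGen(nn.Module):
    """Tiny CNN mapping a fixed noise tensor to the latent sharp image."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())
    def forward(self, z):
        return self.net(z)

class KernelGen(nn.Module):
    """Tiny MLP producing a blur kernel; softmax keeps it non-negative, sum 1."""
    def __init__(self, ksize=15, hidden=64):
        super().__init__()
        self.ksize = ksize
        self.net = nn.Sequential(nn.Linear(64, hidden), nn.ReLU(),
                                 nn.Linear(hidden, ksize * ksize))
    def forward(self, z):
        return F.softmax(self.net(z), dim=-1).view(1, 1, self.ksize, self.ksize)

def neural_blind_deconv(blurred, steps=2000, ksize=15, lr=1e-3):
    """Fit latent image and kernel so their convolution matches the observation.
    `blurred` is a 2-D float tensor; no supersampling in this bare-bones sketch."""
    y = blurred[None, None]
    g_img, g_ker = ImageGen(), KernelGen(ksize)
    z_img = torch.randn_like(y) * 0.1
    z_ker = torch.randn(1, 64)
    opt = torch.optim.Adam(list(g_img.parameters()) + list(g_ker.parameters()), lr=lr)
    for _ in range(steps):
        x, k = g_img(z_img), g_ker(z_ker)
        reblurred = F.conv2d(x, k, padding=ksize // 2)
        loss = F.mse_loss(reblurred, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return g_img(z_img).detach()[0, 0], g_ker(z_ker).detach()[0, 0]
```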


Subject(s)
Image Processing, Computer-Assisted ; Positron-Emission Tomography ; Male ; Humans ; Image Processing, Computer-Assisted/methods ; Retrospective Studies ; Positron-Emission Tomography/methods
14.
Sci Prog ; 107(1): 368504241231161, 2024.
Article in English | MEDLINE | ID: mdl-38400510

ABSTRACT

In modern urban traffic systems, intersection monitoring systems are used to monitor traffic flows and track vehicles by recognizing license plates. However, intersection monitors often produce motion-blurred images because of the rapid movement of cars. If a deep learning network is used for image deblurring, the blur can be removed first and the complete vehicle information then recovered, improving the recognition rate. To restore a dynamically blurred image to a sharp one, this paper proposes a multi-scale modified U-Net image deblurring network using dilated convolution and employs a variable scaling iterative strategy to make the scheme more adaptable to real blurred images. The multi-scale architecture uses scale changes to learn the characteristics of images at different scales, and dilated convolution enlarges the receptive field and extracts more information from features without increasing the computational cost. Experimental results are obtained on a synthetic motion-blurred image dataset and a real blurred image dataset for comparison with existing deblurring methods, and they demonstrate that the proposed image deblurring method performs favorably on real motion-blurred images.
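
A small PyTorch snippet illustrating the dilated-convolution point made above: a 3x3 convolution with dilation 2 covers a 5x5 footprint while keeping the same parameter count as its undilated counterpart. The layer sizes are arbitrary and not taken from the paper's network.

```python
import torch
import torch.nn as nn

# Two 3x3 convolutions with identical parameter counts; dilation=2 widens the
# effective footprint from 3x3 to 5x5 without adding weights.
standard = nn.Conv2d(32, 32, kernel_size=3, padding=1, dilation=1)
dilated = nn.Conv2d(32, 32, kernel_size=3, padding=2, dilation=2)

n_params = lambda m: sum(p.numel() for p in m.parameters())
print(n_params(standard), n_params(dilated))   # 9248 9248

x = torch.randn(1, 32, 64, 64)
print(standard(x).shape, dilated(x).shape)     # both torch.Size([1, 32, 64, 64])
```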

15.
Magn Reson Med ; 91(3): 1200-1208, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38010065

ABSTRACT

PURPOSE: Robust implementation of spiral imaging requires efficient deblurring. A deblurring method was previously proposed to separate and deblur water and fat simultaneously, based on image-space kernel operations. The goal of this work is to improve the performance of the previous deblurring method using kernels with better properties. METHODS: Four types of kernels were formed using different models for the region outside the collected k-space as well as low-pass preconditioning (LP). The performances of the kernels were tested and compared with both phantom and volunteer data. Data were also synthesized to evaluate the SNR. RESULTS: The proposed "square" kernels are much more compact than the previously used circular kernels. Square kernels have better properties in terms of normalized RMS error, structural similarity index measure, and SNR. The square kernels created by LP demonstrated the best performance of artifact mitigation on phantom data. CONCLUSIONS: The sizes of the blurring kernels and thus the computational cost can be reduced by the proposed square kernels instead of the previous circular ones. Using LP may further enhance the performance.


Subject(s)
Image Processing, Computer-Assisted ; Magnetic Resonance Imaging ; Humans ; Image Processing, Computer-Assisted/methods ; Magnetic Resonance Imaging/methods ; Algorithms ; Phantoms, Imaging
16.
Comput Biol Med ; 165: 107461, 2023 10.
Article in English | MEDLINE | ID: mdl-37708716

ABSTRACT

Magnetic particle imaging (MPI) is an emerging medical imaging technique with high sensitivity, high contrast, and excellent depth penetration. In MPI, x-space is a reconstruction method that transforms the measured voltages into particle concentrations. The reconstructed native image can be modeled as a convolution of the magnetic particle concentration with a point-spread function (PSF), and the PSF is one of the key parameters in deconvolution. However, accurately measuring or modeling the PSF used for deconvolution is challenging because of varying environments and magnetic particle relaxation. Inaccurate PSF estimation may lead to loss of content structure in the MPI image, especially in low gradient fields. In this study, we developed a Dual Adversarial Network (DAN) with a patch-wise contrastive constraint to deblur MPI images. This method can overcome the limitations of unpaired data in data acquisition scenarios and removes blur around boundaries more effectively than the common deconvolution method. We evaluated the performance of the proposed DAN model on simulated and real data. Experimental results confirm that our model performs favorably against the deconvolution method commonly used for deblurring MPI images and against other GAN-based deep learning models.


Subject(s)
Diagnostic Imaging ; Magnetic Phenomena
17.
Magn Reson Med ; 90(6): 2362-2374, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37578085

ABSTRACT

PURPOSE: Deep learning superresolution (SR) is a promising approach to reduce MRI scan time without requiring custom sequences or iterative reconstruction. Previous deep learning SR approaches have generated low-resolution training images by simple k-space truncation, but this does not properly model in-plane turbo spin echo (TSE) MRI resolution degradation, which has variable T2 relaxation effects in different k-space regions. To fill this gap, we developed a T2-deblurred deep learning SR method for the SR of 3D-TSE images. METHODS: An SR generative adversarial network was trained using physically realistic resolution degradation (asymmetric T2 weighting of raw high-resolution k-space data). For comparison, we trained the same network structure on previous degradation models without TSE physics modeling. We tested all models for both retrospective and prospective SR with a 3 × 3 acceleration factor (in the two phase-encoding directions) of genetically engineered mouse embryo model TSE-MR images. RESULTS: The proposed method can produce high-quality 3 × 3 SR images for a typical 500-slice volume with 6-7 mouse embryos. Because 3 × 3 SR was performed, the image acquisition time can be reduced from 15 h to 1.7 h. Compared to previous SR methods without TSE modeling, the proposed method achieved the best quantitative imaging metrics for both retrospective and prospective evaluations and achieved the best imaging-quality expert scores for prospective evaluation. CONCLUSION: The proposed T2-deblurring method improved accuracy and image quality of deep learning-based SR of TSE MRI. This method has the potential to accelerate TSE image acquisition by a factor of up to 9.
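
The degradation model is described only as asymmetric T2 weighting of high-resolution k-space, so the sketch below contrasts plain central k-space truncation with a T2-weighted version that assumes a linear echo ordering along one phase-encoding axis; the echo spacing, T2 value, and ordering are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def degrade_kspace(img, factor=3, esp_ms=5.0, t2_ms=80.0, tse=True):
    """Simulate low-resolution acquisition along axis 0: keep the central
    1/factor of phase-encode lines; if `tse`, weight each kept line by
    exp(-TE/T2) under an assumed linear echo ordering."""
    k = np.fft.fftshift(np.fft.fft2(img))
    H, _ = k.shape
    keep = H // factor
    lo = (H - keep) // 2
    mask = np.zeros((H, 1))
    mask[lo:lo + keep] = 1.0
    if tse:
        te = (np.arange(keep) + 1) * esp_ms            # later lines decay more
        mask[lo:lo + keep, 0] *= np.exp(-te / t2_ms)
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k * mask)))

img = np.random.rand(192, 192)
plain = degrade_kspace(img, tse=False)    # previous approach: simple truncation
tse_lr = degrade_kspace(img, tse=True)    # T2-weighted (physics-aware) version
```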


Subject(s)
Deep Learning ; Animals ; Mice ; Retrospective Studies ; Magnetic Resonance Imaging/methods ; Imaging, Three-Dimensional/methods
18.
bioRxiv ; 2023 Sep 16.
Article in English | MEDLINE | ID: mdl-37546886

ABSTRACT

Improving the spatial resolution of a fluorescence microscope has been an ongoing challenge in the imaging community. To address this challenge, a variety of approaches have been taken, ranging from instrumentation development to image post-processing. An example of the latter is deconvolution, where images are numerically deblurred based on a knowledge of the microscope point spread function. However, deconvolution can easily lead to noise-amplification artifacts. Deblurring by post-processing can also lead to negativities or fail to conserve local linearity between sample and image. We describe here a simple image deblurring algorithm based on pixel reassignment that inherently avoids such artifacts and can be applied to general microscope modalities and fluorophore types. Our algorithm helps distinguish nearby fluorophores even when these are separated by distances smaller than the conventional resolution limit, helping facilitate, for example, the application of single-molecule localization microscopy in dense samples. We demonstrate the versatility and performance of our algorithm under a variety of imaging conditions.
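
The abstract does not state the reassignment rule, so the following is a deliberately simplified reading of deblurring by pixel reassignment: each pixel's intensity is moved a short distance up the local gradient of a smoothed copy of the image and re-accumulated, which conserves total signal and avoids negativities. The gain and smoothing parameters are illustrative, not the paper's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pixel_reassign_deblur(img, sigma=2.0, gain=1.5):
    """Simplified pixel-reassignment sketch: shift intensities up the local
    gradient of a smoothed copy of the image, then re-accumulate them."""
    smooth = gaussian_filter(np.asarray(img, dtype=float), sigma)
    eps = 1e-3 * smooth.max()
    gy, gx = np.gradient(np.log(smooth + eps))
    dy = np.clip(gain * sigma ** 2 * gy, -sigma, sigma)   # bounded displacements
    dx = np.clip(gain * sigma ** 2 * gx, -sigma, sigma)
    H, W = smooth.shape
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    ny = np.clip(np.round(yy + dy), 0, H - 1).astype(int)
    nx = np.clip(np.round(xx + dx), 0, W - 1).astype(int)
    out = np.zeros_like(smooth)
    np.add.at(out, (ny.ravel(), nx.ravel()), np.asarray(img, dtype=float).ravel())
    return out   # total intensity is conserved, and no negativities can appear
```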

19.
Plant Methods ; 19(1): 87, 2023 Aug 22.
Article in English | MEDLINE | ID: mdl-37608384

ABSTRACT

BACKGROUND: Efficient and site-specific weed management is a critical step in many agricultural tasks. Image captures from drones and modern machine learning based computer vision methods can be used to assess weed infestation in agricultural fields more efficiently. However, the image quality of the captures can be affected by several factors, including motion blur. Image captures can be blurred because the drone moves during the image capturing process, e.g. due to wind pressure or camera settings. These influences complicate the annotation of training and test samples and can also lead to reduced predictive power in segmentation and classification tasks. RESULTS: In this study, we propose DeBlurWeedSeg, a combined deblurring and segmentation model for weed and crop segmentation in motion blurred images. For this purpose, we first collected a new dataset of matching sharp and naturally blurred image pairs of real sorghum and weed plants from drone images of the same agricultural field. The data was used to train and evaluate the performance of DeBlurWeedSeg on both sharp and blurred images of a hold-out test-set. We show that DeBlurWeedSeg outperforms a standard segmentation model that does not include an integrated deblurring step, with a relative improvement of [Formula: see text] in terms of the Sørensen-Dice coefficient. CONCLUSION: Our combined deblurring and segmentation model DeBlurWeedSeg is able to accurately segment weeds from sorghum and background, in both sharp as well as motion blurred drone captures. This has high practical implications, as lower error rates in weed and crop segmentation could lead to better weed control, e.g. when using robots for mechanical weed removal.
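
Segmentation quality above is reported with the Sørensen-Dice coefficient; below is a small helper showing the standard definition on binary masks, 2|A∩B| / (|A| + |B|), with toy masks for illustration.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Sørensen-Dice coefficient for binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

weed_pred = np.array([[0, 1, 1], [0, 1, 0]])   # toy predicted weed mask
weed_true = np.array([[0, 1, 0], [0, 1, 0]])   # toy ground-truth mask
print(dice_coefficient(weed_pred, weed_true))  # 0.8
```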

20.
Ann Nucl Med ; 37(11): 596-604, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37610591

ABSTRACT

OBJECTIVE: Non-blind image deblurring with deep learning was performed on blurred numerical brain images without point spread function (PSF) reconstruction to obtain edge-artifact (EA)-free images. This study uses numerical simulation to investigate the mechanism of EA in PSF reconstruction based on the spatial frequency characteristics of EA-free images. METHODS: In 256 × 256 matrix brain images, the signal values of gray matter (GM), white matter, and cerebrospinal fluid were set to 1, 0.25, and 0.05, respectively. We assumed ideal projection data of a two-dimensional (2D) parallel beam with no degradation factors other than detector response blur, to precisely characterize the EA produced by the PSF reconstruction algorithm from blurred projection data. The detector response was assumed to be a shift-invariant, one-dimensional (1D) Gaussian function with 2-5 mm full width at half maximum (FWHM). Images without PSF reconstruction (non-PSF), with PSF reconstruction without regularization (PSF), and with regularization by a relative difference function (PSF-RD) were generated by ordered subset expectation maximization (OSEM). For non-PSF, image deblurring with a deep image prior (DIP) was applied using a 2D Gaussian function with 2-5 mm FWHM. The 1D object-specific modulation transfer function (1D-OMTF), the ratio of the 1D amplitude spectra of the original and reconstructed images, was used as the index of spatial frequency characteristics. RESULTS: When the detector response exceeded 3 mm FWHM, EA was observed in PSF at GM borders and in narrow GM. No remarkable EA was observed with DIP, and the FWHM estimated from the recovery coefficient for the deblurred non-PSF image at 5 mm FWHM was reduced to 3 mm or less. PSF at 5 mm FWHM showed higher spatial frequency characteristics than DIP up to around 2.2 cycles/cm but lower ones beyond 3 cycles/cm. PSF-RD showed almost the same spatial frequency characteristics as DIP above 3 cycles/cm but was inferior below 3 cycles/cm. PSF-RD had lower spatial resolution than DIP. CONCLUSIONS: Unlike DIP, PSF lacks high-frequency components around the Nyquist frequency, generating EA. PSF-RD mitigates EA while simultaneously suppressing the signal, diminishing spatial resolution.
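
The 1D-OMTF is defined above as the ratio of 1D amplitude spectra of the original and reconstructed images; the helper below computes it with a row-averaged FFT. The averaging direction, the ratio orientation (reconstructed over original), and the pixel size are assumptions made for illustration.

```python
import numpy as np

def omtf_1d(original, reconstructed, pixel_cm=0.1):
    """1D object-specific MTF: ratio of row-averaged 1-D amplitude spectra
    (reconstructed / original); the paper's exact 1-D reduction is not stated."""
    amp = lambda img: np.abs(np.fft.rfft(np.asarray(img, dtype=float), axis=1)).mean(axis=0)
    freqs = np.fft.rfftfreq(original.shape[1], d=pixel_cm)   # cycles/cm
    return freqs, amp(reconstructed) / (amp(original) + 1e-12)

# With a 256 x 256 phantom at ~1 mm pixels, the Nyquist frequency is 5 cycles/cm,
# so the curves can be compared at the 2.2 and 3 cycles/cm points cited above.
```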
