Results 1 - 20 of 97
1.
Magn Reson Med ; 91(3): 1200-1208, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38010065

ABSTRACT

PURPOSE: Robust implementation of spiral imaging requires efficient deblurring. A deblurring method was previously proposed to separate and deblur water and fat simultaneously, based on image-space kernel operations. The goal of this work is to improve the performance of the previous deblurring method using kernels with better properties. METHODS: Four types of kernels were formed using different models for the region outside the collected k-space as well as low-pass preconditioning (LP). The performances of the kernels were tested and compared with both phantom and volunteer data. Data were also synthesized to evaluate the SNR. RESULTS: The proposed "square" kernels are much more compact than the previously used circular kernels. Square kernels have better properties in terms of normalized RMS error, structural similarity index measure, and SNR. The square kernels created by LP demonstrated the best performance of artifact mitigation on phantom data. CONCLUSIONS: The sizes of the blurring kernels and thus the computational cost can be reduced by the proposed square kernels instead of the previous circular ones. Using LP may further enhance the performance.


Subject(s)
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Algorithms , Phantoms, Imaging
2.
Sensors (Basel) ; 24(15)2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39123847

ABSTRACT

Recent studies have proposed methods for extracting latent sharp frames from a single blurred image. However, these methods still suffer from limitations in restoring satisfactory images. In addition, most existing methods are limited to decomposing a blurred image into sharp frames with a fixed frame rate. To address these problems, we present an Arbitrary Time Blur Decomposition Triple Generative Adversarial Network (ABDGAN) that restores sharp frames with flexible frame rates. Our framework plays a min-max game consisting of a generator, a discriminator, and a time-code predictor. The generator serves as a time-conditional deblurring network, while the discriminator and the time-code predictor provide feedback to the generator for producing realistic, sharp images conditioned on the given time code. To provide adequate feedback for the generator, we propose a critic-guided (CG) loss based on the collaboration of the discriminator and the time-code predictor. We also propose a pairwise order-consistency (POC) loss to ensure that each pixel in a predicted image consistently corresponds to the same ground-truth frame. Extensive experiments show that our method outperforms previously reported methods in both qualitative and quantitative evaluations. Compared to the best competitor, the proposed ABDGAN improves PSNR, SSIM, and LPIPS on the GoPro test set by 16.67%, 9.16%, and 36.61%, respectively. For the B-Aist++ test set, our method shows improvements of 6.99%, 2.38%, and 17.05% in PSNR, SSIM, and LPIPS, respectively, compared to the best competitive method.

3.
Sensors (Basel) ; 24(12)2024 Jun 08.
Article in English | MEDLINE | ID: mdl-38931524

ABSTRACT

Building occupancy information is significant for a variety of reasons, from allocation of resources in smart buildings to responding during emergency situations. As most people spend more than 90% of their time indoors, a comfortable indoor environment is crucial. To ensure comfort, traditional HVAC systems condition rooms assuming maximum occupancy, accounting for more than 50% of buildings' energy budgets in the US. Occupancy level is a key factor in ensuring energy efficiency, as occupancy-controlled HVAC systems can reduce energy waste by conditioning rooms based on actual usage. Numerous studies have focused on developing occupancy estimation models leveraging existing sensors, with camera-based methods gaining popularity due to their high precision and widespread availability. However, the main concern with using cameras for occupancy estimation is the potential violation of occupants' privacy. Unlike previous video-/image-based occupancy estimation methods, we addressed the issue of occupants' privacy in this work by proposing and investigating both motion-based and motion-independent occupancy counting methods on intentionally blurred video frames. Our proposed approach included the development of a motion-based technique that inherently preserves privacy, as well as motion-independent techniques such as detection-based and density-estimation-based methods. To improve the accuracy of the motion-independent approaches, we utilized deblurring methods: an iterative statistical technique and a deep-learning-based method. Furthermore, we conducted an analysis of the privacy implications of our motion-independent occupancy counting system by comparing the original, blurred, and deblurred frames using different image quality assessment metrics. This analysis provided insights into the trade-off between occupancy estimation accuracy and the preservation of occupants' visual privacy. 
The combination of iterative statistical deblurring and density estimation achieved a 16.29% counting error, outperforming our other proposed approaches while preserving occupants' visual privacy to a certain extent. Our multifaceted approach aims to contribute to the field of occupancy estimation by proposing a solution that seeks to balance the trade-off between accuracy and privacy. While further research is needed to fully address this complex issue, our work provides insights and a step towards a more privacy-aware occupancy estimation system.
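The abstract does not name its "iterative statistical technique"; a classic member of that family is Richardson-Lucy deconvolution, sketched below in NumPy under the assumption of a known, circular (periodic) blur. Whether the authors used this exact variant is an assumption.

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=50, eps=1e-12):
    """Richardson-Lucy deconvolution (Poisson maximum-likelihood).

    blurred: observed non-negative image; psf: same-shape kernel with
    origin at [0, 0], normalized to sum to 1; blur assumed circular.
    """
    psf_fft = np.fft.rfft2(psf)
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        conv = np.fft.irfft2(np.fft.rfft2(estimate) * psf_fft, s=blurred.shape)
        ratio = blurred / np.maximum(conv, eps)
        # correlation with the PSF = multiplication by the conjugate OTF
        correction = np.fft.irfft2(np.fft.rfft2(ratio) * np.conj(psf_fft),
                                   s=blurred.shape)
        estimate = estimate * correction
    return estimate

# synthetic check: circularly blur a simple scene, then deblur
truth = np.full((32, 32), 0.1)
truth[10:16, 8:14] = 1.0
psf = np.zeros((32, 32))
psf[:3, :3] = 1.0 / 9.0  # 3x3 box blur
blurred = np.fft.irfft2(np.fft.rfft2(truth) * np.fft.rfft2(psf), s=truth.shape)
deblurred = richardson_lucy(blurred, psf)
```

The multiplicative update preserves non-negativity, which is why this family suits photon-count imagery.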

4.
Telemed J E Health ; 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38934135

ABSTRACT

Background: Blurry images in teledermatology consultations increase diagnostic difficulty for both deep learning models and physicians. We aim to determine the extent to which diagnostic accuracy is restored after blurry images are deblurred by deep learning models. Methods: We used 19,191 skin images from a public skin image dataset that includes 23 skin disease categories, 54 skin images from a public dataset of blurry skin images, and 53 blurry dermatology consultation photos from a medical center to compare the diagnostic accuracy of trained diagnostic deep learning models and subjective sharpness between blurry and deblurred images. We evaluated five different deblurring models, including models for motion blur, Gaussian blur, Bokeh blur, mixed slight blur, and mixed strong blur. Main Outcomes and Measures: Diagnostic accuracy was measured as sensitivity and precision of correct model prediction of the skin disease category. Sharpness rating was performed by board-certified dermatologists on a 4-point scale, with 4 being the highest image clarity. Results: The sensitivity of diagnostic models dropped by 0.15 and 0.22 on slightly and strongly blurred images, respectively, and deblurring models restored 0.14 and 0.17 for each group. The sharpness ratings perceived by dermatologists improved from 1.87 to 2.51 after deblurring. Activation maps showed that the focus of the diagnostic models was compromised by the blurriness but was restored after deblurring. Conclusions: Deep learning deblurring models can restore the diagnostic accuracy of diagnostic models on blurry images and increase the image sharpness perceived by dermatologists. They can be incorporated into teledermatology to aid the diagnosis of blurry images.

5.
Magn Reson Med ; 90(5): 2190-2197, 2023 11.
Article in English | MEDLINE | ID: mdl-37379476

ABSTRACT

PURPOSE: The combination of SENSE and spiral imaging with fat/water separation enables high temporal efficiency. However, the corresponding computation increases due to the blurring/deblurring operation across the multi-channel data. This study presents two alternative models to simplify computational complexity relative to the original full model (model 1). The performances of the models are evaluated in terms of computation time and reconstruction error. METHODS: Two approximated spiral MRI reconstruction models were proposed: comprehensive blurring before the coil operation (model 2) and regional blurring before the coil operation (model 3), formed by altering the order of the coil-sensitivity encoding process that distributes signals among the multi-channel coils. Four subjects were recruited, and fully sampled T1- and T2-weighted brain image data were scanned with simulated undersampling to test the computational efficiency and accuracy of the approximation models. RESULTS: Based on the examples, the computation time can be reduced to 31%-47% using model 2, and to 39%-56% using model 3. The quality of the water image remains unchanged among the three models, whereas the primary difference in image quality is in the fat channel. The fat images from model 3 are consistent with those from model 1, but those from model 2 have higher normalized error, differing by up to 4.8%. CONCLUSION: Model 2 provides the fastest computation but exhibits higher error in the fat channel, particularly at high field and with a long acquisition window. Model 3, an abridged alternative, is also faster than the full model and can maintain high accuracy in reconstruction.


Subject(s)
Image Processing, Computer-Assisted , Water , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Algorithms , Brain/diagnostic imaging
6.
Magn Reson Med ; 89(3): 951-963, 2023 03.
Article in English | MEDLINE | ID: mdl-36321560

ABSTRACT

PURPOSE: The goal of this work is to present the implementation of 3D spiral high-resolution MPRAGE and to demonstrate that SNR and scan efficiency increase as the readout time increases. THEORY: Simplified signal equations for MPRAGE indicate that the T1 contrast can be kept approximately the same by a simple relationship between the flip angle and the TR. Furthermore, if the T1 contrast remains the same, image SNR depends on the square root of the product of the total scan time and the readout time. METHODS: MPRAGE spiral sequences were implemented with distributed spirals and spiral staircase on 3 Tesla scanners. Brain images of three volunteers were acquired with different readout times. Spiral images were processed with a joint water-fat separation and deblurring algorithm and compared to Cartesian images. Pure noise data sets were also acquired for SNR evaluation. RESULTS: Consistent T1 weighting can be achieved with various spiral readout lengths, and between spiral MPRAGE imaging and traditional Cartesian MPRAGE imaging. Noise performance analysis demonstrates higher SNR efficiency of spiral MPRAGE imaging with matched T1 contrast compared to the Cartesian reference imaging. CONCLUSION: Fast, high-SNR MPRAGE imaging is feasible with long-readout spiral trajectories.
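The stated scaling, SNR proportional to the square root of total scan time times readout time at matched T1 contrast, can be checked with a one-liner; the times below are illustrative, not from the paper.

```python
import math

def relative_snr(total_scan_time_s, readout_time_ms):
    """Relative SNR under the abstract's scaling law:
    SNR ~ sqrt(total_scan_time * readout_time), T1 contrast held fixed."""
    return math.sqrt(total_scan_time_s * readout_time_ms)

# doubling the spiral readout at a fixed total scan time gains sqrt(2) in SNR
gain = relative_snr(300, 8.0) / relative_snr(300, 4.0)
```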


Subject(s)
Imaging, Three-Dimensional , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Imaging, Three-Dimensional/methods , Brain/diagnostic imaging , Water , Algorithms
7.
Magn Reson Med ; 90(6): 2362-2374, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37578085

ABSTRACT

PURPOSE: Deep learning superresolution (SR) is a promising approach to reduce MRI scan time without requiring custom sequences or iterative reconstruction. Previous deep learning SR approaches have generated low-resolution training images by simple k-space truncation, but this does not properly model in-plane turbo spin echo (TSE) MRI resolution degradation, which has variable T2 relaxation effects in different k-space regions. To fill this gap, we developed a T2-deblurred deep learning SR method for the SR of 3D-TSE images. METHODS: A SR generative adversarial network was trained using physically realistic resolution degradation (asymmetric T2 weighting of raw high-resolution k-space data). For comparison, we trained the same network structure on previous degradation models without TSE physics modeling. We tested all models for both retrospective and prospective SR with a 3 × 3 acceleration factor (in the two phase-encoding directions) of genetically engineered mouse embryo model TSE-MR images. RESULTS: The proposed method can produce high-quality 3 × 3 SR images for a typical 500-slice volume with 6-7 mouse embryos. Because 3 × 3 SR was performed, the image acquisition time can be reduced from 15 h to 1.7 h. Compared to previous SR methods without TSE modeling, the proposed method achieved the best quantitative imaging metrics for both retrospective and prospective evaluations and achieved the best imaging-quality expert scores for prospective evaluation. CONCLUSION: The proposed T2-deblurring method improved the accuracy and image quality of deep learning-based SR of TSE MRI. This method has the potential to accelerate TSE image acquisition by a factor of up to 9.
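A rough sketch of the physically motivated degradation: weight each phase-encode line of the high-resolution k-space by exp(-TE/T2), with TE growing along the echo train. The center-out ordering, echo-train length, and parameter values below are assumptions; the abstract does not give the authors' exact scheme.

```python
import numpy as np

def tse_t2_degrade(kspace_hr, echo_spacing_ms=10.0, t2_ms=60.0, etl=8):
    """Asymmetric T2 weighting of raw k-space along the phase-encode
    axis (axis 0): lines far from the k-space center are acquired at
    later echoes and so are attenuated more (center-out ordering assumed)."""
    n = kspace_hr.shape[0]
    order = np.argsort(np.abs(np.arange(n) - n // 2))  # center lines first
    rank = np.empty(n, dtype=int)
    rank[order] = np.arange(n)
    te = (rank * etl // n + 1) * echo_spacing_ms       # echo time per line
    return kspace_hr * np.exp(-te / t2_ms)[:, None]

# applying it to all-ones k-space exposes the weighting profile itself
weights = tse_t2_degrade(np.ones((64, 64), dtype=complex)).real
```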


Subject(s)
Deep Learning , Animals , Mice , Retrospective Studies , Magnetic Resonance Imaging/methods , Imaging, Three-Dimensional/methods
8.
Magn Reson Med ; 90(5): 1905-1918, 2023 11.
Article in English | MEDLINE | ID: mdl-37392415

ABSTRACT

PURPOSE: To present the validation of a new Flexible Ultra-Short Echo time (FUSE) pulse sequence using a short-T2 phantom. METHODS: FUSE was developed to include a range of RF excitation pulses, trajectories, dimensionalities, and long-T2 suppression techniques, enabling real-time interchangeability of acquisition parameters. Additionally, we developed an improved 3D deblurring algorithm to correct for off-resonance artifacts. Several experiments were conducted to validate the efficacy of FUSE, comparing different approaches for off-resonance artifact correction, variations in RF pulse and trajectory combinations, and long-T2 suppression techniques. All scans were performed on a 3 T system using an in-house short-T2 phantom. The evaluation of results included qualitative comparisons and quantitative assessments of the SNR and contrast-to-noise ratio. RESULTS: Using the capabilities of FUSE, we demonstrated that we could combine a shorter readout duration with our improved deblurring algorithm to effectively reduce off-resonance artifacts. Among the different RF and trajectory combinations, the spiral trajectory with the regular half-sinc pulse achieves the highest SNRs. The dual-echo subtraction technique delivers better short-T2 contrast and superior suppression of water and agar signals, whereas the off-resonance saturation method successfully suppresses water and lipid signals simultaneously. CONCLUSION: In this work, we have validated our new FUSE sequence using a short-T2 phantom, demonstrating that multiple UTE acquisitions can be achieved within a single sequence. This new sequence may be useful for acquiring improved UTE images and for the development of UTE imaging protocols.
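The dual-echo subtraction idea can be illustrated with a mono-exponential signal model: long-T2 species barely decay between the two echoes and cancel, while short-T2 species survive. The echo times and T2 values below are hypothetical, not the paper's protocol.

```python
import numpy as np

def dual_echo_subtraction(te1_ms, te2_ms, m0, t2_ms):
    """S(TE1) - S(TE2) under S(TE) = M0 * exp(-TE / T2)."""
    return m0 * (np.exp(-te1_ms / t2_ms) - np.exp(-te2_ms / t2_ms))

# hypothetical tissue parameters: short-T2 (bone-like) vs. long-T2 (fluid-like)
short_t2 = dual_echo_subtraction(0.05, 4.0, m0=1.0, t2_ms=0.4)
long_t2 = dual_echo_subtraction(0.05, 4.0, m0=1.0, t2_ms=2000.0)
```

The short-T2 species retains most of its signal in the difference image while the long-T2 signal is suppressed by two orders of magnitude.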


Subject(s)
Magnetic Resonance Imaging , Subtraction Technique , Magnetic Resonance Imaging/methods , Phantoms, Imaging , Artifacts , Water , Imaging, Three-Dimensional/methods
9.
NMR Biomed ; 36(10): e4988, 2023 10.
Article in English | MEDLINE | ID: mdl-37381057

ABSTRACT

Ultralow-field (ULF) magnetic resonance imaging (MRI) can suffer from inferior image quality because of low signal-to-noise ratio (SNR). As an efficient way to cover k-space, the spiral acquisition technique has shown great potential for improving imaging SNR efficiency at ULF. The current study aimed to address noise and blur cancelation in the ULF case with a spiral trajectory, and we propose a spiral-out sequence for brain imaging using a portable 50-mT MRI system. The proposed sequence consists of three modules: noise calibration, field map acquisition, and imaging. In the calibration step, transfer coefficients were obtained between signals from the primary and noise-pickup coils to perform electromagnetic interference (EMI) cancelation. Embedded field map acquisition was performed to correct accumulated phase error due to main field inhomogeneity. Considering imaging SNR, a lower bandwidth for data sampling was adopted in the sequence design because the 50-mT scanner operates in a low-SNR regime. Image reconstruction accounted for system imperfections, such as gradient delays and concomitant fields. The proposed method can provide images with higher SNR efficiency compared with its Cartesian counterparts. An improvement in temporal SNR of approximately 23%-44% was measured via phantom and in vivo experiments. Distortion-free images with a noise suppression rate of nearly 80% were obtained by the proposed technique. A comparison was also made with a state-of-the-art EMI cancelation algorithm used in the ULF-MRI system. SNR-efficiency-enhanced spiral acquisitions were investigated for ULF-MR scanners, and future studies could focus on various image contrasts based on our proposed approach to widen ULF applications.
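The calibration step, fitting transfer coefficients from noise-pickup coils to the primary coil, can be sketched as an ordinary least-squares problem; the authors' fitting procedure may differ, and the coil counts and signals below are synthetic.

```python
import numpy as np

def emi_cancel(primary, refs):
    """Fit transfer coefficients c so that refs @ c approximates the EMI
    seen by the primary coil, then subtract the estimated interference.
    primary: (n,) samples from the imaging coil
    refs:    (n, k) samples from k noise-pickup coils"""
    c, *_ = np.linalg.lstsq(refs, primary, rcond=None)
    return primary - refs @ c

rng = np.random.default_rng(1)
t = np.arange(2000)
signal = np.sin(2 * np.pi * t / 50)            # stand-in MR signal
refs = rng.standard_normal((2000, 2))          # pickup-coil recordings
interference = refs @ np.array([0.7, -1.3])    # EMI coupled into the primary
cleaned = emi_cancel(signal + interference, refs)
```

Because the MR signal is nearly uncorrelated with the pickup-coil recordings, the fit removes the interference while leaving the signal essentially intact.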


Subject(s)
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods , Brain/diagnostic imaging , Phantoms, Imaging , Algorithms
10.
Sensors (Basel) ; 23(16)2023 Aug 15.
Article in English | MEDLINE | ID: mdl-37631724

ABSTRACT

Edge detection serves as the foundation for advanced image processing tasks. The accuracy of edge detection is significantly reduced when applied to motion-blurred images. In this paper, we propose an effective deblurring method adapted to the edge detection task, utilizing inertial sensors to aid in the deblurring process. To account for measurement errors of the inertial sensors, we transform them into blur kernel errors and apply a total-least-squares (TLS) based iterative optimization scheme to handle the image deblurring problem involving blur kernel errors, with the relevant priors learned by neural networks. We apply the Canny edge detection algorithm to each intermediate output of the iterative process and use all the edge detection results to calculate the network's total loss function, enabling a closer coupling between the edge detection task and the deblurring iterative process. Based on the BSDS500 edge detection dataset and an independent inertial sensor dataset, we have constructed a synthetic dataset for training and evaluating the network. Results on the synthetic dataset indicate that, compared to existing representative deblurring methods, our proposed approach demonstrates higher accuracy and robustness in edge detection of motion-blurred images.

11.
Sensors (Basel) ; 23(8)2023 Apr 07.
Article in English | MEDLINE | ID: mdl-37112126

ABSTRACT

Single image deblurring has achieved significant progress for natural daytime images. Saturation is a common phenomenon in blurry images, due to low-light conditions and long exposure times. However, conventional linear deblurring methods usually deal with natural blurry images well but produce severe ringing artifacts when recovering low-light saturated blurry images. To solve this problem, we formulate the saturation deblurring problem as a nonlinear model, in which all the saturated and unsaturated pixels are modeled adaptively. Specifically, we introduce a nonlinear function on top of the convolution operator to accommodate the saturation process in the presence of blurring. The proposed method has two advantages over previous methods. On the one hand, it restores natural images with the same high quality as conventional deblurring methods, while also reducing the estimation errors in saturated areas and suppressing ringing artifacts. On the other hand, compared with recent saturation-based deblurring methods, it captures the formation of unsaturated and saturated degradations straightforwardly rather than with cumbersome and error-prone detection steps. Note that this nonlinear degradation model can be naturally formulated in a maximum a posteriori (MAP) framework and can be efficiently decoupled into several solvable sub-problems via the alternating direction method of multipliers (ADMM). Experimental results on both synthetic and real-world images demonstrate that the proposed deblurring algorithm outperforms state-of-the-art low-light saturation-based deblurring methods.
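A nonlinear forward model of this kind, convolution followed by a smooth saturation function applied uniformly to all pixels, can be sketched as below; the particular soft-clipping function and its sharpness beta are illustrative choices, not the paper's.

```python
import numpy as np

def saturated_blur(image, psf, s_max=1.0, beta=50.0):
    """Nonlinear blur model b = R(k * x): circular convolution followed
    by a smooth approximation of min(., s_max), so saturated and
    unsaturated pixels are described by one adaptive model."""
    conv = np.fft.irfft2(np.fft.rfft2(image) * np.fft.rfft2(psf), s=image.shape)
    # soft minimum: s_max - softplus(beta * (s_max - conv)) / beta
    return s_max - np.logaddexp(0.0, beta * (s_max - conv)) / beta

delta = np.zeros((8, 8)); delta[0, 0] = 1.0        # identity PSF for checking
low = saturated_blur(np.full((8, 8), 0.2), delta)  # unsaturated region
high = saturated_blur(np.full((8, 8), 2.0), delta) # saturated region
```

With an identity PSF the model reduces to the saturation curve alone: unsaturated intensities pass through unchanged, intensities above `s_max` clip smoothly to it.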

12.
Sensors (Basel) ; 23(16)2023 Aug 18.
Article in English | MEDLINE | ID: mdl-37631796

ABSTRACT

Blurring is one of the main degradation factors in image degradation, so image deblurring is of great interest as a fundamental problem in low-level computer vision. Because of their limited receptive field, traditional CNNs cannot model blurred regions globally and do not make full use of the rich contextual information between features. Recently, transformer-based neural network structures have performed well in natural language tasks, inspiring rapid development in the field of deblurring. Therefore, in this paper, a hybrid architecture based on CNNs and transformers is used for image deblurring. Specifically, we first extract the shallow features of the blurred images using a cross-layer feature fusion block that emphasizes the contextual information of each feature extraction layer. Secondly, an efficient transformer module for extracting deep features is designed, which fully aggregates feature information at medium and long distances using vertical and horizontal intra- and inter-strip attention layers, and a dual gating mechanism is used as the feedforward neural network, which effectively reduces redundant features. Finally, the cross-layer feature fusion block is used to complement the feature information to obtain the deblurred image. Extensive experimental results on the publicly available benchmark datasets GoPro and HIDE and the real dataset RealBlur show that the proposed method outperforms current mainstream deblurring algorithms and recovers the edge contours and texture details of images more clearly.

13.
Sensors (Basel) ; 23(6)2023 Mar 07.
Article in English | MEDLINE | ID: mdl-36991602

ABSTRACT

Video deblurring aims at removing the motion blur caused by the movement of objects or camera shake. Traditional video deblurring methods have mainly focused on frame-based deblurring, which takes only blurry frames as the input to produce sharp frames. However, frame-based deblurring has shown poor picture quality in challenging cases of video restoration where severely blurred frames are provided as the input. To overcome this issue, recent studies have begun to explore the event-based approach, which uses the event sequence captured by an event camera for motion deblurring. Event cameras have several advantages compared to conventional frame cameras. Among these advantages, event cameras have a low latency in imaging data acquisition (0.001 ms for event cameras vs. 10 ms for frame cameras). Hence, event data can be acquired at a high acquisition rate (up to one microsecond). This means that the event sequence contains more accurate motion information than video frames. Additionally, event data can be acquired with less motion blur. Due to these advantages, the use of event data is highly beneficial for achieving improvements in the quality of deblurred frames. Accordingly, the results of event-based video deblurring are superior to those of frame-based deblurring methods, even for severely blurred video frames. However, the direct use of event data can often generate visual artifacts in the final output frame (e.g., image noise and incorrect textures), because event data intrinsically contain insufficient textures and event noise. To tackle this issue in event-based deblurring, we propose a two-stage coarse-refinement network by adding a frame-based refinement stage that utilizes all the available frames with more abundant textures to further improve the picture quality of the first-stage coarse output. Specifically, a coarse intermediate frame is estimated by performing event-based video deblurring in the first-stage network. 
A residual hint attention (RHA) module is also proposed to extract useful attention information from the coarse output and all the available frames. This module connects the first and second stages and effectively guides the frame-based refinement of the coarse output. The final deblurred frame is then obtained by refining the coarse output using the residual hint attention and all the available frame information in the second-stage network. We validated the deblurring performance of the proposed network on the GoPro synthetic dataset (33 videos and 4702 frames) and the HQF real dataset (11 videos and 2212 frames). Compared to the state-of-the-art method (D2Net), we achieved a performance improvement of 1 dB in PSNR and 0.05 in SSIM on the GoPro dataset, and an improvement of 1.7 dB in PSNR and 0.03 in SSIM on the HQF dataset.

14.
Sensors (Basel) ; 23(5)2023 Feb 21.
Article in English | MEDLINE | ID: mdl-36904589

ABSTRACT

The Vision Transformer (ViT) architecture has been remarkably successful in image restoration. For a while, Convolutional Neural Networks (CNNs) predominated in most computer vision tasks. Now, both CNNs and ViTs are efficient approaches that demonstrate powerful capabilities to restore a better version of an image given in a low-quality format. In this study, the efficiency of ViT in image restoration is studied extensively. The ViT architectures are classified for every task of image restoration. Seven image restoration tasks are considered: Image Super-Resolution, Image Denoising, General Image Enhancement, JPEG Compression Artifact Reduction, Image Deblurring, Removing Adverse Weather Conditions, and Image Dehazing. The outcomes, the advantages, the limitations, and the possible areas for future research are detailed. Overall, it is noted that incorporating ViT in new architectures for image restoration is becoming the rule. This is due to some advantages compared to CNNs, such as better efficiency, especially when more data are fed to the network, robustness in feature extraction, and a feature-learning approach that better captures the variance and characteristics of the input. Nevertheless, some drawbacks exist, such as the need for more data to show the benefits of ViT over CNNs, the increased computational cost due to the complexity of the self-attention block, a more challenging training process, and the lack of interpretability. These drawbacks represent future research directions that should be targeted to increase the efficiency of ViT in the image restoration domain.

15.
J Xray Sci Technol ; 31(2): 393-407, 2023.
Article in English | MEDLINE | ID: mdl-36710712

ABSTRACT

Computed laminography (CL) is one of the best methods for nondestructive testing of plate-like objects. If the object and the detector move continually during scanning, the data acquisition efficiency of CL is significantly increased. However, the projection images will contain motion artifacts as a result. A multi-angle fusion network (MAFusNet) is presented to correct motion artifacts in CL projection images, taking their properties into account. The multi-angle fusion module significantly increases the deblurring ability of MAFusNet by using data from nearby projection images, and the feature fusion module lessens the information loss caused by data flow between the encoders. In contrast to conventional deblurring networks, MAFusNet is trained on synthetic datasets yet performs well on real data, demonstrating outstanding generalization. Ablation studies and comparisons with existing classical deblurring networks show that the multi-angle fusion-based network significantly improves the correction of CL motion artifacts, and the synthetic training dataset also significantly lowers the training cost, which can effectively improve the quality and efficiency of CL imaging in industrial nondestructive testing.

16.
Pattern Recognit ; 124, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34949896

ABSTRACT

In this work we present a framework for designing iterative techniques for image deblurring as an inverse problem. The new framework is based on two observations about existing methods. We used the Landweber method as the basis to develop and present the new framework, but note that the framework is applicable to other iterative techniques. First, we observed that the iterative steps of the Landweber method contain a constant term, which is a low-pass filtered version of the already blurry observation. We proposed a modification that uses the observed image directly. Second, we observed that the Landweber method uses an estimate of the true image as the starting point. This estimate, however, does not get updated over iterations. We proposed a modification that updates this estimate as the iterative process progresses. We integrated the two modifications into one framework for iteratively deblurring images. Finally, we tested the new method and compared its performance with several existing techniques, including the Landweber method, the Van Cittert method, GMRES (generalized minimal residual method), and LSQR (least squares), to demonstrate its superior performance in image deblurring.
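The classic Landweber iteration that the framework modifies can be written per Fourier coefficient as below. The paper's two modifications (replacing the low-pass-filtered constant term with the observation, and updating the starting estimate) are only described at a high level in the abstract, so plain Landweber is shown.

```python
import numpy as np

def landweber_deblur(blurred, psf, n_iter=500, tau=1.0):
    """Landweber iteration x <- x + tau * H^T (b - H x) for circular
    blur, run in the Fourier domain. The constant term tau * H^T b is
    the low-pass-filtered observation that the paper's first
    modification targets. Converges when 0 < tau * |H|^2 < 2."""
    H = np.fft.rfft2(psf)
    B = np.fft.rfft2(blurred)
    X = np.zeros_like(B)
    for _ in range(n_iter):
        X = X + tau * np.conj(H) * (B - H * X)
    return np.fft.irfft2(X, s=blurred.shape)

# synthetic check: blur a blocky scene with a 3x3 box PSF, then restore
truth = np.full((32, 32), 0.1); truth[8:20, 12:24] = 1.0
psf = np.zeros((32, 32)); psf[:3, :3] = 1.0 / 9.0
blurred = np.fft.irfft2(np.fft.rfft2(truth) * np.fft.rfft2(psf), s=truth.shape)
restored = landweber_deblur(blurred, psf)
```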

17.
Sensors (Basel) ; 22(20)2022 Oct 16.
Article in English | MEDLINE | ID: mdl-36298213

ABSTRACT

The remote sensing imaging environment is complex, and many factors cause image blur. Thus, without prior knowledge, a restoration model built to obtain clear images can only rely on the observed blurry images. Like extreme-channel priors, we build our prior from extreme pixels, but we no longer traverse all pixels. Features are extracted in units of patches, which are segmented from an image and partially overlap with each other. In this paper, we design a new prior, the overlapped patches' non-linear (OPNL) prior, derived from the ratio of extreme pixels affected by blurring in each patch. Analysis of more than 5000 remote sensing images confirms that the OPNL prior prefers clear images over blurry images in the restoration process. Introducing the OPNL prior increases the complexity of the optimization problem, making it impossible to solve directly. A solving algorithm is established based on the projected alternating minimization (PAM) algorithm combined with the half-quadratic splitting method, the fast iterative shrinkage-thresholding algorithm (FISTA), the fast Fourier transform (FFT), etc. Numerous experiments show that this algorithm is stable and effective and obtains competitive results in restoring remote sensing images.
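The extreme-pixel idea such priors build on can be illustrated with the classic dark channel, a per-pixel minimum over overlapping patches. (The OPNL prior itself uses a ratio of blur-affected extreme pixels per patch; its exact formula is not given in the abstract, so what follows is only the underlying observation.)

```python
import numpy as np

def dark_channel(image, patch=5):
    """Per-pixel minimum over an overlapping patch neighborhood.
    Blurring averages neighboring pixels, so patch minima rise: a blurry
    image has a smaller fraction of near-zero dark-channel pixels."""
    pad = patch // 2
    padded = np.pad(image, pad, mode='edge')
    h, w = image.shape
    out = np.empty_like(image)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

rng = np.random.default_rng(0)
sharp = (rng.random((32, 32)) > 0.5).astype(float)   # binary sharp scene
box = np.zeros((32, 32)); box[:3, :3] = 1.0 / 9.0    # 3x3 circular box blur
blurry = np.fft.irfft2(np.fft.rfft2(sharp) * np.fft.rfft2(box), s=sharp.shape)
frac_sharp = np.mean(dark_channel(sharp) < 0.05)
frac_blurry = np.mean(dark_channel(blurry) < 0.05)
```

The sharp image keeps almost all of its near-zero extreme pixels; blurring destroys most of them, which is exactly the asymmetry a prior of this family can exploit.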


Subject(s)
Image Processing, Computer-Assisted , Remote Sensing Technology , Image Processing, Computer-Assisted/methods , Algorithms , Fourier Analysis
18.
Sensors (Basel) ; 22(20)2022 Oct 17.
Article in English | MEDLINE | ID: mdl-36298241

ABSTRACT

Motion blur recovery is a common method in the field of remote sensing image processing that can effectively improve the accuracy of detection and recognition. Among the existing motion blur recovery methods, the algorithms based on deep learning do not rely on a priori knowledge and, thus, have better generalizability. However, the existing deep learning algorithms usually suffer from feature misalignment, resulting in a high probability of missing details or errors in the recovered images. This paper proposes an end-to-end generative adversarial network (SDD-GAN) for single-image motion deblurring to address this problem and to optimize the recovery of blurred remote sensing images. Firstly, this paper applies a feature alignment module (FAFM) in the generator to learn the offset between feature maps to adjust the position of each sample in the convolution kernel and to align the feature maps according to the context; secondly, a feature importance selection module is introduced in the generator to adaptively filter the feature maps in the spatial and channel domains, preserving reliable details in the feature maps and improving the performance of the algorithm. In addition, this paper constructs a self-constructed remote sensing dataset (RSDATA) based on the mechanism of image blurring caused by the high-speed orbital motion of satellites. Comparative experiments are conducted on self-built remote sensing datasets and public datasets as well as on real remote sensing blurred images taken by an in-orbit satellite (CX-6(02)). The results show that the algorithm in this paper outperforms the comparison algorithm in terms of both quantitative evaluation and visual effects.


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Image Processing, Computer-Assisted/methods , Motion (Physics)
19.
Sensors (Basel) ; 22(20)2022 Oct 18.
Article in English | MEDLINE | ID: mdl-36298260

ABSTRACT

Underwater target detection and identification are currently two of the most important research directions in the information disciplines, and traditional underwater target detection technology has struggled to meet current engineering needs. Because of the large manifold error of underwater sonar arrays and the difficulty of ensuring long-term signal stability, traditional high-resolution array signal processing methods are not ideal for practical underwater applications. With conventional beamforming methods, when the signal-to-noise ratio falls below -43.05 dB, only the general direction of the target can be vaguely identified. To address these challenges, this paper proposes a beamforming method based on a deep neural network. Through preprocessing, the space-time representation of the target sound signal is converted into two-dimensional data in the angle-time domain. The network is then trained on a sufficiently large sample dataset. Finally, high-resolution recognition and prediction of the two-dimensional images are realized. Results on the test dataset demonstrate the effectiveness of the proposed method down to a signal-to-noise ratio of -48 dB.
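The preprocessing step that turns space-time sensor data into an angle-time map is essentially conventional beamforming. As a rough illustration (not the paper's pipeline), the sketch below implements delay-and-sum beamforming for a uniform linear array, using integer-sample delays via `np.roll` as a crude approximation; the array geometry, sample rate, and sound speed are all invented for the example.

```python
import numpy as np

def delay_and_sum(signals, fs, spacing, c, angles):
    """Conventional delay-and-sum beamformer for a uniform linear array.
    signals: (n_sensors, n_samples). Returns an (n_angles, n_samples)
    angle-time power map: delay each sensor for the steering angle
    (integer-sample approximation), sum coherently, square."""
    n, t = signals.shape
    out = np.zeros((len(angles), t))
    for ai, theta in enumerate(angles):
        delays = spacing * np.arange(n) * np.sin(theta) / c  # seconds
        shifts = np.round(delays * fs).astype(int)
        summed = sum(np.roll(signals[i], -shifts[i]) for i in range(n))
        out[ai] = (summed / n) ** 2
    return out

# Toy example: 8-element array, 200 Hz plane wave arriving from broadside.
fs, c, spacing = 8000.0, 1500.0, 0.5
t = np.arange(512) / fs
sig = np.sin(2 * np.pi * 200 * t)
signals = np.tile(sig, (8, 1))        # broadside => zero inter-sensor delay
angles = np.linspace(-np.pi / 2, np.pi / 2, 37)
pmap = delay_and_sum(signals, fs, spacing, c, angles)
best = angles[np.argmax(pmap.sum(axis=1))]
print(abs(best) < 0.1)  # power peaks near broadside (0 rad)
```

A DNN-based method like the one in the paper would consume maps of this kind (computed at much lower SNR) and learn to sharpen the broad conventional-beamforming peak.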


Subject(s)
Neural Networks, Computer , Sound , Signal Processing, Computer-Assisted , Signal-To-Noise Ratio
20.
Sensors (Basel) ; 22(16)2022 Aug 17.
Article in English | MEDLINE | ID: mdl-36015921

ABSTRACT

Underwater ghost imaging based on deep learning can effectively reduce the influence of forward scattering and back scattering in water, and with data-driven methods, high-quality results can be reconstructed. However, training underwater ghost imaging networks requires enormous paired underwater datasets, which are difficult to obtain directly. Although the Cycle-GAN method solves this problem to some extent, the blur levels in the paired underwater datasets it generates are relatively uniform. To solve this problem, a few-shot underwater image generation network is proposed. The paired underwater datasets generated by the proposed few-shot method are better than those obtained with Cycle-GAN, especially when few real underwater datasets are available. In addition, to reconstruct high-quality results, an underwater deblurring ghost imaging method is proposed; it consists of two parts, reconstruction and deblurring. Experimental and simulation results show that the proposed reconstruction method deblurs better at low sampling rates than existing deep-learning-based underwater ghost imaging methods. It effectively improves the clarity of the reconstructed underwater target at a low sampling rate and promotes further applications of underwater ghost imaging.
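For readers unfamiliar with ghost imaging, the classical (non-learned) reconstruction that deep methods improve upon is a simple correlation between random illumination patterns and single-pixel ("bucket") detector readings. The sketch below shows that baseline in an idealized, noise-free setting; it is background for the abstract, not the paper's method, and all names and sizes are illustrative.

```python
import numpy as np

def ghost_correlate(patterns, bucket):
    """Classical correlation ghost imaging: the image estimate is the
    covariance between illumination patterns (M, H, W) and bucket
    measurements (M,), computed pixel by pixel."""
    dp = patterns - patterns.mean(axis=0)
    db = bucket - bucket.mean()
    return (db[:, None, None] * dp).mean(axis=0)

rng = np.random.default_rng(1)
target = np.zeros((16, 16))
target[4:12, 4:12] = 1.0                       # bright square to recover
patterns = rng.random((4000, 16, 16))          # random speckle patterns
bucket = (patterns * target).sum(axis=(1, 2))  # single-pixel detector
recon = ghost_correlate(patterns, bucket)
print(recon[8, 8] > recon[0, 0])  # interior pixel brighter than background
```

At low sampling rates (few patterns) this estimate becomes very noisy, which is the regime where the learned reconstruction-plus-deblurring approach described in the abstract is claimed to help.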
