1.
Int J Biomed Imaging ; 2024: 8972980, 2024.
Article in English | MEDLINE | ID: mdl-38725808

ABSTRACT

We present a deep learning-based method that corrects motion artifacts and thus accelerates data acquisition and reconstruction of magnetic resonance images. The novel model, the Motion Artifact Correction by Swin Network (MACS-Net), uses a Swin transformer layer as the fundamental block and the Unet architecture as the neural network backbone. We employ a hierarchical transformer with shifted windows to extract multiscale contextual features during encoding. A new dual upsampling technique enhances the spatial resolution of the feature maps in the Swin transformer-based decoder layers. A raw magnetic resonance imaging dataset, containing various motion artifacts alongside ground-truth images of the same subjects, is used for network training and testing. The results were compared to six state-of-the-art MRI motion-correction methods on two types of motion. When motion was brief (within 5 s), the method reduced the average normalized root mean square error (NRMSE) from 45.25% to 17.51%, increased the mean structural similarity index measure (SSIM) from 79.43% to 91.72%, and increased the peak signal-to-noise ratio (PSNR) from 18.24 to 26.57 dB. Similarly, when motion extended from 5 to 10 s, our approach decreased the average NRMSE from 60.30% to 21.04%, improved the mean SSIM from 33.86% to 90.33%, and increased the PSNR from 15.64 to 24.99 dB. The anatomical structures of the corrected images closely matched the motion-free brain data.
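The NRMSE and PSNR figures quoted above can be computed directly from an image pair; a minimal NumPy sketch (not the authors' code, and assuming NRMSE is normalized by the reference intensity range):

```python
import numpy as np

def nrmse(ref, img):
    """Normalized root mean square error, as a fraction of the reference range."""
    err = np.sqrt(np.mean((ref - img) ** 2))
    return err / (ref.max() - ref.min())

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB for a given intensity range."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```

SSIM is window-based and more involved; in practice one would use a library implementation such as `skimage.metrics.structural_similarity` rather than re-deriving it.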

2.
Sensors (Basel) ; 23(14)2023 Jul 08.
Article in English | MEDLINE | ID: mdl-37514540

ABSTRACT

We propose a high-quality three-dimensional display system based on a simplified light-field image acquisition method and a custom-trained fully connected deep neural network. The goal of the system is to acquire and reconstruct light-field images of the highest possible quality from real-world objects in a general environment. The simplified acquisition method captures the three-dimensional information of natural objects in a simple way, at a resolution and quality comparable to multicamera-based methods. We trained a fully connected deep neural network model to output the desired viewpoints of the object at the same quality. Given the input perspectives, the custom-trained instant neural graphics primitives model with hash encoding outputs all desired viewpoints within the acquired viewing angle at the same quality, matched to the pixel density of the display device and the lens-array specifications, in a very short processing time. Finally, the elemental image array is rendered through pixel re-arrangement from the full set of viewpoints to cover the entire field of view and is reconstructed as a high-quality three-dimensional visualization on the integral imaging display. The system was implemented successfully, and the displayed visualizations and corresponding evaluation results confirm that the proposed system offers a simple and effective way to acquire high-resolution light-field images from real objects and to present high-quality three-dimensional visualizations on an integral imaging display.
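The pixel re-arrangement step, from a grid of viewpoint images to an elemental image array, can be sketched for a toy grayscale case as follows. The mapping convention here (each elemental image gathers one pixel from every viewpoint) is an assumption for illustration; real integral-imaging pipelines depend on the lens-array geometry:

```python
import numpy as np

def views_to_eia(views):
    """Rearrange a (Vy, Vx, H, W) stack of viewpoint images into an
    elemental image array of shape (H*Vy, W*Vx): the Vy-by-Vx block at
    grid position (i, j) collects pixel (i, j) from every viewpoint."""
    vy, vx, h, w = views.shape
    # reorder axes to (H, Vy, W, Vx), then flatten blocks row-major
    return views.transpose(2, 0, 3, 1).reshape(h * vy, w * vx)
```

For example, with a 2x2 grid of 3x3 views, the output is a 6x6 array whose top-left 2x2 block holds pixel (0, 0) from each of the four viewpoints.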

3.
Sensors (Basel) ; 22(14)2022 Jul 15.
Article in English | MEDLINE | ID: mdl-35890968

ABSTRACT

This study proposes a robust depth-map framework based on a convolutional neural network (CNN) that calculates disparities from multi-direction epipolar plane images (EPIs). A combination of three-dimensional (3D) and two-dimensional (2D) CNN-based deep learning networks extracts features from each input stream separately. The 3D convolutional blocks are adapted to the disparities of the different epipolar-image directions, and the 2D CNNs are employed to minimize data loss. Finally, the multi-stream networks are merged to restore the depth information. The fully convolutional approach is scalable: it can handle inputs of any size and is less prone to overfitting. However, some noise remains along edges; to overcome this, weighted median filtering (WMF) is applied to recover boundary information and improve the accuracy of the results. Experimental results indicate that the suggested deep learning network architecture outperforms other architectures in terms of depth-estimation accuracy.
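The weighted median filtering mentioned above can be illustrated with a small brute-force sketch (not the paper's implementation). Here the weights are assumed to decay exponentially with the intensity difference in a guide image, so edges in the guide steer the filter; `radius` and `sigma` are illustrative parameters:

```python
import numpy as np

def weighted_median(values, weights):
    """Value at which the cumulative weight first reaches half the total."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = np.cumsum(w)
    return v[np.searchsorted(cdf, 0.5 * cdf[-1])]

def wmf(disparity, guide, radius=1, sigma=0.1):
    """Toy edge-aware weighted median filter over a square window."""
    h, w = disparity.shape
    out = np.empty_like(disparity)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            vals = disparity[y0:y1, x0:x1].ravel()
            # weight neighbors by similarity to the center pixel in the guide
            wts = np.exp(-np.abs(guide[y0:y1, x0:x1] - guide[y, x]).ravel() / sigma)
            out[y, x] = weighted_median(vals, wts)
    return out
```

Unlike a plain box median, the guide-based weights let the filter suppress isolated disparity outliers without smearing depth discontinuities that coincide with image edges.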


Subject(s)
Microscopy; Neural Networks, Computer
4.
Bioengineering (Basel) ; 10(1)2022 Dec 22.
Article in English | MEDLINE | ID: mdl-36671594

ABSTRACT

When sparsely sampled data are used to accelerate magnetic resonance imaging (MRI), conventional reconstruction approaches produce significant artifacts that obscure the content of the image. To remove these aliasing artifacts, we propose an advanced convolutional neural network (CNN) called the fully dense attention CNN (FDA-CNN). We updated the Unet model with full dense connectivity and an attention mechanism for MRI reconstruction. The main benefit of FDA-CNN is that an attention gate in each decoder layer improves learning by focusing on the relevant image features and yields better generalization by suppressing irrelevant activations. Moreover, the densely interconnected convolutional layers reuse feature maps and prevent the vanishing-gradient problem. Additionally, we implement a new, efficient undersampling pattern in the phase direction that takes low and high frequencies from k-space both randomly and non-randomly. The performance of FDA-CNN was evaluated quantitatively and qualitatively with three different sub-sampling masks and datasets. Compared with five current deep learning-based and two compressed-sensing MRI reconstruction techniques, the proposed method performed better, reconstructing smoother and brighter images. Furthermore, FDA-CNN improved the mean PSNR by 2 dB, SSIM by 0.35, and VIFP by 0.37 compared with Unet at an acceleration factor of 5.
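A phase-direction undersampling mask of the general kind described, a fully sampled low-frequency band plus randomly chosen high-frequency lines, might be sketched as below. This is a generic variable-density mask, not the paper's specific pattern; `center_fraction` and the exact layout are assumptions:

```python
import numpy as np

def phase_undersampling_mask(n_lines, accel=5, center_fraction=0.08, seed=0):
    """Boolean mask over phase-encode lines: keep a contiguous band of
    low-frequency lines around the k-space center (non-random part), then
    fill up to ~n_lines/accel with randomly chosen remaining lines."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(n_lines, dtype=bool)
    n_center = max(1, int(round(center_fraction * n_lines)))
    c = n_lines // 2
    mask[c - n_center // 2 : c - n_center // 2 + n_center] = True
    n_target = n_lines // accel           # total lines kept at this acceleration
    remaining = np.flatnonzero(~mask)
    n_rand = max(0, n_target - n_center)
    mask[rng.choice(remaining, size=n_rand, replace=False)] = True
    return mask
```

For 256 phase-encode lines at acceleration 5, this keeps 51 lines, of which about 20 form the densely sampled center that preserves image contrast.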

5.
Appl Opt ; 60(14): 4235-4244, 2021 May 10.
Article in English | MEDLINE | ID: mdl-33983180

ABSTRACT

Holographic stereogram (HS) printing requires extensive memory capacity and long computation times during perspective acquisition and execution of the pixel re-arrangement algorithm, and hogels contain only weak depth information about the object. We propose an HS printing system that uses simplified digital content generation based on the inverse-directed propagation (IDP) algorithm for hogel generation. Specifically, the IDP algorithm generates an array of hogels using a simple process that acquires the full three-dimensional (3D) information of the object, including parallax, depth, color, and shading, via a computer-generated integral imaging technique. This technique requires a short computation time and accounts for the occlusion and accommodation effects of the object points via the IDP algorithm. Parallel computing is utilized to produce a high-resolution hologram, exploiting the mutual independence of the hogels. To demonstrate the proposed approach, optical experiments were conducted in which natural 3D visualizations of real and virtual objects were printed on holographic material. Experimental results demonstrate the simplified computation involved in content generation using the proposed IDP-based HS printing system and the improved image quality of the holograms.

6.
Opt Express ; 27(21): 29746-29758, 2019 Oct 14.
Article in English | MEDLINE | ID: mdl-31684232

ABSTRACT

A multiple-camera holographic system using non-uniformly sampled 2D images and compressed point cloud gridding (C-PCG) is proposed. High-quality digital single-lens reflex cameras acquire the depth and color information from real scenes, which are then virtually reconstructed as a uniform point cloud using a non-uniform sampling method. The C-PCG method generates efficient depth grids by classifying groups of object points with the same depth values in the red, green, and blue channels. Holograms are obtained by applying fast Fourier transform diffraction calculations to the grids. Compared to wave-front recording plane methods, the quality of the reconstructed images is substantially better, and the computational complexity is dramatically reduced. The feasibility of our method is confirmed both numerically and optically.
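The per-grid FFT diffraction step shared by this and the following entries can be illustrated with a paraxial (Fresnel) transfer-function propagator. This is a generic sketch, not the authors' C-PCG code; the wavelength and pixel pitch are placeholder values:

```python
import numpy as np

def fresnel_propagate(field, wavelength, pitch, z):
    """Propagate a complex field a distance z using the paraxial
    (Fresnel) transfer function applied in the Fourier domain."""
    n, m = field.shape
    fy = np.fft.fftfreq(n, d=pitch)   # spatial frequencies along rows
    fx = np.fft.fftfreq(m, d=pitch)   # spatial frequencies along columns
    FX, FY = np.meshgrid(fx, fy)
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def hologram_from_depth_grids(grids, wavelength=633e-9, pitch=8e-6):
    """Sum the fields of all depth grids propagated to the hologram plane.
    `grids` maps depth z (meters) -> real amplitude image at that depth."""
    holo = None
    for z, amp in grids.items():
        field = fresnel_propagate(amp.astype(complex), wavelength, pitch, z)
        holo = field if holo is None else holo + field
    return holo
```

Because the transfer function has unit magnitude, propagation conserves the field's total energy, which is a convenient sanity check on the implementation.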

7.
Appl Opt ; 58(5): A242-A250, 2019 Feb 10.
Article in English | MEDLINE | ID: mdl-30873983

ABSTRACT

Recently, computer-generated holograms (CGHs) of real three-dimensional (3D) objects have become widely used in holographic displays. Here, a multiple-camera holographic system featuring an efficient depth grid is developed to provide the correct depth cue. Multiple depth cameras acquire depth and color information from real scenes, from which point cloud models are virtually reconstructed. Arranging the depth cameras in an inward-facing configuration allowed simultaneous capture of objects from different directions, facilitating rendering of the entire surface. The multiple relocated point cloud gridding method is proposed to generate efficient depth grids by classifying groups of object points with the same depth values in the red, green, and blue channels. CGHs are obtained by applying a fast Fourier transform diffraction calculation to the grids. Full-color reconstructed images were obtained flexibly and efficiently, and the utility of our method was confirmed both numerically and optically.

8.
Appl Opt ; 57(15): 4253-4262, 2018 May 20.
Article in English | MEDLINE | ID: mdl-29791403

ABSTRACT

The calculation of realistic full-color holographic displays is hindered by high computational cost. Previously, we suggested a point cloud gridding (PCG) method to calculate monochrome holograms of real objects. In this work, a relocated point cloud gridding (R-PCG) method is proposed to enhance reconstruction quality and accelerate the GPU calculation in a full-color holographic system. We use a depth camera to acquire depth and color information from the real scene and then virtually reconstruct the point cloud model. The R-PCG method classifies groups of object points with the same depth values into grids in the red, green, and blue (RGB) channels. Computer-generated holograms (CGHs) are obtained by applying a fast Fourier transform (FFT) diffraction calculation to the grids. The feasibility of the R-PCG method is confirmed by numerical and optical reconstruction.
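The core gridding idea shared by the PCG-family methods, grouping object points by depth so one FFT serves a whole layer, can be sketched as follows. This is a simplified single-channel version; the `depth_step` quantization is an assumption, since the papers do not state a bin width here:

```python
import numpy as np

def point_cloud_gridding(points, colors, depth_step=1e-3):
    """Group object points into per-depth grids: points whose quantized
    depth coincides share one grid, so each grid needs only one FFT
    diffraction calculation instead of per-point evaluation."""
    z = np.round(points[:, 2] / depth_step) * depth_step
    grids = {}
    for depth in np.unique(z):
        sel = z == depth
        grids[float(depth)] = {
            "xy": points[sel, :2],   # lateral positions within the layer
            "rgb": colors[sel],      # per-point color samples
        }
    return grids
```

The speedup comes from replacing per-point spherical-wave sums with one layer-wise diffraction per distinct depth, which is why fewer, denser grids (as in the relocated/compressed variants) calculate faster.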

9.
Opt Lett ; 42(13): 2599-2602, 2017 Jul 01.
Article in English | MEDLINE | ID: mdl-28957294

ABSTRACT

We propose a full-color polygon-based holographic system for real three-dimensional (3D) objects using a depth-layer weighted prediction method. The proposed system is composed of four main stages: acquisition, preprocessing, hologram generation, and reconstruction. In the preprocessing stage, the point cloud model is separated into red, green, and blue channels with depth-layer weighted prediction. The color component values are characterized based on the depth information of the real object, then color prediction is derived from the measurement data. The computer-generated holograms reconstruct 3D full-color images with a strong sensation of depth resulting from the polygon approach. The feasibility of the proposed method was confirmed by numerical and optical reconstruction.
