Results 1 - 6 of 6
1.
Appl Opt ; 61(36): 10644-10657, 2022 Dec 20.
Article in English | MEDLINE | ID: mdl-36606923

ABSTRACT

This paper proposes a coding method for compressing a phase-only hologram video (PoHV), which can be displayed directly on a commercial phase-only spatial light modulator. Recently, there has been active research into using a standard codec as an anchor when developing new video coding for 3D data, as in MPEG point cloud compression. The main merit of this approach is that whenever a new video codec is developed, the performance of related coding methods increases with it. Furthermore, compatibility improves because various anchor codecs can be used, and development time decreases. This paper uses a current video codec as the anchor and develops a coding method that includes progressive scaling and a deep neural network to overcome the low temporal correlation between frames of a PoHV. Since temporal correlation between PoHV frames is difficult to predict, this paper adopts a scaling function and a neural network in the encoding and decoding processes rather than adding complexity to the anchor itself for temporal prediction. The proposed coding method shows an enhanced coding gain of 22% on average compared with the anchor under all coding conditions. In both numerical and optical reconstructions, the images produced by the proposed method show clearer objects and less judder than those produced by the anchor.
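Before a standard video codec can act as the anchor, each phase-only frame must be mapped into the integer range the codec expects. The sketch below shows only that pre/post-codec scaling step, with hypothetical function names; the paper's progressive scaling schedule and DNN post-filter are not reproduced.

```python
import numpy as np

def encode_phase_frame(phase, levels=256):
    """Quantize a phase-only hologram frame (radians in [-pi, pi))
    to 8-bit levels so an anchor video codec can treat it as a
    grayscale frame. Illustrative sketch only."""
    q = np.round((phase + np.pi) / (2 * np.pi) * (levels - 1))
    return q.astype(np.uint8)

def decode_phase_frame(q, levels=256):
    """Map 8-bit codec output back to phase values in [-pi, pi)."""
    return q.astype(np.float64) / (levels - 1) * 2 * np.pi - np.pi
```

The round trip loses at most half a quantization step (about pi/255 radians at 8 bits), which bounds the phase error introduced before any codec compression.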

2.
Sensors (Basel) ; 22(3)2022 Jan 31.
Article in English | MEDLINE | ID: mdl-35161842

ABSTRACT

This paper proposes a new technique for performing 3D static point cloud registration after calibrating a multi-view RGB-D camera using a three-dimensional (3D) joint set. Calibrating a multi-view camera requires consistent feature points, and accurate feature points are necessary to obtain high-accuracy calibration results. In general, a special tool, such as a chessboard, is used to calibrate a multi-view camera. This paper instead uses the joints of a human skeleton as feature points, so that calibration can be performed efficiently without special tools. We propose an RGB-D-based calibration algorithm that uses the joint coordinates of the 3D joint set obtained through pose estimation as feature points. Since the human body information captured by each camera may be incomplete, the joint set predicted from that image information may be incomplete as well. After efficiently integrating a plurality of incomplete joint sets into one joint set, the multi-view cameras can be calibrated using the combined joint set to obtain extrinsic matrices. To increase calibration accuracy, multiple joint sets are used for optimization through temporal iteration. We prove through experiments that it is possible to calibrate a multi-view camera using a large number of incomplete joint sets.
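The core geometric step when two cameras observe the same 3D joint set is estimating the rigid transform (extrinsics) that aligns one set onto the other. A minimal sketch of that 3D-3D registration step, using the standard SVD-based (Kabsch) solution; the paper's merging of incomplete joint sets and temporal optimization are omitted.

```python
import numpy as np

def kabsch(src, dst):
    """Estimate rotation R and translation t with dst ~= src @ R.T + t,
    given matched 3D joint coordinates from two camera views.
    Standard closed-form rigid alignment; an illustrative sketch."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t
```

With more than three non-degenerate joints this solution is overdetermined, which is why noisy or partially missing joint sets can still yield usable extrinsics.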


Subject(s)
Algorithms, Three-Dimensional Imaging, Calibration, Humans
3.
Sensors (Basel) ; 22(22)2022 Nov 15.
Article in English | MEDLINE | ID: mdl-36433412

ABSTRACT

A sequence of 3D models generated using volumetric capture has the advantage of retaining the characteristics of dynamic objects and scenes. However, because volumetric data synthesizes a 3D mesh and texture for every frame, each frame's mesh has a different shape, and the brightness and color quality of the texture vary. This paper proposes an algorithm to create a consistent mesh for 4D volumetric data using dynamic reconstruction. The proposed algorithm comprises remeshing, correspondence searching, and target frame reconstruction by key frame deformation. Non-rigid deformation is made possible by applying a surface deformation method to the key frame. Finally, we propose a method of compressing the target frame using the frame reconstructed from the key frame, with error rates of up to 98.88% and at least 20.39% compared to previous studies. The experimental results show the proposed method's effectiveness by measuring the geometric error between the deformed key frame and the target frame. Further, by calculating the residual between the two frames, the ratio of transmitted data is measured, showing a compression performance of 18.48%.
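The residual-based compression idea can be sketched simply: once the key frame has been deformed onto the target, only the per-vertex differences that exceed a tolerance need to be transmitted. A hypothetical sketch under the assumption that deformed key frame and target share vertex correspondence; function and parameter names are illustrative.

```python
import numpy as np

def frame_residual(deformed_key, target, tol=1e-3):
    """Per-vertex residual between a key frame deformed onto a target
    frame and the target itself. Only residuals whose magnitude exceeds
    tol are kept for transmission."""
    r = target - deformed_key
    mask = np.linalg.norm(r, axis=1) > tol
    return mask, r[mask]

def reconstruct(deformed_key, mask, sparse_residual):
    """Rebuild the target frame from the deformed key frame plus the
    transmitted sparse residual."""
    out = deformed_key.copy()
    out[mask] += sparse_residual
    return out
```

The transmitted-data ratio is then the fraction of vertices whose residual survives the threshold, which is the quantity the compression figure measures.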

4.
Sensors (Basel) ; 22(21)2022 Nov 07.
Article in English | MEDLINE | ID: mdl-36366264

ABSTRACT

Due to the amount of transmitted data and the security of personal or private information in wireless communication, there are cases where the information for a multimedia service should be transferred directly from the user's device to the cloud server without the captured original images. This paper proposes a new method to generate three-dimensional (3D) keypoints on a user's mobile device with a commercial RGB camera in a distributed computing environment such as a cloud server. Images are captured with a moving camera, and 2D keypoints are extracted from them. After feature matching between continuous frames, disparities are calculated using the relationships between matched keypoints. The physical distance of the baseline is estimated from the camera's motion information, and the actual distance is calculated from the computed disparity and the estimated baseline. Finally, 3D keypoints are generated by combining the extracted 2D keypoints with the calculated distance. A keypoint-based scene change detection method is proposed as well. Owing to the similarity between continuous frames captured by a camera, only new 3D keypoints, rather than all of them, are transferred and stored. Compared with the ground truth of the TUM dataset, the average error of the estimated 3D keypoints was measured as 5.98 mm, which shows relatively good performance considering that the method uses a commercial RGB camera on a mobile device. Furthermore, the number of transferred 3D keypoints was decreased to about 73.6%.
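Once a disparity and an estimated baseline are available, lifting a 2D keypoint to 3D follows the standard stereo triangulation relations Z = fx·B/d, X = (u-cx)·Z/fx, Y = (v-cy)·Z/fy. A minimal sketch of that back-projection step, with intrinsics as assumed parameter names; the motion-based baseline estimation itself is not shown.

```python
import numpy as np

def lift_keypoints(kps, disparities, baseline_m, fx, fy, cx, cy):
    """Back-project 2D keypoints (pixel coords) to 3D using disparity in
    pixels and a baseline in meters estimated from device motion.
    Pinhole-camera sketch; parameter names are illustrative."""
    pts = []
    for (u, v), d in zip(kps, disparities):
        Z = fx * baseline_m / d            # depth from disparity
        X = (u - cx) * Z / fx              # lateral offset
        Y = (v - cy) * Z / fy
        pts.append((X, Y, Z))
    return np.array(pts)
```

The depth error grows with the inverse of disparity, so an accurate baseline estimate from the motion sensors matters most for distant keypoints.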


Subject(s)
Algorithms, Ocular Vision, Computers
5.
Appl Opt ; 60(24): 7391-7399, 2021 Aug 20.
Article in English | MEDLINE | ID: mdl-34613028

ABSTRACT

We propose a new learning and inference model that generates digital holograms using a deep neural network (DNN). The DNN, a generative adversarial network, is trained to infer a complex two-dimensional fringe pattern from a single object point. The intensity and the fringe pattern inferred for each object point are multiplied, and all the fringe patterns are accumulated to generate a complete hologram. The method achieves generality by recording holograms for two spaces (16 Space and 32 Space). The reconstruction results for both spaces proved nearly identical to numerical computer-generated holograms, with performances of 44.56 and 35.11 dB, respectively. By displaying the generated holograms on optical equipment, we verified that holograms generated by the proposed DNN can be optically reconstructed.
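The quantity the network learns to infer is the classical point-source fringe: the complex spherical-wave contribution of one object point on the hologram plane. A numerical sketch of that reference computation (not the DNN), with illustrative parameter names.

```python
import numpy as np

def point_fringe(x0, y0, z0, wavelength, n, pitch):
    """Complex fringe pattern a single object point at (x0, y0, z0)
    contributes to an n x n hologram plane with the given pixel pitch.
    Standard spherical-wave CGH term exp(jkr)/r; a numerical reference
    for the pattern the GAN is trained to infer."""
    k = 2.0 * np.pi / wavelength
    xs = (np.arange(n) - n / 2) * pitch
    X, Y = np.meshgrid(xs, xs)
    r = np.sqrt((X - x0) ** 2 + (Y - y0) ** 2 + z0 ** 2)
    return np.exp(1j * k * r) / r
```

Summing intensity-weighted fringes over all object points yields the full hologram, which is the accumulation step described in the abstract.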

6.
Opt Express ; 28(24): 35972-35985, 2020 Nov 23.
Article in English | MEDLINE | ID: mdl-33379702

ABSTRACT

In this paper, we propose a new system for a real-time holographic augmented reality (AR) video service based on photorealistic three-dimensional (3D) object points, allowing multiple users at various locations and viewpoints to use it simultaneously. To make the object observable from all viewpoints, a camera system capable of acquiring the 3D volume of a real object is developed and used to generate the object in real time. Using the normal of each object point, the observable points are mapped to the viewpoint at which the user is located, and a hologram based on those object points is generated. The angle at which the light reflected from each point is incident on the hologram plane is calculated, and the intensity of the interference light is adjusted according to that angle to generate a hologram with a stronger 3D effect. The generated hologram is transmitted to each user to provide the holographic AR service. The entire system consists of a camera system comprising eight RGB-D (depth) cameras and two workstations for photorealistic 3D volume and hologram generation. Using this technique, realistic holograms were generated. Experiments displaying holograms simultaneously from several different viewpoints confirmed that multiple users can concurrently receive the holographic AR service.
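The normal-based mapping step amounts to selecting the object points whose surface normals face a given user's viewpoint and weighting each by the cosine of the incidence angle. A hypothetical sketch of that selection, assuming unit normals and a view direction pointing from the viewer toward the scene; names are illustrative.

```python
import numpy as np

def visible_points(points, normals, view_dir):
    """Select object points facing the viewer and return a per-point
    intensity weight (cosine of the angle between the normal and the
    direction back toward the viewer). Illustrative sketch of the
    normal-based viewpoint mapping."""
    v = view_dir / np.linalg.norm(view_dir)
    cosang = normals @ (-v)          # >0 when the normal opposes the view ray
    mask = cosang > 0.0
    return mask, cosang[mask]
```

Running this per user viewpoint produces the per-view point subsets from which each user's hologram is computed, so different users naturally receive different holograms of the same captured volume.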
