Results 1 - 8 of 8
1.
Appl Opt ; 59(35): 11104-11111, 2020 Dec 10.
Article in English | MEDLINE | ID: mdl-33361939

ABSTRACT

Time-of-flight (ToF) cameras can acquire the distance between the sensor and objects at high frame rates, offering bright prospects for ToF cameras in many applications. However, low resolution and depth errors limit their accuracy. In this paper, we present a flexible accuracy-improvement method for depth compensation and feature-point position correction of ToF cameras. First, a distance-error model of each pixel in the depth image is established to model the sinusoidal waves of ToF cameras and compensate the measured depth data. Second, a more accurate feature-point position is estimated with the aid of a high-resolution camera. Experiments evaluate the proposed method, and the results show that the root-mean-square error is reduced from 4.38 mm to 3.57 mm.
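
The per-pixel compensation step can be sketched as a calibration fit: the periodic ("wiggling") depth error of a sinusoidally modulated ToF camera is modeled as a sinusoid of the measured distance, fitted by least squares against known reference distances, and subtracted. All constants below (error amplitude, period, noise level) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch of per-pixel sinusoidal depth-error compensation for a ToF camera.
rng = np.random.default_rng(0)
d_true = np.linspace(0.5, 4.5, 400)                  # reference distances (m)
wiggle = 0.004 * np.sin(2 * np.pi * d_true / 1.5 + 0.7)
d_meas = d_true + wiggle + rng.normal(0.0, 0.0005, d_true.size)

# The error model a*sin(phi) + b*cos(phi) + c is linear in (a, b, c) once the
# phase is computed from the measured depth, so ordinary least squares suffices.
phi = 2 * np.pi * d_meas / 1.5
A = np.column_stack([np.sin(phi), np.cos(phi), np.ones_like(phi)])
coef, *_ = np.linalg.lstsq(A, d_meas - d_true, rcond=None)

d_comp = d_meas - A @ coef                           # compensated depths
rms_before = np.sqrt(np.mean((d_meas - d_true) ** 2))
rms_after = np.sqrt(np.mean((d_comp - d_true) ** 2))
```

On this synthetic data the RMS error drops from the level of the injected sinusoid to roughly the noise floor, mirroring the kind of improvement the abstract reports.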

2.
Appl Opt ; 58(23): 6300-6307, 2019 Aug 10.
Article in English | MEDLINE | ID: mdl-31503774

ABSTRACT

The sinusoidal fringe pattern is widely used in fringe projection profilometry. Too much or too little defocusing degrades the quality of the sinusoidal fringe pattern and consequently jeopardizes the accuracy of the measurement results. This paper proposes a method to quantify and ascertain the defocus level through simulations and experiments. By simulating defocus with a Gaussian low-pass filter, the optimum defocus level of the fringe pattern is determined so that the projected fringe pattern is closest to a sinusoidal function. A method is then proposed to adjust the projector so that the projected pattern is at the optimal defocus level. Experiments show the feasibility and validity of the proposed method, with accuracy improved by up to 9.9% compared with the focused projection.
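
The defocus-selection idea can be sketched numerically: a binary fringe is blurred with Gaussian low-pass filters of increasing strength, and the smallest blur whose residual harmonic distortion falls below a target is chosen (too little defocus leaves binary harmonics; too much needlessly reduces fringe modulation). The fringe period, filter form, and threshold below are illustrative assumptions.

```python
import numpy as np

period = 32                                    # fringe period in pixels (assumed)
n = period * 16
x = np.arange(n)
binary = (x % period < period // 2).astype(float)   # 50%-duty binary fringe

def gaussian_blur(signal, sigma):
    """Periodic Gaussian low-pass applied in the frequency domain."""
    f = np.fft.rfftfreq(signal.size)           # cycles per sample
    H = np.exp(-2.0 * (np.pi * sigma * f) ** 2)
    return np.fft.irfft(np.fft.rfft(signal) * H, signal.size)

def harmonic_distortion(signal, period):
    """Energy in non-fundamental AC bins relative to the fundamental."""
    spec = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    k = signal.size // period                  # fundamental bin
    return (spec.sum() - spec[k]) / spec[k]

sigmas = np.arange(0.5, 12.0, 0.5)
dist = [harmonic_distortion(gaussian_blur(binary, s), period) for s in sigmas]
best_sigma = next(s for s, d in zip(sigmas, dist) if d < 1e-3)
```

Choosing the smallest sigma that meets the distortion target keeps the fringe modulation as high as possible while making the pattern effectively sinusoidal.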

3.
Sensors (Basel) ; 17(2)2017 Feb 14.
Article in English | MEDLINE | ID: mdl-28216555

ABSTRACT

Industrial robots are expected to undertake ever more advanced tasks in modern manufacturing, such as intelligent grasping, in which a robot should recognize the position and orientation of a part before grasping it. In this paper, a monocular 6-degree-of-freedom (DOF) pose estimation technique that enables robots to grasp large parts at arbitrary poses is proposed. A camera mounted on the robot end-flange measures several feature points on the part before the robot moves to grasp it. To estimate the part pose, a nonlinear optimization model based on the camera's object-space collinearity error in different poses is established, and the initial iteration value is estimated with a differential transformation. The measuring poses of the camera are optimized based on uncertainty analysis. The principle of the robotic intelligent grasping system was also developed, with which the robot can adjust its pose to grasp the part. In experimental tests, part poses estimated with the proposed method were compared with those produced by a laser tracker; the results show RMS angle and position errors of about 0.0228° and 0.4603 mm, respectively. Robotic intelligent grasping tests were also performed successfully.
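
The nonlinear optimization at the core of such pose estimation can be sketched with a generic Gauss-Newton loop on reprojection error. The paper minimizes an object-space collinearity error and seeds the iteration with a differential transformation; here a simple image-space residual and a rough initial guess stand in, and the point layout, intrinsics, and poses are illustrative assumptions.

```python
import numpy as np

def rodrigues(w):
    """Axis-angle vector -> rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def project(params, pts):
    """Pinhole projection (unit focal length); pose = (rx, ry, rz, tx, ty, tz)."""
    R, t = rodrigues(params[:3]), params[3:]
    cam = pts @ R.T + t
    return cam[:, :2] / cam[:, 2:3]

# Feature points on a large part (meters) and a synthetic ground-truth pose.
pts = np.array([[0, 0, 0], [2, 0, 0], [2, 1, 0.1], [0, 1, 0.1],
                [1, 0.5, 0.3], [0.5, 0.2, 0.2]], float)
true_pose = np.array([0.1, -0.2, 0.05, 0.3, -0.1, 5.0])
obs = project(true_pose, pts)                 # "measured" image points

# Gauss-Newton with a numeric Jacobian, started from a rough initial guess.
pose = np.array([0.0, 0, 0, 0, 0, 4.0])
for _ in range(20):
    r = (project(pose, pts) - obs).ravel()
    J = np.empty((r.size, 6))
    for j in range(6):
        dp = np.zeros(6)
        dp[j] = 1e-6
        J[:, j] = ((project(pose + dp, pts) - obs).ravel() - r) / 1e-6
    pose -= np.linalg.solve(J.T @ J, J.T @ r)
```

With noise-free synthetic observations the loop converges back to the ground-truth pose; real data would leave a residual reflecting measurement noise.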

4.
Opt Express ; 24(11): 12026-42, 2016 May 30.
Article in English | MEDLINE | ID: mdl-27410124

ABSTRACT

Augmented reality systems can provide precise guidance for various kinds of manual work. The adaptability and guiding accuracy of such systems are determined by the computational model and the corresponding calibration method. In this paper, a novel augmented reality guiding system and its design scheme are proposed. Guided by external positioning equipment, the proposed system achieves high relative indication accuracy in a large working space. The system is realized with a digital projector, and a general back-projection model is derived from the geometric relationship between the digitized 3D model and the projector in free space. A corresponding calibration method is designed to obtain the projector parameters. To validate the proposed back-projection model, coordinate data collected by 3D positioning equipment are used to calculate and optimize the extrinsic parameters. The final projection indication accuracy of the system is verified with a subpixel pattern-projection technique.
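
Although the abstract does not spell out the back-projection model, the underlying idea of projector guidance can be sketched by treating the projector as an inverse camera: to indicate a point of the digitized 3D model on the workpiece, the point is projected through the projector's pinhole model to find the pixel to illuminate. The intrinsics and pose below are purely hypothetical values.

```python
import numpy as np

K = np.array([[1400.0, 0.0, 960.0],        # hypothetical projector intrinsics
              [0.0, 1400.0, 540.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                               # hypothetical projector pose
t = np.zeros(3)

def indicate(point_3d):
    """Return the (u, v) projector pixel that illuminates a 3D point."""
    uvw = K @ (R @ point_3d + t)            # same pinhole model as a camera
    return uvw[:2] / uvw[2]

u, v = indicate(np.array([0.1, -0.05, 2.0]))   # -> (1030.0, 505.0)
```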

5.
Appl Opt ; 55(25): 6836-43, 2016 Sep 01.
Article in English | MEDLINE | ID: mdl-27607257

ABSTRACT

Calibration of line-scan cameras for precision measurement requires a large calibration volume and flexibility in the actual measurement field. In this paper, we present a high-precision calibration method. Instead of using a large 3D pattern, we use a small planar pattern and a pre-calibrated matrix camera to obtain plenty of points with a suitable distribution, which ensures the precision of the calibration results. The matrix camera removes the need for precise adjustment and movement and easily links the line-scan camera to the world frame, both of which enhance flexibility in the measurement field. The method has been verified by experiments, and the results demonstrate that it provides a practical solution for calibrating line-scan cameras for precision measurement.
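
The final fitting step of such a calibration can be sketched as a direct linear transform (DLT): once the matrix camera has supplied the 3D coordinates of many pattern points seen by the line-scan camera, its 1D projective model u = (p1 · X) / (p3 · X) is linear in the unknowns and can be solved from the SVD null vector. The synthetic camera and point layout below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
P = np.array([[800.0, 5.0, 2.0, 100.0],     # "true" 1D projective camera (2x4)
              [0.0, 0.0, 1.0, 2.0]])
pts = rng.uniform(-1, 1, (20, 3))
pts[:, 2] = rng.uniform(0.5, 1.5, 20)       # keep depths well away from zero
X = np.column_stack([pts, np.ones(20)])     # homogeneous 3D points
u = (X @ P[0]) / (X @ P[1])                 # observed line-scan pixels

# Each point gives one linear equation p1 . X - u * (p3 . X) = 0; the null
# vector of the stacked system recovers the camera up to scale.
A = np.hstack([X, -u[:, None] * X])
P_est = np.linalg.svd(A)[2][-1].reshape(2, 4)
u_est = (X @ P_est[0]) / (X @ P_est[1])     # reprojection with the estimate
```

Twenty well-distributed points are far more than the minimum needed, which is exactly why the matrix camera's ability to supply many points improves precision.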

6.
Sensors (Basel) ; 16(11)2016 Nov 18.
Article in English | MEDLINE | ID: mdl-27869731

ABSTRACT

The acquisition of three-dimensional surface data plays an increasingly important role in industry. Numerous 3D shape measurement techniques have been developed, but limitations and challenges remain in the fast measurement of large-scale or high-speed moving objects. Line-scan technology opens up new possibilities owing to its ultra-high resolution and line rate. To this end, a sensor for in-motion continuous 3D shape measurement based on dual line-scan cameras is presented. In this paper, the principle and structure of the sensor are investigated, the image-matching strategy is addressed, and the matching error is analyzed. The sensor has been verified by experiments, and high-quality results were obtained.
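
Whatever the matching strategy, the depth recovery behind a dual-camera sensor reduces to triangulation: a matched point's 3D position is the least-squares intersection of the two viewing rays. The geometry below is an illustrative assumption, not the sensor's actual layout.

```python
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Least-squares intersection of two rays o + s*d in 3D."""
    def perp(d):                      # projector onto the plane orthogonal to d
        d = d / np.linalg.norm(d)
        return np.eye(3) - np.outer(d, d)
    # Minimizing the summed squared distance to both rays gives a 3x3 system.
    A = perp(d1) + perp(d2)
    b = perp(d1) @ o1 + perp(d2) @ o2
    return np.linalg.solve(A, b)

# Two sensors 0.4 m apart observing the same surface point.
p = np.array([0.1, 0.05, 1.2])                    # ground-truth point
o1, o2 = np.zeros(3), np.array([0.4, 0.0, 0.0])   # camera centers
x = triangulate(o1, p - o1, o2, p - o2)           # recovers p exactly here
```

With real, slightly mismatched rays the same formula returns the point closest to both rays, which is why the matching error analysis mentioned in the abstract directly bounds the 3D error.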

7.
Sensors (Basel) ; 13(12): 16565-82, 2013 Dec 03.
Article in English | MEDLINE | ID: mdl-24300597

ABSTRACT

A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. The approach can be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor and aligned to a reference point fixed in the robot workspace. A mathematical model is established that relates the misalignment errors to the kinematic parameter errors and TCP position errors. Based on the fixed-point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared with conventional methods, the proposed method eliminates the need for robot base-frame and hand-eye calibrations, shortens the error-propagation chain, and makes the calibration process more accurate and convenient. A validation experiment was performed on an ABB IRB2400 robot, and an optimal configuration for the number and distribution of fixed points in the robot workspace was obtained from the experimental results. Comparative experiments reveal a significant improvement in the measurement accuracy of the robotic visual inspection system.
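
The iterative identification step can be illustrated at toy scale: a 2-link planar arm stands in for the 6-DOF robot, the "sensor" supplies TCP positions in several configurations, and a linearized least-squares problem is iterated to recover the kinematic (here, link-length) errors. All numbers are illustrative assumptions, not the ABB IRB2400 model.

```python
import numpy as np

def fk(lengths, q):
    """Planar 2-link forward kinematics (TCP position)."""
    l1, l2 = lengths
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

true_len = np.array([0.702, 0.499])        # actual link lengths (m)
nominal = np.array([0.700, 0.500])         # nominal model to be calibrated
qs = [np.array([a, b]) for a in (0.2, 0.8, 1.4) for b in (-0.9, -0.3, 0.5)]
meas = [fk(true_len, q) for q in qs]       # "sensor" measurements

est = nominal.copy()
for _ in range(5):                          # iterative identification
    r = np.concatenate([fk(est, q) - m for q, m in zip(qs, meas)])
    J = np.empty((r.size, 2))
    for j in range(2):
        dp = np.zeros(2)
        dp[j] = 1e-7
        J[:, j] = (np.concatenate([fk(est + dp, q) - m
                                   for q, m in zip(qs, meas)]) - r) / 1e-7
    est -= np.linalg.solve(J.T @ J, J.T @ r)
```

The real method additionally folds the fixed-point constraint and TCP position errors into the residual, but the estimate-linearize-solve loop has the same shape.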


Subjects
Robotics/instrumentation, Calibration, Theoretical Models
8.
IEEE Trans Image Process ; 31: 6831-6846, 2022.
Article in English | MEDLINE | ID: mdl-36269919

ABSTRACT

Multi-view 3D reconstruction generally adopts a feature-fusion strategy to guide the generation of 3D shapes for objects seen from different views. Empirically, learning the correspondence of object regions across different views enables better feature fusion. However, this idea has not been fully exploited in existing methods. Furthermore, current methods fail to explore the intrinsic dependency among regions within a 3D shape, leading to rough reconstruction results. To address these issues, we propose a dual-view 3D point cloud reconstruction architecture named DVPC, which takes images from two views as input and progressively generates a refined 3D point cloud. First, a point cloud generation network generates a coarse point cloud for each input view. Second, a dual-view point cloud synthesis network is presented in DVPC. It constructs a regional attention mechanism to learn a high-quality correspondence among regions across the two coarse point clouds in different views, so that DVPC can fuse features accurately. It then applies a point cloud deformation module to produce a relatively precise point cloud by establishing communication between the coarse point clouds and the fused feature. Lastly, a point-region transformer network is devised to model the dependency among regions within the relatively precise point cloud. With this dependency, the point cloud is refined into a desirable 3D point cloud with rich details. Qualitative and quantitative experiments on the ShapeNet and Pix3D datasets demonstrate that the proposed DVPC outperforms state-of-the-art methods in reconstruction quality.
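
The regional attention idea can be sketched with plain scaled dot-product cross-attention between region features of the two coarse point clouds. The dimensions, single-head form, and residual fusion below are illustrative assumptions about DVPC, not its actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
regions, dim = 8, 16
f_view1 = rng.normal(size=(regions, dim))   # region features, coarse cloud 1
f_view2 = rng.normal(size=(regions, dim))   # region features, coarse cloud 2

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Each view-1 region attends to all view-2 regions (Q = view 1, K = V = view 2),
# so corresponding regions across views dominate the fused representation.
attn = softmax(f_view1 @ f_view2.T / np.sqrt(dim))
fused = f_view1 + attn @ f_view2            # residual cross-view feature fusion
```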
