Results 1 - 2 of 2
1.
Sensors (Basel); 23(9), 2023 Apr 27.
Article in English | MEDLINE | ID: mdl-37177514

ABSTRACT

Machine vision systems are widely used in assembly lines to provide robots with the sensing abilities needed to handle dynamic environments. This paper compares 3D sensors to determine which is best suited for a machine vision system for robotic fastening operations in an automotive assembly line. The perception system is needed to account for the position uncertainty that arises from the vehicles being transported on an aerial conveyor. Three sensors with different working principles were compared: laser triangulation (SICK TriSpector1030), structured light with sequential stripe patterns (Photoneo PhoXi S), and structured light with an infrared speckle pattern (Asus Xtion Pro Live). The accuracy of the sensors was measured by computing the root mean square error (RMSE) of the point cloud registrations between their scans and two types of reference point clouds, namely CAD files and 3D sensor scans. Overall, the RMSE was lower when using sensor scans as references, with the SICK TriSpector1030 achieving the best results (0.25 mm ± 0.03 mm), the Photoneo PhoXi S showing intermediate performance (0.49 mm ± 0.14 mm), and the Asus Xtion Pro Live yielding the highest RMSE (1.01 mm ± 0.11 mm). Given the use-case requirements, the final machine vision system relied on the SICK TriSpector1030 sensor and was integrated with a collaborative robot, which was successfully deployed in a vehicle assembly line, achieving a 94% success rate over 53,400 screwing operations.
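
As an illustration of the accuracy metric described above, the following is a minimal Python sketch of computing the RMSE between an already-registered sensor scan and a reference point cloud. The function name and the synthetic data are illustrative assumptions, not code from the paper; it uses SciPy's KD-tree to find, for each scan point, the nearest reference point.

import numpy as np
from scipy.spatial import cKDTree

def registration_rmse(scan, reference):
    """RMSE of each scan point to its nearest neighbour in the reference cloud."""
    tree = cKDTree(reference)              # index the (N, 3) reference points
    distances, _ = tree.query(scan)        # nearest-neighbour distance per scan point
    return float(np.sqrt(np.mean(distances ** 2)))

# Synthetic check (hypothetical data): a reference cloud versus the same
# cloud perturbed with ~0.5 mm Gaussian noise.
rng = np.random.default_rng(0)
reference = rng.uniform(0.0, 100.0, size=(5000, 3))   # coordinates in mm
scan = reference + rng.normal(scale=0.5, size=reference.shape)
print(f"RMSE: {registration_rmse(scan, reference):.2f} mm")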

2.
Sensors (Basel); 20(15), 2020 Aug 04.
Article in English | MEDLINE | ID: mdl-32759633

ABSTRACT

Carrying out the exploration of a scene with an autonomous robot entails a set of complex skills: the ability to create and update a representation of the scene, knowledge of which regions of the scene remain unexplored, the ability to estimate the most efficient point of view from the perspective of an explorer agent, and, finally, the ability to physically move the system to the selected Next Best View (NBV). This paper proposes an autonomous exploration system that uses a dual OcTree representation to encode the regions of the scene that are occupied, free, and unknown. The NBV is estimated through a discrete approach that samples and evaluates a set of view hypotheses created by a conditioned random process, which ensures that the views have some chance of adding novel information to the scene. The algorithm uses ray-casting defined according to the characteristics of the RGB-D sensor, and a mechanism that sorts the voxels to be tested in a way that considerably speeds up the assessment. The sampled view estimated to provide the largest amount of novel information is selected, and the system moves to that location, where a new exploration step begins. The exploration session terminates when there are no more unknown regions in the scene or when those that remain cannot be observed by the system. The experimental setup consisted of a robotic manipulator with an RGB-D sensor mounted on its end-effector, all managed by a Robot Operating System (ROS) based architecture. The manipulator provides movement, while the sensor collects information about the scene. Experimental results span three test scenarios designed to evaluate the performance of the proposed system; in particular, its exploration performance is compared against that of human subjects. Results show that the proposed approach is able to carry out the exploration of a scene, even when starting from scratch, building up knowledge as the exploration progresses. Furthermore, in these experiments, the system completed the exploration of the scene in less time than the human subjects.
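
To make the NBV scoring step concrete, here is a hedged Python sketch of the general idea: candidate views are scored by ray-casting through a voxel map holding occupied / free / unknown states, and the view expected to reveal the most unknown voxels is selected. The dense grid, the fixed-step ray marching, and all names are simplified stand-ins assumed for illustration; the paper itself uses a dual OcTree, sensor-specific ray-casting, and a voxel-sorting mechanism not reproduced here.

import numpy as np

UNKNOWN, FREE, OCCUPIED = 0, 1, 2

def score_view(grid, origin, directions, max_range=20.0, step=0.5):
    """Count distinct unknown voxels that rays from this view would reveal."""
    seen_unknown = set()
    for d in directions:
        for t in np.arange(step, max_range, step):     # march along the ray
            v = tuple(np.floor(origin + t * d).astype(int))
            if any(i < 0 or i >= s for i, s in zip(v, grid.shape)):
                break                                  # ray left the map
            if grid[v] == OCCUPIED:
                break                                  # ray is blocked
            if grid[v] == UNKNOWN:
                seen_unknown.add(v)                    # novel information
    return len(seen_unknown)

def next_best_view(grid, candidates, directions):
    """Pick the candidate origin whose rays reveal the most unknown voxels."""
    return max(candidates, key=lambda c: score_view(grid, c, directions))

# Toy map (hypothetical): everything unknown except an explored free corner.
grid = np.zeros((40, 40, 40), dtype=np.int8)           # 0 == UNKNOWN
grid[:20, :20, :20] = FREE
rays = np.array([[np.cos(a), np.sin(a), 0.2] for a in np.linspace(0, 2 * np.pi, 32)])
rays /= np.linalg.norm(rays, axis=1, keepdims=True)    # unit view directions
views = [np.array([10.0, 10.0, 10.0]), np.array([30.0, 30.0, 10.0])]
print("Selected NBV:", next_best_view(grid, views, rays))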
