Results 1 - 4 of 4
1.
Sci Rep ; 14(1): 796, 2024 Jan 08.
Article in English | MEDLINE | ID: mdl-38191493

ABSTRACT

3D edge features, which represent the boundaries between different objects or surfaces in a 3D scene, are crucial for many computer vision tasks, including object recognition, tracking, and segmentation. They also have numerous real-world applications in robotics, such as vision-guided grasping and manipulation of objects. To extract these features from noisy real-world depth data, reliable 3D edge detectors are indispensable. However, currently available 3D edge detection methods are either highly parameterized or require ground-truth labelling, which makes them challenging to use in practical applications. To this end, we present a new 3D edge detection approach based on unsupervised classification. Our method learns features from depth maps at three different scales using an encoder-decoder network, from which edge-specific features are extracted. These edge features are then clustered using unsupervised learning to classify each point as an edge or not. The proposed method has two key benefits. First, it eliminates the need for manual fine-tuning of data-specific hyper-parameters and automatically selects threshold values for edge classification. Second, it does not require any labelled training data, unlike many state-of-the-art methods that rely on supervised training with extensive hand-labelled datasets. The proposed method is evaluated on five benchmark datasets with single- and multi-object scenes, and compared with four state-of-the-art edge detection methods from the literature. Results demonstrate that the proposed method achieves competitive performance, despite not using any labelled data or relying on hand-tuning of key parameters.
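As a rough illustration of the clustering idea described in this abstract, the minimal sketch below replaces a hand-tuned edge threshold with a two-way clustering of per-point features. The multi-scale gradient features here are a hypothetical stand-in for the paper's learned encoder-decoder features, and the cluster-labelling rule is an assumption, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def classify_edges(depth_map: np.ndarray) -> np.ndarray:
    """Label each pixel of a depth map as edge (1) or non-edge (0)."""
    # Crude multi-scale features: gradient magnitude at three smoothing scales
    # (a placeholder for the learned edge-specific features in the paper).
    feats = []
    for sigma in (1.0, 2.0, 4.0):
        smoothed = ndimage.gaussian_filter(depth_map, sigma)
        gy, gx = np.gradient(smoothed)
        feats.append(np.hypot(gx, gy).ravel())
    X = np.stack(feats, axis=1)

    # Two-way clustering replaces a hand-tuned edge threshold.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # Assumed rule: the cluster with the stronger gradient response is "edge".
    edge_cluster = int(X[labels == 1].mean() > X[labels == 0].mean())
    return (labels == edge_cluster).astype(np.uint8).reshape(depth_map.shape)
```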

2.
Front Robot AI ; 9: 824716, 2022.
Article in English | MEDLINE | ID: mdl-35391943

ABSTRACT

This paper deals with the control of a redundant cobot arm to accomplish peg-in-hole insertion tasks in the context of middle ear surgery. It focuses mainly on the development of two shared control laws that combine local measurements provided by position or force sensors with globally observed visual information. We first investigate two classical and well-established control modes, i.e., a position-based end-frame teleoperation controller and a comanipulation controller. Based on these two control architectures, we then propose a combination of visual feedback and position/force-based inputs in the same control scheme. In contrast to conventional control designs where all degrees of freedom (DoF) are equally controlled, the proposed shared controllers allow teleoperation of the linear/translational DoFs while the rotational ones are simultaneously handled by a vision-based controller. Such controllers reduce task complexity: a complex peg-in-hole task is simplified for the operator to basic translations in space, while tool orientations are controlled automatically. Various experiments, conducted with a 7-DoF robot arm equipped with a force/torque sensor and a camera, validate the proposed controllers in a simulated minimally invasive surgical procedure. The results obtained in terms of accuracy, ergonomics, and speed are discussed.
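A minimal sketch of the DoF-sharing idea, assuming a twist-level interface to the arm: the operator's input drives only the translational velocity, while an image-based orientation error drives the rotational velocity. The gains and the form of the visual error are illustrative assumptions, not the paper's controller.

```python
import numpy as np

K_V = 0.5   # gain on the operator's translational command (assumed)
K_W = 1.0   # gain on the vision-based orientation error (assumed)

def shared_velocity_command(operator_xyz: np.ndarray,
                            orientation_error: np.ndarray) -> np.ndarray:
    """Return a 6-DoF twist [vx, vy, vz, wx, wy, wz] for the robot arm.

    operator_xyz      -- translational command from the input device (m/s)
    orientation_error -- axis-angle tool orientation error from the camera (rad)
    """
    v = K_V * operator_xyz            # the human handles translations
    w = -K_W * orientation_error      # the vision servo handles rotations
    return np.concatenate([v, w])

# Example: the operator pushes along +z while the tool is 0.1 rad off about x;
# the command translates the tool and corrects its orientation simultaneously.
twist = shared_velocity_command(np.array([0.0, 0.0, 0.02]),
                                np.array([0.1, 0.0, 0.0]))
```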

3.
Front Robot AI ; 7: 542406, 2020.
Article in English | MEDLINE | ID: mdl-33501313

ABSTRACT

Task-aware robotic grasping is critical if robots are to cooperate successfully with humans. The choice of a grasp is multi-faceted; however, the task to be performed primes this choice in terms of hand shaping and placement on the object. The grasping strategy is particularly important for a robot companion, as a poorly chosen grasp can hinder the success of collaboration with humans. In this work, we investigate how different grasping strategies of a robot passer influence the performance and the perceptions of a human receiver. Our findings suggest that a grasping strategy that accounts for the receiver's subsequent task substantially improves the receiver's performance in executing that task: the time to complete it is reduced by eliminating the need for a post-handover re-adjustment of the object. Furthermore, human perceptions of the interaction improve when a task-oriented grasping strategy is adopted. The influence of the robotic grasp strategy increases as the constraints induced by the object's affordances become more restrictive. These results can benefit the wider robotics community, with applications ranging from industrial to household human-robot interaction for cooperative and collaborative object manipulation.
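One way to make the task-aware idea concrete is a grasp-selection rule that keeps the robot's hand clear of the part of the object the receiver needs (for instance, a tool's handle). The sketch below is an assumed illustration of that principle, not the strategy used in the study; the scoring rule, data layout, and clearance value are all hypothetical.

```python
import numpy as np

def pick_handover_grasp(candidates: np.ndarray,
                        task_region: np.ndarray,
                        clearance: float = 0.05) -> np.ndarray:
    """Choose the candidate grasp that best clears the receiver's task region.

    candidates  -- (N, 3) candidate grasp positions on the object
    task_region -- (3,) centre of the region the receiver must access
    clearance   -- minimum acceptable distance to that region in metres (assumed)
    """
    dists = np.linalg.norm(candidates - task_region, axis=1)
    feasible = dists >= clearance
    if not feasible.any():
        # No grasp clears the region; fall back to the farthest candidate.
        return candidates[np.argmax(dists)]
    # Among feasible grasps, the farthest leaves the receiver the most room.
    return candidates[np.argmax(np.where(feasible, dists, -np.inf))]
```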

4.
Scanning ; 36(4): 419-29, 2014.
Article in English | MEDLINE | ID: mdl-24578204

ABSTRACT

As an imaging system, the scanning electron microscope (SEM) plays an important role in autonomous micro/nanomanipulation applications. In the sub-micrometer range and at high scanning speeds, the images produced by an SEM are noisy and need to be evaluated or corrected beforehand. In this article, the quality of images produced by a tungsten-gun SEM is evaluated by quantifying the image signal-to-noise ratio (SNR). To determine the SNR, an efficient online monitoring method based on nonlinear filtering of a single image is developed. Using this method, the quality of images produced by a tungsten-gun SEM is monitored under different experimental conditions. The results demonstrate the method's efficiency in SNR quantification and illustrate how imaging quality evolves in the SEM.
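A minimal sketch of single-image SNR estimation via nonlinear filtering: a median filter (one common nonlinear filter; the paper's exact filter is not specified in this abstract) supplies a noise-suppressed estimate of the signal, and the residual is treated as noise. The window size and the signal/noise decomposition are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def estimate_snr_db(image: np.ndarray, window: int = 3) -> float:
    """Estimate an image's SNR in dB from a single frame."""
    img = image.astype(np.float64)
    signal = median_filter(img, size=window)   # nonlinear noise suppression
    noise = img - signal                       # residual treated as noise
    signal_power = np.mean(signal ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12  # guard against a zero residual
    return 10.0 * np.log10(signal_power / noise_power)
```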
