Results 1 - 5 of 5
1.
Sensors (Basel); 21(1), 2020 Dec 27.
Article in English | MEDLINE | ID: mdl-33375400

ABSTRACT

Transfer learning, i.e., leveraging a pre-trained network and fine-tuning it to perform new tasks, has been successfully applied in a variety of machine intelligence fields, including computer vision, natural language processing, and audio/speech recognition. Drawing inspiration from neuroscience research suggesting that visual and tactile stimuli rouse similar neural networks in the human brain, in this work we explore the idea of transferring learning from vision to touch in the context of 3D object recognition. In particular, deep convolutional neural networks (CNNs) pre-trained on visual images are adapted and evaluated for the classification of tactile data sets. To do so, we ran experiments with five different pre-trained CNN architectures on five different datasets acquired with different tactile-sensing technologies, including BathTip, Gelsight, a force-sensing resistor (FSR) array, a high-resolution virtual FSR sensor, and the tactile sensors on the Barrett robotic hand. The results obtained confirm the transferability of learning from vision to touch for interpreting 3D models. Owing to its higher resolution, tactile data from optical tactile sensors achieved higher classification rates based on visual features than data from technologies relying on pressure measurements. A further analysis of the weight updates in the convolutional layers measures the similarity between visual and tactile features for each tactile-sensing technology. Comparing the weight updates in different convolutional layers suggests that, by updating only a few convolutional layers, a CNN pre-trained on visual data can be used efficiently to classify tactile data. Accordingly, we propose a hybrid architecture performing both visual and tactile 3D object recognition with a MobileNetV2 backbone. MobileNetV2 is chosen for its small size and hence its suitability for mobile devices, so that the network can classify both visual and tactile data. The proposed architecture achieves an accuracy of 100% on visual data and 77.63% on tactile data.


Subjects
Neural Networks, Computer; Robotics; Touch; Humans; Visual Perception
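The few-layer fine-tuning strategy this abstract describes can be sketched as follows; the backbone here is a small stand-in (the paper uses pre-trained architectures such as MobileNetV2), and all layer sizes, class counts, and data are illustrative:

```python
import torch
import torch.nn as nn

# Toy stand-in for a CNN backbone pre-trained on visual data;
# layer sizes are illustrative only.
backbone = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(32, 5)  # new classifier head for a hypothetical 5-class tactile task

# Freeze everything, then unfreeze only the last conv layer, mirroring
# the finding that updating a few convolutional layers suffices.
for p in backbone.parameters():
    p.requires_grad = False
for p in backbone[4].parameters():   # last Conv2d in the stack
    p.requires_grad = True

trainable = [p for p in list(backbone.parameters()) + list(head.parameters())
             if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)

# One fine-tuning step on a batch of fake "tactile images"
x = torch.randn(4, 3, 32, 32)
y = torch.randint(0, 5, (4,))
loss = nn.functional.cross_entropy(head(backbone(x)), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Only the unfrozen layers and the new head receive gradient updates, so most of the visually learned features are reused as-is on the tactile data.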
2.
Sensors (Basel); 17(6), 2017 Jun 14.
Article in English | MEDLINE | ID: mdl-28613255

ABSTRACT

Compliance has been exploited in various forms in robotic systems to allow rigid mechanisms to come into contact with fragile objects, or with complex shapes that cannot be accurately modeled. Force feedback control has been the classical approach for providing compliance in robotic systems. However, by integrating other forms of instrumentation with compliance into a single device, close monitoring of nearby objects can be extended to before and after contact occurs. As a result, safer and smoother robot control can be achieved both while approaching and while touching surfaces. This paper presents the design and extensive experimental evaluation of a versatile, lightweight, and low-cost instrumented compliant wrist mechanism that can be mounted on any rigid robotic manipulator to introduce a layer of compliance while providing the controller with extra sensing signals during close interaction with an object's surface. Arrays of embedded range sensors provide real-time measurements of the position and orientation of surfaces, whether in proximity to or in contact with the robot's end-effector, permitting close guidance of its operation. Calibration procedures are formulated to overcome inter-sensor variability and achieve the highest available resolution. A versatile solution is created by embedding all signal processing in the device, while wireless transmission connects it to any industrial robot's controller to support path control. Experimental work demonstrates the device's physical compliance as well as the stability and accuracy of its outputs. Primary applications of the proposed instrumented compliant wrist include smooth surface following in manufacturing, inspection, and safe human-robot interaction.


Subjects
Wrist; Equipment Design; Feedback; Humans; Robotics; Touch
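The per-sensor calibration against inter-sensor variability mentioned above can be sketched as a simple linear fit per sensor; the paper's actual procedure may differ, and all readings and reference distances here are made up:

```python
import numpy as np

# Hypothetical calibration: fit a gain and offset for each range sensor
# against known reference distances, compensating inter-sensor variability.
ref = np.array([20.0, 40.0, 60.0, 80.0])   # reference distances (mm)
raw = np.array([                            # raw readings, one row per sensor
    [22.1, 41.9, 61.8, 81.7],
    [18.3, 37.1, 55.8, 74.6],
])

cal = []
for r in raw:
    gain, offset = np.polyfit(r, ref, 1)    # least squares: ref ≈ gain*r + offset
    cal.append((gain, offset))

def calibrated(sensor, reading):
    """Map a raw reading from one sensor onto the common distance scale."""
    g, o = cal[sensor]
    return g * reading + o
```

After calibration, the two sensors report consistent distances for the same surface despite their differing raw responses.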
3.
IEEE Trans Image Process; 31: 149-163, 2022.
Article in English | MEDLINE | ID: mdl-34807822

ABSTRACT

A prevalent family of fully convolutional networks is capable of learning discriminative representations and producing structured predictions in semantic segmentation tasks. However, such supervised learning methods require a large amount of labeled data and are unable to learn cross-domain invariant representations, leading to overfitting on the source dataset. Domain adaptation, a transfer learning technique that is effective at aligning feature distributions, can improve the performance of learning methods by alleviating the inter-domain discrepancy. Recently introduced output-space adaptation methods provide significant advances on cross-domain semantic segmentation tasks; however, by neglecting the intra-domain divergence of the domain discrepancy, they remain prone to over-adaptation on the target domain. To address this problem, we first leverage prototypical knowledge on the target domain to relax its hard domain label into a continuous domain space, in which pixel-wise domain adaptation is developed upon a soft adversarial loss. The prototypical knowledge makes it possible to elaborate specific adaptation strategies for under-aligned and well-aligned regions of the target domain. Furthermore, to achieve better adaptation performance, we employ a unilateral discriminator to alleviate the implicit uncertainty in the prototypical knowledge. Finally, we demonstrate theoretically and experimentally that the proposed prototypical-knowledge-oriented adaptation approach provides effective guidance for distribution alignment and alleviates over-adaptation. The proposed approach shows competitive performance with state-of-the-art methods on two cross-domain segmentation tasks.
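The relaxation of a hard domain label into a continuous one can be illustrated with a per-pixel binary cross-entropy against a soft target; this is a minimal sketch, not the paper's exact formulation, and the labeling convention (1 for under-aligned, 0 for well-aligned regions) is an assumption:

```python
import numpy as np

def soft_adversarial_loss(d_out, soft_label):
    """Binary cross-entropy against a continuous domain label in [0, 1].

    d_out      : discriminator sigmoid outputs per pixel, shape (H, W)
    soft_label : relaxed per-pixel domain label, shape (H, W); values near 1
                 mark regions to adapt harder, values near 0 mark regions
                 that are already well aligned (illustrative convention).
    """
    eps = 1e-7
    d = np.clip(d_out, eps, 1 - eps)
    bce = -(soft_label * np.log(d) + (1 - soft_label) * np.log(1 - d))
    return float(np.mean(bce))

# A classical hard target-domain label is all ones; prototypical knowledge
# would instead supply a spatially varying score (here a constant, for brevity).
d_out = np.full((4, 4), 0.5)
hard_loss = soft_adversarial_loss(d_out, np.ones((4, 4)))
soft_loss = soft_adversarial_loss(d_out, np.full((4, 4), 0.6))
```

With a soft label, pixels judged well aligned contribute a weaker adversarial signal, which is the mechanism that counteracts over-adaptation.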

4.
Front Robot AI; 7: 600584, 2020.
Article in English | MEDLINE | ID: mdl-33501360

ABSTRACT

Modeling deformable objects is an important preliminary step toward performing robotic manipulation tasks with more autonomy and dexterity. Currently, the generalization capabilities of analytical approaches in unstructured environments are limited, mainly due to their lack of adaptation to changes in object shape and properties. Therefore, this paper proposes the design and implementation of a data-driven approach that combines machine learning techniques on graphs to estimate and predict the state and transition dynamics of deformable objects with initially undefined shape and material characteristics. The learned object model is trained on RGB-D sensor data and evaluated in terms of its ability to estimate the current state of the object's shape and to predict future states, with the goal of planning and supporting the manipulation actions of a robotic hand.
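The idea of predicting a deformable object's next state over a graph can be sketched with one neighbor-propagation step; this is a toy stand-in for a learned transition model, and the node layout, edge list, and update rule are all illustrative:

```python
import numpy as np

def predict_next_state(positions, edges, alpha=0.5):
    """One simplified graph-propagation step: each node moves toward the
    mean of its neighbors, mimicking how a transition model over a
    deformable-object mesh smooths local deformations.

    positions : (N, 2) array of node coordinates
    edges     : list of (i, j) undirected links between mesh nodes
    alpha     : blending weight between current position and neighbor mean
    """
    n = len(positions)
    neigh_sum = np.zeros_like(positions)
    deg = np.zeros(n)
    for i, j in edges:
        neigh_sum[i] += positions[j]; deg[i] += 1
        neigh_sum[j] += positions[i]; deg[j] += 1
    deg = np.maximum(deg, 1)                     # isolated nodes stay put
    neigh_mean = neigh_sum / deg[:, None]
    return (1 - alpha) * positions + alpha * neigh_mean

# Three points on a line: the middle node stays put, the ends are pulled inward.
pos = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
nxt = predict_next_state(pos, [(0, 1), (1, 2)])
```

In a learned model, the hand-written averaging rule would be replaced by trained message and update functions, but the graph structure and per-node state update are the same.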

5.
IEEE Trans Syst Man Cybern B Cybern; 42(3): 740-53, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22207640

ABSTRACT

This paper discusses the design and implementation of a framework that automatically extracts and monitors the shape deformations of soft objects from a video sequence and associates them with force measurements, with the goal of providing the necessary information to the controller of a robotic hand to ensure safe model-based manipulation of deformable objects. Measurements of the interaction force at the fingertips and of the fingertip positions of a three-finger robotic hand are associated, using neural-network approaches, with the contours of a deformed object tracked in a series of images. The resulting model captures the behavior of the object and is able to predict its behavior for previously unseen interactions without any assumption on the object's material. The availability of such models can contribute to improving a robotic hand controller, allowing more accurate and stable grasping while providing more elaborate manipulation capabilities for deformable objects. Experiments performed on different objects, made of various materials, reveal that the method accurately captures and predicts the object's shape deformation while the object is subjected to external forces applied by the robot fingers. The proposed method is also fast and insensitive to severe contour deformations, as well as to smooth changes in lighting, contrast, and background.


Subjects
Artificial Intelligence; Hand; Image Interpretation, Computer-Assisted/methods; Models, Theoretical; Pattern Recognition, Automated/methods; Robotics/methods; Video Recording/methods; Algorithms; Biomimetics/methods; Computer Simulation; Decision Support Techniques; Elastic Modulus; Humans; Motion
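The mapping this abstract describes, from fingertip forces and positions to contour deformation, can be sketched as a small feed-forward network; the input layout (one force and one position value per finger of a three-finger hand), the untrained random weights, and the number of tracked contour points are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny MLP: 6 inputs (one force + one position value per finger,
# three fingers) -> displacements of 8 tracked contour points (16 coordinates).
# Weights are random here; in the paper's framework they would be trained on
# tracked contours paired with force/position measurements.
W1, b1 = rng.normal(size=(6, 32)) * 0.1, np.zeros(32)
W2, b2 = rng.normal(size=(32, 16)) * 0.1, np.zeros(16)

def predict_deformation(finger_inputs):
    """Forward pass: predicted (dx, dy) displacement for each contour point."""
    h = np.tanh(finger_inputs @ W1 + b1)
    return h @ W2 + b2

rest = predict_deformation(np.zeros(6))                            # no applied force
pressed = predict_deformation(np.array([2.0, 0.1, 1.5, 0.2, 0.5, 0.3]))
```

Once trained, such a model predicts the contour for force/position combinations never seen during data collection, which is what lets the controller anticipate deformation before applying a grasp.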