Results 1 - 6 of 6
1.
Sensors (Basel) ; 22(23)2022 Dec 06.
Article in English | MEDLINE | ID: mdl-36502245

ABSTRACT

This paper presents the development of a visual-perception system on a dual-arm mobile robot for human-robot interaction. This visual system integrates three subsystems. Hand gesture recognition is utilized to trigger human-robot interaction. Engagement and intention of the participants are detected and quantified through a cognitive system. Visual servoing uses YOLO to identify the object to be tracked and hybrid, model-based tracking to follow the object's geometry. The proposed visual-perception system is implemented in the developed dual-arm mobile robot, and experiments are conducted to validate the proposed method's effects on human-robot interaction applications.


Subject(s)
Robotics , Humans , Algorithms , Visual Perception
2.
Int J Med Robot ; 18(3): e2384, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35199451

ABSTRACT

BACKGROUND: Recent advancements in continuum robotics have accentuated the need for efficient and stable controllers that handle shape deformation and compliance. Controlling continuum robots (CRs) with physical sensors attached to the robot, particularly in confined spaces, is difficult because of the sensors' limited accuracy in three-dimensional deflections and challenging localisation. Non-contact imaging sensors are therefore of noticeable importance, particularly in medical scenarios. Accordingly, given the need for direct control of the robot tip and the notable uncertainties in the kinematics and dynamics of CRs, many papers have focussed on the visual servoing (VS) of CRs in recent years. METHODS: The significance of this research for safe human-robot interaction has fuelled our survey of previous methods, current challenges, and future opportunities. RESULTS: Beginning with actuation modalities and modelling approaches, the paper investigates VS methods in medical and non-medical scenarios. CONCLUSIONS: Finally, challenges and prospects of VS for CRs are discussed, followed by concluding remarks.


Subject(s)
Robotics , Biomechanical Phenomena , Diagnostic Imaging , Humans , Robotics/methods
3.
Sensors (Basel) ; 22(2)2022 Jan 14.
Article in English | MEDLINE | ID: mdl-35062597

ABSTRACT

Assistive robotic arms (ARAs), which provide care to the elderly and people with disabilities, are a significant part of Human-Robot Interaction (HRI). Presently available ARAs offer non-intuitive interfaces, such as joysticks, for control and thus lack the autonomy to perform daily activities. This study proposes that inducing autonomous behavior in ARAs requires the integration of visual sensors, and that visual servoing in the direct Cartesian control mode is the preferred method. Generally, an ARA is designed in a configuration where its end-effector's position is defined in the fixed base frame while its orientation is expressed in the end-effector frame. We denote this configuration as a 'mixed frame robotic arm'. Consequently, conventional visual servo controllers, which operate in a single frame of reference, are incompatible with mixed frame ARAs. Therefore, we propose a mixed-frame visual servo control framework for ARAs. Moreover, we derive the task-space kinematics of a mixed frame ARA, which leads to a novel "mixed frame Jacobian matrix". The proposed framework was validated on a mixed frame JACO-2 7-DoF ARA using an adaptive proportional-derivative controller for image-based visual servoing (IBVS), showing a significant 31% increase in convergence rate and outperforming conventional IBVS joint controllers, especially in outstretched arm positions and near the base frame. Our results demonstrate the need for the mixed-frame controller when deploying visual servo control on modern ARAs, as it can inherently cater to the robotic arm's joint limits, singularities, and self-collision problems.


Subject(s)
Disabled Persons , Robotic Surgical Procedures , Self-Help Devices , Aged , Algorithms , Biomechanical Phenomena , Humans
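The abstract does not give the mixed-frame Jacobian itself; as background, classic image-based visual servoing (IBVS), which the paper compares against, computes a camera velocity from the point-feature interaction matrix. A minimal NumPy sketch under common textbook assumptions (normalized image coordinates, known point depths; function names are illustrative, not from the paper):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classic 2x6 IBVS interaction matrix for one normalized image point
    (x, y) observed at depth Z; maps camera twist to feature velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera velocity v = -gain * pinv(L) @ (s - s*), stacking one
    2x6 block per tracked point."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error
```

When the features coincide with their desired values the commanded velocity is zero, which is the equilibrium the controller drives toward.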
4.
Micromachines (Basel) ; 10(12)2019 Nov 25.
Article in English | MEDLINE | ID: mdl-31775279

ABSTRACT

A nano-stepping motor can translate or rotate when its piezoelectric element pair is driven electrically in-phase or anti-phase. It offers millimeter-level stroke, sub-micron-level stepping size, and sub-nanometer-level scanning resolution. Because the stepping size is not consistent due to changing contact friction, this article proposes a visual servo system that controls the nano-stepping motor using a custom-built microscopic instrument and image-recognition software. Three kinds of trajectories (straight lines, circles, and pentagrams) are performed successfully. The smallest straightness and roundness errors measured are 0.291 µm and 2.380 µm, respectively. Experimental results show that the proposed controller can effectively compensate for the error and precisely navigate the rotor along a desired trajectory.
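The core idea of servoing a motor whose step size varies with contact friction is to close the loop through the camera: re-measure after every step instead of assuming a fixed step length. A minimal sketch of such a loop (hypothetical callback names; not the authors' implementation):

```python
def servo_to_target(target_um, read_position, step_once, tol_um=0.5, max_steps=100000):
    """Closed-loop stepping: re-measure the rotor position from vision after
    every step, so step-size variation (e.g., from changing contact friction)
    is compensated each cycle rather than accumulated as drift."""
    for _ in range(max_steps):
        error = target_um - read_position()
        if abs(error) <= tol_um:
            return True  # converged within tolerance
        step_once(+1 if error > 0 else -1)
    return False  # gave up without converging
```

A trajectory (line, circle, pentagram) would then be followed by calling this routine on a sequence of closely spaced waypoints.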

5.
Int J Med Robot ; 14(4): e1904, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29974669

ABSTRACT

BACKGROUND: Surgical robot systems have been used in laparoendoscopic single-site surgery (LESS) to improve patient outcomes. A magnetic anchoring surgical robot system for LESS can effectively extend the operation space. METHODS: A robot system based on visual servo control for LESS is proposed. It comprises a magnetic anchoring robot and a control subsystem, in which an uncalibrated visual servo control method provides accurate positioning of the robot for LESS. RESULTS: Simulation and tissue-experiment results show that the robot system successfully accomplishes the expected control functionalities for LESS, with an average positioning error of 1.622 mm. CONCLUSION: Experimental results show that the magnetic anchoring robot system can autonomously position its end-effector using the proposed control approach.


Subject(s)
Robotic Surgical Procedures/instrumentation , Robotics/instrumentation , Algorithms , Equipment Design , Humans , Laparoscopy/instrumentation , Magnetics/instrumentation , Robotic Surgical Procedures/statistics & numerical data , Robotics/statistics & numerical data , Surgery, Computer-Assisted/instrumentation , Surgery, Computer-Assisted/statistics & numerical data , User-Computer Interface
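The abstract does not detail its uncalibrated control law; a common building block in uncalibrated visual servoing (an assumption here, not necessarily the authors' method) is to estimate the image Jacobian online with a rank-one Broyden update, avoiding any camera calibration:

```python
import numpy as np

def broyden_update(J, dq, ds, alpha=1.0):
    """Rank-one Broyden correction of an estimated image Jacobian:
    J <- J + alpha * (ds - J dq) dq^T / (dq^T dq),
    where dq is the joint displacement and ds the observed feature change."""
    dq = np.asarray(dq, dtype=float).reshape(-1, 1)
    ds = np.asarray(ds, dtype=float).reshape(-1, 1)
    return J + alpha * (ds - J @ dq) @ dq.T / float(dq.T @ dq)
```

After the update with `alpha=1`, the estimate exactly reproduces the most recent observation (`J_new @ dq == ds`), which is what lets the controller track the true Jacobian without calibration.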
6.
ISA Trans ; 67: 507-514, 2017 Mar.
Article in English | MEDLINE | ID: mdl-27939568

ABSTRACT

In this paper, we use the control Lyapunov function (CLF) technique to present an optimized visual servo control method for constrained eye-in-hand robot visual servoing systems. With knowledge of the camera intrinsic parameters and the depth of target changes, visual servo control laws (i.e., translation speed) with adjustable parameters are derived from image point features and a known CLF of the visual servoing system. The Fibonacci method is employed to compute online the optimal values of those adjustable parameters, which yields an optimized control law satisfying the constraints of the visual servoing system. Lyapunov's theorem and the properties of the CLF are used to establish closed-loop stability of the constrained visual servoing system with the optimized control law. One merit of the presented method is that it does not require online calculation of the pseudo-inverse of the image Jacobian matrix or of the homography matrix. Simulation and experimental results illustrate the effectiveness of the proposed method.
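The Fibonacci method named in the abstract is a standard derivative-free line search that shrinks a bracketing interval around the minimizer of a unimodal cost using ratios of Fibonacci numbers. A minimal sketch of how it could tune a scalar controller parameter online (function names are illustrative, not from the paper):

```python
def fib_numbers(n):
    """Fibonacci numbers F[0..n] with F[0] = F[1] = 1."""
    F = [1, 1]
    while len(F) <= n:
        F.append(F[-1] + F[-2])
    return F

def fibonacci_search(f, a, b, n=15):
    """Minimize a unimodal f on [a, b] with n interval reductions.
    Interior points sit at Fibonacci-ratio fractions of the interval,
    so each iteration reuses one previous function evaluation."""
    F = fib_numbers(n + 1)
    x1 = a + (F[n - 1] / F[n + 1]) * (b - a)
    x2 = a + (F[n] / F[n + 1]) * (b - a)
    f1, f2 = f(x1), f(x2)
    for k in range(1, n):
        if f1 > f2:           # minimizer lies in [x1, b]
            a = x1
            x1, f1 = x2, f2
            x2 = a + (F[n - k] / F[n - k + 1]) * (b - a)
            f2 = f(x2)
        else:                 # minimizer lies in [a, x2]
            b = x2
            x2, f2 = x1, f1
            x1 = a + (F[n - k - 1] / F[n - k + 1]) * (b - a)
            f1 = f(x1)
    return 0.5 * (a + b)
```

In the paper's setting, `f` would be the CLF-derived cost of a candidate gain, evaluated from the current image measurements at each control cycle.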
