Results 1 - 11 of 11
1.
Front Robot AI; 11: 1312554, 2024.
Article in English | MEDLINE | ID: mdl-38476118

ABSTRACT

Objective: For transradial amputees, robotic prosthetic hands promise to restore the capability to perform activities of daily living. Current control methods based on physiological signals such as electromyography (EMG) are prone to poor inference outcomes due to motion artifacts, muscle fatigue, and other factors. Vision sensors are a major source of information about the environment state and can play a vital role in inferring feasible and intended gestures. However, visual evidence is also susceptible to its own artifacts, most often due to object occlusion, lighting changes, etc. Multimodal evidence fusion using physiological and vision sensor measurements is a natural approach due to the complementary strengths of these modalities. Methods: In this paper, we present a Bayesian evidence fusion framework for grasp intent inference using eye-view video, eye-gaze, and forearm EMG, processed by neural network models. We analyze individual and fused performance as a function of time as the hand approaches the object to grasp it. For this purpose, we have also developed novel data processing and augmentation techniques to train the neural network components. Results: Our results indicate that, on average, fusion improves the instantaneous upcoming grasp type classification accuracy during the reaching phase by 13.66% and 14.8% relative to EMG (81.64% non-fused) and visual evidence (80.5% non-fused) individually, resulting in an overall fusion accuracy of 95.3%. Conclusion: Our experimental data analyses demonstrate that EMG and visual evidence have complementary strengths, and as a consequence, fusion of multimodal evidence can outperform each individual evidence modality at any given time.
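The abstract does not spell out the fusion rule. A standard Bayesian combination of per-modality classifier posteriors, assuming the EMG and vision evidence are conditionally independent given the grasp class, would look roughly like this sketch; the class count, probability values, and function names are illustrative, not taken from the paper:

```python
import numpy as np

def fuse_posteriors(emg_post, vision_post, prior):
    """Naive-Bayes fusion of per-modality grasp-type posteriors.

    Assuming EMG and vision evidence are conditionally independent
    given the grasp class g:
        p(g | emg, vis) is proportional to p(g | emg) * p(g | vis) / p(g)
    All inputs are length-K probability vectors over K grasp types.
    """
    fused = emg_post * vision_post / prior
    return fused / fused.sum()  # renormalize to a valid distribution

# Hypothetical example with K = 5 grasp types and a uniform prior.
prior = np.full(5, 1 / 5)
emg_post = np.array([0.60, 0.20, 0.10, 0.05, 0.05])     # e.g., from an EMG network
vision_post = np.array([0.40, 0.35, 0.15, 0.05, 0.05])  # e.g., from a vision network
print(fuse_posteriors(emg_post, vision_post, prior))
```

Under a uniform prior, the division by `prior` is a constant and only the renormalized product matters; a non-uniform prior would discount classes that are rare before any evidence arrives.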

2.
IISE Trans Occup Ergon Hum Factors; 12(1-2): 123-134, 2024.
Article in English | MEDLINE | ID: mdl-38498062

ABSTRACT

OCCUPATIONAL APPLICATIONS: "Overassistive" robots can adversely impact long-term human-robot collaboration in the workplace, leading to risks of worker complacency, reduced workforce skill sets, and diminished situational awareness. Ergonomics practitioners should thus be cautious about solely targeting widely adopted metrics for improving human-robot collaboration, such as user trust and comfort. By contrast, introducing variability and adaptation into a collaborative robot's behavior could prove vital in preventing the negative consequences of overreliance and overtrust in an autonomous partner. The work reported here explored how instilling variability into physical human-robot collaboration can have a measurably positive effect on ergonomics in a repetitive task. A review of principles related to this notion of "stimulating" robot behavior is also provided to further inform ergonomics practitioners of existing human-robot collaboration frameworks.


Background: Collaborative robots, or cobots, are becoming ubiquitous in occupational settings due to benefits that include improved worker safety and increased productivity. Existing research on human-robot collaboration in industry has made progress in enhancing workers' psychophysical states by optimizing measures of ergonomics risk factors, such as human posture, comfort, and cognitive workload. However, short-term objectives for robotic assistance may conflict with the worker's long-term preferences, needs, and overall wellbeing. Purpose: To investigate the ergonomic advantages and disadvantages of employing a collaborative robotics framework that intentionally imposes variability in the robot's behavior to stimulate the human partner's psychophysical state. Methods: A review of "overassistance" within human-robot collaboration and of methods for addressing this phenomenon via adaptive automation. In adaptive approaches, the robot assistance may even challenge the user to better achieve a long-term objective while partially conflicting with their short-term task goals. Common themes across these approaches were extracted to motivate and support the proposed idea of stimulating robot behavior in physical human-robot collaboration. Results: Experimental evidence to justify stimulating robot behavior is presented through a human-robot handover study. A robot handover policy that regularly injects variability into the object transfer location led to significantly larger dynamics in the torso rotations and center of mass of human receivers compared to an "overassistive" policy that constrains receiver motion. Crucially, the stimulating handover policy also generated improvements in widely used ergonomics risk indicators of human posture. Conclusions: Our findings underscore the potential ergonomic benefits of a cobot's actions imposing variability in a user's responsive behavior, rather than indirectly restricting human behavior by optimizing the immediate task objective. Therefore, a transition from cobot policies that optimize instantaneous measures of ergonomics to those that continuously engage users could hold promise for human-robot collaboration in occupational settings characterized by repeated interactions.


Subject(s)
Ergonomics, Robotics, Humans, Robotics/methods, Ergonomics/methods, Man-Machine Systems, Cooperative Behavior, Motion (Physics)
3.
Front Robot AI; 9: 982131, 2022.
Article in English | MEDLINE | ID: mdl-36313247

ABSTRACT

Cluttered environments with partial object occlusions pose significant challenges to robot manipulation. In settings composed of one dominant object type and various undesirable contaminants, occlusions make it difficult to both recognize and isolate the undesirable objects. Spatial features alone are not always sufficiently distinct to reliably identify anomalies under multiple layers of clutter, when only a fraction of the object is exposed. We create a multimodal data representation of cluttered object scenes, pairing depth data with a registered hyperspectral data cube. Hyperspectral imaging provides pixel-wise Visible Near-Infrared (VNIR) reflectance spectral curves, which are consistent across similar material types. Spectral reflectance data is grounded in the chemical-physical properties of an object, making spectral curves an excellent modality for differentiating between material classes. We propose a new automated method to perform hyperspectral anomaly detection in cluttered workspaces with the goal of improving robot manipulation. We first assume the dominance of a single material class and coarsely identify the dominant, non-anomalous class. Next, these labels are used to train an unsupervised autoencoder that identifies anomalous pixels through reconstruction error. To tie our anomaly detection to robot actions, we then apply a set of heuristically evaluated motion primitives to perturb and further expose local areas containing anomalies. The utility of this approach is demonstrated in numerous cluttered environments including organic and inorganic materials. In each of our four constructed scenarios, the proposed anomaly detection method consistently increases the exposed surface area of anomalies. Our work advances robot perception for cluttered environments by using multimodal anomaly detection, aided by hyperspectral sensing, to detect fractional object presence without the need for laboriously curated labels.
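The autoencoder stage is described only at a high level. A minimal per-pixel version of the general technique (train on spectra from the coarsely identified dominant class, then threshold the reconstruction error) might look like the following sketch; the band count, architecture, and training data are all assumptions rather than the paper's setup:

```python
import torch
import torch.nn as nn

N_BANDS = 128  # number of VNIR bands in the hyperspectral cube (assumed)

class SpectralAE(nn.Module):
    """Small autoencoder over individual pixel spectra."""
    def __init__(self, n_bands=N_BANDS, latent=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_bands, 32), nn.ReLU(), nn.Linear(32, latent))
        self.decoder = nn.Sequential(
            nn.Linear(latent, 32), nn.ReLU(), nn.Linear(32, n_bands))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_mask(model, spectra, threshold):
    """spectra: (num_pixels, N_BANDS) tensor; returns a boolean anomaly mask."""
    with torch.no_grad():
        err = ((model(spectra) - spectra) ** 2).mean(dim=1)  # per-pixel MSE
    return err > threshold

# Training sketch: minimize reconstruction error on dominant-class pixels only,
# so spectra from other materials reconstruct poorly and stand out as anomalies.
model = SpectralAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
dominant = torch.rand(10000, N_BANDS)  # stand-in for coarsely labeled pixels
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(dominant), dominant)
    loss.backward()
    opt.step()
```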

4.
Front Robot AI; 8: 730433, 2021.
Article in English | MEDLINE | ID: mdl-34568439

ABSTRACT

In remote applications that mandate human supervision, shared control can prove vital by establishing a harmonious balance between the high-level cognition of a user and the low-level autonomy of a robot. In practice, though, achieving this balance is a challenging endeavor that largely depends on whether the operator effectively interprets the underlying shared control. Inspired by recent work on using immersive technologies to expose the internal shared control, we develop a virtual reality system to visually guide human-in-the-loop manipulation. Our implementation of shared control teleoperation employs end-effector manipulability polytopes, which are geometrical constructs that embed joint-limit and environmental constraints. These constructs capture a holistic view of the constrained manipulator's motion and can thus be visually represented as feedback for users on their operable space of movement. To assess the efficacy of our proposed approach, we consider a teleoperation task where users manipulate a screwdriver attached to a robotic arm's end effector. A pilot study with prospective operators is first conducted to discern which graphical cues and virtual reality setup are most preferable. Feedback from this study informs the final design of our virtual reality system, which is subsequently evaluated in the actual screwdriver teleoperation experiment. Our experimental findings support the utility of polytopes for shared control teleoperation, but hint at the need for longer-term studies to garner their full benefits as virtual guides.
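The abstract does not give the polytope construction. A common way to build a velocity polytope is to map box joint-velocity limits through the manipulator Jacobian and take the convex hull of the images of the limit corners. The sketch below shows that basic construction; the Jacobian, the limits, and the omission of environmental constraints are all assumptions for illustration:

```python
import itertools
import numpy as np
from scipy.spatial import ConvexHull

def velocity_polytope(J, qd_limits):
    """Vertices of the end-effector velocity polytope (a zonotope).

    J: (3, n) translational Jacobian at the current configuration.
    qd_limits: (n,) symmetric joint-velocity bounds, |qd_i| <= qd_limits[i].
    Maps every extreme combination of joint velocities through J and
    takes the convex hull of the resulting task-space velocities.
    """
    corners = np.array(list(itertools.product(*[(-l, l) for l in qd_limits])))
    points = corners @ J.T        # (2^n, 3) task-space velocity extremes
    hull = ConvexHull(points)
    return points[hull.vertices]  # polytope vertices, e.g., for rendering in VR

# Illustrative 3-DoF Jacobian and joint-speed limits (invented values).
J = np.array([[0.1, 0.3, 0.0],
              [0.4, 0.0, 0.2],
              [0.0, 0.1, 0.3]])
print(velocity_polytope(J, np.array([1.0, 1.0, 1.0])))
```

The resulting vertex set can be drawn as a translucent mesh anchored at the end effector, which is one plausible way such a construct could be shown to an operator in VR.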

5.
Front Robot AI; 8: 550644, 2021.
Article in English | MEDLINE | ID: mdl-34222345

ABSTRACT

Nuclear energy will play a critical role in meeting clean energy targets worldwide. However, nuclear environments are dangerous for humans to operate in due to the presence of highly radioactive materials. Robots can help address this issue by allowing remote access to nuclear and other highly hazardous facilities under human supervision: they can perform inspection and maintenance tasks during normal operations, help with clean-up missions, and aid in decommissioning. This paper presents our research to help realize humanoid robots in supervisory roles in nuclear environments. Our research focuses on the National Aeronautics and Space Administration's (NASA) humanoid robot, Valkyrie, in the areas of constrained manipulation and motion planning, increasing stability using support contact, dynamic non-prehensile manipulation, locomotion on deformable terrains, and human-in-the-loop control interfaces.

6.
Intell Serv Robot; 13(1): 179-185, 2020 Jan.
Article in English | MEDLINE | ID: mdl-33312264

ABSTRACT

Upper limb and hand functionality is critical to many activities of daily living, and amputation can lead to significant loss of function. From this perspective, advanced prosthetic hands of the future are anticipated to benefit from improved shared control between the robotic hand and its human user, but more importantly from an improved capability to infer human intent from multimodal sensor data, giving the robotic hand perception of its operational context. Such multimodal sensor data may include various environment sensors, including vision, as well as human physiology and behavior sensors, including electromyography (EMG) and inertial measurement units (IMUs). A fusion methodology for environmental state and human intent estimation can combine these sources of evidence to help prosthetic hand motion planning and control. In this paper, we present a dataset of this type, gathered in anticipation of cameras being built into prosthetic hands, in which case computer vision methods will need to assess hand-view visual evidence to estimate human intent. Specifically, paired images from the human eye view and the hand view of various objects placed at different orientations were captured at the initial state of grasping trials, followed by paired video, EMG, and IMU data from the human's arm during a grasp, lift, put-down, and retract trial structure. For each trial, based on eye-view images of the scene showing the hand and object on a table, multiple humans were asked to rank, in decreasing order of preference, five grasp types appropriate for the object in its given configuration relative to the hand. The potential utility of paired eye-view and hand-view images was illustrated by training a convolutional neural network to process hand-view images and predict the eye-view labels assigned by humans.
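As a sketch of that last step, fine-tuning a pretrained image classifier to map hand-view images to the human-assigned top grasp label could look like the following; the backbone, input size, and training details are assumptions rather than the paper's exact setup:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_GRASPS = 5  # matches the five grasp types ranked by annotators

# Start from an ImageNet-pretrained backbone and replace the classifier
# head so it predicts one of the five grasp-type labels.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_GRASPS)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, grasp_labels):
    """images: (B, 3, 224, 224) hand-view batch; grasp_labels: (B,) class ids."""
    optimizer.zero_grad()
    loss = criterion(model(images), grasp_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```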

7.
Article in English | MEDLINE | ID: mdl-26737418

ABSTRACT

We posit that it is necessary to investigate the personalization of smart wheelchairs in three aspects: interfaces for interaction, controllers for action (top-level, middle-level, and low-level), and feedback in interaction. Our team was selected as an Innovation Corps (I-Corps) Team by the National Science Foundation to pursue customer discovery research exploring the commercial viability of smart wheelchairs. Through this process, our team has performed more than 110 interviews with powered wheelchair users, manufacturers, therapists, policy makers, and non-profit organization staff. Our findings revealed that user acceptance of fully autonomous systems remains challenging and highly dependent on the severity of the disability. Furthermore, cost, ease of use, and personalization are the most important factors in commercializing smart wheelchair technologies.


Subject(s)
Disabled Persons/psychology, Wheelchairs, Activities of Daily Living, Equipment Design, Humans, Interviews as Topic
8.
Article in English | MEDLINE | ID: mdl-26737419

ABSTRACT

We present preliminary results from the design process for the Worcester Polytechnic Institute's personal assistance robot, FRASIER, an intelligent service robot for enabling active aging. The robot's capabilities include vision-based object detection, user tracking, and help with carrying heavy items such as grocery bags or cafeteria trays. This work-in-progress report outlines our motivation and approach to developing the next generation of service robots for the elderly. Our main contribution in this paper is the development of a set of specifications based on the adopted user-centered design process, and the realization of a prototype system designed to meet these specifications.


Subject(s)
Robotics, Aged, Aging, Arthritis, Rheumatoid/physiopathology, Equipment Design, Humans, Male, User-Computer Interface
9.
Article in English | MEDLINE | ID: mdl-26737868

ABSTRACT

We describe the process of designing a safe, reliable, and intuitive emergency treatment unit to facilitate a higher degree of safety and situational awareness for medical staff, leading to an increased level of patient care during an epidemic outbreak in an unprepared, underdeveloped, or disaster-stricken area. We start with a human-centered design process to understand the design challenges of working with Ebola treatment units in Western Africa during the latest Ebola outbreak, and show preliminary work toward cyber-physical technologies that could help during the next outbreak.


Subject(s)
Cybernetics/methods, Hemorrhagic Fever, Ebola/therapy, Africa, Western, Algorithms, Decontamination, Disease Outbreaks/prevention & control, Humans, Robotics
10.
J Telemed Telecare; 20(1): 3-10, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24352900

ABSTRACT

We examined the feasibility of using a remotely manoeuvrable robot to make home hazard assessments for fall prevention. We employed use-case simulations to compare robot assessments with in-person assessments. We screened the homes of nine elderly patients (aged 65 years or more) for fall risks using the HEROS screening assessment, and also surveyed the participants' perspectives of the remotely operated robot. The nine patients had a median Short Blessed Test score of 8 (interquartile range, IQR 2-20) and a median Life-Space Assessment score of 46 (IQR 27-75). Compared to the in-person assessment (mean = 4.2 hazards identified per participant), significantly more home hazards were perceived in the robot video assessment (mean = 7.0). Only two checklist items (adequate bedroom lighting and a clear path from bed to bathroom) had more than 60% agreement between the in-person and robot video assessments. Participants were enthusiastic about the robot and did not think it violated their privacy. The study found little agreement between the in-person and robot video hazard assessments. However, it identified several research questions about how best to use remotely operated robots.


Subject(s)
Accidental Falls/prevention & control, Accidents, Home/prevention & control, Research Personnel/statistics & numerical data, Robotics/statistics & numerical data, Aged, Aged, 80 and over, Feasibility Studies, Female, Humans, Male, Reproducibility of Results, Risk Assessment/methods, Robotics/instrumentation, Robotics/methods
11.
Article in English | MEDLINE | ID: mdl-23366290

ABSTRACT

This paper evaluates existing taxonomies aimed at characterizing the interaction between robots and their users and modifies them for health care applications. The modifications are based on existing robot technologies and on user acceptance of robotics. Characterization of the user, in this case the patient, is a primary focus of the paper, as patients present a unique new role as robot users. While therapeutic and monitoring-related applications for robots are still relatively uncommon, we believe they will begin to grow, and it is therefore important that the emerging relationship between robot and patient is well understood.


Subject(s)
Delivery of Health Care/classification, Robotics/classification, User-Computer Interface, Humans