1.
Front Robot AI ; 11: 1337380, 2024.
Article En | MEDLINE | ID: mdl-38646472

By supporting autonomy, aging in place, and wellbeing in later life, Socially Assistive Robots (SARs) are expected to help humanity face the challenges posed by the rapid aging of the world's population. For the successful acceptance and assimilation of SARs by older adults, it is necessary to understand the factors affecting their Quality Evaluations (QEs). Previous studies examining Human-Robot Interaction in later life indicated that three aspects shape older adults' overall QEs of robots: uses, constraints, and outcomes. However, studies were usually limited in duration, focused on acceptance rather than assimilation, and typically explored only one aspect of the interaction. In the present study, we examined uses, constraints, and outcomes simultaneously and over a long period. Nineteen community-dwelling older adults aged 75-97 were given a SAR for physical training for 6 weeks. Their experiences were documented via in-depth interviews conducted before and after the study period, short weekly telephone surveys, and reports produced by the robots. Analysis revealed two distinct groups: (A) the 'Fans', participants who enjoyed using the SAR, attributed added value to it, and experienced a successful assimilation process; and (B) the 'Skeptics', participants who did not like it, negatively evaluated its use, and experienced a disappointing assimilation process. Despite the vast differences between the groups, both reported more positive evaluations of SARs at the end of the study than before it began. Overall, the results indicated that the process of SAR assimilation is not homogeneous and provided a profound understanding of the factors shaping older adults' QEs of SARs following actual use. Additionally, the findings demonstrated the theoretical and practical usefulness of a holistic approach in researching older SAR users.

2.
Appl Ergon ; 118: 104269, 2024 Jul.
Article En | MEDLINE | ID: mdl-38490064

Mobile robotic telepresence systems require that information about the environment, the task, and the robot be presented to a remotely located user (operator) who controls the robot for a specific task. In this study, two interaction modes, proactive and reactive, which differ in the way the user receives information from the robot, were compared in an experimental system simulating a healthcare setting. The users controlled a mobile telepresence robot that delivered and received items (medication, food, or drink) and obtained metrics (vital signs) from a simulated patient, while performing a secondary healthcare-related task (compiling health records displayed on the screen and answering related questions). The effect of the two interaction modes on overall performance and user perception was evaluated through a within-participant study design conducted with 50 participants from two different populations (with and without a technological background). Efficiency, effectiveness, understanding, satisfaction, and situation awareness were defined as the dependent variables and were measured both objectively and subjectively. The proactive mode increased user performance and understanding of the system, and reduced the workload, compared to the reactive mode. However, several of the users valued the increased user control experienced in the reactive mode. We therefore proposed design suggestions that highlight some of the benefits of factoring the reactive mode into the design as a hybrid mode.


Robotics , Task Performance and Analysis , Telemedicine , Humans , Male , Female , Adult , Telemedicine/methods , Telemedicine/instrumentation , User-Computer Interface , Middle Aged , Young Adult , Workload
3.
Plant Phenomics ; 6: 0132, 2024.
Article En | MEDLINE | ID: mdl-38230354

Image-based root phenotyping technologies, including the minirhizotron (MR), have expanded our understanding of in situ root responses to changing environmental conditions. The conventional manual methods used to analyze MR images are time-consuming, limiting their implementation. This study presents an adaptation of our previously developed convolutional neural network-based models to estimate the total (cumulative) root length (TRL) per MR image without requiring segmentation. Training data were derived from manual annotations in Rootfly, a commonly used software package for MR image analysis. We compared TRL estimation with 2 models, a regression-based model and a detection-based model that detects the annotated points along the roots. Notably, the detection-based model can assist in examining human annotations by providing a visual inspection of roots in MR images. The models were trained and tested with 4,015 images acquired using 2 MR system types (manual and automated) and from 4 crop species (corn, pepper, melon, and tomato) grown under various abiotic stresses. These datasets are made publicly available as part of this publication. The coefficients of determination (R2) between the measurements made using Rootfly and the suggested TRL estimation models were 0.929 to 0.986 for the main datasets, demonstrating that this tool is accurate and robust. Additional analyses were conducted to examine the effects of (a) the data acquisition system, and thus the image quality, on the models' performance, (b) automated differentiation between images with and without roots, and (c) the use of the transfer learning technique. These approaches can support precision agriculture by providing real-time root growth information.

4.
Appl Ergon ; 106: 103859, 2023 Jan.
Article En | MEDLINE | ID: mdl-36081185

This paper focuses on how the autonomy level of an assistive robot that offers support for older adults in a daily task and its feedback affect the interaction. Identifying the level of automation (LOA) that prioritizes older adults' preferences while avoiding passiveness and sedentariness is challenging. The feedback mode should match the cognitive and perceptual capabilities of older adults and the LOA. We characterized three LOAs and paired them with two modes of feedback in a human-robot collaborative task. Twenty-seven older adults participated in evaluating the LOA-feedback variations in a mixed experimental design, utilizing an experimental setup of an assistive robot in a table clearing task. The quality of the interaction was evaluated with objective and subjective measures. The combination of high LOA with voice feedback improved the overall interaction when compared to other LOA and feedback combinations. This study emphasizes the importance of appropriate coupling of LOA and feedback for successful interaction of the older adults with an assistive robot.


Robotic Surgical Procedures , Robotics , Self-Help Devices , Humans , Aged , Feedback , Self-Help Devices/psychology
5.
Int J Soc Robot ; 14(8): 1805-1820, 2022.
Article En | MEDLINE | ID: mdl-35996386

We studied politeness in human-robot interaction based on Lakoff's politeness theory. In a series of eight studies, we manipulated three different levels of politeness of non-humanoid robots and evaluated their effects. A table-setting task was developed for two different types of robots (a robotic manipulator and a mobile robot). The studies included two different populations (old and young adults) and were conducted in two conditions (video and live). Results revealed that polite robot behavior positively affected users' perceptions of the interaction with the robots and that participants were able to differentiate between the designed politeness levels. Participants reported higher levels of enjoyment, satisfaction, and trust when they interacted with the politest behavior of the robot. Fewer young adults than old adults trusted the politest behavior of the robot. Enjoyment of and trust in the interaction with the robot were higher in the live condition than in the video condition, and participants were more satisfied when they interacted with a mobile robot than with a manipulator.

6.
Sensors (Basel) ; 22(9)2022 May 08.
Article En | MEDLINE | ID: mdl-35591275

The agricultural industry is facing a serious threat from plant diseases that cause production and economic losses. Early information on disease development can improve disease control through suitable management strategies. This study sought to detect downy mildew (Peronospora) on grapevine (Vitis vinifera) leaves at early stages of development using thermal imaging technology and to determine the best time of day for image acquisition. In controlled experiments, 1587 thermal images of grapevines grown in a greenhouse were acquired around midday, before inoculation and 1, 2, 4, 5, 6, and 7 days after inoculation. In addition, images of healthy and infected leaves were acquired at seven different times of day between 7:00 a.m. and 4:30 p.m. Leaves were segmented using the active contour algorithm. Twelve features were derived from the leaf mask and from meteorological measurements. Stepwise logistic regression revealed five significant features, which were used in five classification models. Performance was evaluated using K-fold cross-validation. The support vector machine model produced the best classification accuracy of 81.6%, with an F1 score of 77.5% and an area under the curve (AUC) of 0.874. Acquiring images in the morning between 10:40 a.m. and 11:30 a.m. resulted in 80.7% accuracy, 80.5% F1 score, and 0.895 AUC.
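The accuracy, F1 score, and AUC reported above can be computed directly from a classifier's raw scores. A minimal sketch with hypothetical toy data (not the study's), using the standard rank-based definition of AUC:

```python
import numpy as np

def f1_score(y_true, y_pred):
    # F1 = harmonic mean of precision and recall over the positive class
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def auc_score(y_true, scores):
    # rank-based AUC: probability that a random positive sample
    # receives a higher score than a random negative sample
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy example: 3 infected (1) and 2 healthy (0) leaves
y_true = np.array([1, 1, 1, 0, 0])
scores = np.array([0.9, 0.8, 0.4, 0.6, 0.2])  # hypothetical classifier confidence
y_pred = (scores >= 0.5).astype(int)

print(f1_score(y_true, y_pred))   # 2/3
print(auc_score(y_true, scores))  # 5/6
```

In practice such metrics would be averaged over the K folds of the cross-validation described in the abstract.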


Oomycetes , Peronospora , Vitis , Disease Resistance , Plant Diseases
7.
Int J Soc Robot ; 14(4): 995-1012, 2022.
Article En | MEDLINE | ID: mdl-35079297

This paper investigates humans' preferences for a robot's eye gaze behavior during human-to-robot handovers. We studied gaze patterns for all three phases of the handover process: reach, transfer, and retreat, as opposed to previous work, which focused only on the reaching phase. Additionally, we investigated whether the object's size or fragility or the human's posture affects the human's preferences for the robot's gaze. A public dataset of human-human handovers was analyzed to obtain the most frequent gaze behaviors that human receivers perform. These were then used to program the robot's receiver gaze behaviors. In two sets of user studies (video and in-person), a collaborative robot exhibited these gaze behaviors while receiving an object from a human. In the video studies, 72 participants watched and compared videos of handovers between a human actor and a robot demonstrating each of the three gaze behaviors. In the in-person studies, a different set of 72 participants physically performed object handovers with the robot and evaluated their perception of the handovers for the robot's different gaze behaviors. Results showed that, for both observers and participants in a handover, when the robot exhibited Face-Hand-Face gaze (gazing at the giver's face and then at the giver's hand during the reach phase and back at the giver's face during the retreat phase), participants considered the handover to be more likable, anthropomorphic, and communicative of timing (p < 0.0001). However, we did not find evidence of any effect of the object's size or fragility or the giver's posture on gaze preference.

8.
Int J Soc Robot ; 13(5): 1109-1124, 2021.
Article En | MEDLINE | ID: mdl-33020706

Physical exercise has many physical, psychological, and social health benefits, leading to improved quality of life. This paper presents a robotic system developed as a personal coach aiming to motivate older adults to participate in physical activities. The robot instructs the participants, demonstrates the exercises, and provides real-time corrective and positive feedback according to the participant's performance as monitored by an RGB-D camera. Two robotic systems based on two different humanoid robots (Nao, toy-like, and Poppy, mechanical-like) were developed and implemented using the Python programming language. Experimental studies with 32 older adults were conducted to determine the preferable mode and timing of the feedback provided to the user, in order to accommodate user preferences, motivate the users, and improve their interaction with the system. Additionally, user preferences with regard to the two different humanoid robots were explored. The results revealed that the system motivated the older adults to engage more in physical exercises. The type and timing of feedback influenced this engagement. Most of these older adults also perceived the system as very useful and easy to use, had a positive attitude towards the system, and noted their intention to use it. Most users preferred the more mechanical-looking robot (Poppy) over the toy-like robot (Nao).

9.
Sensors (Basel) ; 20(13)2020 Jul 06.
Article En | MEDLINE | ID: mdl-32640557

The effect of camera viewpoint and fruit orientation on the performance of a sweet pepper maturity level classification algorithm was evaluated. Image datasets of sweet peppers harvested from a commercial greenhouse were collected using two different methods, resulting in 789 RGB (Red, Green, Blue) images acquired in a photocell and 417 RGB-D (Red, Green, Blue-Depth) images acquired by a robotic arm in the laboratory, which are published as part of this paper. Maturity level classification was performed using a random forest algorithm. Classifications of maturity level from different camera viewpoints, using a combination of viewpoints, and for different fruit orientations on the plant were evaluated and compared to manual classification. Results revealed that: (1) the bottom viewpoint is the best single viewpoint for maturity level classification accuracy; (2) information from two viewpoints increases the classification accuracy by 25% and 15% compared to a single viewpoint for red and yellow peppers, respectively; and (3) classification performance is highly dependent on the fruit's orientation on the plant.

10.
Assist Technol ; 32(2): 79-91, 2020.
Article En | MEDLINE | ID: mdl-29944466

This paper presents the overall design of a prototype home-based system aimed at reducing the sedentary behavior of older adults. Quantitative performance indicators were developed to measure the sedentary behavior and daily activities of an older adult. The sedentary behavior is monitored by identifying individual positions (standing, sitting, and lying) within the field of view of a Microsoft Kinect sensor, using a custom-designed algorithm. The physical activity of the older adult when outside the field of view of the Microsoft Kinect sensor is monitored by counting steps using a Fitbit Charge HR watch with which the older adult is equipped. A user interface was developed on a PC platform to interact with the older adult. The user interface is automatically operated and includes several modules. It displays the activity level and provides feedback, alerts, and reminders to reduce sedentary behavior. Evaluations using a mixed-methods approach that included a focus group, interviews, and observations were conducted to examine the integrated system, evaluate the users' experience with the system, and compare different types of feedback and alerts. The analyses indicated the feasibility of the proposed SIT LESS system and yielded recommendations for improving the system in future research.


Monitoring, Ambulatory/methods , Sedentary Behavior , Activities of Daily Living , Aged , Aged, 80 and over , Exercise , Feedback , Female , Humans , Independent Living , Male , Monitoring, Ambulatory/instrumentation , Posture , Sitting Position , User-Computer Interface
11.
Sensors (Basel) ; 19(9)2019 May 08.
Article En | MEDLINE | ID: mdl-31071989

This paper presents an automatic parameter tuning procedure specially developed for a dynamic adaptive thresholding algorithm for fruit detection. One of the algorithm's major strengths is its high detection performance using a small set of training images. The algorithm enables robust detection in highly variable lighting conditions. The image is dynamically split into variably sized regions, where each region has approximately homogeneous lighting conditions. Nine thresholds were selected to accommodate three different illumination levels for three different dimensions in four color spaces: RGB, HSI, LAB, and NDI. Each color space uses a different method to represent a pixel in an image: RGB (Red, Green, Blue), HSI (Hue, Saturation, Intensity), LAB (Lightness, Green to Red, and Blue to Yellow), and NDI (Normalized Difference Index, which represents the normalized difference between the RGB color dimensions). The thresholds were selected by quantifying the required relation between the true positive rate and false positive rate. A tuning process was developed to determine the best-fit values of the algorithm parameters to enable easy adaptation to different kinds of fruits (shapes, colors) and environments (illumination conditions). Extensive analyses were conducted on three different databases acquired in natural growing conditions: red apples (nine images with 113 apples), green grape clusters (129 images with 1078 grape clusters), and yellow peppers (30 images with 73 peppers). These databases are provided as part of this paper for future developments. The algorithm was evaluated using cross-validation with 70% of the images for training and 30% for testing. The algorithm successfully detected apples and peppers in variable lighting conditions, resulting in F-scores of 93.17% and 99.31%, respectively. Results show the importance of the tuning process for the generalization of the algorithm to different kinds of fruits and environments.
In addition, this research revealed the importance of evaluating different color spaces, since for each kind of fruit a different color space might be superior to the others. The LAB color space was the most robust to noise. The algorithm is robust to changes in the threshold learned by the training process and to noise effects in images.
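The NDI dimension used above is commonly computed per pixel as a normalized difference of two RGB channels. A sketch under the assumption NDI = (G - R) / (G + R), one frequent variant (the exact channel pair is an assumption, not taken from the paper), together with a simple per-region threshold test:

```python
import numpy as np

def ndi(rgb):
    """Normalized Difference Index per pixel, assumed here as (G - R) / (G + R)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    denom = r + g
    denom[denom == 0] = 1.0  # guard against division by zero on black pixels
    return (g - r) / denom

def in_threshold(channel, lo, hi):
    # binary mask for one of the (illumination level x dimension) thresholds
    return (channel >= lo) & (channel <= hi)

# toy 1x2 image: a greenish pixel and a reddish pixel
img = np.array([[[50, 150, 0], [200, 100, 0]]])
print(ndi(img))  # [[ 0.5  -0.333...]]
mask = in_threshold(ndi(img), 0.0, 1.0)  # keeps only the greenish pixel
```

In the algorithm described above, such a threshold would be applied per region, with the (lo, hi) pair selected by the tuning process for that region's illumination level.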


Algorithms , Fruit/anatomy & histology , Automation , Capsicum/anatomy & histology , Color , Databases as Topic , Image Processing, Computer-Assisted , Malus/anatomy & histology , ROC Curve , Vitis/anatomy & histology
12.
Sensors (Basel) ; 19(6)2019 Mar 21.
Article En | MEDLINE | ID: mdl-30901837

Current harvesting robots are limited by low detection rates due to the unstructured and dynamic nature of both the objects and the environment. State-of-the-art algorithms include color- and texture-based detection, which are highly sensitive to illumination conditions. Deep learning algorithms promise robustness, at the cost of significant computational resources and the requirement for extensive databases. In this paper we present a Flash-No-Flash (FNF) controlled illumination acquisition protocol that frees the system from most ambient illumination effects and facilitates robust target detection while using only modest computational resources and no supervised training. The approach relies on the simultaneous acquisition of two images, with and without strong artificial lighting ("Flash"/"No-Flash"). The difference between these images represents the appearance of the target scene as if only the artificial light were present, allowing tight control over ambient light for color-based detection. A performance evaluation database was acquired in greenhouse conditions using an eye-in-hand RGB camera mounted on a robotic manipulator. The database includes 156 scenes with 468 images containing a total of 344 yellow sweet peppers. The performance of both color-blob and deep-learning detection algorithms is compared on Flash-only and FNF images. The collected database is made public.

13.
Sensors (Basel) ; 18(3)2018 Mar 02.
Article En | MEDLINE | ID: mdl-29498683

Multi-sensor systems can play an important role in monitoring tasks and detecting targets. However, real-time allocation of heterogeneous sensors to dynamic targets/tasks whose locations and priorities are unknown a priori is a challenge. This paper presents a Modified Distributed Bees Algorithm (MDBA), developed to allocate stationary heterogeneous sensors to upcoming unknown tasks using a decentralized, swarm intelligence approach so as to minimize task detection times. Sensors are allocated to tasks based on the sensors' performance, the tasks' priorities, and the distances of the sensors from the locations where the tasks are being executed. The algorithm was compared to the Distributed Bees Algorithm (DBA), a Bees System, and two common multi-sensor algorithms, market-based and greedy-based, which were fitted to the specific task. Simulation analyses revealed that MDBA achieved a statistically significant performance improvement of 7% with respect to DBA, the second-best algorithm, and of 19% with respect to the greedy algorithm, which performed worst, indicating its fitness for providing solutions for heterogeneous multi-sensor systems.
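In distributed-bees-style allocation, each sensor typically selects a task with probability proportional to a utility combining the factors named above (performance, priority, distance). A hypothetical sketch; the utility form and its numbers are assumptions for illustration, not the paper's exact formula:

```python
import numpy as np

def allocation_probabilities(performance, priority, distance):
    """Probability that one sensor selects each candidate task (DBA-style).

    Assumed utility: (sensor performance on task) * (task priority) / (distance to task),
    normalized into a probability distribution over tasks.
    """
    utility = performance * priority / distance
    return utility / utility.sum()

# one sensor, two candidate tasks (hypothetical numbers)
performance = np.array([1.0, 1.0])  # detection quality per task
priority = np.array([2.0, 1.0])     # task priorities
distance = np.array([1.0, 2.0])     # sensor-to-task distances

p = allocation_probabilities(performance, priority, distance)
print(p)  # [0.8 0.2]
```

Sampling tasks from such a distribution, rather than always taking the argmax, is what gives the swarm approach its decentralized, exploratory character.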

14.
Restor Neurol Neurosci ; 36(2): 261-274, 2018.
Article En | MEDLINE | ID: mdl-29526862

BACKGROUND: Effective human-robot interaction in rehabilitation necessitates an understanding of how it should be tailored to the needs of the human. We report on a robotic system developed as a partner in a 3-D everyday task, using a gamified approach. OBJECTIVES: To: (1) design and test a prototype system, to be ultimately used for upper-limb rehabilitation; (2) evaluate how age affects the response to such a robotic system; and (3) identify whether the robot's physical embodiment is an important aspect in motivating users to complete a set of repetitive tasks. METHODS: 62 healthy participants, young (<30 yo) and old (>60 yo), played a 3D tic-tac-toe game against an embodied partner (a robotic arm) and a non-embodied partner (a computer-controlled lighting system). To win, participants had to place three cups in sequence on a physical 3D grid. Cup picking-and-placing was chosen as a functional task that is often practiced in post-stroke rehabilitation. Movement of the participants was recorded using a Kinect camera. RESULTS: The timing of the participants' movement was primed by the response time of the system: participants moved more slowly when playing with the slower embodied system (p = 0.006). The majority of participants preferred the robot over the computer-controlled system. The slower response time of the robot compared to the computer-controlled system affected only the young group's motivation to continue playing. CONCLUSION: We demonstrated the feasibility of the system to encourage the performance of repetitive 3D functional movements and to track these movements. Young and old participants preferred to interact with the robot, compared with the non-embodied system.
We contribute to the growing knowledge concerning personalized human-robot interaction by (1) demonstrating the priming of the human movement by the robotic movement, an important design feature, and (2) identifying response speed as a design variable whose importance depends on the age of the user.


Aging/physiology , Exercise/physiology , Exercise/psychology , Movement/physiology , Robotics , Upper Extremity/physiology , Adult , Aged , Aged, 80 and over , Algorithms , Biomechanical Phenomena , Equipment Design , Female , Games, Experimental , Healthy Volunteers , Humans , Male , Middle Aged , Physical Education and Training , Psychomotor Performance/physiology , Range of Motion, Articular/physiology , Surveys and Questionnaires , Young Adult
15.
Appl Ergon ; 62: 237-246, 2017 Jul.
Article En | MEDLINE | ID: mdl-28411734

Teleoperation of an agricultural robotic system requires effective and efficient human-robot interaction. This paper investigates the usability of different interaction modes for agricultural robot teleoperation. Specifically, we examined the overall influence of two types of output devices (PC screen, head-mounted display), two types of peripheral vision support mechanisms (single view, multiple views), and two types of control input devices (PC keyboard, PS3 gamepad) on the observed and perceived usability of a teleoperated agricultural sprayer. A modular user interface for teleoperating an agricultural robot sprayer was constructed and field-tested. The evaluation included eight interaction modes: all combinations of the three factors. Thirty representative participants used each interaction mode to navigate the robot along a vineyard and spray grape clusters, based on a 2 × 2 × 2 repeated-measures experimental design. Objective metrics of the effectiveness and efficiency of the human-robot collaboration were collected. Participants also completed questionnaires related to their user experience with the system in each interaction mode. Results show that the most important factor for human-robot interface usability is the number and placement of views. The type of robot control input device was also a significant factor for certain dependent variables, whereas the effect of the screen output type was only significant for the participants' perceived workload index. Specific recommendations for mobile field robot teleoperation to improve HRI awareness for the agricultural spraying task are presented.


Agriculture/instrumentation , Man-Machine Systems , Robotics , User-Computer Interface , Adult , Aged , Computer Terminals , Consumer Behavior , Data Display , Female , Humans , Male , Middle Aged , Surveys and Questionnaires , Task Performance and Analysis , Workload
16.
Appl Ergon ; 58: 527-534, 2017 Jan.
Article En | MEDLINE | ID: mdl-27181096

This research aims to evaluate new methods for robot motion control and camera orientation control through the operator's head orientation in robot teleoperation tasks. Specifically, the use of head tracking in a non-invasive way, without immersive virtual reality devices, was combined and compared with classical control modes for robot movements and camera control. Three control conditions were tested: (1) classical joystick control of both the movements of the robot and the robot camera; (2) robot movements controlled by a joystick and the robot camera controlled by the user's head orientation; and (3) movements of the robot controlled by hand gestures and the robot camera controlled by the user's head orientation. Performance and workload metrics, and their evolution as the participants gained experience with the system, were evaluated in a series of experiments: for each participant, the metrics were recorded during four successive similar trials. Results show that the concept of robot camera control by user head orientation has the potential to improve the intuitiveness of robot teleoperation interfaces, specifically for novice users. However, more development is needed to reach a margin of progression comparable to that of a classical joystick interface.


Man-Machine Systems , Motion , Robotics/methods , Adult , Female , Gestures , Head/physiology , Heart Rate , Humans , Male , Movement , Robotics/instrumentation , Task Performance and Analysis , User-Computer Interface , Workload , Young Adult
17.
J Dairy Sci ; 99(9): 7714-7725, 2016 Sep.
Article En | MEDLINE | ID: mdl-27320661

Body condition scoring (BCS) is a farm-management tool for estimating dairy cows' energy reserves. Today, BCS is performed manually by experts. This paper presents a 3-dimensional algorithm that provides a topographical understanding of the cow's body to estimate BCS. An automatic BCS system consisting of a Kinect camera (Microsoft Corp., Redmond, WA) triggered by a passive infrared motion detector was designed and implemented. Image processing and regression algorithms were developed and included the following steps: (1) image restoration, the removal of noise; (2) object recognition and separation, identification and separation of the cows; (3) movie and image selection, selection of movies and frames that include the relevant data; (4) image rotation, alignment of the cow parallel to the x-axis; and (5) image cropping and normalization, removal of irrelevant data, setting the image size to 150×200 pixels, and normalizing image values. All steps were performed automatically, including image selection and classification. Fourteen individual features per cow, derived from the cows' topography, were automatically extracted from the movies and from the farm's herd-management records. These features appear to be measurable in a commercial farm. Manual BCS was performed by a trained expert and compared with the output of the training set. A regression model was developed, correlating the features with the manual BCS references. Data were acquired for 4 d, resulting in a database of 422 movies of 101 cows. Movies containing cows' back ends were automatically selected (389 movies). The data were divided into a training set of 81 cows and a test set of 20 cows; both sets included the identical full range of BCS classes. Accuracy tests gave a mean absolute error of 0.26, median absolute error of 0.19, and coefficient of determination of 0.75, with 100% correct classification within 1 step and 91% correct classification within a half step for BCS classes. 
Results indicated good repeatability, with all standard deviations under 0.33. The algorithm is independent of the background and requires 10 cows for training with approximately 30 movies of 4 s each.
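The agreement figures reported above (mean and median absolute error, plus within-step classification rates) follow directly from the paired manual and automatic scores. A sketch assuming the conventional 0.25-unit BCS step (the step size and the toy scores are assumptions for illustration):

```python
import numpy as np

def bcs_agreement(manual, predicted, step=0.25):
    """Agreement metrics between manual and automatic BCS (step assumed to be 0.25)."""
    err = np.abs(np.asarray(manual, float) - np.asarray(predicted, float))
    return {
        "mae": float(np.mean(err)),                         # mean absolute error
        "median_ae": float(np.median(err)),                 # median absolute error
        "within_half_step": float(np.mean(err <= step / 2)),
        "within_one_step": float(np.mean(err <= step)),
    }

# hypothetical manual vs. automatic scores for four cows
manual = [3.0, 3.25, 2.75, 3.5]
predicted = [3.0, 3.5, 2.75, 3.25]
m = bcs_agreement(manual, predicted)
print(m["mae"], m["within_one_step"])  # 0.125 1.0
```

Reporting both the mean and the median absolute error, as the paper does, guards against a few large outlier errors dominating the summary.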


Automation/instrumentation , Cattle/physiology , Dairying/methods , Imaging, Three-Dimensional/veterinary , Algorithms , Animals , Female , Imaging, Three-Dimensional/instrumentation , Imaging, Three-Dimensional/methods
18.
Sensors (Basel) ; 15(8): 20845-62, 2015 Aug 21.
Article En | MEDLINE | ID: mdl-26308000

Image registration is the process of aligning two or more images of the same scene taken at different times, from different viewpoints, and/or by different sensors. This research focuses on developing a practical method for automatic image registration for agricultural systems that use multimodal sensory systems and operate in natural environments. While not limited to any particular modalities, here we focus on systems with visual and thermal sensory inputs. Our approach is based on pre-calibrating a distance-dependent transformation matrix (DDTM) between the sensors, and representing it in a compact way by regressing the distance-dependent coefficients as distance-dependent functions. The DDTM is measured by calculating a projective transformation matrix for varying distances between the sensors and possible targets. To do so, we designed a unique experimental setup including unique Artificial Control Points (ACPs) and their detection algorithms for the two sensors. We demonstrate the utility of our approach using different experiments and evaluation criteria.

19.
Appl Ergon ; 42(6): 820-9, 2011 Nov.
Article En | MEDLINE | ID: mdl-21376306

An in-depth evaluation of the usability and situation awareness performance of different displays and destination controls for robots is presented. In two experiments, we evaluate the way information is presented to the operator and assess different means for controlling the robot. Our study compares three types of displays: a "blocks" display, a HUD (head-up display), and a radar display, and two types of controls: touch screen and hand gestures. The HUD demonstrated better performance when compared to the blocks display and was perceived to have greater usability compared to the radar display. The HUD was also found to be more useful when the operation of the robot was more difficult, i.e., when using the hand-gesture method. The experiments also pointed to the importance of using a wide viewing angle to minimize distortion and to ease the difficulty of locating objects at the margins of the field of view. The touch screen was found to be superior in terms of both objective performance and perceived usability. No differences were found between the displays and the controllers in terms of situation awareness. This research sheds light on the preferred display type and control method for operating robots from a distance, making it easier to cope with the challenges of operating such systems.


Robotics/methods , Adult , Awareness , Data Display , Ergonomics , Female , Humans , Male , Remote Sensing Technology , Robotics/standards , Surveys and Questionnaires , Task Performance and Analysis , User-Computer Interface , Young Adult
20.
J Am Med Inform Assoc ; 15(3): 321-3, 2008.
Article En | MEDLINE | ID: mdl-18451034

The use of doctor-computer interaction devices in the operation room (OR) requires new modalities that support medical imaging manipulation while allowing doctors' hands to remain sterile, supporting their focus of attention, and providing fast response times. This paper presents "Gestix," a vision-based hand gesture capture and recognition system that interprets in real-time the user's gestures for navigation and manipulation of images in an electronic medical record (EMR) database. Navigation and other gestures are translated to commands based on their temporal trajectories, through video capture. "Gestix" was tested during a brain biopsy procedure. In the in vivo experiment, this interface prevented the surgeon's focus shift and change of location while achieving a rapid intuitive reaction and easy interaction. Data from two usability tests provide insights and implications regarding human-computer interaction based on nonverbal conversational modalities.


Gestures , Radiology , User-Computer Interface , Consumer Behavior , Equipment Contamination/prevention & control , Humans , Imaging, Three-Dimensional , Magnetic Resonance Imaging , Man-Machine Systems , Medical Records Systems, Computerized , Neurosurgery/instrumentation , Radiology/instrumentation , Radiology Information Systems