1.
Clin Exp Dent Res ; 10(5): e70001, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39308130

ABSTRACT

OBJECTIVES: Bruxism is a parafunctional orofacial behavior. For diagnosis, wearable devices that use sounds as biomarkers can provide the necessary information. Human beings emit a variety of verbal and nonverbal sounds, which makes it challenging to isolate bruxism-induced sounds. We therefore investigated whether the acoustic emissions of different oral behaviors have distinctive characteristics and whether the placement of the transducer affects the recorded sound signals.

MATERIAL AND METHODS: Sounds from five oral behaviors were investigated: jaw clenching, teeth grinding, reading, eating, and drinking. Eight transducers were used; six were attached to the temporal, frontal, and zygomatic bones with medical tape, and two were integrated into two commercial earphones. The data from 15 participants were analyzed using time-domain energy, spectral flux, and zero crossing rate (ZCR).

RESULTS: All oral behaviors showed distinct characteristic features except jaw clenching, although a peak, possibly caused by tooth tapping, appeared in the recording before its expected onset. For teeth grinding, transducer placement had no significant impact (p > 0.05) on energy, spectral flux, or ZCR. For jaw clenching, transducer placement had a significant impact on spectral flux (p < 0.01). For reading and eating, transducer placement had a significant impact on energy (p < 0.05 for reading, p < 0.01 for eating), spectral flux (p < 0.001 for reading, p < 0.01 for eating), and ZCR (p < 0.001 for both). For drinking, transducer placement had a significant impact only on ZCR (p < 0.01).

CONCLUSIONS: We were able to record the sounds of various oral behaviors from different locations on the head. However, the ears were an advantageous transducer location, since they compensated for various head movements and ear-worn devices are socially acceptable.


Subject(s)
Bruxism , Transducers , Wearable Electronic Devices , Humans , Female , Adult , Male , Bruxism/diagnosis , Bruxism/physiopathology , Young Adult , Eating/physiology , Drinking/physiology , Sound
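
The three signal features named in this abstract are standard in audio analysis. The following Python sketch shows one common way to compute them from framed audio; the frame length, hop size, and Hann windowing are illustrative assumptions, not parameters taken from the study.

    import numpy as np

    def frame_signal(x, frame_len=1024, hop=512):
        """Split a 1-D signal into overlapping frames (assumes len(x) >= frame_len)."""
        n_frames = (len(x) - frame_len) // hop + 1
        return np.stack([x[i * hop:i * hop + frame_len] for i in range(n_frames)])

    def short_time_energy(frames):
        """Time-domain energy per frame."""
        return np.sum(frames ** 2, axis=1)

    def zero_crossing_rate(frames):
        """Fraction of adjacent sample pairs whose sign changes within each frame."""
        signs = np.sign(frames)
        return np.mean(np.abs(np.diff(signs, axis=1)) > 0, axis=1)

    def spectral_flux(frames):
        """L2 distance between successive magnitude spectra (zero for the first frame)."""
        mags = np.abs(np.fft.rfft(frames * np.hanning(frames.shape[1]), axis=1))
        flux = np.sqrt(np.sum(np.diff(mags, axis=0) ** 2, axis=1))
        return np.concatenate([[0.0], flux])

Feeding each transducer channel through these functions yields the per-frame feature tracks on which a statistical comparison of placements can be based.
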
2.
Int J Comput Assist Radiol Surg ; 18(11): 1951-1959, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37296352

ABSTRACT

PURPOSE: Understanding the properties and capabilities of a robotic system is essential to a successful medical intervention, as each system is characterized by different capabilities and limits. Robot positioning is a crucial step in the surgical setup: it ensures proper reachability of the desired port locations and facilitates docking. This demanding task requires considerable experience to master, especially with multiple trocars, raising the barrier of entry for surgeons in training.

METHODS: Previously, we demonstrated an augmented reality-based system that visualizes the rotational workspace of the robotic system and showed that it helps the surgical staff optimize patient positioning for single-port interventions. In this work, we implemented a new algorithm that enables automatic, real-time robotic arm positioning for multiple ports.

RESULTS: Our system, based on the rotational workspace data of the robotic arm and the set of trocar locations, can calculate the optimal position of the robotic arm within milliseconds for the positional workspace and within seconds for the rotational workspace, in both virtual and augmented reality setups.

CONCLUSIONS: Building on our previous work, we extended the system to support multiple ports, covering a broader range of surgical procedures, and introduced an automatic positioning component. Our solution can decrease surgical setup time and eliminate the need to reposition the robot mid-procedure; it is suitable both for preoperative planning in VR and for use in the operating room, running on an AR headset.
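
The abstract does not spell out the positioning algorithm itself, so the Python sketch below only illustrates the general idea of scoring candidate arm base positions against a set of trocar locations. The annular reachability test and the tie-breaking heuristic are invented for illustration and stand in for the system's real workspace data.

    import numpy as np

    def reachable(base, trocar, r_min=0.3, r_max=0.9):
        """Crude reachability test: the trocar must lie within an annular
        shell around the robot base (placeholder for real workspace data)."""
        d = np.linalg.norm(trocar - base)
        return r_min <= d <= r_max

    def best_base_position(trocars, candidates):
        """Pick the candidate base position that reaches the most trocars,
        breaking ties by centering the trocars within the workspace."""
        def score(base):
            n_reached = sum(reachable(base, t) for t in trocars)
            spread = -np.std([np.linalg.norm(t - base) for t in trocars])
            return (n_reached, spread)
        return max(candidates, key=score)

    trocars = [np.array([0.4, 0.1, 0.2]), np.array([0.5, -0.1, 0.25])]
    candidates = [np.array([x, y, 0.0])
                  for x in np.linspace(-0.5, 0.5, 11)
                  for y in np.linspace(-0.5, 0.5, 11)]
    print("best base:", best_base_position(trocars, candidates))

An exhaustive grid search like this is trivially fast for a coarse candidate set, which is consistent with the millisecond-scale timing reported for the positional workspace.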

3.
IEEE Trans Haptics ; 14(2): 335-346, 2021.
Article in English | MEDLINE | ID: mdl-32986561

ABSTRACT

The handle design of telemanipulation master devices has not been studied extensively so far. However, the master device handle is an integral part of the robotic system through which the user interacts with it. Previous work showed that the size and shape of the functional rotational workspace of the human-robot system, as well as its usability, are influenced by the design of the master device handle. Still, in certain situations, e.g., due to user preference, a specific grasp-type handle might be desired. Therefore, in this article, we provide a systematic approach for assessing and adjusting the functional rotational workspace of a human-robot system. We investigated the functional rotational workspace with two exemplary grasp-type handles and two different mounting orientations for each handle. The results showed that by adapting the handle orientation in the home configuration of the telemanipulator, the functional rotational workspace of the human-robot system can be adjusted systematically to cover more of the mechanical workspace of the master device. Finally, we derive recommendations on how to choose and adjust a telemanipulator handle.


Subject(s)
Robotic Surgical Procedures , Robotics , Equipment Design , Hand Strength , Humans
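
A one-degree-of-freedom Python sketch of the core idea: shifting the handle mounting orientation in the home configuration so that the user's comfortable rotation range covers more of the device's mechanical range. The numeric ranges are invented for illustration; the real problem is of course three-dimensional.

    import numpy as np

    # Comfortable rotation range the user can reach with a given grasp, and the
    # mechanical limits of the master device (illustrative values, in radians).
    USER_RANGE = (-1.0, 1.2)
    DEVICE_RANGE = (-1.5, 1.5)

    def functional_range(mount_offset):
        """Overlap between the user's range (shifted by the handle mounting
        offset in the home configuration) and the device's mechanical range."""
        lo = max(USER_RANGE[0] + mount_offset, DEVICE_RANGE[0])
        hi = min(USER_RANGE[1] + mount_offset, DEVICE_RANGE[1])
        return max(0.0, hi - lo)

    # Sweep candidate mounting orientations and keep the best one.
    offsets = np.linspace(-0.8, 0.8, 81)
    best = max(offsets, key=functional_range)
    print(f"best mounting offset: {best:.2f} rad, "
          f"functional range: {functional_range(best):.2f} rad")
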
4.
Int J Comput Assist Radiol Surg ; 15(11): 1797-1805, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32959159

ABSTRACT

PURPOSE: We present a feasibility study on the visuo-haptic simulation of pedicle screw tract palpation in virtual reality, using an approach that requires no manual processing or segmentation of the volumetric medical data set.

METHODS: In a first experiment, we quantified the forces and torques present during the palpation of a pedicle screw tract in a real boar vertebra. We equipped a ball-tipped pedicle probe with a 6-axis force/torque sensor and a motion capture marker cluster. We simultaneously recorded the pose of the probe relative to the vertebra and measured the generated forces and torques during palpation. This allowed us to replay the recorded palpation movements in our simulator and to fine-tune the haptic rendering to approximate the measured forces and torques. In a second experiment, we asked two neurosurgeons to palpate a virtual version of the same vertebra in our simulator while we logged the forces and torques sent to the haptic device.

RESULTS: In the experiments with the real vertebra, the maximum force measured along the longitudinal axis of the probe was 7.78 N, and the maximum bending torque was 0.13 Nm. In an offline simulation of the probe motion recorded during the palpation of a real pedicle screw tract, our approach generated forces and torques similar in magnitude and progression to the measured ones. When surgeons tested our simulator, the distributions of the computed forces and torques resembled the measured ones; however, higher forces and torques occurred more frequently.

CONCLUSIONS: We demonstrated the suitability of direct visual and haptic volume rendering for simulating a specific surgical procedure. Our approach of fine-tuning the simulation by measuring the forces and torques prevalent during palpation of a real vertebra produced promising results.


Subject(s)
Computer Simulation , Pedicle Screws , Spinal Fusion/methods , Swine/surgery , Virtual Reality , Animals , Feasibility Studies , Male , Motion , Palpation , Simulation Training , Torque , User-Computer Interface
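
The fine-tuning step described above, replaying recorded probe motion and matching simulated forces to measured ones, can be illustrated with a toy Python sketch. The linear penetration-force model, the synthetic recording, and the single stiffness gain are all assumptions; the paper's actual haptic rendering operates directly on the volumetric data.

    import numpy as np

    def simulated_axial_force(depths, stiffness):
        """Toy penetration-based model: axial force proportional to probe tip
        penetration depth (stand-in for direct haptic volume rendering)."""
        return stiffness * np.maximum(depths, 0.0)

    def tune_stiffness(depths, measured_forces):
        """Least-squares fit of the stiffness gain so the replayed trajectory
        reproduces the measured forces as closely as possible."""
        d = np.maximum(depths, 0.0)
        return float(d @ measured_forces / (d @ d))

    # Replay a recorded palpation: depths and forces would come from the
    # motion-capture / force-torque recording (synthetic stand-ins here).
    depths = np.linspace(0.0, 0.004, 200)                          # metres
    measured = 1900.0 * depths + np.random.normal(0, 0.05, 200)    # ~7.6 N peak
    k = tune_stiffness(depths, measured)
    rmse = np.sqrt(np.mean((simulated_axial_force(depths, k) - measured) ** 2))
    print(f"fitted stiffness: {k:.0f} N/m, RMSE: {rmse:.3f} N")
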
5.
Sci Robot ; 4(27)2019 02 20.
Article in English | MEDLINE | ID: mdl-33137742

ABSTRACT

A multitude of robotic systems has been developed to foster motor learning. Some of these systems feature augmented visual or haptic feedback that is automatically adjusted to the trainee's performance. However, selecting the type of feedback to achieve the training goal has usually remained up to a human trainer. We automated this feedback selection within a robotic rowing simulator: four spatial errors and one velocity error were considered, all related to trunk-arm sweep rowing, which was set as the training goal to be learned. In an alternating sequence of assessments without augmented feedback and training sessions with augmented, concurrent feedback, the experimental group received the type of feedback that addressed the main shortcoming identified in the preceding assessment. With this approach, each participant in the experimental group received an individual sequence of 10 training sessions with feedback. The training sequences generated for the experimental group were then applied, in the same order, to participants in the control group. Both groups reduced their spatial and velocity errors through training, but the learning rate for the requested velocity profile was significantly higher in the experimental group than in the control group. Thus, our robotic rowing simulator accelerated motor learning through automated feedback selection. This demonstration of a working closed-loop selection of feedback types, i.e., training conditions, could serve as the basis for other robotic trainers incorporating further human expertise and artificial intelligence.
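
A minimal Python sketch of such a closed-loop selection rule: after each assessment, pick the feedback type that targets the currently worst error. The error names and normalization baselines below are illustrative assumptions, not the paper's actual definitions.

    # Hypothetical error labels for four spatial errors and one velocity error.
    def select_feedback(errors, baselines):
        """Return the error type with the largest normalized shortcoming."""
        normalized = {name: errors[name] / baselines[name] for name in errors}
        return max(normalized, key=normalized.get)

    assessment = {
        "oar_depth": 0.9, "oar_path_left": 1.4, "oar_path_right": 0.7,
        "trunk_angle": 1.1, "handle_velocity": 1.8,
    }
    reference = {name: 1.0 for name in assessment}  # expert-normalized baselines
    print("next training session targets:", select_feedback(assessment, reference))
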

6.
PLoS One ; 13(1): e0189275, 2018.
Article in English | MEDLINE | ID: mdl-29293512

ABSTRACT

BACKGROUND: Goal-directed reaching for real-world objects is enabled by visual depth cues. In virtual environments, the number and quality of available visual depth cues are limited, which may affect reaching performance and the quality of reaching movements.

METHODS: We assessed three-dimensional reaching movements in five experimental groups, each with ten healthy volunteers. Three groups used a two-dimensional computer screen, and two groups used a head-mounted display. The first screen group received the typically recreated visual depth cues, such as aerial and linear perspective, occlusion, shadows, and texture gradients. The second screen group received an abstract minimal rendering lacking these cues. The third screen group received the cues of the first screen group plus absolute depth cues enabled by the retinal image size of a known object, which were realized with visual renderings of the handheld device and a ghost handheld at the target location. The two head-mounted display groups received the same virtually recreated visual depth cues as the second and third screen groups, respectively. In addition, they could rely on stereopsis and on motion parallax due to head movements.

RESULTS AND CONCLUSION: All groups using the screen performed significantly worse than both groups using the head-mounted display in terms of completion time normalized by the straight-line distance to the target. Both head-mounted display groups achieved the optimal minimum in the number of speed peaks and in hand path ratio, indicating that our subjects performed natural movements when using a head-mounted display. Virtually recreated visual depth cues had only a minor impact on reaching performance; only the screen group with rendered handhelds outperformed the other screen groups. Thus, if reaching performance in virtual environments is the main scope of a study, we suggest using a head-mounted display. When two-dimensional screens are used, the achievable performance is likely limited by the reduced depth perception and not just by subjects' motor skills.


Subject(s)
Cues , Depth Perception , Hand/physiology , Virtual Reality , Adult , Biomechanical Phenomena , Female , Humans , Linear Models , Male , Psychomotor Performance , Task Performance and Analysis
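
The three outcome measures named in this abstract can be computed directly from a sampled hand trajectory. The Python sketch below assumes an unsmoothed speed profile and a simple local-maximum definition of a speed peak; the study's exact filtering and peak criteria are not given in the abstract.

    import numpy as np

    def reaching_metrics(positions, t, target):
        """From a sampled 3-D hand trajectory, compute: completion time
        normalized by the straight-line distance to the target, the number
        of speed peaks, and the hand path ratio (path length / distance)."""
        straight = np.linalg.norm(target - positions[0])
        norm_time = (t[-1] - t[0]) / straight

        deltas = np.diff(positions, axis=0)
        speed = np.linalg.norm(deltas, axis=1) / np.diff(t)
        # A speed peak is a local maximum of the speed profile.
        peaks = np.sum((speed[1:-1] > speed[:-2]) & (speed[1:-1] > speed[2:]))

        path_ratio = np.sum(np.linalg.norm(deltas, axis=1)) / straight
        return norm_time, int(peaks), path_ratio

A single-peaked speed profile and a path ratio near 1.0 correspond to the "optimal minimum" the head-mounted display groups achieved.
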
7.
Front Neurosci ; 11: 756, 2017.
Article in English | MEDLINE | ID: mdl-29375294

ABSTRACT

This paper presents a new approach to benchmarking brain-computer interfaces (BCIs) outside the lab. A computer game was created that mimics a real-world application of assistive BCIs, with the main outcome metric being the time needed to complete the game. This approach was used at the Cybathlon 2016, a competition for people with disabilities who use assistive technology to accomplish tasks. The paper summarizes the technical challenges of BCIs, describes the design of the benchmarking game, and then lays out the rules for acceptable hardware, software, and the inclusion of human pilots in the BCI competition at the Cybathlon. The 11 participating teams, their approaches, and their results at the Cybathlon are presented. Although the benchmarking procedure has some limitations (for instance, we were unable to identify any factors that clearly contribute to BCI performance), it can be used successfully to analyze BCI performance under realistic, less structured conditions. In the future, the parameters of the benchmarking game could be modified to better mimic different applications (e.g., the need to use some commands more frequently than others). Furthermore, the Cybathlon has the potential to showcase such devices to the general public.
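
Only the primary metric, the time needed to complete the game, is stated in the abstract. The Python sketch below scores a hypothetical race log and adds a command-accuracy diagnostic as an assumed secondary measure, purely to show what benchmarking such a run might look like.

    from dataclasses import dataclass

    @dataclass
    class Command:
        timestamp: float  # seconds since race start
        correct: bool     # whether the command matched the current game segment

    def benchmark(commands, finish_time):
        """Primary outcome: completion time. Secondary (assumed): accuracy."""
        accuracy = sum(c.correct for c in commands) / len(commands)
        return {"completion_time_s": finish_time, "command_accuracy": accuracy}

    log = [Command(2.1, True), Command(5.8, False), Command(9.4, True)]
    print(benchmark(log, finish_time=120.0))
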
