ABSTRACT
PURPOSE: To validate the feasibility of quantitative combined potassium (³⁹K) and sodium (²³Na) MRI of human calf muscle tissue and to evaluate the reproducibility of the determination of the apparent tissue potassium concentration (aTPC) and the apparent tissue sodium concentration (aTSC) in healthy muscle tissue.

METHODS: Quantitative ²³Na and ³⁹K MRI acquisition protocols were implemented on a 7 T MR system using a double-resonant ²³Na/³⁹K birdcage RF coil. Measurements of the human lower leg were performed with total acquisition times of TA = 10:54 min (²³Na) and TA = 8:06 min (³⁹K) and nominal spatial resolutions of 2.5 × 2.5 × 15 mm³ (²³Na) and 7.5 × 7.5 × 30 mm³ (³⁹K). Two aTSC and aTPC examinations of muscle tissue were performed on the same day in 10 healthy subjects.

RESULTS: The proposed acquisition and postprocessing workflow for ²³Na and ³⁹K MRI data sets provided reproducible aTSC and aTPC measurements. In human calf muscle tissue, the coefficient of variation between scan and re-scan was 5.7% for both the aTSC and the aTPC determination. Overall, mean values of aTSC = (17 ± 1) mM and aTPC = (85 ± 5) mM were measured. Moreover, for ³⁹K in calf muscle tissue, T2* components of T2f* = (1.2 ± 0.2) ms and T2s* = (7.9 ± 0.9) ms, as well as a residual quadrupolar interaction of ω̄q = (143 ± 17) Hz, were determined. The fraction of the fast component was f = (58 ± 4)%.

CONCLUSION: With the presented measurement and postprocessing approach, reproducible aTSC and aTPC determination using ²³Na and ³⁹K MRI at 7 T in human skeletal muscle tissue is feasible within clinically acceptable acquisition times.
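The abstract does not disclose the authors' fitting or quantification code. The following Python sketch only illustrates the two reported quantities under stated assumptions: a biexponential ³⁹K T2* fit (fast fraction f, T2f*, T2s*) and a scan/re-scan coefficient of variation (CoV) for aTSC/aTPC values. The model omits the residual quadrupolar interaction ω̄q, and the function names, starting values, and CoV definition (paired SD of two repeats) are illustrative assumptions, not the authors' method.

```python
# Minimal sketch (not the authors' pipeline): biexponential T2* fit for 39K
# and a scan/re-scan coefficient of variation (CoV) for aTSC/aTPC values.
# Assumption: echo-time-resolved magnitude signal S(TE) follows
#   S(TE) = A * (f * exp(-TE/T2f*) + (1 - f) * exp(-TE/T2s*)),
# a simplification that ignores the residual quadrupolar interaction.

import numpy as np
from scipy.optimize import curve_fit


def biexp(te_ms, amp, frac_fast, t2f_ms, t2s_ms):
    """Biexponential T2* decay with fast-component fraction frac_fast."""
    return amp * (frac_fast * np.exp(-te_ms / t2f_ms)
                  + (1.0 - frac_fast) * np.exp(-te_ms / t2s_ms))


def fit_t2star(te_ms, signal):
    """Fit fast/slow T2* components; returns (f, T2f*, T2s*) estimates."""
    p0 = [signal[0], 0.6, 1.0, 8.0]                  # rough starting values
    bounds = ([0, 0, 0.1, 1.0], [np.inf, 1, 5.0, 50.0])
    popt, _ = curve_fit(biexp, te_ms, signal, p0=p0, bounds=bounds)
    _, frac_fast, t2f, t2s = popt
    return frac_fast, t2f, t2s


def scan_rescan_cov(scan1, scan2):
    """Within-subject CoV (%) between scan and re-scan concentration values."""
    scan1, scan2 = np.asarray(scan1, float), np.asarray(scan2, float)
    pair_sd = np.abs(scan1 - scan2) / np.sqrt(2.0)   # SD of two repeats
    pair_mean = (scan1 + scan2) / 2.0
    return 100.0 * np.mean(pair_sd / pair_mean)


if __name__ == "__main__":
    # Synthetic example only; values loosely mirror the reported 39K results.
    te = np.linspace(0.3, 30.0, 40)                  # echo times in ms
    truth = biexp(te, 1.0, 0.58, 1.2, 7.9)
    noisy = truth + np.random.default_rng(0).normal(0, 0.01, te.size)
    print(fit_t2star(te, noisy))                     # ~ (0.58, 1.2 ms, 7.9 ms)
    print(scan_rescan_cov([17.2, 16.8, 17.5], [16.5, 17.6, 16.9]))
```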
Subject(s)
Magnetic Resonance Imaging, Potassium, Sodium, Humans, Muscle, Skeletal/diagnostic imaging, Reproducibility of Results

ABSTRACT
The concept of Human-Robot Collaboration (HRC) describes innovative industrial work procedures in which human staff work in close proximity to robots on a shared task. Current HRC scenarios often deploy hand-guided robots or remote controls operated by the human collaboration partner. As HRC envisions active collaboration between both parties, ongoing research efforts aim to enhance the capabilities of industrial robots not only in the technical dimension but also in the robot's socio-interactive features. Apart from enabling the robot to autonomously complete the shared task in conjunction with a human partner, one essential aspect carried over from group collaboration among humans is the communication between both entities. State-of-the-art research has identified communication as a significant contributor to successful collaboration between humans and industrial robots, and non-verbal gestures have been shown to contribute to conveying the state of the robot during the collaboration procedure. Research indicates that, depending on the viewing perspective, the use of non-verbal gestures by humans can influence the interpersonal attribution of certain characteristics. Applied to collaborative robots such as the Yumi IRB 14000, which is equipped with two arms specifically to mimic human actions, the perception of the robot's non-verbal behavior can affect the collaboration. Most important in this context are dominance-conveying gestures by the robot, which can reinforce negative attitudes towards robots and thus hamper the users' willingness to collaborate with the robot and the effectiveness of that collaboration. In a 3 × 3 within-subjects online study, we investigated the effect of three dominance gestures (arms akimbo, crossed arms, and a large arm spread) and three viewing perspectives (working in a standing position at an average male height, working in a standing position at an average female height, and working in a seated position) on the perceived dominance of the robot. Overall, 115 participants (58 female, 57 male) with an average age of 23 years evaluated nine videos of the robot. Results indicated that all presented gestures affect a person's perception of the robot with regard to its perceived characteristics and the willingness to cooperate with it. The data also showed that participants' attribution of dominance varied with the presented viewing perspective.
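The abstract does not describe the study software. As a purely illustrative sketch of the 3 × 3 within-subjects design, the snippet below enumerates the nine stimulus videos (three gestures × three viewing perspectives) and produces a per-participant presentation order; the condition labels and the assumption of a randomized order per participant are mine, not details from the study.

```python
# Illustrative sketch only: enumerate the 3 x 3 condition grid
# (dominance gesture x viewing perspective) and shuffle it per participant.

import itertools
import random

GESTURES = ["akimbo", "crossed_arms", "large_arm_spread"]
PERSPECTIVES = ["standing_male_height", "standing_female_height", "seated"]


def presentation_order(participant_id: int) -> list[tuple[str, str]]:
    """Return all nine gesture/perspective videos in a participant-specific order."""
    conditions = list(itertools.product(GESTURES, PERSPECTIVES))
    rng = random.Random(participant_id)   # reproducible per-participant shuffle
    rng.shuffle(conditions)
    return conditions


if __name__ == "__main__":
    for video in presentation_order(participant_id=1):
        print(video)
```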