1.
Eur Urol Focus ; 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37923632

ABSTRACT

BACKGROUND: Artificial intelligence (AI) has the potential to enhance diagnostic accuracy and improve treatment outcomes. However, AI integration into clinical workflows and patient perspectives remain unclear. OBJECTIVE: To determine patients' trust in AI and their perception of urologists relying on AI, and future diagnostic and therapeutic AI applications for patients. DESIGN, SETTING, AND PARTICIPANTS: A prospective trial was conducted involving patients who received diagnostic or therapeutic interventions for prostate cancer (PC). INTERVENTION: Patients were asked to complete a survey before magnetic resonance imaging, prostate biopsy, or radical prostatectomy. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS: The primary outcome was patient trust in AI. Secondary outcomes were the choice of AI in treatment settings and traits attributed to AI and urologists. RESULTS AND LIMITATIONS: Data for 466 patients were analyzed. The cumulative affinity for technology was positively correlated with trust in AI (correlation coefficient 0.094; p = 0.04), whereas patient age, level of education, and subjective perception of illness were not (p > 0.05). The mean score (± standard deviation) for trust in capability was higher for physicians than for AI for responding in an individualized way when communicating a diagnosis (4.51 ± 0.76 vs 3.38 ± 1.07; mean difference [MD] 1.130, 95% confidence interval [CI] 1.010-1.250; t(924) = 18.52, p < 0.001; Cohen's d = 1.040) and for explaining information in an understandable way (4.57 ± vs 3.18 ± 1.09; MD 1.392, 95% CI 1.275-1.509; t(921) = 27.27, p < 0.001; Cohen's d = 1.216). Patients stated that they had higher trust in a diagnosis made by AI controlled by a physician versus AI not controlled by a physician (4.31 ± 0.88 vs 1.75 ± 0.93; MD 2.561, 95% CI 2.444-2.678; t(925) = 42.89, p < 0.001; Cohen's d = 2.818). AI-assisted physicians (66.74%) were preferred over physicians alone (29.61%), physicians controlled by AI (2.36%), and AI alone (0.64%) for treatment in the current clinical scenario. CONCLUSIONS: Trust in future diagnostic and therapeutic AI-based treatment relies on optimal integration with urologists as the human-machine interface to leverage human and AI capabilities. PATIENT SUMMARY: Artificial intelligence (AI) will play a role in diagnostic decisions in prostate cancer in the future. At present, patients prefer AI-assisted urologists over urologists alone, AI alone, and AI-controlled urologists. Specific traits of AI and urologists could be used to optimize diagnosis and treatment for patients with prostate cancer.
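As a quick check on the effect sizes reported above, the following sketch recomputes Cohen's d for the physician-controlled vs. uncontrolled AI comparison from the published means and standard deviations. It is not taken from the study: it assumes the common equal-n pooled-SD variant of Cohen's d, so other comparisons (e.g., the paired ones) may deviate slightly from the reported values due to rounding or a different variant of d.

    from math import sqrt

    def cohens_d(mean_a, sd_a, mean_b, sd_b):
        """Cohen's d using an equal-n pooled standard deviation (one common variant)."""
        pooled_sd = sqrt((sd_a ** 2 + sd_b ** 2) / 2)
        return (mean_a - mean_b) / pooled_sd

    # Reported trust ratings: AI controlled by a physician (4.31 +/- 0.88)
    # vs. AI not controlled by a physician (1.75 +/- 0.93)
    print(round(cohens_d(4.31, 0.88, 1.75, 0.93), 3))  # ~2.83, close to the reported d = 2.818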

2.
IEEE Trans Vis Comput Graph ; 28(11): 3874-3884, 2022 11.
Article in English | MEDLINE | ID: mdl-36048991

ABSTRACT

Creating social Virtual Environments (VEs) is an ongoing challenge. Traces of prior human interactions, or traces of use, are used in Physical Environments (PEs) to create more meaningful relationships with the PE and the people within it. In this paper, we explore how the concept of traces of use can be transferred from PEs to VEs to increase known success factors for social VEs, such as increased social presence. First, we introduce a conceptualization and discussion (N = 4 expert interviews) of a "Traces in VEs" framework. Second, we evaluate the framework in two lab studies (N = 46 in total), exploring the effect of traces (i) in a VE vs. a PE and (ii) on social presence. Our findings confirm that traces increase the feeling of social presence. However, their meaning may differ depending on the environment. Our framework offers a structured overview of the relevant components and relationships that need to be considered when designing meaningful user experiences in VEs using traces. Thus, our work is valuable for practitioners and researchers who want to systematically create social VEs.


Subject(s)
Computer Graphics, Emotions, Humans, Environment, User-Computer Interface
3.
Front Robot AI ; 8: 554578, 2021.
Article in English | MEDLINE | ID: mdl-33928129

ABSTRACT

With impressive developments in human-robot interaction, it may seem that technology can do anything. Especially in the domain of social robots, which, because of their anthropomorphic shape, suggest being much more than programmed machines, people may overtrust a robot's actual capabilities and reliability. This presents a serious problem, especially when personal well-being might be at stake. Hence, insights into the development and influencing factors of overtrust in robots may form an important basis for countermeasures and sensible design decisions. An empirical study [N = 110] explored the development of overtrust using the example of a pet-feeding robot. A 2 × 2 experimental design with repeated measurements contrasted the effect of one's own experience, skill demonstration, and reputation through experience reports of others. The experiment was realized in a video environment in which the participants had to imagine they were going on a four-week safari trip and leaving their beloved cat at home in the care of a pet-feeding robot. Every day, the participants had to make a choice: go on a day safari with no option to call (risk and reward) or make a boring car trip to another village to check whether the feeding had been successful and activate an emergency call if not (safe and no reward). In parallel to cases of overtrust in other domains (e.g., autopilot), the feeding robot performed flawlessly most of the time until, in the fourth week, it performed faultily on three consecutive days, resulting in the cat's death if the participants had decided to go on the day safari on those days. As expected, with repeated positive experience of the robot's reliability in feeding the cat, trust levels rapidly increased and the number of control calls decreased. Compared to one's own experience, skill demonstration and reputation were largely neglected or had only a temporary effect. We integrate these findings into a conceptual model of (over)trust over time and connect them to related psychological concepts such as positivism, instant rewards, inappropriate generalization, wishful thinking, dissonance theory, and social concepts from human-human interaction. Limitations of the present study as well as implications for robot design and future research are discussed.
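To make the daily-choice protocol described above more concrete, here is a purely illustrative toy simulation of the four-week schedule with a simple trust-update rule. The parameter names, the placement of the failure days, and the update rule are hypothetical assumptions for illustration only; they are not the study's implementation or its conceptual model.

    import random

    DAYS = 28                      # four-week trip, one choice per day
    FAILURE_DAYS = {24, 25, 26}    # assumed placement of the three consecutive faulty days in week 4

    def simulate_participant(trust=0.5, gain=0.08, loss=0.4, seed=1):
        """Toy rule: the higher the trust, the more likely the risky 'day safari' choice."""
        rng = random.Random(seed)
        choices = []
        for day in range(1, DAYS + 1):
            checks = rng.random() > trust        # control calls become rarer as trust grows
            choices.append("control call" if checks else "day safari")
            fed = day not in FAILURE_DAYS
            if checks:                           # feedback arrives only when the feeding is verified
                trust = min(1.0, trust + gain) if fed else max(0.0, trust - loss)
        return choices

    print(simulate_participant()[21:])  # week 4: once trust is high, the faulty days often go unchecked

The toy rule is only meant to show how repeated positive feedback can crowd out control behavior before the failure window, the pattern the abstract describes.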

4.
Front Robot AI ; 7: 546724, 2020.
Article in English | MEDLINE | ID: mdl-33501314

ABSTRACT

Current robot designs often reflect an anthropomorphic approach, apparently aiming to convince users through an ideal system that is maximally similar to, or even on par with, humans. The present paper challenges human-likeness as a design goal and questions whether simulating human appearance and performance adequately fits how humans think about robots in a conceptual sense, i.e., humans' mental models of robots and their self. Independent of the technical possibilities and limitations, our paper explores robots' attributed potential to become human-like by means of a thought experiment. Four hundred eighty-one participants were confronted with fictional transitions from human to robot and from robot to human, consisting of 20 subsequent steps. In each step, one part or area of the human (e.g., brain, legs) was replaced with robotic parts providing equal functionality, and vice versa. After each step, the participants rated the remaining humanness and remaining self of the depicted entity on a scale from 0 to 100%. It showed that the starting category (e.g., human, robot) serves as an anchor for all further judgments and can hardly be overcome. Even if all body parts had been exchanged, a former robot was not perceived as totally human-like, and a former human not as totally robot-like. Moreover, humanness appeared to be a more sensitive and more easily denied attribute than robotness: after the objectively same transition and exchange of the same parts, the former human was attributed less remaining humanness and self than the former robot was attributed remaining robotness and self. The participants' qualitative statements about why the robot had not become human-like often concerned the (unnatural) process of production, or simply argued that no matter how many parts are exchanged, the individual remains its original entity. Based on such findings, we suggest that instead of designing maximally human-like robots in order to reach acceptance, it might be more promising to understand robots as a "species" of their own and underline their specific characteristics and benefits. Limitations of the present study and implications for future HRI research and practice are discussed.

5.
IEEE Trans Vis Comput Graph ; 20(12): 2201-10, 2014 Dec.
Article in English | MEDLINE | ID: mdl-26356934

ABSTRACT

Data sculptures are a promising type of visualization in which data is given a physical form. In the past, they have mostly been used for artistic, communicative, or educational purposes, and designers of data sculptures argue that in such situations, physical visualizations can be more enriching than pixel-based visualizations. We present the design of Activity Sculptures: data sculptures of running activity. In a three-week field study we investigated the impact of the sculptures on 14 participants' running activity, the personal and social behaviors generated by the sculptures, as well as participants' experiences when receiving these individual physical tokens generated from the specific data of their runs. The physical rewards generated curiosity and personal experimentation, but also social dynamics such as discussion of runs or envy/competition. We argue that such passive (or calm) visualizations can complement nudging and other mechanisms of persuasion with a more playful and reflective look at one's activity.


Subject(s)
Computer Graphics, Motivation/physiology, Reward, Running/psychology, Sculpture, Adult, Female, Humans, Male, Middle Aged, Personal Satisfaction, Young Adult