Results 1 - 7 of 7
1.
J Med Internet Res; 24(11): e38525, 2022 Nov 15.
Article in English | MEDLINE | ID: mdl-36378515

ABSTRACT

BACKGROUND: Health care and well-being are 2 main interconnected application areas of conversational agents (CAs). Research, development, and commercial implementations in this area have increased significantly. In parallel with this growing interest, new challenges in designing and evaluating CAs have emerged. OBJECTIVE: This study aims to identify key design, development, and evaluation challenges of CAs in health care and well-being research, focusing on very recent projects and their emerging challenges. METHODS: A review study was conducted with 17 invited studies, most of which were presented at the ACM (Association for Computing Machinery) CHI 2020 conference workshop on CAs for health and well-being. Eligibility criteria required the studies to involve a CA applied to a health or well-being project (ongoing or recently finished). The participating teams were asked to report on their projects' design and evaluation challenges. We used thematic analysis to review the studies. RESULTS: The findings cover a range of topics from primary care to caring for older adults to health coaching. We identified 4 major themes: (1) Domain Information and Integration, (2) User-System Interaction and Partnership, (3) Evaluation, and (4) Conversational Competence. CONCLUSIONS: CAs proved their worth during the pandemic as health screening tools and are expected to remain in use, further supporting various health care domains, especially personal health care. Growing investment in CAs also demonstrates their value as personal assistants. Our study shows that while some challenges are shared with other CA application areas, safety and privacy remain the major challenges in the health care and well-being domains. Increased collaboration across institutions and entities may be a promising way to address major challenges that would otherwise be too complex for individual projects with limited scope and budget.


Subject(s)
Communication, Delivery of Health Care, Humans, Aged, Health Personnel
2.
J Endourol; 2024 Jan 29.
Article in English | MEDLINE | ID: mdl-37905524

ABSTRACT

Introduction: Automated skills assessment can provide surgical trainees with objective, personalized feedback during training. Here, we measure the efficacy of artificial intelligence (AI)-based feedback on a robotic suturing task. Materials and Methods: Forty-two participants with no robotic surgical experience were randomized to a control or feedback group and video-recorded while completing two rounds (R1 and R2) of suturing tasks on a da Vinci surgical robot. Participants were assessed on needle handling and needle driving, and feedback was provided via a visual interface after R1. Participants in the feedback group were informed of their AI-based skill assessment and presented with specific video clips from R1. Participants in the control group were presented with randomly selected video clips from R1 as a placebo. Participants in each group were further labeled as underperformers or innate-performers based on a median split of their technical skill scores from R1. Results: Demographic features were similar between the control (n = 20) and feedback (n = 22) groups (p > 0.05). From R1 to R2, the feedback group had a significantly larger improvement in needle handling score than the control group (0.30 vs -0.02, p = 0.018), although the improvement in needle driving score was not significantly different (0.17 vs -0.40, p = 0.074). Innate-performers exhibited similar improvements across rounds regardless of feedback (p > 0.05). In contrast, underperformers in the feedback group improved more than those in the control group in needle handling (p = 0.02). Conclusion: AI-based feedback facilitates surgical trainees' acquisition of robotic technical skills, especially for underperformers. Future research will extend AI-based feedback to additional suturing skills, surgical tasks, and experience groups.
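The underperformer/innate-performer labeling described above, a median split of round-1 skill scores, can be sketched as follows. The participant IDs and scores below are hypothetical illustrations, not data from the study:

```python
import statistics

def median_split(r1_scores):
    """Label each participant by a median split of round-1 skill scores:
    above the median -> innate-performer, at or below -> underperformer."""
    med = statistics.median(r1_scores.values())
    return {pid: ("innate-performer" if score > med else "underperformer")
            for pid, score in r1_scores.items()}

# Hypothetical R1 needle-handling scores for four participants
scores = {"p01": 2.1, "p02": 3.4, "p03": 2.8, "p04": 3.9}
labels = median_split(scores)  # median = 3.1
```

With an even number of participants the median falls between two scores, so no one sits exactly at the cut point; with an odd number, the tie-at-median participant lands in the underperformer group under this sketch's convention.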

3.
J Robot Surg; 17(2): 597-603, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36149590

ABSTRACT

Our group previously defined a dissection gesture classification system that deconstructs robotic tissue dissection into its most elemental yet meaningful movements. The purpose of this study was to expand that framework by adding an assessment of gesture efficacy and analyzing dissection patterns among surgeons of varying experience. We defined three possible gesture efficacies: ineffective (no meaningful effect on the tissue), effective (intended effect on the tissue), and erroneous (unintended disruption of the tissue). Novices (0 prior robotic cases), intermediates (1-99 cases), and experts (≥ 100 cases) completed a robotic dissection task in a dry-lab training environment. Video recordings were reviewed to classify each gesture and determine its efficacy, and dissection patterns between groups were then analyzed. 23 participants completed the task: 9 novices, 8 intermediates with a median caseload of 60 (IQR 41-80), and 6 experts with a median caseload of 525 (IQR 413-900). For gesture selection, increasing experience was associated with an increasing proportion of dissection gestures (p = 0.009) and a decreasing proportion of retraction gestures (p = 0.009). For gesture efficacy, novices performed the greatest proportion of ineffective gestures (9.8%, p < 0.001) and intermediates committed the greatest proportion of erroneous gestures (26.8%, p < 0.001); the three groups performed similar proportions of effective gestures overall, though experts performed the greatest proportion of effective retraction gestures (85.6%, p < 0.001). Between experience groups, we found significant differences in gesture selection and gesture efficacy. These relationships may provide insight into further improving surgical training.


Subject(s)
Robotic Surgical Procedures, Robotics, Humans, Robotic Surgical Procedures/methods, Gestures, Movement
4.
JAMA Netw Open; 6(6): e2320702, 2023 Jun 1.
Article in English | MEDLINE | ID: mdl-37378981

ABSTRACT

Importance: Live feedback in the operating room is essential in surgical training. Despite the role this feedback plays in developing surgical skills, an accepted methodology to characterize the salient features of feedback has not been defined. Objective: To quantify the intraoperative feedback provided to trainees during live surgical cases and propose a standardized deconstruction for feedback. Design, Setting, and Participants: In this qualitative study using a mixed methods analysis, surgeons at a single academic tertiary care hospital were audio and video recorded in the operating room from April to October 2022. Urological residents, fellows, and faculty attending surgeons involved in robotic teaching cases during which trainees had active control of the robotic console for at least some portion of a surgery were eligible to voluntarily participate. Feedback was time stamped and transcribed verbatim. An iterative coding process was performed using recordings and transcript data until recurring themes emerged. Exposure: Feedback in audiovisual recorded surgery. Main Outcomes and Measures: The primary outcomes were the reliability and generalizability of a feedback classification system in characterizing surgical feedback. Secondary outcomes included assessing the utility of our system. Results: In 29 surgical procedures that were recorded and analyzed, 4 attending surgeons, 6 minimally invasive surgery fellows, and 5 residents (postgraduate years, 3-5) were involved. For the reliability of the system, 3 trained raters achieved moderate to substantial interrater reliability in coding cases using 5 types of triggers, 6 types of feedback, and 9 types of responses (prevalence-adjusted and bias-adjusted κ range: a 0.56 [95% CI, 0.45-0.68] minimum for triggers to a 0.99 [95% CI, 0.97-1.00] maximum for feedback and responses). 
For the generalizability of the system, 6 types of surgical procedures and 3711 instances of feedback were analyzed and coded with types of triggers, feedback, and responses. Significant differences in triggers, feedback, and responses reflected surgeon experience level and the surgical task being performed. For example, as a response, attending surgeons took over for safety concerns more often for fellows than residents (prevalence rate ratio [RR], 3.97 [95% CI, 3.12-4.82]; P = .002), and suturing involved more errors that triggered feedback than dissection (RR, 1.65 [95% CI, 1.03-3.33]; P = .007). For the utility of the system, different combinations of trainer feedback were associated with different rates of trainee responses. For example, technical feedback with a visual component was associated with an increased rate of trainee behavioral change or verbal acknowledgment responses (RR, 1.11 [95% CI, 1.03-1.20]; P = .02). Conclusions and Relevance: These findings suggest that identifying different types of triggers, feedback, and responses may be a feasible and reliable method for classifying surgical feedback across several robotic procedures. Outcomes suggest that a system that can be generalized across surgical specialties and for trainees of different experience levels may help galvanize novel surgical education strategies.
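The prevalence-adjusted and bias-adjusted kappa (PABAK) used above to quantify interrater reliability reduces to a simple function of observed agreement, PABAK = 2·Po − 1, where Po is the proportion of items the raters code identically. A minimal sketch for two raters, using made-up codes rather than the study's data:

```python
def pabak(rater_a, rater_b):
    """Prevalence- and bias-adjusted kappa for two raters: PABAK = 2*Po - 1,
    where Po is the proportion of items on which the raters agree."""
    if len(rater_a) != len(rater_b):
        raise ValueError("raters must code the same number of items")
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
    return 2 * po - 1

# Hypothetical binary codes for 8 feedback instances (1 = trigger present)
a = [1, 1, 0, 1, 0, 0, 1, 1]
b = [1, 0, 0, 1, 0, 1, 1, 1]
kappa = pabak(a, b)  # Po = 6/8, so PABAK = 0.5
```

Unlike Cohen's kappa, PABAK is unaffected by skewed category prevalence, which is why it is often preferred when one code dominates the data.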


Subject(s)
Surgical Specialties, Surgeons, Humans, Feedback, Reproducibility of Results, Neoplasm Recurrence, Local, Surgeons/education
5.
Eur Urol Open Sci; 46: 15-21, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36506257

ABSTRACT

Background: There is no standard for the feedback that an attending surgeon provides to a training surgeon, which may lead to variable outcomes in teaching cases. Objective: To create and administer standardized feedback to medical students in an attempt to improve performance and learning. Design setting and participants: A cohort of 45 medical students was recruited from a single medical school. Participants were randomly assigned to two groups. Both completed two rounds of a robotic surgical dissection task on a da Vinci Xi surgical system. The first round was the baseline assessment. In the second round, one group received feedback and the other served as the control (no feedback). Outcome measurements and statistical analysis: Video from each round was retrospectively reviewed by four blinded raters and given a total error tally (primary outcome) and a technical skills score (Global Evaluative Assessment of Robotic Surgery [GEARS]). Generalized linear models were used for statistical modeling. Based on initial performance, each participant was categorized as an innate performer or an underperformer, depending on whether their error tally was below or above the median. Results and limitations: In round 2, the intervention group had a larger decrease in error rate than the control group, with a risk ratio (RR) of 1.51 (95% confidence interval [CI] 1.07-2.14; p = 0.02). The intervention group also had a greater increase in GEARS score in comparison to the control group, with a mean group difference of 2.15 (95% CI 0.81-3.49; p < 0.01). The interaction effect between innate performers versus underperformers and the intervention was statistically significant for the error rates, at F(1,38) = 5.16 (p = 0.03). Specifically, the intervention had a statistically significant effect on the error rate for underperformers (RR 2.23, 95% CI 1.37-3.62; p < 0.01) but not for innate performers (RR 1.03, 95% CI 0.63-1.68; p = 0.91).
Conclusions: Real-time feedback improved performance globally compared to the control. The benefit of real-time feedback was stronger for underperformers than for trainees with innate skill. Patient summary: We found that real-time feedback during a training task using a surgical robot improved the performance of trainees when the task was repeated. This feedback approach could help in training doctors in robotic surgery.
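Risk ratios with confidence intervals like those reported above are conventionally computed with the log-RR normal approximation. A minimal sketch with made-up counts (not the study's data; the study used generalized linear models, whereas this shows only the standard two-by-two approximation):

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio of group A vs group B with an approximate 95% CI,
    computed on the log scale via the normal approximation."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 15/20 improved with feedback vs 10/20 without
rr, lo, hi = risk_ratio(15, 20, 10, 20)  # rr = 1.5
```

An RR whose CI excludes 1.0 corresponds to a statistically significant difference at the chosen level, which is how intervals such as 2.23 (1.37-3.62) versus 1.03 (0.63-1.68) are read in the abstract.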

6.
J Endourol; 36(5): 712-720, 2022 May.
Article in English | MEDLINE | ID: mdl-34913734

ABSTRACT

Purpose: We sought to understand the relationship between surgeon technical skills, cognitive workload, and errors during a simulated robotic dissection task. Materials and Methods: Participant surgeons performed a robotic surgery dissection exercise and were grouped by surgical experience. Technical skills were evaluated using the validated Global Evaluative Assessment of Robotic Skills (GEARS) assessment tool. The dissection task was evaluated for errors during active dissection or passive retraction maneuvers. We quantified the cognitive workload of surgeon participants as an index of cognitive activity (ICA) derived from task-evoked pupillary response metrics; ICA ranges from 0 to 1, with 1 representing maximal cognitive activity. Generalized estimating equations (GEEs) were used for all modeling to establish relationships between surgeon technical skills, cognitive workload, and errors. Results: We found a strong association between technical skills, as measured by multiple GEARS domains (depth perception, force sensitivity, and robotic control), and passive errors, with higher GEARS scores associated with a lower relative risk of errors (all p < 0.01). For novice surgeons, as average GEARS scores increased, the average estimated ICA decreased; in contrast, for expert surgeons, as average GEARS scores increased, the average estimated ICA increased. When exhibiting optimal technical skill (maximal GEARS scores), novices and experts reached a similar range of ICA scores (0.47 and 0.42, respectively). Conclusions: There appears to be an optimal cognitive workload level for surgeons of all experience levels during our robotic surgical exercise. Select technical skill domains were strong predictors of errors. Future research will explore whether an ideal cognitive workload range truly optimizes surgical training and reduces surgical errors.


Subject(s)
Robotic Surgical Procedures, Robotics, Surgeons, Clinical Competence, Cognition, Humans, Robotic Surgical Procedures/education, Surgeons/education
7.
AMIA Annu Symp Proc; 2019: 552-561, 2019.
Article in English | MEDLINE | ID: mdl-32308849

ABSTRACT

Assessing patients' social needs is a critical challenge in emergency departments (EDs). However, most EDs do not have extra staff to administer screeners, and without staff to administer them, response rates are low, especially among patients with low health literacy. To better engage such patients, we designed HarborBot, a chatbot for social needs screening. In a study with 30 participants, each of whom took a social needs screener both on a traditional survey platform and with HarborBot, we found that the two platforms yielded comparable data (equivalent in 87% of responses). We also found that while participants with high health literacy preferred the traditional survey platform for its efficiency (allowing them to proceed at their own pace), participants with low health literacy preferred HarborBot, finding it more engaging, personal, and understandable. We conclude with a discussion of design implications for chatbots for social needs screening.


Subject(s)
Artificial Intelligence, Attitude to Computers, Emergency Service, Hospital, Social Determinants of Health, Surveys and Questionnaires, Comprehension, Health Literacy, Humans, Mass Screening, Patient Satisfaction, Software