Results 1 - 20 of 100
1.
Med Educ ; 2024 Jul 11.
Article in English | MEDLINE | ID: mdl-38989816
2.
J Crohns Colitis ; 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39028803

ABSTRACT

BACKGROUND AND AIMS: Intestinal ultrasound has become a crucial tool for assessing inflammation in patients with inflammatory bowel disease, prompting a surge in demand for trained sonographers. While educational programs exist, the length of training needed to reach proficiency in correctly classifying inflammation remains unclear. Our study partly addresses this gap by exploring the learning curves associated with the deliberate practice of sonographic disease assessment, focusing on the key disease activity parameters of bowel wall thickness, bowel wall stratification, color Doppler signal, and inflammatory fat. METHODS: Twenty-one novices and six certified intestinal ultrasound practitioners engaged in an 80-case deliberate practice online training program. A panel of three experts independently graded ultrasound images representing various degrees of disease activity and agreed upon a consensus score. We used statistical analyses, including mixed-effects regression models, to evaluate learning trajectories. Pass/fail thresholds distinguishing novices from certified practitioners were determined through contrasting-groups analyses. RESULTS: Novices showed significant improvement in interpreting bowel wall thickness, surpassing the pass/fail threshold, and reached mastery level by case 80. For color Doppler signal and inflammatory fat, novices surpassed the pass/fail threshold but did not achieve mastery. Novices did not improve in assessing bowel wall stratification. CONCLUSIONS: We found considerable individual and group-level differences in learning curves, supporting the concept of competency-based training for assessing bowel wall thickness, color Doppler signal, and inflammatory fat. However, despite practice over 80 cases, novices did not improve in their interpretation of bowel wall stratification, suggesting that a different approach is needed for this parameter.
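Learning curves like those analyzed above (the study itself used mixed-effects regression models) are often approximated per trainee with a logarithmic model. The sketch below is a simplified illustration under that assumption; the function names are ours, not the study's:

```python
import numpy as np

def fit_log_curve(case_numbers, scores):
    # Fit score = a + b * ln(case) by least squares -- a simplified,
    # per-trainee stand-in for the mixed-effects models used in the study.
    x = np.log(np.asarray(case_numbers, dtype=float))
    b, a = np.polyfit(x, np.asarray(scores, dtype=float), 1)
    return a, b

def predict(a, b, case_number):
    # Predicted score after a given number of practice cases.
    return a + b * np.log(case_number)
```

Plotting `predict` over cases 1-80 against a pass/fail threshold gives a rough per-trainee view of when (or whether) the threshold is crossed.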

5.
Prenat Diagn ; 44(6-7): 688-697, 2024 06.
Article in English | MEDLINE | ID: mdl-38738737

ABSTRACT

OBJECTIVE: To examine the feasibility and performance of implementing a standardized fetal cardiac scan at the time of a routine first-trimester ultrasound scan. METHOD: A retrospective, single-center study in an unselected population between March 2021 and July 2022. A standardized cardiac scan protocol consisting of a four-chamber and 3-vessel trachea view with color Doppler was implemented as part of the routine first-trimester scan. Sonographers were asked to categorize the fetal heart anatomy. Data were stratified into two groups based on the possibility of evaluating the fetal heart. The influence of maternal and fetal characteristics and the detection of major congenital heart disease were investigated. RESULTS: A total of 5083 fetuses were included. The fetal heart evaluation was completed in 84.9%. The proportion of successful scans increased throughout the study period from 76% in the first month to 92% in the last month. High maternal body mass index and early gestational age at scan significantly decreased the feasibility. The first-trimester detection rate of major congenital heart defects was 7/16, of which four cases were identified by the cardiac scan protocol with no false-positive cases. CONCLUSION: First-trimester evaluation of the fetal heart by a standardized scan protocol is feasible to implement in daily practice. It can contribute to the earlier detection of congenital heart defects at a very low false-positive rate.


Subject(s)
Fetal Heart , Congenital Heart Defects , First Trimester of Pregnancy , Prenatal Ultrasonography , Humans , Female , Pregnancy , Congenital Heart Defects/diagnostic imaging , Congenital Heart Defects/epidemiology , Congenital Heart Defects/diagnosis , Retrospective Studies , Prenatal Ultrasonography/methods , Adult , Fetal Heart/diagnostic imaging , Feasibility Studies
6.
Perspect Med Educ ; 13(1): 250-254, 2024.
Article in English | MEDLINE | ID: mdl-38680196

ABSTRACT

The use of the p-value in quantitative research, particularly its threshold of "P < 0.05" for determining "statistical significance," has long been a cornerstone of statistical analysis in research. However, this standard has been increasingly scrutinized for its potential to produce misleading findings, especially when the practical significance, the number of comparisons, or the suitability of statistical tests are not properly considered. In response to the controversy around the use of p-values, the American Statistical Association published a statement in 2016 that challenged the research community to abandon the term "statistically significant". This stance has been echoed by leading scientific journals, which urge a significant reduction in, or complete elimination of, reliance on p-values when reporting results. To provide guidance to researchers in health professions education, this paper offers a succinct overview of the definition of p-values and the ongoing debate regarding their use. It reflects on the controversy by highlighting the common pitfalls associated with p-value interpretation and usage, such as misinterpretation, overemphasis, and false dichotomization between "significant" and "non-significant" results. This paper also outlines specific recommendations for the effective use of p-values in statistical reporting, including the importance of reporting effect sizes, confidence intervals, and the null hypothesis, and of conducting sensitivity analyses for appropriate interpretation. These considerations aim to guide researchers toward a more nuanced and informative use of p-values.
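As a concrete illustration of the recommendation above to report effect sizes and confidence intervals alongside p-values, here is a minimal sketch; the function names and the normal-approximation 95% CI are our assumptions, not content from the paper:

```python
import numpy as np

def cohens_d(x, y):
    # Standardized mean difference using the pooled standard deviation.
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

def mean_diff_ci(x, y, z=1.96):
    # Normal-approximation 95% CI for the difference in group means.
    diff = np.mean(x) - np.mean(y)
    se = np.sqrt(np.var(x, ddof=1) / len(x) + np.var(y, ddof=1) / len(y))
    return diff - z * se, diff + z * se
```

Reporting the effect size and the interval conveys magnitude and precision, which a bare "p < 0.05" does not.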


Subject(s)
Research Design , Humans , Statistical Data Interpretation , Research Design/standards , Research Design/trends , Research Design/statistics & numerical data
7.
Sci Rep ; 14(1): 5809, 2024 03 09.
Article in English | MEDLINE | ID: mdl-38461322

ABSTRACT

This study aimed to develop a deep learning model to assess the quality of fetal echocardiography and to perform prospective clinical validation. The model was trained on data from the 18-22-week anomaly scan conducted in seven hospitals from 2008 to 2018. Prospective validation involved 100 patients from two hospitals. A total of 5363 images from 2551 pregnancies were used for training and validation. The model's segmentation accuracy depended on image quality, measured by a quality score (QS). It achieved an overall average accuracy of 0.91 (SD 0.09) across the test set, with images having an above-average QS scoring 0.97 (SD 0.03). During prospective validation of 192 images, clinicians rated 44.8% (SD 9.8) of images as equal in quality, 18.69% (SD 5.7) favoring auto-captured images, and 36.51% (SD 9.0) preferring manually captured ones. Images with an above-average QS showed better agreement on segmentations (p < 0.001) and QS (p < 0.001) with fetal medicine experts. Auto-capture saved additional planes beyond protocol requirements, resulting in more comprehensive echocardiographies. A low QS had an adverse effect on both model performance and clinicians' agreement with model feedback. The findings highlight the importance of developing and evaluating AI models based on 'noisy' real-life data rather than pursuing the highest accuracy possible with retrospective academic-grade data.


Subject(s)
Echocardiography , Female , Pregnancy , Humans , Retrospective Studies
8.
Neonatology ; 121(3): 314-326, 2024.
Article in English | MEDLINE | ID: mdl-38408441

ABSTRACT

INTRODUCTION: Simulation-based training (SBT) aids healthcare providers in acquiring the technical skills necessary to improve patient outcomes and safety. However, since SBT may require significant resources, training all skills to a comparable extent is impractical. Hence, a strategic prioritization of technical skills is necessary. While the European Training Requirements in Neonatology provide guidance on necessary skills, they lack prioritization. We aimed to identify and prioritize technical skills for an SBT curriculum in neonatology. METHODS: A three-round modified Delphi process involving expert neonatologists and neonatal trainees was performed. In round one, the participants listed all the technical skills newly trained neonatologists should master. The content analysis excluded duplicates and non-technical skills. In round two, the Copenhagen Academy for Medical Education and Simulation Needs Assessment Formula (CAMES-NAF) was used to preliminarily prioritize the technical skills according to frequency, importance of competency, SBT impact on patient safety, and feasibility for SBT. In round three, the participants further refined and reprioritized the technical skills. Items achieving consensus (agreement of ≥75%) were included. RESULTS: We included 168 participants from 10 European countries. The response rates in rounds two and three were 80% (135/168) and 87% (117/135), respectively. In round one, the participants suggested 1964 different items. Content analysis revealed 81 unique technical skills, which were prioritized in round two. In round three, 39 technical skills achieved consensus and were included. CONCLUSION: We reached a European consensus on a prioritized list of 39 technical skills to be included in an SBT curriculum in neonatology.


Subject(s)
Clinical Competence , Curriculum , Delphi Technique , Neonatology , Simulation Training , Neonatology/education , Humans , Europe , Simulation Training/methods , Female , Male , Adult
9.
Med Teach ; 46(4): 471-485, 2024 04.
Article in English | MEDLINE | ID: mdl-38306211

ABSTRACT

Changes in digital technology, the increasing volume of data collection, and advances in methods have the potential to unleash the value of big data generated through the education of health professionals. Coupled with this potential are legitimate concerns about how data can be used or misused in ways that limit autonomy or equity, or that harm stakeholders. This consensus statement is intended to address these issues by foregrounding the ethical imperatives for engaging with big data as well as the potential risks and challenges. Recognizing the wide and ever-evolving scope of big data scholarship, we focus on foundational issues for framing and engaging in research. We ground our recommendations in the context of big data created through data sharing across and within the stages of the continuum of the education and training of health professionals. Ultimately, the goal of this statement is to support a culture of trust and quality for big data research to deliver on its promises for health professions education (HPE) and the health of society. Based on expert consensus and a review of the literature, we report 19 recommendations on (1) framing scholarship and research, (2) considering unique ethical practices, (3) governance of data sharing collaborations that engage stakeholders, (4) best practices for data sharing processes, (5) the importance of knowledge translation, and (6) advancing the quality of scholarship through multidisciplinary collaboration. The recommendations were modified and refined based on feedback from the 2022 Ottawa Conference attendees and subsequent public engagement. Adoption of these recommendations can help HPE scholars share data ethically and engage in high-impact big data scholarship, which in turn can help the field meet the ultimate goal: high-quality education that leads to high-quality healthcare.


Subject(s)
Big Data , Health Occupations , Information Dissemination , Humans , Health Occupations/education , Consensus
10.
Eur Arch Otorhinolaryngol ; 281(4): 1905-1911, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38177897

ABSTRACT

PURPOSE: This study aimed to assess the validity of simulation-based assessment of ultrasound skills for thyroid ultrasound. METHODS: The study collected validity evidence for simulation-based ultrasound assessment of thyroid ultrasound skills. Experts (n = 8) and novices (n = 21) completed a test containing two tasks and four cases on a virtual reality ultrasound simulator (U/S Mentor's Neck Ultrasound Module). Validity evidence was collected and structured according to Messick's validity framework. The assessments being evaluated included built-in simulator metrics and expert-based evaluations using the Objective Structured Assessment of Ultrasound Skills (OSAUS) scale. RESULTS: Out of 64 built-in simulator metrics, 9 (14.1%) exhibited validity evidence. The internal consistency of these metrics was strong (Cronbach's α = 0.805) with high test-retest reliability (intraclass correlation coefficient = 0.911). Novices achieved an average score of 41.9% (SD = 24.3) of the maximum, contrasting with experts at 81.9% (SD = 16.7). Time comparisons indicated minor differences between experts (median: 359 s) and novices (median: 376.5 s). All OSAUS items differed significantly between the two groups. The correlation between correctly entered clinical findings and the OSAUS scores was 0.748 (p < 0.001). The correlation between correctly entered clinical findings and the metric scores was 0.801 (p < 0.001). CONCLUSION: While simulation-based training is promising, only 14% of built-in simulator metrics could discriminate between novices and ultrasound experts. Already-established competency frameworks such as OSAUS provided strong validity evidence for the assessment of otorhinolaryngology ultrasound competence.
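The internal-consistency figure quoted above (Cronbach's α = 0.805) comes from a standard formula that can be computed from a person-by-metric score matrix. The following is a textbook sketch, not code from the study:

```python
import numpy as np

def cronbach_alpha(scores):
    # scores: 2D array, rows = test-takers, columns = items/metrics.
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of total scores
    return k / (k - 1) * (1 - item_var_sum / total_var)
```

Perfectly correlated items drive α toward 1, which is why a high α across the nine retained metrics indicates they measure a common underlying skill.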


Subject(s)
Clinical Competence , Virtual Reality , Humans , Reproducibility of Results , Ultrasonography , Computer Simulation
11.
Pediatr Res ; 2024 Jan 11.
Article in English | MEDLINE | ID: mdl-38200325

ABSTRACT

INTRODUCTION: Pre-procedure analgesia carries a risk of apnoea, which may complicate the Less Invasive Surfactant Administration (LISA) procedure or reduce its effect. METHODS: The NONA-LISA trial (ClinicalTrials.gov, NCT05609877) is a multicentre, blinded, randomised controlled trial aiming to include 324 infants born before 30 gestational weeks who meet the criteria for surfactant treatment by LISA. Infants will be randomised to LISA after administration of fentanyl 0.5-1 mcg/kg intravenously (fentanyl group) or isotonic saline solution intravenously (saline group). All infants will receive standardised non-pharmacological comfort care before and during the LISA procedure. Additional analgesics will be provided at the clinician's discretion. The primary outcome is the need for invasive ventilation, meaning mechanical or manual ventilation via an endotracheal tube, for at least 30 min (cumulated) within 24 h of the procedure. Secondary outcomes include the modified COMFORTneo score during the procedure, bronchopulmonary dysplasia at 36 weeks, and mortality at 36 weeks. DISCUSSION: The NONA-LISA trial has the potential to provide evidence for a standardised approach to relief from discomfort in preterm infants during LISA and to reduce invasive ventilation. The results may affect future clinical practice. IMPACT: Pre-procedure analgesia is associated with apnoea and may complicate procedures that rely on regular spontaneous breathing, such as Less Invasive Surfactant Administration (LISA). This randomised controlled trial addresses the effect of analgesic premedication in LISA by comparing fentanyl with a placebo (isotonic saline) in infants undergoing the LISA procedure. All infants will receive standardised non-pharmacological comfort care. The NONA-LISA trial has the potential to provide evidence for a standardised approach to relief from discomfort or pain in preterm infants during LISA and to reduce invasive ventilation.
The results may affect future clinical practice regarding analgesic treatment associated with the LISA procedure.

12.
BMC Med Educ ; 24(1): 15, 2024 Jan 03.
Article in English | MEDLINE | ID: mdl-38172820

ABSTRACT

BACKGROUND: Ultrasound is a safe and effective diagnostic tool used within several specialties. However, the quality of ultrasound scans relies on sufficiently skilled clinician operators. The aim of this study was to explore the validity of automated assessments of upper abdominal ultrasound skills using an ultrasound simulator. METHODS: Twenty-five novices and five experts were recruited, all of whom completed an assessment program for the evaluation of upper abdominal ultrasound skills on a virtual reality simulator. The program included five modules that assessed different organ systems using automated simulator metrics. We used Messick's framework to explore the validity evidence of these simulator metrics to determine the contents of a final simulator test. We used the contrasting groups method to establish a pass/fail level for the final simulator test. RESULTS: Thirty-seven of the 60 metrics were able to discriminate between novices and experts (p < 0.05). The median simulator score on the final simulator test, which included the metrics with validity evidence, was 26.68% (range: 8.1-40.5%) for novices and 85.1% (range: 56.8-91.9%) for experts. The internal structure was assessed by Cronbach's alpha (0.93) and the intraclass correlation coefficient (0.89). The pass/fail level was determined to be 50.9%. With this pass/fail criterion, no novices passed and no experts failed. CONCLUSIONS: This study collected validity evidence for simulation-based assessment of upper abdominal ultrasound examinations, which is the first step toward competency-based training. Future studies may examine how competency-based training in the simulated setting translates into improvements in clinical performance.
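The contrasting groups method mentioned above sets the pass/fail score where the novice and expert score distributions intersect. A minimal sketch under a normality assumption (our illustration, not the study's implementation):

```python
import numpy as np

def contrasting_groups_cutoff(novice_scores, expert_scores):
    # Fit a normal distribution to each group and solve for the score
    # where the two probability density functions are equal.
    m1, s1 = np.mean(novice_scores), np.std(novice_scores, ddof=1)
    m2, s2 = np.mean(expert_scores), np.std(expert_scores, ddof=1)
    # Equating the two normal pdfs and taking logs yields a quadratic in x.
    a = 1 / (2 * s1**2) - 1 / (2 * s2**2)
    b = m2 / s2**2 - m1 / s1**2
    c = m1**2 / (2 * s1**2) - m2**2 / (2 * s2**2) + np.log(s1 / s2)
    roots = np.roots([a, b, c])
    roots = np.real(roots[np.isreal(roots)])
    # Prefer the intersection lying between the two group means.
    for r in roots:
        if min(m1, m2) <= r <= max(m1, m2):
            return float(r)
    return float(roots[0])
```

With equal group standard deviations, the cutoff reduces to the midpoint between the two group means; unequal spreads shift it toward the tighter distribution.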


Subject(s)
Internship and Residency , Virtual Reality , Humans , Clinical Competence , Computer Simulation , Ultrasonography , Reproducibility of Results
13.
J Robot Surg ; 18(1): 47, 2024 Jan 20.
Article in English | MEDLINE | ID: mdl-38244130

ABSTRACT

To collect validity evidence for the assessment of surgical competence through the classification of general surgical gestures for a simulated robot-assisted radical prostatectomy (RARP). We used 165 video recordings of novice and experienced RARP surgeons performing three parts of the RARP procedure on the RobotiX Mentor. We annotated the surgical tasks with different surgical gestures: dissection, hemostatic control, application of clips, needle handling, and suturing. The gestures were analyzed using idle time (periods with minimal instrument movements) and active time (whenever a surgical gesture was annotated). The distribution of surgical gestures was described using a one-dimensional heat map, snail tracks. All surgeons had a similar percentage of idle time but novices had longer phases of idle time (mean time: 21 vs. 15 s, p < 0.001). Novices used a higher total number of surgical gestures (number of phases: 45 vs. 35, p < 0.001) and each phase was longer compared with those of the experienced surgeons (mean time: 10 vs. 8 s, p < 0.001). There was a different pattern of gestures between novices and experienced surgeons as seen by a different distribution of the phases. General surgical gestures can be used to assess surgical competence in simulated RARP and can be displayed as a visual tool to show how performance is improving. The established pass/fail level may be used to ensure the competence of the residents before proceeding with supervised real-life surgery. The next step is to investigate if the developed tool can optimize automated feedback during simulator training.
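The idle-time and active-time measures described above can be derived from time-stamped gesture annotations. A minimal sketch (the interval representation and the function name are our assumptions, not the study's code):

```python
def phase_stats(gestures, total_time):
    # gestures: sorted, non-overlapping (start, end) intervals in seconds
    # marking annotated surgical gestures; total_time: task duration.
    active = sum(end - start for start, end in gestures)
    # Idle phases are the gaps before, between, and after gestures.
    bounds = [0.0] + [t for g in gestures for t in g] + [total_time]
    gaps = [bounds[i + 1] - bounds[i] for i in range(0, len(bounds), 2)]
    idle_phases = [g for g in gaps if g > 0]
    return {
        "active_time": active,
        "mean_gesture_phase": active / len(gestures) if gestures else 0.0,
        "mean_idle_phase": sum(idle_phases) / len(idle_phases) if idle_phases else 0.0,
    }
```

Comparing `mean_idle_phase` and `mean_gesture_phase` between groups reproduces the kind of novice-versus-expert contrast the abstract reports.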


Subject(s)
Robotic Surgical Procedures , Male , Humans , Robotic Surgical Procedures/methods , Gestures , Clinical Competence , Prostate , Prostatectomy/methods
14.
Med Educ ; 58(1): 105-117, 2024 01.
Article in English | MEDLINE | ID: mdl-37615058

ABSTRACT

BACKGROUND: Artificial intelligence (AI) is becoming increasingly used in medical education, but our understanding of the validity of AI-based assessments (AIBA) as compared with traditional clinical expert-based assessments (EBA) is limited. In this study, the authors aimed to compare and contrast the validity evidence for the assessment of a complex clinical skill based on scores generated by an AI and by trained clinical experts, respectively. METHODS: The study was conducted between September 2020 and October 2022. The authors used Kane's validity framework to prioritise and organise their evidence according to the four inferences: scoring, generalisation, extrapolation and implications. The context of the study was chorionic villus sampling performed within the simulated setting. AIBA and EBA were used to evaluate the performances of experts, intermediates and novices based on video recordings. The clinical experts used a scoring instrument developed in a previous international consensus study. The AI used convolutional neural networks to capture features from video recordings, motion tracking and eye movements to arrive at a final composite score. RESULTS: A total of 45 individuals participated in the study (22 novices, 12 intermediates and 11 experts). The authors demonstrated validity evidence for scoring, generalisation, extrapolation and implications for both EBA and AIBA. The plausibility of assumptions related to scoring, evidence of reproducibility and relation to different training levels was examined. Issues relating to construct underrepresentation, lack of explainability, and threats to robustness were identified as potential weak links in the AIBA validity argument compared with the EBA validity argument. CONCLUSION: There were weak links in the use of AIBA compared with EBA, mainly in their representation of the underlying construct but also regarding their explainability and ability to transfer to other datasets.
However, combining AI and clinical expert-based assessments may offer complementary benefits, which is a promising subject for future research.


Subject(s)
Clinical Competence , Medical Education , Humans , Educational Measurement , Artificial Intelligence , Reproducibility of Results
15.
Med Teach ; 46(7): 948-955, 2024 07.
Article in English | MEDLINE | ID: mdl-38145618

ABSTRACT

BACKGROUND: A significant part of clinicians' learning depends on their ability to effectively transfer acquired knowledge, skills, and attitudes from specialty-specific clinical courses to their working environment. MATERIAL AND METHOD: We conducted semi-structured interviews with 20 anaesthesiologist trainees (i.e. residents) in four group and five individual interviews, using self-regulated learning (SRL) principles as sensitizing concepts. Data were collected and analyzed iteratively using thematic analysis. RESULTS: Advanced trainees are highly motivated to explore what they have learned in specialty-specific courses, but they often face several barriers in implementing their learning in the workplace environment. Four themes emerged from the interview data: 'Be ready to learn', 'Take the "take-home messages" home', 'Be ready to create your own opportunities', and 'Face it, it's not entirely up to you'. Understanding the challenges of transferring knowledge from courses to the working environment is an important lesson for assisting trainees in setting their learning goals, monitoring their progress, and re-evaluating their SRL processes. CONCLUSION: Even for advanced trainees, successfully transferring knowledge from specialty-specific courses often requires adequate commitment and support. Medical supervisors and other relevant stakeholders must be aware of their shared responsibility for creating individual environments that support opportunities for trainees to self-regulate their learning.


Subject(s)
Clinical Competence , Interviews as Topic , Learning , Humans , Female , Male , Internship and Residency/organization & administration , Anesthesiology/education , Qualitative Research , Workplace , Adult
16.
BMC Med Educ ; 23(1): 921, 2023 Dec 05.
Article in English | MEDLINE | ID: mdl-38053134

ABSTRACT

BACKGROUND: Ultrasound is an essential diagnostic examination used in several medical specialties. However, the quality of ultrasound examinations depends on mastery of certain skills, which may be difficult and costly to attain in the clinical setting. This study aimed to explore mastery learning for trainees practicing general abdominal ultrasound using a virtual reality simulator and to evaluate the associated cost per student achieving the mastery learning level. METHODS: Trainees were instructed to train on a virtual reality ultrasound simulator until they attained a mastery learning level that had been established in a previous study. Automated simulator scores were used to track performances during each round of training, and these scores were recorded to determine learning curves. Finally, the costs of the training were evaluated using a micro-costing procedure. RESULTS: Twenty-one of the 24 trainees managed to attain the predefined mastery level twice consecutively. The trainees completed their training in a median of 2h38min (range: 1h20min-4h30min) using a median of 7 attempts (range: 3-11 attempts) at the simulator test. The cost of training one trainee to the mastery level was estimated to be USD 638. CONCLUSION: Trainees can attain mastery learning levels in general abdominal ultrasound examinations within 3 hours of training in the simulated setting and at an average cost of USD 638 per trainee. Future studies are needed to explore how the cost of simulation-based training is best balanced against the costs of clinical training.


Subject(s)
Simulation Training , Virtual Reality , Humans , Clinical Competence , Ultrasonography , Computer Simulation , Simulation Training/methods , Learning Curve
17.
JMIR Dermatol ; 6: e48357, 2023 Aug 09.
Article in English | MEDLINE | ID: mdl-37624707

ABSTRACT

BACKGROUND: Skin cancer diagnostics is challenging, and mastery requires extended periods of dedicated practice. OBJECTIVE: The aim of the study was to determine if self-paced pattern recognition training in skin cancer diagnostics with clinical and dermoscopic images of skin lesions using a large-scale interactive image repository (LIIR) with patient cases improves primary care physicians' (PCPs') diagnostic skills and confidence. METHODS: A total of 115 PCPs were randomized (allocation ratio 3:1) to receive or not receive self-paced pattern recognition training in skin cancer diagnostics using an LIIR with patient cases through a quiz-based smartphone app during an 8-day period. The participants' ability to diagnose skin cancer was evaluated using a 12-item multiple-choice questionnaire prior to and 8 days after the educational intervention period. Their thoughts on the use of dermoscopy were assessed using a study-specific questionnaire. A learning curve was calculated through analysis of data from the mobile app. RESULTS: On average, participants in the intervention group spent 2 hours 26 minutes quizzing digital patient cases and 41 minutes reading the educational material. They had an average preintervention multiple-choice questionnaire score of 52.0% correct answers, which increased to 66.4% on the postintervention test, a statistically significant improvement of 14.3 percentage points (P<.001; 95% CI 9.8-18.9) in the intention-to-treat analysis. Analysis of participants who received the intervention per protocol (500 patient cases in 8 days) showed an average increase of 16.7 percentage points (P<.001; 95% CI 11.3-22.0), from 53.9% to 70.5%. Their overall ability to correctly recognize malignant lesions in the LIIR patient cases improved over the intervention period by 6.6 percentage points, from 67.1% (95% CI 65.2-69.3) to 73.7% (95% CI 72.5-75.0), and their ability to set the correct diagnosis improved by 10.5 percentage points, from 42.5% (95% CI 40.2-44.8) to 53.0% (95% CI 51.3-54.9). The diagnostic confidence of participants in the intervention group increased by 32.9%, from 1.6 to 2.1 on a scale from 1 to 4 (P<.001). Participants in the control group did not improve their postintervention score or their diagnostic confidence during the same period. CONCLUSIONS: Self-paced pattern recognition training in skin cancer diagnostics through the use of a digital LIIR with patient cases delivered by a quiz-based mobile app improves the diagnostic accuracy of PCPs. TRIAL REGISTRATION: ClinicalTrials.gov NCT05661370; https://classic.clinicaltrials.gov/ct2/show/NCT05661370.

18.
Ann Surg Open ; 4(1): e271, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37600868
19.
Surg Endosc ; 37(8): 6588-6601, 2023 08.
Article in English | MEDLINE | ID: mdl-37389741

ABSTRACT

BACKGROUND: The increasing use of robot-assisted surgery (RAS) has led to the need for new methods of assessing whether new surgeons are qualified to perform RAS, without the resource-demanding process of having expert surgeons do the assessment. Computer-based automation and artificial intelligence (AI) are seen as promising alternatives to expert-based surgical assessment. However, no standard protocols or methods for preparing data and implementing AI are available for clinicians. This may be among the reasons that AI has not yet been widely adopted in the clinical setting. METHOD: We tested our method on porcine models with both the da Vinci Si and the da Vinci Xi. We sought to capture raw video data from the surgical robots and 3D movement data from the surgeons, and we prepared the data for use in AI through a structured guide with the following steps: 'Capturing image data from the surgical robot', 'Extracting event data', 'Capturing movement data of the surgeon', and 'Annotation of image data'. RESULTS: Fifteen participants (11 novices and 4 experienced surgeons) performed 10 different intraabdominal RAS procedures. Using this method, we captured 188 videos (94 from the surgical robot and 94 corresponding movement videos of the surgeons' arms and hands). Event data, movement data, and labels were extracted from the raw material and prepared for use in AI. CONCLUSION: With the described methods, we could collect, prepare, and annotate images, events, and motion data from surgical robotic systems in preparation for their use in AI.


Subject(s)
Robotic Surgical Procedures , Surgeons , Humans , Animals , Swine , Robotic Surgical Procedures/methods , Artificial Intelligence , Machine Learning , Motion (Physics)
20.
Adv Health Sci Educ Theory Pract ; 28(3): 659-664, 2023 08.
Article in English | MEDLINE | ID: mdl-37335338

ABSTRACT

This editorial examines the implications of artificial intelligence (AI), specifically large language models (LLMs) such as ChatGPT, on the authorship and authority of academic papers, and the potential ethical concerns and challenges in health professions education (HPE).


Subject(s)
Artificial Intelligence , Fellowships and Scholarships , Humans , Authorship , Language , Health Occupations