1.
J Robot Surg; 18(1): 102, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38427094

ABSTRACT

Artificial intelligence (AI) is revolutionizing nearly every aspect of modern life. In the medical field, robotic surgery is among the sectors seeing the most innovative and impactful advancements. In this narrative review, we outline recent contributions of AI to the field of robotic surgery, with a particular focus on intraoperative enhancement. AI modeling gives surgeons advanced intraoperative metrics such as force and tactile measurements, enhances the detection of positive surgical margins, and even enables the complete automation of certain steps in surgical procedures. AI is also revolutionizing the field of surgical education: AI models applied to intraoperative surgical video feeds and instrument kinematics data enable automated skills assessments, and AI shows promise for generating and delivering highly specialized intraoperative surgical feedback to training surgeons. Although the adoption and integration of AI in robotic surgery show promise, they raise important, complex ethical questions; frameworks for thinking through the ethical dilemmas raised by AI are outlined in this review. AI enhancement of robotic surgery is among the most groundbreaking research happening today, and the studies outlined in this review represent some of the most exciting innovations of recent years.
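To make the kinematics-based assessment concrete, here is a minimal, hypothetical Python sketch: it extracts classic motion-efficiency metrics (path length, mean speed, mean jerk) from instrument tip trajectories and fits a simple classifier on labeled trials. The feature set, sampling rate, and model are illustrative assumptions, not the method of any study cited in the review.

```python
# A minimal, hypothetical sketch of kinematics-based skill scoring, assuming
# instrument tip trajectories sampled at a fixed rate. The feature set
# (path length, speed, jerk) and the classifier are illustrative stand-ins,
# not the method of any study cited in this review.
import numpy as np
from sklearn.linear_model import LogisticRegression

def kinematic_features(positions: np.ndarray, hz: float = 30.0) -> np.ndarray:
    """positions: (T, 3) array of instrument tip coordinates in mm."""
    dt = 1.0 / hz
    vel = np.gradient(positions, dt, axis=0)   # mm/s
    acc = np.gradient(vel, dt, axis=0)         # mm/s^2
    jerk = np.gradient(acc, dt, axis=0)        # mm/s^3
    path_length = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))
    mean_speed = np.linalg.norm(vel, axis=1).mean()
    mean_jerk = np.linalg.norm(jerk, axis=1).mean()  # lower ~ smoother motion
    return np.array([path_length, mean_speed, mean_jerk])

# Toy usage: synthetic random-walk trajectories standing in for recorded
# trials, labeled 1 = expert, 0 = novice.
rng = np.random.default_rng(0)
trials = [rng.standard_normal((300, 3)).cumsum(axis=0) for _ in range(20)]
labels = np.array([0] * 10 + [1] * 10)
X = np.stack([kinematic_features(t) for t in trials])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X[:3]))
```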


Subjects
Artificial Intelligence, Robotic Surgical Procedures, Humans, Automation, Benchmarking, Robotic Surgical Procedures/methods, Surgeons
2.
J Surg Educ; 81(3): 422-430, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38290967

ABSTRACT

OBJECTIVE: Surgical skill assessment tools such as the End-to-End Assessment of Suturing Expertise (EASE) can differentiate a surgeon's experience level. In this simulation-based study, we define a competency benchmark for intraoperative robotic suturing using EASE as a validated measure of performance.

DESIGN: Participants completed a dry-lab vesicourethral anastomosis (VUA) exercise. Each video was independently scored by 2 trained, blinded reviewers using EASE. Inter-rater reliability was measured with prevalence-adjusted bias-adjusted kappa (PABAK) using 2 example videos. All videos were reviewed by an expert surgeon, who determined whether the suturing skills exhibited were at the competency level expected for residency graduation (pass or fail). The Contrasting Groups (CG) method was then used to set a pass/fail score at the intercept of the pass and fail cohorts' EASE score distributions.

SETTING: Keck School of Medicine, University of Southern California.

PARTICIPANTS: Twenty-six participants: 8 medical students, 8 junior residents (PGY 1-2), 7 senior residents (PGY 3-5), and 3 attending urologists.

RESULTS: After 1 round of consensus-building, average PABAK across EASE subskills was 0.90 (range 0.67-1.0). The CG method produced a competency benchmark EASE score of >35/39, with a pass rate of 10/26 (38%); 27% were deemed competent by expert evaluation. False positives and false negatives were defined as medical students who passed and attendings who failed the assessment, respectively. This pass/fail score produced no false positives or negatives, and fewer junior residents than senior residents were considered competent by both the expert and the CG benchmark.

CONCLUSIONS: Using an absolute standard-setting method, a competency score was set to identify trainees who can competently execute a standardized dry-lab robotic suturing exercise. This standard can be used for high-stakes decisions regarding a trainee's technical readiness for independent practice. Future work includes validation of this standard in the clinical environment through correlation with clinical outcomes.
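The two statistics this abstract leans on are easy to show concretely. Below is a hedged Python sketch, assuming binary ratings and roughly normal cohort score distributions: for binary ratings, PABAK reduces to 2 × observed agreement − 1, and a contrasting-groups cut score can be taken where normal densities fitted to the pass and fail cohorts' scores intersect. All data values are invented; this is not the study's code.

```python
# Hedged sketch of PABAK (binary-rating case) and a contrasting-groups cut
# score. Assumes roughly normal score distributions in each cohort; the
# EASE-like scores below are invented for illustration.
import numpy as np

def pabak(ratings_a: np.ndarray, ratings_b: np.ndarray) -> float:
    """For binary ratings, PABAK reduces to 2 * observed agreement - 1."""
    observed_agreement = np.mean(ratings_a == ratings_b)
    return 2.0 * observed_agreement - 1.0

def contrasting_groups_cutoff(pass_scores, fail_scores) -> float:
    """Intersection of two normal densities fitted to each cohort's scores."""
    m1, s1 = np.mean(fail_scores), np.std(fail_scores, ddof=1)
    m2, s2 = np.mean(pass_scores), np.std(pass_scores, ddof=1)
    # Setting N(x; m1, s1) = N(x; m2, s2) and taking logs gives a quadratic.
    a = 1 / s1**2 - 1 / s2**2
    b = 2 * (m2 / s2**2 - m1 / s1**2)
    c = m1**2 / s1**2 - m2**2 / s2**2 + 2 * np.log(s1 / s2)
    roots = np.roots([a, b, c])
    # Keep the root lying between the two cohort means.
    return float(next(r for r in roots if min(m1, m2) <= r <= max(m1, m2)))

# Toy usage with invented EASE-like scores (maximum 39).
fail = np.array([24, 27, 29, 30, 31, 32], dtype=float)
passed = np.array([34, 35, 36, 37, 38, 39], dtype=float)
print(contrasting_groups_cutoff(passed, fail))                # cut score near 33
print(pabak(np.array([1, 1, 0, 1]), np.array([1, 1, 0, 0])))  # 0.5
```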


Subjects
Internship and Residency, Robotic Surgical Procedures, Robotics, Surgeons, Humans, Robotic Surgical Procedures/education, Reproducibility of Results, Clinical Competence
3.
Curr Opin Urol; 34(1): 37-42, 2024 Jan 01.
Article in English | MEDLINE | ID: mdl-37909886

ABSTRACT

PURPOSE OF REVIEW: This review outlines recent innovations in simulation technology as applied to urology. It is essential for the next generation of urologists to attain a solid foundation of technical and nontechnical skills, and simulation technology provides a variety of safe, controlled environments in which to acquire this baseline knowledge.

RECENT FINDINGS: With a focus on urology, this review first outlines the evidence supporting surgical simulation, then discusses the strides being made in the development of 3D-printed models for surgical skill training and preoperative planning, virtual reality models for different urologic procedures, surgical skill assessment in simulation, and the integration of simulation into urology residency curricula.

SUMMARY: Simulation remains an integral part of the journey toward mastery of the skills necessary to become an expert urologist. Clinicians and researchers should consider how to further incorporate simulation technology into residency training to help future generations of urologists throughout their careers.


Subjects
Internship and Residency, Simulation Training, Urology, Humans, Urology/education, Clinical Competence, Simulation Training/methods, Computer Simulation, Urologic Surgical Procedures
5.
Urol Pract; 10(5): 436-443, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37410015

ABSTRACT

INTRODUCTION: This study assessed ChatGPT's ability to generate readable, accurate, and clear layperson summaries of urological studies, and compared ChatGPT-generated summaries with the original abstracts and author-written patient summaries to determine its effectiveness as a potential means of making medical literature accessible to the public.

METHODS: Articles from the top 5 ranked urology journals were selected. A ChatGPT prompt was developed following guidelines to maximize readability, accuracy, and clarity while minimizing variability. Readability scores and grade-level indicators were calculated for the ChatGPT summaries, original abstracts, and patient summaries. Two physicians independently rated the accuracy and clarity of the ChatGPT-generated layperson summaries. Statistical analyses were conducted to compare readability scores, and Cohen's κ coefficient was used to assess inter-rater reliability for the correctness and clarity evaluations.

RESULTS: A total of 256 journal articles were included. The ChatGPT-generated summaries were produced in an average of 17.5 (SD 15.0) seconds. The readability scores of the ChatGPT-generated summaries were significantly better than those of the original abstracts: Global Readability Score 54.8 (12.3) vs 29.8 (18.5), Flesch-Kincaid Reading Ease 54.8 (12.3) vs 29.8 (18.5), Flesch-Kincaid Grade Level 10.4 (2.2) vs 13.5 (4.0), Gunning Fog Score 12.9 (2.6) vs 16.6 (4.1), SMOG Index 9.1 (2.0) vs 12.0 (3.0), Coleman-Liau Index 12.9 (2.1) vs 14.9 (3.7), and Automated Readability Index 11.1 (2.5) vs 12.0 (5.7); P < .0001 for all except the Automated Readability Index (P = .037). The correctness rate of ChatGPT outputs was >85% across all categories assessed, with inter-rater agreement (Cohen's κ) between the 2 independent physician reviewers ranging from 0.76 to 0.95.

CONCLUSIONS: ChatGPT can create accurate layperson summaries of scientific abstracts, and well-crafted prompts enhance user-friendliness. Although the summaries are satisfactory, expert verification remains necessary for accuracy.
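The readability indices and agreement statistic reported here are standard and reproducible. Below is a hedged Python sketch using the textstat package for the indices named above and scikit-learn's cohen_kappa_score for reviewer agreement; the sample summary and reviewer ratings are invented, and this is not the authors' pipeline.

```python
# Hedged sketch of the evaluation pipeline: readability indices via the
# textstat package, inter-rater agreement via Cohen's kappa. The sample
# summary and reviewer ratings are invented; this is not the authors' code.
import textstat
from sklearn.metrics import cohen_kappa_score

summary = (
    "This study looked at a computer program that explains medical research "
    "in plain language. The program wrote short summaries of journal "
    "articles. Doctors then checked whether the summaries were correct."
)

scores = {
    "Flesch Reading Ease": textstat.flesch_reading_ease(summary),    # higher = easier
    "Flesch-Kincaid Grade": textstat.flesch_kincaid_grade(summary),  # US grade level
    "Gunning Fog": textstat.gunning_fog(summary),
    "SMOG Index": textstat.smog_index(summary),
    "Coleman-Liau Index": textstat.coleman_liau_index(summary),
    "Automated Readability Index": textstat.automated_readability_index(summary),
}
for name, value in scores.items():
    print(f"{name}: {value:.1f}")

# Agreement between two reviewers rating correctness (1 = correct, 0 = not).
reviewer_1 = [1, 1, 0, 1, 1, 0, 1, 1]
reviewer_2 = [1, 1, 0, 1, 0, 0, 1, 1]
print("Cohen's kappa:", cohen_kappa_score(reviewer_1, reviewer_2))
```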


Subjects
Health Literacy, Urology, Humans, Reproducibility of Results, Comprehension, Language