Results 1 - 2 of 2
1.
JAMA Surg ; 159(2): 185-192, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38055227

ABSTRACT

Objective: To overcome limitations of open surgery artificial intelligence (AI) models by curating the largest collection of annotated videos, and to leverage this AI-ready data set to develop a generalizable multitask AI model capable of real-time understanding of clinically significant surgical behaviors in prospectively collected real-world surgical videos.

Design, Setting, and Participants: The study team programmatically queried open surgery procedures on YouTube and manually annotated selected videos to create the AI-ready data set used to train a multitask AI model for 2 proof-of-concept studies: one generating surgical signatures that define the patterns of a given procedure, and the other identifying kinematics of hand motion that correlate with surgeon skill level and experience. The Annotated Videos of Open Surgery (AVOS) data set includes 1997 videos from 23 open-surgical procedure types uploaded to YouTube from 50 countries over the last 15 years. Prospectively recorded surgical videos were collected from a single tertiary care academic medical center. Deidentified videos of surgeons performing open surgical procedures were recorded and analyzed for correlation with surgical training.

Exposures: The multitask AI model was trained on the AI-ready video data set and then retrospectively applied to the prospectively collected video data set.

Main Outcomes and Measures: Analysis of open surgical videos in near real time, performance on AI-ready and prospectively collected videos, and quantification of surgeon skill.

Results: Using the AI-ready data set, the study team developed a multitask AI model capable of real-time understanding of surgical behaviors, the building blocks of procedural flow and surgeon skill, across space and time. Through principal component analysis, a single compound skill feature was identified, composed of a linear combination of kinematic hand attributes. This feature significantly discriminated between experienced surgeons and surgical trainees across 101 prospectively collected surgical videos of 14 operators. For each unit increase in the compound feature value, the odds of the operator being an experienced surgeon were 3.6 times higher (95% CI, 1.67-7.62; P = .001).

Conclusions and Relevance: In this observational study, the AVOS-trained model was applied to analyze prospectively collected open surgical videos and to identify kinematic descriptors of surgical skill related to efficiency of hand motion. The ability to provide AI-deduced insights into surgical structure and skill is valuable for optimizing surgical skill acquisition and ultimately improving surgical care.
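The abstract does not specify the analysis pipeline, but the stated approach (a compound skill feature derived by principal component analysis from kinematic hand attributes, then related to experience as an odds ratio) can be illustrated. The sketch below is a minimal, hypothetical reconstruction: the feature names, synthetic data, and model choices are assumptions, not the authors' method.

```python
# Hedged sketch: one plausible way to derive a single "compound skill feature"
# via PCA over kinematic hand attributes, then estimate an odds ratio for
# experience with logistic regression. All data and attribute names here are
# illustrative placeholders, not the study's actual variables.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical kinematic attributes per video, e.g., mean hand speed,
# path length, idle fraction, motion smoothness.
X = rng.normal(size=(101, 4))        # 101 videos x 4 kinematic attributes
y = rng.integers(0, 2, size=101)     # 1 = experienced surgeon, 0 = trainee

# Standardize, then take the first principal component as the single
# compound feature (a linear combination of the kinematic attributes).
X_std = StandardScaler().fit_transform(X)
compound = PCA(n_components=1).fit_transform(X_std)   # shape (101, 1)

# Logistic regression of experience on the compound feature:
# exp(coefficient) is the odds ratio per unit increase in the feature,
# the same quantity as the reported OR of 3.6 (here computed on toy data).
clf = LogisticRegression().fit(compound, y)
odds_ratio = np.exp(clf.coef_[0, 0])
print(f"Odds ratio per unit increase in compound feature: {odds_ratio:.2f}")
```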


Subjects
Artificial Intelligence, Machine Learning, Humans, Retrospective Studies, Video Recording/methods, Academic Medical Centers
2.
AMIA Annu Symp Proc ; 2020: 1373-1382, 2020.
Article in English | MEDLINE | ID: mdl-34025905

ABSTRACT

Open (non-laparoscopic) surgery represents the vast majority of all operating room procedures, but few tools exist to evaluate these techniques objectively at scale. Current efforts rely on visual assessment by human experts. We leverage advances in computer vision to introduce an automated approach to video analysis of surgical execution. A state-of-the-art convolutional neural network architecture for object detection was used to detect operating hands in open surgery videos. Automated assessment was expanded by combining model predictions with a fast object tracker to enable surgeon-specific hand tracking. To train our model, we used publicly available videos of open surgery from YouTube and annotated these with spatial bounding boxes of operating hands. Our model's spatial detections of operating hands significantly outperform the detections achieved using pre-existing hand-detection datasets, and allow for insights into intra-operative movement patterns and economy of motion.
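The abstract names the two components (per-frame hand detection plus a fast object tracker for surgeon-specific tracking) without specifying the tracker. The sketch below shows one simple way such an association step can work: greedy IoU matching that carries a stable ID for each hand across frames. It is a minimal stand-in under that assumption, not the paper's actual tracker, and the detector is abstracted away as a list of bounding boxes per frame.

```python
# Hedged sketch: associating per-frame hand detections (bounding boxes from
# any object detector) across frames with greedy IoU matching, so each hand
# keeps a stable track ID. A minimal illustration, not the published method.
from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def update_tracks(tracks: Dict[int, Box], detections: List[Box],
                  next_id: int, thresh: float = 0.3) -> Tuple[Dict[int, Box], int]:
    """Greedily reuse a track ID when a detection overlaps the track's last
    box; otherwise start a new track for the unmatched detection."""
    new_tracks: Dict[int, Box] = {}
    unmatched = list(detections)
    for tid, box in tracks.items():
        if not unmatched:
            break
        best = max(unmatched, key=lambda d: iou(box, d))
        if iou(box, best) >= thresh:
            new_tracks[tid] = best
            unmatched.remove(best)
    for det in unmatched:              # unmatched detections open new tracks
        new_tracks[next_id] = det
        next_id += 1
    return new_tracks, next_id

# Toy usage: two frames of detections for two hands drifting slightly;
# each hand keeps its ID (0 and 1) across frames.
frames = [
    [(10, 10, 60, 60), (200, 40, 250, 95)],
    [(14, 12, 64, 62), (205, 42, 255, 97)],
]
tracks: Dict[int, Box] = {}
next_id = 0
for dets in frames:
    tracks, next_id = update_tracks(tracks, dets, next_id)
    print(tracks)
```

Per-track box sequences from such a tracker are what make surgeon-specific kinematics (speed, path length, economy of motion) computable downstream.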


Subjects
Hands, Movement, Automation, Computers, Humans, Neural Networks (Computer), Surgeons