ABSTRACT
The ability to accurately recognize elementary surgical gestures is a stepping stone to automated surgical assessment and surgical training. However, as the pool of subjects increases, variation in surgical techniques and unanticipated motion increase the challenge of creating robust statistical models of gestures. This paper examines the applicability of advanced modeling techniques from automated speech recognition to the problem of increasing variability in surgical motions. In particular, we demonstrate the effectiveness of automatically bootstrapped user-adaptive models on diverse data acquired from the da Vinci surgical robot.
Subject(s)
Computer Simulation , General Surgery/methods , Gestures , Humans , Models, Statistical , Robotics , Speech Recognition Software , United States , User-Computer Interface

ABSTRACT
This paper addresses automatic skill assessment in robotic minimally invasive surgery. Hidden Markov models (HMMs) are developed for individual surgical gestures (or surgemes) that comprise a typical bench-top surgical training task. It is known that such HMMs can be used to recognize and segment surgemes in previously unseen trials. Here, the topology of each surgeme HMM is designed in a data-driven manner, mixing trials from multiple surgeons with varying skill levels, resulting in HMM states that model skill-specific sub-gestures. The sequence of HMM states visited while performing a surgeme is therefore indicative of the surgeon's skill level. This expectation is confirmed by the average edit distance between the state-level "transcripts" of the same surgeme performed by two surgeons with different expertise levels. Some surgemes are further shown to be more indicative of skill than others.
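The skill comparison described above reduces to computing the edit (Levenshtein) distance between two sequences of HMM state labels. A minimal sketch of that computation is below; the state labels and transcripts are illustrative assumptions, not data from the paper.

```python
def edit_distance(a, b):
    """Levenshtein distance between two sequences, via dynamic programming.

    dp[i][j] holds the minimum number of insertions, deletions, and
    substitutions needed to turn a[:i] into b[:j].
    """
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j  # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[m][n]

# Hypothetical state-level transcripts of the same surgeme from two surgeons:
# the expert traverses a short, direct state path; the novice visits an
# extra skill-specific sub-gesture state.
expert_transcript = ["s1", "s2", "s4"]
novice_transcript = ["s1", "s3", "s3", "s4"]
print(edit_distance(expert_transcript, novice_transcript))  # 2
```

Averaging this distance over many paired trials of the same surgeme gives a per-surgeme skill-discrimination score, which is how some surgemes can be found more indicative of skill than others.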