Automated tool detection with deep learning for monitoring kinematics and eye-hand coordination in microsurgery.
Koskinen, Jani; Torkamani-Azar, Mastaneh; Hussein, Ahmed; Huotarinen, Antti; Bednarik, Roman.
Affiliations
  • Koskinen J; School of Computing, University of Eastern Finland, Länsikatu 15, Joensuu, 80100, Pohjois-Karjala, Finland. Electronic address: jani.koskinen@uef.fi.
  • Torkamani-Azar M; School of Computing, University of Eastern Finland, Länsikatu 15, Joensuu, 80100, Pohjois-Karjala, Finland.
  • Hussein A; Microsurgery Center, Kuopio University Hospital, Kuopio, 70211, Pohjois-Savo, Finland; Department of Neurosurgery, Faculty of Medicine, Assiut University, Assiut, 71111, Egypt.
  • Huotarinen A; Microsurgery Center, Kuopio University Hospital, Kuopio, 70211, Pohjois-Savo, Finland; Department of Neurosurgery, Institute of Clinical Medicine, Kuopio University Hospital, Kuopio, 70211, Pohjois-Savo, Finland.
  • Bednarik R; School of Computing, University of Eastern Finland, Länsikatu 15, Joensuu, 80100, Pohjois-Karjala, Finland.
Comput Biol Med; 141: 105121, 2022 Feb.
Article in En | MEDLINE | ID: mdl-34968859
ABSTRACT
In microsurgical procedures, surgeons use micro-instruments under high magnification to handle delicate tissues. These procedures require highly skilled attentional and motor control for planning and implementing eye-hand coordination strategies. Eye-hand coordination in surgery has mostly been studied in open, laparoscopic, and robot-assisted surgeries, because no tools have been available for automatic instrument detection in microsurgery. We introduce and investigate a method for simultaneous detection and processing of micro-instruments and gaze during microsurgery. We train and evaluate a convolutional neural network for detecting 17 microsurgical tools on a dataset of 7500 frames from 20 videos of simulated and real surgical procedures. Model evaluation yields a mean average precision at the 0.5 threshold (mAP@0.5) of 89.5-91.4% for validation and 69.7-73.2% for testing on partially unseen surgical settings, with an average inference speed of 39.90 ± 1.2 frames per second. While prior research has mostly evaluated surgical tool detection on homogeneous datasets with a limited number of tools, we demonstrate the feasibility of transfer learning and conclude that detectors that generalize reliably to new settings require data from several different surgical procedures. In a case study, we apply the detector together with a microscope eye tracker to investigate tool use and eye-hand coordination during an intracranial vessel dissection task. The results show that tool kinematics differentiate microsurgical actions. The gaze-to-microscissors distances are also smaller during dissection than during other actions, in which the surgeon has more space to maneuver. The presented detection pipeline provides the clinical and research communities with a valuable resource for automatic content extraction and objective skill assessment in various microsurgical environments.
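The record does not include the authors' pipeline code. As a minimal sketch of the gaze-to-instrument metric the case study describes, the snippet below computes per-frame Euclidean distances between a gaze sample and the centre of a detected tool's bounding box, assuming the detector emits boxes in frame pixel coordinates and the eye-tracker samples are already time-aligned to the video; all names (gaze_to_tool_distance, scissors_box) are illustrative, not from the paper.

```python
import numpy as np

def gaze_to_tool_distance(gaze_xy, tool_box):
    """Euclidean distance (pixels) from a gaze point to the centre
    of a detected tool's bounding box.

    gaze_xy  : (x, y) gaze coordinates in the microscope video frame
    tool_box : (x_min, y_min, x_max, y_max) detector output for one tool
    """
    cx = (tool_box[0] + tool_box[2]) / 2.0
    cy = (tool_box[1] + tool_box[3]) / 2.0
    return float(np.hypot(gaze_xy[0] - cx, gaze_xy[1] - cy))

# Hypothetical per-frame records: a micro-scissors detection plus the
# time-aligned gaze sample from the microscope eye tracker.
frames = [
    {"gaze": (612, 400), "scissors_box": (580, 360, 660, 470)},
    {"gaze": (300, 210), "scissors_box": (520, 340, 640, 450)},
]
distances = [gaze_to_tool_distance(f["gaze"], f["scissors_box"]) for f in frames]
print(np.mean(distances))  # mean gaze-to-microscissors distance over an action
```

Aggregating such distances per annotated action (e.g., dissection vs. other manoeuvres) would yield the kind of comparison reported in the abstract; box centres are one plausible reference point, and a tool-tip keypoint would be a finer-grained alternative.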
Full text: 1 Database: MEDLINE Main subject: Robotic Surgical Procedures / Deep Learning Study type: Diagnostic_studies Language: En Publication year: 2022 Document type: Article