Convolutional-de-convolutional neural networks for recognition of surgical workflow.
Chen, Yu-Wen; Zhang, Ju; Wang, Peng; Hu, Zheng-Yu; Zhong, Kun-Hua.
Affiliations
  • Chen YW; Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing, China.
  • Zhang J; Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing, China.
  • Wang P; Southwest Hospital, Third Military Medical University, Chongqing, China.
  • Hu ZY; Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing, China.
  • Zhong KH; Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing, China.
Front Comput Neurosci; 16: 998096, 2022.
Article in English | MEDLINE | ID: mdl-36157842
ABSTRACT
Computer-assisted surgery (CAS) occupies an important position in modern surgery and continues to drive progress in methodology and technology. In recent years, many computer vision-based methods have been applied to surgical workflow recognition tasks. Training such models requires large amounts of annotated data, but annotating surgical data demands expert knowledge and is therefore difficult and time-consuming. In this paper, we focus on this data deficiency and propose a knowledge transfer learning method based on artificial neural networks to compensate for the small amount of labeled training data. Specifically, we propose an unsupervised method for pre-training a Convolutional-De-Convolutional (CDC) neural network to sequence surgical workflow frames, performing neural convolution in space (for semantic abstraction) and neural de-convolution in time (for frame-level resolution) simultaneously. Through transfer learning, we then fine-tune only the CDC network to classify the surgical phase. Experiments validating the model show that it effectively extracts surgical features and determines the surgical phase. The accuracy (Acc), recall, and precision (Pres) of our model reached 91.4%, 78.9%, and 82.5%, respectively.
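The abstract's core idea, convolving in space while de-convolving in time so that per-frame phase predictions are recovered, can be illustrated with a short sketch. The following PyTorch code is a minimal, hypothetical rendering of a CDC-style block, not the authors' exact architecture; the channel counts, kernel shapes, and the 7-phase output head are illustrative assumptions.

    # Hypothetical sketch of a CDC-style block: a 3D convolution that
    # downsamples spatially while a transposed convolution upsamples
    # temporally, recovering per-frame resolution for phase prediction.
    import torch
    import torch.nn as nn

    class CDCBlock(nn.Module):
        """Convolve in space, de-convolve in time (illustrative only)."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            # Spatial convolution: halves H and W, leaves T unchanged.
            self.spatial = nn.Conv3d(in_ch, out_ch,
                                     kernel_size=(1, 3, 3),
                                     stride=(1, 2, 2),
                                     padding=(0, 1, 1))
            # Temporal de-convolution: doubles T, leaves H and W unchanged.
            self.temporal = nn.ConvTranspose3d(out_ch, out_ch,
                                               kernel_size=(4, 1, 1),
                                               stride=(2, 1, 1),
                                               padding=(1, 0, 0))
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):  # x: (N, C, T, H, W)
            x = self.act(self.spatial(x))
            return self.act(self.temporal(x))

    class CDCPhaseClassifier(nn.Module):
        """Toy head mapping CDC features to per-frame phase logits."""
        def __init__(self, num_phases=7):  # 7 phases is an assumption
            super().__init__()
            self.block = CDCBlock(3, 64)
            self.pool = nn.AdaptiveAvgPool3d((None, 1, 1))  # collapse space
            self.fc = nn.Linear(64, num_phases)

        def forward(self, clip):  # clip: (N, 3, T, H, W)
            feat = self.pool(self.block(clip))               # (N, 64, 2T, 1, 1)
            feat = feat.squeeze(-1).squeeze(-1).transpose(1, 2)  # (N, 2T, 64)
            return self.fc(feat)                             # per-frame logits

    if __name__ == "__main__":
        video = torch.randn(2, 3, 8, 64, 64)   # 2 clips of 8 frames each
        logits = CDCPhaseClassifier()(video)
        print(logits.shape)                    # torch.Size([2, 16, 7])

On a dummy clip of 8 frames, the block halves the spatial resolution and doubles the temporal one, so the head emits phase logits at 16 up-sampled time steps, matching the abstract's description of semantic abstraction in space with frame-level resolution in time.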
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Language: English Journal: Front Comput Neurosci Year: 2022 Document type: Article Country of affiliation: China
