Advancing surgical VQA with scene graph knowledge.
Yuan, Kun; Kattel, Manasi; Lavanchy, Joël L; Navab, Nassir; Srivastav, Vinkle; Padoy, Nicolas.
Affiliation
  • Yuan K; University of Strasbourg, CNRS, INSERM, ICube, UMR7357, Strasbourg, France. kyuan@unistra.fr.
  • Kattel M; IHU, Strasbourg, France.
  • Lavanchy JL; CAMP, Technische Universität München, Munich, Germany.
  • Navab N; University of Strasbourg, CNRS, INSERM, ICube, UMR7357, Strasbourg, France.
  • Srivastav V; IHU, Strasbourg, France.
  • Padoy N; IHU, Strasbourg, France.
Int J Comput Assist Radiol Surg ; 19(7): 1409-1417, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38780829
ABSTRACT

PURPOSE:

The modern operating room is becoming increasingly complex, requiring innovative intra-operative support systems. While the focus of surgical data science has largely been on video analysis, integrating surgical computer vision with natural language capabilities is emerging as a necessity. Our work aims to advance visual question answering (VQA) in the surgical context with scene graph knowledge, addressing two main challenges in current surgical VQA systems: removing question-condition bias in the surgical VQA dataset and incorporating scene-aware reasoning in the surgical VQA model design.

METHODS:

First, we propose a surgical scene graph-based dataset, SSG-VQA, generated by employing segmentation and detection models on publicly available datasets. We build surgical scene graphs using spatial and action information of instruments and anatomies. These graphs are fed into a question engine, generating diverse QA pairs. We then propose SSG-VQA-Net, a novel surgical VQA model incorporating a lightweight Scene-embedded Interaction Module, which integrates geometric scene knowledge in the VQA model design by employing cross-attention between the textual and the scene features.
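The question engine described above can be illustrated with a minimal sketch. The triplet format, templates, and example labels below are assumptions for illustration only, not the dataset's actual schema: a scene graph is taken as a list of (subject, predicate, object) triplets over instruments and anatomies, and simple templates expand each triplet into several QA pairs.

```python
# Hypothetical sketch of a scene-graph question engine.
# Triplet format and question templates are illustrative assumptions,
# not the actual SSG-VQA generation pipeline.

def generate_qa_pairs(scene_graph):
    """Expand (subject, predicate, object) triplets into template-based QA pairs."""
    qa_pairs = []
    for subj, pred, obj in scene_graph:
        # Existence question about the instrument-anatomy interaction.
        qa_pairs.append((f"Is the {subj} {pred} the {obj}?", "yes"))
        # Object query conditioned on subject and action.
        qa_pairs.append((f"What is the {subj} {pred}?", obj))
        # Subject query conditioned on action and object.
        qa_pairs.append((f"Which instrument is {pred} the {obj}?", subj))
    return qa_pairs

# Toy scene graph for a laparoscopic scene (illustrative labels).
graph = [("grasper", "retracting", "gallbladder"),
         ("hook", "dissecting", "cystic duct")]
pairs = generate_qa_pairs(graph)
```

Because every question is derived from an explicit triplet, answers stay geometrically and semantically grounded in the scene rather than in dataset-level question statistics, which is one way to reduce question-condition bias.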

RESULTS:

Our comprehensive analysis shows that our SSG-VQA dataset provides a more complex, diverse, geometrically grounded, unbiased and surgical action-oriented dataset compared to existing surgical VQA datasets and SSG-VQA-Net outperforms existing methods across different question types and complexities. We highlight that the primary limitation in the current surgical VQA systems is the lack of scene knowledge to answer complex queries.
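The scene-aware reasoning described in Methods hinges on cross-attention between textual and scene features. The following is a minimal NumPy sketch of that idea, not the authors' actual Scene-embedded Interaction Module: question-token features act as queries over scene-object embeddings, so the text representation is re-weighted by geometric scene knowledge. All dimensions and weight matrices are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scene_text_cross_attention(text_feats, scene_feats, Wq, Wk, Wv):
    """Question tokens (queries) attend over scene-object embeddings (keys/values).

    Sketch of scaled dot-product cross-attention; the real module's
    architecture and dimensions are not specified here.
    """
    q = text_feats @ Wq                        # (T, d) query projections
    k = scene_feats @ Wk                       # (S, d) key projections
    v = scene_feats @ Wv                       # (S, d) value projections
    scores = q @ k.T / np.sqrt(q.shape[-1])    # (T, S) similarity
    attn = softmax(scores, axis=-1)            # rows sum to 1
    return attn @ v                            # (T, d) scene-conditioned text features

rng = np.random.default_rng(0)
d = 8
text = rng.normal(size=(5, d))                 # 5 question tokens (toy)
scene = rng.normal(size=(3, d))                # 3 scene-graph objects (toy)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = scene_text_cross_attention(text, scene, Wq, Wk, Wv)
```

The output keeps the text sequence length but mixes in scene-object information per token, which is the general mechanism by which geometric scene knowledge can be injected into a VQA model.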

CONCLUSION:

We present a novel surgical VQA dataset and model and show that results can be significantly improved by incorporating geometric scene features in the VQA model design. We point out that the bottleneck of current surgical visual question-answering models lies in learning the encoded representation rather than decoding the sequence. Our SSG-VQA dataset provides a diagnostic benchmark to test the scene understanding and reasoning capabilities of the model. The source code and the dataset will be made publicly available at https://github.com/CAMMA-public/SSG-VQA.

Full text: 1 Collections: 01-international Database: MEDLINE Limit: Humans Language: En Year of publication: 2024 Document type: Article
