Endoscopic scene labelling and augmentation using intraoperative pulsatile motion and colour appearance cues with preoperative anatomical priors.
Nosrati, Masoud S; Amir-Khalili, Alborz; Peyrat, Jean-Marc; Abinahed, Julien; Al-Alao, Osama; Al-Ansari, Abdulla; Abugharbieh, Rafeef; Hamarneh, Ghassan.
Affiliations
  • Nosrati MS; Medical Image Analysis Lab, Simon Fraser University, Burnaby, BC, V5A 1S6, Canada. smasoudn@gmail.com.
  • Amir-Khalili A; BiSICL, University of British Columbia, Vancouver, BC, Canada.
  • Peyrat JM; Qatar Robotic Surgery Centre, Qatar Science and Technology Park, Doha, Qatar.
  • Abinahed J; Qatar Robotic Surgery Centre, Qatar Science and Technology Park, Doha, Qatar.
  • Al-Alao O; Urology Department, Hamad General Hospital, Hamad Medical Corporation, Doha, Qatar.
  • Al-Ansari A; Urology Department, Hamad General Hospital, Hamad Medical Corporation, Doha, Qatar.
  • Abugharbieh R; BiSICL, University of British Columbia, Vancouver, BC, Canada.
  • Hamarneh G; Medical Image Analysis Lab, Simon Fraser University, Burnaby, BC, V5A 1S6, Canada.
Int J Comput Assist Radiol Surg ; 11(8): 1409-18, 2016 Aug.
Article in English | MEDLINE | ID: mdl-26872810
PURPOSE: Despite great advances in medical image segmentation, the accurate and automatic segmentation of endoscopic scenes remains a challenging problem. Two important aspects have to be considered when segmenting an endoscopic scene: (1) noise and clutter due to light reflection and smoke from cutting tissue, and (2) structure occlusion (e.g. vessels occluded by fat, or endophytic tumours occluded by healthy kidney tissue).

METHODS: In this paper, we propose a variational technique to augment a surgeon's endoscopic view by segmenting visible as well as occluded structures in the intraoperative endoscopic view. Our method estimates the 3D pose and deformation of anatomical structures segmented from 3D preoperative data in order to align to and segment corresponding structures in 2D intraoperative endoscopic views. Our preoperative-to-intraoperative alignment is driven by, first, spatio-temporal signal-processing-based vessel pulsation cues and, second, machine-learning-based analysis of colour and textural visual cues. To our knowledge, this is the first work that utilizes vascular pulsation cues for guiding preoperative-to-intraoperative registration. In addition, we incorporate a tissue-specific (i.e. heterogeneous), physically based deformation model into our framework to cope with the non-rigid deformation of structures that occurs during the intervention.

RESULTS: We validated the utility of our technique on fifteen challenging clinical cases, achieving a 45 % improvement in accuracy over the state-of-the-art method.

CONCLUSIONS: A new technique for localizing both visible and occluded structures in an endoscopic view was proposed and tested. This method leverages preoperative data, as a source of patient-specific prior knowledge, as well as vasculature pulsation and endoscopic visual cues, in order to accurately segment the highly noisy and cluttered environment of an endoscopic video. Our results on in vivo clinical cases of partial nephrectomy illustrate the potential of the proposed framework for augmented reality applications in minimally invasive surgeries.
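The abstract's key registration cue is vascular pulsation detected by spatio-temporal signal processing of the endoscopic video. The paper does not give implementation details here, but the general idea of such a cue can be sketched as per-pixel temporal filtering: pixels over vessels show intensity oscillations at the cardiac frequency (~0.8-2 Hz), while other tissue does not. The sketch below is a minimal, hypothetical illustration of that principle (band limits, array shapes, and the synthetic demo are assumptions, not the authors' method):

```python
import numpy as np

def pulsation_map(frames, fps, f_lo=0.8, f_hi=2.0):
    """Per-pixel spectral energy in an assumed cardiac band (~50-120 bpm).

    frames: array of shape (T, H, W), grayscale intensities over time.
    Returns an (H, W) map; high values suggest pulsatile (vascular) tissue.
    """
    # Remove the temporal mean so static appearance does not dominate.
    x = frames - frames.mean(axis=0, keepdims=True)
    spec = np.abs(np.fft.rfft(x, axis=0))          # (T//2 + 1, H, W)
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spec[band].sum(axis=0)                  # energy in the cardiac band

# Synthetic demo: one patch pulses at 1.2 Hz (72 bpm); the rest is static noise.
fps, T = 30, 300
t = np.arange(T) / fps
rng = np.random.default_rng(0)
frames = 0.01 * rng.standard_normal((T, 32, 32))
frames[:, 8:16, 8:16] += 0.5 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]

m = pulsation_map(frames, fps)
print(m[10, 10] > 10 * m[25, 25])  # the pulsatile patch scores far higher
```

A map like this could serve as a soft likelihood term that attracts preoperatively segmented vessels toward pulsatile image regions during registration; the paper's actual formulation is variational and considerably richer.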
Full text: 1 | Collection: 01-international | Database: MEDLINE | Main subjects: Imaging, Three-Dimensional / Endoscopy | Limits: Humans | Language: English | Journal: Int J Comput Assist Radiol Surg | Journal subject: Radiology | Year: 2016 | Document type: Article | Country of affiliation: Canada