Learned optical flow for intra-operative tracking of the retinal fundus.
Ravasio, Claudio S; Pissas, Theodoros; Bloch, Edward; Flores, Blanca; Jalali, Sepehr; Stoyanov, Danail; Cardoso, Jorge M; Da Cruz, Lyndon; Bergeles, Christos.
Affiliations
  • Ravasio CS; University College London, London, UK. claudio.ravasio@kcl.ac.uk.
  • Pissas T; King's College London, London, UK. claudio.ravasio@kcl.ac.uk.
  • Bloch E; University College London, London, UK.
  • Flores B; King's College London, London, UK.
  • Jalali S; Moorfields Eye Hospital NHS Foundation Trust, London, UK.
  • Stoyanov D; Moorfields Eye Hospital NHS Foundation Trust, London, UK.
  • Cardoso JM; University College London, London, UK.
  • Da Cruz L; University College London, London, UK.
  • Bergeles C; King's College London, London, UK.
Int J Comput Assist Radiol Surg; 15(5): 827-836, 2020 May.
Article in English | MEDLINE | ID: mdl-32323210
PURPOSE: Sustained delivery of regenerative retinal therapies by robotic systems requires intra-operative tracking of the retinal fundus. We propose a supervised deep convolutional neural network to densely predict semantic segmentation and optical flow of the retina as mutually supportive tasks, implicitly inpainting retinal flow information missing due to occlusion by surgical tools.

METHODS: As manual annotation of optical flow is infeasible, we propose a flexible algorithm for generation of large synthetic training datasets on the basis of given intra-operative retinal images. We evaluate optical flow estimation by tracking a grid and sparsely annotated ground truth points on a benchmark of challenging real intra-operative clips obtained from an extensive internally acquired dataset encompassing representative vitreoretinal surgical cases.

RESULTS: The U-Net-based network trained on the synthetic dataset is shown to generalise well to the benchmark of real surgical videos. When used to track retinal points of interest, our flow estimation outperforms variational baseline methods on clips containing tool motions which occlude the points of interest, as is routinely observed in intra-operatively recorded surgery videos.

CONCLUSIONS: The results indicate that complex synthetic training datasets can be used to specifically guide optical flow estimation. Our proposed algorithm therefore lays the foundation for a robust system which can assist with intra-operative tracking of moving surgical targets even when occluded.
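To make the general approach more concrete, the sketch below shows one way a U-Net-style network with two dense prediction heads (segmentation logits and a 2-channel flow field) could be wired up in PyTorch, together with a helper that advances sparse points of interest by sampling the predicted flow. This is not the authors' implementation: the layer sizes, the stacked-frame-pair input, the class `SegFlowUNet`, and the `track_points` helper are all illustrative assumptions.

```python
# Minimal sketch of a joint segmentation + optical flow network (assumed
# architecture, not the paper's); requires input height/width divisible by 4.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, as in a typical U-Net stage."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class SegFlowUNet(nn.Module):
    """Encoder-decoder with a segmentation head and a flow head (illustrative)."""

    def __init__(self, in_ch=6, n_classes=2):  # two stacked RGB frames -> 6 channels
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.enc3 = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.seg_head = nn.Conv2d(32, n_classes, 1)   # per-pixel class logits
        self.flow_head = nn.Conv2d(32, 2, 1)          # per-pixel (dx, dy) flow

    def forward(self, frame_pair):
        e1 = self.enc1(frame_pair)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.seg_head(d1), self.flow_head(d1)


def track_points(points_xy, flow):
    """Advance sparse points by bilinearly sampling a dense flow field.

    points_xy: (P, 2) pixel coordinates (x, y); flow: (1, 2, H, W) in pixels.
    """
    _, _, h, w = flow.shape
    # Normalise pixel coordinates to [-1, 1] as expected by grid_sample.
    norm = torch.stack([points_xy[:, 0] / (w - 1),
                        points_xy[:, 1] / (h - 1)], dim=1) * 2 - 1
    grid = norm.view(1, 1, -1, 2)
    sampled = F.grid_sample(flow, grid, mode="bilinear", align_corners=True)
    return points_xy + sampled[0, :, 0, :].T  # (P, 2) updated positions
```

Because the flow head predicts a value at every pixel, points can still be advanced when a surgical tool covers them, provided the network has learned (e.g. from synthetic data with simulated occlusions, as the abstract describes) to inpaint plausible retinal motion behind the tool.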

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Retina / Neural Networks, Computer / Deep Learning Study type: Guideline / Prognostic_studies Limit: Humans Language: English Journal: Int J Comput Assist Radiol Surg Publication year: 2020 Document type: Article
