A new generative approach for optical coherence tomography data scarcity: unpaired mutual conversion between scanning presets.
Gende, Mateo; de Moura, Joaquim; Novo, Jorge; Penedo, Manuel G; Ortega, Marcos.
Affiliation
  • Gende M; Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, Xubias de Arriba, 84, A Coruña, 15006, A Coruña, Spain.
  • de Moura J; Centro de Investigación CITIC, Universidade da Coruña, Campus de Elviña s/n, A Coruña, 15071, A Coruña, Spain.
  • Novo J; Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, Xubias de Arriba, 84, A Coruña, 15006, A Coruña, Spain. joaquim.demoura@udc.es.
  • Penedo MG; Centro de Investigación CITIC, Universidade da Coruña, Campus de Elviña s/n, A Coruña, 15071, A Coruña, Spain. joaquim.demoura@udc.es.
  • Ortega M; Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, Xubias de Arriba, 84, A Coruña, 15006, A Coruña, Spain.
Med Biol Eng Comput ; 61(5): 1093-1112, 2023 May.
Article in En | MEDLINE | ID: mdl-36680707
ABSTRACT
In optical coherence tomography (OCT), there is a trade-off between scanning time and image quality, leading to a scarcity of high-quality data. OCT platforms provide different scanning presets that produce visually distinct images, limiting their compatibility. In this work, a fully automatic methodology for the unpaired visual conversion between the two most prevalent scanning presets is proposed. Using contrastive unpaired translation generative adversarial architectures, low-quality images acquired with the faster Macular Cube preset can be converted to the visual style of the high-visibility Seven Lines scans and vice versa. This modifies the visual appearance of the OCT images generated by each preset while preserving natural tissue structure. The quality of the original and synthetic images was compared using the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE). The synthetic images achieved scores very similar to those of original images of their target preset. The generative models were validated in automatic and expert separability tests, demonstrating that they were able to replicate the genuine look of the original images. This methodology has the potential to create multi-preset datasets with which to train robust computer-aided diagnosis systems, exposing them to the visual features of the different presets they may encounter in real clinical scenarios without having to obtain additional data.
Graphical Abstract: Unpaired mutual conversion between scanning presets. Two generative adversarial models are trained for the conversion of OCT images into images of another scanning preset, replicating the visual features that characterise said preset.
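The workflow the abstract describes can be illustrated with a short sketch: a pre-trained unpaired-translation generator maps a Macular Cube B-scan to the Seven Lines visual style, and a no-reference metric such as BRISQUE compares perceived quality. This is a minimal, hypothetical example, not the authors' released code; the generator file name, input size, and the use of the `piq` package's BRISQUE implementation are all assumptions.

```python
# Hypothetical sketch: style conversion of an OCT B-scan between scanning
# presets, followed by a BRISQUE quality check. File names and sizes are
# illustrative only.
import torch
import piq  # PyTorch Image Quality; provides a BRISQUE implementation (assumed dependency)
from torchvision import transforms
from PIL import Image

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load a generator trained (e.g. with a CUT-style contrastive unpaired
# translation objective) to map Macular Cube scans to the Seven Lines style.
generator = torch.jit.load("cube_to_seven_lines_generator.pt").to(device).eval()

to_tensor = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.Resize((256, 512)),   # illustrative B-scan resolution
    transforms.ToTensor(),           # scales pixel values to [0, 1]
])

cube_scan = to_tensor(Image.open("macular_cube_bscan.png")).unsqueeze(0).to(device)

with torch.no_grad():
    synthetic_seven_lines = generator(cube_scan).clamp(0.0, 1.0)

# BRISQUE is a no-reference metric: lower scores indicate better perceived
# quality, so synthetic images can be compared against originals of each preset.
score_original = piq.brisque(cube_scan, data_range=1.0)
score_synthetic = piq.brisque(synthetic_seven_lines, data_range=1.0)
print(f"BRISQUE original (Macular Cube): {score_original.item():.2f}")
print(f"BRISQUE synthetic (Seven Lines): {score_synthetic.item():.2f}")
```

In the paper's evaluation protocol, such scores for synthetic images are compared against the score distribution of genuine images of the target preset, alongside automatic and expert separability tests.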
Full text: 1 | Database: MEDLINE | Main subject: Computer-Aided Diagnosis / Optical Coherence Tomography | Language: En | Publication year: 2023 | Document type: Article