Minimal data requirement for realistic endoscopic image generation with Stable Diffusion.
Int J Comput Assist Radiol Surg; 19(3): 531-539, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37934401
PURPOSE: Computer-assisted surgical systems provide supporting information to the surgeon, which can improve the execution and overall outcome of a procedure. These systems rely on deep learning models trained on data that are complex and challenging to annotate. Generating synthetic data can overcome this limitation, but the domain gap between real and synthetic data must be reduced.

METHODS: We propose a method for image-to-image translation based on a Stable Diffusion model, which generates realistic images starting from synthetic data. Compared to previous work, the proposed method is better suited for clinical application because it requires a much smaller amount of input data and allows finer control over the generation of details through different variants of supporting control networks.

RESULTS: The proposed method is applied in the context of laparoscopic cholecystectomy, using synthetic and real data from public datasets. It achieves a mean Intersection over Union of 69.76%, significantly improving on the baseline of 42.21%.

CONCLUSIONS: The proposed method for translating synthetic images into images with realistic characteristics will enable the training of deep learning models that generalize well to real-world contexts, thereby improving computer-assisted intervention guidance systems.
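The abstract describes an image-to-image translation approach built on Stable Diffusion with supporting control networks, but gives no implementation details. The following is a minimal illustrative sketch of that general technique, assuming a ControlNet-conditioned img2img pipeline from the Hugging Face diffusers library and publicly available checkpoints (runwayml/stable-diffusion-v1-5, lllyasviel/sd-controlnet-seg); the checkpoints, prompt, and parameters are assumptions for illustration, not the authors' actual model, data, or training setup.

```python
# Minimal sketch: ControlNet-conditioned image-to-image translation with
# Stable Diffusion via the Hugging Face `diffusers` library.
# NOTE: all checkpoints, file names, and parameters below are illustrative
# assumptions, not the configuration used in the paper.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

# A ControlNet conditioned on segmentation maps gives finer control over
# where structures appear in the generated image.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-seg", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Synthetic laparoscopic frame (the image to translate) and its semantic
# segmentation map (the conditioning signal); hypothetical file names.
synthetic_frame = Image.open("synthetic_frame.png").convert("RGB")
segmentation_map = Image.open("segmentation_map.png").convert("RGB")

# `strength` controls how far the output may drift from the synthetic input:
# lower values preserve geometry, higher values add more realistic texture.
result = pipe(
    prompt="a realistic laparoscopic cholecystectomy image",
    image=synthetic_frame,
    control_image=segmentation_map,
    strength=0.6,
    num_inference_steps=30,
    guidance_scale=7.5,
)
result.images[0].save("translated_frame.png")
```

Frames translated in this way could then serve as additional training data for a downstream segmentation model, which is the setting in which a mean Intersection over Union such as the one reported in the abstract would typically be measured.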
Database: MEDLINE
Main subject: Image Processing, Computer-Assisted / Endoscopy
Limits: Humans
Language: English
Publication year: 2024
Document type: Article