Reciprocal assistance of intravascular imaging in three-dimensional stent reconstruction: Using cross-modal translation based on disentanglement representation.
Wu, Peng; Qiao, Yuchuan; Chu, Miao; Zhang, Su; Bai, Jingfeng; Gutierrez-Chico, Juan Luis; Tu, Shengxian.
Affiliation
  • Wu P; Biomedical Instrument Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China.
  • Qiao Y; Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China.
  • Chu M; Biomedical Instrument Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China.
  • Zhang S; Biomedical Instrument Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China.
  • Bai J; Biomedical Instrument Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China. Electronic address: jfbai@sjtu.edu.cn.
  • Gutierrez-Chico JL; Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China.
  • Tu S; Biomedical Instrument Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China. Electronic address: sxtu@sjtu.edu.cn.
Comput Med Imaging Graph; 104: 102166, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36586195
BACKGROUND: Accurate and efficient three-dimensional (3D) reconstruction of coronary stents in intravascular imaging, namely optical coherence tomography (OCT) and intravascular ultrasound (IVUS), is important for the optimization of complex percutaneous coronary interventions (PCI). Deep learning has been used to address this technical challenge. However, manual annotation of stents is laborious, especially in IVUS images. We therefore explored whether OCT and IVUS images can assist each other in stent 3D reconstruction when one modality lacks a labeled dataset.

METHODS: We first performed cross-modal translation between OCT and IVUS images, employing disentangled representation to generate synthetic images with good stent consistency. The reciprocal assistance of OCT and IVUS in stent 3D reconstruction was then evaluated by applying unsupervised and semi-supervised learning with the aid of the synthetic images. Stent consistency in the synthetic images and reciprocal effectiveness in stent 3D reconstruction were quantitatively assessed by F1-Score (FS) on two datasets: OCT-High Definition IVUS (HD IVUS) and OCT-Conventional IVUS (IVUS).

RESULTS: Disentangled representation achieved higher stent consistency in the synthetic images (OCT to HD IVUS: FS=0.789 vs 0.684; HD IVUS to OCT: FS=0.766 vs 0.682; OCT to IVUS: FS=0.806 vs 0.664; IVUS to OCT: FS=0.724 vs 0.673). For stent 3D reconstruction, the assistance from synthetic images markedly improved unsupervised adaptation across modalities (OCT to HD IVUS: FS=0.776 vs 0.109; HD IVUS to OCT: FS=0.826 vs 0.125; OCT to IVUS: FS=0.782 vs 0.068; IVUS to OCT: FS=0.815 vs 0.123) and improved performance in semi-supervised learning, especially when only limited labeled data were available.

CONCLUSION: OCT and IVUS intravascular images can provide reciprocal assistance to each other in stent 3D reconstruction through cross-modal translation, in which the stent consistency of the synthetic images is maintained by disentangled representation.
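For context, the F1-Score (FS) used throughout the evaluation can be computed directly from binary stent masks. The snippet below is a minimal illustrative sketch, not the authors' implementation; the mask shapes, the toy thresholds, and the function name `stent_f1_score` are assumptions introduced here for illustration only.

```python
import numpy as np


def stent_f1_score(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-8) -> float:
    """F1-Score between a predicted and a reference binary stent mask.

    Both inputs are 0/1 (or boolean) arrays of identical shape. For binary
    masks the F1-Score coincides with the Dice coefficient.
    """
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)

    tp = np.logical_and(pred, gt).sum()        # correctly detected stent pixels
    fp = np.logical_and(pred, ~gt).sum()       # spurious detections
    fn = np.logical_and(~pred, gt).sum()       # missed stent pixels

    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return float(2 * precision * recall / (precision + recall + eps))


if __name__ == "__main__":
    # Hypothetical example: compare an imperfect stent detection against a reference frame.
    rng = np.random.default_rng(0)
    gt = rng.random((256, 256)) > 0.95                         # sparse reference strut mask
    pred = np.logical_and(gt, rng.random((256, 256)) > 0.1)    # prediction missing ~10% of struts
    print(f"F1-Score: {stent_f1_score(pred, gt):.3f}")
```

The same per-frame score can be averaged over a pullback to compare, for example, stent consistency in synthetic images against detections on the original modality.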
Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Coronary Artery Disease / Percutaneous Coronary Intervention Limits: Humans Language: English Journal: Comput Med Imaging Graph Journal subject: Diagnostic Imaging Year: 2023 Document type: Article Country of affiliation: China Country of publication: United States