SuperWarp: Supervised Learning and Warping on U-Net for Invariant Subvoxel-Precise Registration.
Young, Sean I; Balbastre, Yaël; Dalca, Adrian V; Wells, William M; Iglesias, Juan Eugenio; Fischl, Bruce.
Affiliation
  • Young SI; Athinoula A. Martinos Center for Biomedical Imaging and Massachusetts Institute of Technology.
  • Balbastre Y; Athinoula A. Martinos Center for Biomedical Imaging and Massachusetts Institute of Technology.
  • Dalca AV; Athinoula A. Martinos Center for Biomedical Imaging and Massachusetts Institute of Technology.
  • Wells WM; Athinoula A. Martinos Center for Biomedical Imaging and Massachusetts Institute of Technology.
  • Iglesias JE; Athinoula A. Martinos Center for Biomedical Imaging and Massachusetts Institute of Technology.
  • Fischl B; Athinoula A. Martinos Center for Biomedical Imaging and Massachusetts Institute of Technology.
Biomed Image Regist (2022); 13386: 103-115, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36383500
In recent years, learning-based image registration methods have gradually moved away from direct supervision with target warps to instead use self-supervision, with excellent results in several registration benchmarks. These approaches utilize a loss function that penalizes the intensity differences between the fixed and moving images, along with a suitable regularizer on the deformation. However, since images typically have large untextured regions, merely maximizing similarity between the two images is not sufficient to recover the true deformation. This problem is exacerbated by texture in other regions, which introduces severe non-convexity into the landscape of the training objective and ultimately leads to overfitting. In this paper, we argue that the relative failure of supervised registration approaches can in part be blamed on the use of regular U-Nets, which are jointly tasked with feature extraction, feature matching and deformation estimation. Here, we introduce a simple but crucial modification to the U-Net that disentangles feature extraction and matching from deformation prediction, allowing the U-Net to warp the features, across levels, as the deformation field is evolved. With this modification, direct supervision using target warps begins to outperform self-supervision approaches that require segmentations, presenting new directions for registration when images do not have segmentations. We hope that our findings in this preliminary workshop paper will re-ignite research interest in supervised image registration techniques. Our code is publicly available from http://github.com/balbasty/superwarp.
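The coarse-to-fine scheme described above — evolving a deformation field over pyramid levels while warping the moving image's *features* (not just the image) before matching at each level — can be sketched as follows. This is a minimal NumPy illustration under our own simplifying assumptions (2-D features, nearest-neighbour warping, resolution doubling per level, and a placeholder `predict_residual` matching head, all hypothetical names), not the authors' implementation; see the linked repository for that.

```python
import numpy as np

def warp(feat, disp):
    """Nearest-neighbour warp of a 2-D feature map by a displacement field (2, H, W)."""
    h, w = feat.shape
    gy, gx = np.mgrid[0:h, 0:w]
    ys = np.clip(np.rint(gy + disp[0]).astype(int), 0, h - 1)
    xs = np.clip(np.rint(gx + disp[1]).astype(int), 0, w - 1)
    return feat[ys, xs]

def coarse_to_fine(fixed_feats, moving_feats, predict_residual):
    """Evolve a displacement field over pyramid levels, coarsest first,
    assuming each level doubles the resolution. `predict_residual` stands
    in for a learned per-level feature-matching head."""
    disp = np.zeros((2,) + fixed_feats[0].shape)
    for f, m in zip(fixed_feats, moving_feats):
        if disp.shape[1:] != f.shape:
            # upsample the running estimate and rescale its magnitudes
            disp = disp.repeat(2, axis=1).repeat(2, axis=2) * 2.0
        m_warped = warp(m, disp)                      # warp the features, not just the image
        disp = disp + predict_residual(f, m_warped)   # evolve the deformation field
    return disp
```

With a zero-residual head the field stays at identity, which makes the level-to-level bookkeeping easy to check; in the paper's setting the residual at each level would come from the modified U-Net's decoder.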
Full text: 1 Collections: 01-international Database: MEDLINE Study type: Prognostic_studies Language: En Publication year: 2022 Document type: Article
