Cramér-Rao bound-informed training of neural networks for quantitative MRI.
Zhang, Xiaoxia; Duchemin, Quentin; Liu, Kangning; Gultekin, Cem; Flassbeck, Sebastian; Fernandez-Granda, Carlos; Assländer, Jakob.
Affiliation
  • Zhang X; Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York City, New York, USA.
  • Duchemin Q; Center for Advanced Imaging Innovation and Research (CAI2R), Department of Radiology, New York University School of Medicine, New York City, New York, USA.
  • Liu K; LAMA, Univ Gustave Eiffel, Univ Paris Est Creteil, Marne-la-Vallée, France.
  • Gultekin C; Center for Data Science, New York University Grossman School of Medicine, New York City, New York, USA.
  • Flassbeck S; Courant Institute of Mathematical Sciences, New York University, New York City, New York, USA.
  • Fernandez-Granda C; Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York City, New York, USA.
  • Assländer J; Center for Advanced Imaging Innovation and Research (CAI2R), Department of Radiology, New York University School of Medicine, New York City, New York, USA.
Magn Reson Med; 88(1): 436-448, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35344614
ABSTRACT

PURPOSE:

To improve the performance of neural networks for parameter estimation in quantitative MRI, in particular when the noise propagation varies throughout the space of biophysical parameters.

THEORY AND METHODS:

A theoretically well-founded loss function is proposed that normalizes the squared error of each estimate by the respective Cramér-Rao bound (CRB), a theoretical lower bound on the variance of an unbiased estimator. This avoids a dominance of hard-to-estimate parameters and areas in parameter space, which are often of little interest. The normalization with the corresponding CRB balances the large errors of fundamentally noisier estimates against the small errors of fundamentally less noisy ones, allowing the network to better learn to estimate the latter. Further, the proposed loss function provides an absolute evaluation metric for performance: a network has an average loss of 1 if it is a maximally efficient unbiased estimator, which can be considered the ideal performance. The performance gain with the proposed loss function is demonstrated using the example of an eight-parameter magnetization-transfer model that is fitted to phantom and in vivo data.
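The core idea of the loss can be written in a few lines. Below is a minimal NumPy sketch, not the authors' code: `crb` is a hypothetical vector of per-parameter Cramér-Rao bounds assumed to be precomputed from the signal model, and the toy data simulate an efficient unbiased estimator, for which the average loss should be close to 1.

```python
import numpy as np

def crb_weighted_loss(theta_hat, theta_true, crb):
    """Squared error of each parameter estimate, normalized by its CRB.

    Dividing by the CRB keeps fundamentally noisy parameters from
    dominating the loss; an efficient unbiased estimator averages ~1.
    """
    return np.mean((theta_hat - theta_true) ** 2 / crb)

# Toy example: two parameters with very different noise floors.
rng = np.random.default_rng(0)
crb = np.array([1e-4, 1e2])           # hypothetical per-parameter CRBs
theta_true = np.array([0.05, 300.0])

# Simulate an efficient unbiased estimator: error variance equals the CRB.
theta_hat = theta_true + rng.normal(scale=np.sqrt(crb), size=(10000, 2))

loss = crb_weighted_loss(theta_hat, theta_true, crb)  # close to 1
```

With a plain MSE, the second parameter's errors (variance 1e2) would swamp the first's (variance 1e-4); the CRB normalization puts both on the same scale.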

RESULTS:

Networks trained with the proposed loss function perform close to optimal, that is, their loss converges to approximately 1, and their performance is superior to that of networks trained with the standard mean squared error (MSE). The proposed loss function reduces the bias of the estimates compared to the MSE loss and improves the match of the noise variance to the CRB. This performance gain translates to in vivo maps that align better with the literature.

CONCLUSION:

Normalizing the squared error with the CRB during the training of neural networks improves their performance in estimating biophysical parameters.

Full text: 1 Database: MEDLINE Main subject: Magnetic Resonance Imaging / Neural Networks, Computer Language: English Journal: Magn Reson Med Journal subject: Diagnostic Imaging Year: 2022 Document type: Article Country of affiliation: United States