Deep-Learning Segmentation of Epicardial Adipose Tissue Using Four-Chamber Cardiac Magnetic Resonance Imaging.
Daudé, Pierre; Ancel, Patricia; Confort Gouny, Sylviane; Jacquier, Alexis; Kober, Frank; Dutour, Anne; Bernard, Monique; Gaborit, Bénédicte; Rapacchi, Stanislas.
Affiliation
  • Daudé P; Aix-Marseille Univ, CNRS, CRMBM, 13005 Marseille, France.
  • Ancel P; APHM, Hôpital Universitaire Timone, CEMEREM, 13385 Marseille, France.
  • Confort Gouny S; Department of Radiology, APHM, La Timone Hospital, 13005 Marseille, France.
  • Jacquier A; Aix-Marseille Univ, INSERM, INRAE, C2VN, 13005 Marseille, France.
  • Kober F; Aix-Marseille Univ, CNRS, CRMBM, 13005 Marseille, France.
  • Dutour A; APHM, Hôpital Universitaire Timone, CEMEREM, 13385 Marseille, France.
  • Bernard M; Aix-Marseille Univ, CNRS, CRMBM, 13005 Marseille, France.
  • Gaborit B; APHM, Hôpital Universitaire Timone, CEMEREM, 13385 Marseille, France.
  • Rapacchi S; Department of Radiology, APHM, La Timone Hospital, 13005 Marseille, France.
Diagnostics (Basel); 12(1), 2022 Jan 06.
Article in English | MEDLINE | ID: mdl-35054297
ABSTRACT
In magnetic resonance imaging (MRI), epicardial adipose tissue (EAT) overload often remains overlooked because of tedious manual contouring of images. Automated four-chamber EAT area quantification was proposed, leveraging deep-learning segmentation with multi-frame fully convolutional networks (FCN). The study involved 100 subjects (healthy, obese, and diabetic patients) who underwent 3T cardiac cine MRI. An optimized U-Net and an FCN (denoted FCNB) were trained on three consecutive cine frames to segment the central frame using a Dice loss. Networks were trained with 4-fold cross-validation (n = 80) and evaluated on an independent dataset (n = 20). Segmentation performance was compared to inter- and intra-observer bias using the Dice similarity coefficient (DSC) and relative surface error (RSE). Both systolic and diastolic four-chamber EAT areas correlated with total EAT volume (r = 0.77 and 0.74, respectively). Network performance was equivalent to inter-observer bias (EAT DSC: inter-observer = 0.76, U-Net = 0.77, FCNB = 0.76). The U-Net outperformed FCNB on all metrics (p < 0.0001). Ultimately, the proposed multi-frame U-Net provided automated EAT area quantification with 14.2% precision over the clinically relevant upper three quarters of the EAT area range, stratifying patients' risk of EAT overload with 70% accuracy. Applied to standard cine images, the multi-frame U-Net provided automated EAT quantification over a wide range of EAT amounts. The method is made available to the community through an FSLeyes plugin.
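The abstract describes a multi-frame setup in which three consecutive cine frames are fed to the network as input channels and the EAT mask of the central frame is predicted under a Dice loss. The sketch below is not the authors' code: the tiny placeholder network, image size, and hyperparameters are illustrative assumptions used only to show the multi-frame input stacking and a soft Dice loss in PyTorch.

import torch
import torch.nn as nn

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for a binary segmentation mask; pred holds probabilities in [0, 1]."""
    pred = pred.flatten(1)
    target = target.flatten(1)
    intersection = (pred * target).sum(dim=1)
    union = pred.sum(dim=1) + target.sum(dim=1)
    return 1.0 - ((2.0 * intersection + eps) / (union + eps)).mean()

class TinySegNet(nn.Module):
    """Stand-in for the multi-frame U-Net: 3 input channels (frames t-1, t, t+1), 1 output mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))

# Illustrative training step on random data (batch of 2, hypothetical 128x128 images).
model = TinySegNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
frames = torch.randn(2, 3, 128, 128)                # three consecutive cine frames as channels
mask = (torch.rand(2, 1, 128, 128) > 0.9).float()   # EAT mask of the central frame
loss = dice_loss(model(frames), mask)
loss.backward()
optimizer.step()

In the published method, the placeholder network above would be replaced by the optimized U-Net (or FCNB) architecture evaluated in the paper; the multi-frame channel stacking and Dice objective are the parts stated in the abstract.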
Full text: 1 Collection: 01-international Database: MEDLINE Study type: Guideline Language: English Journal: Diagnostics (Basel) Year: 2022 Document type: Article Country of affiliation: France