ABSTRACT
Purpose: To analyze the performance of deep learning (DL) models for segmentation of the neonatal lung on MRI and to investigate the use of automated MRI-based features for assessment of neonatal lung disease. Materials and Methods: Quiet-breathing MRI was prospectively performed in two independent cohorts of preterm infants (median gestational age, 26.57 weeks; IQR, 25.3-28.6 weeks; 55 female and 48 male infants) with (n = 86) and without (n = 21) chronic lung disease (bronchopulmonary dysplasia [BPD]). Convolutional neural networks were developed for lung segmentation, and a three-dimensional reconstruction was used to calculate MRI features for lung volume, shape, pixel intensity, and surface. These features were explored as indicators of BPD and disease-associated lung structural remodeling through correlation with lung injury scores and multinomial models for BPD severity stratification. Results: The lung segmentation model reached a volumetric Dice coefficient of 0.908 in cross-validation and 0.880 on the independent test dataset, matching expert-level performance across disease grades. MRI lung features demonstrated significant correlations with lung injury scores and added structural information for the separation of neonates with BPD (BPD vs no BPD: average area under the receiver operating characteristic curve [AUC], 0.92 ± 0.02 [SD]; no or mild BPD vs moderate or severe BPD: average AUC, 0.84 ± 0.03). Conclusion: This study demonstrated the high performance of DL models for neonatal lung segmentation on MRI and showed the potential of automated MRI features for diagnostic assessment of neonatal lung disease while avoiding radiation exposure.

Keywords: Bronchopulmonary Dysplasia, Chronic Lung Disease, Preterm Infant, Lung Segmentation, Lung MRI, BPD Severity Assessment, Deep Learning, Lung Imaging Biomarkers, Lung Topology

Supplemental material is available for this article. Published under a CC BY 4.0 license. See also the commentary by Parraga and Sharma in this issue.
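For reference, the volumetric Dice coefficient reported above quantifies overlap between a predicted and a reference segmentation. Below is a minimal NumPy sketch of how it can be computed from binary 3D masks; the function name, array shapes, and usage are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def volumetric_dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Volumetric Dice coefficient between two binary 3D masks.

    Dice = 2 * |P ∩ R| / (|P| + |R|), ranging from 0 (no overlap)
    to 1 (perfect agreement). Both inputs must have identical shape,
    e.g., a thresholded CNN output vs. an expert annotation.
    """
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * intersection / denom

# Illustrative usage with random masks (real inputs would be the
# model's predicted lung mask and the expert reference mask):
rng = np.random.default_rng(0)
pred_mask = rng.random((64, 128, 128)) > 0.5
ref_mask = rng.random((64, 128, 128)) > 0.5
print(f"Dice: {volumetric_dice(pred_mask, ref_mask):.3f}")
```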
ABSTRACT
BACKGROUND: High-throughput live-cell imaging is a powerful tool for studying dynamic cellular processes in single cells, but it creates a bottleneck at the data-analysis stage because of the large amount of data generated and the limitations of analytical pipelines. Recent progress in deep learning has dramatically improved cell segmentation and tracking. Nevertheless, manual data validation and correction are typically still required, and tools spanning the complete range of image analysis are still needed. RESULTS: We present Cell-ACDC, an open-source, user-friendly, GUI-based framework written in Python for segmentation, tracking, and cell cycle annotation. We included state-of-the-art deep learning models for single-cell segmentation of mammalian and yeast cells alongside cell tracking methods and an intuitive, semi-automated workflow for cell cycle annotation of single cells. Using Cell-ACDC, we found that mTOR activity in hematopoietic stem cells is largely independent of cell volume. By contrast, smaller cells exhibit higher p38 activity, consistent with a role of p38 in the regulation of cell size. Additionally, we show that, in S. cerevisiae, histone Htb1 concentrations decrease with replicative age. CONCLUSIONS: Cell-ACDC provides a framework for applying state-of-the-art deep learning models to the analysis of live-cell imaging data without programming knowledge. Furthermore, it allows for visualization and correction of segmentation and tracking errors as well as annotation of cell cycle stages. We embedded several smart algorithms that make the correction and annotation process fast and intuitive. Finally, the open-source and modular nature of Cell-ACDC will enable simple and fast integration of new deep learning-based and traditional methods for cell segmentation, tracking, and downstream image analysis. Source code: https://github.com/SchmollerLab/Cell_ACDC.
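The tracking step mentioned above pairs segmented objects across consecutive frames. Cell-ACDC's own trackers are described in its documentation; the sketch below is a generic illustration of one common baseline for this task, greedy tracking by mask overlap (IoU matching). The function name and the `min_iou` threshold are illustrative assumptions, not Cell-ACDC's API.

```python
import numpy as np

def track_by_overlap(labels_prev: np.ndarray,
                     labels_curr: np.ndarray,
                     min_iou: float = 0.3) -> np.ndarray:
    """Greedy frame-to-frame tracking of labeled segmentation masks.

    Each cell in the current frame inherits the ID of the previous-frame
    cell it overlaps most (by IoU), provided the IoU exceeds `min_iou`;
    otherwise it receives a fresh ID (e.g., a newborn bud). Conflicts
    where two current cells claim the same previous ID are not resolved
    in this simplified sketch.
    """
    out = np.zeros_like(labels_curr)
    next_id = labels_prev.max() + 1
    for curr_id in np.unique(labels_curr):
        if curr_id == 0:  # background
            continue
        curr_mask = labels_curr == curr_id
        best_iou, best_id = 0.0, 0
        for prev_id in np.unique(labels_prev[curr_mask]):
            if prev_id == 0:
                continue
            prev_mask = labels_prev == prev_id
            iou = (np.logical_and(curr_mask, prev_mask).sum()
                   / np.logical_or(curr_mask, prev_mask).sum())
            if iou > best_iou:
                best_iou, best_id = iou, prev_id
        if best_iou >= min_iou:
            out[curr_mask] = best_id
        else:
            out[curr_mask] = next_id
            next_id += 1
    return out

# Illustrative usage: the current frame's labels are swapped relative
# to the previous frame; tracking restores consistent IDs.
prev = np.array([[1, 1, 0],
                 [0, 0, 2],
                 [0, 0, 2]])
curr = np.array([[2, 2, 0],
                 [0, 0, 1],
                 [0, 0, 1]])
print(track_by_overlap(prev, curr))  # recovers the IDs of `prev`
```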