ABSTRACT
Introduction: Patients with multiple sclerosis (MS) are MRI scanned repeatedly throughout their disease course, resulting in a large manual workload for radiologists, which includes lesion detection and size estimation. Although many models for automatic lesion segmentation have been published, few are used broadly in the clinic today, as there is a lack of testing on clinical datasets. By collecting a large, heterogeneous training dataset directly from our MS clinic, we aim to present a model that is robust to different scanner protocols and artefacts and that only uses MRI modalities present in routine clinical examinations.

Methods: We retrospectively included 746 patients from routine examinations at our MS clinic. The inclusion criteria were acquisition on one of seven different scanners and an MRI protocol including 2D or 3D T2-w FLAIR, T2-w, and T1-w images. Reference lesion masks for the training (n = 571) and validation (n = 70) datasets were generated using a preliminary segmentation model followed by manual correction. The test dataset (n = 100) was manually delineated. Our segmentation model (https://github.com/CAAI/AIMS/) is based on the popular nnU-Net, which has won several biomedical segmentation challenges. We tested our model against the published segmentation model HD-MS-Lesions, which is also based on nnU-Net but was trained on a more homogeneous patient cohort. We furthermore tested model robustness to data from unseen scanners in a leave-one-scanner-out experiment.

Results: We found that our model segmented MS white matter lesions with a performance comparable to the literature: DSC = 0.68, precision = 0.90, recall = 0.70, F1 = 0.78. Furthermore, the model outperformed HD-MS-Lesions on all metrics except precision (HD-MS-Lesions: 0.96). In the leave-one-scanner-out experiment, there was no significant difference in performance (at p < 0.05) between any of the models trained on only part of the dataset and the full segmentation model.

Conclusion: By including a large, heterogeneous dataset that emulates clinical reality, we have trained a segmentation model that maintains high segmentation performance while being robust to data from unseen scanners. This broadens the applicability of the model in the clinic and paves the way for clinical implementation.
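For readers unfamiliar with the reported metrics, the following is a minimal Python sketch of how they could be computed from binary lesion masks. It assumes a voxel-wise Dice similarity coefficient (DSC) and lesion-wise precision, recall, and F1 derived from connected-component overlap; the exact matching criterion used in the study may differ, and the function names are illustrative rather than part of the AIMS repository.

```python
import numpy as np
from scipy import ndimage


def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Voxel-wise Dice similarity coefficient between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0


def lesion_detection_metrics(pred: np.ndarray, ref: np.ndarray):
    """Lesion-wise precision, recall and F1.

    A predicted lesion counts as detected if its connected component
    overlaps a reference lesion by at least one voxel (an assumed
    matching rule; published evaluation tools may be stricter).
    """
    pred, ref = pred.astype(bool), ref.astype(bool)
    pred_lab, n_pred = ndimage.label(pred)
    ref_lab, n_ref = ndimage.label(ref)

    # Predicted lesions that touch any reference lesion (true positives for precision)
    tp_pred = sum(1 for i in range(1, n_pred + 1) if ref[pred_lab == i].any())
    # Reference lesions touched by any prediction (true positives for recall)
    tp_ref = sum(1 for j in range(1, n_ref + 1) if pred[ref_lab == j].any())

    precision = tp_pred / n_pred if n_pred else 1.0
    recall = tp_ref / n_ref if n_ref else 1.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1
```

In practice these functions would be applied per patient to the predicted and reference lesion masks and the results averaged over the test dataset.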