ABSTRACT
Hyperfluorescence (HF) and reduced autofluorescence (RA) are important biomarkers in fundus autofluorescence (FAF) images for assessing the health of the retinal pigment epithelium (RPE), an important indicator of disease progression in geographic atrophy (GA) or central serous chorioretinopathy (CSCR). Autofluorescence images have been annotated by human raters, but distinguishing biomarkers (whether the signal is increased or decreased) from the normal background proves challenging, with borders being particularly open to interpretation. Consequently, significant variations emerge between different graders, and even within the same grader during repeated annotations. Tests on in-house FAF data show that even highly skilled medical experts, despite having previously discussed and settled on precise annotation guidelines, reach a pairwise agreement, measured as a Dice score, of no more than 63-80% for HF segmentations and only 14-52% for RA. The data further show that our primary annotation expert agrees with herself at a Dice score of 72% for HF and 51% for RA. Given these numbers, the task of automated HF and RA segmentation cannot be reduced to simply improving a segmentation score. Instead, we propose the use of a segmentation ensemble. Trained on images with a single annotation each, the ensemble reaches expert-like performance, agreeing with all our experts at Dice scores of 64-81% for HF and 21-41% for RA. In addition, using the mean predictions of the ensemble networks and their variance, we devise ternary segmentations in which FAF image areas are labeled as confident background, confident HF, or potential HF, ensuring that predictions are reliable where they are confident (97% precision) while detecting all instances of HF (99% recall) annotated by all experts.
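The ternary fusion from ensemble mean and variance can be sketched as follows; the thresholds and the function name are illustrative assumptions, not the values or implementation used in the study:

```python
import numpy as np

def ternary_segmentation(probs, mean_lo=0.1, mean_hi=0.9, var_max=0.05):
    """Fuse an ensemble's per-pixel predictions into a ternary map.

    probs: array of shape (n_models, H, W) with per-model HF probabilities.
    Returns an integer map: 0 = confident background, 1 = potential HF,
    2 = confident HF. Thresholds are illustrative placeholders.
    """
    mean = probs.mean(axis=0)
    var = probs.var(axis=0)
    out = np.ones(mean.shape, dtype=np.int64)       # default: potential HF
    out[(mean <= mean_lo) & (var <= var_max)] = 0   # models agree: background
    out[(mean >= mean_hi) & (var <= var_max)] = 2   # models agree: HF
    return out
```

Pixels where the ensemble members disagree (high variance) or where the mean probability is intermediate stay in the "potential HF" class, which is how high precision on the confident classes can coexist with high recall overall.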
ABSTRACT
OBJECTIVE: The prospect of automatically gaining relevant information from cardiovascular magnetic resonance (CMR) image analysis opens up new potential to assist the evaluating physician. For machine-learning-based classification of complex congenital heart disease, only a few studies have used CMR. MATERIALS AND METHODS: This study presents a tailor-made neural network architecture for the detection of 7 distinctive anatomic landmarks in CMR images of patients with hypoplastic left heart syndrome (HLHS) in Fontan circulation and of healthy controls, and demonstrates the potential of the spatial arrangement of these landmarks to identify HLHS. The method was applied to the axial SSFP CMR scans of 46 patients with HLHS and 33 healthy controls. RESULTS: The displacement between predicted and annotated landmarks had a standard deviation of 8-17 mm and was larger than the interobserver variability by a factor of 1.1-2.0. A high overall classification accuracy of 98.7% was achieved. DISCUSSION: Decoupling the identification of clinically meaningful anatomic landmarks from the actual classification improved the transparency of the classification results. Information from such automated analysis could be used to jump quickly to anatomic positions and to guide the physician more efficiently through the analysis depending on the detected condition, which may ultimately improve workflow and save analysis time.
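A minimal sketch of the decoupled second stage, classifying from the landmark arrangement alone; the nearest-centroid rule, the function name, and the centroid inputs are simplifying assumptions for illustration, not the classifier described in the study:

```python
import numpy as np

def classify_by_landmarks(landmarks, centroid_hlhs, centroid_control):
    """Classify a scan from the spatial arrangement of its 7 landmarks.

    landmarks: (7, 3) array of predicted landmark coordinates (mm).
    The two centroids are mean centered landmark configurations that
    would be computed from a training set (hypothetical here).
    """
    # Remove global translation so only the relative arrangement matters.
    x = landmarks - landmarks.mean(axis=0)
    d_hlhs = np.linalg.norm(x - centroid_hlhs)
    d_ctrl = np.linalg.norm(x - centroid_control)
    return "HLHS" if d_hlhs < d_ctrl else "control"
```

The point of the decoupling is visible even in this toy version: the inputs to the classifier are clinically meaningful positions, so a decision can be traced back to which landmarks deviate from the expected arrangement.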
Subjects
Cardiovascular System, Hypoplastic Left Heart Syndrome, Humans, Hypoplastic Left Heart Syndrome/diagnostic imaging, Hypoplastic Left Heart Syndrome/surgery, Magnetic Resonance Imaging/methods, Machine Learning, Neural Networks, Computer
ABSTRACT
Optical coherence tomography (OCT) and fundus autofluorescence (FAF) are important imaging modalities for the assessment and prognosis of central serous chorioretinopathy (CSCR). However, placing the findings from both into the spatial and temporal context desirable for disease analysis remains challenging because the two modalities are captured from different perspectives: sparse three-dimensional (3D) cross sections for OCT and two-dimensional (2D) en face images for FAF. To bridge this gap, we propose a visualization pipeline capable of projecting OCT labels onto en face image modalities such as FAF. By mapping OCT B-scans onto the accompanying en face infrared (IR) image and then registering the IR image onto the FAF image with a neural network, we can directly compare OCT labels to other labels in the en face plane. We also present a U-Net-inspired segmentation model to predict segmentations in unlabeled OCTs. Evaluations show that both of our networks perform well (a Dice score of 0.853 and an area under the curve of 0.913). Furthermore, a medical analysis of exemplary, chronologically arranged CSCR progressions of 12 patients, visualized with our pipeline, indicates that two patterns emerge in CSCR: subretinal fluid (SRF) in OCT preceding hyperfluorescence (HF) in FAF, and vice versa.
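The projection of B-scan labels into the en face plane can be sketched as below, assuming each B-scan's endpoints are known in IR image coordinates and approximating the learned IR-to-FAF registration by a 3x3 homogeneous transform (the pipeline itself uses a neural network for that step; all names here are illustrative):

```python
import numpy as np

def project_bscan_labels(col_labels, start_xy, end_xy, ir_to_faf):
    """Project per-column B-scan labels into the FAF en face plane.

    col_labels: (W,) binary labels, one per A-scan column of a B-scan.
    start_xy, end_xy: the B-scan's endpoints in IR image coordinates.
    ir_to_faf: 3x3 homogeneous transform standing in for the learned
    IR-to-FAF registration.
    Returns the en face positions of the labeled columns.
    """
    w = len(col_labels)
    t = np.linspace(0.0, 1.0, w)[:, None]
    pts_ir = (1 - t) * np.asarray(start_xy) + t * np.asarray(end_xy)  # (W, 2)
    pts_h = np.hstack([pts_ir, np.ones((w, 1))])   # homogeneous coordinates
    pts_faf = (ir_to_faf @ pts_h.T).T[:, :2]
    mask = np.asarray(col_labels, dtype=bool)
    return pts_faf[mask]
```

Because every labeled A-scan column lands at a 2D position in the FAF frame, OCT-derived labels such as SRF can be compared directly against HF regions annotated in the same en face plane.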
ABSTRACT
Deep learning has been successfully applied to many classification problems, including underwater challenges. However, a long-standing issue with deep learning is the need for large and consistently labeled datasets. Although current approaches in semi-supervised learning can decrease the required amount of annotated data by a factor of 10 or even more, this line of research still assumes distinct classes. For underwater classification, and for uncurated real-world datasets in general, clean class boundaries often cannot be drawn due to the limited information content of the images and transitional stages of the depicted objects. This leads to different experts holding different opinions and thus producing fuzzy labels, which could also be considered ambiguous or divergent. We propose a novel framework for handling semi-supervised classification of such fuzzy labels. It is based on the idea of overclustering to detect substructures in these fuzzy labels. We propose a novel loss to improve the overclustering capability of our framework and show the benefit of overclustering for fuzzy labels. We show that our framework is superior to previous state-of-the-art semi-supervised methods when applied to real-world plankton data with fuzzy labels. Moreover, we obtain 5 to 10% more consistent predictions of substructures.
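The overclustering idea can be illustrated with a small sketch: the clustering uses more clusters than there are classes, so one fuzzy class may split into several substructure clusters, and each cluster is mapped back to a class by majority vote over its labeled members. The function name and the -1 convention for unlabeled samples are our own assumptions, not the framework's loss or training procedure:

```python
import numpy as np

def map_overclusters_to_classes(cluster_ids, labels, n_clusters):
    """Map overclustered assignments back to class predictions.

    cluster_ids: (N,) cluster index per sample, from a clustering with
    more clusters than classes (the overclustering).
    labels: (N,) class labels, with -1 marking unlabeled samples.
    """
    cluster_ids = np.asarray(cluster_ids)
    labels = np.asarray(labels)
    mapping = {}
    for c in range(n_clusters):
        member_labels = labels[(cluster_ids == c) & (labels >= 0)]
        if len(member_labels) > 0:
            vals, counts = np.unique(member_labels, return_counts=True)
            mapping[c] = int(vals[np.argmax(counts)])  # majority vote
        else:
            mapping[c] = -1  # no labeled member: leave undecided
    return np.array([mapping[c] for c in cluster_ids])
```

Samples that fall into the same overcluster receive the same class prediction, which is one way to read the reported gain in prediction consistency on substructures.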