Results 1 - 20 of 53
1.
Med Image Anal ; 94: 103150, 2024 May.
Article in English | MEDLINE | ID: mdl-38574545

ABSTRACT

Self-supervised representation learning can boost the performance of a pre-trained network on downstream tasks for which labeled data is limited. A popular method based on this paradigm, known as contrastive learning, works by constructing sets of positive and negative pairs from the data, and then pulling closer the representations of positive pairs while pushing apart those of negative pairs. Although contrastive learning has been shown to improve performance in various classification tasks, its application to image segmentation has been more limited. This stems in part from the difficulty of defining positive and negative pairs for dense feature maps without having access to pixel-wise annotations. In this work, we propose a novel self-supervised pre-training method that overcomes the challenges of contrastive learning in image segmentation. Our method leverages Invariant Information Clustering (IIC) as an unsupervised task to learn a local representation of images in the decoder of a segmentation network, but addresses three important drawbacks of this approach: (i) the difficulty of optimizing the loss based on mutual information maximization; (ii) the lack of clustering consistency for different random transformations of the same image; (iii) the poor correspondence of clusters obtained by IIC with region boundaries in the image. Toward this goal, we first introduce a regularized mutual information maximization objective that encourages the learned clusters to be balanced and consistent across different image transformations. We also propose a boundary-aware loss based on cross-correlation, which helps the learned clusters to be more representative of important regions in the image. Compared to contrastive learning applied to dense features, our method does not require computing positive and negative pairs and also enhances interpretability through the visualization of learned clusters. Comprehensive experiments involving four different medical image segmentation tasks reveal the high effectiveness of our self-supervised representation learning method. Our results show the proposed method to outperform several state-of-the-art self-supervised and semi-supervised approaches for segmentation by a large margin, reaching a performance close to full supervision with only a few labeled examples.
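
As a rough illustration of the regularized objective described above, the sketch below computes an IIC-style mutual information loss between cluster-probability maps of two transformed views of an image, with a balance regularizer on the cluster marginals. This is a minimal sketch under stated assumptions: the weight lambda_balance and the exact form of the regularizer are illustrative choices, not taken from the paper.

```python
# Minimal sketch of an IIC-style clustering objective with a balance
# regularizer. p1 and p2 are per-pixel softmax cluster probabilities
# from two random transformations of the same image, flattened to (N, K).
import torch

def regularized_mi_loss(p1, p2, lambda_balance=1.0, eps=1e-8):
    n = p1.shape[0]
    P = (p1.t() @ p2) / n                 # (K, K) joint over cluster pairs
    P = ((P + P.t()) / 2).clamp(min=eps)  # symmetrize
    Pi = P.sum(dim=1)                     # cluster marginals
    Pj = P.sum(dim=0)
    mi = (P * (P.log() - Pi.log().unsqueeze(1) - Pj.log().unsqueeze(0))).sum()
    # Balance term (assumed form): maximize the entropy of the marginals
    # so clusters stay balanced across transformations.
    balance = -(Pi * Pi.log()).sum()
    return -mi - lambda_balance * balance
```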


Subject(s)
Image Processing, Computer-Assisted , Learning , Humans , Supervised Machine Learning
2.
Med Image Anal ; 93: 103085, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38219499

ABSTRACT

Recently, deep reinforcement learning (RL) has been proposed to learn the tractography procedure and train agents to reconstruct the structure of the white matter without manually curated reference streamlines. While the performances reported were competitive, the proposed framework is complex, and little is known about the role and impact of its multiple parts. In this work, we thoroughly explore the different components of the proposed framework, such as the choice of RL algorithm, the seeding strategy, the input signal and the reward function, and shed light on their impact. Approximately 7,400 models were trained for this work, totalling nearly 41,000 h of GPU time. Our goal is to guide researchers eager to explore the possibilities of deep RL for tractography by exposing what works and what does not with this category of approach. As such, we ultimately propose a series of recommendations concerning the choice of RL algorithm, the input to the agents, the reward function and more, to help future work using reinforcement learning for tractography. We also release the open-source codebase, trained models, and datasets for users and researchers wanting to explore reinforcement learning for tractography.


Subject(s)
Learning , Reinforcement, Psychology , Humans , Reward , Algorithms
3.
IEEE Trans Med Imaging ; 43(4): 1449-1461, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38032771

ABSTRACT

Despite the remarkable progress in semi-supervised medical image segmentation methods based on deep learning, their application to real-life clinical scenarios still faces considerable challenges. For example, insufficient labeled data often makes it difficult for networks to capture the complexity and variability of the anatomical regions to be segmented. To address these problems, we design a new semi-supervised segmentation framework that aspires to produce anatomically plausible predictions. Our framework comprises two parallel networks: a shape-agnostic and a shape-aware network. These networks learn from each other, enabling effective utilization of unlabeled data. Our shape-aware network implicitly introduces shape guidance to capture fine-grained shape information. Meanwhile, the shape-agnostic network employs uncertainty estimation to obtain reliable pseudo-labels for its counterpart. We also employ a cross-style consistency strategy to enhance the network's utilization of unlabeled data. It enriches the dataset to prevent overfitting and further eases the coupling of the two networks that learn from each other. Our proposed architecture also incorporates a novel loss term that facilitates the learning of the local context of segmentation by the network, thereby enhancing the overall accuracy of prediction. Experiments on three different medical image datasets show that our method outperforms many strong semi-supervised segmentation methods, particularly in perceiving shape. The code is available at https://github.com/igip-liu/SLC-Net.
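
A hedged sketch of the pseudo-labeling direction of this framework is given below: one network's confident predictions supervise the other on unlabeled images. The confidence threshold and the max-probability filtering are assumptions for illustration; the paper's uncertainty estimation may differ.

```python
# Sketch: confident pseudo-labels from one network supervise its counterpart.
import torch
import torch.nn.functional as F

def cross_pseudo_label_loss(logits_teacher, logits_student, conf_threshold=0.9):
    """logits_*: (B, K, H, W). The teacher's reliable pixels train the student."""
    probs = F.softmax(logits_teacher.detach(), dim=1)   # no gradient to teacher
    conf, pseudo = probs.max(dim=1)                     # per-pixel confidence/label
    mask = (conf > conf_threshold).float()              # keep only reliable pixels
    loss = F.cross_entropy(logits_student, pseudo, reduction='none')
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)
```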


Subject(s)
Image Processing, Computer-Assisted , Supervised Machine Learning , Uncertainty
4.
Med Image Anal ; 90: 102974, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37774534

ABSTRACT

Reconstructing and segmenting cortical surfaces from MRI is essential to a wide range of brain analyses. However, most approaches follow a slow multi-step process, such as sequential spherical inflation and registration, which requires considerable computation time. To overcome the limitations arising from these multiple steps, we propose SegRecon, an integrated end-to-end deep learning method to jointly reconstruct and segment cortical surfaces directly from an MRI volume in a single step. We train a volume-based neural network to predict, for each voxel, the signed distances to multiple nested surfaces and their corresponding spherical representation in atlas space. This is, for instance, useful for jointly reconstructing and segmenting the white-to-gray-matter interface and the gray-matter-to-CSF (pial) surface. We evaluate the performance of our surface reconstruction and segmentation method with a comprehensive set of experiments on the MindBoggle, ABIDE and OASIS datasets. Our reconstruction error is found to be less than 0.52 mm and 0.97 mm in terms of average Hausdorff distance to the FreeSurfer-generated surfaces. Likewise, the parcellation results show over 4% improvement in average Dice with respect to FreeSurfer, in addition to an observed drastic speed-up from hours to seconds of computation on a standard desktop workstation.
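
To make the voxel-wise prediction target concrete, here is a minimal sketch of a head regressing signed distances to two nested surfaces, recoverable as zero level sets. The layer sizes and L1 loss are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch: per-voxel signed distances to nested cortical surfaces.
import torch
import torch.nn as nn

class SignedDistanceHead(nn.Module):
    def __init__(self, in_channels=32, n_surfaces=2):   # e.g., white and pial
        super().__init__()
        self.conv = nn.Conv3d(in_channels, n_surfaces, kernel_size=1)

    def forward(self, features):
        # (B, n_surfaces, D, H, W); each surface is the zero level set
        # of its signed-distance channel.
        return self.conv(features)

def sdf_loss(pred_sdf, target_sdf):
    return (pred_sdf - target_sdf).abs().mean()         # simple L1 regression
```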

5.
Med Image Anal ; 90: 102958, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37769549

ABSTRACT

The performance of learning-based algorithms improves with the amount of labelled data used for training. Yet, manually annotating data is particularly difficult for medical image segmentation tasks because of the limited expert availability and intensive manual effort required. To reduce manual labelling, active learning (AL) targets the most informative samples from the unlabelled set to annotate and add to the labelled training set. On the one hand, most active learning works have focused on the classification or limited segmentation of natural images, despite active learning being highly desirable in the difficult task of medical image segmentation. On the other hand, uncertainty-based AL approaches notoriously offer sub-optimal batch-query strategies, while diversity-based methods tend to be computationally expensive. Over and above methodological hurdles, random sampling has proven an extremely difficult baseline to outperform when varying learning and sampling conditions. This work aims to take advantage of the diversity and speed offered by random sampling to improve the selection of uncertainty-based AL methods for segmenting medical images. More specifically, we propose to compute uncertainty at the level of batches instead of samples through an original use of stochastic batches (SB) during sampling in AL. Stochastic batch querying is a simple and effective add-on that can be used on top of any uncertainty-based metric. Extensive experiments on two medical image segmentation datasets show that our strategy consistently improves conventional uncertainty-based sampling methods. Our method can hence act as a strong baseline for medical image segmentation. The code is available at: https://github.com/Minimel/StochasticBatchAL.git.
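
The batch-level uncertainty idea can be sketched as follows: score several random batches by their mean per-sample uncertainty and query the best one. The entropy-based uncertainty and the number of candidate batches are illustrative assumptions on top of the abstract.

```python
# Sketch of stochastic-batch (SB) querying for active learning.
import numpy as np

def stochastic_batch_query(probs, batch_size=16, n_batches=100, seed=0):
    """probs: (N, K) predicted class probabilities over the unlabeled pool."""
    rng = np.random.default_rng(seed)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)  # per-sample uncertainty
    best_idx, best_score = None, -np.inf
    for _ in range(n_batches):
        idx = rng.choice(len(probs), size=batch_size, replace=False)
        score = entropy[idx].mean()           # uncertainty of the whole batch
        if score > best_score:
            best_idx, best_score = idx, score
    return best_idx                           # samples to send for annotation
```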

6.
Sci Rep ; 13(1): 13259, 2023 08 15.
Article in English | MEDLINE | ID: mdl-37582862

ABSTRACT

Neonatal MRIs are used increasingly in preterm infants. However, it is not always feasible to analyze this data. Having a tool that assesses brain maturation during this period of extraordinary changes would be immensely helpful. Approaches based on deep learning could solve this task since, once properly trained and validated, they can be used in practically any system and provide holistic quantitative information in a matter of minutes. However, one major deterrent for radiologists is that these tools are not easily interpretable. Indeed, it is important that the structures driving the results be detailed and stand up to comparison with the available literature. To address these challenges, we propose an interpretable pipeline based on deep learning to predict postmenstrual age at scan, a key measure for assessing neonatal brain development. For this purpose, we train a state-of-the-art deep neural network to segment the brain into 87 different regions using normal preterm and term infants from the dHCP study. We then extract informative features for brain age estimation using the segmented MRIs and predict the brain age at scan with a regression model. The proposed framework achieves a mean absolute error of 0.46 weeks in predicting postmenstrual age at scan. While our model is based solely on structural T2-weighted images, the results are superior to recent, arguably more complex approaches. Furthermore, based on the knowledge extracted from the trained models, we found that the frontal and parietal lobes are among the most important structures for neonatal brain age estimation.
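
A minimal sketch of the two-stage pipeline follows: region-wise features from the segmentation feed a simple regression model for age at scan. Using region volumes and ridge regression is an assumption for illustration; the paper's feature set and regressor may differ.

```python
# Sketch: segmentation-derived features to brain-age regression.
import numpy as np
from sklearn.linear_model import Ridge

def region_volumes(label_map, n_regions=87, voxel_volume_mm3=1.0):
    """label_map: integer segmentation with labels 1..n_regions (0 = background)."""
    counts = np.bincount(label_map.ravel(), minlength=n_regions + 1)
    return counts[1:] * voxel_volume_mm3

# X: (n_subjects, 87) region features; y: postmenstrual age at scan (weeks)
# model = Ridge(alpha=1.0).fit(X_train, y_train)
# age_pred = model.predict(X_test)
```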


Subject(s)
Infant, Premature , Premature Birth , Female , Humans , Infant, Newborn , Infant , Brain/diagnostic imaging , Magnetic Resonance Imaging/methods , Neural Networks, Computer
7.
Cancers (Basel) ; 15(15)2023 Jul 28.
Article in English | MEDLINE | ID: mdl-37568655

ABSTRACT

The use of multiparametric magnetic resonance imaging (mpMRI) has become a common technique used in guiding biopsy and developing treatment plans for prostate lesions. While this technique is effective, non-invasive methods such as radiomics have gained popularity for extracting imaging features to develop predictive models for clinical tasks. The aim is to minimize invasive processes for improved management of prostate cancer (PCa). This study reviews recent research progress in MRI-based radiomics for PCa, including the radiomics pipeline and potential factors affecting personalized diagnosis. The integration of artificial intelligence (AI) with medical imaging is also discussed, in line with the development trend of radiogenomics and multi-omics. The survey highlights the need for more data from multiple institutions to avoid bias and generalize the predictive model. The AI-based radiomics model is considered a promising clinical tool with good prospects for application.

8.
IEEE Trans Med Imaging ; 42(8): 2338-2347, 2023 08.
Article in English | MEDLINE | ID: mdl-37027662

ABSTRACT

We present an unsupervised domain adaptation method for image segmentation which aligns high-order statistics, computed for the source and target domains, encoding domain-invariant spatial relationships between segmentation classes. Our method first estimates the joint distribution of predictions for pairs of pixels whose relative position corresponds to a given spatial displacement. Domain adaptation is then achieved by aligning the joint distributions of source and target images, computed for a set of displacements. Two enhancements of this method are proposed. The first one uses an efficient multi-scale strategy that enables capturing long-range relationships in the statistics. The second one extends the joint distribution alignment loss to features in intermediate layers of the network by computing their cross-correlation. We test our method on the task of unpaired multi-modal cardiac segmentation using the Multi-Modality Whole Heart Segmentation Challenge dataset, and on a prostate segmentation task where images from two datasets are treated as different domains. Our results show the advantages of our method compared to recent approaches for cross-domain image segmentation. Code is available at https://github.com/WangPing521/Domain_adaptation_shape_prior.
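
The core statistic can be sketched as below: the joint distribution of predicted classes for pixel pairs at a fixed displacement, which is then aligned between source and target. The L1 alignment shown in the comment is an illustrative choice, not the paper's exact loss.

```python
# Sketch: joint class distribution over displaced pixel pairs.
import torch

def displaced_joint(probs, dy, dx):
    """probs: (B, K, H, W) softmax maps; returns a (K, K) joint distribution."""
    B, K, H, W = probs.shape
    p = probs[:, :, max(dy, 0):H + min(dy, 0), max(dx, 0):W + min(dx, 0)]
    q = probs[:, :, max(-dy, 0):H + min(-dy, 0), max(-dx, 0):W + min(-dx, 0)]
    joint = torch.einsum('bkhw,blhw->kl', p, q)   # class co-occurrence
    return joint / joint.sum()

# One possible alignment loss for a displacement of 8 pixels along x:
# loss = (displaced_joint(src_probs, 0, 8) - displaced_joint(tgt_probs, 0, 8)).abs().sum()
```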


Subject(s)
Heart , Pelvis , Male , Humans , Heart/diagnostic imaging , Prostate , Image Processing, Computer-Assisted
9.
IEEE Trans Med Imaging ; 42(8): 2146-2161, 2023 08.
Article in English | MEDLINE | ID: mdl-37022409

ABSTRACT

Deep learning models for semi-supervised medical image segmentation have achieved unprecedented performance for a wide range of tasks. Despite their high accuracy, these models may however yield predictions that are considered anatomically impossible by clinicians. Moreover, incorporating complex anatomical constraints into standard deep learning frameworks remains challenging due to their non-differentiable nature. To address these limitations, we propose a Constrained Adversarial Training (CAT) method that learns how to produce anatomically plausible segmentations. Unlike approaches focusing solely on accuracy measures like Dice, our method considers complex anatomical constraints like connectivity, convexity, and symmetry which cannot be easily modeled in a loss function. The problem of non-differentiable constraints is solved using the REINFORCE algorithm, which makes it possible to obtain a gradient for violated constraints. To generate constraint-violating examples on the fly, and thereby obtain useful gradients, our method adopts an adversarial training strategy which modifies training images to maximize the constraint loss, and then updates the network to be robust to these adversarial examples. The proposed method offers a generic and efficient way to add complex segmentation constraints on top of any segmentation network. Experiments on synthetic data and four clinically-relevant datasets demonstrate the effectiveness of our method in terms of segmentation accuracy and anatomical plausibility.
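
A rough sketch of the REINFORCE-based gradient is shown below: segmentations are sampled from the network's output distribution and a non-differentiable constraint score acts as a negative reward. The per-image reward shape and the absence of a baseline term are simplifying assumptions.

```python
# Sketch: REINFORCE surrogate loss for a non-differentiable constraint.
import torch

def reinforce_constraint_loss(logits, constraint_fn):
    """logits: (B, K, H, W); constraint_fn: hard labels (B, H, W) -> (B,) violations."""
    probs = torch.softmax(logits, dim=1)
    dist = torch.distributions.Categorical(probs=probs.permute(0, 2, 3, 1))
    sample = dist.sample()                      # discrete segmentation sample
    with torch.no_grad():
        reward = -constraint_fn(sample)         # penalize violated constraints
    log_prob = dist.log_prob(sample).mean(dim=(1, 2))   # (B,)
    return -(reward * log_prob).mean()          # gradient flows through log_prob
```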


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Supervised Machine Learning
10.
IEEE J Biomed Health Inform ; 27(1): 157-165, 2023 01.
Article in English | MEDLINE | ID: mdl-35503845

ABSTRACT

Deep learning methods have shown outstanding potential in dermatology for skin lesion detection and identification. However, they usually require annotations beforehand and can only classify lesion classes seen in the training set. Moreover, large-scale, open-source medical datasets normally have far fewer annotated classes than in real life, further aggravating the problem. This paper proposes a novel method called DNF-OOD, which applies a non-parametric deep forest-based approach to the problem of out-of-distribution (OOD) detection. By leveraging a maximum probabilistic routing strategy and an over-confidence penalty term, the proposed method can achieve better performance on the task of detecting OOD skin lesion images, which is challenging due to the large intra-class variability in such images. We evaluate our OOD detection method on images from two large, publicly-available skin lesion datasets, ISIC2019 and DermNet, and compare it against recently-proposed approaches. Results demonstrate the potential of our DNF-OOD framework for detecting OOD skin images.


Subject(s)
Deep Learning , Skin Diseases , Humans , Skin
11.
Med Image Anal ; 83: 102670, 2023 01.
Article in English | MEDLINE | ID: mdl-36413905

ABSTRACT

Despite achieving promising results in a breadth of medical image segmentation tasks, deep neural networks (DNNs) require large training datasets with pixel-wise annotations. Obtaining these curated datasets is a cumbersome process which limits the applicability of DNNs in scenarios where annotated images are scarce. Mixed supervision is an appealing alternative for mitigating this obstacle. In this setting, only a small fraction of the data contains complete pixel-wise annotations and other images have a weaker form of supervision, e.g., only a handful of pixels are labeled. In this work, we propose a dual-branch architecture, where the upper branch (teacher) receives strong annotations, while the bottom one (student) is driven by limited supervision and guided by the upper branch. Combined with a standard cross-entropy loss over the labeled pixels, our novel formulation integrates two important terms: (i) a Shannon entropy loss defined over the less-supervised images, which encourages confident student predictions in the bottom branch; and (ii) a Kullback-Leibler (KL) divergence term, which transfers the knowledge (i.e., predictions) of the strongly supervised branch to the less-supervised branch and guides the entropy (student-confidence) term to avoid trivial solutions. We show that the synergy between the entropy and KL divergence yields substantial improvements in performance. We also discuss an interesting link between Shannon-entropy minimization and standard pseudo-mask generation, and argue that the former should be preferred over the latter for leveraging information from unlabeled pixels. We evaluate the effectiveness of the proposed formulation through a series of quantitative and qualitative experiments using two publicly available datasets. Results demonstrate that our method significantly outperforms other strategies for semantic segmentation within a mixed-supervision framework, as well as recent semi-supervised approaches. Moreover, in line with recent observations in classification, we show that the branch trained with reduced supervision and guided by the top branch largely outperforms the latter. Our code is publicly available: https://github.com/by-liu/ConfKD.
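
The two loss terms described above can be sketched as follows; the weights are illustrative hyperparameters and the branch interfaces are assumptions.

```python
# Sketch: entropy term on the student branch plus KL distillation from the teacher.
import torch
import torch.nn.functional as F

def mixed_supervision_loss(student_logits, teacher_logits, w_ent=0.1, w_kl=1.0):
    log_p_student = F.log_softmax(student_logits, dim=1)
    p_teacher = F.softmax(teacher_logits.detach(), dim=1)  # teacher guides only
    # (i) Shannon entropy: encourage confident student predictions.
    entropy = -(log_p_student.exp() * log_p_student).sum(dim=1).mean()
    # (ii) KL divergence: transfer teacher predictions to the student branch.
    kl = F.kl_div(log_p_student, p_teacher, reduction='batchmean')
    return w_ent * entropy + w_kl * kl
```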


Subject(s)
Neural Networks, Computer , Semantics , Humans , Entropy
12.
Med Image Anal ; 81: 102567, 2022 10.
Article in English | MEDLINE | ID: mdl-35994969

ABSTRACT

The automatic segmentation of lumbar anatomy is a fundamental problem for the diagnosis and treatment of lumbar disease. The recent development of deep learning techniques has led to remarkable progress in this task, including the possible segmentation of nerve roots, intervertebral discs, and dural sac in a single step. Despite these advances, lumbar anatomy segmentation remains a challenging problem due to the weak contrast and noise of input images, as well as the variability of intensity and size of lumbar structures across different subjects. To overcome these challenges, we propose a coarse-to-fine deep neural network framework for lumbar anatomy segmentation, which obtains a more accurate segmentation using two strategies. First, a progressive refinement process is employed to correct low-confidence regions by enhancing the feature representation in these regions. Second, a grayscale self-adjusting network (GSA-Net) is proposed to optimize the distribution of intensities dynamically. Experiments on datasets comprising 3D computed tomography (CT) and magnetic resonance (MR) images show the advantage of our method over current segmentation approaches and its potential for the diagnosis and treatment of lumbar disease.


Subject(s)
Intervertebral Disc , Magnetic Resonance Imaging , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neural Networks, Computer
13.
Front Neuroimaging ; 1: 930496, 2022.
Article in English | MEDLINE | ID: mdl-37555146

ABSTRACT

The physical and clinical constraints surrounding diffusion-weighted imaging (DWI) often limit the spatial resolution of the produced images to voxels up to eight times larger than those of T1w images. The detailed information contained in accessible high-resolution T1w images could help in the synthesis of diffusion images with a greater level of detail. However, the non-Euclidean nature of diffusion imaging hinders current deep generative models from synthesizing physically plausible images. In this work, we propose the first Riemannian network architecture for the direct generation of diffusion tensors (DT) and diffusion orientation distribution functions (dODFs) from high-resolution T1w images. Our integration of the log-Euclidean metric into a learning objective guarantees, unlike standard Euclidean networks, the mathematically valid synthesis of diffusion. Furthermore, our approach improves the fractional anisotropy mean squared error (FA MSE) between the synthesized diffusion and the ground truth by more than 23%, and the cosine similarity between principal directions by almost 5%, when compared to our baselines. We validate the generated diffusion by comparing the resulting tractograms to those obtained from real data. We observe similar fiber bundles with streamlines having <3% difference in length, <1% difference in volume, and a visually close shape. While our method is able to generate diffusion images from structural inputs in a high-resolution space within 15 s, we acknowledge and discuss the limits of diffusion inference relying solely on T1w images. Our results nonetheless suggest a relationship between the high-level geometry of the brain and its overall white matter architecture that remains to be explored.
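
The log-Euclidean objective can be sketched as a Euclidean loss after a matrix logarithm of the symmetric positive-definite tensors; eigendecomposition is one standard way to compute it. This is a sketch of the metric only, not the paper's full training objective.

```python
# Sketch: log-Euclidean distance between diffusion tensors (SPD matrices).
import torch

def matrix_log_spd(T, eps=1e-8):
    """T: (..., 3, 3) symmetric positive-definite tensors."""
    eigvals, eigvecs = torch.linalg.eigh(T)
    log_vals = torch.log(eigvals.clamp(min=eps))
    return eigvecs @ torch.diag_embed(log_vals) @ eigvecs.transpose(-1, -2)

def log_euclidean_loss(pred_T, true_T):
    # Euclidean MSE in log space keeps synthesized tensors mathematically valid.
    return (matrix_log_spd(pred_T) - matrix_log_spd(true_T)).pow(2).mean()
```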

14.
IEEE Trans Pattern Anal Mach Intell ; 44(2): 864-876, 2022 02.
Article in English | MEDLINE | ID: mdl-33006927

ABSTRACT

Brain surface analysis is essential to neuroscience; however, the complex geometry of the brain cortex hinders computational methods for this task. The difficulty arises from a discrepancy between 3D imaging data, which is represented in Euclidean space, and the non-Euclidean geometry of the highly-convoluted brain surface. Recent advances in machine learning have enabled the use of neural networks for non-Euclidean spaces. These facilitate the learning of surface data, yet pooling strategies often remain constrained to a single fixed graph. This paper proposes a new learnable graph pooling method for processing multiple surface-valued data to output subject-based information. The proposed method innovates by learning an intrinsic aggregation of graph nodes based on graph spectral embedding. We illustrate the advantages of our approach with in-depth experiments on two large-scale benchmark datasets. The ablation study in the paper illustrates the impact of various factors affecting our learnable pooling method. The flexibility of the pooling strategy is evaluated on four different prediction tasks, namely, subject-sex classification, regression of cortical region sizes, classification of Alzheimer's disease stages, and brain age regression. Our experiments demonstrate the superiority of our learnable pooling approach compared to other pooling techniques for graph convolutional networks, with results improving the state-of-the-art in brain surface analysis.
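
A simplified reading of the pooling idea is sketched below: soft node-to-cluster assignments are learned from a graph spectral embedding, then node features are aggregated per cluster. The linear assignment layer is an illustrative simplification of the learned aggregation.

```python
# Sketch: learnable pooling driven by a graph spectral embedding.
import torch
import torch.nn as nn

class SpectralPool(nn.Module):
    def __init__(self, n_spectral=16, n_clusters=32):
        super().__init__()
        self.assign = nn.Linear(n_spectral, n_clusters)

    def forward(self, node_feats, spectral_coords):
        """node_feats: (N, F); spectral_coords: (N, n_spectral) Laplacian eigenvectors."""
        S = torch.softmax(self.assign(spectral_coords), dim=1)  # soft assignments
        return S.t() @ node_feats       # (n_clusters, F) pooled node features
```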


Subject(s)
Algorithms , Alzheimer Disease , Alzheimer Disease/diagnostic imaging , Brain/diagnostic imaging , Humans , Machine Learning , Neural Networks, Computer
15.
IEEE Trans Neural Netw Learn Syst ; 33(1): 3-11, 2022 01.
Article in English | MEDLINE | ID: mdl-34669582

ABSTRACT

This article proposes to encode the distribution of features learned from a convolutional neural network (CNN) using a Gaussian mixture model (GMM). These parametric features, called GMM-CNN, are derived from chest computed tomography (CT) and X-ray scans of patients with coronavirus disease 2019 (COVID-19). We use the proposed GMM-CNN features as input to a robust classifier based on random forests (RFs) to differentiate between COVID-19 and other pneumonia cases. Our experiments assess the advantage of GMM-CNN features compared with standard CNN classification on test images. Using an RF classifier (80% of samples for training; 20% for testing), GMM-CNN features encoded with two mixture components provided a significantly better performance than standard CNN classification. Specifically, our method achieved an accuracy in the range of 96.00%-96.70% and an area under the receiver operating characteristic (ROC) curve in the range of 99.29%-99.45%, with the best performance obtained by combining GMM-CNN features from both CT and X-ray images. Our results suggest that the proposed GMM-CNN features could improve the prediction of COVID-19 in chest CT and X-ray scans.
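
A hedged sketch of the GMM-CNN feature construction is given below using scikit-learn; the diagonal covariance and the concatenation of means, variances and weights are assumptions about the exact parameterization.

```python
# Sketch: Gaussian-mixture encoding of CNN feature maps, fed to a random forest.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import RandomForestClassifier

def gmm_cnn_features(feature_map, n_components=2):
    """feature_map: (H, W, C) CNN activations for one scan."""
    X = feature_map.reshape(-1, feature_map.shape[-1])
    gmm = GaussianMixture(n_components=n_components, covariance_type='diag').fit(X)
    return np.concatenate([gmm.means_.ravel(), gmm.covariances_.ravel(), gmm.weights_])

# X = np.stack([gmm_cnn_features(f) for f in feature_maps])  # one row per patient
# clf = RandomForestClassifier(n_estimators=500).fit(X[train], y[train])
```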


Subject(s)
COVID-19/diagnostic imaging , COVID-19/diagnosis , Algorithms , Diagnosis, Differential , Humans , Neural Networks, Computer , Normal Distribution , Pneumonia/diagnosis , Pneumonia/diagnostic imaging , Predictive Value of Tests , Prognosis , ROC Curve , Reproducibility of Results , Tomography, X-Ray Computed , X-Rays
16.
IEEE Trans Med Imaging ; 41(4): 836-845, 2022 04.
Article in English | MEDLINE | ID: mdl-34699353

ABSTRACT

We propose a novel pairwise distance measure between image keypoint sets, for the purpose of large-scale medical image indexing. Our measure generalizes the Jaccard index to account for soft set equivalence (SSE) between keypoint elements, via an adaptive kernel framework modeling uncertainty in keypoint appearance and geometry. A new kernel is proposed to quantify the variability of keypoint geometry in location and scale. Our distance measure may be estimated between O(N²) image pairs in [Formula: see text] operations via keypoint indexing. Experiments report the first results for the task of predicting family relationships from medical images, using 1010 T1-weighted MRI brain volumes of 434 families including monozygotic and dizygotic twins, siblings and half-siblings sharing 100%-25% of their polymorphic genes. Soft set equivalence and the keypoint geometry kernel improve upon standard hard set equivalence (HSE) and appearance kernels alone in predicting family relationships. Monozygotic twin identification is near 100%, and three subjects with uncertain genotyping are automatically paired with their self-reported families, the first reported practical application of image-based family identification. Our distance measure can also be used to predict group categories; for example, sex is predicted with an AUC of 0.97. Software is provided for efficient fine-grained curation of large, generic image datasets.
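
The soft-set generalization of the Jaccard index can be sketched as below, where a kernel scores how well each keypoint in one set is matched in the other; a Gaussian kernel over appearance descriptors is an illustrative stand-in for the paper's adaptive appearance and geometry kernels.

```python
# Sketch: Jaccard index with soft set equivalence between keypoint sets.
import numpy as np

def soft_jaccard(desc_a, desc_b, bandwidth=1.0):
    """desc_a: (Na, D), desc_b: (Nb, D) keypoint descriptors."""
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * bandwidth ** 2))        # soft equivalence kernel
    # Best soft match for each keypoint, counted in both directions.
    soft_intersection = K.max(axis=1).sum() + K.max(axis=0).sum()
    union = len(desc_a) + len(desc_b)
    return soft_intersection / union              # 1.0 only for identical sets
```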


Subject(s)
Magnetic Resonance Imaging , Twins, Monozygotic , Humans , Neuroimaging , Software
17.
Diagnostics (Basel) ; 11(11)2021 Nov 03.
Article in English | MEDLINE | ID: mdl-34829379

ABSTRACT

Radiomics combined with deep learning models has become popular in computer-aided diagnosis and has outperformed human experts on many clinical tasks. Specifically, radiomic models based on artificial intelligence (AI) use medical data (i.e., images, molecular data, clinical variables, etc.) to predict clinical conditions such as autism spectrum disorder (ASD). In this review, we summarize and discuss the radiomic techniques used for ASD analysis. Currently, the limited radiomic work on ASD concerns variations in morphological features such as brain thickness, which differs from texture analysis. These techniques are based on imaging shape features that can be used with predictive models for ASD. This review explores the progress of ASD-based radiomics, with a brief description of ASD and of the current non-invasive techniques used to distinguish between ASD and healthy control (HC) subjects. New radiomic models using deep learning techniques are also described. To incorporate texture analysis with deep CNNs, further investigations with additional validation steps across multiple MRI sites are suggested.

18.
Med Image Anal ; 74: 102191, 2021 12.
Article in English | MEDLINE | ID: mdl-34509168

ABSTRACT

Image normalization is a building block in medical image analysis. Conventional approaches are customarily employed on a per-dataset basis. This strategy, however, prevents the current normalization algorithms from fully exploiting the complex joint information available across multiple datasets. Consequently, ignoring such joint information has a direct impact on downstream segmentation algorithms. This paper proposes to revisit the conventional image normalization approach by, instead, learning a common normalizing function across multiple datasets. Jointly normalizing multiple datasets is shown to yield consistent normalized images as well as improved image segmentation when intensity shifts are large. To do so, a fully automated adversarial and task-driven normalization approach is employed, as it facilitates the training of realistic and interpretable images while keeping performance on par with the state-of-the-art. The adversarial training of our network aims at finding the optimal transfer function to jointly improve both segmentation accuracy and the generation of realistic images. We have evaluated the performance of our normalizer on both infant and adult brain images from the iSEG, MRBrainS and ABIDE datasets. The results indicate that our contribution does provide improved realism to the normalized images, while retaining a segmentation accuracy on par with state-of-the-art learnable normalization approaches.
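
The training signal for the normalizer can be sketched as the sum of a task-driven segmentation loss and an adversarial realism loss; all module names and the weighting are placeholders, not the paper's interfaces.

```python
# Sketch: adversarial, task-driven update of a normalizer network.
import torch
import torch.nn.functional as F

def normalizer_loss(normalizer, segmenter, discriminator, image, target, w_adv=0.1):
    normalized = normalizer(image)
    seg_loss = F.cross_entropy(segmenter(normalized), target)  # stay useful for the task
    # The discriminator is assumed to output a realism probability in (0, 1).
    adv_loss = -torch.log(discriminator(normalized) + 1e-8).mean()
    return seg_loss + w_adv * adv_loss
```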


Subject(s)
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Adult , Algorithms , Humans
19.
Med Image Anal ; 73: 102146, 2021 10.
Article in English | MEDLINE | ID: mdl-34274692

ABSTRACT

Deep co-training has recently been proposed as an effective approach for image segmentation when annotated data is scarce. In this paper, we improve existing approaches for semi-supervised segmentation with a self-paced and self-consistent co-training method. To help distill information from unlabeled images, we first design a self-paced learning strategy for co-training that lets jointly-trained neural networks focus on easier-to-segment regions first, and then gradually consider harder ones. This is achieved via an end-to-end differentiable loss in the form of a generalized Jensen-Shannon divergence (JSD). Moreover, to encourage predictions from different networks to be both consistent and confident, we enhance this generalized JSD loss with an uncertainty regularizer based on entropy. The robustness of individual models is further improved using a self-ensembling loss that enforces their predictions to be consistent across different training iterations. We demonstrate the potential of our method on three challenging image segmentation problems with different image modalities, using a small fraction of labeled data. Results show clear advantages in terms of performance compared to the standard co-training baselines and recently proposed state-of-the-art approaches for semi-supervised segmentation.
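
The consistency-plus-confidence objective can be sketched as a generalized JSD across the co-trained networks with an extra entropy term; the weighting is an illustrative choice.

```python
# Sketch: generalized Jensen-Shannon divergence with an entropy regularizer.
import torch

def co_training_loss(prob_list, w_conf=0.1, eps=1e-8):
    """prob_list: list of (N, K) softmax outputs from jointly trained networks."""
    def entropy(p):
        p = p.clamp(min=eps)
        return -(p * p.log()).sum(dim=1).mean()
    mean_p = torch.stack(prob_list).mean(dim=0)
    mean_of_entropies = torch.stack([entropy(p) for p in prob_list]).mean()
    jsd = entropy(mean_p) - mean_of_entropies      # consistency across networks
    return jsd + w_conf * mean_of_entropies        # entropy term: stay confident
```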


Subject(s)
Neural Networks, Computer , Supervised Machine Learning , Entropy , Humans , Image Processing, Computer-Assisted , Uncertainty
20.
Front Aging Neurosci ; 13: 633752, 2021.
Article in English | MEDLINE | ID: mdl-34025389

ABSTRACT

Diagnosis of Parkinson's disease (PD) is commonly based on medical observations and assessment of clinical signs, including the characterization of a variety of motor symptoms. However, traditional diagnostic approaches may suffer from subjectivity, as they rely on the evaluation of movements that are sometimes subtle to human eyes and therefore difficult to classify, leading to possible misclassification. Meanwhile, early non-motor symptoms of PD may be mild and can be caused by many other conditions. These symptoms are therefore often overlooked, making diagnosis of PD at an early stage challenging. To address these difficulties and to refine the diagnosis and assessment procedures of PD, machine learning methods have been implemented for the classification of PD and healthy controls or patients with similar clinical presentations (e.g., movement disorders or other Parkinsonian syndromes). To provide a comprehensive overview of the data modalities and machine learning methods used in the diagnosis and differential diagnosis of PD, we conducted a literature review of studies published until February 14, 2020, using the PubMed and IEEE Xplore databases. A total of 209 studies were included and examined for relevant information on their aims, sources of data, types of data, machine learning methods and associated outcomes. These studies demonstrate a high potential for the adoption of machine learning methods and novel biomarkers in clinical decision making, leading to increasingly systematic, informed diagnosis of PD.
