Results 1 - 20 of 25
1.
Brief Bioinform ; 23(3)2022 05 13.
Article in English | MEDLINE | ID: mdl-35453146

ABSTRACT

Many DNA methylation (DNAm) datasets come from tissues composed of various cell types, and hence cell deconvolution methods are needed to infer their cell compositions accurately. However, a bottleneck for DNAm data is the lack of cell-type-specific DNAm references. On the other hand, scRNA-seq data are accumulating rapidly, with various cell-type transcriptomic signatures characterized, and many paired bulk RNA-DNAm datasets are now publicly available. Hence, we developed the R package scDeconv to use these resources to solve the reference deficiency problem of DNAm data and deconvolve them from scRNA-seq data in a trans-omics manner. It assumes that paired samples have similar cell compositions, so the cell content information deconvolved from the scRNA-seq and paired RNA data can be transferred to the paired DNAm samples. An ensemble model is then trained to fit these cell contents with DNAm features and adjust the paired RNA deconvolution in a co-training manner. Finally, the model can be used on other bulk DNAm data to predict their relative cell-type abundances. The effectiveness of this method is demonstrated by its accurate deconvolution of the three test datasets here, and, given an appropriate paired dataset, scDeconv can also deconvolve other omics, such as ATAC-seq data. Furthermore, the package contains other functions, such as identifying cell-type-specific inter-group differential features from bulk DNAm data. scDeconv is available at: https://github.com/yuabrahamliu/scDeconv.
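As a rough illustration of the trans-omics idea (not the scDeconv R API; the NNLS deconvolution, random-forest regressor, and all names below are simplifying assumptions), the paired RNA samples can be deconvolved against a scRNA-seq-derived signature matrix and the resulting fractions used as training targets for a DNAm-based model:

```python
# Conceptual sketch only: transfer cell fractions from paired RNA samples
# to DNAm samples via a learned DNAm -> fraction regressor.
import numpy as np
from scipy.optimize import nnls
from sklearn.ensemble import RandomForestRegressor

def deconvolve_rna(bulk_rna, signature):
    """Estimate cell fractions for each bulk RNA sample with non-negative
    least squares against a scRNA-seq-derived signature matrix (genes x cell types)."""
    fractions = []
    for sample in bulk_rna.T:                             # samples are columns
        coef, _ = nnls(signature, sample)
        fractions.append(coef / max(coef.sum(), 1e-12))   # normalize to sum to 1
    return np.array(fractions)                            # samples x cell types

# Toy shapes: 200 genes, 4 cell types, 30 paired samples, 500 CpG features.
rng = np.random.default_rng(0)
signature = rng.random((200, 4))
paired_rna = rng.random((200, 30))
paired_dnam = rng.random((30, 500))
new_dnam = rng.random((10, 500))

fractions = deconvolve_rna(paired_rna, signature)

# Paired samples are assumed to share cell composition, so the RNA-derived
# fractions become training targets for a DNAm-based model.
dnam_model = RandomForestRegressor(n_estimators=200, random_state=0)
dnam_model.fit(paired_dnam, fractions)
print(dnam_model.predict(new_dnam))                       # fractions for new DNAm data
```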


Subjects
RNA, Single-Cell Analysis, DNA Methylation, Gene Expression Profiling, RNA/genetics, RNA-Seq, Sequence Analysis, RNA
2.
Sensors (Basel) ; 24(7)2024 Mar 24.
Article in English | MEDLINE | ID: mdl-38610284

ABSTRACT

For decades, soft sensors have been widely valued for their efficiency in real-time tracking of expensive-to-measure variables for advanced process control. However, despite the diverse efforts devoted to enhancing their models, label sparsity has always posed challenges when modeling soft sensors across various processes. In this paper, the co-training technique is studied for leveraging only a small ratio of labeled data to formulate a more advantageous framework for soft sensor modeling. Unlike the conventional routine in which only two players are employed, we investigate the effective number of players in batch processes, yielding a multiple-player learning scheme that mitigates the sparsity issue. Meanwhile, a sliding window spanning both the time and batch directions is used to aggregate the samples for prediction and to account for the distinctive 2D correlations in general batch process data. Altogether, the resulting framework outperforms other prevalent methods, especially as the ratio of unlabeled data climbs, and two case studies are presented to demonstrate its effectiveness.
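A minimal Python sketch of a multi-player co-training loop for a regression-style soft sensor is given below; the feature views, the agreement criterion (low inter-player variance), and the gradient-boosting learners are illustrative assumptions, not the paper's batch-process framework:

```python
# Multi-player co-training for regression: each player labels the unlabeled
# samples on which the ensemble agrees most, then all players are retrained.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def cotrain_regression(X_lab, y_lab, X_unlab, views, rounds=3, add_per_round=20):
    """views: list of column-index arrays, one feature view per player."""
    models = [GradientBoostingRegressor(random_state=i) for i in range(len(views))]
    X_l, y_l, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(rounds):
        for m, v in zip(models, views):
            m.fit(X_l[:, v], y_l)
        if len(pool) == 0:
            break
        preds = np.stack([m.predict(pool[:, v]) for m, v in zip(models, views)])
        agreement = preds.std(axis=0)                  # low std = players agree
        keep = np.argsort(agreement)[:add_per_round]
        X_l = np.vstack([X_l, pool[keep]])
        y_l = np.concatenate([y_l, preds.mean(axis=0)[keep]])  # pseudo-labels
        pool = np.delete(pool, keep, axis=0)
    return models

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))
y = X[:, 0] * 2 + X[:, 5] + rng.normal(0, 0.1, 300)
views = [np.arange(0, 5), np.arange(5, 10), np.arange(2, 8)]   # three players
models = cotrain_regression(X[:30], y[:30], X[30:], views)     # 30 labeled samples
```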

3.
Entropy (Basel) ; 26(4)2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38667882

ABSTRACT

Automatic crack segmentation plays an essential role in maintaining the structural health of buildings and infrastructure. Despite the success in fully supervised crack segmentation, the costly pixel-level annotation restricts its application, leading to increased exploration in weakly supervised crack segmentation (WSCS). However, WSCS methods inevitably bring in noisy pseudo-labels, which results in large fluctuations. To address this problem, we propose a novel confidence-aware co-training (CAC) framework for WSCS. This framework aims to iteratively refine pseudo-labels, facilitating the learning of a more robust segmentation model. Specifically, a co-training mechanism is designed and constructs two collaborative networks to learn uncertain crack pixels, from easy to hard. Moreover, the dynamic division strategy is designed to divide the pseudo-labels based on the crack confidence score. Among them, the high-confidence pseudo-labels are utilized to optimize the initialization parameters for the collaborative network, while low-confidence pseudo-labels enrich the diversity of crack samples. Extensive experiments conducted on the Crack500, DeepCrack, and CFD datasets demonstrate that the proposed CAC significantly outperforms other WSCS methods.
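The dynamic division step can be pictured with a short sketch; the threshold schedule and function names are assumptions for illustration only, not the exact CAC formulation:

```python
# Confidence-based dynamic division of pseudo-labels into easy/hard sets.
import numpy as np

def divide_pseudo_labels(prob_map, epoch, max_epoch, t_lo=0.6, t_hi=0.95):
    """prob_map: per-pixel crack probability from the current model.
    The high-confidence threshold relaxes from t_hi toward t_lo over training,
    so more pixels migrate from the 'hard' to the 'easy' set."""
    thr = t_hi - (t_hi - t_lo) * (epoch / max_epoch)
    confidence = np.maximum(prob_map, 1.0 - prob_map)  # confidence of either class
    high_mask = confidence >= thr                      # optimize network initialization
    low_mask = ~high_mask                              # kept for crack-sample diversity
    pseudo = (prob_map >= 0.5).astype(np.uint8)
    return pseudo, high_mask, low_mask

probs = np.random.default_rng(2).random((64, 64))
pseudo, easy, hard = divide_pseudo_labels(probs, epoch=10, max_epoch=100)
```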

4.
Sensors (Basel) ; 23(4)2023 Feb 13.
Article in English | MEDLINE | ID: mdl-36850703

ABSTRACT

Recently, convolutional neural networks (CNNs) have shown significant advantages in the tasks of image classification; however, these usually require a large number of labeled samples for training. In practice, it is difficult and costly to obtain sufficient labeled samples of polarimetric synthetic aperture radar (PolSAR) images. To address this problem, we propose a novel semi-supervised classification method for PolSAR images in this paper, using the co-training of CNN and a support vector machine (SVM). In our co-training method, an eight-layer CNN with residual network (ResNet) architecture is designed as the primary classifier, and an SVM is used as the auxiliary classifier. In particular, the SVM is used to enhance the performance of our algorithm in the case of limited labeled samples. In our method, more and more pseudo-labeled samples are iteratively yielded for training through a two-stage co-training of CNN and SVM, which gradually improves the performance of the two classifiers. The trained CNN is employed as the final classifier due to its strong classification capability with enough samples. We carried out experiments on two C-band airborne PolSAR images acquired by the AIRSAR systems and an L-band spaceborne PolSAR image acquired by the GaoFen-3 system. The experimental results demonstrate that the proposed method can effectively integrate the complementary advantages of SVM and CNN, providing overall classification accuracy of more than 97%, 96% and 93% with limited labeled samples (10 samples per class) for the above three images, respectively, which is superior to the state-of-the-art semi-supervised methods for PolSAR image classification.
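A simplified stand-in for the two-classifier exchange is sketched below, with an sklearn MLP playing the role of the CNN and an SVM as the auxiliary classifier; the confidence-based selection rule is an assumption, not the paper's exact two-stage procedure:

```python
# Two heterogeneous classifiers iteratively add each other's most confident
# pseudo-labeled samples to a shared training pool.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def cotrain(X_l, y_l, X_u, rounds=5, per_round=10):
    mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    svm = SVC(probability=True, random_state=0)
    for _ in range(rounds):
        mlp.fit(X_l, y_l)
        svm.fit(X_l, y_l)
        if len(X_u) == 0:
            break
        new_idx, new_lab = [], []
        for clf in (mlp, svm):
            proba = clf.predict_proba(X_u)
            top = np.argsort(proba.max(axis=1))[-per_round:]   # most confident samples
            new_idx.append(top)
            new_lab.append(clf.classes_[proba[top].argmax(axis=1)])
        idx, lab = np.concatenate(new_idx), np.concatenate(new_lab)
        X_l = np.vstack([X_l, X_u[idx]])
        y_l = np.concatenate([y_l, lab])
        X_u = np.delete(X_u, np.unique(idx), axis=0)
    return mlp   # the CNN stand-in is kept as the final classifier

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = cotrain(X[:20], y[:20], X[20:])   # only 20 labeled samples
```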

5.
Sensors (Basel) ; 21(9)2021 May 04.
Article in English | MEDLINE | ID: mdl-34064323

ABSTRACT

Top-performing computer vision models are powered by convolutional neural networks (CNNs). Training an accurate CNN highly depends on both the raw sensor data and their associated ground truth (GT). Collecting such GT is usually done through human labeling, which is time-consuming and does not scale as we wish. This data-labeling bottleneck may be intensified due to domain shifts among image sensors, which could force per-sensor data labeling. In this paper, we focus on the use of co-training, a semi-supervised learning (SSL) method, for obtaining self-labeled object bounding boxes (BBs), i.e., the GT to train deep object detectors. In particular, we assess the goodness of multi-modal co-training by relying on two different views of an image, namely, appearance (RGB) and estimated depth (D). Moreover, we compare appearance-based single-modal co-training with multi-modal. Our results suggest that in a standard SSL setting (no domain shift, a few human-labeled data) and under virtual-to-real domain shift (many virtual-world labeled data, no human-labeled data) multi-modal co-training outperforms single-modal. In the latter case, by performing GAN-based domain translation both co-training modalities are on par, at least when using an off-the-shelf depth estimation model not specifically trained on the translated images.

6.
Entropy (Basel) ; 23(4)2021 Apr 01.
Article in English | MEDLINE | ID: mdl-33916017

ABSTRACT

Automatic recognition of visual objects using a deep learning approach has been successfully applied to multiple areas. However, deep learning techniques require a large amount of labeled data, which is usually expensive to obtain. An alternative is to use semi-supervised models, such as co-training, where multiple complementary views are combined using a small amount of labeled data. A simple way to associate views with visual objects is through the application of a degree of rotation or a type of filter. In this work, we propose a co-training model for visual object recognition using deep neural networks by adding layers of self-supervised neural networks as intermediate inputs to the views, where the views are diversified through the cross-entropy regularization of their outputs. Since the model merges the concepts of co-training and self-supervised learning by considering the differentiation of outputs, we call it Differential Self-Supervised Co-Training (DSSCo-Training). This paper presents experiments using the DSSCo-Training model on well-known image datasets such as MNIST, CIFAR-100, and SVHN. The results indicate that the proposed model is competitive with state-of-the-art models and shows an average relative improvement of 5% in accuracy for several datasets, despite its greater simplicity with respect to more recent approaches.

7.
Hum Genomics ; 13(Suppl 1): 43, 2019 10 22.
Article in English | MEDLINE | ID: mdl-31639051

ABSTRACT

BACKGROUND: MicroRNAs (miRNAs) are a family of short, non-coding RNAs that have been linked to critical cellular activities, most notably regulation of gene expression. The identification of miRNA is a cross-disciplinary approach that requires both computational identification methods and wet-lab validation experiments, making it a resource-intensive procedure. While numerous machine learning methods have been developed to increase classification accuracy and thus reduce validation costs, most methods use supervised learning and thus require large labeled training data sets, often not feasible for less-sequenced species. On the other hand, there is now an abundance of unlabeled RNA sequence data due to the emergence of high-throughput wet-lab experimental procedures, such as next-generation sequencing. RESULTS: This paper explores the application of semi-supervised machine learning for miRNA classification in order to maximize the utility of both labeled and unlabeled data. We here present the novel combination of two semi-supervised approaches: active learning and multi-view co-training. Results across six diverse species show that this multi-stage semi-supervised approach is able to improve classification performance using very small numbers of labeled instances, effectively leveraging the available unlabeled data. CONCLUSIONS: The proposed semi-supervised miRNA classification pipeline holds the potential to identify novel miRNA with high recall and precision while requiring very small numbers of previously known miRNA. Such a method could be highly beneficial when studying miRNA in newly sequenced genomes of niche species with few known examples of miRNA.


Subjects
Algorithms, MicroRNAs/classification, Supervised Machine Learning, Animals, Area Under Curve, Humans, Learning Curve, MicroRNAs/genetics, Problem-Based Learning
8.
J Biomed Inform ; 87: 21-36, 2018 11.
Article in English | MEDLINE | ID: mdl-30240803

ABSTRACT

In online health expert question-answering (HQA) services, it is important to automatically determine the quality of the answers. There are two prominent challenges in this task. First, the answers are usually written in short text, which makes it difficult to capture the text's semantic information. Second, the task usually lacks sufficient labeled data but contains a huge amount of unlabeled data. To tackle these challenges, we propose a novel deep co-training framework based on factorization machines (FM) and deep textual views to intelligently and automatically identify answer quality in HQA services. More specifically, we exploit additional domain-specific semantic information from domain-specific word embeddings to expand the semantic space of short text and apply FM to excavate the non-independent interaction relationships among diverse features within individual views for improving the performance of the base classifier via co-training. Our learned deep textual views, the convolutional neural networks (CNN) view which focuses on extracting local features using convolution filters to locally model short text and the dependency-sensitive convolutional neural networks (DSCNN) view which focuses on capturing long-distance dependency information within the text to globally model short text, can then overcome the challenge of feature sparseness in the short text answers from the doctors. The developed co-training framework can effectively mine the highly non-linear semantic information embedded in the unlabeled data and expose the highly non-linear relationships between different views, which minimizes the labeling effort. Finally, we conduct extensive empirical evaluations and demonstrate that our proposed method can significantly improve the predictive performance of the answer quality in the context of HQA services.


Subjects
Internet, Neural Networks, Computer, Software, Telemedicine/methods, Algorithms, Communication, Humans, Machine Learning, Predictive Value of Tests, Semantics
9.
J Imaging ; 10(5)2024 May 14.
Article in English | MEDLINE | ID: mdl-38786572

ABSTRACT

In the realm of medical image analysis, the cost associated with acquiring accurately labeled data is prohibitively high. To address the issue of label scarcity, semi-supervised learning methods are employed, utilizing unlabeled data alongside a limited set of labeled data. This paper presents a novel semi-supervised medical segmentation framework, DCCLNet (deep consistency collaborative learning UNet), grounded in deep consistent co-learning. The framework synergistically integrates consistency learning from feature and input perturbations, coupled with collaborative training between CNN (convolutional neural networks) and ViT (vision transformer), to capitalize on the learning advantages offered by these two distinct paradigms. Feature perturbation involves the application of auxiliary decoders with varied feature disturbances to the main CNN backbone, enhancing the robustness of the CNN backbone through consistency constraints generated by the auxiliary and main decoders. Input perturbation employs an MT (mean teacher) architecture wherein the main network serves as the student model guided by a teacher model subjected to input perturbations. Collaborative training aims to improve the accuracy of the main networks by encouraging mutual learning between the CNN and ViT. Experiments conducted on publicly available datasets for ACDC (automated cardiac diagnosis challenge) and Prostate datasets yielded Dice coefficients of 0.890 and 0.812, respectively. Additionally, comprehensive ablation studies were performed to demonstrate the effectiveness of each methodological contribution in this study.
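The mean-teacher component used for input perturbation can be sketched as follows; the toy architecture, noise level, and EMA decay are illustrative assumptions rather than DCCLNet's configuration:

```python
# Mean-teacher consistency: the teacher is an exponential moving average (EMA)
# of the student weights and only the student receives gradients.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

student = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 2, 1))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)

@torch.no_grad()
def ema_update(teacher, student, decay=0.99):
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(decay).add_(s, alpha=1.0 - decay)

x = torch.randn(2, 1, 32, 32)
noisy_x = x + 0.1 * torch.randn_like(x)            # input perturbation
consistency = F.mse_loss(torch.softmax(student(x), dim=1),
                         torch.softmax(teacher(noisy_x), dim=1))
consistency.backward()                             # gradients flow to the student only
ema_update(teacher, student)                       # teacher tracks the student
```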

10.
Heliyon ; 10(12): e33332, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-39022081

ABSTRACT

Particulate matter (PM) is defined by the Texas Commission on Environmental Quality (TCEQ) as "a mixture of solid particles and liquid droplets found in the air". These particles vary widely in size; those less than 2.5 µm in aerodynamic diameter are known as particulate matter 2.5, or PM2.5. Urban haze pollution, represented by PM2.5, is becoming serious, so air pollution monitoring is very important. However, due to high cost, the number of air monitoring stations is limited. Our work focuses on integrating multi-source heterogeneous data for Nanchang, China, including taxi tracks, human mobility, road networks, points of interest (POIs), meteorology (e.g., temperature, dew point, humidity, wind speed, wind direction, atmospheric pressure, weather activity, weather conditions) and PM2.5 forecast data from air monitoring stations. This research presents an innovative approach to air quality prediction by integrating the above data sets from various sources and utilizing diverse architectures in Nanchang City, China. To this end, semi-supervised learning techniques are used, namely the collaborative training algorithm Co-Training (Co-T) and its refinement, Tri-Training (Tri-T). The objective is to accurately estimate haze pollution by integrating and using these multi-source heterogeneous data. We achieved this for the first time by employing a semi-supervised co-training strategy to accurately estimate pollution levels after applying the U-Air system to environmental data. In particular, the U-Air algorithm is reproduced on these highly diverse heterogeneous data of Nanchang City, and the semi-supervised Co-T and Tri-T algorithms are used to conduct a more detailed prediction of urban haze pollution. Compared with Co-T, which trains a time classifier (TC) and a subspace classifier (SC) separately from the temporal and spatial perspectives, Tri-T is more accurate and faster, with a testing accuracy of up to 85.62%. The forecast results also demonstrate the potential of the city's multi-source heterogeneous data and the effectiveness of semi-supervised learning. We hope that this synthesis will motivate atmospheric environmental officials, scientists, and environmentalists in China to explore machine learning technology for controlling the discharge of pollutants and for environmental management.
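A generic Tri-Training sketch (not the exact U-Air/Tri-T pipeline) shows the core rule: an unlabeled sample is pseudo-labeled for one classifier whenever the other two agree on it:

```python
# Tri-Training: three classifiers bootstrapped from the labeled set; each one is
# retrained with the unlabeled samples on which its two peers agree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

def tri_training(X_l, y_l, X_u, rounds=3):
    clfs = [DecisionTreeClassifier(random_state=i).fit(
                *resample(X_l, y_l, random_state=i)) for i in range(3)]
    for _ in range(rounds):
        for i in range(3):
            j, k = [m for m in range(3) if m != i]
            pj, pk = clfs[j].predict(X_u), clfs[k].predict(X_u)
            agree = pj == pk                        # the two peers agree
            if agree.any():
                X_aug = np.vstack([X_l, X_u[agree]])
                y_aug = np.concatenate([y_l, pj[agree]])
                clfs[i] = DecisionTreeClassifier(random_state=i).fit(X_aug, y_aug)
    return clfs

def predict_majority(clfs, X):
    votes = np.stack([c.predict(X) for c in clfs])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 8))
y = (X[:, 0] > 0).astype(int)
print(predict_majority(tri_training(X[:15], y[:15], X[15:]), X[:5]))
```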

11.
Phys Med Biol ; 68(21)2023 10 25.
Article in English | MEDLINE | ID: mdl-37567214

ABSTRACT

Objective. Accurate left atrial segmentation is the basis of the recognition and clinical analysis of atrial fibrillation. Supervised learning has achieved some competitive segmentation results, but the high annotation cost often limits its performance. Semi-supervised learning is implemented from limited labeled data and a large amount of unlabeled data and shows good potential in solving practical medical problems. Approach. In this study, we proposed a collaborative training framework for multi-scale uncertain entropy perception (MUE-CoT) and achieved efficient left atrial segmentation from a small amount of labeled data. Based on the pyramid feature network, learning is implemented from unlabeled data by minimizing the pyramid prediction difference. In addition, novel loss constraints are proposed for co-training in the study. The diversity loss is defined as a soft constraint so as to accelerate the convergence, and a novel multi-scale uncertainty entropy calculation method and a consistency regularization term are proposed to measure the consistency between prediction results. The quality of pseudo-labels cannot be guaranteed in the pre-training period, so a confidence-dependent empirical Gaussian function is proposed to weight the pseudo-supervised loss. Main results. The experimental results of a publicly available dataset and an in-house clinical dataset proved that our method outperformed existing semi-supervised methods. For the two datasets with a labeled ratio of 5%, the Dice similarity coefficient scores were 84.94% ± 4.31 and 81.24% ± 2.4, the HD95 values were 4.63 mm ± 2.13 and 3.94 mm ± 2.72, and the Jaccard similarity coefficient scores were 74.00% ± 6.20 and 68.49% ± 3.39, respectively. Significance. The proposed model effectively addresses the challenges of limited data samples and high costs associated with manual annotation in the medical field, leading to enhanced segmentation accuracy.
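One plausible reading of the confidence-dependent empirical Gaussian weighting is sketched below; the exact functional form and parameters used by MUE-CoT are not reproduced here and should be treated as assumptions:

```python
# Gaussian-of-confidence weighting for the pseudo-supervised loss: low-confidence
# pixels are down-weighted while pseudo-label quality is still poor.
import numpy as np

def gaussian_confidence_weight(prob_map, sigma=0.25):
    """prob_map: (C, H, W) softmax output; returns a per-pixel weight in (0, 1]."""
    confidence = np.max(prob_map, axis=0)           # per-pixel max class probability
    return np.exp(-((1.0 - confidence) ** 2) / (2.0 * sigma ** 2))

probs = np.random.default_rng(4).dirichlet([1, 1, 1], size=(64, 64)).transpose(2, 0, 1)
weights = gaussian_confidence_weight(probs)         # multiply into the pseudo loss
```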


Subjects
Atrial Fibrillation, Humans, Entropy, Uncertainty, Heart Atria, Normal Distribution, Image Processing, Computer-Assisted
12.
Med Phys ; 50(7): 4269-4281, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36636813

ABSTRACT

BACKGROUND: Semi-supervised learning is becoming an effective solution for medical image segmentation because of the lack of a large amount of labeled data. PURPOSE: Consistency-based strategy is widely used in semi-supervised learning. However, it is still a challenging problem because of the coupling of CNN-based isomorphic models. In this study, we propose a new semi-supervised medical image segmentation network (DRS-Net) based on a dual-regularization scheme to address this challenge. METHODS: The proposed model consists of a CNN and a multidecoder hybrid Transformer, which adopts two regularization schemes to extract more generalized representations for unlabeled data. Considering the difference in learning paradigm, we introduce the cross-guidance between CNN and hybrid Transformer, which uses the pseudo label output from one model to supervise the other model better to excavate valid representations from unlabeled data. In addition, we use feature-level consistency regularization to effectively improve the feature extraction performance. We apply different perturbations to the feature maps output from the hybrid Transformer encoder and keep an invariance of the predictions to enhance the encoder's representations. RESULTS: We have extensively evaluated our approach on three typical medical image datasets, including CT slices from Spleen, MRI slices from the Heart, and FM Nuclei. We compare DRS-Net with state-of-the-art methods, and experiment results show that DRS-Net performs better on the Spleen dataset, where the dice similarity coefficient increased by about 3.5%. The experimental results on the Heart and Nuclei datasets show that DRS-Net also improves the segmentation effect of the two datasets. CONCLUSIONS: The proposed DRS-Net enhances the segmentation performance of the datasets with three different medical modalities, where the dual-regularization scheme extracts more generalized representations and solves the overfitting problem.


Subjects
Cell Nucleus, Heart, Spleen, Supervised Machine Learning, Image Processing, Computer-Assisted
13.
Bioengineering (Basel) ; 10(7)2023 Jul 21.
Article in English | MEDLINE | ID: mdl-37508896

ABSTRACT

Medical image segmentation has made significant progress when a large amount of labeled data are available. However, annotating medical image segmentation datasets is expensive due to the requirement of professional skills. Additionally, classes are often unevenly distributed in medical images, which severely affects the classification performance on minority classes. To address these problems, this paper proposes Co-Distribution Alignment (Co-DA) for semi-supervised medical image segmentation. Specifically, Co-DA aligns marginal predictions on unlabeled data to marginal predictions on labeled data in a class-wise manner with two differently initialized models before using the pseudo-labels generated by one model to supervise the other. Besides, we design an over-expectation cross-entropy loss for filtering the unlabeled pixels to reduce noise in their pseudo-labels. Quantitative and qualitative experiments on three public datasets demonstrate that the proposed approach outperforms existing state-of-the-art semi-supervised medical image segmentation methods on both the 2D CaDIS dataset and the 3D LGE-MRI and ACDC datasets, achieving an mIoU of 0.8515 with only 24% labeled data on CaDIS, and a Dice score of 0.8824 and 0.8773 with only 20% data on LGE-MRI and ACDC, respectively.
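The class-wise alignment of marginal predictions can be sketched as a simple rescaling step; this generic recipe is an assumption and omits Co-DA's over-expectation cross-entropy loss and dual-model supervision:

```python
# Distribution alignment: rescale unlabeled predictions so their running marginal
# matches the class marginal observed on labeled data, then renormalize.
import numpy as np

def align_predictions(p_unlab, marginal_labeled, marginal_unlab_running):
    """p_unlab: (N, C) softmax predictions on unlabeled pixels/samples."""
    ratio = marginal_labeled / np.clip(marginal_unlab_running, 1e-8, None)
    aligned = p_unlab * ratio                       # class-wise rescaling
    return aligned / aligned.sum(axis=1, keepdims=True)

rng = np.random.default_rng(5)
p_u = rng.dirichlet([2, 1, 1], size=1000)           # model over-predicts class 0
m_lab = np.array([0.3, 0.4, 0.3])                   # marginal on labeled data
m_unl = p_u.mean(axis=0)                            # running unlabeled marginal
p_aligned = align_predictions(p_u, m_lab, m_unl)
pseudo_labels = p_aligned.argmax(axis=1)            # less biased toward the majority class
```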

14.
Comput Biol Med ; 157: 106736, 2023 05.
Article in English | MEDLINE | ID: mdl-36958238

ABSTRACT

BACKGROUND AND OBJECTIVE: Abundant labeled data drives model training for better performance, but collecting sufficient labels is still challenging. To alleviate the pressure of label collection, semi-supervised learning merges unlabeled data into the training process. However, introducing unlabeled data (e.g., data from different hospitals with different acquisition parameters) changes the original distribution. Such a distribution shift perturbs the training process and can lead to confirmation bias. In this paper, we study distribution shift and develop methods to increase model robustness to it, with the goal of improving practical performance in semi-supervised semantic segmentation of medical images. METHODS: To alleviate the issue of distribution shift, we introduce adversarial training into the co-training process. We simulate perturbations caused by the distribution shift via adversarial perturbations and use them to attack the supervised training, improving robustness against the distribution shift. Benefiting from label guidance, supervised training does not collapse under adversarial attacks. For co-training, two sub-models are trained from two views (over two disjoint subsets of the dataset) to extract different kinds of knowledge independently. Co-training outperforms a single model by integrating both views of knowledge to avoid confirmation bias. RESULTS: For practicality, we conduct extensive experiments on challenging medical datasets. Experimental results show desirable improvements over state-of-the-art counterparts (Yu and Wang, 2019; Peng et al., 2020; Perone et al., 2019). We achieve a DSC score of 87.37% with only 20% of labels on the ACDC dataset, almost the same as using 100% of labels. On the SCGM dataset, which exhibits more distribution shift, we achieve a DSC score of 78.65% with 6.5% of labels, surpassing Peng et al. (2020) by 10.30%. Our evaluative results show superior robustness against distribution shifts in medical scenarios. CONCLUSION: Empirical results show the effectiveness of our work for handling distribution shift in medical scenarios.
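A minimal FGSM-style sketch of attacking the supervised branch with adversarial perturbations is shown below; the toy model, epsilon, and segmentation-style targets are assumptions, not the paper's exact attack or co-training wiring:

```python
# Gradient-sign (FGSM-style) perturbation of labeled inputs to simulate a
# distribution shift, followed by label-guided training on the perturbed batch.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 2, 1))
x = torch.randn(4, 1, 32, 32, requires_grad=True)   # labeled images
y = torch.randint(0, 2, (4, 32, 32))                # per-pixel labels

loss = F.cross_entropy(model(x), y)
loss.backward()                                      # populates x.grad
with torch.no_grad():
    x_adv = x + 0.03 * x.grad.sign()                 # adversarial perturbation

model.zero_grad()
adv_loss = F.cross_entropy(model(x_adv), y)          # supervised training on the
adv_loss.backward()                                  # perturbed (shifted) inputs
```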


Subjects
Hospitals, Semantics, Supervised Machine Learning, Image Processing, Computer-Assisted
15.
Med Image Anal ; 89: 102933, 2023 10.
Article in English | MEDLINE | ID: mdl-37611532

ABSTRACT

Nuclei segmentation is a crucial task for whole slide image analysis in digital pathology. Generally, the segmentation performance of fully-supervised learning heavily depends on the amount and quality of the annotated data. However, it is time-consuming and expensive for professional pathologists to provide accurate pixel-level ground truth, while it is much easier to get coarse labels such as point annotations. In this paper, we propose a weakly-supervised learning method for nuclei segmentation that only requires point annotations for training. First, coarse pixel-level labels are derived from the point annotations based on the Voronoi diagram and the k-means clustering method to avoid overfitting. Second, a co-training strategy with an exponential moving average method is designed to refine the incomplete supervision of the coarse labels. Third, a self-supervised visual representation learning method is tailored for nuclei segmentation of pathology images that transforms the hematoxylin component images into the H&E stained images to gain better understanding of the relationship between the nuclei and cytoplasm. We comprehensively evaluate the proposed method using two public datasets. Both visual and quantitative results demonstrate the superiority of our method to the state-of-the-art methods, and its competitive performance compared to the fully-supervised methods. Codes are available at https://github.com/hust-linyi/SC-Net.
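The Voronoi step for turning point annotations into coarse labels can be sketched with a nearest-point partition; the k-means colour clustering, EMA-based co-training, and stain-transformation components of SC-Net are omitted here:

```python
# Derive a Voronoi partition from point annotations: each pixel is assigned to
# its nearest annotated nucleus centre.
import numpy as np
from scipy.spatial import cKDTree

def voronoi_labels(points, shape):
    """points: (K, 2) array of annotated nucleus centres (row, col)."""
    rows, cols = np.indices(shape)
    pixels = np.stack([rows.ravel(), cols.ravel()], axis=1)
    _, nearest = cKDTree(points).query(pixels)       # index of the closest point
    return nearest.reshape(shape)                    # Voronoi cell id per pixel

points = np.array([[10, 12], [40, 50], [70, 20]])
cells = voronoi_labels(points, (96, 96))
# Ridges between cells (where the label changes) approximate background edges,
# while regions around the points provide coarse nucleus supervision.
```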


Subjects
Cell Nucleus, Image Processing, Computer-Assisted, Humans, Hematoxylin, Supervised Machine Learning
16.
Med Phys ; 49(3): 1723-1738, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35061247

ABSTRACT

PURPOSE: To develop and validate a neovascularization (NV) segmentation model for intravascular optical coherence tomography (IVOCT) using deep learning methods. METHODS AND MATERIALS: A total of 1950 2D slices of 70 IVOCT pullbacks were used in our study. We randomly selected 1273 2D slices from 44 patients as the training set, 379 2D slices from 11 patients as the validation set, and 298 2D slices from the remaining 15 patients as the testing set. Automatic NV segmentation is quite challenging, as it must address issues of speckle noise, shadow artifacts, high distribution variation, etc. To meet these challenges, a new deep learning-based segmentation method is developed based on a co-training architecture with an integrated structural attention mechanism. Co-training is developed to exploit the features of three consecutive slices. The structural attention mechanism comprises spatial and channel attention modules and is integrated into the co-training architecture at each up-sampling step. A cascaded fixed network is further incorporated to achieve segmentation at the image level in a coarse-to-fine manner. RESULTS: Extensive experiments were performed, involving a comparison with several state-of-the-art deep learning-based segmentation methods. Moreover, the consistency of the results with those of manual segmentation was also investigated. Our proposed automatic NV segmentation method achieved the highest correlation with the manual delineation by interventional cardiologists (Pearson correlation coefficient of 0.825). CONCLUSION: In this work, we proposed a co-training architecture with an integrated structural attention mechanism to segment NV in IVOCT images. The good agreement between our segmentation results and manual segmentation indicates that the proposed method has great potential for application in the clinical investigation of NV-related plaque diagnosis and treatment.
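Illustrative channel- and spatial-attention modules in the spirit of the structural attention mechanism are sketched below; the layer sizes and pooling choices are assumptions rather than the paper's exact configuration:

```python
# Channel attention re-weights feature channels from a global pooled descriptor;
# spatial attention re-weights locations from channel-wise mean/max maps.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(channels, channels // reduction),
                                 nn.ReLU(),
                                 nn.Linear(channels // reduction, channels))
    def forward(self, x):
        avg = x.mean(dim=(2, 3))                     # global average pool per channel
        w = torch.sigmoid(self.mlp(avg)).unsqueeze(-1).unsqueeze(-1)
        return x * w

class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)
    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)            # channel-wise mean map
        mx, _ = x.max(dim=1, keepdim=True)           # channel-wise max map
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w

feat = torch.randn(1, 16, 64, 64)
out = SpatialAttention()(ChannelAttention(16)(feat))  # attention-refined features
```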


Subjects
Plaque, Atherosclerotic, Optical Coherence Tomography, Artifacts, Humans, Image Processing, Computer-Assisted/methods, Neovascularization, Pathologic, Neural Networks, Computer, Optical Coherence Tomography/methods
17.
Comput Biol Med ; 149: 106051, 2022 10.
Article in English | MEDLINE | ID: mdl-36055155

ABSTRACT

Semi-supervised learning has made significant strides in the medical domain since it alleviates the heavy burden of collecting abundant pixel-wise annotated data for semantic segmentation tasks. Existing semi-supervised approaches enhance the ability to extract features from unlabeled data with prior knowledge obtained from limited labeled data. However, due to the scarcity of labeled data, the features extracted by the models are limited in supervised learning, and the quality of predictions for unlabeled data also cannot be guaranteed. Both issues impede consistency training. To this end, we propose a novel uncertainty-aware scheme that makes the models learn regions purposefully. Specifically, we employ Monte Carlo sampling as an estimation method to obtain an uncertainty map, which serves as a loss weight that directs the models toward the most valuable regions according to the characteristics of supervised and unsupervised learning. Simultaneously, in the backward process, we jointly optimize the unsupervised and supervised losses to accelerate the convergence of the network by enhancing the gradient flow between different tasks. Quantitatively, we conduct extensive experiments on three challenging medical datasets. Experimental results show desirable improvements over state-of-the-art counterparts.
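A hedged sketch of Monte Carlo sampling for an uncertainty map that weights a per-pixel loss is given below; the dropout rate, sample count, and exponential weighting are illustrative choices, not the paper's exact scheme:

```python
# Monte Carlo dropout: several stochastic forward passes give a predictive-entropy
# map, which is then used to down-weight uncertain pixels in the loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Dropout2d(0.3), nn.Conv2d(8, 2, 1))

def mc_uncertainty(model, x, n_samples=8):
    model.train()                                    # keep dropout active for sampling
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(n_samples)])
    mean = probs.mean(dim=0)
    entropy = -(mean * torch.log(mean + 1e-8)).sum(dim=1)   # predictive entropy (N, H, W)
    return mean, entropy

x = torch.randn(2, 1, 32, 32)
y = torch.randint(0, 2, (2, 32, 32))
mean, uncertainty = mc_uncertainty(model, x)
pixel_loss = F.cross_entropy(model(x), y, reduction="none")      # (N, H, W)
weighted_loss = (pixel_loss * torch.exp(-uncertainty)).mean()    # focus on confident regions
```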


Subjects
Image Processing, Computer-Assisted, Supervised Machine Learning, Image Processing, Computer-Assisted/methods, Uncertainty
18.
Bioengineering (Basel) ; 9(11)2022 Nov 04.
Article in English | MEDLINE | ID: mdl-36354561

ABSTRACT

Lately, deep learning technology has been extensively investigated for accelerating dynamic magnetic resonance (MR) imaging, with encouraging progress achieved. However, without fully sampled reference data for training, current approaches may have limited abilities in recovering fine details or structures. To address this challenge, this paper proposes a self-supervised collaborative learning framework (SelfCoLearn) for accurate dynamic MR image reconstruction directly from undersampled k-space data. The proposed SelfCoLearn is equipped with three important components, namely, dual-network collaborative learning, re-undersampling data augmentation, and a specially designed co-training loss. The framework is flexible and can be integrated into various model-based iterative unrolled networks. The proposed method was evaluated on an in vivo dataset and compared to four state-of-the-art methods. The results show that the proposed method possesses strong capabilities in capturing the essential and inherent representations needed for direct reconstruction from undersampled k-space data and thus enables high-quality and fast dynamic MR imaging.

19.
Magn Reson Imaging ; 92: 108-119, 2022 10.
Article in English | MEDLINE | ID: mdl-35772581

ABSTRACT

The autocalibration signal (ACS) is acquired in k-space-based parallel MRI reconstruction for estimating interpolation coefficients and reconstructing missing unacquired data. Many ACS lines can suppress aliasing artifacts and noise by covering the low-frequency signal region. However, more ACS lines delay the data acquisition process and therefore prolong the scan time. Furthermore, a single interpolator is often used for recovering missing k-space data, and model error may exist if the interpolator size is not selected appropriately. In this work, based on the idea of disagreement-based semi-supervised learning, a dual-interpolator strategy is proposed to collaboratively reconstruct missing k-space data. Two interpolators with different sizes are alternately applied to estimate and re-estimate missing data in k-space. The disagreement between the two interpolators converges, and the true missing values are co-estimated from two views. The experimental results show that the proposed method outperforms the GRAPPA, SPIRiT, and Nonlinear GRAPPA methods using a relatively small number of ACS lines, and reduces aliasing artifacts and noise in reconstructed images.


Subjects
Algorithms, Image Enhancement, Artifacts, Image Enhancement/methods, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Radionuclide Imaging
20.
Med Image Anal ; 73: 102148, 2021 10.
Article in English | MEDLINE | ID: mdl-34274693

ABSTRACT

Deep learning models achieve strong performance for radiology image classification, but their practical application is bottlenecked by the need for large labeled training datasets. Semi-supervised learning (SSL) approaches leverage small labeled datasets alongside larger unlabeled datasets and offer potential for reducing labeling cost. In this work, we introduce NoTeacher, a novel consistency-based SSL framework which incorporates probabilistic graphical models. Unlike Mean Teacher which maintains a teacher network updated via a temporal ensemble, NoTeacher employs two independent networks, thereby eliminating the need for a teacher network. We demonstrate how NoTeacher can be customized to handle a range of challenges in radiology image classification. Specifically, we describe adaptations for scenarios with 2D and 3D inputs, with uni and multi-label classification, and with class distribution mismatch between labeled and unlabeled portions of the training data. In realistic empirical evaluations on three public benchmark datasets spanning the workhorse modalities of radiology (X-Ray, CT, MRI), we show that NoTeacher achieves over 90-95% of the fully supervised AUROC with less than 5-15% labeling budget. Further, NoTeacher outperforms established SSL methods with minimal hyperparameter tuning, and has implications as a principled and practical option for semi-supervised learning in radiology applications.


Subjects
Radiology, Supervised Machine Learning, Humans, Radiography