Results 1 - 5 of 5
1.
Comput Biol Med; 170: 107912, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38219643

ABSTRACT

Deep learning has demonstrated remarkable performance across various tasks in medical imaging. However, these approaches primarily rely on supervised learning, which assumes that the training and testing data are drawn from the same distribution; this assumption may not hold in practice. To address this issue, unsupervised domain adaptation (UDA) techniques have been developed to transfer knowledge from a labeled domain to a related but unlabeled domain. Recent years have seen significant advances in UDA, yielding a wide range of methodologies, including feature alignment, image translation, self-supervision, and disentangled representation methods, among others. In this paper, we provide a comprehensive literature review of recent deep UDA approaches in medical imaging from a technical perspective. Specifically, we categorize current UDA research in medical imaging into six groups and further divide them into finer subcategories based on the tasks they perform. We also discuss the datasets used in these studies to assess the divergence between the different domains. Finally, we discuss emerging areas and offer insights on future research directions to conclude this survey.
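As an illustration of the feature-alignment family this survey covers, here is a minimal PyTorch sketch of one training step that aligns source and target features with a maximum mean discrepancy (MMD) penalty. The `rbf_mmd` helper, the RBF bandwidth `sigma`, and the loss weight `lam` are illustrative assumptions, not any particular paper's method.

```python
import torch
import torch.nn.functional as F

def rbf_mmd(x, y, sigma=1.0):
    """Maximum mean discrepancy between source and target feature batches."""
    def k(a, b):
        d = torch.cdist(a, b) ** 2                 # pairwise squared distances
        return torch.exp(-d / (2 * sigma ** 2))    # RBF kernel
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def uda_step(encoder, classifier, opt, xs, ys, xt, lam=0.1):
    """One step: supervised loss on labeled source + alignment to unlabeled target."""
    fs, ft = encoder(xs), encoder(xt)              # features from both domains
    loss = F.cross_entropy(classifier(fs), ys) + lam * rbf_mmd(fs, ft)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```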


Subjects
Deep Learning , Diagnostic Imaging , Image Processing, Computer-Assisted
2.
Neural Netw; 165: 999-1009, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37467587

ABSTRACT

In the few-shot class-incremental learning (FSCIL) setting, new classes with few training examples become available incrementally, and deep learning models suffer from catastrophic forgetting of the previous classes when trained on the new ones. Data augmentation techniques are generally used to increase the training data and improve model performance. In this work, we demonstrate that differently augmented views of the same image may not activate the same set of neurons in the model, so the information a model gains about a class when trained with data augmentation is not necessarily stored in the same set of weights. Consequently, during incremental training, even if some of the weights that store previously seen class information for a particular view get overwritten, the information for the other views may remain intact in other weights. The impact of catastrophic forgetting on the model predictions therefore differs across the data augmentations used during training. Based on this observation, we present an Augmentation-based Prediction Rectification (APR) approach to reduce the impact of catastrophic forgetting in the FSCIL setting; APR can also be combined with other FSCIL approaches and significantly improve their performance. We further propose a novel feature synthesis module (FSM) that synthesizes features relevant to previously seen classes without requiring training data from these classes, and it outperforms other generative approaches in this setting. We experimentally show that our approach outperforms other methods on benchmark datasets.
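A minimal sketch of the intuition behind rectifying predictions over augmented views, assuming a PyTorch image classifier: since forgetting differs across views, averaging softmax outputs over several augmentations of a test image can correct the final prediction. The `view_ensemble_predict` helper and the example augmentation list are illustrative; the paper's actual APR procedure is more involved.

```python
import torch
import torchvision.transforms as T

@torch.no_grad()
def view_ensemble_predict(model, image, augmentations):
    """Average softmax outputs over differently augmented views of one image."""
    model.eval()
    probs = [torch.softmax(model(aug(image).unsqueeze(0)), dim=1)
             for aug in augmentations]             # one forward pass per view
    return torch.stack(probs).mean(dim=0).argmax(dim=1)

# Example views: the identity plus two tensor-level torchvision transforms.
views = [lambda x: x, T.RandomHorizontalFlip(p=1.0), T.ColorJitter(brightness=0.4)]
# pred = view_ensemble_predict(model, image, views)  # image: (C, H, W) tensor
```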


Subjects
Benchmarking , Neurons
3.
IEEE/ACM Trans Comput Biol Bioinform; 20(3): 1983-1994, 2023.
Article in English | MEDLINE | ID: mdl-37015582

ABSTRACT

Single-cell RNA sequencing (scRNA-seq) is a revolutionary methodology for analyzing transcriptome or genome information from a single cell. However, the high dimensionality and sparsity of the data caused by dropout events pose computational challenges for existing state-of-the-art scRNA-seq clustering methods, and the noise present in scRNA-seq data makes learning efficient representations even harder. To overcome the effect of noise and learn effective representations, this paper proposes sc-INDC (Single-Cell Information Maximized Noise-Invariant Deep Clustering), a deep neural network that learns informative and noise-invariant representations of scRNA-seq data. Furthermore, the time complexity of sc-INDC is significantly lower than that of state-of-the-art scRNA-seq clustering methods. Extensive experiments on fourteen publicly available scRNA-seq datasets demonstrate the efficacy of the proposed model, and t-SNE visualizations together with several ablation studies provide insights into its improved representation ability. Code for sc-INDC will be available at: https://github.com/arnabkmondal/sc-INDC.
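A minimal sketch of a noise-invariance objective in the spirit of sc-INDC, assuming a PyTorch encoder over expression profiles: the model embeds a cell profile and a corrupted copy of it and penalizes the distance between the two embeddings. The Gaussian corruption, the MSE consistency loss, and the layer sizes are illustrative assumptions rather than the authors' architecture; see their repository for the real model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy encoder: e.g., 2000 log-normalized gene-expression values -> 32-dim embedding.
encoder = nn.Sequential(
    nn.Linear(2000, 256), nn.ReLU(),
    nn.Linear(256, 32),
)

def noise_invariance_loss(x, sigma=0.1):
    """Penalize embedding changes under a dropout-like Gaussian corruption of x."""
    z_clean = encoder(x)                                # embedding of the clean profile
    z_noisy = encoder(x + sigma * torch.randn_like(x))  # embedding of the corrupted copy
    return F.mse_loss(z_noisy, z_clean)
```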


Subjects
Gene Expression Profiling , Single-Cell Analysis , Gene Expression Profiling/methods , Sequence Analysis, RNA/methods , Base Sequence , Single-Cell Analysis/methods , Cluster Analysis , Algorithms
4.
Neural Netw; 161: 202-212, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36774860

ABSTRACT

A class-incremental learning problem is characterized by training data becoming available in a phase-by-phase manner, and deep learning models suffer from catastrophic forgetting of the classes from older phases as they are trained on classes introduced in the new phase. In this work, we show, as a novel finding, that the orientation of an image has a considerable effect on the model's prediction accuracy: different orientations of the same image suffer catastrophic forgetting at different rates. Based on this, we propose a data-ensemble approach that combines the predictions for the different orientations of an image to help the model retain information about previously seen classes and thereby reduce forgetting in the model predictions. However, the data-ensemble approach cannot be used directly if the model is trained with traditional techniques. We therefore also propose a novel training approach using a joint-incremental learning objective (JILO), which jointly trains the network with two incremental learning objectives: the class-incremental learning objective and our proposed data-incremental learning objective. We empirically demonstrate that JILO is vital to the data-ensemble approach. Applied to state-of-the-art class-incremental learning methods, our approach significantly improves their performance; on the CIFAR-100 dataset, it improves the state-of-the-art method (AANets) by absolute margins of 3.30%, 4.28%, 3.55%, and 4.03% for P=50, 25, 10, and 5 phases, respectively, which establishes the efficacy of the proposed work.
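A minimal sketch of the data-ensemble idea, assuming a PyTorch model and NCHW image batches: predictions are combined over the four 90-degree orientations, which the paper reports forget at different rates. Averaging raw logits is an illustrative choice, and the sketch presumes the model was trained to handle all four orientations (the role JILO plays); an off-the-shelf model would not benefit as-is.

```python
import torch

@torch.no_grad()
def orientation_ensemble(model, images):
    """Average logits over the 0/90/180/270-degree rotations of an NCHW batch."""
    logits = [model(torch.rot90(images, k, dims=(2, 3))) for k in range(4)]
    return torch.stack(logits).mean(dim=0)        # combined class scores
```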


Subjects
Neural Networks, Computer
5.
Article in English | MEDLINE | ID: mdl-36449592

ABSTRACT

In the task-incremental learning problem, deep learning models suffer from catastrophic forgetting of previously seen classes/tasks as they are trained on new classes/tasks. The problem becomes even harder when some of the test classes do not belong to the training class set, i.e., the task-incremental generalized zero-shot learning problem. We propose a novel approach that addresses the task-incremental learning problem in both the non-zero-shot and zero-shot settings. Our approach, called Rectification-based Knowledge Retention (RKR), applies weight rectifications and affine transformations to adapt the model to any task. During testing, our approach can use the task label information (task-aware) to quickly adapt the network to that task. We also extend our approach to make it task-agnostic, so that it works even when the task label information is not available during testing: given a continuum of test data, our approach predicts the task and quickly adapts the network to the predicted task. We experimentally show that our proposed approach achieves state-of-the-art results on several benchmark datasets for both non-zero-shot and zero-shot task-incremental learning.
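A minimal sketch of the idea behind RKR, assuming a PyTorch linear layer: a shared base weight is adapted per task by a small stored weight rectification plus a feature-wise affine transform. The low-rank parameterization (`rank`), the class name `RectifiedLinear`, and all parameter names are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RectifiedLinear(nn.Module):
    """Shared linear layer plus per-task weight rectification and affine transform."""
    def __init__(self, d_in, d_out, n_tasks, rank=4):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)                        # shared weights
        self.a = nn.Parameter(torch.zeros(n_tasks, d_out, rank))  # rectification
        self.b = nn.Parameter(torch.zeros(n_tasks, rank, d_in))   #   factors
        self.gamma = nn.Parameter(torch.ones(n_tasks, d_out))     # affine scale
        self.beta = nn.Parameter(torch.zeros(n_tasks, d_out))     # affine shift

    def forward(self, x, task):
        w = self.base.weight + self.a[task] @ self.b[task]        # rectified weight
        h = F.linear(x, w, self.base.bias)
        return self.gamma[task] * h + self.beta[task]             # task-specific affine
```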
