Results 1 - 10 of 10
1.
Neural Netw ; 180: 106659, 2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39216292

ABSTRACT

Domain adaptation on time-series data, which is often encountered in industrial applications such as anomaly detection and sensor data forecasting but has received limited attention in academia, is an important yet challenging task in real-world scenarios. Most existing methods for time-series data rely on the covariate shift assumption used for non-time-series data to extract a domain-invariant representation, but this assumption is hard to meet in practice because of the complex dependence among variables, and a small change in the time lags may lead to a large change in future values. To address this challenge, we leverage the stability of causal structures across different domains. To further avoid the strong assumptions of causal discovery, such as the linear non-Gaussian assumption, we relax the problem to mining stable sparse associative structures instead of discovering the causal structures directly. Besides the domain-invariant structures, we also find that some domain-specific information, such as the strengths of the structures, is important for prediction. Based on this intuition, we extend the sparse associative structure alignment model from the conference version to the Sparse Associative Structure Alignment model with domain-specific information enhancement (SASA2 for short), which aligns the invariant unweighted sparse associative structures and also considers the variant information for time-series unsupervised domain adaptation. Specifically, we first generate the segment set to remove the obstacle of offsets. Second, we extract the unweighted sparse associative structures via sparse attention mechanisms. Third, we extract the domain-specific information via an autoregressive module. Finally, we employ a unidirectional alignment restriction to guide the transformation from the source to the target. Moreover, we provide a generalization analysis to show the theoretical superiority of our method. Compared with existing methods, our method yields state-of-the-art performance, with a 5% relative improvement on three real-world datasets covering different applications: air quality, in-hospital healthcare, and anomaly detection. Furthermore, visualization of the sparse associative structures illustrates what knowledge can be transferred, boosting the transparency and interpretability of our method.
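
As a rough illustration of the sparse-structure extraction step, the sketch below scores pairwise associations between per-variable segment embeddings and drives weak ones to zero; the ReLU-based normalization, the thresholding, and all names are illustrative stand-ins for the paper's sparse attention mechanism, not its actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAssociativeAttention(nn.Module):
    """Toy sparse attention over per-variable segment embeddings.

    Scores every (target, source) variable pair and normalizes the scores
    with a ReLU-based scheme so weak associations become exactly zero;
    thresholding the result yields an unweighted (binary) structure.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim, bias=False)
        self.key = nn.Linear(dim, dim, bias=False)

    def forward(self, var_emb: torch.Tensor, eps: float = 1e-8):
        # var_emb: (num_vars, dim), one embedding per time-series variable.
        q, k = self.query(var_emb), self.key(var_emb)
        scores = q @ k.t() / var_emb.size(-1) ** 0.5        # pairwise scores (V, V)
        pos = F.relu(scores)                                 # drop negative associations
        weights = pos / (pos.sum(dim=-1, keepdim=True) + eps)  # row-normalize
        structure = (weights > 0).float()                    # unweighted sparse structure
        return weights, structure

if __name__ == "__main__":
    emb = torch.randn(6, 32)          # 6 variables, 32-dim segment embeddings
    attn = SparseAssociativeAttention(32)
    w, s = attn(emb)
    print(w.shape, int(s.sum()))      # (6, 6) and the number of retained edges
```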

2.
IEEE Trans Med Imaging ; 42(12): 3602-3613, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37471191

ABSTRACT

The growth rate of pulmonary nodules is a critical clue for cancer diagnosis, and it is essential to monitor their dynamic progression during pulmonary nodule management. To facilitate research on nodule growth prediction, we organized and published a temporal dataset called NLSTt with consecutive computed tomography (CT) scans. Based on this self-built dataset, we develop a visual learner to qualitatively predict the growth seen in the following CT scan and further propose a model to quantitatively predict the growth rate of pulmonary nodules, so that better diagnosis can be achieved with the help of the predicted results. To this end, we propose a parameterized Gompertz-guided morphological autoencoder (GM-AE) to generate high-quality visual appearances of pulmonary nodules at any future time span from the baseline CT scan. Specifically, we parameterize a popular mathematical model for tumor growth kinetics, the Gompertz function, to predict future masses and volumes of pulmonary nodules. Then, we exploit the expected growth rate of the mass and volume to guide the decoders in generating the future shape and texture of pulmonary nodules. We introduce two branches in an autoencoder to encourage shape-aware and texture-aware representation learning and integrate the generated shape into the texture-aware branch to simulate the future morphology of pulmonary nodules. We conduct extensive experiments on the self-built NLSTt dataset to demonstrate the superiority of our GM-AE over its competitive counterparts. Experimental results also reveal that the learnable Gompertz function has promising descriptive power in accounting for inter-subject variability of the growth rate of pulmonary nodules. In addition, we evaluate our GM-AE model on an in-house dataset to validate its generalizability and practicality. We make the code publicly available along with the published NLSTt dataset.
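
To make the Gompertz parameterization concrete, here is a minimal sketch of a learnable Gompertz curve predicting future volume from a baseline measurement; the globally shared parameters K and b and the exact functional form are assumptions for illustration, since the paper learns its own subject-adaptive parameterization.

```python
import torch
import torch.nn as nn

class GompertzGrowth(nn.Module):
    """Learnable Gompertz curve V(t) = K * exp(log(V0 / K) * exp(-b * t)).

    V0 is the measured baseline volume (or mass); K (asymptotic size) and
    b (growth-rate constant) are learned, here shared across subjects.
    """

    def __init__(self):
        super().__init__()
        self.log_K = nn.Parameter(torch.tensor(10.0))  # log carrying capacity
        self.log_b = nn.Parameter(torch.tensor(-1.0))  # log rate constant

    def forward(self, v0: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        K, b = self.log_K.exp(), self.log_b.exp()
        return K * torch.exp(torch.log(v0 / K) * torch.exp(-b * t))

if __name__ == "__main__":
    model = GompertzGrowth()
    v0 = torch.tensor([120.0])            # baseline nodule volume in mm^3
    t = torch.tensor([0.0, 1.0, 2.0])     # follow-up times in years
    print(model(v0, t))                   # predicted volumes; V(0) equals v0
```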


Subjects
Lung Neoplasms, Solitary Pulmonary Nodule, Humans, Lung Neoplasms/diagnostic imaging, Lung Neoplasms/pathology, Tomography, X-Ray Computed/methods, Radiographic Image Interpretation, Computer-Assisted/methods, Solitary Pulmonary Nodule/diagnostic imaging
3.
IEEE Trans Neural Netw Learn Syst ; 34(10): 6824-6838, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37224350

ABSTRACT

Domain adaptation (DA) aims to transfer knowledge from a source domain to a different but related target domain. The mainstream approach embeds adversarial learning into deep neural networks (DNNs) either to learn domain-invariant features that reduce the domain discrepancy or to generate data that fill in the domain gap. However, these adversarial DA (ADA) approaches mainly consider domain-level data distributions and ignore the differences among the components contained in different domains. As a result, components that are not related to the target domain are not filtered out, which can cause negative transfer. In addition, it is difficult to make full use of the relevant components shared between the source and target domains to enhance DA. To address these limitations, we propose a general two-stage framework named multicomponent ADA (MCADA). This framework trains the target model by first learning a domain-level model and then fine-tuning that model at the component level. In particular, MCADA constructs a bipartite graph to find the most relevant component in the source domain for each component in the target domain. Since the non-relevant components are filtered out for each target component, fine-tuning the domain-level model can enhance positive transfer. Extensive experiments on several real-world datasets demonstrate that MCADA has significant advantages over state-of-the-art methods.
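
A minimal sketch of the component-matching idea: cluster each domain into components and connect every target component to its most similar source component, giving the edges of a bipartite graph. The use of K-means and cosine similarity here is an assumption for illustration, not MCADA's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def match_components(src_feats, tgt_feats, n_src=5, n_tgt=5, seed=0):
    """For every target component, pick the most similar source component."""
    src_centers = KMeans(n_src, random_state=seed, n_init=10).fit(src_feats).cluster_centers_
    tgt_centers = KMeans(n_tgt, random_state=seed, n_init=10).fit(tgt_feats).cluster_centers_
    sim = cosine_similarity(tgt_centers, src_centers)   # (n_tgt, n_src)
    best_src = sim.argmax(axis=1)                       # matched source component per target
    return best_src, sim

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.normal(size=(500, 64))          # stand-in source features
    tgt = rng.normal(size=(300, 64)) + 0.5    # stand-in target features
    matches, sim = match_components(src, tgt)
    for t, s in enumerate(matches):
        print(f"target component {t} -> source component {s} (sim={sim[t, s]:.3f})")
```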

4.
IEEE Trans Pattern Anal Mach Intell ; 45(3): 3245-3258, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35617188

ABSTRACT

In many practical datasets, such as co-citation and co-authorship networks, the relationships among samples are more complex than pair-wise. Hypergraphs provide a flexible and natural representation for such complex correlations and have thus attracted increasing attention in the machine learning and data mining communities. Existing deep learning-based hypergraph approaches seek to learn latent vertex representations based on either the vertices or the hyperedges from previous layers and focus on reducing the cross-entropy error over labeled vertices to obtain a classifier. In this paper, we propose a novel model called Hypergraph Collaborative Network (HCoN), which takes the information from both the previous vertices and hyperedges into consideration to achieve informative latent representations and further introduces the hypergraph reconstruction error as a regularizer to learn an effective classifier. We evaluate the proposed method on two tasks, namely semi-supervised vertex classification and hyperedge classification. We carry out experiments on several benchmark datasets and compare our method with several state-of-the-art approaches. Experimental results demonstrate that the proposed method outperforms the baseline methods.
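
The sketch below shows one plausible collaborative update in which vertices aggregate hyperedge features through the incidence matrix and hyperedges aggregate vertex features through its transpose; the exact propagation rule, normalization, and reconstruction regularizer in HCoN may differ, so treat this as an assumed simplification.

```python
import torch
import torch.nn as nn

class HypergraphCollaborativeLayer(nn.Module):
    """One collaborative update over an incidence matrix H of shape (|V|, |E|):
    vertices pull from hyperedges via H, hyperedges pull from vertices via H^T."""

    def __init__(self, v_dim, e_dim, out_dim):
        super().__init__()
        self.w_v = nn.Linear(v_dim, out_dim)   # transform previous vertex features
        self.w_e = nn.Linear(e_dim, out_dim)   # transform previous hyperedge features

    def forward(self, H, X_v, X_e):
        dv = H.sum(dim=1, keepdim=True).clamp(min=1)        # vertex degrees
        de = H.sum(dim=0, keepdim=True).clamp(min=1).t()    # hyperedge degrees
        new_v = torch.relu(self.w_v(X_v) + (H @ self.w_e(X_e)) / dv)
        new_e = torch.relu(self.w_e(X_e) + (H.t() @ self.w_v(X_v)) / de)
        return new_v, new_e

if __name__ == "__main__":
    H = torch.tensor([[1., 0.], [1., 1.], [0., 1.]])   # 3 vertices, 2 hyperedges
    layer = HypergraphCollaborativeLayer(v_dim=8, e_dim=8, out_dim=16)
    V, E = layer(H, torch.randn(3, 8), torch.randn(2, 8))
    print(V.shape, E.shape)   # torch.Size([3, 16]) torch.Size([2, 16])
```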

5.
Article in English | MEDLINE | ID: mdl-35867357

ABSTRACT

Sequential recommendation aims to choose the most suitable items for a user at a specific timestamp given the user's historical behaviors. Existing methods usually model the user behavior sequence with transition-based methods such as Markov chains. However, these methods implicitly assume that users are independent of each other, without considering the influence between users. In fact, this influence plays an important role in sequential recommendation, since a user's behavior is easily affected by others. Therefore, it is desirable to aggregate both user behaviors and the influence between users, which evolve temporally and are embedded in the heterogeneous graph of users and items. In this article, we incorporate dynamic user-item heterogeneous graphs to propose a novel sequential recommendation framework, so that both the historical behaviors and the influence between users can be taken into consideration. To achieve this, we first formalize sequential recommendation as the problem of estimating a conditional probability given temporally dynamic heterogeneous graphs and user behavior sequences. We then exploit a conditional random field to aggregate the heterogeneous graphs and user behaviors for probability estimation, and employ the pseudo-likelihood approach to derive a tractable objective function. Finally, we provide scalable and flexible implementations of the proposed framework. Experimental results on three real-world datasets not only demonstrate the effectiveness of the proposed method but also provide insightful observations on sequential recommendation.
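
A toy sketch of combining a user's behavior sequence with user-user influence from a graph when scoring the next item; the GRU, the neighbor-averaging step, and the plain softmax loss are stand-ins for the paper's conditional random field and pseudo-likelihood objective, and all names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphSeqRecommender(nn.Module):
    """Next-item scores from (i) a GRU over the item sequence and
    (ii) neighbor-averaged user embeddings from an influence graph."""

    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)

    def forward(self, user_ids, item_seqs, adj):
        # adj: (n_users, n_users) row-normalized influence graph at this timestamp.
        social = adj @ self.user_emb.weight               # aggregate neighbor users
        u = self.user_emb(user_ids) + social[user_ids]    # user state with social influence
        _, h = self.gru(self.item_emb(item_seqs))         # sequence state (1, B, dim)
        state = u + h.squeeze(0)
        return state @ self.item_emb.weight.t()           # logits over all items

if __name__ == "__main__":
    n_users, n_items = 10, 50
    model = GraphSeqRecommender(n_users, n_items)
    adj = torch.rand(n_users, n_users)
    adj = adj / adj.sum(dim=1, keepdim=True)
    users = torch.tensor([0, 1])
    seqs = torch.randint(0, n_items, (2, 5))              # last 5 items per user
    targets = torch.tensor([3, 7])                        # next items
    loss = F.cross_entropy(model(users, seqs, adj), targets)
    loss.backward()
    print(float(loss))
```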

6.
IEEE Trans Image Process ; 30: 6364-6376, 2021.
Article in English | MEDLINE | ID: mdl-34236965

ABSTRACT

Heterogeneous domain adaptation (HDA) is a challenging problem because the source and target domains have different feature representations. Most HDA methods search for mapping matrices from the source and target domains to discover latent features for learning, but they barely consider the reconstruction error that measures the information lost during the mapping procedure. In this paper, we propose to jointly preserve information and match the source and target domain distributions in the latent feature space. In the learning model, we minimize the reconstruction loss between the original and reconstructed representations to preserve information during the transformation, and we reduce the Maximum Mean Discrepancy (MMD) between the source and target domains to align their distributions. The resulting minimization problem involves two projection variables with orthogonal constraints that can be solved by the generalized gradient flow method, which preserves the orthogonal constraints throughout the computation. We conduct extensive experiments on several image classification datasets to demonstrate that the proposed method is more effective and efficient than state-of-the-art HDA methods.
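
A minimal sketch of the objective described above: a reconstruction term for information preservation plus an RBF-kernel MMD term for distribution alignment over projected features. The kernel choice is an assumption, and the generalized gradient flow optimization that keeps the projections orthogonal is omitted; only the objective evaluation is shown.

```python
import torch

def rbf_mmd(x, y, sigma=1.0):
    """Simple (biased) RBF-kernel Maximum Mean Discrepancy estimate."""
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def hda_objective(Xs, Xt, Ps, Pt, lam=1.0):
    """Reconstruction loss (information preservation) + MMD (alignment)
    for projections Ps, Pt into a shared latent space."""
    Zs, Zt = Xs @ Ps, Xt @ Pt
    recon = ((Zs @ Ps.t() - Xs) ** 2).mean() + ((Zt @ Pt.t() - Xt) ** 2).mean()
    return recon + lam * rbf_mmd(Zs, Zt)

if __name__ == "__main__":
    Xs, Xt = torch.randn(100, 50), torch.randn(80, 30)   # different feature spaces
    Ps, _ = torch.linalg.qr(torch.randn(50, 10))         # orthonormal columns
    Pt, _ = torch.linalg.qr(torch.randn(30, 10))
    print(float(hda_objective(Xs, Xt, Ps, Pt)))
```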

7.
BMC Med Imaging ; 21(1): 99, 2021 06 10.
Article in English | MEDLINE | ID: mdl-34112095

ABSTRACT

BACKGROUND: Chest X-rays are the most commonly available and affordable radiological examination for screening thoracic diseases. According to the domain knowledge of chest X-ray screening, the pathological information usually lies in the lung and heart regions. However, it is costly to acquire region-level annotations in practice, and model training mainly relies on image-level class labels in a weakly supervised manner, which is highly challenging for computer-aided chest X-ray screening. To address this issue, some methods have recently been proposed to identify local regions containing pathological information, which is vital for thoracic disease classification. Inspired by this, we propose a novel deep learning framework to explore discriminative information from the lung and heart regions. RESULTS: We design a feature extractor equipped with a multi-scale attention module to learn global attention maps from global images. To exploit disease-specific cues effectively, we locate the lung and heart regions containing pathological information with a well-trained pixel-wise segmentation model that generates binarization masks. By applying an element-wise logical AND operator to the learned global attention maps and the binarization masks, we obtain local attention maps in which pixels are 1 for the lung and heart regions and 0 elsewhere. By zeroing the features of regions outside the lungs and heart in the attention maps, we can effectively exploit the disease-specific cues in the lung and heart regions. Compared to existing methods that fuse global and local features, we adopt feature weighting to avoid weakening the visual cues unique to the lung and heart regions. Our method with pixel-wise segmentation can also help overcome deviations in locating the local regions. Evaluated on the benchmark split of the publicly available ChestX-ray14 dataset, comprehensive experiments show that our method achieves superior performance compared to state-of-the-art methods. CONCLUSION: We propose a novel deep framework for the multi-label classification of thoracic diseases in chest X-ray images. The proposed network aims to effectively exploit the pathological regions containing the main cues for chest X-ray screening. It has been used in clinical screening to assist radiologists. Chest X-rays account for a significant proportion of radiological examinations, and it is valuable to explore further methods for improving performance.
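
A small sketch of the masking step: on a soft attention map and a binary mask, the element-wise logical AND reduces to multiplication, and the resulting local map reweights the backbone features. Tensor shapes and names are illustrative assumptions, not the paper's implementation.

```python
import torch

def local_attention(global_attn, lung_heart_mask):
    """Keep attention only inside the lung/heart mask (logical AND as
    multiplication of a soft map by a binary mask)."""
    return global_attn * lung_heart_mask             # zero out non-lung/heart pixels

def weight_features(features, attn):
    """Reweight CNN feature maps with the (broadcast) local attention map."""
    return features * attn.unsqueeze(1)              # (B, C, H, W) * (B, 1, H, W)

if __name__ == "__main__":
    attn = torch.rand(2, 16, 16)                     # global attention maps
    mask = (torch.rand(2, 16, 16) > 0.5).float()     # binarized segmentation masks
    feats = torch.randn(2, 64, 16, 16)               # backbone feature maps
    out = weight_features(feats, local_attention(attn, mask))
    print(out.shape)                                 # torch.Size([2, 64, 16, 16])
```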


Subjects
Deep Learning, Heart Diseases/diagnostic imaging, Lung Diseases/diagnostic imaging, Radiography, Thoracic, Thoracic Diseases/diagnostic imaging, Heart/diagnostic imaging, Humans, Lung/diagnostic imaging, ROC Curve
8.
IEEE J Biomed Health Inform ; 25(10): 3943-3954, 2021 10.
Article in English | MEDLINE | ID: mdl-34018938

ABSTRACT

When encountering a dubious diagnostic case, medical instance retrieval can help radiologists make evidence-based diagnoses by finding images containing instances similar to a query case in a large image database. The similarity between the query case and the retrieved similar cases is determined by visual features extracted from pathologically abnormal regions. However, the manifestation of these regions often lacks specificity: different diseases can have the same manifestation, and different manifestations may occur at different stages of the same disease. To combat this manifestation ambiguity in medical instance retrieval, we propose a novel deep framework called Y-Net, which encodes images into compact hash codes generated from convolutional features by feature aggregation. Y-Net learns highly discriminative convolutional features by unifying a pixel-wise segmentation loss and a classification loss. The segmentation loss allows exploring subtle spatial differences for good spatial discriminability, while the classification loss utilizes class-aware semantic information for good semantic separability. As a result, Y-Net can enhance the visual features in pathologically abnormal regions and suppress background interference during model training, which effectively embeds discriminative features into the hash codes used in the retrieval stage. Extensive experiments on two medical image datasets demonstrate that Y-Net can alleviate the ambiguity of pathologically abnormal regions and that its retrieval performance outperforms the state-of-the-art method by an average of 9.27% on the top-10 returned list.
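
A compact sketch of a network trained with a segmentation loss and a classification loss whose pooled features are squashed into hash codes for retrieval; the toy encoder and the tanh-then-sign hashing are assumptions, not Y-Net's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadHasher(nn.Module):
    """Shared encoder with a segmentation head and a classification head;
    pooled features are also mapped to soft hash codes (sign() at test time)."""

    def __init__(self, n_classes=2, bits=48):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv2d(32, 1, 1)               # pixel-wise mask logits
        self.cls_head = nn.Linear(32, n_classes)
        self.hash_head = nn.Linear(32, bits)

    def forward(self, x):
        f = self.encoder(x)
        pooled = f.mean(dim=(2, 3))
        return self.seg_head(f), self.cls_head(pooled), torch.tanh(self.hash_head(pooled))

if __name__ == "__main__":
    model = TwoHeadHasher()
    imgs = torch.randn(4, 1, 64, 64)
    masks = torch.randint(0, 2, (4, 1, 64, 64)).float()
    labels = torch.randint(0, 2, (4,))
    seg, cls, codes = model(imgs)
    loss = F.binary_cross_entropy_with_logits(seg, masks) + F.cross_entropy(cls, labels)
    loss.backward()
    print(codes.detach().sign().shape)                    # binary codes: torch.Size([4, 48])
```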


Subjects
Algorithms, Semantics, Databases, Factual, Humans, Radiologists, Research Design
9.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 2032-2035, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946300

ABSTRACT

Deep learning has achieved great success in image classification when sufficient labeled training images are available. However, in fundus image-based glaucoma diagnosis, we often have very limited training data due to the expensive cost of data labeling. Moreover, when facing a new application environment, it is difficult to train a network with limited labeled training images. In this case, images from auxiliary domains (i.e., source domains) can be exploited to improve performance. Unfortunately, directly using the source domain data may not achieve promising performance in the domain of interest (i.e., the target domain) because of issues such as the distribution discrepancy between the two domains. In this paper, focusing on glaucoma diagnosis, we propose a deep adversarial transfer learning method conditioned on label information to match the distributions of the source and target domains, so that labeled source images can be leveraged to improve classification performance in the target domain. Unlike most existing adversarial transfer learning methods, which consider marginal distribution matching only, we seek to match the label-conditional distributions by handling images with different labels separately. We conduct experiments on three glaucoma datasets and adopt multiple evaluation metrics to verify the effectiveness of the proposed method.
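
A minimal sketch of label-conditional adversarial matching: one small domain discriminator per class, so source and target features are aligned within each label rather than only marginally. The per-class-head design and the use of target pseudo-labels are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassConditionalDiscriminator(nn.Module):
    """One small domain discriminator per class."""

    def __init__(self, n_classes, feat_dim=64):
        super().__init__()
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 1))
            for _ in range(n_classes)])

    def forward(self, feats, labels):
        logits = torch.stack([h(feats).squeeze(-1) for h in self.heads], dim=1)
        return logits.gather(1, labels.unsqueeze(1)).squeeze(1)  # each sample's class head

def adversarial_domain_loss(disc, src_feats, src_labels, tgt_feats, tgt_pseudo_labels):
    """Discriminator separates source (1) from target (0) within each class."""
    src_logit = disc(src_feats, src_labels)
    tgt_logit = disc(tgt_feats, tgt_pseudo_labels)
    return (F.binary_cross_entropy_with_logits(src_logit, torch.ones_like(src_logit)) +
            F.binary_cross_entropy_with_logits(tgt_logit, torch.zeros_like(tgt_logit)))

if __name__ == "__main__":
    disc = ClassConditionalDiscriminator(n_classes=2)
    loss = adversarial_domain_loss(disc,
                                   torch.randn(8, 64), torch.randint(0, 2, (8,)),
                                   torch.randn(8, 64), torch.randint(0, 2, (8,)))
    loss.backward()
    print(float(loss))
```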


Subjects
Deep Learning, Glaucoma/diagnostic imaging, Image Interpretation, Computer-Assisted, Fundus Oculi, Humans
10.
IEEE Trans Neural Netw Learn Syst ; 29(7): 3252-3263, 2018 07.
Article in English | MEDLINE | ID: mdl-29028211

ABSTRACT

In this paper, we study the online heterogeneous transfer (OHT) learning problem, where the target data of interest arrive in an online manner, while the source data and auxiliary co-occurrence data come from offline sources and can be easily annotated. OHT is very challenging because the feature spaces of the source and target domains are different. To address this, we propose a novel technique called OHT by hedge ensemble, which exploits both offline knowledge and online knowledge of the different domains. To this end, we build an offline decision function based on a heterogeneous similarity constructed from labeled source data and unlabeled auxiliary co-occurrence data. After that, an online decision function is learned from the target data. Finally, we employ a hedge weighting strategy to combine the offline and online decision functions to exploit knowledge from the source and target domains despite their different feature spaces. We also provide a theoretical analysis of the mistake bounds of the proposed approach. Comprehensive experiments on three real-world datasets demonstrate the effectiveness of the proposed technique.
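
A toy sketch of the hedge weighting step: the offline and online decision scores are combined by a weighted vote, and the weight of whichever expert errs is multiplicatively discounted once the true label is revealed. The discount factor beta, the binary-label setting, and all names are assumptions for illustration.

```python
import numpy as np

def hedge_combine(offline_score, online_score, w, label=None, beta=0.9):
    """Weighted vote of the offline and online decision functions; after the
    true label arrives, the erring expert's weight is discounted (Hedge)."""
    scores = np.array([offline_score, online_score])
    pred = np.sign(w @ scores)                      # combined prediction in {-1, +1}
    if label is not None:
        wrong = (np.sign(scores) != label).astype(float)
        w = w * beta ** wrong                       # multiplicative penalty
        w = w / w.sum()                             # renormalize the weights
    return pred, w

if __name__ == "__main__":
    w = np.array([0.5, 0.5])                        # start with equal trust
    stream = [(+0.8, -0.3, +1), (-0.2, -0.7, -1), (+0.1, +0.9, +1)]
    for off, on, y in stream:
        pred, w = hedge_combine(off, on, w, label=y)
        print(int(pred), w)
```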
