Results 1 - 6 of 6
1.
Article in English | MEDLINE | ID: mdl-38771690

ABSTRACT

The success of graph neural networks has stimulated the prosperity of graph mining and its downstream tasks, including graph anomaly detection (GAD). However, it has been shown that these graph mining methods are vulnerable to structural manipulations of relational data: an attacker can maliciously perturb the graph structure to help target nodes evade anomaly detection. In this article, we explore the structural vulnerability of two typical GAD systems: unsupervised FeXtra-based GAD and supervised graph convolutional network (GCN)-based GAD. Specifically, structural poisoning attacks against GAD are formulated as complex bi-level optimization problems. Our first major contribution is to transform the bi-level problem into a one-level problem by leveraging different regression methods. Furthermore, we propose a new way of using gradient information to optimize the one-level problem in the discrete domain. Comprehensive experiments demonstrate the effectiveness of our proposed attack algorithm, BinarizedAttack.
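A minimal sketch of the gradient-guided discrete optimization idea follows. It does not reproduce the paper's actual FeXtra or GCN surrogate losses; the function names, the greedy flip rule, and the degree-deviation surrogate are illustrative assumptions only.

```python
import torch

def gradient_guided_edge_flips(adj, loss_fn, target, budget=5):
    """Greedily flip the adjacency entry whose gradient most strongly
    decreases a surrogate anomaly loss for the target node (sketch).

    adj     -- dense (n, n) 0/1 adjacency matrix (torch.Tensor)
    loss_fn -- callable(adj, target) -> scalar surrogate anomaly score
    target  -- index of the node the attacker wants to hide
    budget  -- number of edge flips the attacker may make
    """
    adj = adj.clone().float()
    for _ in range(budget):
        adj.requires_grad_(True)
        loss = loss_fn(adj, target)
        grad, = torch.autograd.grad(loss, adj)
        # A 0 -> 1 flip lowers the loss when grad < 0; a 1 -> 0 flip when grad > 0.
        gain = torch.where(adj > 0.5, grad, -grad)
        gain.fill_diagonal_(float("-inf"))          # never touch self-loops
        i, j = divmod(int(gain.argmax()), adj.shape[1])
        with torch.no_grad():
            adj = adj.detach()
            adj[i, j] = adj[j, i] = 1.0 - adj[i, j]  # symmetric binary flip
    return adj.detach()

def toy_anomaly_loss(adj, target):
    # Hypothetical stand-in for a FeXtra-style score: penalise how far the
    # target's degree deviates from the graph's mean degree.
    deg = adj.sum(dim=1)
    return (deg[target] - deg.mean()) ** 2
```

Greedy one-flip-at-a-time selection is one simple way to use continuous gradients in a discrete (binary) search space; the paper's BinarizedAttack may organise this differently.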

2.
IEEE Trans Med Imaging; 39(4): 819-832, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31425065

ABSTRACT

We propose a new method for generating synthetic CT images from modified Dixon (mDixon) MR data. The synthetic CT is used for attenuation correction (AC) when reconstructing PET data of the abdomen and pelvis. While MR does not intrinsically contain any information about photon attenuation, AC is needed for PET/MR systems to be quantitatively accurate and to meet the qualification standards required for use in many multi-center trials. Existing MR-based synthetic CT generation methods either use advanced MR sequences that have long acquisition times and limited clinical availability, or match the MR images of a newly scanned subject to images in a library of MR-CT pairs, which has difficulty accounting for the diversity of human anatomy, especially in patients with pathologies. To address these deficiencies, we present a five-phase interlinked method that uses mDixon MR acquisition and advanced machine learning for synthetic CT generation. Both transfer fuzzy clustering and active learning-based classification (TFC-ALC) are used. The significance of our efforts is fourfold: 1) TFC-ALC is capable of better synthetic CT generation than methods currently in use on the challenging abdomen while using only common Dixon-based scanning. 2) TFC initially partitions MR voxels into four groups (fat, bone, air, and soft tissue) via transfer learning; ALC then learns insightful classifiers, using as few, yet informative, labeled examples as possible, to precisely distinguish bone, air, and soft tissue. Combining them, the TFC-ALC method successfully overcomes the inherent imperfection and potential uncertainty in the co-registration between CT and MR images. 3) Compared with existing methods, TFC-ALC features not only preferable synthetic CT generation but also improved parameter robustness, which facilitates its clinical practicability. Applying the proposed approach to mDixon MR data from ten subjects, the average mean absolute prediction deviation (MAPD) was 89.78±8.76, significantly better than the 133.17±9.67 obtained using the all-water (AW) method (p=4.11E-9) and the 104.97±10.03 obtained using the four-cluster-partitioning (FCP, i.e., external-air, internal-air, fat, and soft tissue) method (p=0.002). 4) Experiments on the PET SUV errors of these approaches show that TFC-ALC achieves the highest SUV accuracy and generally reduces the SUV errors to 5% or less. These results clearly demonstrate the effectiveness of the proposed TFC-ALC method for synthetic CT generation on the abdomen and pelvis using only the commonly available Dixon pulse sequence.
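The transfer term and the active-learning classifier of TFC-ALC are not reproduced here; the sketch below shows only the plain fuzzy c-means core that the TFC stage builds on, applied to hypothetical two-channel (e.g. water/fat) voxel features to form the four tissue groups. All names and the feature layout are assumptions for illustration.

```python
import numpy as np

def fuzzy_c_means(x, c=4, m=2.0, iters=100, tol=1e-5, seed=0):
    """Plain fuzzy c-means on voxel feature vectors x of shape (n, d)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)                 # fuzzy memberships
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u_new = 1.0 / (d ** (2.0 / (m - 1.0)))        # closer centers get higher membership
        u_new /= u_new.sum(axis=1, keepdims=True)
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return centers, u

# Hypothetical usage: partition mDixon-derived voxel features into four tissue
# groups (fat, bone, air, soft tissue); an ALC-like stage would then refine the
# bone/air/soft-tissue boundary with a handful of labelled voxels.
voxels = np.random.rand(5000, 2)                      # stand-in water/fat intensities
centers, memberships = fuzzy_c_means(voxels, c=4)
labels = memberships.argmax(axis=1)
```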


Subjects
Abdomen/diagnostic imaging, Image Processing, Computer-Assisted/methods, Pelvis/diagnostic imaging, Positron-Emission Tomography/methods, Support Vector Machine, Cluster Analysis, Fuzzy Logic, Humans, Magnetic Resonance Imaging, Tomography, X-Ray Computed
3.
J Med Syst; 43(5): 118, 2019 Mar 25.
Article in English | MEDLINE | ID: mdl-30911929

ABSTRACT

Artificial intelligence algorithms have been used in a wide range of clinical computer-aided diagnosis applications, such as automatic MR image segmentation and seizure EEG signal analysis. In recent years, many machine learning-based automatic MR brain image segmentation methods have been proposed as auxiliary tools for medical image analysis in clinical treatment. Nevertheless, many problems remain to be solved; in particular, the precise information in medical images often cannot be effectively utilized to improve segmentation performance. Due to the poor contrast of grayscale images, the ambiguity and complexity of MR images, and individual variability, the performance of classic algorithms in medical image segmentation still needs improvement. In this paper, we introduce a distributed multitask fuzzy c-means (MT-FCM) clustering algorithm for MR brain image segmentation that can extract knowledge common to different clustering tasks. The proposed distributed MT-FCM algorithm can effectively exploit information shared among different but related MR brain image segmentation tasks and can avoid the negative effects caused by the noisy data present in some MR images. Experimental results on clinical MR brain images show that the distributed MT-FCM method achieves better performance than the classic single-task method.
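The cross-task coupling can be pictured as per-task prototypes that are pulled toward a shared consensus. The sketch below is a hypothetical single update step in that spirit, not the paper's distributed MT-FCM objective; the blending weight lam, the function name, and the consensus rule are assumptions.

```python
import numpy as np

def multitask_fcm_step(tasks_x, tasks_u, shared, m=2.0, lam=0.5):
    """One sketched multitask update: each task's prototypes are a blend of its
    own fuzzy means and shared (consensus) prototypes, so related MR brain
    segmentation tasks exchange common knowledge.

    tasks_x -- list of (n_t, d) voxel feature arrays, one per task
    tasks_u -- list of (n_t, c) fuzzy membership matrices
    shared  -- (c, d) consensus prototypes shared across tasks
    lam     -- strength of the cross-task coupling (0 = independent FCM)
    """
    new_centers = []
    for x, u in zip(tasks_x, tasks_u):
        um = u ** m
        own = (um.T @ x) / um.sum(axis=0)[:, None]    # task-local fuzzy means
        new_centers.append((1 - lam) * own + lam * shared)
    shared = np.mean(new_centers, axis=0)             # refresh the consensus
    return new_centers, shared
```

The membership update for each task would be the standard FCM rule against that task's blended prototypes; it is omitted here for brevity.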


Subjects
Brain/diagnostic imaging, Fuzzy Logic, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Algorithms, Humans, Reproducibility of Results
4.
Artif Intell Med; 90: 34-41, 2018 Aug.
Article in English | MEDLINE | ID: mdl-30054121

ABSTRACT

BACKGROUND: Manual contouring remains the most laborious task in radiation therapy planning and is a major barrier to implementing routine Magnetic Resonance Imaging (MRI) Guided Adaptive Radiation Therapy (MR-ART). To address this, we propose a new artificial intelligence-based auto-contouring method for abdominal MR-ART, modeled on how humans cognitively perform manual contouring.

METHODS/MATERIALS: Our algorithm is based on two types of information flow, top-down and bottom-up. Top-down information is derived from the simulation MR images. It grossly delineates the object based on its high-level information class by transferring the initial planning contours onto the daily images. Bottom-up information is derived from the pixel data by a supervised, self-adaptive, active learning-based support vector machine. It uses low-level pixel features, such as intensity and location, to distinguish each target boundary from the background. The final result is obtained by fusing the top-down and bottom-up outputs in a unified artificial intelligence fusion framework. For evaluation, we used a dataset of four patients with locally advanced pancreatic cancer treated with MR-ART on a clinical system (MRIdian, ViewRay, Oakwood Village, OH, USA). Each set included the simulation MRI and the onboard T1 MRI corresponding to a randomly selected treatment session. Each MRI had 144 axial slices of 266 × 266 pixels. Using the Dice Similarity Index (DSI) and the Hausdorff Distance Index (HDI), we compared the manual and automated contours for the liver, left and right kidneys, and the spinal cord.

RESULTS: The average auto-segmentation time was two minutes per set. Visually, the automatic and manual contours were similar. The fused results achieved better accuracy than either the bottom-up or the top-down method alone. The DSI values were above 0.86. The spinal canal contours yielded a low HDI value.

CONCLUSION: With a DSI significantly higher than the commonly reported 0.7, our algorithm yields high segmentation accuracy. To our knowledge, this is the first fully automated contouring approach that uses T1 MR images for adaptive radiotherapy.
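The two evaluation measures named above can be computed directly from binary contour masks. The sketch below uses plain voxel-index distances; the study may additionally account for physical voxel spacing, and the function names are assumptions.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_index(mask_a, mask_b):
    """Dice Similarity Index between two binary contour masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff_index(mask_a, mask_b):
    """Symmetric Hausdorff distance between the voxel coordinates of two masks."""
    pts_a = np.argwhere(mask_a)
    pts_b = np.argwhere(mask_b)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])
```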


Subjects
Image Interpretation, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Pancreatic Neoplasms/radiotherapy, Radiotherapy Planning, Computer-Assisted/methods, Radiotherapy, Image-Guided/methods, Support Vector Machine, Humans, Multimodal Imaging, Pancreatic Neoplasms/diagnostic imaging, Pancreatic Neoplasms/pathology, Tomography, X-Ray Computed, Workflow
5.
IEEE Access; 6: 28594-28610, 2018.
Article in English | MEDLINE | ID: mdl-31289704

ABSTRACT

As a dedicated countermeasure for heterogeneous multi-view data, multi-view clustering is currently a hot topic in machine learning. However, many existing methods either neglect the effective collaboration among views during clustering or do not distinguish the respective importance of the attributes within each view, treating them as equivalent. Motivated by these challenges, and based on maximum entropy clustering (MEC), two specialized criteria, inter-view collaborative learning (IEVCL) and intra-view-weighted attributes (IAVWA), are first devised as the bases. Then, by organically incorporating IEVCL and IAVWA into the formulation of classic MEC, a novel collaborative multi-view clustering model and the matching algorithm, referred to as view-collaborative, attribute-weighted MEC (VC-AW-MEC), are proposed. The significance of our efforts is threefold: 1) both IEVCL and IAVWA are devised specifically on top of MEC, so the proposed VC-AW-MEC is qualified to handle as many multi-view data scenes as possible; 2) IEVCL is competent in seeking the consensus across all involved views throughout clustering, whereas IAVWA is capable of adaptively discriminating the individual impact of the attributes within each view; and 3) benefiting from jointly leveraging IEVCL and IAVWA, the proposed VC-AW-MEC algorithm generally exhibits better clustering effectiveness and stability on heterogeneous multi-view data than several existing state-of-the-art approaches. Our claims have been verified on many synthetic and real-world multi-view data scenes.
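VC-AW-MEC builds on the classic maximum entropy clustering update, in which memberships are a softmax of negative squared distances controlled by an entropy weight. The sketch below shows only that base MEC step; the IEVCL collaboration and IAVWA attribute weights are not reproduced, and gamma and the function name are illustrative assumptions.

```python
import numpy as np

def maximum_entropy_clustering(x, c=3, gamma=0.1, iters=100, seed=0):
    """Base MEC on data x of shape (n, d): softmax memberships over squared
    distances, followed by a weighted-mean prototype update."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), c, replace=False)]  # initialise from data
    for _ in range(iters):
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        # Shift by the row-wise minimum for numerical stability before exp().
        u = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / gamma)
        u /= u.sum(axis=1, keepdims=True)               # entropy-regularised memberships
        centers = (u.T @ x) / u.sum(axis=0)[:, None]
    return centers, u
```

A smaller gamma sharpens the memberships toward hard assignments; a larger gamma spreads them, which is where the entropy term earns its name.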

6.
Knowl Based Syst; 130: 33-50, 2017 Aug 15.
Article in English | MEDLINE | ID: mdl-30050232

ABSTRACT

We study a novel fuzzy clustering method that improves segmentation performance on a target texture image by leveraging knowledge from a prior (source) texture image. Two knowledge transfer mechanisms, knowledge-leveraged prototype transfer (KL-PT) and knowledge-leveraged prototype matching (KL-PM), are first introduced as the bases. Applying them, the knowledge-leveraged transfer fuzzy C-means (KL-TFCM) method and its three-stage interlinked framework, consisting of knowledge extraction, knowledge matching, and knowledge utilization, are developed. There are two specific versions, KL-TFCM-c and KL-TFCM-f, i.e., the crisp and flexible forms, which use the strategies of maximum matching degree and weighted sum, respectively. The significance of our work is fourfold: 1) owing to the adjustable referable degree between the source and target domains, KL-PT is capable of appropriately learning insightful knowledge, i.e., the cluster prototypes, from the source domain; 2) KL-PM is able to self-adaptively determine reasonable pairwise relationships between the cluster prototypes of the source and target domains, even if the numbers of clusters differ in the two domains; 3) the joint action of KL-PM and KL-PT can effectively resolve data inconsistency and heterogeneity between the source and target domains, e.g., differences in data distribution and cluster number, so that, through the three-stage knowledge transfer, the beneficial knowledge from the source domain can be extensively and self-adaptively leveraged in the target domain; as evidence of this, both KL-TFCM-c and KL-TFCM-f surpass many existing clustering methods in texture image segmentation; and 4) when the cluster numbers differ between the source and target domains, KL-TFCM-f achieves higher clustering effectiveness and segmentation performance than KL-TFCM-c.
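The two mechanisms can be pictured as (a) pairing target prototypes with source prototypes and (b) blending the target domain's own fuzzy means with their matched source counterparts. The sketch below is a simplified reading of that idea, not the KL-TFCM objective itself; the nearest-prototype matching rule, the blending weight beta, and the function names are assumptions.

```python
import numpy as np

def match_prototypes(source_protos, target_protos):
    """KL-PM-style matching (sketch): pair each target prototype with its
    nearest source prototype, which also works when the two domains have
    different numbers of clusters."""
    d = np.linalg.norm(target_protos[:, None] - source_protos[None, :], axis=2)
    return d.argmin(axis=1)                       # matched source index per target prototype

def transfer_center_update(x, u, source_protos, beta=0.3, m=2.0):
    """KL-PT-style update (sketch): blend the target domain's fuzzy means with
    the matched source prototypes; beta is the referable degree of the source.

    x -- (n, d) target-domain texture features; u -- (n, c) fuzzy memberships
    """
    um = u ** m
    own = (um.T @ x) / um.sum(axis=0)[:, None]    # target-local fuzzy means
    matched = source_protos[match_prototypes(source_protos, own)]
    return (1 - beta) * own + beta * matched
```

Setting beta to 0 recovers plain FCM on the target image, which mirrors the paper's point that the usefulness of the source knowledge should be adjustable rather than assumed.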
