Results 1 - 20 of 21
1.
J Xray Sci Technol ; 26(3): 435-448, 2018.
Article in English | MEDLINE | ID: mdl-29562580

ABSTRACT

Optimization-based image reconstruction methods have been thoroughly investigated in the field of medical imaging. The Chambolle-Pock (CP) algorithm can be employed to solve the convex optimization programs arising in image reconstruction. The preconditioned CP (PCP) algorithm has been shown to have a much higher convergence rate than the ordinary CP (OCP) algorithm. The PCP algorithm utilizes a preconditioner-parameter, which ranges from 0 to 2 but is often simply set to 1, to tune the implementation of the algorithm to the specific application. In this work, we investigated the impact of the preconditioner-parameter on the convergence rate of the PCP algorithm when it is applied to TV-constrained, data-divergence-minimization (TVDM) optimization-based image reconstruction. We performed the investigations in the context of 2D computed tomography (CT) and 3D electron paramagnetic resonance imaging (EPRI). For 2D CT, we used the Shepp-Logan and two FORBILD phantoms. For 3D EPRI, we used a simulated 6-spheres phantom and a physical phantom. The results showed that the optimal preconditioner-parameter depends on the specific imaging conditions: simply setting the parameter to 1 cannot guarantee a fast convergence rate. This study therefore suggests adaptively tuning the preconditioner-parameter to obtain the optimal convergence rate of the PCP algorithm.
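The preconditioner-parameter discussed above can be made concrete with the well-known Pock-Chambolle diagonal preconditioning rule, where a parameter alpha in [0, 2] trades off the row- and column-wise scaling of the system matrix. This is a minimal sketch of that rule, not the paper's implementation; the function name is my own.

```python
import numpy as np

def pcp_step_sizes(A, alpha=1.0):
    """Diagonal step sizes for the preconditioned Chambolle-Pock (PCP)
    algorithm, following the Pock-Chambolle diagonal preconditioning rule
    with preconditioner-parameter `alpha` in [0, 2]."""
    absA = np.abs(np.asarray(A, dtype=float))
    # dual step sizes Sigma_i = 1 / sum_j |A_ij|^(2 - alpha), one per row
    sigma = 1.0 / np.maximum((absA ** (2.0 - alpha)).sum(axis=1), 1e-12)
    # primal step sizes T_j = 1 / sum_i |A_ij|^alpha, one per column
    tau = 1.0 / np.maximum((absA ** alpha).sum(axis=0), 1e-12)
    return sigma, tau
```

Sweeping `alpha` over a grid and timing convergence for each value is one simple way to realize the adaptive tuning the abstract recommends.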


Subjects
Algorithms; Image Processing, Computer-Assisted/methods; Humans; Image Processing, Computer-Assisted/instrumentation; Imaging, Three-Dimensional/instrumentation; Phantoms, Imaging; Tomography, X-Ray Computed/instrumentation
2.
J Xray Sci Technol ; 26(1): 83-102, 2018.
Article in English | MEDLINE | ID: mdl-29036875

ABSTRACT

OBJECTIVES: This work aims to explore more accurate pixel-driven projection methods for iterative image reconstruction, in order to reduce high-frequency artifacts in the generated projection images. METHODS: Three new pixel-driven projection methods, namely small-pixel-large-detector (SPLD), linear-interpolation-based (LIB) and distance-anterpolation-based (DAB), were proposed and applied to reconstruct images. Their performance was evaluated in both two-dimensional (2D) computed tomography (CT) via the modified FORBILD phantom and three-dimensional (3D) electron paramagnetic resonance (EPR) imaging via the 6-spheres phantom. Specifically, two evaluations were performed, based on projection generation and on image reconstruction. Projection generation was evaluated using a 2D disc phantom, the modified FORBILD phantom and the 6-spheres phantom; image reconstruction was evaluated using the FORBILD and 6-spheres phantoms. Two quantitative indices, root-mean-square error (RMSE) and contrast-to-noise ratio (CNR), were used. RESULTS: Compared with the ordinary pixel-driven projection method, the RMSE of the SPLD-based least-squares algorithm was reduced from 0.0701 to 0.0384 and the CNR was increased from 5.6 to 19.47 for 2D FORBILD phantom reconstruction. For 3D EPRI, the RMSE of SPLD was likewise reduced from 0.0594 to 0.0498 and the CNR was increased from 3.88 to 11.58. In addition, visual evaluation showed that both 2D and 3D reconstructed images suffered from high-frequency line-shaped artifacts when the ordinary pixel-driven projection method was used, whereas all three new methods suppressed the artifacts significantly and yielded more accurate reconstructions. CONCLUSIONS: The three proposed pixel-driven projection methods achieved more accurate iterative image reconstruction results and can easily be extended to other imaging modalities. Among them, the SPLD method is recommended for 3D and four-dimensional (4D) EPR imaging.
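The linear-interpolation idea behind methods such as LIB can be sketched for 2D parallel-beam geometry: instead of dumping each pixel's value into the nearest detector bin (the rounding that produces high-frequency artifacts), the value is split between the two nearest bins with linear weights. This is a simplified illustration under my own geometry conventions, not the paper's code.

```python
import numpy as np

def lib_forward_project(img, theta, n_det):
    """Linear-interpolation-based pixel-driven forward projection for one
    parallel-beam view at angle `theta` (radians), onto `n_det` unit-width
    detector bins. Each pixel's value is split between the two nearest bins."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    # signed detector coordinate of every pixel centre for this view
    t = (xs - c) * np.cos(theta) + (ys - c) * np.sin(theta) + (n_det - 1) / 2.0
    lo = np.floor(t).astype(int)
    w = t - lo                       # linear interpolation weight in [0, 1)
    proj = np.zeros(n_det)
    valid = (lo >= 0) & (lo + 1 < n_det)
    np.add.at(proj, lo[valid], (1.0 - w[valid]) * img[valid])
    np.add.at(proj, lo[valid] + 1, w[valid] * img[valid])
    return proj
```

Because the two weights sum to one, the projection conserves the total image mass whenever the detector covers the whole object.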


Subjects
Electron Spin Resonance Spectroscopy/methods; Imaging, Three-Dimensional/methods; Tomography, X-Ray Computed/methods; Algorithms; Humans; Phantoms, Imaging; Signal-To-Noise Ratio
3.
Molecules ; 22(12)2017 Dec 08.
Article in English | MEDLINE | ID: mdl-29292776

ABSTRACT

Most proteins perform their biological functions by interacting in complexes. Detecting protein complexes is an important task, not only for understanding the relationship between the functions and structures of biological networks but also for predicting the functions of unknown proteins. We present a new nodal metric that integrates a node's local topological information; the metric reflects how well the node represents a cluster within its larger local neighborhood in a protein-protein interaction (PPI) network. Based on this metric, we propose a seed-expansion graph clustering algorithm (SEGC) for protein complex detection in PPI networks. A roulette-wheel strategy is used in seed selection to enhance the diversity of clustering. For a candidate node u, we define its closeness to a cluster C, denoted NC(u, C), by combining the density of C and the connection between u and C. In SEGC, a cluster initially consisting of only a seed node is extended by recursively adding nodes from its neighbors according to this closeness, until all neighbors fail the expansion test. We compare the F-measure and accuracy of the proposed SEGC algorithm with those of other algorithms on Saccharomyces cerevisiae protein interaction networks. The experimental results show that SEGC outperforms the other algorithms under full coverage.
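The seed-expansion loop can be sketched as follows. The paper's closeness NC(u, C) combines the density of C with the connection between u and C; the simplified closeness below (the fraction of u's neighbours already inside the cluster) is my own stand-in, so this shows only the control flow of expansion, not SEGC itself.

```python
def seed_expand(adj, seed, min_closeness=0.5):
    """Greedy seed-expansion clustering sketch. `adj` maps each node to its
    set of neighbours; expansion stops when no frontier node is close enough."""
    cluster = {seed}
    frontier = set(adj[seed])
    while frontier:
        # candidate most closely tied to the current cluster
        u = max(frontier, key=lambda v: len(adj[v] & cluster) / len(adj[v]))
        if len(adj[u] & cluster) / len(adj[u]) < min_closeness:
            break  # the best candidate fails, so every neighbour fails
        cluster.add(u)
        frontier = (frontier | adj[u]) - cluster
    return cluster
```

On a graph made of two triangles joined by a single edge, expanding from a node of one triangle recovers that triangle and stops at the bridge.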


Subjects
Models, Biological; Protein Interaction Mapping/methods; Saccharomyces cerevisiae Proteins/chemistry; Algorithms; Cluster Analysis; Databases, Protein; Protein Interaction Maps; Saccharomyces cerevisiae/chemistry
4.
J Xray Sci Technol ; 23(4): 423-33, 2015.
Article in English | MEDLINE | ID: mdl-26410654

ABSTRACT

Electron paramagnetic resonance imaging (EPRI) is a robust method for measuring in vivo oxygen concentration (pO2). For 3D pulse EPRI, a commonly used reconstruction algorithm is filtered backprojection (FBP), in which the backprojection step is computationally intensive and may be time consuming when implemented on a CPU. A multistage implementation of the backprojection can be used for acceleration; however, it is not flexible (it requires an equal-linear-angle projection distribution) and may still be time consuming. In this work, single-stage backprojection is implemented on a GPU (graphics processing unit) with 1152 cores to accelerate the process. The GPU implementation accelerates the process by over a factor of 200 overall, and by over a factor of 3500 if only the computing time is considered. Some important lessons regarding the implementation of GPU-accelerated backprojection for EPRI are summarized. The resulting accelerated image reconstruction is useful for real-time reconstruction monitoring and other time-sensitive applications.
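The per-voxel accumulation that gets mapped onto GPU threads (one thread per voxel) is easy to state. The NumPy sketch below shows only the arithmetic for a 2D parallel-beam case, not the paper's 3D CUDA kernel; the geometry conventions are my own.

```python
import numpy as np

def backproject(sinogram, thetas, n):
    """Single-stage backprojection of `sinogram` (one row per view) onto an
    n-by-n grid. Each voxel accumulates the linearly interpolated detector
    sample at its projected coordinate, for every view."""
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    n_det = sinogram.shape[1]
    recon = np.zeros((n, n))
    for proj, theta in zip(sinogram, thetas):
        # detector coordinate of each voxel for this view
        t = (xs - c) * np.cos(theta) + (ys - c) * np.sin(theta) + (n_det - 1) / 2.0
        lo = np.clip(np.floor(t).astype(int), 0, n_det - 2)
        w = np.clip(t - lo, 0.0, 1.0)
        # linear interpolation between the two nearest detector samples
        recon += (1.0 - w) * proj[lo] + w * proj[lo + 1]
    return recon
```

In a CUDA kernel the loop over voxels (the implicit NumPy vectorization here) becomes the thread grid, while the loop over views stays inside each thread.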


Subjects
Computer Graphics; Electron Spin Resonance Spectroscopy/methods; Imaging, Three-Dimensional/methods; Algorithms; Phantoms, Imaging
5.
Inf Sci (N Y) ; 298: 447-467, 2015 Mar 20.
Article in English | MEDLINE | ID: mdl-32226109

ABSTRACT

Concepts are the most fundamental units of cognition in philosophy, and how to learn concepts from various aspects of the real world is the main concern within the domain of conceptual knowledge representation and processing. To improve the efficiency and flexibility of concept learning, in this paper we discuss concept learning via granular computing from the viewpoint of cognitive computing. More precisely, the cognitive mechanism of forming concepts is analyzed based on principles from philosophy and cognitive psychology, including how to model concept-forming cognitive operators, define cognitive concepts and establish a cognitive concept structure. Granular computing is then combined with the cognitive concept structure to improve the efficiency of concept learning. Furthermore, we put forward a cognitive computing system which serves as the initial environment for learning composite concepts and can integrate past experiences into itself to enhance the flexibility of concept learning. We also investigate cognitive processes that aim to learn one exact, or two approximate, cognitive concepts from a given object set, attribute set, or pair of object and attribute sets.

6.
Neural Netw ; 172: 106131, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38244357

ABSTRACT

Crowd localization, which extracts independent individual features, plays a significant role in the analysis of crowd scenes. The dense, trivial features of individual targets are frequently susceptible to interference from complex background features, which makes it difficult to obtain satisfactory predictions for individual targets. To address this issue, a sample-attention mechanism based on Fourier feature decorrelation is proposed for dense crowd localization. The correlations between features are decoupled in the Fourier transform domain, which induces the model to focus more on the true correlation between individual target features and labels. From the perspective of Fourier feature correlation between samples, an independence test statistic optimization with a cross-covariance operator is developed for feature decorrelation within the sample-attention framework. The sample attention with global weight learning is iteratively optimized by matching the prediction loss, which induces the model to partial out spurious correlations between target-irrelevant features and labels. Experimental results show that the proposed method outperforms current state-of-the-art crowd localization methods on public dense crowd datasets.


Subjects
Neural Networks, Computer
7.
Article in English | MEDLINE | ID: mdl-37220049

ABSTRACT

In many real-world applications, data may dynamically expand over time in both volume and feature dimensions. Besides, they are often collected in batches (also called blocks). We refer to this kind of data, whose volume and features increase in blocks, as blocky trapezoidal data streams. Current works either assume that the feature space of data streams is fixed or stipulate that the algorithm receives only one instance at a time, and none of them can effectively handle blocky trapezoidal data streams. In this article, we propose a novel algorithm to learn a classification model from blocky trapezoidal data streams, called learning with incremental instances and features (IIF). We attempt to design highly dynamic model-update strategies that can learn from increasing training data with an expanding feature space. Specifically, we first divide the data streams obtained in each round and construct corresponding classifiers for the divided parts. Then, to realize effective interaction of information between the classifiers, we utilize a single global loss function to capture their relationship. Finally, we use the idea of ensembling to obtain the final classification model. Furthermore, to make the method more applicable, we directly transform it into a kernel method. Both theoretical and empirical analyses validate the effectiveness of our algorithm.

8.
IEEE Trans Pattern Anal Mach Intell ; 45(12): 14789-14806, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37610915

ABSTRACT

With the emergence of new ways of collecting data in many dynamic-environment applications, samples are gathered gradually in accumulating feature spaces. Moreover, the incorporation of new types of features may result in the augmentation of class numbers. For instance, in activity recognition, using the old features collected during warm-up, we can separate different warm-up exercises; with the accumulation of new attributes obtained from newly added sensors, we can better separate the newly appearing formal exercises. Learning under such simultaneous augmentation of features and classes is crucial but rarely studied, particularly when the labeled samples with full observations are limited. In this paper, we tackle this problem by proposing a novel incremental learning method for Simultaneous Augmentation of Feature and Class (SAFC), which operates in a two-stage way. To guarantee the reusability of the model trained on previous data, we add a regularizer to the current model, which provides a solid prior for training the new classifier. We also present a theoretical analysis of the generalization bound, which validates the efficiency of model inheritance. After solving the one-shot problem, we extend the method to the multi-shot setting. Experimental results demonstrate the effectiveness of our approaches, including in activity recognition applications.

9.
Article in English | MEDLINE | ID: mdl-37067967

ABSTRACT

Feature selection has become one of the hot research topics in the era of big data. At the same time, as an extension of single-valued data, interval-valued data, with their inherent uncertainty, tend to be more applicable than single-valued data in some fields for characterizing inaccurate and ambiguous information, such as medical test results and qualified product indicators. However, there are relatively few studies on unsupervised attribute reduction for interval-valued information systems (IVISs), and how to effectively control the dramatic increase of time cost in feature selection on large-sample datasets remains to be studied. For these reasons, we propose a feature selection method for IVISs based on graph theory. The model's complexity is greatly reduced by utilizing the properties of the matrix power series to optimize the calculation of the original model. Our approach consists of two steps: the first ranks features according to the principles of relevance and nonredundancy, and the second selects the top-ranked attributes, with the number of features to keep fixed a priori. In this article, experiments are performed on 14 public datasets against seven comparative algorithms. The experimental results verify that our algorithm is effective and efficient for feature selection in IVISs.

10.
IEEE Trans Pattern Anal Mach Intell ; 45(2): 1798-1816, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35486570

ABSTRACT

The pure accuracy measure eliminates random consistency from the accuracy measure. Its biases toward both majority and minority classes are lower than those of the accuracy measure. In this paper, we demonstrate that, compared with the accuracy measure and the F-measure, the pure accuracy measure is insensitive to class distribution and discriminative for good classifiers. These advantages make the pure accuracy measure suitable for traditional classification. We then focus on two points: exploring a tighter generalization bound for the pure-accuracy-based learning paradigm, and designing a learning algorithm based on the pure accuracy measure. In particular, using the self-bounding property, we build an algorithm-independent generalization bound on the pure accuracy measure that is tighter than the existing bound of order O(1/√N) (where N is the number of instances). The proposed bound is free of smoothness or convexity assumptions on the hypothesis functions. In addition, we design a learning algorithm that optimizes the pure accuracy measure and use it in the selective ensemble learning setting. Experiments on sixteen benchmark data sets and four image data sets demonstrate that the proposed method statistically performs better than the other eight representative benchmark algorithms.
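Eliminating random consistency from accuracy can be sketched as (accuracy - random consistency) / (1 - random consistency). In the sketch below the random consistency is estimated from the label marginals, which is my assumption (under it the formula coincides with Cohen's kappa); the paper develops the measure and its learning theory in more depth.

```python
import numpy as np

def pure_accuracy(y_true, y_pred):
    """Accuracy with random consistency removed: (acc - rand) / (1 - rand),
    where `rand` is the expected agreement of a random classifier that has
    the same label marginals as `y_pred`."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    acc = np.mean(y_true == y_pred)
    # expected agreement of a marginal-matching random classifier
    rand = sum(np.mean(y_true == c) * np.mean(y_pred == c)
               for c in np.union1d(y_true, y_pred))
    return (acc - rand) / (1.0 - rand)
```

Note how a degenerate classifier that always predicts the majority class scores 0 here even when its plain accuracy looks respectable, which is the class-distribution insensitivity the abstract describes.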

11.
IEEE Trans Neural Netw Learn Syst ; 34(10): 6798-6812, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37021900

ABSTRACT

Representation and learning of concepts are critical problems in data science and cognitive science. However, existing research on concept learning has a prevalent disadvantage: the cognition it models is incomplete and complex. Meanwhile, as a practical mathematical tool for concept representation and concept learning, two-way learning (2WL) also has issues that have led to the stagnation of related research: concepts can only be learned from specific information granules, and there is no concept evolution mechanism. To overcome these challenges, we propose the two-way concept-cognitive learning (TCCL) method, which enhances the flexibility and evolution capability of 2WL for concept learning. We first analyze the fundamental relationship between two-way granule concepts in the cognitive system to build a novel cognitive mechanism. Furthermore, the movement three-way decision (M-3WD) method is introduced into 2WL to study the concept evolution mechanism from the viewpoint of concept movement. Unlike the existing 2WL method, the primary consideration of TCCL is two-way concept evolution rather than information-granule transformation. Finally, to interpret and help understand TCCL, an example analysis and experiments on various datasets are carried out to demonstrate the method's effectiveness. The results show that TCCL is more flexible and less time-consuming than 2WL, and that it can learn the same concepts as the latter. In addition, from the perspective of concept learning ability, TCCL generalizes concepts better than the granular concept cognitive learning model (CCLM).

12.
Article in English | MEDLINE | ID: mdl-37216237

ABSTRACT

The bagging method has received much attention and wide application in recent years due to its good performance and simple framework, and it underlies the advanced random forest method and accuracy-diversity ensemble theory. Bagging is an ensemble method based on simple random sampling (SRS) with replacement. However, SRS is the most basic sampling method in statistics, and more advanced sampling methods exist for probability density estimation. In imbalanced ensemble learning, down-sampling, over-sampling, and SMOTE have been proposed for generating base training sets, but these methods aim at changing the underlying distribution of the data rather than simulating it better. The ranked set sampling (RSS) method instead uses auxiliary information to obtain more effective samples. The purpose of this article is to propose a bagging ensemble method based on RSS, which uses the ordering of objects with respect to the class to obtain more effective training sets. To explain its performance, we give a generalization bound for the ensemble from the perspective of posterior probability estimation and Fisher information. Because an RSS sample has higher Fisher information than an SRS sample, the presented bound theoretically explains the better performance of RSS-Bagging. Experiments on 12 benchmark datasets demonstrate that RSS-Bagging statistically performs better than SRS-Bagging when the base classifiers are multinomial logistic regression (MLR) and support vector machines (SVMs).
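One cycle of ranked set sampling, the sampling step RSS-Bagging substitutes for SRS, can be sketched as: draw m independent sets of m units, rank each set by a cheap auxiliary variable, and keep the i-th ranked unit from the i-th set. This is textbook RSS, not the paper's code; the function name and signature are my own.

```python
import random

def ranked_set_sample(population, key, m, rng=random):
    """One RSS cycle: m sets of m units each, ranked by the auxiliary
    variable `key`; the i-th set contributes its i-th order statistic."""
    sample = []
    for i in range(m):
        group = sorted(rng.sample(population, m), key=key)
        sample.append(group[i])  # i-th order statistic of the i-th set
    return sample
```

Repeating the cycle gives samples that cover the order statistics of the population more evenly than SRS, which is the source of the higher Fisher information the abstract cites.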

13.
IEEE Trans Pattern Anal Mach Intell ; 44(5): 2438-2452, 2022 05.
Article in English | MEDLINE | ID: mdl-33108280

ABSTRACT

Regression-analysis-based methods have shown strong robustness and achieved great success in face recognition. In these methods, the convex l1-norm and the nuclear norm are usually utilized to approximate the l0-norm and the rank function. However, such convex relaxations may introduce a bias and lead to suboptimal solutions. In this paper, we propose a novel Enhanced Group Sparse regularized Nonconvex Regression (EGSNR) method for robust face recognition. An upper-bounded nonconvex function is introduced to replace the l1-norm for sparsity, which alleviates the bias problem and the adverse effects caused by outliers. To capture the characteristics of complex errors, we propose a mixed model combining the γ-norm and the matrix γ-norm induced from the nonconvex function. Furthermore, an l2,γ-norm based regularizer is designed to directly seek interclass sparsity, or group sparsity, instead of the traditional l2,1-norm. The locality of the data, i.e., the distance between the query sample and multiple subspaces, is also taken into consideration. This enhanced group sparse regularizer enables EGSNR to learn more discriminative representation coefficients. Comprehensive experiments on several popular face datasets demonstrate that the proposed EGSNR outperforms state-of-the-art regression-based methods for robust face recognition.


Subjects
Algorithms; Facial Recognition; Face/diagnostic imaging; Regression Analysis
14.
IEEE Trans Neural Netw Learn Syst ; 33(3): 1254-1268, 2022 Mar.
Article in English | MEDLINE | ID: mdl-33332275

ABSTRACT

Regression-based methods have been widely applied in face identification; they attempt to approximately represent a query sample as a linear combination of all training samples. Recently, a matrix regression model based on the nuclear norm was proposed and has shown strong robustness to structural noise. However, it may ignore two important issues: the label information and the local relationships in the data. In this article, a novel robust representation method called locality-constrained discriminative matrix regression (LDMR) is proposed, which takes both label information and locality structure into account. Instead of focusing on the representation coefficients, LDMR directly imposes constraints on the representation components by fully considering the label information, which has a closer connection to the identification process. The locality structure, characterized by subspace distances, is used to learn class weights, and the correct class is forced to make a larger contribution to the representation. Furthermore, the class weights are also incorporated into a competitive constraint on the representation components, which reduces the pairwise correlations between different classes and enhances the competitive relationships among all classes. An iterative optimization algorithm is presented to solve LDMR. Experiments on several benchmark data sets demonstrate that LDMR outperforms some state-of-the-art regression-based methods.

15.
IEEE Trans Pattern Anal Mach Intell ; 44(12): 9236-9254, 2022 Dec.
Article in English | MEDLINE | ID: mdl-34752381

ABSTRACT

Multi-modal classification (MMC) aims to integrate complementary information from different modalities to improve classification performance. Existing MMC methods can be grouped into two categories: traditional methods and deep learning-based methods. The traditional methods often implement fusion in a low-level original space; moreover, they mostly focus on inter-modal fusion and neglect intra-modal fusion, so the representation capacity of the fused features they induce is insufficient. The deep learning-based methods implement fusion in a high-level feature space where the associations among features are considered, but the whole process is implicit and the fused space lacks interpretability. Based on these observations, we propose a novel interpretable association-based fusion method for MMC, named AF. In AF, both the association information and the high-order information extracted from the feature space are simultaneously encoded into a new feature space, which helps to train an MMC model in an explicit manner. Moreover, AF is a general fusion framework, and most existing MMC methods can be embedded into it to improve their performance. Finally, the effectiveness and generality of AF are validated on 22 datasets against four typical traditional MMC methods, adopting best-modality, early, late and model fusion strategies, and a deep learning-based MMC method.

16.
Sci Rep ; 10(1): 9991, 2020 Jun 19.
Article in English | MEDLINE | ID: mdl-32561879

ABSTRACT

In the past decade, the study of the dynamics of complex networks has been a focus of research. In particular, the controllability of complex networks based on nodal dynamics has received strong attention, and significant theories have been formulated in network control. Target control theory is one of the most important of these results; it addresses how to select as few input nodes as possible to control chosen target nodes in a nodal linear dynamical system. However, research on how to control target edges in switchboard dynamics, a dynamical process defined on the edges, has been lacking. This shortcoming motivated us to give an effective control scheme for target edges. Here, we propose the k-travel algorithm to approximately calculate the minimum number of driven edges and driver nodes for a directed tree-like network. For general cases, we put forward a greedy algorithm, TEC, to approximately calculate the minimum number of driven edges and driver nodes. Analytic calculations show that networks with large assortativity coefficients and small average shortest paths are efficient in random target edge control, while networks with small clustering coefficients are efficient in local target edge control.

17.
IEEE Trans Neural Netw Learn Syst ; 29(7): 2986-2999, 2018 07.
Article in English | MEDLINE | ID: mdl-28650830

ABSTRACT

Feature selection is viewed as an important preprocessing step for pattern recognition, machine learning, and data mining. The neighborhood is one of the most important concepts in classification learning and can be used to distinguish samples with different decisions. In this paper, a neighborhood discrimination index is proposed to characterize the distinguishing information of a neighborhood relation; it reflects the distinguishing ability of a feature subset. The proposed discrimination index is computed from the cardinality of the neighborhood relation rather than from neighborhood similarity classes. Variants of the discrimination index, including the joint, conditional, and mutual discrimination indices, are introduced to compute the change in distinguishing information caused by combining multiple feature subsets. They have similar properties to Shannon entropy and its variants. A parameter named the neighborhood radius is introduced into these discrimination measures to handle real-valued data. Based on the proposed discrimination measures, the significance measure of a candidate feature is defined and a greedy forward algorithm for feature selection is designed. Data sets selected from public data sources are used to compare the proposed algorithm with existing algorithms. The experimental results confirm that the discrimination-index-based algorithm yields superior performance compared with other classical algorithms.

18.
Sci Rep ; 7: 45380, 2017 03 28.
Article in English | MEDLINE | ID: mdl-28349923

ABSTRACT

Global connectivity is an important issue for networks: the failure of a few key edges may lead to the breakdown of the whole system, and finding them provides a better understanding of system robustness. Based on topological information, we propose an approach named LE (link entropy) to quantify edge significance in maintaining global connectivity. We then compare LE with six other acknowledged indices of edge significance: edge betweenness centrality, degree product, bridgeness, diffusion importance, topological overlap and k-path edge centrality. Experimental results show that the LE approach outperforms the others in quantifying edge significance for maintaining global connectivity.

19.
IEEE Trans Neural Netw Learn Syst ; 27(10): 2047-59, 2016 10.
Article in English | MEDLINE | ID: mdl-26441455

ABSTRACT

Learning from categorical data plays a fundamental role in areas such as pattern recognition, machine learning, data mining, and knowledge discovery. To effectively discover the group structure inherent in a set of categorical objects, many categorical clustering algorithms have been developed in the literature, among which k-modes-type algorithms are very representative because of their good performance. Nevertheless, there is still much room to improve their clustering performance in comparison with clustering algorithms for numeric data. This may arise from the fact that categorical data lack the clear space structure of numeric data. To address this issue, we propose in this paper a novel data-representation scheme for categorical data, which maps a set of categorical objects into a Euclidean space. Based on this scheme, a general framework for space-structure-based categorical clustering algorithms (SBC) is designed. This framework, together with two kinds of dissimilarities, leads to two versions of the SBC-type algorithms. To verify the performance of the SBC-type algorithms, we employ four representative k-modes-type algorithms as references. Experiments show that the proposed SBC-type algorithms significantly outperform the k-modes-type algorithms.
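The core idea of mapping categorical objects into a Euclidean space can be illustrated with the simplest such embedding, a per-attribute one-hot encoding. This is a simplified stand-in for the paper's space-structure representation, shown only to make the idea concrete; any standard clustering algorithm can then run in the embedded space.

```python
import numpy as np

def categorical_to_space(objects):
    """Embed categorical objects (tuples of attribute values) in a Euclidean
    space: each attribute value becomes a 0/1 coordinate. Rows are objects."""
    n_attrs = len(objects[0])
    cols = []
    for j in range(n_attrs):
        # one binary coordinate per distinct value of attribute j
        for v in sorted({obj[j] for obj in objects}):
            cols.append([1.0 if obj[j] == v else 0.0 for obj in objects])
    return np.array(cols).T
```

Each row sums to the number of attributes, since every object takes exactly one value per attribute; Euclidean distances in this space then reflect attribute-wise mismatches.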

20.
J Magn Reson ; 258: 49-57, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26225440

ABSTRACT

Tumors and tumor portions with low oxygen concentrations (pO2) have been shown to be resistant to radiation therapy. As such, radiation therapy efficacy may be enhanced if the delivered radiation dose is tailored to the spatial distribution of pO2 within the tumor. A technique for accurately imaging tumor oxygenation is therefore critically important to guide radiation treatment that accounts for the effects of local pO2. Electron paramagnetic resonance imaging (EPRI) has been considered one of the leading methods for quantitatively imaging pO2 within tumors in vivo. However, current EPRI techniques require relatively long imaging times; reducing the number of projections scanned can considerably reduce the imaging time. Conventional image reconstruction algorithms, such as filtered backprojection (FBP), may produce severe artifacts in images reconstructed from sparse-view projections, which can lower the utility of the reconstructed images. In this work, an optimization-based image reconstruction algorithm using constrained total variation (TV) minimization, subject to data consistency, is developed and evaluated. The algorithm was evaluated using simulated phantom, physical phantom and pre-clinical EPRI data. The TV algorithm is compared with FBP using subjective and objective metrics. The results demonstrate the merits of the proposed reconstruction algorithm.
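The role the TV term plays can be illustrated on its simplest special case, denoising (system matrix equal to the identity), minimized here by plain gradient descent on a smoothed TV penalty. The paper solves the harder constrained tomographic problem with data consistency; this sketch, with my own parameter choices, shows only the core regularization idea.

```python
import numpy as np

def tv_grad(x, eps=0.05):
    """Gradient of a smoothed isotropic total-variation penalty on a 2D image."""
    gx = np.diff(x, axis=1, append=x[:, -1:])   # forward differences
    gy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
    px, py = gx / mag, gy / mag
    # the TV gradient is the negative divergence of the normalised field
    return -(np.diff(px, axis=1, prepend=px[:, :1])
             + np.diff(py, axis=0, prepend=py[:1, :]))

def tv_denoise(b, lam=0.1, n_iter=300, step=0.1):
    """Gradient descent on 0.5 * ||x - b||^2 + lam * TV_eps(x)."""
    x = b.copy()
    for _ in range(n_iter):
        x = x - step * ((x - b) + lam * tv_grad(x))
    return x
```

On a noisy piecewise-constant image, the result has markedly lower total variation than the input while staying close to the data, which is exactly the behavior that suppresses sparse-view streak artifacts in the tomographic setting.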


Subjects
Electron Spin Resonance Spectroscopy/methods; Imaging, Three-Dimensional/methods; Molecular Imaging/methods; Neoplasms, Experimental/metabolism; Oximetry/methods; Oxygen/metabolism; Algorithms; Animals; Image Enhancement/methods; Magnetic Resonance Imaging/methods; Mice; Neoplasms, Experimental/pathology; Reproducibility of Results; Sensitivity and Specificity