Results 1 - 20 of 40
1.
iScience ; 27(10): 110915, 2024 Oct 18.
Article in English | MEDLINE | ID: mdl-39381747

ABSTRACT

Infrared and visible image fusion aims to produce images that highlight key targets and offer distinct textures by merging the thermal radiation information of infrared images with the detailed textures of visible images. Traditional autoencoder-decoder-based fusion methods often rely on manually designed fusion strategies, which lack flexibility across different scenarios. To address this limitation, we introduce EMAFusion, a fusion approach featuring an enhanced multiscale encoder and a learnable, lightweight fusion network. Our method incorporates skip connections, the convolutional block attention module (CBAM), and a nested architecture within the autoencoder-decoder framework to extract and preserve multiscale features for fusion tasks. Furthermore, we propose a fusion network driven by spatial and channel attention mechanisms, designed to precisely capture and integrate essential features from both image types. Comprehensive evaluations on the TNO image fusion dataset confirm the proposed method's superiority over existing state-of-the-art techniques, demonstrating its potential for advancing infrared and visible image fusion.
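The attention-weighted fusion idea can be illustrated with a small numpy sketch. This is not the paper's EMAFusion network: the learned CBAM and fusion modules are replaced here by hand-written channel and spatial attention weights purely for illustration.

```python
import numpy as np

def channel_attention(feat):
    # Squeeze: global average pooling over spatial dims -> one scalar per channel.
    # A softmax stands in for the learned excitation MLP of real CBAM.
    c = feat.mean(axis=(1, 2))
    w = np.exp(c - c.max())
    return (w / w.sum())[:, None, None]          # shape (C, 1, 1)

def spatial_attention(feat):
    # Average over channels, squash to (0, 1) with a sigmoid.
    s = feat.mean(axis=0, keepdims=True)          # shape (1, H, W)
    return 1.0 / (1.0 + np.exp(-s))

def attention_fuse(ir, vis):
    """Weight each modality by its own attention response, then normalize,
    so every output pixel is a convex combination of the two inputs."""
    w_ir = channel_attention(ir) * spatial_attention(ir)
    w_vis = channel_attention(vis) * spatial_attention(vis)
    return (w_ir * ir + w_vis * vis) / (w_ir + w_vis)

rng = np.random.default_rng(0)
ir = rng.standard_normal((4, 8, 8))    # toy "infrared" feature map (C, H, W)
vis = rng.standard_normal((4, 8, 8))   # toy "visible" feature map
fused = attention_fuse(ir, vis)
```

Because the weights are strictly positive and normalized, the fused map stays between the two inputs at every location, which is the basic behavior a learned fusion network refines.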

2.
IEEE Trans Cybern ; PP, 2024 Sep 27.
Article in English | MEDLINE | ID: mdl-39331547

ABSTRACT

Differentiable architecture search (DARTS) has emerged as a promising technique for efficient neural architecture search (NAS), and it consists of two main steps for finding a high-performance architecture. First, a DARTS supernet composed of mixed operations is optimized via gradient descent. Second, the final architecture is built from the selected operations that contribute the most to the supernet. Although DARTS improves the efficiency of NAS, it suffers from the well-known degeneration issue, which can lead to deteriorating architectures. Existing works mainly attribute the degeneration issue to the failure of supernet optimization, while little attention has been paid to the selection method. In this article, we abandon the widely used magnitude-based selection method and propose a novel criterion based on operation strength, which estimates the importance of an operation by its effect on the final loss. We show that the degeneration issue can be effectively addressed by using the proposed criterion without any modification of supernet optimization, indicating that the magnitude-based selection method can be a critical cause of the instability of DARTS. Experiments on the NAS-Bench-201 and DARTS search spaces show the effectiveness of our method.
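The contrast between magnitude-based and loss-effect-based selection can be sketched on a single toy mixed edge. This is a simplified illustration, not the paper's exact criterion: "operation strength" is approximated here as the loss increase when an operation is masked out of the softmax mixture.

```python
import numpy as np

def mixed_output(alphas, op_outputs):
    """DARTS mixed edge: softmax-weighted sum of candidate operation outputs."""
    w = np.exp(alphas - alphas.max())
    w = w / w.sum()
    return (w[:, None] * op_outputs).sum(axis=0)

def loss(pred, target):
    return float(((pred - target) ** 2).mean())

def operation_strength(alphas, op_outputs, target):
    """Strength of op i = loss change when op i is removed and weights renormalize."""
    base = loss(mixed_output(alphas, op_outputs), target)
    strengths = []
    for i in range(len(alphas)):
        masked_alphas = np.delete(alphas, i)
        masked_outs = np.delete(op_outputs, i, axis=0)
        strengths.append(loss(mixed_output(masked_alphas, masked_outs), target) - base)
    return np.array(strengths)

# Toy edge: op 1 matches the target best even though op 0 has the largest alpha.
target = np.ones(5)
op_outputs = np.stack([np.zeros(5), np.ones(5), np.full(5, -1.0)])
alphas = np.array([2.0, 1.0, 0.0])

strength = operation_strength(alphas, op_outputs, target)
selected = int(np.argmax(strength))        # strength-based choice
magnitude_pick = int(np.argmax(alphas))    # magnitude-based choice
```

In this toy case the two criteria disagree: the magnitude rule keeps the operation with the largest architecture weight, while the strength rule keeps the operation whose removal hurts the loss the most.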

3.
Article in English | MEDLINE | ID: mdl-39255182

ABSTRACT

This article investigates local synchronization for delayed complex dynamical networks (CDNs) under self-triggered impulsive control (STIC) approaches involving delays. With the help of Lyapunov-Razumikhin methods and the comparison principle, some design criteria for STIC strategies ensuring local synchronization of delayed CDNs with delayed impulses are provided, and Zeno behavior is avoided. Compared with existing results on synchronization of CDNs under STIC, this article fully considers time delays in both the continuous and discrete system dynamics. Moreover, the proposed self-triggered mechanism (STM) is an explicit expression, under which the next triggering instant can be derived directly, with a simple structure and easy implementation. Finally, two numerical examples are provided to validate the proposed theoretical criteria.

4.
Article in English | MEDLINE | ID: mdl-38743547

ABSTRACT

The superior performance of modern computer vision backbones (e.g., vision Transformers trained on ImageNet-1K/22K) usually comes with a costly training procedure. This study addresses the issue by generalizing the idea of curriculum learning beyond its original formulation, i.e., training models using easier-to-harder data. Specifically, we reformulate the training curriculum as a soft-selection function that uncovers progressively more difficult patterns within each example during training, instead of performing easier-to-harder sample selection. Our work is inspired by an intriguing observation on the learning dynamics of visual backbones: during the earlier stages of training, the model predominantly learns to recognize 'easier-to-learn' discriminative patterns in the data. Viewed through the frequency and spatial domains, these patterns comprise lower-frequency components and the natural image contents without distortion or data augmentation. Motivated by these findings, we propose a curriculum in which the model always leverages all the training data at every learning stage, yet exposure to the 'easier-to-learn' patterns of each example comes first, with harder patterns gradually introduced as training progresses. To implement this idea in a computationally efficient way, we introduce a cropping operation in the Fourier spectrum of the inputs, enabling the model to learn from only the lower-frequency components. We then show that exposing the contents of natural images can be readily achieved by modulating the intensity of data augmentation. Finally, we integrate these two aspects and design curriculum learning schedules via tailored search algorithms. Moreover, we present useful techniques for deploying our approach efficiently in challenging practical scenarios, such as large-scale parallel training and limited input/output or data pre-processing speed.
The resulting method, EfficientTrain++, is simple, general, yet surprisingly effective. As an off-the-shelf approach, it reduces the training time of various popular models (e.g., ResNet, ConvNeXt, DeiT, PVT, Swin, CSWin, and CAFormer) by [Formula: see text] on ImageNet-1K/22K without sacrificing accuracy. It also demonstrates efficacy in self-supervised learning (e.g., MAE). Code is available at: https://github.com/LeapLabTHU/EfficientTrain.
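The low-frequency cropping step described above can be sketched with numpy's FFT. This is an illustrative reimplementation of the general idea (crop a centered window of the shifted spectrum and invert it), not code from the EfficientTrain++ repository; the rescaling factor is one plausible normalization choice.

```python
import numpy as np

def low_freq_crop(img, keep_ratio):
    """Keep only a centered low-frequency window of the 2-D spectrum.

    Cropping the shifted FFT to a (keep_ratio*H, keep_ratio*W) window and
    inverting it yields a smaller image containing only low-frequency content,
    so early training can run on cheaper, easier-to-learn inputs.
    """
    h, w = img.shape
    kh, kw = max(1, int(h * keep_ratio)), max(1, int(w * keep_ratio))
    spec = np.fft.fftshift(np.fft.fft2(img))      # move DC to the center
    top, left = (h - kh) // 2, (w - kw) // 2
    cropped = spec[top:top + kh, left:left + kw]  # keep the low-frequency window
    # Rescale so intensities match the original (the FFT is unnormalized)
    small = np.fft.ifft2(np.fft.ifftshift(cropped)) * (kh * kw) / (h * w)
    return np.real(small)

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
small = low_freq_crop(img, 0.5)   # 16x16 low-frequency version of the input
```

A sanity check on the normalization: a constant image has only a DC component, so cropping its spectrum and inverting should reproduce the same constant at the smaller size.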

5.
Article in English | MEDLINE | ID: mdl-38662565

ABSTRACT

Dynamic computation has emerged as a promising strategy for improving the inference efficiency of deep networks. It allows selective activation of computing units, such as layers or convolution channels, or adaptive allocation of computation to highly informative spatial regions in image features, thus significantly reducing unnecessary computation conditioned on each input sample. However, the practical efficiency of dynamic models does not always match the theoretical results. This discrepancy stems from three key challenges: 1) the absence of a unified formulation for the various dynamic inference paradigms, owing to the fragmented research landscape; 2) the undue emphasis on algorithm design over scheduling strategies, which are critical for optimizing computational performance and resource utilization on CUDA-enabled GPUs; and 3) the cumbersome process of evaluating practical latency, as most existing libraries are tailored to static operators. To address these issues, we introduce Latency-Aware Unified Dynamic Networks (LAUDNet), a comprehensive framework that unifies three cornerstone dynamic paradigms (spatially adaptive computation, dynamic layer skipping, and dynamic channel skipping) under a single formulation. To reconcile theoretical and practical efficiency, LAUDNet integrates algorithmic design with scheduling optimization, assisted by a latency predictor that accurately and efficiently gauges the inference latency of dynamic operators. This latency predictor jointly considers algorithms, scheduling strategies, and hardware attributes. We empirically validate various dynamic paradigms within the LAUDNet framework across a range of vision tasks, including image classification, object detection, and instance segmentation. Our experiments confirm that LAUDNet effectively narrows the gap between theoretical and real-world efficiency.
For example, LAUDNet can reduce the practical latency of its static counterpart, ResNet-101, by over 50% on hardware platforms such as V100, RTX3090, and TX2 GPUs. Furthermore, LAUDNet surpasses competing methods in the trade-off between accuracy and efficiency. Code is available at: https://www.github.com/LeapLabTHU/LAUDNet.
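Of the three paradigms, dynamic layer skipping is the easiest to sketch. The toy below is not LAUDNet: the learned gating network is replaced by fixed gate logits, and a sigmoid threshold decides per layer whether the residual block executes at all, which is the source of the runtime savings.

```python
import numpy as np

def gated_forward(x, layers, gate_threshold=0.5):
    """Run a stack of residual layers, skipping those whose gate is closed.

    Each layer is a (weight_matrix, gate_logit) pair; a sigmoid over the gate
    logit decides (per input here, per sample in practice) whether to execute
    the block. Skipped layers cost nothing, which is where the latency win
    comes from on real hardware.
    """
    executed = 0
    for w, gate_logit in layers:
        gate = 1.0 / (1.0 + np.exp(-gate_logit))
        if gate >= gate_threshold:
            x = x + np.tanh(w @ x)   # residual block
            executed += 1
    return x, executed

rng = np.random.default_rng(0)
dim = 4
# Four residual layers; the two with negative gate logits will be skipped.
layers = [(rng.standard_normal((dim, dim)), g) for g in (2.0, -3.0, 1.0, -1.0)]
x = rng.standard_normal(dim)
out, executed = gated_forward(x, layers)
```

The gap the paper targets is visible even here: the theoretical saving is "half the layers", but realizing it in practice requires the scheduler to actually elide the skipped work rather than masking it.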

6.
Sci Rep ; 14(1): 5791, 2024 03 09.
Article in English | MEDLINE | ID: mdl-38461342

ABSTRACT

Diabetic retinopathy (DR) is a serious ocular complication that can pose a severe risk to a patient's vision and overall health. Currently, automatic grading of DR mainly relies on deep learning techniques. However, the lesion information in DR images is complex, variable in shape and size, and randomly distributed, which exposes two shortcomings of current methods: it is difficult to effectively extract such varied features, and it is difficult to establish connections between lesion information in different regions. To address these shortcomings, we design a multi-scale dynamic fusion (MSDF) module and combine it with graph convolution operations to propose a multi-scale dynamic graph convolutional network (MDGNet) in this paper. MDGNet first uses convolution kernels of different sizes to extract features of different shapes and sizes in the lesion regions, and then automatically learns the corresponding weights for feature fusion according to each feature's contribution to model grading. Finally, a graph convolution operation links the lesion features in different regions. As a result, our proposed method effectively combines local and global features, which benefits correct DR grading. We evaluate the effectiveness of our method on two publicly available datasets, APTOS and DDR. Extensive experiments demonstrate that the proposed MDGNet achieves the best grading results on both APTOS and DDR, and extracts lesion information more accurately and diversely.
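The multi-scale extraction-and-weighting idea can be sketched in one dimension. This is a schematic stand-in for the MSDF module, not the paper's network: averaging kernels of several sizes replace the learned convolutions, and a softmax over each branch's response energy replaces the learned fusion weights.

```python
import numpy as np

def multiscale_fusion(signal, kernel_sizes=(3, 5, 7)):
    """Filter a signal at several scales, then fuse the branches with weights
    derived from each branch's response energy (a toy proxy for the learned
    contribution-based weights in the MSDF module)."""
    branches = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k                       # averaging filter of size k
        branches.append(np.convolve(signal, kernel, mode="same"))
    branches = np.stack(branches)                     # (num_scales, length)
    energy = (branches ** 2).mean(axis=1)             # one scalar per scale
    w = np.exp(energy - energy.max())
    w = w / w.sum()                                   # softmax fusion weights
    return (w[:, None] * branches).sum(axis=0), w

rng = np.random.default_rng(0)
signal = rng.standard_normal(64)
fused, weights = multiscale_fusion(signal)
```

In MDGNet the fused multi-scale features would then feed the graph convolution that links lesion regions; here the sketch stops at the fusion step.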


Subjects
Diabetes Mellitus, Diabetic Retinopathy, Humans, Diabetic Retinopathy/diagnostic imaging, Eye, Algorithms, Face, Research Design
7.
IEEE Trans Pattern Anal Mach Intell ; 46(9): 5890-5904, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38393854

ABSTRACT

Long-tailed distributions frequently emerge in real-world data, where a large number of minority categories contain a limited number of samples. This imbalance considerably impairs the performance of standard supervised learning algorithms, which are mainly designed for balanced training sets. Recent investigations have revealed that supervised contrastive learning shows promise in alleviating data imbalance. However, its performance is plagued by an inherent challenge: it requires sufficiently large batches of training data to construct contrastive pairs that cover all categories, yet this requirement is difficult to meet with class-imbalanced data. To overcome this obstacle, we propose a novel probabilistic contrastive (ProCo) learning algorithm that estimates the feature-space distribution of the samples from each class and samples contrastive pairs accordingly. Estimating the distributions of all classes from the features in a small batch, particularly for imbalanced data, is not feasible. Our key idea is therefore a reasonable and simple assumption: the normalized features in contrastive learning follow a mixture of von Mises-Fisher (vMF) distributions on the unit hypersphere, which brings two benefits. First, the distribution parameters can be estimated using only the first sample moment, which can be efficiently computed in an online manner across batches. Second, based on the estimated distribution, the vMF assumption allows us to sample an infinite number of contrastive pairs and derive a closed form of the expected contrastive loss for efficient optimization. Beyond long-tailed problems, ProCo can be directly applied to semi-supervised learning by generating pseudo-labels for unlabeled data, which can in turn be used to estimate the sample distributions. Theoretically, we analyze the error bound of ProCo.
Empirically, extensive experimental results on supervised/semi-supervised visual recognition and object detection tasks demonstrate that ProCo consistently outperforms existing methods across various datasets.
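The first-moment estimation that makes ProCo's online computation possible is easy to sketch. This is an illustrative estimator, not the paper's implementation: it maintains a running mean of normalized features per class, whose direction is the vMF mean direction and whose length feeds the standard Banerjee et al. approximation of the concentration kappa.

```python
import numpy as np

class OnlineVMFEstimator:
    """Online estimate of one class's vMF parameters from the first moment.

    Only a running mean of L2-normalized features is stored, so the estimate
    can be updated batch by batch: the normalized running mean is the
    maximum-likelihood mean direction, and its length R gives kappa via a
    common approximation.
    """
    def __init__(self, dim):
        self.moment = np.zeros(dim)   # running mean of unit feature vectors
        self.count = 0

    def update(self, feats):
        feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
        n = len(feats)
        self.moment = (self.moment * self.count + feats.sum(axis=0)) / (self.count + n)
        self.count += n

    def mean_direction(self):
        return self.moment / np.linalg.norm(self.moment)

    def kappa(self):
        # Banerjee et al. approximation: kappa ~ R(d - R^2) / (1 - R^2)
        r = np.linalg.norm(self.moment)
        d = len(self.moment)
        return r * (d - r ** 2) / (1.0 - r ** 2)

rng = np.random.default_rng(0)
true_mu = np.array([1.0, 0.0, 0.0])
# Tightly clustered samples around true_mu, fed in two "batches".
samples = true_mu + 0.1 * rng.standard_normal((200, 3))
est = OnlineVMFEstimator(3)
est.update(samples[:100])
est.update(samples[100:])
mu_hat = est.mean_direction()
```

With the parameters in hand, ProCo's second step, sampling unlimited contrastive pairs and evaluating the closed-form expected loss, needs no access to the original features.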

8.
Article in English | MEDLINE | ID: mdl-37943650

ABSTRACT

Unsupervised domain adaptation (UDA) aims to adapt models learned from a well-annotated source domain to a target domain for which only unlabeled samples are given. Current UDA approaches learn domain-invariant features by aligning the source and target feature spaces through statistical discrepancy minimization or adversarial training. However, these constraints can distort semantic feature structures and cause a loss of class discriminability. In this article, we introduce a novel prompt learning paradigm for UDA, named domain adaptation via prompt learning (DAPrompt). In contrast to prior works, our approach learns the underlying label distribution for the target domain rather than aligning the domains. The main idea is to embed domain information into prompts, a form of representation generated from natural language, which is then used to perform classification. This domain information is shared only by images from the same domain, thereby dynamically adapting the classifier to each domain. With this paradigm, we show that our model not only outperforms previous methods on several cross-domain benchmarks but is also very efficient to train and easy to implement.

9.
Article in English | MEDLINE | ID: mdl-37934636

ABSTRACT

Offline reinforcement learning (RL) optimizes a policy on a previously collected dataset without any interaction with the environment, yet usually suffers from the distributional shift problem. To mitigate this issue, a typical solution is to impose a policy constraint on the policy improvement objective. However, existing methods generally adopt a "one-size-fits-all" practice, i.e., keeping a single improvement-constraint balance for all the samples in a mini-batch or even the entire offline dataset. In this work, we argue that different samples should be treated with different policy-constraint intensities. Based on this idea, a novel plug-in approach named guided offline RL (GORL) is proposed. GORL employs a guiding network, along with only a few expert demonstrations, to adaptively determine the relative importance of policy improvement and policy constraint for every sample. We theoretically prove that the guidance provided by our method is rational and near-optimal. Extensive experiments on various environments suggest that GORL can be easily integrated into most offline RL algorithms with statistically significant performance improvements.

10.
Expert Syst Appl ; 228: 120389, 2023 Oct 15.
Article in English | MEDLINE | ID: mdl-37193247

ABSTRACT

Recent years have witnessed growing interest in neural network-based medical image classification methods, which have demonstrated remarkable performance in this field. Typically, convolutional neural network (CNN) architectures are employed to extract local features. However, the transformer, a newly emerged architecture, has gained popularity for its ability to explore the relevance of remote elements in an image through a self-attention mechanism. Nonetheless, improving classification accuracy requires establishing not only local connectivity but also remote relationships between lesion features, as well as capturing the overall image structure. To tackle these issues, this paper proposes a network based on multilayer perceptrons (MLPs) that learns the local features of medical images while also capturing overall feature information in both the spatial and channel dimensions, thus using image features effectively. The method is extensively validated on the COVID19-CT and ISIC 2018 datasets, and the results show that it is more competitive and achieves higher performance in medical image classification than existing methods. This suggests that using MLPs to capture image features and establish connections between lesions can provide novel ideas for future medical image classification tasks.

11.
IEEE Trans Cybern ; 53(1): 173-183, 2023 Jan.
Article in English | MEDLINE | ID: mdl-34260369

ABSTRACT

This article mainly explores the local input-to-state stability (LISS) of a class of nonlinear systems under a saturated control strategy, where both external disturbances and impulsive disturbances are fully considered. Using the Lyapunov method and inequality techniques, some sufficient conditions under which the system can be made LISS are proposed, and the elastic constraint relationship among the saturated control gain, rate coefficients, external disturbance, and domain of initial values is revealed. Moreover, optimization design procedures are provided to obtain estimates of the admissible external disturbance and domain of initial values that are as large as possible, where the corresponding saturated control law can be designed by solving LMI-based conditions. In the absence of external disturbance, local exponential stability (LES) can also be established under a set of more relaxed conditions. Finally, two examples are presented to demonstrate the validity of the obtained results.

12.
IEEE Trans Cybern ; 53(7): 4079-4093, 2023 Jul.
Article in English | MEDLINE | ID: mdl-34990375

ABSTRACT

In this article, we focus on a biobjective hot strip mill (HSM) scheduling problem arising in the steel industry. Besides the conventional objective regarding penalty costs, we also consider minimizing the total starting times of rolling operations in order to reduce the energy consumed by slab reheating. The problem is complicated by inevitable uncertainty in the rolling processing times, which renders deterministic scheduling models ineffective. To obtain robust production schedules with satisfactory performance under all possible conditions, we apply the robust optimization (RO) approach to model and solve the scheduling problem. First, an RO model and an equivalent mixed-integer linear programming model are constructed to describe the HSM scheduling problem with uncertainty. Then, we devise an improved Benders decomposition algorithm to solve the RO model and obtain exact optimal solutions. Next, to cope with large-sized instances, a multiobjective particle swarm optimization algorithm with an embedded local search strategy is proposed to handle the biobjective scheduling problem and find the set of Pareto-optimal solutions. Finally, we conduct extensive computational tests to verify the proposed algorithms. Results show that the exact algorithm is effective for relatively small instances, and the metaheuristic achieves satisfactory solution quality for both small- and large-sized instances of the problem.


Subjects
Algorithms, Uncertainty
13.
IEEE Trans Neural Netw Learn Syst ; 34(3): 1454-1464, 2023 Mar.
Article in English | MEDLINE | ID: mdl-34464267

ABSTRACT

Deep reinforcement learning is confronted with sampling inefficiency and poor task-migration capability. Meta-reinforcement learning (meta-RL) enables meta-learners to utilize task-solving skills trained on similar tasks and quickly adapt to new tasks. However, meta-RL methods insufficiently examine the relationship between the task-agnostic exploitation of data and the task-related knowledge introduced by the latent context, limiting their effectiveness and generalization ability. In this article, we develop an off-policy meta-RL algorithm that provides meta-learners with self-oriented cognition of how they adapt to the family of tasks. In our approach, we perform dynamic task-adaptiveness distillation to describe how the meta-learners adjust their exploration strategy during meta-training. Our approach also enables the meta-learners to balance the influence of task-agnostic self-oriented adaptation and task-related information through latent context reorganization. In our experiments, our method achieves 10%-20% higher asymptotic reward than probabilistic embeddings for actor-critic RL (PEARL).

14.
IEEE Trans Pattern Anal Mach Intell ; 45(4): 4605-4621, 2023 Apr.
Article in English | MEDLINE | ID: mdl-35939472

ABSTRACT

Spatial redundancy widely exists in visual recognition tasks, i.e., the discriminative features in an image or video frame usually correspond to only a subset of pixels, while the remaining regions are irrelevant to the task at hand. Therefore, static models that process all pixels with an equal amount of computation incur considerable redundancy in time and space consumption. In this paper, we formulate image recognition as a sequential coarse-to-fine feature learning process, mimicking the human visual system. Specifically, the proposed Glance and Focus Network (GFNet) first extracts a quick global representation of the input image at a low resolution, and then strategically attends to a series of salient (small) regions to learn finer features. The sequential process naturally facilitates adaptive inference at test time, as it can be terminated once the model is sufficiently confident in its prediction, avoiding further redundant computation. Notably, the problem of locating discriminative regions in our model is formulated as a reinforcement learning task and thus requires no manual annotations other than classification labels. GFNet is general and flexible, as it is compatible with any off-the-shelf backbone model (such as MobileNets, EfficientNets, and TSM), which can be conveniently deployed as the feature extractor. Extensive experiments on a variety of image classification and video recognition tasks with various backbone models demonstrate the remarkable efficiency of our method. For example, it reduces the average latency of the highly efficient MobileNet-V3 on an iPhone XS Max by 1.3x without sacrificing accuracy. Code and pre-trained models are available at https://github.com/blackfeather-wang/GFNet-Pytorch.
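The confidence-based early termination described above can be sketched generically. This is not GFNet itself (there is no glance/focus policy here): a list of per-stage classifier logits stands in for the sequential predictions, and inference stops at the first stage whose top-class probability clears a threshold.

```python
import numpy as np

def softmax(z):
    z = z - z.max()           # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def adaptive_inference(stage_logits, threshold=0.9):
    """Return (prediction, stages_used): stop at the first stage whose
    top-class probability clears the confidence threshold; the final
    stage always answers if no earlier one does."""
    for k, logits in enumerate(stage_logits, start=1):
        probs = softmax(logits)
        if probs.max() >= threshold or k == len(stage_logits):
            return int(np.argmax(probs)), k

# Three classifier stages; the second is already confident enough.
stage_logits = [
    np.array([1.0, 0.9, 0.8]),   # glance: nearly uniform, not confident
    np.array([5.0, 0.0, 0.0]),   # first focus step: confident, exit here
    np.array([8.0, 0.0, 0.0]),   # never reached for this input
]
pred, stages_used = adaptive_inference(stage_logits)
```

The threshold directly trades accuracy for latency: lowering it makes more inputs exit at the cheap glance stage, which is how GFNet-style models realize their average-latency savings.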

15.
IEEE Trans Neural Netw Learn Syst ; 34(8): 4033-4046, 2023 Aug.
Article in English | MEDLINE | ID: mdl-34739382

ABSTRACT

Meta-reinforcement learning (meta-RL) is a promising technique for fast task adaptation that leverages prior knowledge from previous tasks. Recently, context-based meta-RL has been proposed to improve data efficiency by applying a principled framework that divides the learning procedure into task inference and task execution. However, task information is not adequately leveraged in this approach, leading to inefficient exploration. To address this problem, we propose a novel context-based meta-RL framework with an improved exploration mechanism. For the exploration-and-execution problem in context-based meta-RL, we propose a novel objective that employs two exploration terms to encourage better exploration in the action and task embedding spaces, respectively. The first term encourages diverse task inference, while the second, named action information, serves to share or hide task information in different exploration stages. We divide the meta-training procedure into task-independent and task-relevant exploration stages according to the use of action information. By decoupling task inference from task execution and proposing respective optimization objectives for the two exploration stages, we can efficiently learn the policy and task inference networks. We compare our algorithm with several popular meta-RL methods on MuJoCo benchmarks with both dense and sparse reward settings. The empirical results show that our method significantly outperforms the baselines in terms of sample efficiency and task performance.

16.
Neural Netw ; 154: 303-309, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35926291

ABSTRACT

Finite-time stability and stabilization of state-dependent delayed systems are studied in this paper. Unlike discrete delays and time-dependent delays, which can be well estimated over time, state-dependent delays are usually hard to estimate, especially when the states are unknown or unmeasurable. To guarantee the stability of state-dependent delayed systems in the finite-time framework, a Razumikhin-type inequality is used, from which estimates of the settling time and the region of attraction are derived. Moreover, the relationship between the variation speed of the state-dependent delays and the size of the region of attraction is established. Then, as an application of the theoretical results, finite-time stabilization is studied for a set of nonlinear coupled neural networks with state-dependent transmission delay, where the design of memoryless finite-time controllers is addressed. Two numerical examples are given to show the effectiveness of the proposed results.


Subjects
Algorithms, Neural Networks (Computer), Time
17.
Sci Rep ; 12(1): 11968, 2022 Jul 13.
Article in English | MEDLINE | ID: mdl-35831628

ABSTRACT

Research on deep learning-based change detection (CD) methods has recently become a hot topic. In particular, feature pyramid networks (FPNs) are widely used in CD tasks to gradually fuse semantic features. However, existing FPN-based CD methods do not correctly detect the complete change region and cannot accurately locate its boundaries. To solve these problems, a new Multi-Scale Feature Progressive Fusion Network (MFPF-Net) is proposed, which consists of three innovative modules: the Layer Feature Fusion Module (LFFM), the Multi-Scale Feature Aggregation Module (MSFA), and the Multi-Scale Feature Distribution Module (MSFD). Specifically, we first concatenate the features of each layer extracted from the bi-temporal images with their difference maps; the resulting change maps fuse richer semantic information while effectively representing change regions. Then, the change maps of each layer are directly aggregated, which improves the communication and fusion of feature maps in CD while avoiding the interference of indirect information. Finally, the aggregated feature maps are layered again by pooling and convolution operations, and a pyramid-structured feature fusion strategy, with layers fused from low to high, is used to obtain richer contextual information, so that each layer of the layered feature maps carries both its original semantic information and the semantic features of other layers. We conducted comprehensive experiments on three publicly available benchmark datasets, CDD, LEVIR-CD, and WHU-CD, to verify the effectiveness of the method, and the experimental results show that it outperforms the comparison methods.

18.
Sci Rep ; 12(1): 7303, 2022 May 04.
Article in English | MEDLINE | ID: mdl-35508508

ABSTRACT

Punctuality in steel-making scheduling is important for saving steel production costs, but the processing time of the pretreatment process, which connects the iron- and steel-making stages, is usually uncertain. This paper presents a distributionally robust iron-steel allocation (DRISA) model to obtain a robust scheduling plan, where the distribution of the pretreatment time vector is assumed to belong to an ambiguity set containing all distributions with the given first and second moments. The model minimizes the production objective by determining the iron-steel allocation and the completion time of each charge, while the constraints must hold with a certain probability under the worst-case distribution. To solve large-scale problems efficiently, a variable neighborhood algorithm is developed to obtain a near-optimal solution in a short time. Experiments based on actual production data demonstrate its efficiency. The results also show the robustness of the DRISA model, i.e., the adjustment and delay of the robust schedule derived from the DRISA model are smaller than those of the nominal one.

19.
IEEE Trans Pattern Anal Mach Intell ; 44(11): 7436-7456, 2022 11.
Article in English | MEDLINE | ID: mdl-34613907

ABSTRACT

Dynamic neural networks are an emerging research topic in deep learning. Compared to static models, which have fixed computational graphs and parameters at the inference stage, dynamic networks can adapt their structures or parameters to different inputs, leading to notable advantages in terms of accuracy, computational efficiency, and adaptiveness. In this survey, we comprehensively review this rapidly developing area by dividing dynamic networks into three main categories: 1) sample-wise dynamic models that process each sample with data-dependent architectures or parameters; 2) spatial-wise dynamic networks that conduct adaptive computation with respect to different spatial locations of image data; and 3) temporal-wise dynamic models that perform adaptive inference along the temporal dimension for sequential data such as videos and texts. The important research problems of dynamic networks, e.g., architecture design, decision-making schemes, optimization techniques, and applications, are reviewed systematically. Finally, we discuss the open problems in this field together with interesting future research directions.


Subjects
Algorithms, Neural Networks (Computer)
20.
IEEE Trans Pattern Anal Mach Intell ; 44(12): 10222-10235, 2022 12.
Article in English | MEDLINE | ID: mdl-34882545

ABSTRACT

Deep reinforcement learning (RL) agents are becoming increasingly proficient at a range of complex control tasks. However, an agent's behavior is usually difficult to interpret due to the introduction of black-box functions, making it difficult to earn users' trust. Although there have been some interesting interpretation methods for vision-based RL, most of them cannot uncover temporal causal information, raising questions about their reliability. To address this problem, we present a temporal-spatial causal interpretation (TSCI) model for understanding an agent's long-term behavior, which is essential for sequential decision-making. The TSCI model builds on a formulation of temporal causality that reflects the temporal causal relations between the sequential observations and decisions of an RL agent. A separate causal discovery network is then employed to identify temporal-spatial causal features, which are constrained to satisfy the temporal causality. The TSCI model is applicable to recurrent agents and, once trained, can be used to discover causal features with high efficiency. The empirical results show that the TSCI model can produce high-resolution and sharp attention masks to highlight task-relevant temporal-spatial information that constitutes most of the evidence for how vision-based RL agents make sequential decisions. In addition, we further demonstrate that our method is able to provide valuable causal interpretations for vision-based RL agents from the temporal perspective.


Subjects
Algorithms, Reinforcement (Psychology), Reproducibility of Results, Attention, Theoretical Models