Results 1 - 20 of 39
1.
Diagnostics (Basel) ; 13(12)2023 Jun 09.
Article in English | MEDLINE | ID: mdl-37370905

ABSTRACT

During medical image analysis, it is often useful to align (or 'normalize') a given image of a given body part to a representative standard (or 'template') of that body part. The impact that brain templates have had on the analysis of brain images highlights the importance of templates in general. However, templates for human hands do not exist. Image normalization is especially important for hand images because hands, by design, readily change shape during various tasks. Here we report the construction of an anatomical template for healthy adult human hands. To do this, we used 27 anatomically representative T1-weighted magnetic resonance (MR) images of either hand from 21 demographically representative healthy adult subjects (13 females and 8 males). We used the open-source, cross-platform ANTs (Advanced Normalization Tools) medical image analysis software framework to preprocess the MR images. The template was constructed using the ANTs standard multivariate template construction workflow. The resulting template image preserved all the essential anatomical features of the hand, including all the individual bones, muscles, tendons, and ligaments, as well as the main branches of the median nerve and radial, ulnar, and palmar metacarpal arteries. Furthermore, the image quality of the template was significantly higher than that of the underlying individual hand images, as measured by two independent canonical metrics of image quality.
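The abstract does not name the two image-quality metrics used; signal-to-noise ratio (SNR) is one canonical choice. The sketch below, using entirely hypothetical intensity values, illustrates why a template built by averaging registered images tends to score higher on such a metric than any individual scan (background noise partially cancels):

```python
from statistics import mean, stdev

def snr(signal_roi, background_roi):
    """Signal-to-noise ratio: mean intensity of a tissue region of interest
    divided by the standard deviation of a background (air) region."""
    return mean(signal_roi) / stdev(background_roi)

# Toy intensity samples (arbitrary units, hypothetical): the template has
# similar tissue intensity but much less background noise.
subject_scan = {"signal": [410, 395, 402, 388, 405], "background": [12, 3, 18, 7, 15]}
template     = {"signal": [401, 399, 400, 398, 402], "background": [9, 8, 11, 10, 9]}

snr_subject  = snr(subject_scan["signal"], subject_scan["background"])
snr_template = snr(template["signal"], template["background"])
```

This is only an illustration of the metric, not the study's actual measurement pipeline.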

2.
Front Psychol ; 14: 1132168, 2023.
Article in English | MEDLINE | ID: mdl-37063564

ABSTRACT

In real life, we often have to make judgments under uncertainty. One such judgment task is estimating the probability of a given event based on uncertain evidence for the event, such as estimating the chances of actual fire when the fire alarm goes off. On the one hand, previous studies have shown that human subjects often significantly misestimate the probability in such cases. On the other hand, these studies have offered divergent explanations as to the exact causes of these judgment errors (or, synonymously, biases). For instance, different studies have attributed the errors to the neglect (or underweighting) of the prevalence (or base rate) of the given event, or the overweighting of the evidence for the individual event ('individuating information'), etc. However, whether or to what extent any such explanation can fully account for the observed errors remains unclear. To help fill this gap, we studied the probability estimation performance of non-professional subjects under four different real-world problem scenarios: (i) estimating the probability of cancer in a mammogram given the relevant evidence from a computer-aided cancer detection system, (ii) estimating the probability of drunkenness based on breathalyzer evidence, and (iii & iv) estimating the probability of an enemy sniper based on two different sets of evidence from a drone reconnaissance system. In each case, we quantitatively characterized the contributions of the various potential explanatory variables to the subjects' probability judgments. We found that while the various explanatory variables together accounted for about 30 to 45% of the overall variance of the subjects' responses depending on the problem scenario, no single factor was sufficient to account for more than 53% of the explainable variance (or about 16 to 24% of the overall variance), let alone all of it. Further analyses of the explained variance revealed the surprising fact that no single factor accounted for significantly more than its 'fair share' of the variance. Taken together, our results demonstrate quantitatively that it is statistically untenable to attribute the errors of probabilistic judgment to any single cause, including base rate neglect. A more nuanced and unifying explanation would be that the actual biases reflect a weighted combination of multiple contributing factors, the exact mix of which depends on the particular problem scenario.
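The variance partitioning described above can be illustrated, in greatly simplified form, by computing each candidate factor's squared correlation with the subjects' estimates. The data and factor names below are hypothetical, and marginal r² is only a stand-in for the study's actual analysis:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

# Hypothetical data: subjects' probability estimates and three candidate
# explanatory variables (base rate, individuating evidence, anchor value).
estimates = [0.30, 0.55, 0.42, 0.70, 0.25, 0.60]
factors = {
    "base_rate":     [0.10, 0.40, 0.20, 0.50, 0.05, 0.45],
    "individuating": [0.60, 0.70, 0.80, 0.90, 0.50, 0.65],
    "anchor":        [0.20, 0.30, 0.50, 0.60, 0.40, 0.55],
}

# Share of response variance each factor explains on its own.
r2_by_factor = {name: pearson_r(vals, estimates) ** 2 for name, vals in factors.items()}
```

With real data, comparing these marginal shares against the total explainable variance is what reveals whether any single factor can account for the errors.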

3.
Vision (Basel) ; 6(3)2022 Aug 10.
Article in English | MEDLINE | ID: mdl-35997380

ABSTRACT

When searching a visual image that contains multiple target objects of interest, human subjects often show a satisfaction of search (SOS) effect, whereby if the subjects find one target, they are less likely to find additional targets in the image. Reducing SOS or, equivalently, subsequent search miss (SSM), is of great significance in many real-world situations where it is of paramount importance to find all targets in a given image, not just one. However, studies have shown that even highly trained and experienced subjects, such as expert radiologists, are susceptible to SOS. Here, using the detection of camouflaged objects (or camouflage-breaking) as an illustrative case, we demonstrate that when naïve subjects are trained to detect camouflaged objects more effectively, it has the side effect of reducing subjects' SOS. We tested subjects in the SOS task before and after they were trained in camouflage-breaking. During SOS testing, subjects viewed naturalistic scenes that contained zero, one, or two targets, depending on the image. As expected, before camouflage-training, subjects showed a strong SOS effect, whereby if they had found a target with relatively high visual saliency in a given image, they were less likely to have also found a lower-saliency target when one existed in the image. Subjects were then trained in the camouflage-breaking task to criterion using non-SOS images, i.e., camouflage images that contained zero or one target. Surprisingly, the trained subjects no longer showed significant levels of SOS. This reduction was specific to the particular background texture in which the subjects received camouflage training; subjects continued to show significant SOS when tested using a different background texture in which they did not receive camouflage training. A separate experiment showed that the reduction in SOS was not attributable to non-specific exposure or practice effects.
Together, our results demonstrate that perceptual expertise can, in principle, reduce SOS, even when the perceptual training does not specifically target SOS reduction.

4.
Cogn Res Princ Implic ; 7(1): 52, 2022 06 20.
Article in English | MEDLINE | ID: mdl-35723763

ABSTRACT

Many studies have shown that using a computer-aided detection (CAD) system does not significantly improve diagnostic accuracy in radiology, possibly because radiologists fail to interpret the CAD results properly. We tested this possibility using screening mammography as an illustrative example. We carried out two experiments, one using 28 practicing radiologists, and a second one using 25 non-professional subjects. During each trial, subjects were shown the following four pieces of information necessary for evaluating the actual probability of cancer in a given unseen mammogram: the binary decision of the CAD system as to whether the mammogram was positive for cancer, the true-positive and false-positive rates of the system, and the prevalence of breast cancer in the relevant patient population. Based only on this information, the subjects had to estimate the probability that the unseen mammogram in question was positive for cancer. Additionally, the non-professional subjects also had to decide, based on the same information, whether to recall the patients for additional testing. Both groups of subjects similarly (and significantly) overestimated the cancer probability regardless of the categorical CAD decision, suggesting that this effect is not peculiar to either group. The misestimations were not fully attributable to causes well-known in other contexts, such as base rate neglect or inverse fallacy. Non-professional subjects tended to recall the patients at high rates, even when the actual probability of cancer was at or near zero. Moreover, the recall rates closely reflected the subjects' estimations of cancer probability. Together, our results show that subjects interpret CAD system output poorly when only the probabilistic information about the underlying decision parameters is available to them.
Our results also highlight the need for making the output of CAD systems more readily interpretable, and for providing training and assistance to radiologists in evaluating the output.
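The normatively correct estimate in the task above follows from applying Bayes' rule to the four pieces of information the subjects were given. A minimal sketch, with illustrative numbers that are not taken from the study:

```python
def posterior_cancer(prior, tpr, fpr, cad_positive=True):
    """Bayes' rule: probability of cancer given the CAD system's binary call,
    the system's true-positive rate (tpr), false-positive rate (fpr), and
    the prevalence of cancer in the patient population (prior)."""
    if cad_positive:
        num = prior * tpr
        den = prior * tpr + (1 - prior) * fpr
    else:
        num = prior * (1 - tpr)
        den = prior * (1 - tpr) + (1 - prior) * (1 - fpr)
    return num / den

# Illustrative values: 1% prevalence, 80% TPR, 10% FPR.
p_pos = posterior_cancer(prior=0.01, tpr=0.80, fpr=0.10)  # ≈ 0.075
```

Even after a positive CAD call, the posterior here is only about 7.5%, which illustrates how large the overestimation can be when such information is interpreted intuitively.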


Subjects
Breast Neoplasms , Mammography , Breast Neoplasms/diagnostic imaging , Computers , Diagnosis, Computer-Assisted/methods , Early Detection of Cancer , Female , Humans , Mammography/methods , Radiologists , Sensitivity and Specificity , Technology
5.
Front Neurosci ; 16: 745269, 2022.
Article in English | MEDLINE | ID: mdl-35669491

ABSTRACT

When making decisions under uncertainty, human subjects do not always act as rational decision makers, but often resort to one or more mental "shortcuts", or heuristics, to arrive at a decision. How do such "top-down" processes affect real-world decisions that must take into account empirical, "bottom-up" sensory evidence? Here we use recognition of camouflaged objects by expert viewers as an exemplar case to demonstrate that the effect of heuristics can be so strong as to override the empirical evidence in favor of heuristic information, even though the latter is random. We provided the viewers a random number that we told them was a drone reconnaissance system's estimate of the probability that the visual image they were about to see contained a camouflaged target. We then showed them the image. We found that the subjects' own estimates of the probability of the target in the image reflected the random information they were provided, and ignored the actual evidence in the image. However, when the heuristic information was not provided, the same subjects were highly successful in finding the target in the same set of images, indicating that the effect was solely attributable to the availability of heuristic information. Two additional experiments confirmed that this effect was not idiosyncratic to the camouflage images, the visual search task, or the subjects' prior training or expertise. Together, these results demonstrate a novel aspect of the interaction between heuristics and sensory information during real-world decision making, where the former can be strong enough to veto the latter. This 'heuristic vetoing' is distinct from the vetoing of sensory information that occurs in certain visual illusions.

6.
Diagnostics (Basel) ; 12(1)2022 Jan 04.
Article in English | MEDLINE | ID: mdl-35054272

ABSTRACT

When making decisions under uncertainty, people in all walks of life, including highly trained medical professionals, tend to resort to using 'mental shortcuts', or heuristics. Anchoring-and-adjustment (AAA) is a well-known heuristic in which subjects reach a judgment by starting from an initial internal judgment ('anchored position') based on available external information ('anchoring information') and adjusting it until they are satisfied. We studied the effects of the AAA heuristic during diagnostic decision-making in mammography. We provided practicing radiologists (N = 27 across two studies) a random number that we told them was a previous radiologist's estimate of the probability that a mammogram they were about to see was positive for breast cancer. We then showed them the actual mammogram. We found that the radiologists' own estimates of cancer in the mammogram reflected the random information they were provided and ignored the actual evidence in the mammogram. However, when the heuristic information was not provided, the same radiologists detected breast cancer in the same set of mammograms highly accurately, indicating that the effect was solely attributable to the availability of heuristic information. Thus, the effects of the AAA heuristic can sometimes be so strong as to override the actual clinical evidence in diagnostic tasks.
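A common toy formalization of anchoring-and-adjustment models the final judgment as a partial move from the anchor toward the evidence-based value; the adjustment fraction below is hypothetical, not a parameter estimated in the study:

```python
def aaa_estimate(anchor, evidence, adjustment=0.4):
    """Anchoring-and-adjustment sketch: start at the anchor and move only a
    fraction (`adjustment`) of the way toward the evidence-based value.
    adjustment=1.0 means the anchor is fully discounted; 0.0 means the
    evidence is ignored entirely."""
    return anchor + adjustment * (evidence - anchor)

# The mammogram's evidence supports ~5% probability of cancer, but a
# (random) anchor of 70% was supplied first: the estimate lands at 44%.
est = aaa_estimate(anchor=0.70, evidence=0.05)
```

In this toy model, the study's finding corresponds to an adjustment fraction near zero: the final estimates tracked the random anchor rather than the image evidence.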

7.
Cogn Res Princ Implic ; 6(1): 27, 2021 04 06.
Article in English | MEDLINE | ID: mdl-33825054

ABSTRACT

Camouflage-breaking is a special case of visual search where an object of interest, or target, can be hard to distinguish from the background even when in plain view. We have previously shown that naive, non-professional subjects can be trained using a deep learning paradigm to accurately perform a camouflage-breaking task in which they report whether or not a given camouflage scene contains a target. But it remains unclear whether such expert subjects can actually detect the target in this task, or just vaguely sense that the two classes of images are somehow different, without being able to find the target per se. Here, we show that when subjects break camouflage, they can also localize the camouflaged target accurately, even though they had received no specific training in localizing the target. The localization was significantly accurate when the subjects viewed the scene as briefly as 50 ms, but more so when the subjects were able to freely view the scenes. The accuracy and precision of target localization by expert subjects in the camouflage-breaking task were statistically indistinguishable from the accuracy and precision of target localization by naive subjects during a conventional visual search where the target 'pops out', i.e., is readily visible to the untrained eye. Together, these results indicate that when expert camouflage-breakers detect a camouflaged target, they can also localize it accurately.


Subjects
Pattern Recognition, Visual , Humans
8.
J Med Imaging (Bellingham) ; 7(2): 022410, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32042860

ABSTRACT

The scientific, clinical, and pedagogical significance of devising methodologies to train nonprofessional subjects to recognize diagnostic visual patterns in medical images has been broadly recognized. However, systematic approaches to doing so remain poorly established. Using mammography as an exemplar case, we use a series of experiments to demonstrate that deep learning (DL) techniques can, in principle, be used to train naïve subjects to reliably detect certain diagnostic visual patterns of cancer in medical images. In the main experiment, subjects were required to learn to detect statistical visual patterns diagnostic of cancer in mammograms using only the mammograms and feedback provided following the subjects' response. We found not only that the subjects learned to perform the task at statistically significant levels, but also that their eye movements related to image scrutiny changed in a learning-dependent fashion. Two additional, smaller exploratory experiments suggested that allowing subjects to re-examine the mammogram in light of various items of diagnostic information may help further improve DL of the diagnostic patterns. Finally, a fourth small, exploratory experiment suggested that the image information learned was similar across subjects. Together, these results prove the principle that DL methodologies can be used to train nonprofessional subjects to reliably perform those aspects of medical image perception tasks that depend on visual pattern recognition expertise.

10.
Front Young Minds ; 7: 2019 Jan.
Article in English | MEDLINE | ID: mdl-32944570

ABSTRACT

We have all experienced the frustration of looking for something we want, only to find a seemingly endless series of things we do not want. This process of looking for an object of interest is called visual search. We perform visual search all the time in everyday life, because the objects we want are almost always surrounded by many other objects. But, in some cases, it takes special training to find things, such as when searching for cancers in X-rays, weapons or explosives in airport luggage, or an enemy sniper hidden in the bushes. Understanding how we search for, and find, objects we are looking for is crucial to understanding how ordinary people and experts alike operate in the real world. While much remains to be discovered, what we have learned so far offers a fascinating window into how we see.

11.
Front Neuroinform ; 12: 82, 2018.
Article in English | MEDLINE | ID: mdl-30515089

ABSTRACT

Making clinical decisions based on medical images is fundamentally an exercise in statistical decision-making. This is because in this case, the decision-maker must distinguish image features that are clinically diagnostic (i.e., signal) from a large number of non-diagnostic features (i.e., noise). To perform this task, the decision-maker must have learned the underlying statistical distributions of the signal and noise to begin with. The same is true for machine learning algorithms that perform a given diagnostic task. In order to train and test human experts or expert machine systems in any diagnostic or analytical task, it is advisable to use large sets of images, so as to capture the underlying statistical distributions adequately. Large numbers of images are also useful in clinical and scientific research about the underlying diagnostic process, which remains poorly understood. Unfortunately, it is often difficult to obtain medical images of given specific descriptions in sufficiently large numbers. This represents a significant barrier to progress in the arenas of clinical care, education, and research. Here we describe a novel methodology that helps overcome this barrier. This method leverages the burgeoning technologies of deep learning (DL) and deep synthesis (DS) to synthesize medical images de novo. We provide a proof-of-principle of this approach using mammograms as an illustrative case. During the initial, prerequisite DL phase of the study, we trained a publicly available deep learning neural network (DNN), using open-sourced, radiologically vetted mammograms as labeled examples. During the subsequent DS phase of the study, the fully trained DNN was made to synthesize, de novo, images that capture the image statistics of a given input image. The resulting images indicated that our DNN was able to faithfully capture the image statistics of visually diverse sets of mammograms. We also briefly outline rigorous psychophysical testing methods to measure the extent to which synthesized mammograms were sufficiently like their original counterparts to human experts. These tests revealed that mammography experts failed to distinguish synthesized mammograms from their original counterparts at a statistically significant level, suggesting that the synthesized images were sufficiently realistic. Taken together, these results demonstrate that deep synthesis has the potential to be impactful in all fields in which medical images play a key role, most notably in radiology and pathology.

12.
Front Neurosci ; 12: 670, 2018.
Article in English | MEDLINE | ID: mdl-30369862

ABSTRACT

In everyday life, we rely on human experts to make a variety of complex decisions, such as medical diagnoses. These decisions are typically made through some form of weakly guided learning, a form of learning in which decision expertise is gained through labeled examples rather than explicit instructions. Expert decisions can significantly affect people other than the decision-maker (for example, teammates, clients, or patients), but may seem cryptic and mysterious to them. It is therefore desirable for the decision-maker to explain the rationale behind these decisions to others. This, however, can be difficult to do. Often, the expert has a "gut feeling" for what the correct decision is, but may have difficulty giving an objective set of criteria for arriving at it. Explainability of human expert decisions, i.e., the extent to which experts can make their decisions understandable to others, has not been studied systematically. Here, we characterize the explainability of human decision-making, using binary categorical decisions about visual objects as an illustrative example. We trained a group of "expert" subjects to categorize novel, naturalistic 3-D objects called "digital embryos" into one of two hitherto unknown categories, using a weakly guided learning paradigm. We then asked the expert subjects to provide a written explanation for each binary decision they made. These experiments generated several intriguing findings. First, the experts' explanations modestly improved the categorization performance of naïve users (paired t-tests, p < 0.05). Second, this improvement differed significantly between explanations. In particular, explanations that pointed to a spatially localized region of the object improved the user's performance much more than explanations that referred to global features. Third, neither experts nor naïve subjects were able to reliably predict the degree of improvement for a given explanation. Finally, significant bias effects were observed, where naïve subjects rated an explanation significantly higher when told it came from an expert user, compared to the rating of the same explanation when told it came from another non-expert, suggesting a variant of the Asch conformity effect. Together, our results characterize, for the first time, the various issues, both methodological and conceptual, underlying the explainability of human decisions.

13.
Compr Physiol ; 8(3): 903-953, 2018 06 18.
Article in English | MEDLINE | ID: mdl-29978891

ABSTRACT

The last three decades have seen major strides in our understanding of neural mechanisms of high-level vision, or visual cognition of the world around us. Vision has also served as a model system for the study of brain function. Several broad insights, as yet incomplete, have recently emerged. First, visual perception is best understood not as an end unto itself, but as a sensory process that subserves the animal's behavioral goal at hand. Visual perception is likely to be simply a side effect that reflects the readout of visual information processing that leads to behavior. Second, the brain is essentially a probabilistic computational system that produces behaviors by collectively evaluating, not necessarily consciously or always optimally, the available information about the outside world received from the senses, the behavioral goals, prior knowledge about the world, and possible risks and benefits of a given behavior. Vision plays a prominent role in the overall functioning of the brain, providing the lion's share of information about the outside world. Third, the visual system does not function in isolation, but rather interacts actively and reciprocally with other brain systems, including other sensory faculties. Finally, various regions of the visual system process information not in a strict hierarchical manner, but as parts of various dynamic brain-wide networks, collectively referred to as the "connectome." Thus, a full understanding of vision will ultimately entail understanding, in granular, quantitative detail, various aspects of dynamic brain networks that use visual sensory information to produce behavior under real-world conditions. © 2017 American Physiological Society. Compr Physiol 8:903-953, 2018.


Subjects
Nerve Net/physiology , Visual Perception/physiology , Animals , Brain/physiology , Cognition , Humans
14.
Front Psychol ; 5: 160, 2014.
Article in English | MEDLINE | ID: mdl-24624102

ABSTRACT

For scientific, clinical, and machine learning purposes alike, it is desirable to quantify the verbal reports of high-level visual percepts. Methods to do this simply do not exist at present. Here we propose a novel methodological principle to help fill this gap, and provide empirical evidence designed to serve as the initial "proof" of this principle. In the proposed method, subjects view images of real-world scenes and describe, in their own words, what they saw. The verbal description is independently evaluated by several evaluators. Each evaluator assigns a rank score to the subject's description of each visual object in each image using a novel ranking principle, which takes advantage of the well-known fact that semantic descriptions of real life objects and scenes can usually be rank-ordered. Thus, for instance, "animal," "dog," and "retriever" can be regarded as increasingly finer-level, and therefore higher ranking, descriptions of a given object. These numeric scores can preserve the richness of the original verbal description, and can be subsequently evaluated using conventional statistical procedures. We describe an exemplar implementation of this method and empirical data that show its feasibility. With appropriate future standardization and validation, this novel method can serve as an important tool to help quantify the subjective experience of the visual world. In addition to being a novel, potentially powerful testing tool, our method also represents, to our knowledge, the only available method for numerically representing verbal accounts of real-world experience. Given its minimal requirements, i.e., a verbal description and the ground truth that elicited the description, our method has a wide variety of potential real-world applications.
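The ranking principle above can be sketched with a toy hierarchy. The term-to-rank table below is hypothetical; a real implementation would use standardized, validated hierarchies and multiple independent evaluators:

```python
# Hypothetical rank hierarchy: finer-level descriptions earn higher scores,
# mirroring the "animal" < "dog" < "retriever" example from the abstract.
RANKS = {"animal": 1, "dog": 2, "retriever": 3}

def rank_score(description, ranks=RANKS):
    """Score a free-form description by the finest-level (highest-rank)
    term it contains; 0 if no known term matches."""
    words = description.lower().split()
    return max((ranks[w] for w in words if w in ranks), default=0)

s1 = rank_score("I saw an animal")       # 1 (coarsest level)
s2 = rank_score("a dog near the fence")  # 2
s3 = rank_score("a golden retriever")    # 3 (finest level)
```

The resulting numeric scores can then be fed into conventional statistical procedures, as the abstract describes.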

16.
J Vis Exp ; (69): e3358, 2012 Nov 02.
Article in English | MEDLINE | ID: mdl-23149420

ABSTRACT

In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties. Many innovative and useful methods currently exist for creating novel objects and object categories (also see refs. 7,8). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter, and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection. 
Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.


Subjects
Algorithms , Artificial Intelligence , Models, Theoretical , Perception , Imaging, Three-Dimensional , Principal Component Analysis
17.
Psychol Sci ; 23(11): 1395-403, 2012.
Article in English | MEDLINE | ID: mdl-23064405

ABSTRACT

How does the visual system recognize a camouflaged object? Obviously, the brain cannot afford to learn all possible camouflaged scenes or target objects. However, it may learn the general statistical properties of backgrounds of interest, which would enable it to break camouflage by comparing the statistics of a background with a target versus the statistics of the same background without a target. To determine whether the brain uses this strategy, we digitally created novel camouflaged scenes that had only the general statistical properties of the background in common. When subjects learned to break camouflage, their ability to detect a camouflaged target improved significantly not only for previously unseen instances of a camouflaged scene, but also for scenes that contained novel targets. Moreover, performance improved even for scenes that did not contain an actual target but had the statistical properties of backgrounds with a target. These results reveal that learning backgrounds is a powerful, versatile strategy by which the brain can learn to break camouflage.
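A minimal sketch of the statistics-of-backgrounds idea: learn simple summary statistics from target-free background patches, then flag patches whose statistics deviate. The intensity values and the mean-deviation test below are hypothetical simplifications of whatever statistics the visual system actually learns:

```python
from statistics import mean, stdev

def fit_background(patches):
    """Learn first-order statistics (mean and spread of patch means) from
    target-free background patches."""
    patch_means = [mean(p) for p in patches]
    return mean(patch_means), stdev(patch_means)

def looks_like_target(patch, bg_mean, bg_sd, k=3.0):
    """Flag a patch whose mean intensity deviates more than k background
    standard deviations from the learned background statistics."""
    return abs(mean(patch) - bg_mean) > k * bg_sd

# Hypothetical pixel intensities for four target-free background patches.
background_patches = [[100, 98, 102, 101], [99, 101, 100, 98],
                      [103, 97, 100, 99], [101, 99, 98, 102]]
bg_mean, bg_sd = fit_background(background_patches)

plain  = [100, 99, 101, 100]   # background only: statistics match
target = [100, 140, 135, 99]   # statistics perturbed by a hidden target
```

Note that this classifier never sees any target: it responds to any statistical deviation from the learned background, which parallels the finding that subjects responded to scenes that merely had the statistics of backgrounds with a target.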


Subjects
Learning/physiology , Pattern Recognition, Visual/physiology , Transfer (Psychology)/physiology , Female , Humans , Male , Visual Perception/physiology
18.
Article in English | MEDLINE | ID: mdl-22936910

ABSTRACT

Visual appearance of natural objects is profoundly affected by viewing conditions such as viewpoint and illumination. Human subjects can nevertheless compensate well for variations in these viewing conditions. The strategies that the visual system uses to accomplish this are largely unclear. Previous computational studies have suggested that in principle, certain types of object fragments (rather than whole objects) can be used for invariant recognition. However, whether the human visual system is actually capable of using this strategy remains unknown. Here, we show that human observers can achieve illumination invariance by using object fragments that carry the relevant information. To determine this, we used novel, but naturalistic, 3-D visual objects called "digital embryos." Using novel instances of whole embryos, not fragments, we trained subjects to recognize individual embryos across illuminations. We then tested the illumination-invariant object recognition performance of subjects using fragments. We found that the performance was strongly correlated with the mutual information (MI) of the fragments, provided that the MI value took variations in illumination into consideration. This correlation was not attributable to any systematic differences in task difficulty between different fragments. These results reveal two important principles of invariant object recognition. First, the subjects can achieve invariance at least in part by compensating for the changes in the appearance of small local features, rather than of whole objects. Second, the subjects do not always rely on generic or pre-existing invariance of features (i.e., features whose appearance remains largely unchanged by variations in illumination), and are capable of using learning to compensate for appearance changes when necessary. These psychophysical results closely fit the predictions of earlier computational studies of fragment-based invariant object recognition.
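Fragment informativeness in this line of work is quantified by mutual information. A minimal sketch of MI for a discrete joint distribution, with hypothetical fragment/object tables rather than the study's image-based computation:

```python
from math import log2

def mutual_information(joint):
    """Mutual information (in bits) between two discrete variables, given a
    joint probability table of the form {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# A fragment perfectly predictive of object identity carries 1 bit.
informative = {("frag_present", "obj_A"): 0.5,
               ("frag_absent",  "obj_B"): 0.5}
# A fragment independent of object identity carries 0 bits.
uninformative = {(f, o): 0.25
                 for f in ("frag_present", "frag_absent")
                 for o in ("obj_A", "obj_B")}
```

In the study's setting, the key point is that the joint table must pool fragment appearances across illuminations; only then does the MI value predict recognition performance.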

20.
Front Hum Neurosci ; 6: 170, 2012.
Article in English | MEDLINE | ID: mdl-22723774

ABSTRACT

Theoretical studies suggest that the visual system uses prior knowledge of visual objects to recognize them in visual clutter, and posit that the strategies for recognizing objects in clutter may differ depending on whether or not the object was learned in clutter to begin with. We tested this hypothesis using functional magnetic resonance imaging (fMRI) of human subjects. We trained subjects to recognize naturalistic, yet novel objects in strong or weak clutter. We then tested subjects' recognition performance for both sets of objects in strong clutter. We found many brain regions that were differentially responsive to objects during object recognition depending on whether they were learned in strong or weak clutter. In particular, the responses of the left fusiform gyrus (FG) reliably reflected, on a trial-to-trial basis, subjects' object recognition performance for objects learned in the presence of strong clutter. These results indicate that the visual system does not use a single, general-purpose mechanism to cope with clutter. Instead, there are two distinct spatial patterns of activation whose responses are attributable not to the visual context in which the objects were seen, but to the context in which the objects were learned.
