Attentional Bias in Human Category Learning: The Case of Deep Learning.
Hanson, Catherine; Caglar, Leyla Roskan; Hanson, Stephen José.
Affiliation
  • Hanson C; Rutgers Brain Imaging Center, Newark, NJ, United States.
  • Caglar LR; RUBIC and Psychology Department and Center for Molecular and Behavioral Neuroscience, Rutgers University-Newark, Newark, NJ, United States.
  • Hanson SJ; Rutgers Brain Imaging Center, Newark, NJ, United States.
Front Psychol ; 9: 374, 2018.
Article in En | MEDLINE | ID: mdl-29706907
ABSTRACT
Category learning performance is influenced by both the nature of the category's structure and the way category features are processed during learning. Shepard (1964, 1987) showed that stimuli can have structures with features that are statistically uncorrelated (separable) or statistically correlated (integral) within categories. Humans find it much easier to learn categories having separable features, especially when attention to only a subset of relevant features is required, and harder to learn categories having integral features, which require consideration of all available features and integration of the relevant features satisfying the category rule (Garner, 1974). In contrast to humans, a single-hidden-layer backpropagation (BP) neural network has been shown to learn both separable and integral categories equally easily, independent of the category rule (Kruschke, 1993). This "failure" to replicate human category performance appeared to be strong evidence that connectionist networks were incapable of modeling human attentional bias. We tested the presumed limitations of attentional bias in networks in two ways: (1) by having networks learn categories with exemplars that have high feature complexity, in contrast to the low-dimensional stimuli previously used, and (2) by investigating whether a Deep Learning (DL) network, which has demonstrated human-like performance in many different kinds of tasks (language translation, autonomous driving, etc.), would display human-like attentional bias during category learning. We report a number of interesting results. First, we replicated the failure of BP to differentially process integral and separable category structures when low-dimensional stimuli are used (Garner, 1974; Kruschke, 1993). Second, we show that using the same low-dimensional stimuli, DL, unlike BP but similar to humans, learns separable category structures more quickly than integral category structures. Third, we show that even BP can exhibit human-like learning differences between integral and separable category structures when high-dimensional stimuli (face exemplars) are used. We conclude, after visualizing the hidden-unit representations, that DL appears to extend initial learning through feature development, thereby reducing destructive feature competition: feature detectors are incrementally refined throughout later layers until a tipping point (in terms of error) is reached, resulting in rapid asymptotic learning.
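The single-hidden-layer BP setup discussed above can be sketched in a few lines. This is a minimal illustration only, not the simulations of Kruschke (1993) or of this paper: the network size, learning rate, epoch count, and the two-dimensional binary stimuli (a filtration-style rule using one relevant dimension vs. a condensation-style rule requiring both dimensions, here XOR) are illustrative assumptions chosen for brevity.

```python
import numpy as np

def train_bp(X, y, hidden=8, lr=0.5, epochs=5000, seed=0):
    """Train a single-hidden-layer backpropagation network with
    sigmoid units and squared error; return the final mean squared error."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1))
    b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)            # hidden-layer activations
        out = sig(h @ W2 + b2)          # network output
        err = out - y                   # output error
        # gradients for sigmoid units with squared-error loss
        d_out = err * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)
    return float(np.mean(err ** 2))

# Four stimuli on two binary feature dimensions.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
# Separable (filtration-style) rule: only dimension 1 is relevant.
y_sep = X[:, [0]]
# Integral (condensation-style) rule: both dimensions must be combined (XOR).
y_int = (X[:, [0]] != X[:, [1]]).astype(float)

mse_sep = train_bp(X, y_sep)
mse_int = train_bp(X, y_int)
print(f"separable MSE: {mse_sep:.4f}, integral MSE: {mse_int:.4f}")
```

Comparing the error trajectories (or final errors) of the two rules is the basic probe the text describes: for BP on low-dimensional stimuli, the two structures are learned comparably, whereas humans (and, per this paper, DL networks) find the separable rule easier.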
Full text: 1 Collection: 01-internacional Database: MEDLINE Language: En Journal: Front Psychol Year: 2018 Document type: Article Affiliation country: