1.
Learn Behav ; 41(3): 238-55, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23292506

ABSTRACT

Pigeons learned a series of reversals of a simultaneous red-green discrimination with a 6-s delay of reinforcement. The signal properties during the 6-s reinforcement delay were varied across blocks of reversals, such that the delay was either unsignaled (intertrial interval conditions during the delay) or signaled by illumination of the center key. Four different signal conditions were presented: (1) signals only after S+ responses, (2) signals only after S- responses, (3) differential signals after S+ versus S- responding, and (4) the same nondifferential signals after S+ and S- responses. (A zero-delay control condition was also included.) Learning reached a high level in the S+-only and differential-signal conditions but remained at a low level in the unsignaled, nondifferentially signaled, and S- signal conditions. Thus, a differential stimulus contingent on correct choices was necessary for proficient learning-to-learn, even though within-reversal learning occurred in all conditions. During the S+ and differential-signal conditions, improvement in learning continued to occur even after more than 240 reversals (more than 38,000 trials).


Subject(s)
Discrimination Learning , Reinforcement Schedule , Reversal Learning , Serial Learning , Animals , Columbidae , Reinforcement, Psychology , Time Factors
2.
PLoS Biol ; 5(2): e15, 2007 Feb.
Article in English | MEDLINE | ID: mdl-17227143

ABSTRACT

Haptic perception is an active process that provides an awareness of objects that are encountered as an organism scans its environment. In contrast to the sensation of touch produced by contact with an object, the perception of object location arises from the interpretation of tactile signals in the context of the changing configuration of the body. A discrete sensory representation and a low number of degrees of freedom in the motor plant make the ethologically prominent rat vibrissa system an ideal model for the study of the neuronal computations that underlie this perception. We found that rats with only a single vibrissa can combine touch and movement to distinguish the location of objects that vary in angle along the sweep of vibrissa motion. The patterns of this motion and of the corresponding behavioral responses show that rats can scan potential locations and decide which location contains a stimulus within 150 ms. This interval is consistent with just one to two whisk cycles and provides constraints on the underlying perceptual computation. Our data argue against strategies that do not require the integration of sensory and motor modalities. The ability to judge angular position with a single vibrissa thus connects previously described, motion-sensitive neurophysiological signals to perception in the behaving animal.


Subject(s)
Behavior, Animal/physiology , Motor Activity/physiology , Space Perception/physiology , Vibrissae/physiology , Animals , Female , Rats , Rats, Long-Evans , Time Factors
3.
Perspect Biol Med ; 53(1): 106-20, 2010.
Article in English | MEDLINE | ID: mdl-20173299

ABSTRACT

Evidence-based medicine views random-assignment clinical trials as the gold standard of evidence. Because patient populations are heterogeneous, large numbers of patients must be studied in order to achieve statistically significant results, but the means or medians of these large samples have weak predictive validity for individual patients. Further, the logic of random-assignment clinical trials allows only the inference that some subset of patients benefits from the treatment. Post-hoc analysis is therefore necessary to identify those patients. Otherwise, many patients may receive treatments that are useless and potentially harmful.


Subject(s)
Clinical Trials, Phase III as Topic , Data Interpretation, Statistical , Evidence-Based Medicine , Patient Selection , Randomized Controlled Trials as Topic , Bias , Humans , Sample Size , Treatment Outcome
4.
Behav Processes ; 69(2): 155-7; author reply 159-63, 2005 May 31.
Article in English | MEDLINE | ID: mdl-15845303

ABSTRACT

The concept of heuristics implies that the rules governing choice behavior may vary with ecological constraints. Behavior analysis, in contrast, seeks general principles that transcend specific situations. To the extent that this search is successful, the concept of heuristics is unlikely to play a significant role in the analysis of animal behavior.


Subject(s)
Algorithms , Decision Making , Learning , Models, Psychological , Animals , Behavior, Animal , Ecology , Humans
5.
J Exp Anal Behav ; 80(3): 261-72, 2003 Nov.
Article in English | MEDLINE | ID: mdl-14964707

ABSTRACT

Pigeons were trained on multiple schedules that provided concurrent reinforcement in each of two components. In Experiment 1, one component consisted of a variable-interval (VI) 40-s schedule presented with a VI 20-s schedule, and the other a VI 40-s schedule presented with a VI 80-s schedule. After extended training, probe tests measured preference between the stimuli associated with the two 40-s schedules. Probe tests replicated the results of Belke (1992) that showed preference for the 40-s schedule that had been paired with the 80-s schedule. In a second condition, the overall reinforcer rate provided by the two components was equated by adding a signaled VI schedule to the component with the lower reinforcer rate. Probe results were unchanged. In Experiment 2, pigeons were trained on alternating concurrent VI 30-s VI 60-s schedules. One schedule provided 2-s access to food and the other provided 6-s access. The larger reinforcer magnitude produced higher response rates and was preferred on probe trials. Rate of changeover responding, however, did not differ as a function of reinforcer magnitude. The present results demonstrate that preference on probe trials is not a simple reflection of the pattern of changeover behavior established during training.


Subject(s)
Appetitive Behavior , Arousal , Choice Behavior , Reinforcement Schedule , Time Perception , Animals , Columbidae , Cues , Psychomotor Performance , Transfer, Psychology
6.
J Exp Anal Behav ; 99(2): 179-88, 2013 Mar.
Article in English | MEDLINE | ID: mdl-23319434

ABSTRACT

Two alternative approaches describe determinants of responding to a stimulus temporally distant from primary reinforcement. One emphasizes the temporal relation of each stimulus to the primary reinforcer, with relative proximity of the stimulus determining response rate. A contrasting view emphasizes immediate consequences of responding to the stimulus, the key factor being the conditioned reinforcement value of those immediate consequences. To contrast these approaches, 4 pigeons were exposed to a two-component multiple schedule with three-link chain schedules in each component. Only middle-link stimuli differed between chains. Baseline reinforcement probabilities were 0.50 for both chains; during discrimination phases it was 1.0 for one chain and 0.0 for the other. During discrimination phases pigeons responded more to the reinforcement-correlated middle link than to the extinction-correlated middle link, demonstrating that responding was affected by the probability change. Terminal link responding was also higher in the reinforced chain, even though the terminal link stimulus was identical in both chains. Of greatest interest is initial link responding, which was temporally most distant from reinforcement. Initial link responding, necessarily equal in the two chains, was significantly higher during the 1.0/0.0 discrimination phases, even though overall reinforcement probability remained constant. For 3 of 4 birds, in fact, initial-link response rates were higher than terminal-link response rates, an outcome that can be ascribed only to the potent conditioned reinforcement properties of the middle-link stimulus during the discrimination phases. Results are incompatible with any account of chain behavior based solely on relative time to reinforcement.


Subject(s)
Reinforcement Schedule , Reinforcement, Psychology , Animals , Columbidae , Conditioning, Operant , Discrimination Learning , Extinction, Psychological , Time Factors
7.
Learn Behav ; 38(1): 96-102, 2010 Feb.
Article in English | MEDLINE | ID: mdl-20065353

ABSTRACT

Pigeons learned a series of reversals of a simultaneous red-green visual discrimination. Delay of reinforcement (0 vs. 2 sec) and intertrial interval (ITI; 4 vs. 40 sec) were varied across blocks of reversals. Learning was faster with 0-sec than with 2-sec delays for both ITI values, and faster with 4-sec ITIs than with 40-sec ITIs for both delays. Furthermore, improvement in learning across successive reversals was evident throughout the experiment, even after more than 120 reversals. The potent effects of small differences in reinforcement delay provide evidence for associative accounts and appear to be incompatible with accounts of choice that attempt to encompass the effects of temporal parameters in terms of animals' timing of temporal intervals.


Subject(s)
Discrimination Learning/physiology , Reversal Learning/physiology , Serial Learning/physiology , Analysis of Variance , Animals , Attention/physiology , Color Perception/physiology , Columbidae , Reinforcement Schedule , Reinforcement, Psychology , Time Factors
8.
J Exp Anal Behav ; 93(2): 147-55, 2010 Mar.
Article in English | MEDLINE | ID: mdl-20885807

ABSTRACT

Pigeons were presented with a concurrent-chains schedule in which the total time to primary reinforcement was equated for the two alternatives (VI 30 s VI 60 s vs. VI 60 s VI 30 s). In one set of conditions, the terminal links were signaled by the same stimulus, and in another set of conditions they were signaled by different stimuli. Choice was in favor of the shorter terminal link when the terminal links were differentially signaled but in favor of the shorter initial link (and longer terminal link) when the terminal links shared the same stimulus. Preference reversed regularly with reversals of the stimulus condition and was unrelated to the discrimination between the two terminal links during the nondifferential stimulus condition. The present results suggest that the relative value of the terminal-link stimuli and the relative rate of conditioned reinforcer presentation are important influences on choice behavior, and that models of conditioned reinforcement need to include both factors.


Subject(s)
Association Learning , Choice Behavior , Conditioning, Operant , Reinforcement Schedule , Animals , Columbidae , Neuropsychological Tests , Reaction Time
9.
Anim Learn Behav ; 30(1): 1-20, 2002 Feb.
Article in English | MEDLINE | ID: mdl-12017964

ABSTRACT

Behavioral contrast is defined as a change in response rate during a stimulus associated with a constant reinforcement schedule, in inverse relation to the rates of reinforcement in the surrounding stimulus conditions. Contrast has at least two functionally separable components: local contrast, which occurs after component transition, and molar contrast. Local contrast contributes to molar contrast under some conditions, but not generally. Molar contrast is due primarily to anticipatory contrast. However, anticipatory contrast with respect to response rate has been shown to be inversely related to stimulus preference, which challenges the widely held view that contrast effects reflect changes in stimulus value owing to the reinforcement context. More recent data demonstrate that the inverse relation between response rate and preference with respect to anticipatory contrast is due to Pavlovian contingencies embedded in anticipatory contrast procedures. When those contingencies are weakened, anticipatory contrast and stimulus preference are positively related, thus reaffirming the view that the reinforcing effectiveness of a constant schedule is inversely related to the value of the context of reinforcement in which it occurs. The underlying basis of how the context of reinforcement controls reinforcement value remains uncertain, although clear parallels exist between contrast and the effects of contingency in both Pavlovian and operant conditioning.


Subject(s)
Association Learning , Attention , Conditioning, Psychological , Reinforcement Schedule , Animals , Mental Recall , Motivation , Set, Psychology
10.
Behav Processes ; 62(1-3): 115-123, 2003 Apr 28.
Article in English | MEDLINE | ID: mdl-12729973

ABSTRACT

Recent theories of behavior have proposed that associative learning principles be replaced by a theoretical framework that assumes the animal has a veridical record of the temporal relations between events. I argue here that such a theory omits critical features of learned behavior: functional differences between different types of temporal relations, the critical nature of response-reinforcer delays, and the necessity of conditioned value as a theoretical construct.

11.
Psychol Sci ; 13(5): 454-9, 2002 Sep.
Article in English | MEDLINE | ID: mdl-12219813

ABSTRACT

Superconditioning is said to occur when learning an association between a conditioned stimulus (CS) and unconditioned stimulus (US) is facilitated by pairing the CS with the US in the presence of a previously established conditioned inhibitor. Previous demonstrations of superconditioning have been criticized because their control conditions have allowed alternative interpretations. Using a within-subjects autoshaping procedure, the present study unambiguously demonstrated superconditioning. The results support the view that superconditioning is the symmetric opposite of blocking.


Subject(s)
Association Learning , Conditioning, Classical , Inhibition, Psychological , Animals , Avoidance Learning , Color Perception , Columbidae , Discrimination Learning , Pattern Recognition, Visual
12.
Anim Learn Behav ; 30(1): 34-42, 2002 Feb.
Article in English | MEDLINE | ID: mdl-12017966

ABSTRACT

Pigeons were trained on a multiple schedule in which separate concurrent schedules were presented in the two components of the schedule. During one component, concurrent variable-interval 40-sec variable-interval 80-sec schedules operated. In the second component, concurrent variable-interval 40-sec variable-interval 20-sec schedules operated. After stable baseline performance was obtained in both components, extinction probe choice tests were presented to assess preference between the variable-interval 40-sec schedules from the two components. The variable-interval 40-sec schedule paired with the variable-interval 80-sec schedule was preferred over the variable-interval 40-sec schedule paired with the variable-interval 20-sec schedule. The subjects were also exposed to several resistance-to-change manipulations: (1) prefeeding prior to the experimental session, (2) a free-food schedule added to timeout periods separating components, and (3) extinction. The results indicated that preference and resistance to change do not necessarily covary.


Subject(s)
Choice Behavior , Motivation , Reinforcement Schedule , Animals , Association Learning , Columbidae