Results 1 - 6 of 6
1.
J Exp Anal Behav ; 120(3): 344-362, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37581958

ABSTRACT

We investigated the effects of differential and nondifferential reinforcers on divided control by compound-stimulus dimensions. Six pigeons responded in a delayed matching-to-sample procedure in which a blue or yellow sample stimulus flashed on/off at a fast or slow rate, and subjects reported its color or alternation frequency. The dimension to report was unsignaled (Phase 1) or signaled (Phase 2). Correct responses were reinforced with a probability of .70, and the probability of reinforcers for errors varied across conditions. Comparison choice depended on reinforcer ratios for correct and incorrect responding; as the frequency of error reinforcers according to a dimension increased, control (measured by log d) by that dimension decreased and control by the other dimension increased. Davison and Nevin's (1999) model described data when the dimension to report was unsignaled, whereas model fits were poorer when it was signaled, perhaps due to carryover between conditions. We are the first to test this quantitative model of divided control with reinforcers for errors and when the dimension to report is signaled; hence, further research is needed to establish the model's generality. We question whether divided stimulus control is dimensional and suggest it may instead reflect joint control by compound stimuli and reinforcer ratios.


Subject(s)
Discrimination Learning , Reinforcement, Psychology , Humans , Animals , Reinforcement Schedule , Probability , Columbidae
2.
J Exp Anal Behav ; 116(2): 182-207, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34223635

ABSTRACT

Behavioral flexibility has, in part, been defined by choice behavior changing as a function of changes in reinforcer payoffs. We examined whether the generalized matching law quantitatively described changes in choice behavior in zebrafish when relative reinforcer rates, delays/immediacy, and magnitudes changed between two alternatives across conditions. Choice was sensitive to each of the three reinforcer properties. Sensitivity estimates to changes in relative reinforcer rates were greater when two variable-interval schedules were arranged independently between alternatives (Experiment 1a) than when a single schedule pseudorandomly arranged reinforcers between alternatives (Experiment 1b). Sensitivity estimates for changes in relative reinforcer immediacy (Experiment 2) and magnitude (Experiment 3) were similar but lower than estimates for reinforcer rates. These differences in sensitivity estimates are consistent with studies examining other species, suggesting flexibility in zebrafish choice behavior in the face of changes in payoff as described by the generalized matching law.
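The generalized matching law invoked here relates log response ratios to log reinforcer ratios: log(B1/B2) = a·log(R1/R2) + log c, where the slope a is the sensitivity estimate the abstract reports and log c is bias. A sketch of estimating sensitivity by ordinary least squares over hypothetical condition data (the data below are illustrative and show undermatching, a < 1):

```python
import math

def fit_gml(behavior_ratios, reinforcer_ratios):
    """Least-squares fit of log(B1/B2) = a * log(R1/R2) + log c.
    Returns (sensitivity a, bias log_c)."""
    xs = [math.log10(r) for r in reinforcer_ratios]
    ys = [math.log10(b) for b in behavior_ratios]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    log_c = my - a * mx
    return a, log_c

# Hypothetical obtained response ratios across five conditions
r = [1 / 9, 1 / 3, 1, 3, 9]       # arranged reinforcer ratios
b = [0.25, 0.55, 1.0, 1.8, 4.0]   # obtained response ratios
a, log_c = fit_gml(b, r)           # a ~0.61: undermatching, little bias
```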


Subject(s)
Reinforcement, Psychology , Zebrafish , Animals , Choice Behavior , Columbidae , Reinforcement Schedule
3.
Behav Processes ; 138: 29-33, 2017 May.
Article in English | MEDLINE | ID: mdl-28216120

ABSTRACT

Greater rates of intermittent reinforcement in the presence of discriminative stimuli generally produce greater resistance to extinction, consistent with predictions of behavioral momentum theory. Other studies reveal more rapid extinction with higher rates of reinforcers - the partial reinforcement extinction effect. Further, repeated extinction often produces more rapid decreases in operant responding due to learning a discrimination between training and extinction contingencies. The present study repeatedly examined extinction following training with different rates of intermittent reinforcement in a multiple schedule. We assessed whether repeated extinction would reverse the pattern of greater resistance to extinction with greater reinforcer rates. Counter to this prediction, resistance to extinction remained greater with the greater reinforcer rate across twelve assessments of training followed by six successive sessions of extinction. Moreover, patterns of responding during extinction resembled those observed during satiation tests, which should not alter discrimination processes with repeated testing. These findings join others suggesting operant responding in extinction can be durable across repeated tests.
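Resistance to change in the behavioral momentum literature is commonly expressed as the log proportion of baseline response rate remaining in each schedule component; the abstract does not specify its measure, so this is an assumed, standard formulation with hypothetical rates:

```python
import math

def log_prop_baseline(test_rate, baseline_rate):
    """Log proportion of baseline response rate, a standard
    resistance-to-change measure in behavioral momentum research.
    Values nearer 0 indicate greater resistance (less disruption)."""
    return math.log10(test_rate / baseline_rate)

# Hypothetical rates (responses/min): the rich component retains a
# larger share of its baseline rate during extinction than the lean one
rich = log_prop_baseline(30.0, 60.0)   # log10(0.50) ~ -0.30
lean = log_prop_baseline(10.0, 40.0)   # log10(0.25) ~ -0.60
```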


Subject(s)
Conditioning, Operant , Extinction, Psychological , Reinforcement, Psychology , Animals , Columbidae , Male , Reinforcement Schedule
4.
J Exp Anal Behav ; 104(1): 7-19, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25989016

ABSTRACT

We investigated why violations to the constant-ratio rule, an assumption of the generalized matching law, occur in procedures that arrange frequent changes to reinforcer ratios. Our investigation produced steady-state data and compared them with data from equivalent, frequently changing procedures. Six pigeons responded in a four-alternative concurrent-schedule experiment with an arranged reinforcer-rate ratio of 27:9:3:1. The same four variable-interval schedules were used in every condition, for 50 sessions, and the physical location of each schedule was changed across conditions. The experiment was a steady-state version of a frequently changing procedure in which the locations of four VI schedules were changed every 10 reinforcers. We found that subjects' responding was consistent with the constant-ratio rule in the steady-state procedure. Additionally, local analyses showed that preference after reinforcement was towards the alternative that was likely to produce the next reinforcer, instead of being towards the just-reinforced alternative as in frequently changing procedures. This suggests that the effect of a reinforcer on preference is fundamentally different in rapidly changing and steady-state environments. Comparing this finding to the existing literature suggests that choice is more influenced by reinforcer-generated signals when the reinforcement contingencies often change.
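The constant-ratio rule tested here holds when the response ratio between any two alternatives is unchanged by the presence or absence of other alternatives. A hypothetical check under strict matching to the arranged 27:9:3:1 reinforcer ratio (counts are illustrative, not the study's data):

```python
# Hypothetical response counts under strict matching to 27:9:3:1
responses = {"A": 2700, "B": 900, "C": 300, "D": 100}

def relative_choice(counts, pair):
    """Proportion of responses to the first member of `pair`, counting
    only responses to the two alternatives in that pair. Under the
    constant-ratio rule this proportion should not depend on which
    other alternatives are present in the situation."""
    r1, r2 = counts[pair[0]], counts[pair[1]]
    return r1 / (r1 + r2)

# Each adjacent pair differs by a 3:1 reinforcer ratio, so strict
# matching implies the same pairwise proportion, 0.75, for each
p_ab = relative_choice(responses, ("A", "B"))  # 0.75
p_bc = relative_choice(responses, ("B", "C"))  # 0.75
```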


Subject(s)
Choice Behavior , Reinforcement Schedule , Animals , Columbidae , Conditioning, Operant , Reinforcement, Psychology
5.
J Exp Anal Behav ; 97(1): 51-70, 2012 Jan.
Article in English | MEDLINE | ID: mdl-22287804

ABSTRACT

Four pigeons were trained in a series of two-component multiple schedules. Reinforcers were scheduled with random-interval schedules. The ratio of arranged reinforcer rates in the two components was varied over 4 log units, a much wider range than previously studied. When performance appeared stable, prefeeding tests were conducted to assess resistance to change. Contrary to the generalized matching law, logarithms of response ratios in the two components were not a linear function of log reinforcer ratios, implying a failure of parameter invariance. Over a 2 log unit range, the function appeared linear and indicated undermatching, but in conditions with more extreme reinforcer ratios, approximate matching was observed. A model suggested by McLean (1991), originally for local contrast, predicts these changes in sensitivity to reinforcer ratios somewhat better than models by Herrnstein (1970) and by Williams and Wixted (1986). Prefeeding tests of resistance to change were conducted at each reinforcer ratio, and relative resistance to change was also a nonlinear function of log reinforcer ratios, again contrary to conclusions from previous work. Instead, the function suggests that resistance to change in a component may be determined partly by the rate of reinforcement and partly by the ratio of reinforcers to responses.


Subject(s)
Color Perception , Discrimination Learning , Reinforcement Schedule , Time Perception , Animals , Columbidae , Conditioning, Operant , Generalization, Psychological , Nonlinear Dynamics
6.
J Exp Anal Behav ; 94(2): 197-207, 2010 Sep.
Article in English | MEDLINE | ID: mdl-21451748

ABSTRACT

Four pigeons were trained on two-key concurrent variable-interval schedules with no changeover delay. In Phase 1, relative reinforcers on the two alternatives were varied over five conditions from .1 to .9. In Phases 2 and 3, we instituted a molar feedback function between relative choice in an interreinforcer interval and the probability of reinforcers on the two keys ending the next interreinforcer interval. The feedback function was linear, and was negatively sloped so that more extreme choice in an interreinforcer interval made it more likely that a reinforcer would be available on the other key at the end of the next interval. The slope of the feedback function was -1 in Phase 2 and -3 in Phase 3. We varied relative reinforcers in each of these phases by changing the intercept of the feedback function. Little effect of the feedback functions was discernible at the local (interreinforcer interval) level, but choice measured at an extended level across sessions was strongly and significantly decreased by increasing the negative slope of the feedback function.
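The negatively sloped linear feedback function described above can be sketched as mapping relative choice in the preceding interreinforcer interval to the probability that the reinforcer ending the next interval is assigned to key 1. The function form and the clipping to [0, 1] are assumptions for illustration, not the authors' exact implementation:

```python
def reinforcer_prob_key1(relative_choice, slope, intercept):
    """Molar feedback function: probability that the next reinforcer is
    arranged on key 1 as a linear function of relative choice for key 1
    (B1 / (B1 + B2)) in the current interreinforcer interval. A negative
    slope makes extreme choice toward one key shift reinforcer
    probability toward the other key. Clipping is an assumption.
    """
    p = intercept + slope * relative_choice
    return max(0.0, min(1.0, p))

# With slope -1 and intercept 1, exclusive key-1 responding (1.0) makes
# the next reinforcer certain to be arranged on key 2; indifferent
# responding (0.5) leaves the keys equally likely
p_exclusive = reinforcer_prob_key1(1.0, slope=-1.0, intercept=1.0)  # 0.0
p_indiff = reinforcer_prob_key1(0.5, slope=-1.0, intercept=1.0)     # 0.5
```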


Subject(s)
Choice Behavior , Conditioning, Operant , Feedback , Reinforcement Schedule , Animals , Columbidae , Models, Psychological , Neuropsychological Tests , Time Factors