Results 1 - 20 of 45
1.
Anim Cogn ; 27(1): 11, 2024 Mar 02.
Article in English | MEDLINE | ID: mdl-38429608

ABSTRACT

Optimal foraging theory suggests that animals make decisions which maximize their food intake per unit time when foraging, but the mechanisms animals use to track the value of behavioral alternatives and choose between them remain unclear. Several models for how animals integrate past experience have been suggested. However, these models make differential predictions for the occurrence of spontaneous recovery of choice: a behavioral phenomenon in which a hiatus from the experimental environment results in animals reverting to a behavioral allocation consistent with a reward distribution from the more distant past, rather than one consistent with their most recently experienced distribution. To explore this phenomenon and compare these models, three free-operant experiments with rats were conducted using a serial reversal design. In Phase 1, two responses (A and B) were baited with pellets on concurrent variable interval schedules, favoring option A. In Phase 2, lever baiting was reversed to favor option B. Rats then entered a delay period, where they were maintained at weight in their home cages and no experimental sessions took place. Following this delay, preference was assessed using initial responding in test sessions where levers were presented, but not baited. Several models were compared on their ability to account for these data, including an exponentially weighted moving average, the Temporal Weighting Rule, and variants of these models. While the data provided strong evidence of spontaneous recovery of choice, the form and extent of recovery was inconsistent with the models under investigation. Potential interpretations are discussed in relation to both the decision rule and valuation functions employed.
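The two named model families make different predictions after a hiatus, and a toy sketch makes the contrast concrete. All numbers, parameter values, and the simplified model forms below are invented for illustration; the study's actual variants differ.

```python
def ewma_value(payoffs, alpha=0.3):
    """Exponentially weighted moving average: recent payoffs dominate."""
    v = payoffs[0]
    for r in payoffs[1:]:
        v = (1 - alpha) * v + alpha * r
    return v

def twr_value(payoffs, ages):
    """Temporal Weighting Rule: weight each payoff by 1 / its age."""
    weights = [1.0 / a for a in ages]
    return sum(w * r for w, r in zip(weights, payoffs)) / sum(weights)

# Reversal history for one option: rich early (1.0), lean recently (0.0).
history = [1.0, 1.0, 1.0, 0.0, 0.0]

# The EWMA tracks the recent lean payoffs regardless of any delay.
print(ewma_value(history))              # about 0.49

# Under the TWR, a hiatus ages every experience, flattening the
# weights, so the older rich payoffs regain influence -- the
# signature of spontaneous recovery.
recent_ages = [5, 4, 3, 2, 1]
delayed_ages = [age + 30 for age in recent_ages]
print(twr_value(history, recent_ages))   # recent zeros dominate
print(twr_value(history, delayed_ages))  # higher: old payoffs recover
```

Under this sketch, only the age-weighted rule predicts that a delay by itself shifts valuation back toward the older distribution.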


Subject(s)
Choice Behavior , Conditioning, Operant , Rats , Animals , Choice Behavior/physiology , Conditioning, Operant/physiology , Reward , Behavior, Animal
2.
Behav Anal Pract ; 17(1): 228-245, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38405296

ABSTRACT

The extant literature demonstrates that individuals with intellectual and developmental disabilities (IDD) exhibit preferences among communication modalities when multiple modalities are available and produce reinforcement on identical reinforcement schedules. High- and low-tech communication options, such as voice output devices and picture cards, are commonly recommended for individuals with limited vocal communication skills. In this study, we conducted a systematic literature review of research studies that implemented mand modality preference assessments (MMPAs) that included both a high- and low-tech communication option with individuals with IDD. We identified 27 studies meeting our inclusion criteria and summarized the participant demographics, MMPA design and procedural variations, and MMPA outcomes. The results suggested that high-tech communication options were generally preferred over low-tech options. However, there was a high degree of variability in how the studies were conducted and conclusions were reached. We discuss some of the current research gaps and the implications for clinical practice.

3.
J Appl Behav Anal ; 56(3): 623-637, 2023 06.
Article in English | MEDLINE | ID: mdl-37088926

ABSTRACT

Differential reinforcement of alternative behavior (DRA) without extinction is an effective intervention for reducing problem behavior maintained by socially mediated reinforcement, particularly when implementing dense schedules of reinforcement for appropriate behavior. Thinning schedules of reinforcement for an alternative response may result in resurgence of problem behavior. Resurgence may be of particular concern in the treatment of problem behavior without extinction because problem behavior that resurges is also likely to encounter reinforcement and thus can be expected to maintain. In the present investigation, we compared the effectiveness of single and concurrent DRA schedules in decreasing the probability of resurgence when problem behavior continues to produce reinforcement throughout all phases of the evaluation. Concurrent DRA schedules reduced or eliminated the likelihood of resurgence compared with a single DRA schedule during a treatment challenge.


Subject(s)
Problem Behavior , Humans , Conditioning, Operant , Extinction, Psychological , Reinforcement Schedule , Reinforcement, Psychology , Behavior Therapy
4.
J Exp Anal Behav ; 119(2): 337-355, 2023 03.
Article in English | MEDLINE | ID: mdl-36718124

ABSTRACT

The generalized matching law predicts performance on concurrent schedules when variable-interval schedules are programmed but is trivially applicable when independent ratio schedules are used. Responding usually is exclusive to the schedule with the lowest response requirement. Determining a method to program concurrent ratio schedules such that matching analyses can be usefully employed would extend the generality of matching research and lead to new avenues of research. In the present experiments, ratio schedules were programmed dependently such that responses to either of the two options progressed the requirement on both schedules. Responding is not exclusive because the probability of reinforcement increases on both schedules as responses are allocated to either schedule. In Experiment 1, performance on concurrent variable-ratio schedules was assessed, and reinforcer ratios were varied across conditions to investigate changes in sensitivity. Additionally, the length of a changeover delay was manipulated. In Experiment 2, performance was compared under concurrently available, dependently programmed variable-ratio and fixed-ratio schedules. Performance was well described by the generalized matching law. Increases in the changeover delay decreased sensitivity, whereas sensitivity was higher when variable-ratio schedules were employed, compared with fixed-ratio schedules. Concurrent ratio schedules can be a viable approach to studying functional differences between ratio and interval schedules.
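The generalized matching law referenced above relates log behavior ratios to log reinforcer ratios, log(B1/B2) = a·log(R1/R2) + log(c), where a is sensitivity and c is bias. As a minimal sketch (not the paper's analysis), the parameters can be estimated by ordinary least squares on the log ratios; the data below are invented.

```python
import math

def fit_matching(behavior_ratios, reinforcer_ratios):
    """Fit log(B1/B2) = a*log(R1/R2) + log(c); return (a, log c)."""
    xs = [math.log10(r) for r in reinforcer_ratios]
    ys = [math.log10(b) for b in behavior_ratios]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Invented undermatching data: behavior ratios less extreme than
# reinforcer ratios (sensitivity 0.8, no bias).
reinf = [1 / 9, 1 / 3, 1.0, 3.0, 9.0]
beh = [r ** 0.8 for r in reinf]
a, logc = fit_matching(beh, reinf)
print(round(a, 3), round(logc, 3))   # sensitivity 0.8, log bias 0.0
```

Sensitivity a < 1 (undermatching) and a changeover-delay effect on a are exactly the quantities the experiments above manipulate.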


Subject(s)
Reinforcement, Psychology , Reinforcement Schedule
5.
Behav Processes ; 206: 104834, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36706824

ABSTRACT

The generalized matching law or Law of Allocation proposed by Baum (2018a, 2018b) potentially provides a broad conceptual framework within which to understand the allocation of time among activities. In its simplest form, the law incorporates power-function induction of activities by variables such as rate and amount of delivered inducers. Whether these variables affect allocation independently of one another is a central issue, because independence of the variables would allow simple multiplication of power functions and would make quantitative prediction simple too. The present experiment used a titration procedure to test the independence of rate and amount of food in determining pigeons' allocation of pecking between two keys. Amount ratio was varied within sessions to engender different peck ratios. Rate ratio was varied across two series of conditions. The results conformed to the predictions of the simple version of the Law of Allocation by strongly supporting independence of rate and amount. The Law of Allocation may have broad application for understanding activities in natural settings and everyday life.
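The independence claim at the center of this experiment has a simple multiplicative form: if rate and amount act independently, their power functions multiply, so the effect of an amount ratio on allocation is the same at every rate ratio. The exponents below are illustrative, not estimates from the study.

```python
def predicted_peck_ratio(rate_ratio, amount_ratio, s_rate=0.8, s_amount=0.6):
    """Multiplicative Law-of-Allocation prediction (illustrative exponents)."""
    return (rate_ratio ** s_rate) * (amount_ratio ** s_amount)

# Independence implies the amount effect is constant across rate ratios:
effect_at_low_rate = (predicted_peck_ratio(0.5, 4.0)
                      / predicted_peck_ratio(0.5, 1.0))
effect_at_high_rate = (predicted_peck_ratio(2.0, 4.0)
                       / predicted_peck_ratio(2.0, 1.0))
print(effect_at_low_rate, effect_at_high_rate)   # equal: 4 ** 0.6 both times
```

A failure of independence would show up as these two quotients diverging; the titration results reported above found no such divergence.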


Subject(s)
Conditioning, Operant , Reinforcement, Psychology , Animals , Reinforcement Schedule , Columbidae , Food
6.
J Exp Anal Behav ; 116(2): 182-207, 2021 09.
Article in English | MEDLINE | ID: mdl-34223635

ABSTRACT

Behavioral flexibility has, in part, been defined by choice behavior changing as a function of changes in reinforcer payoffs. We examined whether the generalized matching law quantitatively described changes in choice behavior in zebrafish when relative reinforcer rates, delays/immediacy, and magnitudes changed between two alternatives across conditions. Choice was sensitive to each of the three reinforcer properties. Sensitivity estimates to changes in relative reinforcer rates were greater when 2 variable-interval schedules were arranged independently between alternatives (Experiment 1a) than when a single schedule pseudorandomly arranged reinforcers between alternatives (Experiment 1b). Sensitivity estimates for changes in relative reinforcer immediacy (Experiment 2) and magnitude (Experiment 3) were similar but lower than estimates for reinforcer rates. These differences in sensitivity estimates are consistent with studies examining other species, suggesting flexibility in zebrafish choice behavior in the face of changes in payoff as described by the generalized matching law.


Subject(s)
Reinforcement, Psychology , Zebrafish , Animals , Choice Behavior , Columbidae , Reinforcement Schedule
7.
J Exp Anal Behav ; 115(3): 634-649, 2021 05.
Article in English | MEDLINE | ID: mdl-33713441

ABSTRACT

Rats were given repeated choices between social and nonsocial outcomes, and between familiar and unfamiliar social outcomes. Lever presses on either of 2 levers in the middle chamber of a 3-chamber apparatus opened a door adjacent to the lever, permitting 45-s access to social interaction with the rat in the chosen side chamber. In Experiment 1, rats preferred (a) social over nonsocial options, choosing their cagemate rat over an empty chamber, and (b) an unfamiliar over a familiar rat, choosing a non-cagemate over their cagemate. These findings were replicated in Experiment 2 with 2 different non-cagemate rats. Rats preferred both non-cagemate rats to a similar degree when pitted against their cagemate, but were indifferent when the 2 non-cagemates were pitted against each other. Similar preference for social over nonsocial and non-cagemate over cagemate was seen in Experiment 3, with new non-cagemate rats introduced after every third session. Response rates (for both cagemate and non-cagemate rats) were elevated under conditions of nonsocial (isolated) housing compared to conditions of social (paired) housing, demonstrating a social deprivation effect. Together, the experiments contribute to an experimental analysis of social preference within a social reinforcement framework, drawing on methods with proven efficacy in the analysis of reinforcement more generally.


Subject(s)
Reinforcement, Psychology , Social Behavior , Animals , Rats
8.
J Appl Behav Anal ; 53(3): 1514-1530, 2020 07.
Article in English | MEDLINE | ID: mdl-32034774

ABSTRACT

The purpose of the current study was to evaluate the effects of different magnitudes of escape for compliance relative to the magnitudes of escape for problem behavior in a concurrent-schedule arrangement. Three individuals who exhibited escape-maintained problem behavior participated. A large differential magnitude condition (240-s escape for compliance, 10-s escape for problem behavior) was compared to equal (30-s escape for compliance and problem behavior) and moderate differential magnitude (90-s escape for compliance, 10-s escape for problem behavior) conditions. The authors also evaluated the impact of correcting for reinforcer access time (i.e., time on escape intervals) on intervention interpretation. For all participants, problem behavior decreased during only the large differential magnitude condition, and including reinforcer access time in the overall session time did not affect interpretation of treatment outcomes. Providing larger escape magnitudes for compliance relative to problem behavior may facilitate treatment involving concurrent-reinforcement schedules for escape-maintained problem behavior.


Subject(s)
Problem Behavior/psychology , Reinforcement, Psychology , Adolescent , Autism Spectrum Disorder/psychology , Autism Spectrum Disorder/therapy , Child , Child, Preschool , Female , Humans , Male , Reinforcement Schedule , Treatment Outcome
9.
J Exp Anal Behav ; 111(2): 252-273, 2019 03.
Article in English | MEDLINE | ID: mdl-30779357

ABSTRACT

We demonstrate the usefulness of Bayesian methods in developing, evaluating, and using psychological models in the experimental analysis of behavior. We do this through a case study, involving new experimental data that measure the response count and time allocation behavior in pigeons under concurrent random-ratio random-interval schedules of reinforcement. To analyze these data, we implement a series of behavioral models, based on the generalized matching law, as graphical models, and use computational methods to perform fully Bayesian inference. We demonstrate how Bayesian methods, implemented in this way, make inferences about parameters representing psychological variables, how they test the descriptive adequacy of models as accounts of behavior, and how they compare multiple competing models. We also demonstrate how the Bayesian graphical modeling approach allows for more complicated modeling structures, including hierarchical, common cause, and latent mixture structures, to formalize more complicated behavioral models. As part of the case study, we demonstrate how the statistical properties of Bayesian methods allow them to provide more direct and intuitive tests of theories and hypotheses, and how they support the creative and exploratory development of new theories and models.
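The core of the Bayesian approach described above can be sketched without any graphical-modeling machinery: place a prior over the matching-law sensitivity parameter and compute a posterior from the log-ratio data. This toy grid approximation assumes Gaussian noise on log behavior ratios and a flat prior; the paper's hierarchical and latent-mixture models go well beyond it.

```python
import math

def posterior_over_sensitivity(xs, ys, sigma=0.1, grid=None):
    """Grid posterior for a in y = a*x + noise, flat prior on the grid."""
    grid = grid or [i / 100 for i in range(0, 201)]   # a in [0, 2]
    logpost = [sum(-((y - a * x) ** 2) / (2 * sigma ** 2)
                   for x, y in zip(xs, ys))
               for a in grid]
    m = max(logpost)                                  # stabilize the exp
    w = [math.exp(lp - m) for lp in logpost]
    z = sum(w)
    return grid, [wi / z for wi in w]

xs = [-1.0, -0.5, 0.0, 0.5, 1.0]       # log reinforcer ratios (invented)
ys = [0.9 * x for x in xs]             # generated with sensitivity 0.9
grid, post = posterior_over_sensitivity(xs, ys)
mean_a = sum(a * p for a, p in zip(grid, post))
print(round(mean_a, 2))                # posterior mean near 0.9
```

The full posterior, not just a point estimate, is what supports the direct tests of descriptive adequacy and model comparison the abstract emphasizes.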


Subject(s)
Bayes Theorem , Models, Psychological , Reinforcement, Psychology , Animals , Columbidae , Conditioning, Operant , Data Interpretation, Statistical , Humans , Psychology, Experimental/methods , Reinforcement Schedule
10.
J Exp Anal Behav ; 111(2): 359-368, 2019 03.
Article in English | MEDLINE | ID: mdl-30677136

ABSTRACT

Obtained reinforcement (whether measured as counts or as rates) is frequently used as a predictor in regression analyses of behavior. This approach, however, often contradicts the strict requirement that predictors in a regression be statistically independent of behavior. Indeed, by definition, reinforcement in operant scenarios depends on behavior, creating a causal feedback loop. The consequence of this feedback loop is bias in the estimation of regression parameters. This manuscript describes the technique of instrumental variable estimation (IVE), which allows unbiased regression parameters to be obtained through the use of "instruments," variables that are known a priori to be independent of both compromised predictors and of regression outcomes. Instruments also allow the strength of the bias to be assessed. Two examples of this technique are provided (one relying on real data and one relying on simulation) in the context of regression models of generalized matching.
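The bias from the feedback loop, and its removal by an instrument, can be shown with a minimal two-stage least squares sketch on simulated data. Everything here is invented for illustration: the predictor is contaminated by the same noise that drives the outcome, while the instrument is independent of that noise.

```python
import random

def ols_slope(xs, ys):
    """Ordinary least squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def iv_slope(xs, ys, zs):
    """Two-stage least squares: replace x with its projection on z."""
    b_xz = ols_slope(zs, xs)
    mz, mx = sum(zs) / len(zs), sum(xs) / len(xs)
    xhat = [mx + b_xz * (z - mz) for z in zs]
    return ols_slope(xhat, ys)

random.seed(1)
true_beta = 2.0
z = [random.gauss(0, 1) for _ in range(5000)]        # instrument
u = [random.gauss(0, 1) for _ in range(5000)]        # shared noise
x = [zi + ui for zi, ui in zip(z, u)]                # predictor depends on u
y = [true_beta * xi + ui for xi, ui in zip(x, u)]    # outcome also depends on u
print(round(ols_slope(x, y), 2))    # biased upward, around 2.5
print(round(iv_slope(x, y, z), 2))  # close to the true 2.0
```

In the matching context, the experimenter-programmed schedule values play the role of z: known a priori to be independent of the behavioral noise, unlike obtained reinforcement.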


Subject(s)
Applied Behavior Analysis/statistics & numerical data , Reinforcement, Psychology , Statistics as Topic , Regression Analysis
11.
J Exp Anal Behav ; 110(3): 336-365, 2018 11.
Article in English | MEDLINE | ID: mdl-30325040

ABSTRACT

A multivariate analysis is concerned with more than one dependent variable simultaneously. Models that generate event records have a privileged status in a multivariate analysis. From a model that generates event records, we may compute predictions for any dependent variable associated with those event records. However, because of the generality that is afforded to us by these kinds of models, we must carefully consider the selection of dependent variables. Thus, we present a conditional compromise heuristic for the selection of dependent variables from a large group of variables. The heuristic is applied to McDowell's Evolutionary Theory of Behavior Dynamics (ETBD) for fitting to a concurrent variable-interval schedule in-transition dataset. From the parameters obtained from fitting ETBD, we generated predictions for a wide range of dependent variables. Overall, we found that our ETBD implementation accounted well for various flavors of the log response ratio, but had difficulty accounting for the overall response rates and cumulative reinforcer effects. Based on these results, we argue that the predictions of our ETBD implementation could be improved by decreasing the base response probabilities, either by increasing the response latencies or by decreasing the sizes of the operant classes.


Subject(s)
Behavior , Biological Evolution , Animals , Computer Simulation , Heuristics , Humans , Multivariate Analysis , Psychological Theory
12.
J Exp Anal Behav ; 109(2): 313-335, 2018 03.
Article in English | MEDLINE | ID: mdl-29450892

ABSTRACT

In two experiments, experimentally naïve rats were trained in concurrent variable-interval schedules in which the reinforcer ratios changed daily according to a pseudorandom binary sequence. In Experiment 1, relative response rates showed clear sensitivity to current-session reinforcer ratios, but not to previous sessions' reinforcer ratios. Within sessions, sensitivity to the current session's reinforcement rates increased steadily, and by session end, response ratios approached matching to the current-session reinforcer ratios. Across sessions, sensitivity to the current session's reinforcer ratio decreased with continued exposure to the pseudorandom binary sequence, contrary to expectations based on previous studies demonstrating learning sets. Using a second group of naïve rats, Experiment 2 replicated the main results from Experiment 1 and showed that although there were increases over sessions in both changeover rate and response rate during the changeover delay, neither could explain the accompanying reductions in sensitivity. We consider the role of reinforcement history, showing that our results can be simulated using two separate representations, one local and one nonlocal, but a more complex approach will be needed to bring together these results and other history effects such as learning sets and spontaneous recovery.


Subject(s)
Choice Behavior , Reinforcement Schedule , Animals , Conditioning, Classical , Discrimination Learning , Male , Rats , Reinforcement, Psychology
13.
J Exp Anal Behav ; 109(1): 107-124, 2018 01.
Article in English | MEDLINE | ID: mdl-29194638

ABSTRACT

Responding on concurrent schedules produced a conditional discrimination (Phases 1 and 2), asking either which peck produced the event, or which color the keys were when the event was produced. In Phases 3 and 4, reinforcer delivery or a delay in blackout was interpolated between responding and the conditional discrimination. In Phase 1, location versus color discrimination accuracy was controlled by the relative reinforcer frequency for correct responses to these questions (divided stimulus control). In Phases 2 to 4, relative reinforcer frequency for correct responses to these questions was .5, and the relative frequency with which concurrent-schedule responses produced the questions was varied. This variation had no clear effect on the accuracy of reporting Location or Color. These results are consistent with the model of divided control suggested by Davison and Elliffe (2010). Arranging a 3-s reinforcer between responding and choice decreased both color and location accuracy, but a 3-s delay only decreased location accuracy. Thus, in concurrent-schedule performance, both ambient stimuli prior to a reinforcer and the location of the just-reinforced response are available as discriminative stimuli following the reinforcer. Control of postreinforcer responding is divided between these according to their association with the relative frequency of subsequent reinforcers.


Subject(s)
Conditioning, Operant , Discrimination Learning , Animals , Color , Columbidae , Photic Stimulation , Reinforcement, Psychology
14.
J Exp Anal Behav ; 108(3): 398-413, 2017 11.
Article in English | MEDLINE | ID: mdl-29105098

ABSTRACT

The resurgence of time allocation with pigeons was studied in three experiments. In Phase 1 of each experiment, response-independent food occurred with different probabilities in the presence of two different keylights. Each peck on the key changed its color and the food probability in effect. In Phase 2, the food probabilities associated with each keylight were reversed and, in Phase 3, food was discontinued in the presence of either keylight. The food probabilities were .25 and .75, in Experiment 1, and 0.0 and 1.0 in Experiment 2. More time was allocated to the keylight correlated with more probable food in Phases 1 and 2, and in Phase 3 resurgence of time allocation occurred for two of three pigeons in Experiment 1, and for each of four pigeons in Experiment 2. Because time had to be allocated to either of the two alternatives in Experiments 1 and 2, however, it was difficult to characterize the time allocation patterns in Phase 3 as resurgence when changeover responding approached zero. In Experiment 3 this issue was addressed by providing a third alternative uncorrelated with food such that in each phase, after 30 s in the presence of either keylight correlated with food, the third alternative always was reinstated, requiring a response to access either of the two keylights correlated with food. In this experiment, the food probabilities were similar to those in Experiment 1. Resurgence of time allocation occurred for each of three pigeons under this procedure. The results of these experiments suggest that patterns of time allocation resurge similarly to discrete responses and to spatial and temporal patterns of responding.


Subject(s)
Reinforcement, Psychology , Time Perception , Animals , Columbidae , Conditioning, Operant , Food , Male , Probability , Reinforcement Schedule , Reward
15.
J Exp Anal Behav ; 108(2): 204-222, 2017 09.
Article in English | MEDLINE | ID: mdl-28758210

ABSTRACT

Choice behavior among two alternatives has been widely researched, but fewer studies have examined the effect of multiple (more than two) alternatives on choice. Two experiments investigated whether changing the overall reinforcer rate affected preference among three and four concurrently scheduled alternatives. Experiment 1 trained six pigeons on concurrent schedules with three alternatives available simultaneously. These alternatives arranged reinforcers in a ratio of 9:3:1 with the configuration counterbalanced across pigeons. The overall rate of reinforcement was varied across conditions. Preference between the pair of keys arranging the 9:3 reinforcer ratio was less extreme than the pair arranging the 3:1 reinforcer ratio regardless of overall reinforcer rate. This difference was attributable to the richer alternative receiving fewer responses per reinforcer than the other alternatives. Experiment 2 trained pigeons on concurrent schedules with four alternatives available simultaneously. These alternatives arranged reinforcers in a ratio of 8:4:2:1, and the overall reinforcer rate was varied. Next, two of the alternatives were put into extinction and the random interval duration was changed from 60 s to 5 s. The ratio of absolute response rates was independent of interval length across all conditions. In both experiments, an analysis of sequences of visits following each reinforcer showed that the pigeons typically made their first response to the richer alternative irrespective of which alternative was just reinforced. Performance on these three- and four-alternative concurrent schedules is not easily extrapolated from corresponding research using two-alternative concurrent schedules.
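The baseline prediction these experiments test against is easy to state: under strict matching, response allocation across the three alternatives mirrors the 9:3:1 reinforcer proportions, so the 9-vs-3 and 3-vs-1 pairwise preferences have identical log ratios. The numbers below simply work out that prediction; the abstract reports systematic deviations from it.

```python
# Strict-matching prediction for the 9:3:1 arrangement described above.
reinforcers = [9, 3, 1]
total = sum(reinforcers)
responses = [r / total for r in reinforcers]   # predicted response proportions

pref_rich_pair = responses[0] / responses[1]   # 9 vs 3
pref_lean_pair = responses[1] / responses[2]   # 3 vs 1
print(pref_rich_pair, pref_lean_pair)          # both 3.0 under strict matching
```

The reported finding, that the 9:3 pair showed less extreme preference than the 3:1 pair, is precisely a violation of this equal-pairwise-ratio prediction.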


Subject(s)
Choice Behavior , Reinforcement, Psychology , Animals , Columbidae , Conditioning, Operant , Reinforcement Schedule
16.
J Exp Anal Behav ; 107(3): 369-387, 2017 05.
Article in English | MEDLINE | ID: mdl-28516673

ABSTRACT

Although choice between two alternatives has been widely researched, fewer studies have examined choice across multiple (more than two) alternatives. Past models of choice behavior predict that the number of alternatives should not affect relative response allocation, but more recent research has found violations of this principle. Five pigeons were presented with three concurrently scheduled alternatives. Relative reinforcement rates across these alternatives were assigned 9:3:1. In some conditions three keys were available; in others, only two keys were available. The number of available alternatives did not affect relative response rates for pairs of alternatives; there were no significant differences in behavior between the two- and three-key conditions. For two birds in the three-alternative conditions and three birds in the two-alternative conditions, preference was more extreme for the pair of alternatives with the lower overall pairwise reinforcer rate (3:1) than the pair with higher overall reinforcer rate (9:3). However, when responding during the changeover was removed, three birds showed the opposite pattern in the three-alternative conditions; preference was more extreme for the pair of alternatives with the higher overall reinforcer rate. These findings differ from past research and do not support established theories of choice behavior.


Subject(s)
Choice Behavior , Animals , Columbidae , Conditioning, Operant , Models, Psychological , Reinforcement Schedule , Reinforcement, Psychology
17.
J Exp Anal Behav ; 107(3): 321-342, 2017 05.
Article in English | MEDLINE | ID: mdl-28516674

ABSTRACT

Price's equation describes evolution across time in simple mathematical terms. Although it is not a theory, but a derived identity, it is useful as an analytical tool. It affords lucid descriptions of genetic evolution, cultural evolution, and behavioral evolution (often called "selection by consequences") at different levels (e.g., individual vs. group) and at different time scales (local and extended). The importance of the Price equation for behavior analysis lies in its ability to precisely restate selection by consequences, thereby restating, or even replacing, the law of effect. Beyond this, the equation may be useful whenever one regards ontogenetic behavioral change as evolutionary change, because it describes evolutionary change in abstract, general terms. As an analytical tool, the behavioral Price equation is an excellent aid in understanding how behavior changes within organisms' lifetimes. For example, it illuminates evolution of response rate, analyses of choice in concurrent schedules, negative contingencies, and dilemmas of self-control.
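Because the Price equation is a derived identity, a small worked example with invented numbers shows the decomposition directly: the change in the mean trait value splits into a selection (covariance) term and a transmission term, Δz̄ = cov(w, z)/w̄ + E(w·Δz)/w̄. In a behavioral reading, z might index membership in a response class and w the reinforcers that class produces; that mapping is this sketch's assumption, not a claim from the article.

```python
def price_decomposition(z, w, z_prime):
    """z: parent trait values, w: fitnesses, z_prime: offspring values."""
    n = len(z)
    wbar = sum(w) / n
    zbar = sum(z) / n
    cov_wz = sum(wi * zi for wi, zi in zip(w, z)) / n - wbar * zbar
    dz = [zp - zi for zp, zi in zip(z_prime, z)]
    transmission = sum(wi * d for wi, d in zip(w, dz)) / n
    return cov_wz / wbar, transmission / wbar

z = [0.0, 1.0]         # trait: e.g., not emitting vs. emitting a response
w = [1.0, 3.0]         # "fitness": reinforcers each variant produces
z_prime = [0.0, 1.0]   # faithful transmission, no within-lineage change
selection, transmission = price_decomposition(z, w, z_prime)
print(selection, transmission)   # 0.25 and 0.0

# Identity check: the new fitness-weighted mean minus the old mean
# equals the sum of the two terms.
new_mean = sum(wi * zp for wi, zp in zip(w, z_prime)) / sum(w)
print(new_mean - sum(z) / len(z))   # 0.25 again
```

With faithful transmission the whole change is selection; a nonzero transmission term would capture within-lifetime drift of the response classes themselves.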


Subject(s)
Biological Evolution , Choice Behavior , Animals , Behavior , Cultural Evolution , Humans , Models, Theoretical , Reinforcement Schedule , Reinforcement, Psychology , Selection, Genetic/genetics
18.
J Appl Behav Anal ; 50(3): 590-599, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28513826

ABSTRACT

The effects of noncontingent reinforcement (NCR) without extinction during treatment of problem behavior maintained by social positive reinforcement were evaluated for five individuals diagnosed with autism spectrum disorder. A continuous NCR schedule was gradually thinned to a fixed-time 5-min schedule. If problem behavior increased during NCR schedule thinning, a continuous NCR schedule was reinstated and NCR schedule thinning was repeated with differential reinforcement of alternative behavior (DRA) included. Results showed an immediate decrease in all participants' problem behavior during continuous NCR, and problem behavior maintained at low levels during NCR schedule thinning for three participants. Problem behavior increased and maintained at higher rates during NCR schedule thinning for two other participants; however, the addition of DRA to the intervention resulted in decreased problem behavior and increased mands.


Subject(s)
Autism Spectrum Disorder/therapy , Behavior Therapy/methods , Extinction, Psychological , Problem Behavior/psychology , Reinforcement Schedule , Reinforcement, Psychology , Child , Child, Preschool , Humans , Male
19.
J Exp Anal Behav ; 107(1): 39-64, 2017 01.
Article in English | MEDLINE | ID: mdl-28101928

ABSTRACT

Although theoretical discussions typically assume that positive and negative reinforcement differ, the literature contains little unambiguous evidence that they produce differential behavioral effects. To test whether the two types of consequences control behavior differently, we pitted money-gain positive reinforcement and money-loss-avoidance negative reinforcement, scheduled through identically programmed variable-cycle schedules, against each other in concurrent schedules. Contingencies of response-produced feedback, normally different in positive and negative reinforcement, were made symmetrical. Steeper matching slopes were produced compared to a baseline consisting of all positive reinforcement. This free-operant differential outcomes effect supports the notion that stimulus-presentation positive reinforcement and stimulus-elimination negative reinforcement are functionally "different." However, a control experiment showed that the feedback asymmetry of more traditional positive and negative reinforcement schedules also is sufficient to create a "difference" when the type of consequence is held constant. We offer these findings as a small step in meeting the very large challenge of moving negative reinforcement theory beyond decades of relative quiescence.


Subject(s)
Conditioning, Operant , Reinforcement, Psychology , Humans , Psychological Theory , Reinforcement Schedule
20.
J Exp Anal Behav ; 107(1): 123-135, 2017 01.
Article in English | MEDLINE | ID: mdl-28000221

ABSTRACT

Pigeons made repeated choices between earning and exchanging reinforcer-specific tokens (green tokens exchangeable for food, red tokens exchangeable for water) and reinforcer-general tokens (white tokens exchangeable for food or water) in a closed token economy. Food and green food tokens could be earned on one panel; water and red water tokens could be earned on a second panel; white generalized tokens could be earned on either panel. Responses on one key produced tokens according to a fixed-ratio schedule, whereas responses on a second key produced exchange periods, during which all previously earned tokens could be exchanged for the appropriate commodity. Most conditions were conducted in a closed economy, and pigeons distributed their token allocation in ways that permitted food and water consumption. When the price of all tokens was equal and low, most pigeons preferred the generalized tokens. When token-production prices were manipulated, pigeons reduced production of the tokens that increased in price while increasing production of the generalized tokens that remained at a fixed price. The latter is consistent with a substitution effect: Generalized tokens increased and were exchanged for the more expensive reinforcer. When food and water were made freely available outside the session, token production and exchange was sharply reduced but was not eliminated, even in conditions when it no longer produced tokens. The results join with other recent data in showing sustained generalized functions of token reinforcers, and demonstrate the utility of token-economic methods for assessing demand for and substitution among multiple commodities in a laboratory context.


Subject(s)
Reinforcement, Psychology , Token Economy , Animals , Columbidae , Conditioning, Operant , Generalization, Psychological