Results 1 - 16 of 16
1.
Dev Psychobiol ; 65(5): e22399, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37338253

ABSTRACT

Attention-deficit/hyperactivity disorder (ADHD) is a common neurodevelopmental disorder that often presents with abnormal time perception and increased impulsive choice behavior. The spontaneously hypertensive rat (SHR) is the most widely used preclinical model of the ADHD-Combined and ADHD-Hyperactive/Impulsive subtypes of the disorder. However, when testing the spontaneously hypertensive rat from Charles River (SHR/NCrl) on timing and impulsive choice tasks, the appropriate control strain is unclear, and one candidate control strain, the Wistar Kyoto from Charles River (WKY/NCrl), may itself be a model of ADHD-Predominantly Inattentive. Our goals were to test the SHR/NCrl, WKY/NCrl, and Wistar (WI; the progenitor strain for the SHR/NCrl and WKY/NCrl) strains on time perception and impulsive choice tasks to assess the validity of SHR/NCrl and WKY/NCrl as models of ADHD, and the validity of the WI strain as a control. We also sought to assess impulsive choice behavior in humans diagnosed with the three subtypes of ADHD and to compare these results with our findings from the preclinical models. We found that SHR/NCrl rats timed faster and were more impulsive than WKY/NCrl and WI rats, and that human participants diagnosed with ADHD were more impulsive than controls, with no differences among the three ADHD subtypes.
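As a rough illustration of how impulsive choice is commonly quantified in delay discounting tasks, the minimal sketch below applies the standard hyperbolic discounting model, V = A / (1 + kD) (Mazur, 1987). It is not code from this study; the amounts, delays, and k values are assumed for the example.

    # Hyperbolic delay discounting: V = A / (1 + k * D); larger k = steeper
    # discounting = more impulsive choice. All values below are hypothetical.
    def discounted_value(amount, delay, k):
        """Subjective value of a reward of `amount` delivered after `delay` seconds."""
        return amount / (1.0 + k * delay)

    def chooses_larger_later(a_ss, d_ss, a_ll, d_ll, k):
        """True if the larger-later option has the higher discounted value."""
        return discounted_value(a_ll, d_ll, k) > discounted_value(a_ss, d_ss, k)

    # A shallow discounter (low k) waits for the larger-later reward;
    # a steep discounter (high k) takes the smaller-sooner one.
    print(chooses_larger_later(1, 0, 3, 10, k=0.05))  # True
    print(chooses_larger_later(1, 0, 3, 10, k=1.0))   # False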


Subject(s)
Attention Deficit Disorder with Hyperactivity , Delay Discounting , Rats , Humans , Animals , Rats, Inbred SHR , Rats, Inbred WKY , Impulsive Behavior , Disease Models, Animal
2.
J Autism Dev Disord ; 52(6): 2414-2429, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34115327

ABSTRACT

Recently it has been proposed that impairments related to autism spectrum disorder (ASD) may reflect a more fundamental disruption in time perception. Here, we examined whether in utero exposure to valproic acid (VPA) can generate specific behavioral deficits related to ASD and time perception. Pups from control and VPA groups were tested using fixed-interval (FI) temporal bisection, peak interval, and intertemporal choice tasks. In addition, the rats were assessed on motor function, perseverative and exploratory behavior, anxiety, and memory. The VPA group displayed a leftward shift in timing functions. VPA rats displayed no deficits on the motor and memory tasks, but were significantly different from controls on measures of perseveration and anxiety.
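The "leftward shift in timing functions" refers to timed behavior peaking earlier than the trained interval. Below is a minimal sketch of one common way a peak-interval gradient is summarized, assuming a Gaussian fit to response rates on no-food trials; the target duration, rates, and noise are invented for illustration and are not data from this study.

    # Peak-interval sketch: fit a Gaussian to the response-rate gradient from
    # no-food ("peak") trials; the fitted mean estimates the timed interval.
    # A fitted mean below the trained target is a leftward shift.
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(t, a, mu, sigma):
        return a * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

    t = np.arange(0, 60, 2.0)                       # time into the trial (s)
    rate = gaussian(t, a=50, mu=24, sigma=8)        # hypothetical gradient, 30-s target
    rate += np.random.default_rng(0).normal(0, 2, t.size)

    (a, mu, sigma), _ = curve_fit(gaussian, t, rate, p0=[40, 30, 10])
    print(f"estimated peak time: {mu:.1f} s")       # ~24 s, i.e., left of the 30-s target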


Subject(s)
Autism Spectrum Disorder , Prenatal Exposure Delayed Effects , Animals , Autism Spectrum Disorder/chemically induced , Behavior, Animal , Choice Behavior , Disease Models, Animal , Female , Humans , Rats , Social Behavior , Valproic Acid/adverse effects
3.
J Exp Anal Behav ; 117(1): 53-68, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34734647

ABSTRACT

Chronic exposure to delayed reinforcement has been shown to increase choice for larger, later reinforcement in a subsequent delay discounting task. The three experiments presented in this paper tested the opposite: effects of chronic exposure to immediate reinforcement on choice in a subsequent delay discounting task. In Experiment 1, larger, later reinforcement choice was significantly reduced as a result of exposure to immediate reinforcement, compared to a maturation/handling control group, in experienced male Lewis rats. In Experiment 2, with naïve male and female Wistar rats, and Experiment 3, with naïve male Long Evans rats, the impact of exposure to immediate reinforcement was less robust, but directionally consistent with results from Experiment 1. These results align with some previous work reporting that exposure to immediate reinforcement may decrease choice for larger, later reinforcement in a delay discounting task and/or blunt maturational increases in choice for larger, later reinforcement. These findings have implications for future research investigating experience-based interventions to manipulate delay discounting behavior. They also have clinical implications for understanding and treating disorders involving maladaptive choice.
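Delay discounting outcomes like these are often summarized with a single index such as the area under the discounting curve (AUC; Myerson, Green, & Warusawitharana, 2001), where larger values indicate greater choice for larger, later rewards. The sketch below is a generic illustration of that measure, not the analysis used in these experiments, and the delays and indifference points are invented.

    # Trapezoidal area under the discounting curve over normalized delays/values.
    def discounting_auc(delays, indifference_points, max_value=1.0):
        """AUC in [0, 1]; higher = shallower discounting (more larger-later choice)."""
        max_delay = max(delays)
        x = [d / max_delay for d in delays]
        y = [v / max_value for v in indifference_points]
        return sum((x2 - x1) * (y1 + y2) / 2.0
                   for x1, x2, y1, y2 in zip(x, x[1:], y, y[1:]))

    delays = [0, 5, 10, 20, 40]                 # seconds (hypothetical)
    values = [1.0, 0.8, 0.6, 0.4, 0.25]         # hypothetical indifference points
    print(discounting_auc(delays, values))      # ~0.49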


Subject(s)
Delay Discounting , Animals , Choice Behavior , Female , Impulsive Behavior , Male , Rats , Rats, Inbred Lew , Rats, Long-Evans , Rats, Wistar
4.
Behav Processes ; 190: 104449, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34175409

ABSTRACT

The CNTNAP2 gene has been implicated in several neuropsychological disorders, including autism spectrum disorder (ASD) and schizophrenia. The CNTNAP2 knockout (KO) rat model, rats lacking the CNTNAP2 gene, exhibits deficits in social interaction and increases in both repetitive and anxiety-like behaviors. However, deficits in time perception, which may underlie several of these neuropsychological disorders, have not been investigated in this model. The current study investigated timing in CNTNAP2 KO rats compared to control rats using a discrete-trial temporal bisection task. Results suggested deficits in the timing of relatively long durations in the CNTNAP2 KO rats. This result is consistent with findings previously reported in humans diagnosed with ASD and is promising for understanding the role the CNTNAP2 gene may play in timing in certain neuropsychological disorders, and for developing targeted clinical therapies.
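In a discrete-trial temporal bisection task, the subject classifies sample durations as closer to a trained "short" or "long" anchor, and the proportion of "long" responses is usually fit with a sigmoid whose midpoint (the bisection point) indexes timing. The sketch below shows one common way to estimate that point; the durations and choice proportions are assumed, not taken from this study.

    # Fit a logistic psychometric function to "long" response proportions and
    # report the bisection point (duration judged "long" on half of trials).
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(d, pse, slope):
        return 1.0 / (1.0 + np.exp(-slope * (d - pse)))

    durations = np.array([2.0, 3.0, 4.0, 6.0, 8.0, 12.0])    # s; anchors 2 s and 12 s
    p_long = np.array([0.05, 0.15, 0.40, 0.70, 0.90, 0.97])  # hypothetical data

    (pse, slope), _ = curve_fit(logistic, durations, p_long, p0=[5.0, 1.0])
    print(f"bisection point: {pse:.2f} s")   # typically near the geometric mean (~4.9 s)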


Subject(s)
Autism Spectrum Disorder , Schizophrenia , Animals , Anxiety , Membrane Proteins/genetics , Mice , Mice, Knockout , Nerve Tissue Proteins/genetics , Rats
6.
Behav Processes ; 158: 126-136, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30468886

ABSTRACT

A relatively strong preference for smaller-sooner rewards (SSR) over larger-later rewards (LLR) is associated with a host of maladaptive behavioral patterns. As such, the clinical implications for increasing preference for LLR are profound. There is a growing body of literature that suggests extended exposure to delayed reward may increase preference for LLR in rats. However, questions remain about the underlying mechanism driving this effect and the extent to which extended exposure to immediate rewards may decrease LLR choice. In Experiment 1, we tested effects of a differential-reinforcement-of-low-rates schedule (DRL) to increase LLR choice using a pretest/posttest design with Wistar rats as subjects. We compared this group to a group of rats exposed to a differential-reinforcement-of-high-rates schedule (DRH). The DRH intervention has never been employed in this research context, but explicitly programs an immediate response-reinforcement requirement. In Experiment 2, we tested effects of an intervention with a delay longer than those used in the delay discounting pretest and posttest. No previous research has tested effects of an intervention delay this long, relative to the delay discounting task. We compared this group to a group exposed to a delay that was part of the delay discounting pretest and posttest and to a group exposed to a traditional no-delay, fixed-ratio (FR) 2 control intervention. In both experiments, we found that exposure to delayed rewards in the intervention phase significantly increased LLR choice relative to pretest performance. These findings replicate and extend a growing body of literature showing that delay exposure increases preference for LLR. We also found significant decreases in LLR choice from pretest to posttest in the DRH and no-delay intervention groups in Experiments 1 and 2, respectively. This is the first report of such an effect and has implications for understanding and interpreting effects of delay exposure training in past and future research. Our results also suggested no relationship between improved temporal tracking of reward and increases in LLR choice as a result of delay exposure training.
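For readers unfamiliar with the two intervention schedules, DRL reinforces a response only after a minimum pause since the previous response (an enforced delay), whereas DRH reinforces bursts of responding within a short window (immediate reinforcement for fast responding). The sketch below is a simplified illustration of those contingencies; the parameter values are assumptions, not the ones used in these experiments.

    # Simplified DRL vs. DRH reinforcement rules (illustrative parameters).
    def drl_reinforced(time_since_last_response, irt_min=10.0):
        """DRL: reinforce only if at least `irt_min` s elapsed since the last response."""
        return time_since_last_response >= irt_min

    def drh_reinforced(response_times, n_required=5, window=2.0):
        """DRH: reinforce if >= `n_required` responses fall within `window` s of the newest one."""
        recent = [t for t in response_times if response_times[-1] - t <= window]
        return len(recent) >= n_required

    print(drl_reinforced(12.3))                        # True: the pause was long enough
    print(drh_reinforced([0.1, 0.4, 0.8, 1.1, 1.5]))   # True: 5 responses within 1.4 s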


Subject(s)
Choice Behavior/physiology , Delay Discounting/physiology , Food , Impulsive Behavior/physiology , Reward , Animals , Male , Rats , Rats, Wistar , Reinforcement, Psychology , Time Factors
7.
J Exp Anal Behav ; 108(2): 236-254, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28776677

ABSTRACT

Verbal rules or instructions often exert obvious and meaningful control over human behavior. Sometimes instructions benefit the individual by enabling faster acquisition of a skill or by obviating an aversive consequence. However, research has also suggested a clear disadvantage: "insensitivity" to changing underlying contingencies. The two experiments described here investigated the variables that control initial rule-following behavior and rule-following insensitivity. When the initial rule was inaccurate, behavior was consistent with the rule for approximately half of participants and all participants' behavior was mostly insensitive to changing contingencies. When the initial rule was accurate, behavior of all participants was consistent with it and behavior for nearly all participants was insensitive to changes in underlying contingencies. These findings have implications for how best to establish and maintain rule-following behavior in applied settings when deviant behavior would be more reinforcing to the individual.


Subject(s)
Reinforcement, Psychology , Adolescent , Adult , Choice Behavior , Female , Humans , Male , Middle Aged , Photic Stimulation , Reward , Young Adult
8.
Behav Processes ; 135: 16-24, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27864066

ABSTRACT

The spontaneously hypertensive (SHR/NCrl) rat from Charles River is one of the most widely used models of the combined subtype of Attention-Deficit/Hyperactivity Disorder (ADHD-C). Although often used as its control strain, the Wistar Kyoto (WKY/NCrl) from Charles River has been proposed as a model of the predominantly inattentive subtype of ADHD (ADHD-PI). In Experiment 1, SHR/NCrl, WKY/NCrl, and Wistar (WI; the progenitor strain for the two models) rats were trained on a left→right lever-press sequence in the presence of light discriminative stimuli that signaled the active lever in the sequence. In subsequent conditions the discriminative light cues were removed or reversed. WKY/NCrl accuracy remained relatively stable across cue light transitions. SHR/NCrl and WI accuracy was more disrupted when light cues were removed or reversed, suggesting that, unlike the other strains, the behavior of the WKY/NCrl rats may not have come under control of the discriminative light cues and instead relied more on past behavior and spatial cues. In Experiment 2, all three strains were exposed to a response-initiated fixed-interval (RIFI) 8-s schedule of reinforcement. In RIFI schedules, behavior must be timed from a past instance of the target response. Replicating previous work, timing during the FI was roughly equivalent across the three strains; however, latencies to initiate the FI were significantly longer for SHR/NCrl than for WKY/NCrl and WI rats, suggesting SHR/NCrl behavior was less sensitive to the first-response:food contingency in the RIFI schedule. These findings identify differences in stimulus control between the three strains and may help determine the efficacy of SHR/NCrl and WKY/NCrl as models of ADHD subtypes in humans.
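The RIFI contingency is easier to see written out: the first response starts the interval clock, and the first response after the interval elapses produces food. The sketch below is a generic illustration of that rule (only the 8-s value comes from the abstract; the response times are invented), not the experimental control code.

    # Response-initiated fixed-interval (RIFI) trial logic, illustrative only.
    def rifi_food_time(response_times, interval=8.0):
        """Return the time of food delivery, or None if it was never earned."""
        if not response_times:
            return None
        start = response_times[0]          # first response initiates the interval
        for t in response_times[1:]:
            if t - start >= interval:
                return t                   # first response after the interval -> food
        return None

    print(rifi_food_time([3.0, 5.0, 9.5, 11.4]))   # 11.4 s (8.4 s after initiation)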


Subject(s)
Attention Deficit Disorder with Hyperactivity/physiopathology , Behavior, Animal/physiology , Animals , Cues , Discrimination, Psychological/physiology , Disease Models, Animal , Male , Psychomotor Performance , Rats , Rats, Inbred SHR , Rats, Inbred WKY
9.
J Exp Anal Behav ; 106(3): 210-224, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27870108

ABSTRACT

Behavior and events distributed in time can serve as markers that signal delays to future events. The majority of timing research has focused on how behavior changes as the time to some event, usually food availability, decreases. The primary objective of the two experiments presented here was to assess how behavior changes as time passes between two time markers when the first time marker was manipulated but the second, food delivery, was held constant. Pigeons were exposed to fixed-interval, response-initiated fixed-interval, and signaled response-initiated fixed-interval 15- and 30-s schedules of reinforcement. In Experiment 1, first-response latencies were systematically shorter in the signaled response-initiated schedules than in response-initiated schedules, suggesting that the first response was a more effective time marker when it was signaled. In Experiment 2, responding in no-food (i.e., "peak") trials indicated that timing accuracy was equivalent in the three schedule types. Compared to fixed-interval schedules, timing precision was reduced in the signaled response-initiated schedules and was lowest in response-initiated schedules. Results from Experiments 1 and 2, coupled with previous research, suggest that the overall "informativeness" of a time marker relative to other events and behaviors in the environment may determine its efficacy.


Subject(s)
Reaction Time , Reinforcement Schedule , Reinforcement, Psychology , Animals , Columbidae , Food
10.
Learn Behav ; 44(4): 366-377, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27129789

ABSTRACT

Recent research on interval timing in the behavioral and neurological sciences has employed a concurrent fixed-interval (FI) procedure first reported by Platt and Davis (Journal of Experimental Psychology: Animal Behavior Processes, 9, 160-170, 1983). Studies employing the task typically assess just 1 dependent variable, the switch/bisection point; however, multiple measures of timing are available in the procedure and it is unclear (a) what is timed (i.e., learned) by subjects and (b) what other measures might tell us about timing in the task and generally. The main objective of the current experiment was to utilize multiple dependent measures of timing accuracy and precision derived from the task to assess whether the 2 FIs are timed independently or if timing 1 FI interferes with timing the other, and vice versa. Four pigeons were exposed to an FI temporal bisection procedure with parametric manipulations across two phases. In the constant phase, the short FI was always the same; the long FI was 2 to 16 times the short FI and changed across conditions. In the proportional phase, the long FI was always 4 times the duration of the short FI. Across both phases, pigeon mean bisection points were near the geometric mean of the 2 FIs. Coefficients of variation increased as the durations to be timed increased. Results suggested pigeons' timing of the short FI was affected by the presence of the long FI, and vice versa. The FI temporal bisection task offers multiple dependent variables for analysis and is well suited for studying temporal learning and decision making.
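Two of the summary measures mentioned here are easy to state concretely: the bisection (switch) point is typically compared against the geometric mean of the two FIs, and variability is summarized with a coefficient of variation. The sketch below shows both calculations with invented numbers; it is not the analysis code from the study.

    import math

    def geometric_mean(short_fi, long_fi):
        """Scalar-timing benchmark for the bisection/switch point."""
        return math.sqrt(short_fi * long_fi)

    def coefficient_of_variation(samples):
        """SD divided by the mean; roughly constant under scalar timing."""
        mean = sum(samples) / len(samples)
        var = sum((x - mean) ** 2 for x in samples) / len(samples)
        return math.sqrt(var) / mean

    print(geometric_mean(15, 60))                          # 30.0 for FI 15 s vs. FI 60 s
    print(coefficient_of_variation([26, 29, 31, 33, 36]))  # ~0.11 (hypothetical switch times)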


Subject(s)
Decision Making , Learning , Time Perception , Animals , Behavior, Animal , Columbidae
11.
J Appl Behav Anal ; 48(4): 936-40, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26282112

ABSTRACT

Aversive control is a common method of reducing undesirable behavior in horses. However, it often results in unintended negative side effects, including potential abuse of the animal. Procedures based on positive reinforcement, such as differential reinforcement of other behavior (DRO), may reduce undesirable behaviors with fewer negative consequences. The current study used DRO schedules to reduce pawing in a multiple baseline design across 3 horses. Results indicated that DRO schedules were effective at reducing pawing. However, individual differences in sensitivity to DRO and reinforcer efficacy may be important considerations.
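A resetting DRO timer is simple to describe procedurally: a reinforcer is delivered whenever the DRO interval passes without the target behavior, and any occurrence of the target behavior restarts the timer. The sketch below illustrates that logic; the interval, session length, and pawing times are assumptions, not values from this study.

    # Resetting DRO: reinforce each `dro_interval` seconds free of the target behavior.
    def dro_reinforcer_times(paw_times, session_length, dro_interval=10.0):
        reinforcers, timer_start = [], 0.0
        events = sorted(paw_times)
        while timer_start + dro_interval <= session_length:
            due = timer_start + dro_interval
            pawing = [p for p in events if timer_start < p < due]
            if pawing:
                timer_start = pawing[-1]       # pawing resets the DRO timer
            else:
                reinforcers.append(due)        # interval completed without pawing
                timer_start = due              # timer restarts after reinforcement
        return reinforcers

    print(dro_reinforcer_times([4.0, 23.0], session_length=60))   # [14.0, 33.0, 43.0, 53.0]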


Subject(s)
Behavior, Animal , Conditioning, Operant , Horses , Reinforcement, Psychology , Self-Injurious Behavior/rehabilitation , Animals , Humans , Male , Reinforcement Schedule
12.
J Exp Anal Behav ; 103(2): 375-92, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25533195

ABSTRACT

Different events can serve as time markers that initiate intervals in schedules of reinforcement. Pigeons were exposed to fixed-interval (FI) schedules in which the onset of the interval was signaled by the illumination of a key light or initiated by a peck to a lighted key. Food was delivered following the first response after the interval elapsed. In Experiment 1, three pigeons were exposed to a multiple schedule. One component was a standard FI schedule: Key light illumination signaled the onset of the interval. The other component was a response-initiated fixed-interval (RIFI) schedule: The first key-peck response determined the onset of the interval. In Experiment 2, three pigeons were exposed to a multiple FI-RIFI schedule of reinforcement, and on occasional trials food was not delivered (i.e., "no-food" or "peak" trials). A yoking procedure equated reinforcement rates between the schedule types in both experiments. Absolute response rates early in the intervals were higher in the RIFI schedules of both experiments. Normalized response-rate gradients, ogive fits of normalized response gradients, and breakpoints were not systematically different for the schedule types in Experiment 1, indicating similar patterns of responding between interval onset and food delivery. However, during peak trials in Experiment 2 the duration of responding at a high rate was longer for RIFI schedules than FI schedules. This suggests that timing precision was reduced in the RIFI schedules and that relative "distinctiveness" of a time marker may determine its efficacy.


Subject(s)
Reinforcement Schedule , Animals , Columbidae , Conditioning, Operant , Reaction Time , Reinforcement, Psychology , Time Factors
13.
Behav Processes ; 106: 82-90, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24811449

ABSTRACT

State-dependent valuation learning (SDVL) is a preference for stimuli associated with relative food deprivation over stimuli associated with relative satiety. Pigeons were exposed to experimental conditions designed to investigate SDVL and to test the hypothesis that obtained relative immediacy during training predicts choice during test probes. Energy states were manipulated using a procedure that has previously revealed SDVL in starlings and pigeons. In Experiment 1, pigeons preferred the stimulus associated with deprivation in the first choice probe session, but were indifferent in the second. Changes in choice were consistent with changes in obtained relative immediacy. In Experiment 2, training parameters were altered and SDVL did not occur. Obtained relative immediacy again predicted choice. Results of both experiments provide evidence that obtained relative immediacy may be an important contributing factor to the SDVL phenomenon.
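"Obtained relative immediacy" can be read as the share of total immediacy (the reciprocal of the obtained delay to food) credited to one option. The sketch below is one straightforward way to compute such a share and is offered only as an assumption-laden illustration with invented delays, not the study's own definition or analysis.

    # Relative immediacy for option A given obtained delays to food (illustrative).
    def relative_immediacy(delay_a, delay_b):
        imm_a, imm_b = 1.0 / delay_a, 1.0 / delay_b
        return imm_a / (imm_a + imm_b)

    # If food arrived sooner on average during deprivation-paired training,
    # the deprivation-paired stimulus carries the larger relative immediacy.
    print(relative_immediacy(8.0, 12.0))   # 0.6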


Subject(s)
Choice Behavior/physiology , Columbidae/physiology , Conditioning, Operant/physiology , Food Deprivation/physiology , Animals , Time Factors
14.
J Exp Anal Behav ; 100(2): 187-97, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23897546

ABSTRACT

In fixed-interval (FI) and response-initiated fixed-interval (RIFI) schedules of reinforcement, a response is required after an interval has elapsed for delivery of reinforcement. In RIFI schedules, a response is required to initiate each interval as well. The objective of this experiment was a systematic comparison of performance in the two schedule types over a range of interval durations. Four pigeons were exposed to FI and RIFI schedules of 15, 30, 60, 120 and 240 s. Interfood intervals were longer and more variable in RIFI than corresponding FI schedules. In addition, response rates early in the RIFI schedules were higher than in corresponding FI schedules. However, the distribution of first-response latencies, mean breakpoints, and normalized response gradients suggest that temporal discrimination was equivalent in the two schedules.


Subject(s)
Reinforcement Schedule , Animals , Columbidae , Conditioning, Psychological , Reaction Time , Reinforcement, Psychology
15.
J Exp Anal Behav ; 99(3): 346-61, 2013 May.
Article in English | MEDLINE | ID: mdl-23408327

ABSTRACT

The present study investigated the effects of punishing responses inconsistent with rules on instructional control during a choice task. In a procedure modeled after Hackenberg and Joker (1994), 7 adults were presented with repeated choices between progressive- and fixed-time schedules of reinforcement and were given instructions (rules) for how to respond to maximize earnings. Across sessions, the progressive-time schedule step size was manipulated so that the instructions became increasingly inaccurate, and deviating from the instructions produced greater earnings. The experiment consisted of two phases, Penalty and No Penalty. During the Penalty phase, deviating from the instructions produced a money-loss penalty (response-cost punishment). Two participants showed persistent instructional control and therefore completed only one phase (Penalty or No Penalty), and 1 participant showed little instructional control during the Penalty phase until the punishment magnitude was increased. In all 4 participants who experienced both Penalty and No Penalty phases, punishment increased the consistency of choices with the instructions, and in 2 of these participants punishment increased the progressive-schedule step size at which choices began to systematically deviate from the instructions. These results show that monetary penalties for breaking with rules may enhance instructional control, but that deviations from rules still occur under punishment when such deviations produce greater reinforcement than rule following.


Subject(s)
Choice Behavior , Punishment/psychology , Female , Humans , Male , Reward , Young Adult
16.
Behav Processes ; 91(1): 125-8, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22617186

ABSTRACT

Biting and chewing by horses on crossties can result in injury to the handler and damage to equipment. Operant-conditioning techniques have been used to train horses and could be used to reduce or eliminate undesirable biting and chewing. In differential-reinforcement-of-other-behavior (DRO) schedules, a reinforcer is delivered contingent on the absence of a target behavior for a specified interval. In the present study, a DRO schedule, evaluated in the context of a reversal design, was effective in reducing biting and chewing in two horses. Positive-reinforcement procedures offer an alternative to the aversive-control techniques typically used in equine training and may provide for better equine welfare and horse-human interaction.


Subject(s)
Conditioning, Operant , Horses , Mastication , Reinforcement, Psychology , Animals , Female , Male , Reinforcement Schedule