Results 1 - 20 of 20
1.
J Exp Anal Behav ; 119(1): 129-139, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36443244

ABSTRACT

Many philosophers, psychologists, and lay folk associate volition with autonomy (actions are independent of an individual's environment) and free will (individuals originate their actions). Most behaviorists hold these views to be incompatible with behavior analyses. The present paper describes volition as interpreted by B. F. Skinner, Howard Rachlin, and Allen Neuringer. Skinner relates volition to positively reinforced operant behavior. That works because, like operants, voluntary actions are free, in the sense of not physically constrained; they affect their environments, often resulting in positive outcomes, and are sometimes unpredictable. Rachlin, while incorporating Skinnerian methods, interprets volition within his own Teleological Behaviorism framework. For Rachlin, in disagreement with Skinner, reinforcement of an individual response is often incompatible with voluntary control. Responses are voluntary only when they are members of extended response patterns. Neuringer also begins with Skinner's operants, but argues that, under the control of reinforcing consequences, both voluntary actions and operant responses are sometimes predictable and other times "truly" unpredictable. Neuringer does not assume that environments determine voluntary actions, thereby disagreeing with Skinner and Rachlin. Taken together, the agreements and disagreements among these three behaviorists may help to shed light on the relationship between operants and volition.


Subject(s)
Behaviorism , Volition , Humans , Reinforcement, Psychology
3.
Am Psychol ; 76(8): 1349, 2021 11.
Article in English | MEDLINE | ID: mdl-35113601

ABSTRACT

Memorializes Howard Rachlin (1935-2021). Rachlin was born to Irving and Gussie Kugler Rachlin in New York City on March 10, 1935. He died 86 years later of cancer, leaving his wife Nahid, daughter Leila, and grandson Ethan. He received numerous recognitions: the Med Associates Distinguished Contributions to Basic Behavioral Research award from Division 25 of the American Psychological Association, the Impact of Science on Application award from the Association for Behavior Analysis, a James McKeen Cattell Fellowship, continuous funding from the National Science Foundation and the National Institutes of Mental Health (from which he received the MERIT award), a visiting scholar appointment at the Russell Sage Foundation, and an invited address at the Nobel symposium on Behavioral and Experimental Economics. Of himself Rachlin wrote: "He obtained a bachelor of mechanical engineering degree from Cooper Union in New York City [1957], where he learned to treat all scientific and practical questions as asking for answers rather than for self-expression; masters in philosophy and psychology from The New School of Social Research in New York City [1962], where he learned that the whole may be greater than the sum of its parts; and a PhD from Harvard University [1965], where B. F. Skinner and Richard Herrnstein taught him how to be a behaviorist." After teaching at Harvard, he joined Stony Brook University in New York in 1969, rising to the position of Distinguished Research Professor. Rachlin studied choice and decision-making; he was one of the founders of behavioral economics. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subject(s)
Awards and Prizes , Humans , Learning , Male , Philosophy , Societies, Scientific , Universities
4.
J Exp Anal Behav ; 110(3): 380-393, 2018 11.
Article in English | MEDLINE | ID: mdl-30298690

ABSTRACT

Studies with rats and pigeons showed that reinforcement of response variability improved learning of difficult response sequences. The results suggested that concurrent reinforcement of variability might be a helpful tool when educators or therapists attempt to teach individuals with learning difficulties. However, similar experiments with humans failed to confirm the results. In fact, in the human case, concurrent reinforcement of variability interfered with learning. The present experiment studied the same phenomenon with human participants in the context of a computer-based game. Our results were consistent with the nonhuman animal findings. When students in our experiment were concurrently reinforced for sequence variability, they were more likely than control participants to learn a difficult response sequence. We conclude that reinforcement of variability can facilitate learning, in humans as well as animals, and discuss possible reasons for the difference between our results and the previous human findings.
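The abstract does not describe the specific variability contingency used in the game. As a rough sketch of how concurrent reinforcement of sequence variability is often programmed in this literature (a recency-weighted frequency-threshold rule), with every name, value, and target sequence below being a hypothetical illustration rather than a parameter taken from the paper:

    import random
    from collections import defaultdict

    TARGET = "LLRR"      # hypothetical difficult target sequence
    THRESHOLD = 0.05     # reinforce a sequence only if its weighted relative frequency is this low
    DECAY = 0.95         # recency weighting applied to past sequence counts

    counts = defaultdict(float)
    total = 0.0

    def trial(sequence: str) -> bool:
        """Return True if this four-response sequence earns a reinforcer."""
        global total
        # Recency-weighted relative frequency of this particular sequence so far.
        rel_freq = counts[sequence] / total if total > 0 else 0.0
        # Reinforce the target sequence, and concurrently reinforce variability:
        # any sequence emitted rarely enough (below threshold) also pays off.
        reinforced = (sequence == TARGET) or (rel_freq <= THRESHOLD)
        # Decay old counts, then record the current sequence.
        for key in counts:
            counts[key] *= DECAY
        total = total * DECAY + 1.0
        counts[sequence] += 1.0
        return reinforced

    # Example: simulate a participant emitting random left/right sequences.
    random.seed(1)
    wins = sum(trial("".join(random.choice("LR") for _ in range(4))) for _ in range(200))
    print(f"reinforced trials: {wins}/200")

Under a rule of this kind, a rarely emitted sequence earns reinforcement even when it is not the target, so variation itself continues to pay off while the difficult target sequence is being acquired.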


Subject(s)
Reinforcement, Psychology , Choice Behavior , Humans , Learning , Reinforcement Schedule , Video Games/psychology
5.
J Exp Anal Behav ; 107(1): 21-33, 2017 01.
Article in English | MEDLINE | ID: mdl-27887034

ABSTRACT

This paper examines similarities in the works of Epicurus, an ancient Greek philosopher, and B. F. Skinner, a behavioral psychologist. They both were empiricists who argued in favor of the lawfulness of behavior while maintaining that random events were included within those laws. They both devoted much effort to describing how individuals could live effective, rewarding and pleasurable lives. They both emphasized simple and natural pleasures (or reinforcers) and the importance of combining personal pleasures with actions that benefit friends and community. They both opposed punishment and all aversive measures used by governments and religions to control behaviors. And both created utopias: a real community, The Garden, where Epicurus lived with his followers, and a fictional one, Walden Two, by Skinner. We consider how a combination of the ideas of Epicurus and Skinner can contribute to their common goal of helping people to live better lives.


Subject(s)
Greek World , Philosophy/history , Psychology/history , History, 20th Century , History, Ancient , Pleasure , Reinforcement, Psychology , Reward , Utopias
6.
Physiol Behav ; 107(3): 451-7, 2012 Oct 10.
Article in English | MEDLINE | ID: mdl-22885121

ABSTRACT

In an open-field test, the Long-Evans (LE) strain of rats was identified as "bold" and the PVG strain as "shy." Some members of each strain then experienced 14 sessions of a common enrichment procedure, namely exposure to a series of novel objects (Exposed). Others in each strain were explicitly reinforced with food pellets for variable interactions with the same objects (Reinforced). Both experience and strain influenced object interactions. In particular, Reinforced rats interacted more variably with the objects (contacting, probing, pushing, and so forth) than did the Exposed; and LEs interacted more variably than PVGs. Foraging proficiency in the same rats was then studied in a transfer-of-training test. Food pellets were hidden among never-before experienced objects and the rats were permitted to explore freely. Reinforced rats discovered and consumed more pellets than Exposed; and LEs discovered and consumed more than PVGs. Thus, a bold genetic strain and reinforcement of variability independently contributed to successful foraging behavior.


Subject(s)
Conditioning, Operant/physiology , Exploratory Behavior/physiology , Personality/genetics , Reinforcement, Psychology , Animals , Feeding Behavior/physiology , Rats , Rats, Inbred Strains , Rats, Long-Evans , Reaction Time/genetics , Reinforcement Schedule
7.
Behav Anal ; 35(2): 229-35, 2012.
Article in English | MEDLINE | ID: mdl-23450914
8.
Behav Anal ; 34(1): 27-9, 2011.
Article in English | MEDLINE | ID: mdl-22532726
9.
Psychol Rev ; 117(3): 972-93, 2010 Jul.
Article in English | MEDLINE | ID: mdl-20658860

ABSTRACT

A behavior-based theory identified 2 characteristics of voluntary acts. The first, extensively explored in operant-conditioning experiments, is that voluntary responses produce the reinforcers that control them. This bidirectional relationship, in which reinforcer depends on response and response on reinforcer, demonstrates the functional nature of the voluntary act. The present article focuses on the second characteristic: a similar bidirectional relationship between reinforcement and the predictability/unpredictability of voluntary acts. Support for the theory comes from 2 areas of research. The first shows that levels of behavioral variability, from highly predictable to randomlike, are directly influenced by reinforcers. Put another way, variability is an operant dimension, analogous to response rate and force. The second source of support comes from psychophysical experiments in which human participants judged the degree to which "choices" by virtual actors on a computer screen appeared to be voluntary. The choices were intermittently reinforced according to concurrently operating schedules. The actors' behaviors appeared to most closely approximate voluntary human choices when response distributions matched reinforcer distributions (an indication of functionality) and when levels of variability, from repetitive to random, changed with reinforcement contingencies. Thus, voluntary acts are characterized by reinforcement-controlled functionality and unpredictability.


Subject(s)
Conditioning, Operant , Choice Behavior , Discrimination, Psychological , Humans , Linguistics , Models, Psychological , Physical Stimulation , Reinforcement Schedule , Reinforcement, Psychology , Uncertainty
10.
Behav Anal ; 33(2): 181-4, 2010.
Article in English | MEDLINE | ID: mdl-22532711
11.
J Exp Anal Behav ; 92(2): 139-59, 2009 Sep.
Article in English | MEDLINE | ID: mdl-20354596

ABSTRACT

In most studies of choice under concurrent schedules of reinforcement, two physically identical operanda are provided. In the "real world," however, more than two choice alternatives are often available and biases are common. This paper describes a method for studying choices among an indefinite number of alternatives when large biases are present. Twenty rats were rewarded for choosing among five operanda with reinforcers scheduled probabilistically and concurrently. Large biases were generated by differences among the operanda: two were levers and three were pigeon keys. The results showed that when reinforcer frequencies were systematically varied, an extension of Baum's (1974) Generalized Matching Model, referred to as the Barycentric Matching Model, provided an excellent description of the data, including individual bias values for each of the operanda and a single exponent indicating sensitivity to reinforcer ratios.
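The abstract does not reproduce the model equations. For orientation, Baum's (1974) two-alternative generalized matching law, with sensitivity a and bias b, is commonly written as

    \log \frac{B_1}{B_2} = a \, \log \frac{R_1}{R_2} + \log b

and an extension to n alternatives with per-operandum biases b_i and a single sensitivity exponent a, which is one plausible reading of the Barycentric Matching Model described here (the paper's exact formulation may differ), would be

    \frac{B_i}{\sum_{j=1}^{n} B_j} = \frac{b_i R_i^{\,a}}{\sum_{j=1}^{n} b_j R_j^{\,a}}

where B denotes responses emitted and R reinforcers obtained on each operandum.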


Subject(s)
Choice Behavior , Conditioning, Operant , Models, Psychological , Neuropsychological Tests , Reinforcement, Psychology , Reward , Animals , Male , Rats , Rats, Long-Evans , Reinforcement Schedule , Selection Bias
12.
J Exp Psychol Anim Behav Process ; 34(4): 437-60, 2008 Oct.
Article in English | MEDLINE | ID: mdl-18954229

ABSTRACT

Two procedures commonly used to study choice are concurrent reinforcement and probability learning. Under concurrent-reinforcement procedures, once a reinforcer is scheduled, it remains available indefinitely until collected. Therefore reinforcement becomes increasingly likely with passage of time or responses on other operanda. Under probability learning, reinforcer probabilities are constant and independent of passage of time or responses. Therefore a particular reinforcer is gained or not, on the basis of a single response, and potential reinforcers are not retained, as when betting at a roulette wheel. In the "real" world, continued availability of reinforcers often lies between these two extremes, with potential reinforcers being lost owing to competition, maturation, decay, and random scatter. The authors parametrically manipulated the likelihood of continued reinforcer availability, defined as hold, and examined the effects on pigeons' choices. Choices varied as power functions of obtained reinforcers under all values of hold. Stochastic models provided generally good descriptions of choice emissions with deviations from stochasticity systematically related to hold. Thus, a single set of principles accounted for choices across hold values that represent a wide range of real-world conditions.
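To make the procedural contrast concrete, here is a minimal simulation sketch of the two scheduling extremes and of intermediate hold values; the probabilities, trial counts, and function names are illustrative assumptions, not parameters from the study:

    import random

    P = [0.10, 0.05]   # hypothetical scheduling probabilities for two alternatives

    def make_schedule(hold: float):
        """hold = probability that an already-scheduled reinforcer survives each trial.
        hold=1.0 approximates a concurrent-reinforcement schedule (a scheduled reinforcer
        waits until collected); hold=0.0 approximates probability learning (no storage)."""
        armed = [False, False]           # is a reinforcer currently stored on each alternative?
        def respond(choice: int) -> bool:
            # Arm each alternative probabilistically, independent of responding.
            for i in range(2):
                armed[i] = armed[i] or (random.random() < P[i])
            won = armed[choice]
            armed[choice] = False        # collecting clears the store
            # Uncollected reinforcers persist only with probability `hold`.
            for i in range(2):
                if armed[i] and random.random() > hold:
                    armed[i] = False
            return won
        return respond

    random.seed(0)
    for h in (1.0, 0.5, 0.0):
        respond = make_schedule(h)
        wins = sum(respond(random.randrange(2)) for _ in range(10000))
        print(f"hold={h:.1f}: obtained {wins} reinforcers in 10000 responses")

With hold = 1.0 an uncollected reinforcer waits indefinitely; with hold = 0.0 each response is an independent gamble; intermediate values mimic reinforcers that can be lost to competition, decay, or scatter.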


Subject(s)
Choice Behavior , Learning , Reinforcement, Psychology , Animals , Columbidae , Models, Psychological , Time Factors
13.
Behav Processes ; 78(2): 231-9, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18394825

ABSTRACT

Human participants played a computer game in which choices among five alternatives (wedges) were concurrently reinforced according to dependent random-ratio schedules. "Dependent" indicates that a choice of any wedge activated the random-number generators governing reinforcers on all five alternatives. Two conditions were compared. In the hold condition, once scheduled, a reinforcer (worth a constant five points) remained available until it was collected. In the decay condition, point values decreased with intervening responses, i.e., rapid collection was differentially reinforced. Slopes of matching functions were higher in the decay condition than in the hold condition. However, inter-subject variability was high in both conditions.


Subject(s)
Choice Behavior , Decision Making , Reinforcement, Psychology , Adult , Female , Games, Experimental , Humans , Male , Reference Values
14.
J Exp Anal Behav ; 88(1): 1-28, 2007 Jul.
Article in English | MEDLINE | ID: mdl-17725049

ABSTRACT

Attempts to characterize voluntary behavior have been ongoing for thousands of years. We provide experimental evidence that judgments of volition are based upon distributions of responses in relation to obtained rewards. Participants watched as responses, said to be made by "actors," appeared on a computer screen. The participant's task was to estimate how well each actor represented the voluntary choices emitted by a real person. In actuality, all actors' responses were generated by algorithms based on Baum's (1979) generalized matching function. We systematically varied the exponent values (sensitivity parameter) of these algorithms: some actors matched response proportions to received reinforcer proportions, others overmatched (predominantly chose the highest-valued alternative), and yet others undermatched (chose relatively equally among the alternatives). In each of five experiments, we found that the matching actor's responses were judged to most closely approximate voluntary choice. We found also that judgments of high volition depended upon stochastic (or probabilistic) generation. Thus, stochastic responses that match reinforcer proportions best represent voluntary human choice.
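The generation algorithms themselves are not given in the abstract. A minimal sketch of one way such stochastic actors could be programmed, with choice probabilities proportional to reinforcer proportions raised to a sensitivity exponent (all names and values below are hypothetical):

    import random

    def stochastic_actor(reinforcer_shares, a):
        """Choose among alternatives with probabilities proportional to R_i ** a.
        a = 1 approximates matching, a > 1 overmatching, a < 1 undermatching."""
        weights = [r ** a for r in reinforcer_shares]
        total = sum(weights)
        probs = [w / total for w in weights]
        return random.choices(range(len(probs)), weights=probs, k=1)[0]

    random.seed(2)
    shares = [0.6, 0.3, 0.1]             # hypothetical obtained-reinforcer proportions
    for a in (0.2, 1.0, 4.0):            # under-, strict, and over-matching actors
        picks = [stochastic_actor(shares, a) for _ in range(10000)]
        props = [picks.count(i) / len(picks) for i in range(3)]
        print(f"a={a}: response proportions {[round(p, 2) for p in props]}")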


Subject(s)
Choice Behavior , Volition , Humans , Judgment , Play and Playthings , Reinforcement, Psychology , Stochastic Processes
15.
Learn Behav ; 34(2): 111-23, 2006 May.
Article in English | MEDLINE | ID: mdl-16933797

ABSTRACT

Operant responses are often weakened when delays are imposed between the responses and reinforcers. We examined what happens when delayed reinforcers were contingent upon operant response variability. Three groups of rats were rewarded for varying their response sequences, with one group rewarded for high variability, another for intermediate levels, and the third for low levels. Consistent with many reports in the literature, responding slowed significantly in all groups as delays were lengthened. Consistent with other reports, large differences in variability were maintained across the three groups despite the delays. Reinforced variability appears to be relatively immune to disruption by such things as delays, response slowing, prefeeding, and noncontingent reinforcement. Furthermore, the small effects on variability depended on baseline levels: As delays lengthened, variability increased in the low group, was statistically unchanged in the middle group, and decreased in the high group, an interaction similar to that reported previously when reinforcement frequencies were lowered. Thus, variable operant responding is controlled by reinforcement contingencies, but sometimes differently than more commonly studied repetitive responding.


Subject(s)
Conditioning, Operant , Reinforcement, Psychology , Animals , Behavior, Animal , Male , Models, Biological , Rats , Rats, Long-Evans , Time Factors
16.
Am Psychol ; 59(9): 891-906, 2004 Dec.
Article in English | MEDLINE | ID: mdl-15584823

ABSTRACT

Although reinforcement often leads to repetitive, even stereotyped responding, that is not a necessary outcome. When it depends on variations, reinforcement results in responding that is diverse, novel, indeed unpredictable, with distributions sometimes approaching those of a random process. This article reviews evidence for the powerful and precise control by reinforcement over behavioral variability, evidence obtained from human and animal-model studies, and implications of such control. For example, reinforcement of variability facilitates learning of complex new responses, aids problem solving, and may contribute to creativity. Depression and autism are characterized by abnormally repetitive behaviors, but individuals afflicted with such psychopathologies can learn to vary their behaviors when reinforced for so doing. And reinforced variability may help to solve a basic puzzle concerning the nature of voluntary action.


Subject(s)
Choice Behavior , Reinforcement, Psychology , Animals , Behavior, Animal , Conditioning, Operant , Creativity , Discrimination, Psychological , Humans , Memory , Problem Solving
17.
Behav Modif ; 27(2): 251-64, 2003 Apr.
Article in English | MEDLINE | ID: mdl-12705108

ABSTRACT

This study asked whether response sequences generated by moderately depressed students are more repetitive than those generated by nondepressed students and whether sequence variability can be increased in those identified as depressed. Seventy-five undergraduate students completed the Center for Epidemiological Studies Depression Scale (CES-D) and were divided into moderately depressed and nondepressed groups. Some of the students had received class instruction concerning behavioral variability; others had not. All students participated in a two-phase, computer-game procedure in which response-sequence variability was measured. When reinforcement was provided independently of sequence variability, the depressed participants responded more repetitively than did the nondepressed. When high sequence variability was required for reinforcement, variability increased significantly in all participants, with the depressed achieving the same high levels as the nondepressed. The students who had been instructed about variability responded more variably throughout than the noninstructed. Therefore, both direct reinforcement and instruction increased the behavioral variability of depressed individuals, a goal of some therapies for depression.


Subject(s)
Behavior Therapy/methods , Depression/therapy , Adult , Conditioning, Operant , Depression/diagnosis , Female , Humans , Male , Reinforcement, Psychology , Severity of Illness Index , Surveys and Questionnaires
18.
Psychon Bull Rev ; 9(2): 250-8, 2002 Jun.
Article in English | MEDLINE | ID: mdl-12120786

ABSTRACT

We compared two sources of behavior variability: decreased levels of reinforcement and reinforcement contingent on variability itself. In Experiment 1, four groups of rats were reinforced for different levels of response-sequence variability: one group was reinforced for low variability, two groups were reinforced for intermediate levels, and one group was reinforced for very high variability. All of the groups experienced three different reinforcement frequencies for meeting their respective variability contingencies. Results showed that reinforcement contingencies controlled response variability more than did reinforcement frequencies. Experiment 2 showed that only those animals concurrently reinforced for high variability acquired a difficult-to-learn sequence; animals reinforced for low variability learned little or not at all. Variability was therefore controlled mainly by reinforcement contingencies, and learning increased as a function of levels of baseline variability. Knowledge of these relationships may be helpful to those who attempt to condition operant responses.


Subject(s)
Attention , Conditioning, Operant , Probability Learning , Reinforcement Schedule , Animals , Association Learning , Cues , Discrimination Learning , Male , Psychomotor Performance , Rats , Rats, Long-Evans
19.
Behav Processes ; 57(2-3): 199-209, 2002 Apr 28.
Article in English | MEDLINE | ID: mdl-11947998

ABSTRACT

Reinforcement was presented contingent upon human subjects simultaneously varying three dimensions of an operant response. The response was drawing rectangles on a computer screen. The dimensions were the area of the rectangle, its location on the screen, and its shape. In Experiment 1, an experimental group was reinforced for satisfying the three-part variability contingency. A control group was equally reinforced for drawing rectangles but independently of levels of variability. Results showed that the experimental group varied significantly more along each of the dimensions than did the control group. In Experiment 2, another group of subjects was reinforced for repeating instances along one of the dimensions (e.g., repeatedly drawing a rectangle in approximately the same location) while simultaneously varying along the other two dimensions. The subjects learned to satisfy these contingencies as well. These results show that reinforcement simultaneously and independently controls the variability of three orthogonal dimensions of a response.

20.
Psychon Bull Rev ; 9(4): 672-705, 2002 Dec.
Article in English | MEDLINE | ID: mdl-12613672

ABSTRACT

Although responses are sometimes easy to predict, at other times responding seems highly variable, unpredictable, or even random. The inability to predict is generally attributed to ignorance of controlling variables, but this article is a review of research showing that the highest levels of behavioral variability may result from identifiable reinforcers contingent on such variability. That is, variability is an operant. Discriminative stimuli and reinforcers control it, resulting in low or high variability, depending on the contingencies. Schedule-of-reinforcement effects are orderly, and choosing to vary or repeat is lawfully governed by relative reinforcement frequencies. The operant nature of variability has important implications. For example, learning, exploring, creating, and problem solving may partly depend on it. Abnormal levels of variability, including those found in psychopathologies such as autism, depression, and attention deficit hyperactivity disorder, may be modified through reinforcement. Operant variability may also help to explain some of the unique attributes of voluntary action.


Subject(s)
Association Learning , Conditioning, Operant , Motivation , Reinforcement Schedule , Analysis of Variance , Animals , Awareness , Behavior Therapy , Humans , Species Specificity