Results 1 - 20 of 64
1.
Behav Anal; 36(2): 345-359, 2013.
Article in English | MEDLINE | ID: mdl-28018044

ABSTRACT

In science we study processes in the material world. The way these processes operate can be discovered by conducting experiments that activate them, and findings from such experiments can lead to functional complexity theories of how the material processes work. The results of a good functional theory will agree with experimental measurements, but the theory may not incorporate in its algorithmic workings a representation of the material processes themselves. Nevertheless, the algorithmic operation of a good functional theory may be said to make contact with material reality by incorporating the emergent computations the material processes carry out. These points are illustrated in the experimental analysis of behavior by considering an evolutionary theory of behavior dynamics, the algorithmic operation of which does not correspond to material features of the physical world, but the functional output of which agrees quantitatively and qualitatively with findings from a large body of research with live organisms.

2.
Perspect Behav Sci; 46(1): 119-136, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37006601

ABSTRACT

The evolutionary theory of behavior dynamics (ETBD) is a complexity theory, which means that it is stated in the form of simple low-level rules, the repeated operation of which generates high-level outcomes that can be compared to data. The low-level rules of the theory implement Darwinian processes of selection, reproduction, and mutation. This tutorial is an introduction to the ETBD for a general audience, and illustrates how the theory is used to animate artificial organisms that can behave continuously in any experimental environment. Extensive research has shown that the theory generates behavior in artificial organisms that is indistinguishable in qualitative and quantitative detail from the behavior of live organisms in a wide variety of experimental environments. An overview and summary of this supporting evidence is provided. The theory may be understood to be computationally equivalent to the biological nervous system, which means that the algorithmic operation of the theory and the material operation of the nervous system give the same answers. The applied relevance of the theory is also discussed, including the creation of artificial organisms with various forms of psychopathology that can be used to study clinical problems and their treatment. Finally, possible future directions are discussed, such as the extension of the theory to behavior in a two-dimensional grid world.
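
As a rough illustration of how such simple low-level rules can be stated computationally, the following Python sketch runs one tick of a selectionist algorithm in the spirit of the ETBD. The population size, mutation rate, similarity-based fitness weighting, and the handling of unreinforced ticks are simplifying assumptions for illustration, not the parameter values or selection function of the published theory.

```python
import random

POP_SIZE = 100        # number of potential behaviors in the population (assumption)
BITS = 10             # each behavior is represented as a 10-bit integer, 0-1023
MUTATION_RATE = 0.1   # probability a behavior mutates each generation (assumption)

def random_behavior():
    return random.randrange(2 ** BITS)

def emit(population):
    """The artificial organism emits one behavior drawn at random from the population."""
    return random.choice(population)

def reproduce(population, emitted):
    """After reinforcement, behaviors closer to the reinforced (emitted) behavior
    are more likely to become parents; offspring are built by recombining the
    parents' bits through a random crossover mask."""
    weights = [1.0 / (1 + abs(b - emitted)) for b in population]   # crude fitness (assumption)
    children = []
    for _ in range(POP_SIZE):
        p1, p2 = random.choices(population, weights=weights, k=2)
        mask = random.randrange(2 ** BITS)
        children.append((p1 & mask) | (p2 & ~mask & (2 ** BITS - 1)))
    return children

def mutate(population):
    """Each behavior may have one randomly chosen bit flipped."""
    out = []
    for b in population:
        if random.random() < MUTATION_RATE:
            b ^= 1 << random.randrange(BITS)
        out.append(b)
    return out

# One tick: emit a behavior; if the environment reinforces it, the population
# evolves toward that behavior; mutation then adds variation. (In the full
# theory a new generation is produced even on unreinforced ticks, with parents
# chosen without regard to fitness; that step is omitted here for brevity.)
population = [random_behavior() for _ in range(POP_SIZE)]
emitted = emit(population)
reinforced = True                       # supplied by the programmed experimental environment
if reinforced:
    population = reproduce(population, emitted)
population = mutate(population)
```

Repeating this cycle tick after tick is what lets an artificial organism behave continuously in whatever experimental environment is programmed around it.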

3.
J Exp Anal Behav; 119(1): 117-128, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36416717

ABSTRACT

A test of the evolutionary theory of behavior dynamics was conducted by replicating Bradshaw et al.'s (1977, 1978, 1979) experiments, in which human participants worked on single-alternative variable-interval (VI) schedules of reinforcement under three punishment conditions: no punishment, superimposed VI punishment, and superimposed variable-ratio (VR) punishment. Artificial organisms (AOs) animated by the theory worked in the same environments. Four principal findings were reported for the human participants: (1) their behavior was well described by a hyperbola in all conditions, (2) the asymptote of the hyperbola under VI punishment equaled the asymptote in the absence of punishment, whereas the asymptote under VR punishment was lower, (3) the parameter in the denominator of the hyperbola was larger under both VI and VR punishment than in the absence of punishment, and (4) response suppression under punishment was greater at lower than at higher reinforcement frequencies. These four outcomes were also observed in the behavior of the AOs working in the same environments, thereby confirming the theory's first-order predictions about the effects of punishment on single-alternative responding.
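
The hyperbola in findings (1) through (3) is presumably Herrnstein's single-alternative equation; for reference, in the notation usually used in this literature (R is response rate, r is reinforcement rate, k is the asymptote, and r_e is the denominator parameter), it reads:

```latex
% Herrnstein's hyperbola for single-alternative responding
R = \frac{k\,r}{r + r_{e}}
```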


Subjects
Punishment, Reinforcement (Psychology), Humans, Reinforcement Schedule, Biological Evolution
5.
Perspect Behav Sci; 44(4): 561-580, 2021 Dec.
Article in English | MEDLINE | ID: mdl-35098025

ABSTRACT

This article provides an overview of highlights from 60 years of basic research on choice that are relevant to the assessment and treatment of clinical problems. The quantitative relations developed in this research provide useful information about a variety of clinical problems including aggressive, antisocial, and delinquent behavior, attention-deficit/hyperactivity disorder (ADHD), bipolar disorder, chronic pain syndrome, intellectual disabilities, pedophilia, and self-injurious behavior. A recent development in this field is an evolutionary theory of behavior dynamics that is used to animate artificial organisms (AOs). The behavior of AOs animated by the theory has been shown to conform to the quantitative relations that have been developed in the choice literature over the years, which means that the theory generates these relations as emergent outcomes, and therefore provides a theoretical basis for them. The theory has also been used to create AOs that exhibit specific psychopathological behavior, the assessment and treatment of which has been studied virtually. This modeling of psychopathological behavior has contributed to our understanding of the nature and treatment of the problems in humans.

6.
J Exp Anal Behav; 116(2): 225-242, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34383960

ABSTRACT

Artificial organisms (AOs) animated by an evolutionary theory of behavior dynamics (ETBD) worked on concurrent interval schedules with a standard reinforcer magnitude on one alternative and a range of reinforcer magnitudes on the other. The reinforcer magnitudes on the second alternative were hedonically scaled using the generalized matching law. The AOs then worked on single interval schedules that arranged various combinations of the scaled reinforcer magnitudes and a range of nominal schedule values. This produced bivariate response rate data to which five candidate equations were fitted. One equation was found to provide the best description of the bivariate data in terms of percentage of variance accounted for, information criterion value, and residual profile. This equation consisted of two factors, one entailing the scaled magnitude and one entailing the obtained reinforcement rate, both expressed as exponentiated hyperbolas. The theory's prediction of the bivariate equation, along with additional predictions of the theory, was tested on data from an experiment in which rats pressed levers for various concentrations of sucrose pellets. The bivariate equation predicted by the theory was confirmed, as were all the additional predictions of the theory that could be tested on this data set.
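
The abstract does not state the winning equation itself. Purely as an illustration of the general form described, a product of two factors, each an exponentiated hyperbola, one in the scaled reinforcer magnitude and one in the obtained reinforcement rate, it might be sketched as follows; the actual parameterization in the article may differ:

```latex
% Illustrative sketch only: R is response rate, m the hedonically scaled
% reinforcer magnitude, r the obtained reinforcement rate; k, a, b, m_e,
% and r_e are fitted parameters.
R = k \left( \frac{m^{a}}{m^{a} + m_{e}^{a}} \right)
      \left( \frac{r^{b}}{r^{b} + r_{e}^{b}} \right)
```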


Subjects
Choice Behavior, Reinforcement (Psychology), Animals, Biological Evolution, Rats, Reinforcement Schedule, Sucrose
7.
Perspect Behav Sci; 44(4): 581-603, 2021 Dec.
Article in English | MEDLINE | ID: mdl-35098026

ABSTRACT

The subtypes of automatically reinforced self-injurious behavior (ASIB) delineated by Hagopian and colleagues (Hagopian et al., 2015; 2017) demonstrated how functional-analysis (FA) outcomes may predict the efficacy of various treatments. However, the mechanisms underlying the different patterns of responding obtained during FAs and corresponding differences in treatment efficacy have remained unclear. A central cause of this lack of clarity is that some proposed mechanisms, such as differences in the reinforcing efficacy of the products of ASIB, are difficult to manipulate. One solution may be to model subtypes of ASIB using mathematical models of behavior in which all aspects of the behavior can be controlled. In the current study, we used the evolutionary theory of behavior dynamics (ETBD; McDowell, 2019) to model the subtypes of ASIB, evaluate predictions of treatment efficacy, and replicate recent research aiming to test explanations for subtype differences. Implications for future research related to ASIB are discussed.

8.
J Exp Anal Behav; 115(3): 747-768, 2021 May.
Article in English | MEDLINE | ID: mdl-33711206

ABSTRACT

We performed three experiments to improve the quality and retention of data obtained from a Procedure for Rapidly Establishing Steady-State Behavior (PRESS-B; Klapes et al., 2020). In Experiment 1, 120 participants worked on nine concurrent random-interval random-interval (conc RI RI) schedules and were assigned to four conditions of varying changeover delay (COD) length. The 0.5-s COD condition group exhibited the fewest instances of exclusive reinforcer acquisition. Importantly, this group did not differ in generalized matching law (GML) fit quality from the other groups. In Experiment 2, 60 participants worked on nine conc RI RI schedules with a wider range of scheduled reinforcement rate ratios than was used in Experiment 1. Participants showed dramatic reductions in exclusive reinforcer acquisition. Experiment 3 entailed a replication of Experiment 2 wherein blackout periods were implemented between the schedule presentations and each schedule remained in operation until at least one reinforcer was acquired on each alternative. GML fit quality was slightly more consistent in Experiment 3 than in the previous experiments. Thus, these results suggest that future PRESS-B studies should implement a shorter COD, a wider and richer scheduled reinforcement rate ratio range, and brief blackouts between schedule presentations for optimal data quality and retention.
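
For reference, the GML whose fit quality is compared across these conditions is conventionally written in log-ratio form, where B1 and B2 are responses and R1 and R2 reinforcers obtained on the two alternatives, a is sensitivity, and b is bias:

```latex
% Generalized matching law in log-ratio form
\log\!\left(\frac{B_{1}}{B_{2}}\right) = a\,\log\!\left(\frac{R_{1}}{R_{2}}\right) + \log b
```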


Subjects
Operant Conditioning, Reinforcement (Psychology), Choice Behavior, Humans, Reinforcement Schedule
9.
J Exp Anal Behav; 114(3): 430-446, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33025598

ABSTRACT

The axiomatic principle that all behavior is choice was incorporated into a revised implementation of an evolutionary theory's account of behavior on single schedules. According to this implementation, target responding occurs in the context of background responding and reinforcement. In Phase 1 of the research, the target responding of artificial organisms (AOs) animated by the revised theory was found to be well described by an exponentiated hyperbola, the parameters of which varied as a function of the background reinforcement rate. In Phase 2, the effect of reinforcer magnitude on the target behavior of the AOs was studied. As in Phase 1, the AOs' behavior was well described by an exponentiated hyperbola, the parameters of which varied with both the target reinforcer magnitude and the background reinforcement rate. Evidence from experiments with live organisms was found to be consistent with the Phase-1 predictions of the revised theory. The Phase-2 predictions have not been tested. The revised implementation of the theory can be used to study the effects of superimposing punishment on single-schedule responding, and it may lead to the discovery of a function that relates response rate to both the rate and magnitude of reinforcement on single schedules.


Subjects
Biological Evolution, Choice Behavior, Animals, Behavior, Humans, Biological Models, Psychological Theory, Reinforcement Schedule, Reinforcement (Psychology)
10.
J Exp Anal Behav; 114(1): 142-159, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32543721

ABSTRACT

Previous continuous choice laboratory procedures for human participants are either prohibitively time-intensive or result in inadequate fits of the generalized matching law (GML). We developed a rapid-acquisition laboratory procedure (Procedure for Rapidly Establishing Steady-State Behavior, or PRESS-B) for studying human continuous choice that reduces participant burden and produces data that are well described by the GML. To test the procedure, 27 human participants were exposed to nine independent concurrent random-interval random-interval reinforcement schedules over the course of a single 37-min session. Fits of the GML to the participants' data accounted for large proportions of variance (median R²: 0.94), with parameter estimates that were similar to those previously found in human continuous choice studies [median a: 0.67; median log(b): -0.02]. In summary, PRESS-B generates human continuous choice behavior in the laboratory that conforms to the GML within a limited experimental duration.
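
To make the reported statistics concrete, here is a minimal sketch of how a, log(b), and R² are typically estimated from one participant's nine-schedule data by ordinary least squares on the log ratios; the counts below are hypothetical, not data from the study:

```python
import numpy as np

# Hypothetical per-schedule tallies for one participant on nine conc RI RI
# schedules: responses (B) and reinforcers (R) on the two alternatives.
B1 = np.array([120, 95, 80, 60, 55, 40, 30, 25, 15], dtype=float)
B2 = np.array([ 20, 30, 40, 55, 60, 70, 85, 95, 110], dtype=float)
R1 = np.array([ 30, 24, 20, 15, 12, 10,  8,  6,  4], dtype=float)
R2 = np.array([  4,  6,  8, 10, 12, 15, 20, 24, 30], dtype=float)

# Fit the GML in log-ratio form: log(B1/B2) = a*log(R1/R2) + log(b).
# Schedules with a zero reinforcer count on either alternative (exclusive
# reinforcer acquisition) would have to be dropped first, which is the
# data-retention problem these procedures try to minimize.
x = np.log10(R1 / R2)
y = np.log10(B1 / B2)

a, log_b = np.polyfit(x, y, 1)          # slope = sensitivity a, intercept = log(b)
y_hat = a * x + log_b
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

print(f"a = {a:.2f}, log(b) = {log_b:.2f}, R^2 = {r2:.2f}")
```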


Subjects
Choice Behavior, Discrimination Learning, Operant Conditioning, Humans, Photic Stimulation, Reinforcement Schedule, Reinforcement (Psychology)
11.
J Exp Anal Behav; 111(1): 130-145, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30656712

ABSTRACT

The evolutionary theory of behavior dynamics is a complexity theory that instantiates the Darwinian principles of selection, reproduction, and mutation in a genetic algorithm. The algorithm is used to animate artificial organisms that behave continuously in time and can be placed in any experimental environment. The present paper is an update on the status of the theory. It includes a summary of the evidence supporting the theory, a list of the theory's untested predictions, and a discussion of how the algorithmic operations of the theory may correspond to material reality. Based on the evidence reviewed here, the evolutionary theory appears to be a strong candidate for a comprehensive theory of adaptive behavior.


Subjects
Behavior, Biological Evolution, Psychological Theory, Algorithms, Animals, Animal Behavior, Humans
12.
J Exp Anal Behav; 112(2): 128-143, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31385310

ABSTRACT

An implementation of punishment in the evolutionary theory of behavior dynamics is proposed, and is applied to responding on concurrent schedules of reinforcement with superimposed punishment. In this implementation, punishment causes behaviors to mutate, and to do so with a higher probability in a lean reinforcement context than in a rich one. Computational experiments were conducted in an attempt to replicate three findings from experiments with live organisms. These are (1) when punishment is superimposed on one component of a concurrent schedule, response rate decreases in the punished component and increases in the unpunished component, (2) when punishment is superimposed on both components at equal scheduled rates, preference increases over its no-punishment baseline, and (3) when punishment is superimposed on both components at rates that are proportional to the scheduled rates of reinforcement, preference remains unchanged from the baseline preference. Artificial organisms animated by the theory, and working on concurrent schedules with superimposed punishment, reproduced all of these findings. Given this outcome, it may be possible to discover a steady-state mathematical description of punished choice in live organisms by studying the punished choice behavior of artificial organisms animated by the evolutionary theory.
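
A minimal sketch of the mutation-based punishment rule described above, assuming a simple bit-string behavior representation and an inverse dependence of mutation probability on the prevailing reinforcement rate; the specific function, constants, and similarity criterion are illustrative, not the published implementation:

```python
import random

BITS = 10  # behaviors represented as 10-bit integers, as in the ETBD

def punish(population, emitted, reinforcement_rate, k=2.0):
    """Illustrative punishment rule: a delivered punisher makes behaviors
    resembling the punished (emitted) behavior mutate, and it does so with
    higher probability when the reinforcement context is lean (low
    reinforcement_rate) than when it is rich. The 1/(1 + rate/k) form and
    the similarity criterion below are assumptions for illustration only."""
    p_mutate = 1.0 / (1.0 + reinforcement_rate / k)
    out = []
    for b in population:
        similar = abs(b - emitted) < 64          # crude similarity criterion (assumption)
        if similar and random.random() < p_mutate:
            b ^= 1 << random.randrange(BITS)     # flip one random bit
        out.append(b)
    return out

# A lean context (low reinforcement rate) disrupts more behaviors than a rich
# one, which is the asymmetry the implementation described above requires.
population = [random.randrange(2 ** BITS) for _ in range(100)]
lean = punish(population, emitted=512, reinforcement_rate=0.5)
rich = punish(population, emitted=512, reinforcement_rate=8.0)
```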


Subjects
Biological Evolution, Psychological Theory, Punishment/psychology, Animals, Choice Behavior, Columbidae, Operant Conditioning, Psychological Models, Rats, Reinforcement Schedule, Reinforcement (Psychology)
13.
J Exp Anal Behav; 111(2): 166-182, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30706474

ABSTRACT

Regularization, or shrinkage estimation, refers to a class of statistical methods that constrain the variability of parameter estimates when fitting models to data. These constraints move parameters toward a group mean or toward a fixed point (e.g., 0). Regularization has gained popularity across many fields for its ability to increase predictive power over classical techniques. However, articles published in JEAB and other behavioral journals have yet to adopt these methods. This paper reviews some common regularization schemes and speculates as to why articles published in JEAB do not use them. In response, we propose our own shrinkage estimator that avoids some of the possible objections associated with the reviewed regularization methods. Our estimator works by mixing weighted individual and group (WIG) data rather than by constraining parameters. We test this method on a problem of model selection. Specifically, we conduct a simulation study on the selection of matching-law-based punishment models, comparing WIG with ordinary least squares (OLS) regression, and find that, on average, WIG outperforms OLS in this context.
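
The abstract does not spell out the WIG estimator's weighting rule. As a loose illustration of the general idea of mixing individual and group data before fitting, rather than constraining parameters, one might do something like the following; the 50/50 mixing weight and the linear model are purely assumptions:

```python
import numpy as np

def wig_fit(x_subj, y_subj, x_group, y_group, w=0.5):
    """Illustrative 'weighted individual and group' (WIG) style fit: pool one
    subject's data with the group's data, weight the subject's points by w and
    the group's by (1 - w), and fit a line by weighted least squares. The
    weighting scheme of the actual WIG estimator may differ; this only conveys
    the idea of mixing data rather than shrinking parameter estimates."""
    x = np.concatenate([x_subj, x_group])
    y = np.concatenate([y_subj, y_group])
    weights = np.concatenate([np.full(len(x_subj), w),
                              np.full(len(x_group), 1.0 - w)])
    slope, intercept = np.polyfit(x, y, 1, w=weights)
    return slope, intercept
```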


Subjects
Applied Behavior Analysis/statistics & numerical data, Statistical Models, Statistics as Topic, Computer Simulation, Least-Squares Analysis, Punishment
14.
Behav Processes; 78(2): 291-296, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18243578

ABSTRACT

[McDowell, J. J., 2004. A computational model of selection by consequences. J. Exp. Anal. Behav. 81, 297-317] instantiated the principle of selection by consequences in a virtual organism with an evolving repertoire of possible behaviors undergoing selection, reproduction, and mutation over many generations. The process is based on the computational approach, which is non-deterministic and rules-based. The model proposes a causal account for operant behavior. McDowell found that the virtual organism consistently showed a hyperbolic relationship between response and reinforcement rates, in accordance with the quantitative law of effect. To continue validation of the computational model, the present study examined its behavior at the molecular level by comparing the virtual organism's interresponse-time (IRT) distributions, in the form of log survivor plots, to findings from live organisms. Log survivor plots did not show the "broken-stick" feature indicative of distinct bouts and pauses in responding, although the bend in the slope of the plots became more defined at low reinforcement rates. The shape of the virtual organism's log survivor plots was more consistent with the data on reinforced responding in pigeons. These results suggest that the log survivor plot patterns of the virtual organism were generally consistent with the findings from live organisms, providing further support for the computational model of selection by consequences as a viable account of operant behavior.
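
As a reminder of what a log survivor plot is, the following sketch computes one from a list of interresponse times; the simulated IRT mixture and plotting choices are assumptions for illustration only:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical IRTs (seconds): a mixture of short within-bout IRTs and longer
# between-bout pauses, the pattern that produces a "broken-stick" log survivor plot.
rng = np.random.default_rng(0)
irts = np.concatenate([rng.exponential(0.5, 800),    # within-bout responding
                       rng.exponential(10.0, 200)])  # pauses between bouts

# Survivor function: proportion of IRTs longer than t, plotted on a log y-axis.
t = np.sort(irts)
survivor = 1.0 - np.arange(1, len(t) + 1) / len(t)

plt.semilogy(t[:-1], survivor[:-1])   # drop the last point, where the survivor is 0
plt.xlabel("Interresponse time (s)")
plt.ylabel("Proportion of IRTs > t (log scale)")
plt.show()
```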


Subjects
Choice Behavior, Computer Simulation, Psychological Models, Survival Analysis, Animals, Animal Behavior, Biological Evolution, Computational Biology, Linear Models, Biological Models, Genetic Selection
15.
J Exp Anal Behav; 90(3): 387-403, 2008 Nov.
Article in English | MEDLINE | ID: mdl-19070343

ABSTRACT

Virtual organisms animated by a computational theory of selection by consequences responded on symmetrical and asymmetrical concurrent schedules of reinforcement. The theory instantiated Darwinian principles of selection, reproduction, and mutation such that a population of potential behaviors evolved under the selection pressure exerted by reinforcement from the environment. The virtual organisms' steady-state behavior was well described by the power function matching equation, and the parameters of the equation behaved in ways that were consistent with findings from experiments with live organisms. Together with previous research on single-alternative schedules (McDowell, 2004; McDowell & Caron, 2007) these results indicate that the equations of matching theory are emergent properties of the evolutionary dynamics of selection by consequences.
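
The power function matching equation referred to here is the ratio form of the generalized matching relation (the same law shown in log-ratio form under item 8 above), with bias b and sensitivity a:

```latex
% Power-function matching: B_1, B_2 are response rates and r_1, r_2
% reinforcement rates on the two alternatives; a is sensitivity, b is bias.
\frac{B_{1}}{B_{2}} = b \left( \frac{r_{1}}{r_{2}} \right)^{a}
```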


Subjects
Animal Behavior, Reinforcement Schedule, Animals, User-Computer Interface
16.
J Exp Anal Behav; 109(2): 336-348, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29509286

ABSTRACT

A direct-suppression, or subtractive, model of punishment has been supported as the qualitatively and quantitatively superior matching-law-based punishment model (Critchfield, Paletz, MacAleese, & Newland, 2003; de Villiers, 1980; Farley, 1980). However, this conclusion was reached without testing the model against its predecessors, including the original (Herrnstein, 1961) and generalized (Baum, 1974) matching laws, which have different numbers of parameters. To rectify this issue, we reanalyzed a set of data collected by Critchfield et al. (2003) using information-theoretic model selection criteria. We found that the most advanced version of the direct-suppression model (Critchfield et al., 2003) does not convincingly outperform the generalized matching law, an account that does not include punishment rates in its prediction of behavior allocation. We hypothesize that this failure to outperform the generalized matching law is due to significant theoretical shortcomings in model development. To address these shortcomings, we present a list of requirements that all punishment models should satisfy. The requirements include formal statements of flexibility, efficiency, and adherence to theory. We compare all past punishment models to the items on this list through algebraic arguments and model selection criteria. None of the models presented in the literature thus far meets all of the requirements.
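
A minimal sketch of the kind of information-theoretic comparison described, computing the small-sample Akaike criterion (AICc) from a least-squares fit's residual sum of squares; the model parameter counts and numbers below are placeholders, not values from the reanalysis:

```python
import numpy as np

def aicc(rss, n, k):
    """Small-sample Akaike information criterion for a least-squares fit:
    n data points, k estimated parameters (counting the error variance),
    rss = residual sum of squares. Lower values indicate the better model,
    trading fit quality against the number of parameters."""
    aic = n * np.log(rss / n) + 2 * k
    return aic + (2 * k * (k + 1)) / (n - k - 1)

# Comparing two candidate models fit to the same data set (hypothetical
# numbers): a generalized-matching account with 2 regression parameters
# versus a punishment model with 3.
n = 20
print(aicc(rss=1.8, n=n, k=2 + 1))   # GML: slope, intercept (+ error variance)
print(aicc(rss=1.5, n=n, k=3 + 1))   # punishment model: 3 parameters (+ error variance)
```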


Subjects
Psychological Models, Punishment/psychology, Animals, Classical Conditioning, Psychological Inhibition, Statistical Models, Psychological Theory, Rats, Reinforcement (Psychology)
17.
J Exp Anal Behav; 110(3): 323-335, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30195256

ABSTRACT

An evolutionary theory of adaptive behavior dynamics was tested by studying the behavior of artificial organisms (AOs) animated by the theory, working on concurrent ratio schedules with unequal and equal ratios in the components. The evolutionary theory implements Darwinian rules of selection, reproduction, and mutation in the form of a genetic algorithm that causes a population of potential behaviors to evolve under the selection pressure of consequences from the environment. On concurrent ratio schedules with unequal ratios in the components, the AOs tended to respond exclusively on the component with the smaller ratio, provided that ratio was not too large and the difference between the ratios was not too small. On concurrent ratio schedules with equal ratios in the components, the AOs tended to respond exclusively on one component, provided the equal ratios were not too large. In addition, the AOs' preference on the latter schedules adjusted rapidly when the equal ratios were changed between conditions, but their steady-state preference was a continuous function of the value of the equal ratios. Most of these outcomes are consistent with the results of experiments with live organisms, and consequently support the evolutionary theory.


Subjects
Behavior, Biological Evolution, Animals, Computer Simulation, Environment, Humans, Psychological Theory, Reproduction
18.
Behav Processes; 75(2): 97-106, 2007 Jun.
Article in English | MEDLINE | ID: mdl-17399915

ABSTRACT

A computational theory of selection by consequences [McDowell, J. J., 2004. A computational model of selection by consequences. J. Exp. Anal. Behav. 81, 297-317] was tested by studying the responding of virtual organisms animated by the theory on random-interval schedules of reinforcement. The theory generated responding by applying principles of selection, reproduction, and mutation to a population of potential behaviors that evolved in response to the selection pressure exerted by reinforcement. The organisms' equilibrium response rates were well described by the modern version of the Herrnstein hyperbola, which includes an exponent on reinforcement rate. Under strong selection pressure this exponent decreased with increasing mutation rate, from a value near 1.0 at 1% mutation to an asymptotic value of 0.83 at mutation rates of 10% and greater. This asymptotic value is consistent with values obtained by fitting the equation to data from live organisms responding on single schedules, and with the value of about 0.80 that is expected on the basis of extensive research with live organisms responding on concurrent schedules. These results show that the computational theory is consistent with the modern theory of matching [McDowell, J. J., 2005. On the classic and modern theories of matching. J. Exp. Anal. Behav. 84, 111-127], and that it is a viable candidate for a mathematical dynamics of behavior.
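
The modern version of the Herrnstein hyperbola referred to here adds an exponent, a, to the reinforcement rates; in the usual notation (R is response rate, r reinforcement rate, k the asymptote, r_e the background parameter), it is written:

```latex
% Modern (exponentiated) hyperbola for single-alternative responding; the
% exponent a is the quantity whose asymptotic value of about 0.83 is
% reported above.
R = \frac{k\,r^{a}}{r^{a} + r_{e}^{a}}
```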


Subjects
Behavior, Choice Behavior, Computer Simulation, Decision Making, Psychological Models, Reinforcement Schedule, Algorithms, Animals, Animal Behavior, Computational Biology, Discrimination Learning, Humans, Psychological Theory