Results 1 - 20 of 26
1.
J Exp Anal Behav ; 76(1): 21-42, 2001 Jul.
Article in English | MEDLINE | ID: mdl-11516114

ABSTRACT

Group choice refers to the distribution of group members between two choice alternatives over time. The ideal free distribution (IFD), an optimal foraging model from behavioral ecology, predicts that the ratio of foragers at two resource sites should equal the ratio of obtained resources, a prediction that is formally analogous to the matching law of individual choice, except that group choice is a social phenomenon. Two experiments investigated the usefulness of IFD analyses of human group choice and individual-based explanations that might account for the group-level events. Instead of nonhuman animals foraging at two sites for resources, a group of humans chose blue and red cards to receive points that could earn cash prizes. The groups chose blue and red cards in ratios in positive relation to the ratios of points associated with the cards. When group choice ratios and point ratios were plotted on logarithmic coordinates and fitted with regression lines, the slopes (i.e., sensitivity measures) approached 1.0 but tended to fall short of it (i.e., undermatching), with little bias and little unaccounted for variance. These experiments demonstrate that an IFD analysis of group choice is possible and useful, and suggest that group choice may be explained by the individual members' tendency to optimize reinforcement.
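The log-ratio analysis this abstract describes can be sketched numerically. A minimal illustration with made-up values (none of these numbers come from the study): fit the slope (sensitivity) and intercept (bias) of log group-choice ratios against log point ratios by least squares.

```python
import math

# Made-up group-choice data; none of these values come from the experiments.
point_ratios = [0.25, 0.5, 1.0, 2.0, 4.0]   # points for blue / points for red
group_ratios = [0.30, 0.55, 1.0, 1.8, 3.3]  # members at blue / members at red

xs = [math.log10(r) for r in point_ratios]
ys = [math.log10(r) for r in group_ratios]

# Least-squares fit of log(G1/G2) = a*log(P1/P2) + log b.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx  # log b, the bias toward one alternative

print(f"sensitivity a = {slope:.2f}, bias log b = {intercept:.3f}")
```

A slope just under 1.0 with an intercept near zero corresponds to the undermatching-with-little-bias pattern the abstract reports.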


Subjects
Choice Behavior, Cooperative Behavior, Models, Psychological, Social Behavior, Adolescent, Adult, Female, Humans, Male, Reinforcement, Psychology
2.
J Exp Anal Behav ; 75(3): 338-41; discussion 367-78, 2001 May.
Article in English | MEDLINE | ID: mdl-11453623

ABSTRACT

The molar view of behavior arose in response to the demonstrated inadequacy of explanations based on contiguity. Although Dinsmoor's (2001) modifications to two-factor theory render it irrefutable, a more basic criticism arises when we see that the molar and molecular views differ paradigmatically. The molar view has proven more productive.


Subjects
Appetitive Behavior, Avoidance Learning, Motivation, Reinforcement Schedule, Animals, Electroshock, Humans, Models, Psychological
3.
J Exp Anal Behav ; 74(1): 1-24, 2000 Jul.
Article in English | MEDLINE | ID: mdl-10966094

ABSTRACT

Six pigeons were trained in sessions composed of seven components, each arranged with a different concurrent-schedule reinforcer ratio. These components occurred in an irregular order with equal frequency, separated by 10-s blackouts. No signals differentiated the different reinforcer ratios. Conditions lasted 50 sessions, and data were collected from the last 35 sessions. In Part 1, the arranged overall reinforcer rate was 2.22 reinforcers per minute. Over conditions, number of reinforcers per component was varied from 4 to 12. In Part 2, the overall reinforcer rate was six per minute, with both 4 and 12 reinforcers per component. Within components, log response-allocation ratios adjusted rapidly as more reinforcers were delivered in the component, and the slope of the choice relation (sensitivity) leveled off at moderately high levels after only about eight reinforcers. When the carryover from previous components was taken into account, the number of reinforcers in the components appeared to have no systematic effect on the speed at which behavior changed after a component started. Consequently, sensitivity values at each reinforcer delivery were superimposable. However, adjustment to changing reinforcer ratios was faster, and reached greater sensitivity values, when overall reinforcer rate was higher. Within a component, each successive reinforcer from the same alternative ("confirming") had a smaller effect than the one before, but single reinforcers from the other alternative ("disconfirming") always had a large effect. Choice in the prior component carried over into the next component, and its effects could be discerned even after five or six reinforcers. A model of choice based on the local effects of reinforcement and nonreinforcement is suggested.


Subjects
Choice Behavior/physiology, Environment, Reinforcement, Psychology, Animals, Behavior, Animal/physiology, Columbidae, Discrimination Learning/physiology, Reinforcement Schedule
6.
Behav Anal ; 18(1): 1-21, 1995.
Article in English | MEDLINE | ID: mdl-22478201

ABSTRACT

Behavior analysis risks intellectual isolation unless it integrates its explanations with evolutionary theory. Rule-governed behavior is an example of a topic that requires an evolutionary perspective for a full understanding. A rule may be defined as a verbal discriminative stimulus produced by the behavior of a speaker under the stimulus control of a long-term contingency between the behavior and fitness. As a discriminative stimulus, the rule strengthens listener behavior that is reinforced in the short run by socially mediated contingencies, but which also enters into the long-term contingency that enhances the listener's fitness. The long-term contingency constitutes the global context for the speaker's giving the rule. When a rule is said to be "internalized," the listener's behavior has switched from short- to long-term control. The fitness-enhancing consequences of long-term contingencies are health, resources, relationships, or reproduction. This view ties rules both to evolutionary theory and to culture. Stating a rule is a cultural practice. The practice strengthens, with short-term reinforcement, behavior that usually enhances fitness in the long run. The practice evolves because of its effect on fitness. The standard definition of a rule as a verbal statement that points to a contingency fails to distinguish between a rule and a bargain ("If you'll do X, then I'll do Y"), which signifies only a single short-term contingency that provides mutual reinforcement for speaker and listener. In contrast, the giving and following of a rule ("Dress warmly; it's cold outside") can be understood only by reference also to a contingency providing long-term enhancement of the listener's fitness or the fitness of the listener's genes. Such a perspective may change the way both behavior analysts and evolutionary biologists think about rule-governed behavior.

7.
Behav Anal ; 17(2): 201-6, 1994.
Article in English | MEDLINE | ID: mdl-22478185
8.
J Exp Anal Behav ; 59(2): 245-64, 1993 Mar.
Article in English | MEDLINE | ID: mdl-16812686

ABSTRACT

Two differences between ratio and interval performance are well known: (a) Higher rates occur on ratio schedules, and (b) ratio schedules are unable to maintain responding at low rates of reinforcement (ratio "strain"). A third phenomenon, a downturn in response rate at the highest rates of reinforcement, is well documented for ratio schedules and is predicted for interval schedules. Pigeons were exposed to multiple variable-ratio variable-interval schedules in which the intervals generated in the variable-ratio component were programmed in the variable-interval component, thereby "yoking" or approximately matching reinforcement in the two components. The full range of ratio performances was studied, from strained to continuous reinforcement. In addition to the expected phenomena, a new phenomenon was observed: an upturn in variable-interval response rate in the midrange of rates of reinforcement that brought response rates on the two schedules to equality before the downturn at the highest rates of reinforcement. When the average response rate was corrected by eliminating pausing after reinforcement, the downturn in response rate vanished, leaving a strictly monotonic performance curve. This apparent functional independence of the postreinforcement pause and the qualitative shift in response implied by the upturn in variable-interval response rate suggest that theoretical accounts will require thinking of behavior as partitioned among at least three categories, and probably four: postreinforcement activity, other unprogrammed activity, ratio-typical operant behavior, and interval-typical operant behavior.

9.
J Exp Anal Behav ; 57(3): 365-75, 1992 May.
Article in English | MEDLINE | ID: mdl-16812658

ABSTRACT

Finding a theoretically sound feedback function for variable-interval schedules remains an important unsolved problem. It is important because interval schedules model a significant feature of the world: the dependence of reinforcement on factors beyond the organism's control. The problem remains unsolved because no feedback function yet proposed satisfies all the theoretical and empirical requirements. Previous suggestions that succeed in fitting data fail theoretically because they violate a newly recognized theoretical requirement: The slope of the function must approach or equal 1.0 at the origin. A function is presented that satisfies all requirements but lacks any theoretical justification. This function and two suggested by Prelec and Herrnstein (1978) and Nevin and Baum (1980) are evaluated against several sets of data. All three fitted the data well. The success of the two theoretically incorrect functions raises an empirical puzzle: Low rates of reinforcement are coupled with response rates that seem anomalously high. It remains to be discovered what this reflects about the temporal patterning of operant behavior at low reinforcement rates. A theoretically and empirically correct function derived from basic assumptions about operant behavior also remains to be discovered.
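The slope requirement stated in this abstract can be checked numerically for a candidate function. The hyperbolic form below is an assumption chosen purely for illustration, not a function proposed in the article: its slope approaches 1.0 at the origin (when responding is rare, nearly every response is reinforced) and it saturates at the scheduled maximum 1/T.

```python
T = 30.0  # mean scheduled interreinforcer interval, s (illustrative value)

def feedback(b, T=T):
    """A hypothetical hyperbolic feedback function r(B) = B / (1 + B*T),
    mapping response rate B to reinforcement rate r (both per second).
    Chosen only to illustrate the slope requirement; not from the article."""
    return b / (1 + b * T)

eps = 1e-9
slope_at_origin = feedback(eps) / eps  # should approach 1.0
ceiling = feedback(100.0)              # should approach the maximum 1/T
print(round(slope_at_origin, 6), round(ceiling, 4), round(1 / T, 4))
```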

10.
Behav Anal ; 12(2): 167-76, 1989.
Article in English | MEDLINE | ID: mdl-22478030

ABSTRACT

Molecular explanations of behavior, based on momentary events and variables that can be measured each time an event occurs, can be contrasted with molar explanations, based on aggregates of events and variables that can be measured only over substantial periods of time. Molecular analyses cannot suffice for quantitative accounts of behavior, because the historical variables that determine behavior are inevitably molar. When molecular explanations are attempted, they always depend on hypothetical constructs that stand as surrogates for molar environmental variables. These constructs allow no quantitative predictions when they are vague, and when they are made precise, they become superfluous, because they can be replaced with molar measures. In contrast to molecular accounts of phenomena like higher responding on ratio schedules than interval schedules and free-operant avoidance, molar accounts tend to be simple and straightforward. Molar theory incorporates the notion that behavior produces consequences that in turn affect the behavior, the notion that behavior and environment together constitute a feedback system. A feedback function specifies the dependence of consequences on behavior, thereby describing properties of the environment. Feedback functions can be derived for simple schedules, complex schedules, and natural resources. A complete theory of behavior requires describing the environment's feedback functions and the organism's functional relations. Molar thinking, both in the laboratory and in the field, can allow quantitative prediction, the mark of a mature science.

11.
J Exp Anal Behav ; 39(3): 499-501, 1983 May.
Article in English | MEDLINE | ID: mdl-16812332
12.
J Exp Anal Behav ; 38(1): 35-49, 1982 Jul.
Article in English | MEDLINE | ID: mdl-16812283

ABSTRACT

Since foraging in nature can be viewed as instrumental behavior, choice between sources of food, known as "patches," can be viewed as choice between instrumental response alternatives. Whereas the travel required to change alternatives deters changeover in nature, the changeover delay (COD) usually deters changeover in the laboratory. In this experiment, pigeons were exposed to laboratory choice situations, concurrent variable-interval schedules, that were standard except for the introduction of a travel requirement for changeover. As the travel requirement increased, rate of changeover decreased and preference for a favored alternative strengthened. When the travel requirement was small, the relations between choice and relative reinforcement revealed the usual tendencies toward matching and undermatching. When the travel requirement was large, strong overmatching occurred. These results, together with those from experiments in which changeover was deterred by punishment or a fixed-ratio requirement, deviate from the matching law, even when a correction is made for cost of changeover. If one accepted an argument that the COD is analogous to travel, the results suggest that the norm in choice relations would be overmatching. This overmatching, however, might only be the sign of an underlying strategy approximating optimization.

13.
J Exp Anal Behav ; 36(3): 387-403, 1981 Nov.
Article in English | MEDLINE | ID: mdl-16812255

ABSTRACT

The interaction between instrumental behavior and environment can be conveniently described at a molar level as a feedback system. Two different possible theories, the matching law and optimization, differ primarily in the reference criterion they suggest for the system. Both offer accounts of most of the known phenomena of performance on concurrent and single variable-interval and variable-ratio schedules. The matching law appears stronger in describing concurrent performances, whereas optimization appears stronger in describing performance on single schedules.

14.
J Exp Anal Behav ; 34(2): 207-17, 1980 Sep.
Article in English | MEDLINE | ID: mdl-16812187

ABSTRACT

On a given variable-interval schedule, the average obtained rate of reinforcement depends on the average rate of responding. An expression for this feedback effect is derived from the assumptions that free-operant responding occurs in bursts with a constant tempo, alternating with periods of engagement in other activities; that the durations of bursts and other activities are exponentially distributed; and that the rates of initiating and terminating bursts are inversely related. The expression provides a satisfactory account of the data of three experiments.
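The first two assumptions in this abstract (bursts at constant tempo alternating with other activities, with exponentially distributed durations) can be sketched as a simulation. All parameter values below are illustrative, not taken from the experiments, and the inverse relation between initiation and termination rates is omitted for simplicity.

```python
import random

random.seed(1)  # deterministic illustration

TEMPO = 2.0        # responses per second within a burst (assumed constant)
MEAN_BURST = 5.0   # mean burst duration, s (exponentially distributed)
MEAN_PAUSE = 3.0   # mean duration of other activities, s (exponential)
VI_MEAN = 30.0     # mean scheduled interval of the VI schedule, s
SESSION = 36000.0  # simulated session time, s

t = 0.0
next_setup = random.expovariate(1 / VI_MEAN)  # when a reinforcer is set up
responses = reinforcers = 0

while t < SESSION:
    # A burst of responding at constant tempo, then a pause (other activity).
    burst = random.expovariate(1 / MEAN_BURST)
    for _ in range(max(1, round(burst * TEMPO))):
        t += 1 / TEMPO
        responses += 1
        if t >= next_setup:  # VI: an available reinforcer waits to be collected
            reinforcers += 1
            next_setup = t + random.expovariate(1 / VI_MEAN)
    t += random.expovariate(1 / MEAN_PAUSE)

resp_rate = responses / SESSION * 60   # responses per minute
rft_rate = reinforcers / SESSION * 60  # obtained reinforcers per minute
print(f"{resp_rate:.1f} responses/min, {rft_rate:.2f} reinforcers/min")
```

The obtained reinforcement rate stays below the scheduled maximum (2 per minute here) and depends on how responding is distributed in time, which is the feedback effect the abstract formalizes.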

15.
J Exp Anal Behav ; 32(2): 269-81, 1979 Sep.
Article in English | MEDLINE | ID: mdl-501274

ABSTRACT

Almost all of 103 sets of data from 23 different studies of choice conformed closely to the equation: log (B(1)/B(2)) = a log (r(1)/r(2)) + log b, where B(1) and B(2) are either numbers of responses or times spent at Alternatives 1 and 2, r(1) and r(2) are the rates of reinforcement obtained from Alternatives 1 and 2, and a and b are empirical constants. Although the matching relation requires the slope a to equal 1.0, the best-fitting values of a frequently deviated from this. For B(1) and B(2) measured as numbers of responses, a tended to fall short of 1.0 (undermatching). For B(1) and B(2) measured as times, a fell to both sides of 1.0, with the largest mode at about 1.0. Those experiments that produced values of a for both responses and time revealed only a rough correspondence between the two values; a was often noticeably larger for time. Statistical techniques for assessing significance of a deviation of a from 1.0 suggested that values of a between .90 and 1.11 can be considered good approximations to matching. Of the two experimenters who contributed the most data, one generally found undermatching, while the other generally found matching. The difference in results probably arises from differences in procedure. The procedural variations that lead to undermatching appear to be those that produce (a) asymmetrical pausing that favors the poorer alternative; (b) systematic temporal variation in preference that favors the poorer alternative; and (c) patterns of responding that involve changing over between alternatives or brief bouts at the alternatives.
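The kind of statistical assessment mentioned in this abstract can be sketched as follows: fit the slope a by least squares and compute a t statistic against the matching value a = 1.0. The log ratios below are made up for illustration, not drawn from the 103 data sets.

```python
import math

# Hypothetical log reinforcer ratios (x) and log response ratios (y).
xs = [-0.6, -0.3, 0.0, 0.3, 0.6]
ys = [-0.50, -0.27, 0.01, 0.26, 0.52]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
sxx = sum((x - mx) ** 2 for x in xs)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
intercept = my - slope * mx

# Standard error of the slope from the residual variance.
resid = [y - (intercept + slope * x) for x, y in zip(xs, ys)]
se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)

# t statistic for the null hypothesis a = 1.0 (compare with t, n-2 df).
t_stat = (slope - 1.0) / se
print(f"a = {slope:.3f}, SE = {se:.4f}, t vs 1.0 = {t_stat:.2f}")
```

A large negative t indicates a slope reliably below 1.0, i.e., statistically significant undermatching.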


Subjects
Choice Behavior, Discrimination Learning, Animals, Columbidae, Conditioning, Operant, Reinforcement Schedule
16.
J Exp Anal Behav ; 26(1): 27-35, 1976 Jul.
Article in English | MEDLINE | ID: mdl-16811928

ABSTRACT

Rats' pressing on two levers was reinforced according to two independent variable-interval schedules that were varied during the experiment. Since the levers were connected directly to the programming equipment, bypassing the standard pulseformers, reinforcement could occur while a lever was held down. Although the time a lever was pressed might, therefore, have varied independently of number of presses, these two measures covaried substantially, because the average duration of the presses remained roughly constant. This rough invariance may have resulted from the rats' tendency to make bursts of brief presses (i.e., to jiggle the levers), even though the contingencies encouraged holding. When duration did vary, presses on the two levers tended to vary together. As a result, relative time spent pressing corresponded closely to relative number of presses. Both of these measures conformed well to the matching law. Absolute behavioral frequency at a lever, measured either way, varied directly with proportion of reinforcement for that lever, in accordance with the generalized version of the matching law. Number of presses seemed, on balance, to be a slightly more reliable measure than pressing time. The substantial interchangeability may prove more significant than the slight disparity, however, because it supports the notion that all behavior can be measured on a common scale of time.

17.
J Exp Anal Behav ; 25(2): 179-84, 1976 Mar.
Article in English | MEDLINE | ID: mdl-16811901

ABSTRACT

Pigeons' standing on a platform produced food reinforcement according to two-component multiple schedules in which either both components consisted of the same variable-interval schedule or one of these was replaced with a component without reinforcement (extinction). The components of the multiple schedule alternated every 30 sec, and were signalled by changes in the color of diffuse overhead illumination. Changing the schedule of one of the components to extinction increased the percentage of time spent on the platform during the unchanged component (behavioral contrast). This result casts doubt on accounts that attribute behavioral contrast to variations in the rate of noninstrumental elicited responses.

18.
J Exp Anal Behav ; 23(1): 45-53, 1975 Jan.
Article in English | MEDLINE | ID: mdl-16811831

ABSTRACT

Three human subjects detected unpredictable signals by pressing either of two telegraph keys. The relative frequencies with which detections occurred for the two alternatives were varied. The procedure included a changeover delay and response cost for letting go of a key. All subjects matched the relative time spent holding each key to the relative number of detections for that key, in conformity with the matching law. One subject's performance, which at first deviated from the relation, came into conformity with it when response cost was increased. Another subject's performance approximated matching more closely when the changeover delay was increased. The results confirm and extend the notions that choice consists in time allocation and that all behavior can be measured on the common scale of time.

19.
J Exp Anal Behav ; 22(1): 231-42, 1974 Jul.
Article in English | MEDLINE | ID: mdl-16811782

ABSTRACT

Data on choice generally conform closely to an equation of the form: log(B(1)/B(2)) = a log(r(1)/r(2)) + log k, where B(1) and B(2) are the frequencies of responding at Alternatives 1 and 2, r(1) and r(2) are the obtained reinforcement from Alternatives 1 and 2, and a and k are empirical constants. When a and k equal one, this equation is equivalent to the matching relation: B(1)/B(2) = r(1)/r(2). Two types of deviation from matching can occur with this formulation: a and k not equal to one. In some experiments, a systematically falls short of one. This deviation is undermatching. The reasons for undermatching are obscure at present. Some evidence suggests, however, that factors favoring discrimination also favor matching. Matching (a = 1) may represent the norm in choice when discrimination is maximal. When k differs from one, its magnitude indicates the degree of bias in choice. The generalized matching law predicts that bias should take this form (adding a constant proportion of responding to the favored alternative). Data from a variety of experiments indicate that it generally does.
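The final claim can be checked with an illustrative calculation (the parameter values are assumed, not from any cited experiment): in the generalized form B(1)/B(2) = k(r(1)/r(2))^a, the bias k multiplies the response ratio by the same constant proportion at every reinforcer ratio.

```python
a, k = 0.9, 1.5  # assumed sensitivity and bias values, for illustration only

def choice_ratio(r1, r2, a=a, k=k):
    """Generalized matching law: B1/B2 = k * (r1/r2)**a."""
    return k * (r1 / r2) ** a

# The bias k scales the response ratio by the same proportion regardless
# of the reinforcer ratio:
scales = [choice_ratio(r1, r2) / choice_ratio(r1, r2, k=1.0)
          for r1, r2 in [(1, 4), (1, 1), (4, 1)]]
print([round(s, 6) for s in scales])  # each equals k
```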

20.
J Exp Anal Behav ; 22(1): 91-101, 1974 Jul.
Article in English | MEDLINE | ID: mdl-16811791

ABSTRACT

Pigeons' pecks at two white response keys (initial-link situation) occasionally turned both keys red (terminal-link situation). When the two keys were red, pecks occasionally produced food, after which the keys were again white. In both situations, a changeover delay prevented the response-produced outcome from immediately following a change of responding from either key to the other. In the initial-link situation, the ratio of pecks at the keys closely paralleled the ratio of transitions into the terminal-link situation produced by the pecks, conforming to the well-known matching relation. In the terminal-link situation, the peck ratios deviated from the matching relation toward indifference. Overall response rate and rate of changeover were generally higher in the terminal-link situation than in the initial-link situation. The finding of matching in the initial-link situation supports a definition of reinforcement as situation transition. The differences in performance between the two situations, viewed in the light of other recent findings, suggest that the effects of a changeover delay depend on the overall reinforcing value of the choice alternatives.
