1.
Behav Brain Sci ; 41: e125, 2018 01.
Article in English | MEDLINE | ID: mdl-31064524

ABSTRACT

A scientific claim is a generalization based on a reported statistically significant effect. The reproducibility of that claim is its scientific meaning. Any aspect not explicitly mentioned in a scientific claim as a limitation of the claim's scope is an aspect over which the claim implicitly generalizes. Hence, so-called "conceptual" replications that differ from the original study in these unmentioned aspects are legitimate, and necessary to test the generalization implied by the original study's claim.


Subject(s)
Science, Reproducibility of Results
2.
Behav Brain Sci ; 40: e286, 2017 01.
Article in English | MEDLINE | ID: mdl-29342715

ABSTRACT

Structural priming is a sufficient but not a necessary condition for proving the existence of representations. Absence of evidence is not evidence of absence. Cognitive science relies on the legitimacy of positing representations and processes without "proving" every component. Also, psycholinguistics relies on other methods, including acceptability judgments, to find the materials for priming experiments in the first place.


Subject(s)
Judgment, Shoulder, Psycholinguistics
3.
J Cogn Neurosci ; 26(11): 2530-9, 2014 Nov.
Article in English | MEDLINE | ID: mdl-24893743

ABSTRACT

Response times in conversation, with average gaps of 200 msec and often less, beat standard reaction times, despite the complexity of the response and the latency of speech production (600 msec or more). This can only be achieved by anticipating the timing and content of turns in conversation, about which little is known. Using EEG and an experimental task with conversational stimuli, we show that the estimation of turn durations is based on anticipating the way the turn would be completed. We found a neuronal correlate of turn-end anticipation, localized in the anterior cingulate cortex (ACC) and the inferior parietal lobule: a beta-frequency desynchronization starting as early as 1250 msec before the end of the turn. We suggest that anticipation of the other's utterance leads to accurately timed transitions in everyday conversations.


Subject(s)
Anticipation, Psychological/physiology, Brain/physiology, Communication, Interpersonal Relations, Speech Perception/physiology, Speech/physiology, Adult, Beta Rhythm, Electroencephalography, Female, Humans, Male, Neuropsychological Tests, Young Adult
4.
Behav Brain Sci ; 36(4): 351, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23789779

ABSTRACT

We encourage Pickering & Garrod (P&G) to implement this promising theory in a computational model. The proposed theory crucially relies on having an efficient and reliable mechanism for early intention recognition. Furthermore, the generation of impoverished predictions is incompatible with a number of key phenomena that motivated P&G's theory. Explaining these phenomena requires fully specified perceptual predictions in both comprehension and production.


Subject(s)
Comprehension/physiology, Models, Theoretical, Speech Perception/physiology, Speech/physiology, Humans
5.
Sci Rep ; 13(1): 3458, 2023 03 01.
Article in English | MEDLINE | ID: mdl-36859459

ABSTRACT

A key mechanism in the comprehension of conversation is listeners' ability to recognize who is speaking and when a speaker switch occurs. Some authors suggest that speaker change detection is accomplished through bottom-up mechanisms in which listeners draw on changes in the acoustic features of the auditory signal. Other accounts propose that it involves drawing on top-down linguistic representations to identify who is speaking. The present study investigates these hypotheses experimentally by manipulating the pragmatic coherence of conversational utterances. In Experiment 1, participants listened to pairs of utterances and had to indicate whether they heard the same or different speakers. Although all utterances were spoken by the same speaker, listeners reported hearing different speakers when two segments of conversation made more sense as contributions from different speakers. In Experiment 2, we removed pragmatic information from the same stimuli by scrambling word order while leaving the acoustic information intact; in contrast to Experiment 1, the results indicate no difference between the experimental conditions. We interpret these results as a top-down effect of pragmatic expectations: knowledge of conversational structure at least partially determines a listener's perception of speaker changes in conversation.


Subject(s)
Acoustics, Auditory Perception, Humans, Communication, Hearing, Knowledge
6.
Cogn Process ; 13 Suppl 1: S369-74, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22806654

ABSTRACT

The reliable automatic visual recognition of indoor scenes with complex object constellations using only sensor data is a nontrivial problem. In order to improve the construction of an accurate semantic 3D model of an indoor scene, we exploit human-produced verbal descriptions of the relative location of pairs of objects. This requires the ability to deal with different spatial reference frames (RF) that humans use interchangeably. In German, both the intrinsic and relative RF are used frequently, which often leads to ambiguities in referential communication. We assume that there are certain regularities that help in specific contexts. In a first experiment, we investigated how speakers of German describe spatial relationships between different pieces of furniture. This gave us important information about the distribution of the RFs used for furniture-predicate combinations, and by implication also about the preferred spatial predicate. The results of this experiment are compiled into a computational model that extracts partial orderings of spatial arrangements between furniture items from verbal descriptions. In the implemented system, the visual scene is initially scanned by a 3D camera system. From the 3D point cloud, we extract point clusters that suggest the presence of certain furniture objects. We then integrate the partial orderings extracted from the verbal utterances incrementally and cumulatively with the estimated probabilities about the identity and location of objects in the scene, and also estimate the probable orientation of the objects. This allows the system to significantly improve both the accuracy and richness of its visual scene representation.
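
A minimal sketch of the kind of integration step the abstract describes: each scene hypothesis from the vision pipeline assigns positions to furniture labels, and a spatial constraint extracted from a verbal description re-weights those hypotheses. This is an illustrative assumption, not the described system; the function, labels, and penalty value are hypothetical.

    # Hedged sketch of incrementally combining a verbal spatial constraint with
    # vision-based scene hypotheses (illustrative only, not the paper's model).
    def integrate_constraint(scene_hypotheses, constraint, penalty=0.2):
        """scene_hypotheses: list of (assignment, prob); assignment maps labels to (x, y).
        constraint: (figure, relation, ground), e.g. ("sofa", "left_of", "table")."""
        figure, relation, ground = constraint
        updated = []
        for assignment, prob in scene_hypotheses:
            fx, _ = assignment[figure]
            gx, _ = assignment[ground]
            satisfied = fx < gx if relation == "left_of" else fx > gx
            updated.append((assignment, prob if satisfied else prob * penalty))
        total = sum(p for _, p in updated) or 1.0
        return [(a, p / total) for a, p in updated]   # renormalize after each update

    # Constraints from successive utterances would be applied one by one (incrementally).
    hyps = [({"sofa": (0, 0), "table": (2, 0)}, 0.5),
            ({"sofa": (2, 0), "table": (0, 0)}, 0.5)]
    hyps = integrate_constraint(hyps, ("sofa", "left_of", "table"))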


Subject(s)
Orientation, Pattern Recognition, Visual/physiology, Space Perception/physiology, Computer Simulation, Humans, Models, Psychological, Photic Stimulation
7.
PLoS One ; 17(1): e0261811, 2022.
Article in English | MEDLINE | ID: mdl-34995299

ABSTRACT

Understanding the spread of false or dangerous beliefs, often called misinformation or disinformation, through a population has never seemed so urgent. Network science researchers have often taken a page from epidemiologists and modeled the spread of false beliefs in the way a disease spreads through a social network. Absent from those disease-inspired models, however, is an internal model of an individual's current beliefs, even though cognitive science has increasingly documented that the interaction between mental models and incoming messages is crucial to whether those messages are adopted or rejected. Some computational social scientists analyze agent-based models in which individuals do have simulated cognition, but these often lack the strengths of network science, namely empirically driven network structures. We introduce a cognitive cascade model that combines a network science belief cascade approach with an internal cognitive model of the individual agents, as in opinion diffusion models, yielding a public opinion diffusion (POD) model in which media institutions act as agents that initiate opinion cascades. We show that the model, even with a very simplistic belief function capturing cognitive effects cited in disinformation research (dissonance and exposure), adds expressive power over existing cascade models. We analyze the cognitive cascade model with this simple cognitive function across various graph topologies and institutional messaging patterns. We argue from our results that population-level aggregate outcomes of the model qualitatively match what has been reported in COVID-related public opinion polls, and that the model dynamics offer insights into how to address the spread of problematic beliefs. The overall model sets up a framework with which social science misinformation researchers and computational opinion diffusion modelers can join forces to understand, and hopefully learn how best to counter, the spread of disinformation and "alternative facts."
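
A minimal sketch of the general idea, assuming a dissonance-gated cascade: agents on a network hold a scalar belief, a media agent seeds a message, and an agent only adopts (and forwards) the message if it is not too dissonant with its current belief. The function name, update rule, and parameter values are assumptions for illustration, not the paper's implementation.

    # Hypothetical sketch of a dissonance-gated belief cascade on a network
    # (requires networkx); not the authors' code.
    import random
    import networkx as nx

    def cognitive_cascade(graph, beliefs, message, seeds, threshold=0.3, rate=0.5):
        """Spread `message` (a belief value in [0, 1]) from `seeds` through `graph`.
        An agent adopts the message only if |belief - message| <= threshold;
        adopters move their belief toward the message and forward it to neighbours."""
        frontier = list(seeds)
        seen = set(seeds)
        while frontier:
            node = frontier.pop()
            if abs(beliefs[node] - message) <= threshold:      # low dissonance: adopt
                beliefs[node] += rate * (message - beliefs[node])
                for neighbour in graph.neighbors(node):        # ...and pass it on
                    if neighbour not in seen:
                        seen.add(neighbour)
                        frontier.append(neighbour)
        return beliefs

    # Toy run: a small-world network, one "media institution" seeding the cascade.
    g = nx.watts_strogatz_graph(n=100, k=4, p=0.1, seed=1)
    beliefs = {n: random.random() for n in g.nodes}
    beliefs = cognitive_cascade(g, beliefs, message=0.9, seeds=[0])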


Subject(s)
COVID-19, Disinformation, Models, Theoretical, Public Opinion, SARS-CoV-2, Social Media, Humans
8.
Cogn Sci ; 44(9): e12890, 2020 09.
Article in English | MEDLINE | ID: mdl-32939773

ABSTRACT

People with aphasia use gestures not only to communicate relevant content but also to compensate for their verbal limitations. The Sketch Model (De Ruiter, 2000) assumes a flexible relationship between gesture and speech with the possibility of a compensatory use of the two modalities. In the successor of the Sketch Model, the AR-Sketch Model (De Ruiter, 2017), the relationship between iconic gestures and speech is no longer assumed to be flexible and compensatory, but instead iconic gestures are assumed to express information that is redundant to speech. In this study, we evaluated the contradictory predictions of the Sketch Model and the AR-Sketch Model using data collected from people with aphasia as well as a group of people without language impairment. We only found compensatory use of gesture in the people with aphasia, whereas the people without language impairments made very little compensatory use of gestures. Hence, the people with aphasia gestured according to the prediction of the Sketch Model, whereas the people without language impairment did not. We conclude that aphasia fundamentally changes the relationship of gesture and speech.


Subject(s)
Aphasia, Gestures, Speech, Humans
9.
J Speech Lang Hear Res ; 62(12): 4417-4432, 2019 12 18.
Article in English | MEDLINE | ID: mdl-31710512

ABSTRACT

PURPOSE: People with aphasia (PWA) use different kinds of gesture spontaneously when they communicate. Although there is evidence that the nature of the communicative task influences the linguistic performance of PWA, so far little is known about the influence of the communicative task on the production of gestures by PWA. We aimed to investigate the influence of varying communicative constraints on the production of gesture and spoken expression by PWA in comparison to persons without language impairment. METHOD: Twenty-six PWA with varying aphasia severities and 26 control participants (CP) without language impairment participated in the study. Spoken expression and gesture production were investigated in 2 different tasks: (a) spontaneous conversation about topics of daily living and (b) a cartoon narration task, that is, retellings of short cartoon clips. The frequencies of words and gestures as well as of different gesture types produced by the participants were analyzed and tested for potential effects of group and task. RESULTS: Main results for task effects revealed that PWA and CP used more iconic gestures and pantomimes in the cartoon narration task than in spontaneous conversation. Metaphoric gestures, deictic gestures, number gestures, and emblems were more frequently used in spontaneous conversation than in cartoon narrations by both participant groups. Group effects show that, in both tasks, PWA's gesture-to-word ratios were higher than those for the CP. Furthermore, PWA produced more interactive gestures than the CP in both tasks, as well as more number gestures and pantomimes in spontaneous conversation. CONCLUSIONS: The current results suggest that PWA use gestures to compensate for their verbal limitations under varying communicative constraints. The properties of the communicative task influence the use of different gesture types in people with and without aphasia. Thus, the influence of communicative constraints needs to be considered when assessing PWA's multimodal communicative abilities.


Subject(s)
Aphasia/physiopathology, Communication, Gestures, Speech/physiology, Aphasia/psychology, Case-Control Studies, Female, Humans, Linguistics, Male, Middle Aged, Task Performance and Analysis
10.
PLoS One ; 13(8): e0201516, 2018.
Article in English | MEDLINE | ID: mdl-30067853

ABSTRACT

Interactions with artificial agents often lack immediacy because agents respond more slowly than their users expect. Automatic speech recognisers introduce this delay by analysing a user's utterance only after it has been completed. Early, uncertain hypotheses from incremental speech recognisers can enable artificial agents to respond in a more timely manner. However, these hypotheses may change significantly with each update, so an already initiated action may turn out to be an error and incur error costs. We investigated whether humans would use uncertain hypotheses for planning ahead and/or initiating their response. We designed a Ghost-in-the-Machine study in a bar scenario: a human participant controlled a bartending robot and perceived the scene only through its recognisers. The results showed that participants used uncertain hypotheses to select the best matching action, which is comparable to computing the utility of dialogue moves. Participants evaluated the available evidence and the error cost of their actions prior to initiating them. If the error cost was low, participants initiated their response on only suggestive evidence. Otherwise, they waited for additional, more confident hypotheses if they still had time to do so. If there was time pressure but only little evidence, participants grounded their understanding with echo questions. These findings contribute to a psychologically plausible policy for human-robot interaction that enables artificial agents to respond under uncertainty in a timely and socially appropriate manner.
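
The decision pattern the abstract reports can be sketched as an expected-utility rule over uncertain recogniser hypotheses: act on suggestive evidence when errors are cheap, otherwise wait, and fall back on an echo question under time pressure. The function, thresholds, and intent labels below are illustrative assumptions, not the study's implementation.

    # Illustrative sketch of acting on uncertain incremental hypotheses by trading
    # off confidence against error cost (assumed policy, not the study's code).
    def choose_response(hypotheses, error_cost, reward=1.0, time_left=2.0):
        """hypotheses: dict mapping candidate user intentions to confidence in [0, 1]."""
        intent, confidence = max(hypotheses.items(), key=lambda kv: kv[1])
        expected_utility = confidence * reward - (1.0 - confidence) * error_cost
        if expected_utility > 0:
            return f"act: {intent}"            # cheap errors: act on suggestive evidence
        if time_left > 0.5:
            return "wait"                      # costly errors and time to spare: wait
        return f"echo question: '{intent}?'"   # little evidence, time pressure: ground

    print(choose_response({"order_beer": 0.6, "order_water": 0.2}, error_cost=0.4))
    print(choose_response({"order_beer": 0.55}, error_cost=3.0, time_left=1.5))
    print(choose_response({"order_beer": 0.55}, error_cost=3.0, time_left=0.2))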


Subject(s)
Robotics, Speech, Adult, Comprehension, Equipment Design, Female, Humans, Interpersonal Relations, Male, Robotics/instrumentation, Uncertainty, Young Adult
11.
Top Cogn Sci ; 10(2): 264-278, 2018 04.
Article in English | MEDLINE | ID: mdl-29749040

ABSTRACT

Miscommunication is a neglected issue in the cognitive sciences, where it has often been discounted as noise in the system. This special issue argues for the opposite view: Miscommunication is a highly structured and ubiquitous feature of human interaction that systematically underpins people's ability to create and maintain shared languages. Contributions from conversation analysis, computational linguistics, experimental psychology, and formal semantics provide evidence for these claims. They highlight the multi-modal, multi-person character of miscommunication. They demonstrate the incremental, contingent, and locally adaptive nature of the processes people use to detect and deal with miscommunication. They show how these processes can drive language change. In doing so, these contributions introduce an alternative perspective on what successful communication is, new methods for studying it, and application areas where these ideas have a particular impact. We conclude that miscommunication is not noise but essential to the productive flexibility of human communication, especially our ability to respond constructively to new people and new situations.


Subject(s)
Cognitive Science, Communication, Humans
12.
Front Psychol ; 8: 211, 2017.
Article in English | MEDLINE | ID: mdl-28270782

ABSTRACT

In everyday conversations, the gap between the turns of conversational partners is most frequently between 0 and 200 ms. We were interested in how speakers achieve such fast transitions. We designed an experiment in which participants listened to pre-recorded questions about images presented on a screen and were asked to answer these questions. We tested whether speakers already prepare their answers while they listen to questions and whether they can prepare for the time of articulation by anticipating when questions end. In half of the experimental trials, it was possible to guess the answer at the beginning of the question. We also manipulated whether it was possible to predict the length of the last word of the question. The results suggest that when listeners know the answer early, they start speech production while the question is still ongoing. Speakers can also time when to speak by predicting the duration of turns. These temporal predictions can be based on the length of anticipated words and on the overall probability of turn durations.

13.
Am J Speech Lang Pathol ; 26(2): 483-497, 2017 May 17.
Article in English | MEDLINE | ID: mdl-28492911

ABSTRACT

PURPOSE: People with aphasia (PWA) face significant challenges in verbally expressing their communicative intentions. Different types of gestures are produced spontaneously by PWA, and a potentially compensatory function of these gestures has been discussed. The current study aimed to investigate how much information PWA communicate through 3 types of gesture and the communicative effectiveness of such gestures. METHOD: Listeners without language impairment rated the information content of short video clips taken from PWA in conversation. Listeners were asked to rate communication within a speech-only condition and a gesture + speech condition. RESULTS: The results revealed that the participants' interpretations of the communicative intentions expressed in the clips of PWA were significantly more accurate in the gesture + speech condition for all tested gesture types. CONCLUSION: It was concluded that all 3 gesture types under investigation contributed to the expression of semantic meaning communicated by PWA. Gestures are an important communicative means for PWA and should be regarded as such by their interlocutors. Gestures have been shown to enhance listeners' interpretation of PWA's overall communication.


Subject(s)
Aphasia/psychology, Comprehension, Gestures, Interpersonal Relations, Manual Communication, Adult, Aged, Aphasia/diagnosis, Communication Methods, Total, Female, Humans, Male, Middle Aged, Semantics, Speech Production Measurement
14.
Front Psychol ; 6: 89, 2015.
Article in English | MEDLINE | ID: mdl-25699004

ABSTRACT

During conversations, participants alternate smoothly between speaker and hearer roles with only brief pauses and overlaps. There are two competing types of accounts of how conversationalists accomplish this: (a) the signaling approach and (b) the anticipatory ('projection') approach. We wanted to investigate, first, the relative merits of these two accounts and, second, the relative contribution of semantic and syntactic information to the timing of next-turn initiation. We performed three button-press experiments using turn fragments taken from natural conversations to address the following questions: (a) Is turn-taking predominantly based on anticipation or on reaction, and (b) what is the relative contribution of semantic and syntactic information to accurate turn-taking? In our first experiment, we gradually manipulated the information available for anticipation of the turn end (from providing information about the turn end in advance to completely removing linguistic information). The results show that the distribution of participants' estimates of turn endings for natural turns is very similar to the distribution for pure anticipation. We conclude that listeners are indeed able to anticipate a turn end and that this strategy is predominantly used in turn-taking. In Experiment 2, we collected purely reactive responses. We used the distributions from Experiments 1 and 2 together to estimate a new dependent variable called Reaction Anticipation Proportion. We used this variable in our third experiment, in which we manipulated the presence vs. absence of semantic and syntactic information by low-pass filtering open-class and closed-class words in the turn. The results suggest that both semantic and syntactic information are needed for turn-end anticipation, but that semantic information is a more important anticipation cue than syntactic information.
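
The abstract does not spell out how the Reaction Anticipation Proportion is computed. One plausible reading, offered purely as an assumption for illustration, is to model an observed distribution of turn-end responses as a mixture of the pure-anticipation distribution (Experiment 1) and the pure-reaction distribution (Experiment 2) and to take the fitted mixture weight as the proportion. The function name and procedure below are hypothetical.

    # Hedged sketch of estimating a mixture weight between anticipation and
    # reaction response-time distributions (not the paper's actual procedure).
    import numpy as np
    from scipy.stats import gaussian_kde

    def reaction_anticipation_proportion(observed, anticipation_rts, reaction_rts):
        """Return the weight on anticipation that maximizes the likelihood of
        `observed` under w * KDE(anticipation) + (1 - w) * KDE(reaction)."""
        kde_ant = gaussian_kde(anticipation_rts)
        kde_rea = gaussian_kde(reaction_rts)
        weights = np.linspace(0.0, 1.0, 101)
        log_liks = [np.sum(np.log(w * kde_ant(observed) + (1 - w) * kde_rea(observed) + 1e-12))
                    for w in weights]
        return weights[int(np.argmax(log_liks))]

    # Toy usage with simulated response times (ms relative to the actual turn end).
    rng = np.random.default_rng(0)
    observed = np.concatenate([rng.normal(-150, 80, 60), rng.normal(300, 100, 40)])
    rap = reaction_anticipation_proportion(observed,
                                           rng.normal(-150, 80, 200),   # pure anticipation
                                           rng.normal(300, 100, 200))   # pure reaction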

15.
Front Psychol ; 6: 1641, 2015.
Article in English | MEDLINE | ID: mdl-26582998

ABSTRACT

We used a new method called "Ghost-in-the-Machine" (GiM) to investigate social interactions with a robotic bartender taking orders for drinks and serving them. Using the GiM paradigm allowed us to identify how human participants recognize the intentions of customers on the basis of the output of the robotic recognizers. Specifically, we measured which recognizer modalities (e.g., speech, the distance to the bar) were relevant at different stages of the interaction. This provided insights into human social behavior necessary for the development of socially competent robots. When initiating the drink-order interaction, the most important recognizers were those based on computer vision. When drink orders were being placed, however, the most important information source was the speech recognition. Interestingly, the participants used only a subset of the available information, focussing only on a few relevant recognizers while ignoring others. This reduced the risk of acting on erroneous sensor data and enabled them to complete service interactions more swiftly than a robot using all available sensor data. We also investigated socially appropriate response strategies. In their responses, the participants preferred to use the same modality as the customer's requests, e.g., they tended to respond verbally to verbal requests. Also, they added redundancy to their responses, for instance by using echo questions. We argue that incorporating the social strategies discovered with the GiM paradigm in multimodal grammars of human-robot interactions improves the robustness and the ease-of-use of these interactions, and therefore provides a smoother user experience.

16.
J Neurosci Methods ; 232: 24-9, 2014 Jul 30.
Article in English | MEDLINE | ID: mdl-24809245

ABSTRACT

BACKGROUND: Even though research on turn-taking in spoken dialogue is now abundant, a typical EEG signature associated with the anticipation of turn ends has not yet been identified. NEW METHOD: The purpose of this study was to examine whether readiness potentials (RPs) can be used to study the anticipation of turn ends, using them in a motoric (finger movement) and an articulatory movement task. The goal was to determine the onset of early, preconscious turn-end anticipation processes by simultaneously registering EEG measures (RP) and behavioural measures (anticipation timing accuracy, ATA). For the behavioural measures, we used both a button press and a brief verbal response ("yes"). In the experiment, 30 subjects listened to auditorily presented utterances and pressed a button or uttered the verbal response when they expected the end of the turn. During the task, a 32-channel EEG signal was recorded. RESULTS: The RPs during verbal and button-press responses developed similarly and had an almost identical time course: the RP signals started to develop 1170 vs. 1190 ms before the behavioural responses. COMPARISON WITH EXISTING METHODS: Until now, turn-end anticipation has usually been studied with behavioural methods, for instance by measuring anticipation timing accuracy, which reflects conscious behavioural processes and is insensitive to preconscious anticipation processes. CONCLUSION: The similar time course of the recorded RP signals for verbal and button-press responses provides evidence for the validity of using RPs as an online marker of response preparation in turn-taking and spoken-dialogue research.


Subject(s)
Contingent Negative Variation/physiology, Language, Psychomotor Performance/physiology, Verbal Behavior/physiology, Acoustic Stimulation, Adult, Electroencephalography, Female, Humans, Male, Reaction Time/physiology, Young Adult
17.
Front Psychol ; 4: 182, 2013.
Article in English | MEDLINE | ID: mdl-23596435

ABSTRACT

The selection and processing of a spatial frame of reference (FOR) in interpreting verbal scene descriptions is of great interest to psycholinguistics. In this study, we focus on the choice between the relative and the intrinsic FOR, addressing two questions: (a) does the presence or absence of a background in the scene influence the selection of a FOR, and (b) what is the effect of a previously selected FOR on the subsequent processing of a different FOR? Our results show that if a scene includes a realistic background, the selection of the relative FOR becomes more likely. We attribute this effect to the facilitation of mental simulation, which enhances the relation between the viewer and the objects. With respect to response accuracy, we found both higher accuracy when the FOR was the same as before and lower accuracy when it differed, whereas for response latencies we only found a delay effect with a different FOR.

18.
Front Psychol ; 4: 557, 2013.
Article in English | MEDLINE | ID: mdl-24009594

ABSTRACT

Recognizing the intention of others is important in all social interactions, especially in the service domain. Enabling a bartending robot to serve customers is particularly challenging as the system has to recognize the social signals produced by customers and respond appropriately. Detecting whether a customer would like to order is essential for the service encounter to succeed. This detection is particularly challenging in a noisy environment with multiple customers. Thus, a bartending robot has to be able to distinguish between customers intending to order, chatting with friends or just passing by. In order to study which signals customers use to initiate a service interaction in a bar, we recorded real-life customer-staff interactions in several German bars. These recordings were used to generate initial hypotheses about the signals customers produce when bidding for the attention of bar staff. Two experiments using snapshots and short video sequences then tested the validity of these hypothesized candidate signals. The results revealed that bar staff responded to a set of two non-verbal signals: first, customers position themselves directly at the bar counter and, secondly, they look at a member of staff. Both signals were necessary and, when occurring together, sufficient. The participants also showed a strong agreement about when these cues occurred in the videos. Finally, a signal detection analysis revealed that ignoring a potential order is deemed worse than erroneously inviting customers to order. We conclude that (a) these two easily recognizable actions are sufficient for recognizing the intention of customers to initiate a service interaction, but other actions such as gestures and speech were not necessary, and (b) the use of reaction time experiments using natural materials is feasible and provides ecologically valid results.
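
The signal detection result mentioned at the end can be illustrated with a standard sensitivity and criterion computation; a liberal criterion corresponds to preferring a false alarm (inviting a non-ordering customer) over a miss (ignoring a genuine bid for attention). The function and the counts below are illustrative assumptions, not the study's analysis code or data.

    # Hedged illustration of a basic signal detection analysis: sensitivity d'
    # and criterion c from hit and false-alarm rates (not the study's code).
    from statistics import NormalDist

    def sdt_measures(hits, misses, false_alarms, correct_rejections):
        z = NormalDist().inv_cdf
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)          # log-linear correction
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        d_prime = z(hit_rate) - z(fa_rate)
        criterion = -0.5 * (z(hit_rate) + z(fa_rate))            # negative = liberal bias
        return d_prime, criterion

    # A liberal criterion (criterion < 0) means staff would rather invite a
    # non-ordering customer than miss a genuine bid for attention.
    print(sdt_measures(hits=45, misses=5, false_alarms=15, correct_rejections=35))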

19.
Top Cogn Sci ; 4(2): 232-48, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22389109

ABSTRACT

The tradeoff hypothesis in the speech-gesture relationship claims that (a) when gesturing gets harder, speakers will rely relatively more on speech, and (b) when speaking gets harder, speakers will rely relatively more on gestures. We tested the second part of this hypothesis in an experimental collaborative referring paradigm where pairs of participants (directors and matchers) identified targets to each other from an array visible to both of them. We manipulated two factors known to affect the difficulty of speaking to assess their effects on the gesture rate per 100 words. The first factor, codability, is the ease with which targets can be described. The second factor, repetition, is whether the targets are old or new (having been already described once or twice). We also manipulated a third factor, mutual visibility, because it is known to affect the rate and type of gesture produced. None of the manipulations systematically affected the gesture rate. Our data are thus mostly inconsistent with the tradeoff hypothesis. However, the gesture rate was sensitive to concurrent features of referring expressions, suggesting that gesture parallels aspects of speech. We argue that the redundancy between speech and gesture is communicatively motivated.


Subject(s)
Gestures, Speech/physiology, Humans, Language, Manual Communication, Observer Variation