Ecologically valid speech collection in behavioral research: The Ghent Semi-spontaneous Speech Paradigm (GSSP).
Van Der Donckt, Jonas; Kappen, Mitchel; Degraeve, Vic; Demuynck, Kris; Vanderhasselt, Marie-Anne; Van Hoecke, Sofie.
Affiliation
  • Van Der Donckt J; IDLab, Ghent University - imec, Technologiepark Zwijnaarde 122, 9052, Ghent, Zwijnaarde, Belgium. Jonvdrdo.Vanderdonckt@UGent.be.
  • Kappen M; Department of Electronics and Information Systems, Ghent University, Ghent, Belgium. Mitchel.Kappen@UGent.be.
  • Degraeve V; Department of Head and Skin, Ghent University, University Hospital Ghent (UZ Ghent), Department of Psychiatry and Medical Psychology, Corneel Heymanslaan 10, 9000, Gent, Belgium.
  • Demuynck K; Ghent Experimental Psychiatry (GHEP) Lab, Ghent University, Ghent, Belgium.
  • Vanderhasselt MA; IDLab, Ghent University - imec, Technologiepark Zwijnaarde 122, 9052, Ghent, Zwijnaarde, Belgium.
  • Van Hoecke S; Department of Electronics and Information Systems, Ghent University, Ghent, Belgium.
Behav Res Methods; 2023 Dec 13.
Article in En | MEDLINE | ID: mdl-38091208
This paper introduces the Ghent Semi-spontaneous Speech Paradigm (GSSP), a new method for collecting unscripted speech data for affective-behavioral research in both experimental and real-world settings through the description of peer-rated pictures with a consistent affective load. The GSSP was designed to meet five criteria: (1) allow flexible speech recording durations, (2) provide a straightforward and non-interfering task, (3) allow for experimental control, (4) favor spontaneous speech for its prosodic richness, and (5) require minimal human interference to enable scalability. The validity of the GSSP was evaluated through an online task, in which this paradigm was implemented alongside a fixed-text read-aloud task. The results indicate that participants were able to describe images for an adequate duration, and acoustic analysis showed that most features trended in line with the targeted speech styles (i.e., unscripted spontaneous speech versus scripted read-aloud speech). A speech style classification model using acoustic features achieved a balanced accuracy of 83% on within-dataset validation, indicating separability between the GSSP and the read-aloud task. Furthermore, when this model was validated on an external dataset containing interview and read-aloud speech, a balanced accuracy of 70% was obtained, indicating an acoustic correspondence between GSSP speech and spontaneous interviewee speech. The GSSP is of special interest for behavioral and speech researchers looking to capture spontaneous speech, both in longitudinal ambulatory behavioral studies and in laboratory studies. To facilitate future research on speech styles, acoustics, and affective states, the task implementation code, the collected dataset, and analysis notebooks are available.
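The balanced-accuracy evaluation described above can be illustrated with a minimal, hypothetical sketch. The code below is not the authors' pipeline: it uses random numbers as stand-ins for acoustic features and a plain logistic-regression classifier, purely to show how a two-class speech-style model (read-aloud vs. spontaneous) would be scored with balanced accuracy, i.e., the mean of per-class recall.

```python
# Hypothetical sketch only: synthetic features stand in for real acoustic
# measures (e.g., pitch, jitter, speech rate); the classifier choice is an
# assumption, not the paper's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Two synthetic classes: 0 = scripted read-aloud, 1 = spontaneous (GSSP-style).
n_per_class, n_features = 200, 10
X = np.vstack([
    rng.normal(0.0, 1.0, size=(n_per_class, n_features)),  # read-aloud
    rng.normal(0.8, 1.0, size=(n_per_class, n_features)),  # spontaneous
])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Stratified split keeps both speech styles represented in train and test.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Balanced accuracy averages recall over both classes, so it is robust to
# class imbalance (unlike plain accuracy).
bacc = balanced_accuracy_score(y_test, clf.predict(X_test))
print(f"balanced accuracy: {bacc:.2f}")
```

A balanced accuracy near 1.0 would indicate clean separability between the two styles, as the paper reports (83%) for its within-dataset validation; chance level for two classes is 0.5.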
Full text: 1 Collections: 01-international Database: MEDLINE Language: En Publication year: 2023 Document type: Article