Results 1 - 9 of 9
1.
Clin Psychol Psychother ; 31(2): e2982, 2024.
Article in English | MEDLINE | ID: mdl-38659356

ABSTRACT

The period after psychiatric hospitalization is an extraordinarily high-risk period for suicidal thoughts and behaviours (STBs). Affective-cognitive constructs (ACCs) are salient risk factors for STBs, and intensive longitudinal metrics of these constructs may improve personalized risk detection and intervention. However, limited research has examined how within-person daily levels and between-person dynamic metrics of ACCs relate to STBs after hospital discharge. Adult psychiatric inpatients (N = 95) completed a 65-day ecological momentary assessment protocol after discharge as part of a 6-month follow-up period. Using dynamic structural equation models, we examined both within-person daily levels and between-person dynamic metrics (intensity, variability and inertia) of positive and negative affect, rumination, distress intolerance and emotion dysregulation as risk factors for STBs. Within-person lower daily levels of positive affect and higher daily levels of negative affect, rumination, distress intolerance and emotion dysregulation were risk factors for next-day suicidal ideation (SI). Same-day within-person higher rumination and negative affect were also risk factors for same-day SI. At the between-person level, higher overall positive affect was protective against active SI and suicidal behaviour over the 6-month follow-up, while greater variability of rumination and distress intolerance increased risk for active SI, suicidal behaviour and suicide attempt. The present study provides the most comprehensive examination to date of intensive longitudinal metrics of ACCs as risk factors for STBs. Results support the continued use of intensive longitudinal methods to improve STB risk detection. Interventions focusing on rumination and distress intolerance may specifically help to prevent suicidal crises during critical transitions in care.
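The between-person dynamic metrics named above (intensity, variability, and inertia) are commonly operationalized as the mean, within-person standard deviation, and lag-1 autocorrelation of a person's EMA time series. A minimal sketch under that assumption — the function name and toy ratings are illustrative, not the study's code:

```python
import numpy as np

def dynamic_metrics(series):
    """Summarize one person's EMA time series.

    intensity   -> overall mean level
    variability -> within-person standard deviation
    inertia     -> lag-1 autocorrelation (day-to-day carry-over)
    """
    x = np.asarray(series, dtype=float)
    intensity = x.mean()
    variability = x.std(ddof=1)
    inertia = np.corrcoef(x[:-1], x[1:])[0, 1]
    return intensity, variability, inertia

# Toy daily negative-affect ratings for one participant
ratings = [2, 3, 3, 4, 5, 4, 4, 3, 2, 3]
intensity, variability, inertia = dynamic_metrics(ratings)
```

In a dynamic structural equation model these quantities are estimated jointly rather than computed per person, but the per-person sketch conveys what each metric captures.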


Subject(s)
Suicidal Ideation, Humans, Male, Female, Adult, Risk Factors, Middle Aged, Ecological Momentary Assessment, Suicide, Attempted/psychology, Suicide, Attempted/statistics & numerical data, Emotional Regulation, Mental Disorders/psychology, Rumination, Cognitive, Hospitalization/statistics & numerical data, Affect, Hospitals, Psychiatric
2.
Qual Life Res ; 30(1): 251-265, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32839864

ABSTRACT

PURPOSE: As Huntington disease (HD) progresses, speech and swallowing difficulties become more profound. These difficulties have an adverse effect on health-related quality of life (HRQOL), thus psychometrically robust measures of speech and swallowing are needed to better understand the impact of these domains across the course of the disease. Therefore, the purpose of this study is to establish the clinical utility of two new patient-reported outcome measures (PROs), HDQLIFE Speech Difficulties and HDQLIFE Swallowing Difficulties. METHODS: Thirty-one participants with premanifest or manifest HD, and 31 age- and sex-matched healthy control participants were recruited for this study. Participants completed several PROs [HDQLIFE Speech Difficulties, HDQLIFE Swallowing Difficulties, Communication Participation Item Bank (CPIB)], as well as several clinician-rated assessments of speech and functioning. A computational algorithm designed to detect features of spoken discourse was also examined. Analyses were focused on establishing the reliability and validity of these new measures. RESULTS: Internal consistency was good for Swallowing (Cronbach's alpha = 0.89) and excellent for Speech and the CPIB (both Cronbach's alpha ≥ 0.94), and convergent/discriminant validity was supported. Known groups validity for the PROs was supported by significant group differences among control participants and persons with different stages of HD (all p < 0.0001). All PROs were able to distinguish those with and without clinician-rated dysarthria (likelihood ratios far exceeded the threshold for clinical decision making [all ≥ 3.28]). CONCLUSIONS: Findings support the clinical utility of the HDQLIFE Speech and Swallowing PROs and the CPIB for use across the HD disease spectrum. These PROs also have the potential to be clinically useful in other populations.
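Cronbach's alpha, the internal-consistency statistic reported above, can be computed from a respondents-by-items score matrix. This is a generic sketch of the statistic, not the HDQLIFE scoring code, and the toy data are invented:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy data: 5 respondents answering 4 items on a 1-5 scale
scores = [[4, 4, 5, 4],
          [2, 3, 2, 2],
          [5, 5, 4, 5],
          [3, 3, 3, 3],
          [1, 2, 1, 2]]
alpha = cronbach_alpha(scores)
```

Values of 0.89 ("good") and ≥ 0.94 ("excellent") as reported for the Swallowing and Speech scales indicate that the items largely covary, i.e., measure a single underlying construct.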


Subject(s)
Deglutition Disorders/etiology, Huntington Disease/complications, Psychometrics/methods, Quality of Life/psychology, Speech Disorders/etiology, Adult, Case-Control Studies, Female, Humans, Huntington Disease/pathology, Male, Middle Aged, Patient Reported Outcome Measures, Reproducibility of Results
3.
J Speech Lang Hear Res ; : 1-13, 2024 Oct 08.
Article in English | MEDLINE | ID: mdl-39378266

ABSTRACT

PURPOSE: This work introduces updated transcripts, disfluency annotations, and word timings for FluencyBank, which we refer to as FluencyBank Timestamped. This data set will enable the thorough analysis of how speech processing models (such as speech recognition and disfluency detection models) perform when evaluated with typical speech versus speech from people who stutter (PWS). METHOD: We update the FluencyBank data set, which includes audio recordings from adults who stutter, to explore the robustness of speech processing models. Our update (semi-automated with manual review) includes new transcripts with timestamps and disfluency labels corresponding to each token in the transcript. Our disfluency labels capture typical disfluencies (filled pauses, repetitions, revisions, and partial words), and we explore how speech model performance compares for Switchboard (typical speech) and FluencyBank Timestamped. We present benchmarks for three speech tasks: intended speech recognition, text-based disfluency detection, and audio-based disfluency detection. For the first task, we evaluate how well Whisper performs for intended speech recognition (i.e., transcribing speech without disfluencies). For the next tasks, we evaluate how well a Bidirectional Encoder Representations from Transformers (BERT) text-based model and a Whisper audio-based model perform for disfluency detection. We select these models, BERT and Whisper, as they have shown high accuracies on a broad range of tasks in their language and audio domains, respectively. RESULTS: For the transcription task, we calculate an intended speech word error rate (isWER) between the model's output and the speaker's intended speech (i.e., speech without disfluencies). We find isWER is comparable between Switchboard and FluencyBank Timestamped, but that Whisper transcribes filled pauses and partial words at higher rates in the latter data set. Within FluencyBank Timestamped, isWER increases with stuttering severity.
For the disfluency detection tasks, we find the models detect filled pauses, revisions, and partial words relatively well in FluencyBank Timestamped, but performance drops substantially for repetitions because the models are unable to generalize to the different types of repetitions (e.g., multiple repetitions and sound repetitions) from PWS. We hope that FluencyBank Timestamped will allow researchers to explore closing performance gaps between typical speech and speech from PWS. CONCLUSIONS: Our analysis shows that there are gaps in speech recognition and disfluency detection performance between typical speech and speech from PWS. We hope that FluencyBank Timestamped will contribute to more advancements in training robust speech processing models.
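The isWER metric described above scores a model's output against a reference from which the disfluent tokens have been removed. A minimal sketch — the edit-distance WER and the toy sentences are generic illustrations, not the paper's exact pipeline:

```python
def wer(ref, hyp):
    """Word error rate via Levenshtein distance over word lists."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(r)][len(h)] / len(r)

# Intended speech: the verbatim utterance with disfluent tokens removed
verbatim = "I um I want to to go home"
intended = "I want to go home"
hypothesis = "I um I want to to go home"  # model transcribed the disfluencies

is_wer = wer(intended, hypothesis)
```

Here every disfluency the model faithfully transcribes counts as an insertion against the intended-speech reference, which is why transcribing filled pauses and partial words at higher rates inflates isWER.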

4.
IEEE Trans Affect Comput ; 12(4): 1055-1068, 2021.
Article in English | MEDLINE | ID: mdl-35695825

ABSTRACT

Automatic speech emotion recognition provides computers with critical context to enable user understanding. While methods trained and tested within the same dataset have been shown to be successful, they often fail when applied to unseen datasets. To address this, recent work has focused on adversarial methods to find more generalized representations of emotional speech. However, many of these methods have issues converging, and only involve datasets collected in laboratory conditions. In this paper, we introduce Adversarial Discriminative Domain Generalization (ADDoG), which follows an easier-to-train "meet in the middle" approach. The model iteratively moves representations learned for each dataset closer to one another, improving cross-dataset generalization. We also introduce Multiclass ADDoG, or MADDoG, which extends the proposed method to more than two datasets simultaneously. Our results show consistent convergence for the introduced methods, with significantly improved results when not using labels from the target dataset. We also show how, in most cases, ADDoG and MADDoG can be used to improve upon baseline state-of-the-art methods when target dataset labels are added and in-the-wild data are considered. Even though our experiments focus on cross-corpus speech emotion, these methods could be used to remove unwanted factors of variation in other settings.

5.
Interspeech ; 2021: 1907-1911, 2021.
Article in English | MEDLINE | ID: mdl-39170691

ABSTRACT

Parkinson's disease (PD) is a central nervous system disorder that causes motor impairment. Recent studies have found that people with PD also often suffer from cognitive impairment (CI). While a large body of work has shown that speech can be used to predict motor symptom severity in people with PD, much less has focused on cognitive symptom severity. Existing work has investigated if acoustic features, derived from speech, can be used to detect CI in people with PD. However, these acoustic features are general and are not targeted toward capturing CI. Speech errors and disfluencies provide additional insight into CI. In this study, we focus on read speech, which offers a controlled template from which we can detect errors and disfluencies, and we analyze how errors and disfluencies vary with CI. The novelty of this work is an automated pipeline, including transcription and error and disfluency detection, capable of predicting CI in people with PD. This will enable efficient analyses of how cognition modulates speech for people with PD, leading to scalable speech assessments of CI.

6.
Interspeech ; 2020: 4966-4970, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33244474

ABSTRACT

Huntington disease (HD) is a fatal autosomal dominant neurocognitive disorder that causes cognitive disturbances, neuropsychiatric symptoms, and impaired motor abilities (e.g., gait, speech, voice). Due to its progressive nature, HD treatment requires ongoing clinical monitoring of symptoms. Individuals with the Huntingtin gene mutation, which causes HD, may exhibit a range of speech symptoms as they progress from premanifest to manifest HD. Speech-based passive monitoring has the potential to augment clinical information by more continuously tracking manifestation symptoms. Differentiating between premanifest and manifest HD is an important yet under-studied problem, as this distinction marks the need for increased treatment. In this work we present the first demonstration of how changes in speech can be measured to differentiate between premanifest and manifest HD. To do so, we focus on one speech symptom of HD: distorted vowels. We introduce a set of Filtered Vowel Distortion Measures (FVDM) which we extract from read speech. We show that FVDM, coupled with features from existing literature, can differentiate between premanifest and manifest HD with 80% accuracy.

7.
Interspeech ; 2018: 1898-1902, 2018.
Article in English | MEDLINE | ID: mdl-33241056

ABSTRACT

Speech is a critical biomarker for Huntington Disease (HD), with changes in speech increasing in severity as the disease progresses. Speech analyses are currently conducted using either transcriptions created manually by trained professionals or using global rating scales. Manual transcription is both expensive and time-consuming and global rating scales may lack sufficient sensitivity and fidelity [1]. Ultimately, what is needed is an unobtrusive measure that can cheaply and continuously track disease progression. We present first steps towards the development of such a system, demonstrating the ability to automatically differentiate between healthy controls and individuals with HD using speech cues. The results provide evidence that objective analyses can be used to support clinical diagnoses, moving towards the tracking of symptomatology outside of laboratory and clinical environments.

8.
Article in English | MEDLINE | ID: mdl-27570493

ABSTRACT

Speech contains patterns that can be altered by the mood of an individual. There is an increasing focus on automated and distributed methods to collect and monitor speech from large groups of patients suffering from mental health disorders. However, as the scope of these collections increases, the variability in the data also increases. This variability is due in part to the range in the quality of the devices, which in turn affects the quality of the recorded data, negatively impacting the accuracy of automatic assessment. It is necessary to mitigate variability effects in order to expand the impact of these technologies. This paper explores speech collected from phone recordings for analysis of mood in individuals with bipolar disorder. Two different phones with varying amounts of clipping, loudness, and noise are employed. We describe methodologies for use during preprocessing, feature extraction, and data modeling to correct these differences and make the devices more comparable. The results demonstrate that these pipeline modifications result in statistically significantly higher performance, which highlights the potential of distributed mental health systems.
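One preprocessing correction of the kind described, matching loudness across devices, can be sketched as RMS normalization. This is a generic step under our own assumptions, not the paper's full device-compensation pipeline:

```python
import numpy as np

def rms_normalize(signal, target_rms=0.1):
    """Scale a waveform so its RMS energy matches a target level,
    making recordings from quiet and loud devices comparable."""
    signal = np.asarray(signal, dtype=float)
    rms = np.sqrt(np.mean(signal ** 2))
    if rms == 0:
        return signal
    return signal * (target_rms / rms)

# Two toy "recordings" of the same tone captured at different loudness
t = np.linspace(0, 1, 8000)
quiet = 0.01 * np.sin(2 * np.pi * 440 * t)
loud = 0.5 * np.sin(2 * np.pi * 440 * t)

a = rms_normalize(quiet)
b = rms_normalize(loud)
```

After normalization the two versions are identical, so loudness-sensitive features extracted downstream no longer encode which phone made the recording; clipping and additive noise require separate handling.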

9.
Article in English | MEDLINE | ID: mdl-27630535

ABSTRACT

Speech patterns are modulated by the emotional and neurophysiological state of the speaker. There exists a growing body of work that computationally examines this modulation in patients suffering from depression, autism, and post-traumatic stress disorder. However, the majority of the work in this area focuses on the analysis of structured speech collected in controlled environments. Here we expand on the existing literature by examining bipolar disorder (BP). BP is characterized by mood transitions, varying from a healthy euthymic state to states characterized by mania or depression. The speech patterns associated with these mood states provide a unique opportunity to study the modulations characteristic of mood variation. We describe methodology to collect unstructured speech continuously and unobtrusively via the recording of day-to-day cellular phone conversations. Our pilot investigation suggests that manic and depressive mood states can be recognized from this speech data, providing new insight into the feasibility of unobtrusive, unstructured, and continuous speech-based wellness monitoring for individuals with BP.
