Results 1 - 3 of 3
1.
Schizophr Bull; 49(Suppl_2): S86-S92, 2023 03 22.
Article in English | MEDLINE | ID: mdl-36946526

ABSTRACT

This workshop summary on natural language processing (NLP) markers for psychosis and other psychiatric disorders presents some of the clinical and research issues that NLP markers might address, along with some of the activities needed to move in that direction. We propose that NLP markers would best be developed in the context of research efforts to map out the underlying mechanisms of psychosis and other disorders. In this workshop, we identified some of the challenges in developing and implementing NLP-marker-based Clinical Decision Support Systems (CDSSs) in psychiatric practice, especially with respect to psychosis. Of note, a CDSS is meant to enhance clinicians' decision-making by providing additional relevant information, primarily through software, although CDSSs are not without risks. In psychiatry, a field that relies on subjective clinical ratings that condense rich temporal behavioral information, the inclusion of quantitative computational NLP markers can plausibly lead to operationalized decision models in place of idiosyncratic ones, although ethical issues must always remain paramount.
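To make the idea of "quantitative NLP markers" concrete, the sketch below computes two simple transcript-level measures, lexical diversity (type-token ratio) and mean utterance length, that are commonly proposed in this literature. The choice of these two markers is an illustrative assumption, not the workshop's recommendation; a real CDSS would draw on validated markers and models.

```python
import re

def nlp_markers(transcript: str) -> dict:
    """Compute two illustrative NLP markers from a speech transcript.

    These specific markers (type-token ratio, mean utterance length) are
    assumptions for demonstration, not measures endorsed by the workshop.
    """
    # Split the transcript into utterances on sentence-final punctuation.
    utterances = [u for u in re.split(r"[.!?]+", transcript) if u.strip()]
    # Tokenize into lowercase word tokens.
    words = re.findall(r"[a-zA-Z']+", transcript.lower())
    if not words or not utterances:
        return {"type_token_ratio": 0.0, "mean_utterance_length": 0.0}
    return {
        # Lexical diversity: unique word types over total word tokens.
        "type_token_ratio": len(set(words)) / len(words),
        # Average number of words per utterance.
        "mean_utterance_length": len(words) / len(utterances),
    }
```

Markers like these are appealing for a CDSS precisely because they are operationalized: the same transcript always yields the same numbers, unlike an idiosyncratic clinical impression.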


Subject(s)
Decision Support Systems, Clinical; Mental Disorders; Psychotic Disorders; Humans; Natural Language Processing; Linguistics; Psychotic Disorders/diagnosis
2.
Eur J Psychotraumatol; 11(1): 1726672, 2020.
Article in English | MEDLINE | ID: mdl-32284819

ABSTRACT

Background: Identifying and addressing hotspots is a key element of imaginal exposure in Brief Eclectic Psychotherapy for PTSD (BEPP). Research shows that treatment effectiveness is associated with focusing on these hotspots and that hotspot frequency and characteristics may serve as indicators of treatment success. Objective: This study aims to develop a model that automatically recognizes hotspots from text and speech features, which might be an efficient way to track patient progress and predict treatment efficacy. Method: A multimodal supervised classification model was developed based on analog tape recordings and transcripts of imaginal exposure sessions of 10 successful and 10 non-successful treatment completers. Data mining and machine learning techniques were used to extract and select text features (e.g. words and word combinations) and speech features (e.g. speech rate, pauses between words) that distinguish between 'hotspot' (N = 37) and 'non-hotspot' (N = 45) phases during exposure sessions. Results: The developed model achieved high training performance (mean F1-score of 0.76) but low testing performance (mean F1-score of 0.52). This shows that the selected text and speech features clearly distinguish hotspots from non-hotspots in the current data set, but will probably not recognize hotspots from new input data very well. Conclusions: To improve the recognition of new hotspots, the described methodology should be applied to a larger, higher-quality (digitally recorded) data set. As such, this study should be seen mainly as a proof of concept, demonstrating the possible application and contribution of automatic text and audio analysis to therapy process research in PTSD and mental health research in general.


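The abstract names speech rate and inter-word pauses as speech features. The sketch below extracts those two features from word-level timestamps and applies a toy threshold rule in place of the study's trained classifier. The `Word` structure, field names, and the cutoff values are assumptions for illustration; the actual model selected features via data mining and supervised learning.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Word:
    """One transcribed word with start/end times in seconds (assumed format)."""
    text: str
    start: float
    end: float

def speech_features(words: List[Word]) -> dict:
    """Extract the two speech features named in the abstract:
    speech rate and pauses between words."""
    duration = words[-1].end - words[0].start
    # Silence between the end of each word and the start of the next.
    pauses = [b.start - a.end for a, b in zip(words, words[1:])]
    return {
        "speech_rate": len(words) / duration,  # words per second
        "mean_pause": sum(pauses) / len(pauses) if pauses else 0.0,
    }

def is_hotspot(features: dict) -> bool:
    """Toy stand-in for the trained classifier: slow, pause-heavy speech
    is flagged as a candidate hotspot. Both cutoffs are purely illustrative."""
    return features["speech_rate"] < 2.0 and features["mean_pause"] > 0.3
```

The gap the authors report between training F1 (0.76) and testing F1 (0.52) is typical overfitting on a small data set (82 labeled phases from 20 patients), which is why they call for a larger, digitally recorded corpus.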

3.
Front Artif Intell; 3: 10, 2020.
Article in English | MEDLINE | ID: mdl-33733130

ABSTRACT

This paper discusses how the transcription hurdle in dialect corpus building can be cleared. While corpus analysis has gained considerable popularity in linguistic research, dialect corpora are still relatively scarce. This scarcity can be attributed to several factors, one of which is the challenging nature of transcribing dialects, given the lack of both orthographic norms for many dialects and speech technological tools trained on dialect data. This paper addresses the questions of (i) how dialects can be transcribed efficiently and (ii) whether speech technological tools can lighten the transcription work. These questions are tackled using the Southern Dutch dialects (SDDs) as a case study, for which the usefulness of automatic speech recognition (ASR), respeaking, and forced alignment is considered. Tests with these tools indicate that dialects still constitute a major challenge for speech technology. In the case of the SDDs, the decision was made to use speech technology only for the word-level segmentation of the audio files, as the transcription itself could not be sped up by ASR tools. The discussion does, however, indicate that the usefulness of ASR and related tools for a dialect corpus project is strongly determined by the sound quality of the dialect recordings, the availability of dialect-specific statistical models, the degree of linguistic differentiation between the dialects and the standard language, and the goals the transcripts have to serve.
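The output of the word-level segmentation step the authors kept is a list of (word, start, end) spans over the audio. As a minimal sketch of that output format only, the function below allots each word a share of the audio proportional to its character length; this is a crude stand-in, not the forced-alignment method used in the project, which aligns words against acoustic models.

```python
def naive_word_segmentation(transcript: str, audio_seconds: float):
    """Produce (word, start, end) spans by splitting the audio duration
    proportionally to word length in characters.

    A real pipeline would use a forced aligner with acoustic models; this
    stand-in only illustrates the shape of word-level segmentation output.
    """
    words = transcript.split()
    total_chars = sum(len(w) for w in words)
    segments, t = [], 0.0
    for w in words:
        dur = audio_seconds * len(w) / total_chars
        segments.append((w, round(t, 3), round(t + dur, 3)))
        t += dur
    return segments
```

Even this toy version shows why segmentation is the easier subtask: it needs only a transcript and a duration, whereas full ASR must recover the words themselves from dialect audio for which no trained models exist.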
