Results 1-7 of 7
1.
BMC Psychiatry; 23(1): 920, 2023 Dec 8.
Article in English | MEDLINE | ID: mdl-38066477

ABSTRACT

Attention deficit hyperactivity disorder (ADHD) is the most prevalent neuropsychiatric disorder in the world. Currently, the diagnosis is based mainly on interviews, resulting in uncertainties in the clinical assessment. While some neuropsychological tests are used, their specificity and selectivity are low, and more reliable biomarkers are desirable. Previous research indicates that ADHD is associated with morphological changes in the cerebellum, which is essential for motor ability and timing. Here, we compared 29 children diagnosed with ADHD to 96 age-matched controls on prism adaptation, eyeblink conditioning, and timed motor performance in a finger tapping task. Prism adaptation and timing precision in the finger tapping task, but not performance on eyeblink conditioning, differed between the ADHD and control groups, as well as between children with and without Deficits in Attention, Motor control, and Perception (DAMP), a more severe form of ADHD. The results suggest that finger tapping can be used as a cheap, objective, and unbiased biomarker to complement current diagnostic procedures.
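The abstract does not specify how timing precision was scored, so the sketch below is a hedged illustration of one common approach: quantifying finger-tapping precision as the variability of inter-tap intervals. The 500 ms target interval, the metric names, and the simulated tap times are assumptions for demonstration, not details from the study.

```python
# A minimal sketch, assuming timing precision is summarized as inter-tap-
# interval (ITI) variability; the study's actual scoring may differ.
import numpy as np

def tapping_precision(tap_times_s: np.ndarray) -> dict:
    """Summarize timing precision from a series of tap timestamps (in seconds)."""
    intervals = np.diff(tap_times_s)  # inter-tap intervals
    return {
        "mean_iti_ms": 1000 * intervals.mean(),
        "sd_iti_ms": 1000 * intervals.std(ddof=1),       # lower SD = higher precision
        "cv": intervals.std(ddof=1) / intervals.mean(),  # scale-free variability
    }

# Simulated example: taps aimed at a hypothetical 500 ms beat, with 20 ms jitter.
rng = np.random.default_rng(0)
taps = np.cumsum(rng.normal(loc=0.5, scale=0.02, size=60))
print(tapping_precision(taps))
```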


Subjects
Attention Deficit Disorder with Hyperactivity, Child, Humans, Attention Deficit Disorder with Hyperactivity/psychology, Psychomotor Performance, Cerebellum, Neuropsychological Tests
2.
Psychiatry Res; 333: 115667, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38290286

ABSTRACT

In this narrative review, we survey recent empirical evaluations of AI-based language assessments and make the case that large language models are poised to change standardized psychological assessment. Artificial intelligence has been undergoing a purported "paradigm shift" initiated by a new class of machine learning models: large language models (e.g., BERT, LLaMA, and the models behind ChatGPT). These models have led to unprecedented accuracy on most computerized language processing tasks, from web search to automatic machine translation and question answering, while their dialogue-based forms, such as ChatGPT, have captured the interest of over a million users. The success of large language models is mostly attributed to their capability to numerically represent words in their context, long a weakness of previous attempts to automate psychological assessment from language. While potential applications for automated therapy are beginning to be studied on the heels of ChatGPT's success, here we present evidence suggesting that, with thorough validation in targeted deployment scenarios, AI's newest technology can move mental health assessment away from rating scales and toward how people naturally communicate: in language.
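To make the review's central claim concrete, here is a minimal sketch of the capability it credits for the improvement: a pretrained transformer assigning the same word different numerical representations in different contexts. It assumes the Hugging Face transformers and torch packages are available; the model choice (bert-base-uncased) and the example sentences are illustrative, not taken from the article.

```python
# A hedged sketch of context-dependent word representations with a pretrained
# transformer. Model choice and example sentences are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def token_vector(text: str, word: str) -> torch.Tensor:
    """Return the contextual embedding of `word` within `text`."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (tokens, 768)
    idx = inputs["input_ids"][0].tolist().index(tokenizer.convert_tokens_to_ids(word))
    return hidden[idx]

# The same word gets a different vector depending on its context:
a = token_vector("I feel down today.", "down")
b = token_vector("Scroll down the page.", "down")
print(torch.cosine_similarity(a, b, dim=0))  # noticeably below 1.0
```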


Subjects
Artificial Intelligence, Language, Humans, Machine Learning
3.
medRxiv; 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38699296

ABSTRACT

Accurate assessments of symptoms and diagnoses are essential for health research and clinical practice but face many challenges. The absence of a single error-free measure is currently addressed by assessment methods in which experts review several sources of information to achieve a more accurate, or best-estimate, assessment. Three bodies of work spanning medicine, psychiatry, and psychology propose similar assessment methods: the Expert Panel, the Best-Estimate Diagnosis, and the Longitudinal Expert All Data (LEAD). However, the quality of such best-estimate assessments is typically very difficult to evaluate because the assessment methods are poorly reported, and where they are reported, the reporting quality varies substantially. Here we tackle this gap by developing reporting guidelines for such studies, using a four-stage approach: 1) drafting reporting standards, accompanied by rationales and empirical evidence, which were further developed with a patient organization for depression; 2) incorporating expert feedback through a two-round Delphi procedure; 3) refining the guideline based on an expert consensus meeting; and 4) testing the guideline by i) having two researchers apply it and ii) using it to examine the extent to which previously published articles report the standards. The last step also demonstrates the need for the guideline: 18-58% (mean = 33%) of the standards were not reported across fifteen randomly selected studies. The LEADING guideline comprises 20 reporting standards organized into four groups: the Longitudinal design group; the Appropriate data group; the Evaluation (experts, materials, and procedures) group; and the Validity group. We hope that the LEADING guideline will assist researchers in planning, reporting, and evaluating research that aims to achieve best-estimate assessments.

4.
Sci Rep; 12(1): 3918, 2022 Mar 10.
Article in English | MEDLINE | ID: mdl-35273198

ABSTRACT

We show that, using a recent breakthrough in artificial intelligence (transformers), psychological assessments from text responses can approach the theoretical upper limit in accuracy, converging with standard psychological rating scales. Text responses use people's primary form of communication, natural language, and have been suggested as a more ecologically valid response format than the closed-ended rating scales that dominate social science. However, previous language analysis techniques left a gap between how accurately they converged with standard rating scales and how well rating scales converge with themselves, a theoretical upper limit in accuracy. Most recently, AI-based language analysis has gone through a transformation as nearly all of its applications, from web search to personalized assistants (e.g., Alexa and Siri), have shown unprecedented improvement by using transformers. We evaluate transformers for estimating psychological well-being from questionnaire text responses and descriptive word responses, and find convergence with rating scales that approaches the theoretical upper limit (Pearson r = 0.85, p < 0.001, N = 608; in line with most metrics of rating-scale reliability). These findings suggest an avenue for modernizing the ubiquitous questionnaire and ultimately opening doors to a greater understanding of the human condition.
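The evaluation pattern described here (predict a rating-scale score from transformer embeddings of text responses, then measure convergence with Pearson r) can be sketched as below. The embeddings, scores, and the choice of ridge regression are placeholder assumptions for illustration; they are not the study's data or necessarily its exact model.

```python
# A hedged sketch of the evaluation pattern the abstract describes: transformer
# embeddings of text responses regressed onto rating-scale scores, with
# convergence measured by Pearson r. X and y below are random placeholders,
# NOT the study's data; ridge regression is an assumed (common) choice.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(608, 768))                   # placeholder embeddings (N x dim)
y = X[:, :5].sum(axis=1) + rng.normal(size=608)   # placeholder well-being scores

model = RidgeCV(alphas=np.logspace(-3, 3, 13))    # regularization for wide X
y_pred = cross_val_predict(model, X, y, cv=10)    # out-of-sample predictions
r, p = pearsonr(y, y_pred)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")        # the paper reports r = 0.85
```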


Subjects
Artificial Intelligence, Language, Communication, Humans, Reproducibility of Results, Surveys and Questionnaires
5.
Front Psychol; 12: 602581, 2021.
Article in English | MEDLINE | ID: mdl-34149500

ABSTRACT

BACKGROUND: Question-based computational language assessments (QCLA) of mental health, based on self-reported and freely generated word responses and analyzed with artificial intelligence, are a potential complement to rating scales for identifying mental health issues. This study aimed to examine to what extent this method captures items related to the primary and secondary symptoms associated with Major Depressive Disorder (MDD) and Generalized Anxiety Disorder (GAD) described in the Diagnostic and Statistical Manual of Mental Disorders (DSM-5). We investigated whether the word responses that participants generated contained information on all, or some, of the criteria that define MDD and GAD, using symptom-based rating scales that are commonly used in clinical research and practice. METHOD: Participants (N = 411) described their mental health with freely generated words and rating scales relating to depression and worry/anxiety. Word responses were quantified and analyzed using natural language processing and machine learning. RESULTS: The QCLA correlated significantly with the individual items connected to the DSM-5 diagnostic criteria of MDD (PHQ-9; Pearson's r = 0.30-0.60, p < 0.001) and GAD (GAD-7; Pearson's r = 0.41-0.52, p < 0.001; PSWQ-8; Spearman's rho = 0.52-0.63, p < 0.001) on the respective rating scales. Items measuring primary criteria (cognitive and emotional aspects) yielded higher predictability than secondary criteria (behavioral aspects). CONCLUSION: Together, these results suggest that QCLA may be able to complement rating scales in measuring mental health in clinical settings. The approach carries the potential to personalize assessments and contributes to the ongoing discussion regarding the diagnostic heterogeneity of depression.
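The item-level analysis reported in RESULTS can be sketched as below: correlate a language-based score with each rating-scale item using Pearson and Spearman coefficients. The column names and all data are hypothetical placeholders, and the QCLA score here is a stand-in for whatever the study's language model actually produced.

```python
# A minimal sketch of item-level correlations between a language-based score
# and rating-scale items. Column names and data are hypothetical placeholders.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(2)
items = pd.DataFrame(
    rng.integers(0, 4, size=(411, 9)),                # PHQ-9 items are scored 0-3
    columns=[f"phq9_item{i}" for i in range(1, 10)],  # hypothetical column names
)
qcla_score = items.mean(axis=1) + rng.normal(scale=0.3, size=411)  # placeholder

for col in items.columns:
    r_p, _ = pearsonr(qcla_score, items[col])
    r_s, _ = spearmanr(qcla_score, items[col])
    print(f"{col}: Pearson r = {r_p:.2f}, Spearman rho = {r_s:.2f}")
```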

6.
Psychol Methods; 24(1): 92-115, 2019 Feb.
Article in English | MEDLINE | ID: mdl-29963879

ABSTRACT

Psychological constructs, such as emotions, thoughts, and attitudes, are often measured by asking individuals to reply to questions using closed-ended numerical rating scales. However, when asking people about their state of mind in a natural context ("How are you?"), we receive open-ended answers using words ("Fine and happy!") and not closed-ended answers using numbers ("7") or categories ("A lot"). Nevertheless, to date it has been difficult to objectively quantify responses to open-ended questions. We develop an approach using open-ended questions in which the responses are analyzed using natural language processing (latent semantic analysis). This approach of using open-ended, semantic questions is compared with traditional rating scales in nine studies (N = 92-854) across two different study paradigms. The first paradigm requires participants to describe psychological aspects of external stimuli (facial expressions), and the second asks participants to report their subjective well-being and mental health problems. The results demonstrate that the approach using semantic questions yields good statistical properties, with competitive or higher validity and reliability compared with corresponding numerical rating scales. As these semantic measures are based on natural language and measure, differentiate, and describe psychological constructs, they have the potential to complement and extend traditional rating scales.
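The latent semantic analysis (LSA) step used here to quantify open-ended word responses can be sketched as a term-document matrix reduced with truncated SVD, so that responses become comparable vectors. The toy corpus and the 2-dimensional space below are assumptions for illustration; a real application would estimate the space from a large corpus with far more dimensions.

```python
# A minimal sketch of LSA: a TF-IDF term-document matrix reduced with
# truncated SVD so responses become dense, comparable vectors.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.pipeline import make_pipeline

# Hypothetical open-ended answers to "How are you?"
responses = [
    "fine and happy",
    "content and cheerful",
    "worried and anxious",
]

lsa = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2, random_state=0))
vectors = lsa.fit_transform(responses)  # one dense vector per response

# Responses can now be scored and compared numerically:
print(cosine_similarity(vectors))
```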


Subjects
Behavioral Symptoms/diagnosis, Facial Expression, Natural Language Processing, Personal Satisfaction, Psychiatric Status Rating Scales, Psychology/methods, Qualitative Research, Semantics, Adolescent, Adult, Aged, Female, Humans, Male, Middle Aged, Young Adult
7.
Front Behav Neurosci; 12: 299, 2018.
Article in English | MEDLINE | ID: mdl-30559655

ABSTRACT

Eyeblink conditioning is one of the most popular experimental paradigms for studying the neural mechanisms underlying learning and memory. A key parameter in eyeblink conditioning is the interstimulus interval (ISI), the time between the onset of the conditional stimulus (CS) and the onset of the unconditional stimulus (US). Though previous studies have examined how the ISI affects learning, there is no clear consensus on which ISI is most effective, and different researchers use different ISIs. Importantly, the brain undergoes changes throughout life, with significant cerebellar growth in adolescence, which could mean that different ISIs are called for in children, adolescents, and adults. Moreover, the fact that animals are often trained with a shorter ISI than humans makes direct comparisons problematic. In this study, we compared eyeblink conditioning in young adolescents aged 10-15 years and adults, using one short ISI (300 ms) and one long ISI (500 ms). The results demonstrate that young adolescents and adults produce a higher percentage of conditioned responses (CRs) when trained with a 500 ms ISI compared to a 300 ms ISI. The results also show that learning is better in adults, especially for the shorter ISI.
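The outcome measure compared across groups, the percentage of conditioned responses, can be sketched as below. All trial data are simulated placeholders whose CR probabilities were chosen only to echo the qualitative pattern reported (500 ms > 300 ms; adults > adolescents), not the study's actual rates.

```python
# A minimal sketch of the outcome measure above: percentage of conditioned
# responses (CRs) per participant, compared across ISI conditions.
# Trial data are simulated placeholders, not the study's results.
import numpy as np

def percent_crs(cr_per_trial) -> float:
    """Percentage of trials on which a CR occurred."""
    return 100 * np.asarray(cr_per_trial, dtype=float).mean()

rng = np.random.default_rng(3)
# Hypothetical: 20 participants x 100 trials per group.
groups = {
    "adults, ISI 500 ms": rng.random((20, 100)) < 0.60,
    "adults, ISI 300 ms": rng.random((20, 100)) < 0.45,
    "adolescents, ISI 500 ms": rng.random((20, 100)) < 0.50,
    "adolescents, ISI 300 ms": rng.random((20, 100)) < 0.30,
}
for name, trials in groups.items():
    print(f"{name}: mean %CR = {np.mean([percent_crs(t) for t in trials]):.1f}")
```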
