Results 1 - 20 of 33
1.
Heliyon ; 10(18): e38056, 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39381244

ABSTRACT

Objective: This article uses the framework of Schwartz's values theory to examine whether the values-like profiles embedded within large language models (LLMs) affect the ethical decision-making dilemmas faced in primary care. It specifically aims to evaluate whether each LLM exhibits a distinct values-like profile, assess its alignment with general population values, and determine whether these latent values influence clinical recommendations. Methods: The Portrait Values Questionnaire-Revised (PVQ-RR) was submitted to each LLM (Claude, Bard, GPT-3.5, and GPT-4) 20 times to ensure reliable and valid responses. Their responses were compared to a benchmark derived from an international sample of over 53,000 culturally diverse respondents who completed the PVQ-RR. Four vignettes depicting prototypical professional quandaries involving conflicts between competing values were then presented to the LLMs. The option selected by each LLM and the strength of its recommendation were evaluated to determine whether the underlying values-like profiles affect output. Results: Each LLM demonstrated a unique values-like profile. Universalism and self-direction were prioritized, while power and tradition were assigned less importance than in population benchmarks, suggesting potential Western-centric biases. In the four clinical vignettes involving value conflicts, preliminary indications suggested that the embedded values-like profiles influence recommendations. Significant variance in the strength of confidence in the chosen recommendations emerged between models, suggesting that further vetting is required before LLMs can be relied on as judgment aids. Overall, however, the selection of preferences aligned with the models' intrinsic value hierarchies. Conclusion: The distinct values-like profiles embedded within LLMs shape ethical decision-making, which carries implications for their integration in primary care settings serving diverse populations. For context-appropriate, equitable delivery of AI-assisted healthcare globally, it is essential that LLMs be tailored to align with local cultural outlooks.
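As an illustration of the repeated-administration design described above, the following minimal Python sketch shows how the PVQ-RR could be submitted to an LLM 20 times and the Likert responses averaged per value. The `ask_model` function is a hypothetical placeholder for a real API call, and the item-to-value mapping is abbreviated for illustration.

```python
import random
from statistics import mean, stdev

# Hypothetical stand-in for a real LLM API call: returns a 1-6 Likert
# rating ("How much is this person like you?") for one PVQ-RR item.
def ask_model(model: str, item: str) -> int:
    return random.randint(1, 6)  # placeholder response

# Abbreviated item-to-value mapping; the full PVQ-RR has 57 items
# covering 19 values (one illustrative paraphrased item per value here).
ITEMS = {
    "universalism":   ["It is important to them to protect nature."],
    "self-direction": ["It is important to them to form their own opinions."],
    "power":          ["It is important to them to be wealthy."],
    "tradition":      ["It is important to them to follow customs."],
}

N_RUNS = 20  # the questionnaire was administered 20 times per model

def value_profile(model: str) -> dict:
    """Mean and SD of ratings per value across repeated administrations."""
    profile = {}
    for value, items in ITEMS.items():
        scores = [ask_model(model, it) for _ in range(N_RUNS) for it in items]
        profile[value] = (round(mean(scores), 2), round(stdev(scores), 2))
    return profile

for llm in ("Claude", "Bard", "GPT-3.5", "GPT-4"):
    print(llm, value_profile(llm))
```

Centering each model's ratings on its own mean item score (the standard ipsatization step for Schwartz value data) would be the natural next step before comparing profiles against the population benchmark.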

2.
JMIR Ment Health ; 11: e58011, 2024 Oct 17.
Article in English | MEDLINE | ID: mdl-39417792

ABSTRACT

Knowledge has become more open and accessible to a large audience with the "democratization of information" facilitated by technology. This paper provides a sociohistorical perspective for the theme issue "Responsible Design, Integration, and Use of Generative AI in Mental Health." It evaluates ethical considerations in using generative artificial intelligence (GenAI) to democratize mental health knowledge and practice. It explores the historical context of democratizing information, transitioning from restricted access to widespread availability due to the internet, open-source movements, and, most recently, GenAI technologies such as large language models. The paper highlights why GenAI technologies represent a new phase in the democratization movement, offering unparalleled access to highly advanced technology as well as information. In the realm of mental health, this requires delicate and nuanced ethical deliberation. Integrating GenAI into mental health care may allow, among other things, improved accessibility to mental health care, personalized responses, and conceptual flexibility, and could facilitate a flattening of the traditional hierarchies between health care providers and patients. At the same time, it also entails significant risks and challenges that must be carefully addressed. To navigate these complexities, the paper proposes a strategic questionnaire for assessing artificial intelligence-based mental health applications. This tool evaluates both benefits and risks, emphasizing the need for a balanced and ethical approach to GenAI integration in mental health. The paper calls for a cautious yet positive approach to GenAI in mental health, advocating for the active engagement of mental health professionals in guiding GenAI development. It emphasizes the importance of ensuring that GenAI advancements are not only technologically sound but also ethically grounded and patient-centered.


Subject(s)
Artificial Intelligence; Mental Health; Artificial Intelligence/ethics; Humans; Mental Health/ethics; Democracy
3.
J Clin Psychiatry ; 85(4)2024 Oct 02.
Article in English | MEDLINE | ID: mdl-39361412

ABSTRACT

Objective: Suicide is a critical global health concern. Research indicates that generative artificial intelligence (GenAI) and large language models, such as generative pretrained transformer-3 (GPT-3) and GPT-4, can evaluate suicide risk comparably to experts, yet the criteria these models use are unclear. This study explores how variations in prompts, specifically regarding past suicide attempts, gender, and age, influence the risk assessments provided by ChatGPT-3.5 and ChatGPT-4. Methods: Using a controlled, scenario-based approach, 8 vignettes were created. Both ChatGPT-3.5 and ChatGPT-4 were used to predict the likelihood of serious suicide attempts, suicide attempts, and suicidal thoughts. A univariate 3-way analysis of variance was conducted to analyze the effects of the independent variables (previous suicide attempts, gender, and age) on the dependent variables (likelihood of serious suicide attempts, suicide attempts, and suicidal thoughts). Results: Both ChatGPT-3.5 and ChatGPT-4 recognized the importance of previous suicide attempts in predicting severe suicide risk and suicidal thoughts. ChatGPT-4 also identified gender differences, associating men with a higher risk, while both models disregarded age as a risk factor. Interaction analysis revealed that ChatGPT-3.5 associated past attempts with a higher likelihood of suicidal thoughts in men, whereas ChatGPT-4 showed an increased risk for women. Conclusions: The study highlights ChatGPT-3.5 and ChatGPT-4's potential in suicide risk evaluation, emphasizing the importance of prior attempts and gender, while noting differences in their handling of interactive effects and the negligible role of age. These findings reflect the complexity of GenAI decision-making. While promising for suicide risk assessment, these models require careful application due to their limitations and real-world complexities.
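The 3-way factorial design described in the Methods can be sketched in a few lines with statsmodels. The ratings below are simulated, and the factor levels and the 0-10 likelihood scale are assumptions made for illustration rather than the study's actual data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)

# Simulated vignette ratings: 2 x 2 x 2 design, 10 repetitions per cell
# (previous attempts x gender x age), outcome on an assumed 0-10 scale.
cells = [(p, g, a) for p in ("yes", "no")
                   for g in ("male", "female")
                   for a in ("young", "old")]
rows = [{"prev_attempt": p, "gender": g, "age": a,
         "risk": int(rng.integers(0, 11))}
        for p, g, a in cells for _ in range(10)]
df = pd.DataFrame(rows)

# Univariate three-way ANOVA with all main effects and interactions.
model = ols("risk ~ C(prev_attempt) * C(gender) * C(age)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```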


Subject(s)
Artificial Intelligence; Suicide, Attempted; Humans; Suicide, Attempted/statistics & numerical data; Suicide, Attempted/psychology; Male; Female; Risk Assessment; Sex Factors; Age Factors; Adult; Suicidal Ideation; Middle Aged; Young Adult; Adolescent; Risk Factors
4.
Eur J Paediatr Neurol ; 52: 1-9, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38968910

ABSTRACT

BACKGROUND: Children with Attention Deficit Hyperactivity Disorder (ADHD) demonstrate a heterogeneous sensorimotor, emotional, and cognitive profile. Comorbid sensorimotor imbalance, anxiety, and spatial disorientation are particularly prevalent among their non-core symptoms. Studies in other populations have described these three comorbid dysfunctions in the context of vestibular hypofunction. OBJECTIVE: To test whether there is a subgroup of children with ADHD who have vestibular hypofunction presenting with concomitant imbalance, anxiety, and spatial disorientation. METHODS: Children with ADHD-only (n = 28), ADHD + Developmental Coordination Disorder (ADHD + DCD; n = 38), and Typical Development (TD; n = 19) were evaluated for vestibular function with the Dynamic Visual Acuity test (DVA-t), balance with the Bruininks-Oseretsky Test of Motor Proficiency (BOT-2), panic anxiety with the Screen for Child Anxiety Related Emotional Disorders questionnaire-Child version (SCARED-C), and spatial navigation with the Triangle Completion test (TC-t). RESULTS: Children with ADHD vs. TD presented with a high rate of vestibular hypofunction (65% vs. 0%), imbalance (42% vs. 0%), panic anxiety (27% vs. 11%), and spatial disorientation (30% vs. 5%). Children with ADHD + DCD exhibited more frequent and more severe vestibular hypofunction and imbalance than children with ADHD-only (74% vs. 54% and 58% vs. 21%, respectively). A concomitant presence of imbalance, anxiety, and spatial disorientation was observed in 33% of children with ADHD, all of whom shared vestibular hypofunction. CONCLUSIONS: Vestibular hypofunction may be the common pathophysiology of imbalance, anxiety, and spatial disorientation in children. These comorbidities are preferentially present in children with ADHD + DCD rather than ADHD-only, and are thus likely related to DCD rather than to ADHD itself. Children with this profile may benefit from a vestibular rehabilitation intervention.


Subject(s)
Attention Deficit Disorder with Hyperactivity; Motor Skills Disorders; Vestibular Diseases; Humans; Attention Deficit Disorder with Hyperactivity/physiopathology; Male; Female; Child; Vestibular Diseases/physiopathology; Vestibular Diseases/complications; Motor Skills Disorders/etiology; Motor Skills Disorders/physiopathology; Motor Skills Disorders/epidemiology; Adolescent; Comorbidity; Postural Balance/physiology; Anxiety/etiology
5.
PeerJ ; 12: e17468, 2024.
Article in English | MEDLINE | ID: mdl-38827287

ABSTRACT

The aim of this study was to evaluate the effectiveness of ChatGPT-3.5 and ChatGPT-4 in incorporating critical risk factors, namely a history of depression and access to weapons, into suicide risk assessments. Both models assessed suicide risk using scenarios that featured individuals with and without a history of depression and access to weapons. The models estimated the likelihood of suicidal thoughts, suicide attempts, serious suicide attempts, and suicide-related mortality on a Likert scale. A multivariate three-way ANOVA with Bonferroni post hoc tests was conducted to examine the impact of the aforementioned independent factors (history of depression and access to weapons) on these outcome variables. Both models identified a history of depression as a significant suicide risk factor. ChatGPT-4 demonstrated a more nuanced understanding of the relationship between depression, access to weapons, and suicide risk. In contrast, ChatGPT-3.5 displayed limited insight into this complex relationship. ChatGPT-4 consistently assigned higher severity ratings to suicide-related variables than did ChatGPT-3.5. The study highlights the potential of these two models, particularly ChatGPT-4, to enhance suicide risk assessment by considering complex risk factors.
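For the Bonferroni post hoc step mentioned in this abstract, a minimal sketch is shown below; the uncorrected p values are hypothetical stand-ins for pairwise follow-up comparisons.

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical uncorrected p values from pairwise follow-up comparisons.
raw_p = [0.004, 0.021, 0.048, 0.300]

# Bonferroni correction: each p value is multiplied by the number of tests.
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="bonferroni")
for p, pa, r in zip(raw_p, p_adj, reject):
    print(f"raw={p:.3f}  bonferroni={pa:.3f}  significant={r}")
```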


Subject(s)
Depression; Suicide; Weapons; Adult; Female; Humans; Male; Middle Aged; Young Adult; Depression/psychology; Depression/epidemiology; Risk Assessment; Risk Factors; Suicidal Ideation; Suicide/psychology; Suicide/statistics & numerical data; Suicide Prevention; Suicide, Attempted/psychology; Suicide, Attempted/statistics & numerical data; Weapons/statistics & numerical data
6.
JMIR Ment Health ; 11: e54781, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38787297

ABSTRACT

This paper explores a significant shift in the field of mental health in general, and psychotherapy in particular, following generative artificial intelligence's new capabilities in processing and generating humanlike language. Following Freud, this lingo-technological development is conceptualized as the "fourth narcissistic blow" that science inflicts on humanity. We argue that this narcissistic blow has a potentially dramatic influence on perceptions of human society, interrelationships, and the self. We should, accordingly, expect dramatic changes in perceptions of the therapeutic act following the emergence of what we term the artificial third in the field of psychotherapy. The introduction of an artificial third marks a critical juncture, prompting us to ask the following core questions that address two basic elements of critical thinking, namely transparency and autonomy: (1) What is this new artificial presence in therapy relationships? (2) How does it reshape our perception of ourselves and our interpersonal dynamics? (3) What remains of the irreplaceable human elements at the core of therapy? Given the ethical implications that arise from these questions, this paper proposes that the artificial third can be a valuable asset when applied with insight and ethical consideration, enhancing but not replacing the human touch in therapy.


Subject(s)
Artificial Intelligence; Psychotherapy; Artificial Intelligence/ethics; Humans; Psychotherapy/methods; Psychotherapy/ethics
7.
Omega (Westport) ; : 302228241254559, 2024 May 22.
Article in English | MEDLINE | ID: mdl-38776395

ABSTRACT

This study examined the roles of resilience and willingness to seek psychological help in influencing post-traumatic growth (PTG) among 173 emerging adults who experienced parental loss during their school years. A positive relationship was found between resilience, willingness to seek psychological help, and PTG. Participants who endured their loss more than five years earlier manifested greater PTG (on the New Possibilities, Spiritual Change, and Appreciation of Life subscales) relative to those with more recent losses. The multiple regression model was significant, accounting for 33% of the variance in PTG. Both resilience and willingness to seek psychological help significantly predicted PTG, surpassing the other predictors in the model. Notably, the type of loss, whether sudden or anticipated, did not alter PTG levels. In essence, this study underscores the potential for enduring post-traumatic growth among emerging adults who have lost a parent, highlighting the critical need for comprehensive psychological resources and support for these individuals.

8.
Front Neurol ; 15: 1365369, 2024.
Article in English | MEDLINE | ID: mdl-38711564

ABSTRACT

Introduction: The vestibulo-ocular reflex (VOR) stabilizes vision during head movements. VOR disorders lead to symptoms such as imbalance, dizziness, and oscillopsia. Despite similar VOR dysfunction, patients display diverse complaints. This study analyses saccades, balance, and spatial orientation in chronic peripheral and central VOR disorders, specifically examining the impact of oscillopsia. Methods: Participants included 15 patients with peripheral bilateral vestibular loss (pBVL), 21 patients with clinically and genetically confirmed Machado-Joseph disease (MJD) who also have a bilateral vestibular deficit, and 22 healthy controls. All pBVL and MJD participants were tested at least 9 months after the onset of symptoms and underwent a detailed clinical neuro-otological evaluation at the Dizziness and Eye Movements Clinic of the Meir Medical Center. Results: Of the 15 patients with pBVL and 21 patients with MJD, only 5 patients with pBVL complained of chronic oscillopsia, while none of the patients with MJD reported this complaint. Comparison between groups showed significant differences in vestibular function, eye movements, balance, and spatial orientation. When comparing subjects with and without oscillopsia, significant differences were found in the dynamic visual acuity test, saccade latency, and the triangle completion test. Discussion: Even though VOR gain is significantly impaired in MJD, with some subjects having lower VOR gain than pBVL patients who reported oscillopsia, no individuals with MJD reported experiencing oscillopsia. This study further supports the idea that subjects experiencing oscillopsia have a real impairment in stabilizing images on the retina, whereas those without oscillopsia may use saccade strategies to cope and may also rely on visual information for spatial orientation. Identifying objective differences will help clarify the causes of the oscillopsia experience and support the development of coping strategies to overcome it.

9.
J Neurol Sci ; 460: 122990, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38579416

ABSTRACT

Cerebellar ataxia with neuropathy and vestibular areflexia syndrome (CANVAS) is a slowly progressing autosomal recessive ataxic disorder linked to an abnormal biallelic intronic repeat expansion (most commonly AAGGG) in the replication factor complex subunit 1 gene (RFC1). While the clinical diagnosis is relatively straightforward when all three components of the disorder are present, it becomes challenging when only one element of the triad (cerebellar ataxia, neuropathy, or vestibular areflexia) manifests. Isolated cases of bilateral vestibulopathy (BVP) or vestibular areflexia that later developed the other components of CANVAS have not previously been documented. We report four patients with chronic imbalance and BVP who, after several years, developed cerebellar and neuropathic deficits, with positive genetic testing for RFC1. Our report supports the concept that CANVAS should be considered in every patient with BVP of unknown etiology, even in the absence of the other triad components. This is especially important given that about 50% of cases in many BVP series are diagnosed as idiopathic, some of which may be undiagnosed CANVAS.


Subject(s)
Bilateral Vestibulopathy; Cerebellar Ataxia; Humans; Bilateral Vestibulopathy/diagnosis; Bilateral Vestibulopathy/genetics; Bilateral Vestibulopathy/complications; Male; Female; Adult; Cerebellar Ataxia/genetics; Cerebellar Ataxia/diagnosis; Middle Aged; Replication Protein C
10.
JMIR Ment Health ; 11: e55988, 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38593424

ABSTRACT

BACKGROUND: Large language models (LLMs) hold potential for mental health applications. However, their opaque alignment processes may embed biases that shape problematic perspectives. Evaluating the values embedded within LLMs that guide their decision-making has ethical importance. Schwartz's theory of basic values (STBV) provides a framework for quantifying cultural value orientations and has shown utility for examining values in mental health contexts, including cultural, diagnostic, and therapist-client dynamics. OBJECTIVE: This study aimed to (1) evaluate whether the STBV can measure value-like constructs within leading LLMs and (2) determine whether LLMs exhibit value-like patterns distinct from humans and from each other. METHODS: In total, 4 LLMs (Bard, Claude 2, Generative Pretrained Transformer [GPT]-3.5, GPT-4) were anthropomorphized and instructed to complete the Portrait Values Questionnaire-Revised (PVQ-RR) to assess value-like constructs. Their responses over 10 trials were analyzed for reliability and validity. To benchmark the LLMs' value profiles, their results were compared to published data from a diverse sample of 53,472 individuals across 49 nations who had completed the PVQ-RR. This allowed us to assess whether the LLMs diverged from established human value patterns across cultural groups. Value profiles were also compared between models via statistical tests. RESULTS: The PVQ-RR showed good reliability and validity for quantifying the value-like infrastructure within the LLMs. However, substantial divergence emerged between the LLMs' value profiles and the population data. The models lacked consensus and exhibited distinct motivational biases, reflecting opaque alignment processes. For example, all models prioritized universalism and self-direction while de-emphasizing achievement, power, and security relative to humans. Discriminant analysis successfully differentiated the 4 LLMs' distinct value profiles. Further examination found that the biased value profiles strongly predicted the LLMs' responses when presented with mental health dilemmas requiring a choice between opposing values, providing further validation that the models embed distinct motivational value-like constructs that shape their decision-making. CONCLUSIONS: This study leveraged the STBV to map the motivational value-like infrastructure underpinning leading LLMs. Although the study demonstrated that the STBV can effectively characterize value-like infrastructure within LLMs, the substantial divergence from human values raises ethical concerns about aligning these models with mental health applications. The biases toward certain cultural value sets pose risks if integrated without proper safeguards. For example, prioritizing universalism could promote unconditional acceptance even when clinically unwise. Furthermore, the differences between the LLMs underscore the need to standardize alignment processes to capture true cultural diversity. Thus, any responsible integration of LLMs into mental health care must account for their embedded biases and motivational mismatches to ensure equitable delivery across diverse populations. Achieving this will require transparency and refinement of alignment techniques to instill comprehensive human values.
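The discriminant analysis reported in the Results can be sketched with scikit-learn. The value-score matrix below is synthetic, and treating each of the 10 trials as one observation per model is an assumption made for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
models = ["Bard", "Claude 2", "GPT-3.5", "GPT-4"]

# Synthetic data: 10 administrations per model x 19 Schwartz values,
# with a small per-model offset so the groups are separable.
X = np.vstack([rng.normal(loc=i * 0.5, scale=1.0, size=(10, 19))
               for i in range(len(models))])
y = np.repeat(models, 10)

# Cross-validated accuracy of classifying the source model from its
# value profile; high accuracy indicates distinct profiles.
lda = LinearDiscriminantAnalysis()
print("CV accuracy:", cross_val_score(lda, X, y, cv=5).mean())
```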


Subject(s)
Allied Health Personnel; Mental Health; Humans; Cross-Sectional Studies; Reproducibility of Results; Language
11.
JMIR Ment Health ; 11: e53043, 2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38533615

ABSTRACT

Background: The current paradigm in mental health care focuses on clinical recovery and symptom remission. This model's efficacy is influenced by therapist trust in the patient's recovery potential and the depth of the therapeutic relationship. Schizophrenia is a chronic illness with severe symptoms in which the possibility of recovery is a matter of debate. As artificial intelligence (AI) becomes integrated into the health care field, it is important to examine its ability to assess recovery potential in major psychiatric disorders such as schizophrenia. Objective: This study aimed to evaluate the ability of large language models (LLMs), in comparison to mental health professionals, to assess the prognosis of schizophrenia with and without professional treatment as well as its long-term positive and negative outcomes. Methods: Vignettes were input into the LLM interfaces and assessed 10 times by each of 4 AI platforms: ChatGPT-3.5, ChatGPT-4, Google Bard, and Claude. A total of 80 evaluations were collected and benchmarked against existing norms describing what mental health professionals (general practitioners, psychiatrists, clinical psychologists, and mental health nurses) and the general public think about schizophrenia prognosis with and without professional treatment and about the positive and negative long-term outcomes of schizophrenia interventions. Results: For the prognosis of schizophrenia with professional treatment, ChatGPT-3.5 was notably pessimistic, whereas ChatGPT-4, Claude, and Bard aligned with professional views but differed from the general public. All LLMs predicted that schizophrenia would remain static or worsen without professional treatment. For long-term outcomes, ChatGPT-4 and Claude predicted more negative outcomes than Bard and ChatGPT-3.5. For positive outcomes, ChatGPT-3.5 and Claude were more pessimistic than Bard and ChatGPT-4. Conclusions: The finding that 3 of the 4 LLMs aligned closely with the predictions of mental health professionals under the "with treatment" condition demonstrates the potential of this technology for providing professional clinical prognoses. The pessimistic assessment of ChatGPT-3.5 is a disturbing finding, since it may reduce the motivation of patients to start or persist with treatment for schizophrenia. Overall, although LLMs hold promise in augmenting health care, their application necessitates rigorous validation and a harmonious blend with human expertise.


Subject(s)
General Practitioners; Schizophrenia; Humans; Mental Health; Artificial Intelligence; Health Occupations
12.
JMIR Ment Health ; 11: e54369, 2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38319707

ABSTRACT

BACKGROUND: Mentalization, which is integral to human cognitive processes, pertains to the interpretation of one's own and others' mental states, including emotions, beliefs, and intentions. With the advent of artificial intelligence (AI) and the prominence of large language models in mental health applications, questions persist about their aptitude in emotional comprehension. The prior iteration of the large language model from OpenAI, ChatGPT-3.5, demonstrated an advanced capacity to interpret emotions from textual data, surpassing human benchmarks. Given the introduction of ChatGPT-4, with its enhanced visual processing capabilities, and considering Google Bard's existing visual functionalities, a rigorous assessment of their proficiency in visual mentalizing is warranted. OBJECTIVE: The aim of the research was to critically evaluate the capabilities of ChatGPT-4 and Google Bard with regard to their competence in discerning visual mentalizing indicators as contrasted with their textual-based mentalizing abilities. METHODS: The Reading the Mind in the Eyes Test developed by Baron-Cohen and colleagues was used to assess the models' proficiency in interpreting visual emotional indicators. Simultaneously, the Levels of Emotional Awareness Scale was used to evaluate the large language models' aptitude in textual mentalizing. Collating data from both tests provided a holistic view of the mentalizing capabilities of ChatGPT-4 and Bard. RESULTS: ChatGPT-4, displaying a pronounced ability in emotion recognition, secured scores of 26 and 27 in 2 distinct evaluations, significantly deviating from a random response paradigm (P<.001). These scores align with established benchmarks from the broader human demographic. Notably, ChatGPT-4 exhibited consistent responses, with no discernible biases pertaining to the sex of the model or the nature of the emotion. In contrast, Google Bard's performance aligned with random response patterns, securing scores of 10 and 12 and rendering further detailed analysis redundant. In the domain of textual analysis, both ChatGPT and Bard surpassed established benchmarks from the general population, with their performances being remarkably congruent. CONCLUSIONS: ChatGPT-4 proved its efficacy in the domain of visual mentalizing, aligning closely with human performance standards. Although both models displayed commendable acumen in textual emotion interpretation, Bard's capabilities in visual emotion interpretation necessitate further scrutiny and potential refinement. This study stresses the criticality of ethical AI development for emotional recognition, highlighting the need for inclusive data, collaboration with patients and mental health experts, and stringent governmental oversight to ensure transparency and protect patient privacy.
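To see why scores of 26 and 27 deviate significantly from random responding (P<.001) while scores of 10 and 12 do not, note that the Reading the Mind in the Eyes Test has 36 items with 4 answer options each, so chance performance is about 9 correct. A quick check with SciPy:

```python
from scipy.stats import binomtest

# RMET: 36 items, 4 answer options -> chance probability 0.25 per item.
# Scores taken from the abstract: ChatGPT-4 (26, 27) and Bard (10, 12).
for score in (26, 27, 10, 12):
    res = binomtest(score, n=36, p=0.25, alternative="greater")
    print(f"score={score}: p={res.pvalue:.2e}")
```

The first two scores yield vanishingly small p values, while 10 and 12 are consistent with chance, matching the abstract's characterization of the two models.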


Subject(s)
Artificial Intelligence; Emotions; Humans; Pilot Projects; Benchmarking; Eye
14.
Omega (Westport) ; : 302228231223275, 2024 Jan 04.
Article in English | MEDLINE | ID: mdl-38174720

ABSTRACT

Non-suicidal self-injury (NSSI) among adolescents is a significant concern. This study aimed to explore teachers' perceptions and experiences of cases of NSSI among their students. This qualitative-phenomenological study used in-depth, semi-structured interviews conducted with 27 teachers from high schools in Israel. Thematic analysis was used to identify patterns and themes. Theme 1 highlighted the emotional impact of discovering self-injury incidents, including panic, confusion, and helplessness. Theme 2 focused on teachers' limited professional support and their need for training and guidance. Theme 3 explored teachers' desire to help students, including their strategies for building connections and providing empathy, sometimes despite emotional detachment. Theme 4 emphasized the importance of involving parents and the need for effective communication. This study emphasizes the importance of providing teachers with comprehensive training to address NSSI effectively. These findings provide a better understanding of teachers' experiences and underscore the need for enhanced support systems.

15.
Fam Med Community Health ; 12(Suppl 1)2024 01 09.
Article in English | MEDLINE | ID: mdl-38199604

ABSTRACT

BACKGROUND: Artificial intelligence (AI) has rapidly permeated various sectors, including healthcare, highlighting its potential to facilitate mental health assessments. This study explores the underexplored domain of AI's role in evaluating prognosis and long-term outcomes in depressive disorders, offering insights into how AI large language models (LLMs) compare with human perspectives. METHODS: Using case vignettes, we conducted a comparative analysis involving different LLMs (ChatGPT-3.5, ChatGPT-4, Claude and Bard), mental health professionals (general practitioners, psychiatrists, clinical psychologists and mental health nurses), and previously reported data from the general public. We evaluated the LLMs' ability to generate a prognosis, anticipated outcomes with and without professional intervention, and envisioned long-term positive and negative consequences for individuals with depression. RESULTS: In most of the examined cases, the four LLMs consistently identified depression as the primary diagnosis and recommended a combined treatment of psychotherapy and antidepressant medication. ChatGPT-3.5 exhibited a significantly pessimistic prognosis distinct from the other LLMs, the professionals and the public. ChatGPT-4, Claude and Bard aligned closely with the perspectives of mental health professionals and the general public, all of whom anticipated no improvement or worsening without professional help. Regarding long-term outcomes, ChatGPT-3.5, Claude and Bard consistently projected significantly fewer negative long-term consequences of treatment than ChatGPT-4. CONCLUSIONS: This study underscores the potential of AI to complement the expertise of mental health professionals and promote a collaborative paradigm in mental healthcare. The observation that three of the four LLMs closely mirrored the anticipations of mental health experts in scenarios involving treatment underscores the technology's prospective value in offering professional clinical forecasts. The pessimistic outlook presented by ChatGPT-3.5 is concerning, as it could potentially diminish patients' drive to initiate or continue depression therapy. In summary, although LLMs show potential in enhancing healthcare services, their utilisation requires thorough verification and seamless integration with human judgement and skills.


Subject(s)
Artificial Intelligence; General Practitioners; Humans; Depression/diagnosis; Depression/therapy; Prospective Studies; Prognosis; Models, Psychological
17.
Front Psychiatry ; 14: 1280440, 2023.
Article in English | MEDLINE | ID: mdl-37928920

ABSTRACT

Objective: Stimulation of the peripheral visual field has previously been reported to benefit cognitive performance in ADHD. This study assesses the safety and efficacy of a novel intervention involving peripheral visual stimuli in managing attention deficit hyperactivity disorder (ADHD). Methods: One hundred eight adults with ADHD, aged 18-40 years, were enrolled in a two-month open-label study. The intervention (i.e., Neuro-glasses) consisted of standard eyeglasses with personalized peripheral visual stimuli embedded on the lenses. Participants were assessed at baseline and at the end of the study with self-report measures of ADHD symptoms (the Adult ADHD Self-Report Scale; ASRS) and executive functions (the Behavior Rating Inventory of Executive Function-Adult Version; BRIEF-A). A computerized continuous-performance test (the Conners' Continuous Performance Test-3; CPT-3) was administered at baseline with standard eyeglasses and at the end of the study using the Neuro-glasses. The Clinical Global Impression-Improvement scale (CGI-I) was assessed at the intervention endpoint. Safety was monitored by documentation of adverse events. Results: The efficacy analysis included 97 participants. Significant improvements were demonstrated in self-reported measures of inattentive symptoms (ASRS inattentive index; p = 0.037) and metacognitive functions concerning self-management and performance monitoring (BRIEF-A; p = 0.029). The CPT-3 indicated significant improvement in detectability (d'; p = 0.027) and reduced commission errors (p = 0.004), suggesting that the Neuro-glasses have positive effects on response inhibition. Sixty-two percent of the participants met the response criteria assessed by a clinician (CGI-I). No major adverse events were reported. Conclusion: Neuro-glasses may offer a safe and effective approach to managing adult ADHD. The results encourage future controlled efficacy studies to confirm these findings in adults and possibly children with ADHD. Clinical trial registration: https://www.clinicaltrials.gov/, identifier NCT05777785.

18.
Article in English | MEDLINE | ID: mdl-37844967

ABSTRACT

OBJECTIVE: To compare evaluations of depressive episodes and suggested treatment protocols generated by Chat Generative Pretrained Transformer (ChatGPT)-3.5 and ChatGPT-4 with the recommendations of primary care physicians. METHODS: Vignettes were input into the ChatGPT interface. These vignettes focused primarily on hypothetical patients with symptoms of depression during initial consultations. Eight distinct versions were meticulously designed, systematically varying patient attributes: sex, socioeconomic status (blue-collar or white-collar worker) and depression severity (mild or severe). Each variant was subsequently introduced into ChatGPT-3.5 and ChatGPT-4, and each vignette was repeated 10 times to ensure the consistency and reliability of the ChatGPT responses. RESULTS: For mild depression, ChatGPT-3.5 and ChatGPT-4 recommended psychotherapy in 95.0% and 97.5% of cases, respectively. Primary care physicians, however, recommended psychotherapy in only 4.3% of cases. For severe cases, ChatGPT favoured a combined approach that included psychotherapy, as did primary care physicians. The pharmacological recommendations of ChatGPT-3.5 and ChatGPT-4 showed a preference for the exclusive use of antidepressants (74% and 68%, respectively), in contrast with primary care physicians, who typically recommended a mix of antidepressants and anxiolytics/hypnotics (67.4%). Unlike primary care physicians, ChatGPT showed no gender or socioeconomic biases in its recommendations. CONCLUSION: ChatGPT-3.5 and ChatGPT-4 aligned well with accepted guidelines for managing mild and severe depression, without showing the gender or socioeconomic biases observed among primary care physicians. Despite the suggested potential benefit of using artificial intelligence (AI) chatbots like ChatGPT to enhance clinical decision making, further research is needed to refine AI recommendations for severe cases and to consider potential risks and ethical issues.


Subject(s)
Anti-Anxiety Agents; Physicians, Primary Care; Humans; Depression/drug therapy; Reproducibility of Results; Choline O-Acetyltransferase; Antidepressive Agents/therapeutic use
20.
JMIR Ment Health ; 10: e51232, 2023 Sep 20.
Article in English | MEDLINE | ID: mdl-37728984

ABSTRACT

BACKGROUND: ChatGPT, a linguistic artificial intelligence (AI) model engineered by OpenAI, offers prospective contributions to mental health professionals. Although it has significant theoretical implications, ChatGPT's practical capabilities, particularly regarding suicide prevention, have not yet been substantiated. OBJECTIVE: The study's aim was to evaluate ChatGPT's ability to assess suicide risk, taking into consideration 2 discernible factors-perceived burdensomeness and thwarted belongingness-over a 2-month period. In addition, we evaluated whether ChatGPT-4 evaluated suicide risk more accurately than ChatGPT-3.5. METHODS: ChatGPT was tasked with assessing a vignette that depicted a hypothetical patient exhibiting differing degrees of perceived burdensomeness and thwarted belongingness. The assessments generated by ChatGPT were subsequently contrasted with standard evaluations rendered by mental health professionals. Using both ChatGPT-3.5 and ChatGPT-4 (May 24, 2023), we executed 3 evaluative procedures in June and July 2023. Our intent was to scrutinize ChatGPT-4's proficiency in assessing various facets of suicide risk in relation to the evaluative abilities of both mental health professionals and an earlier version of ChatGPT-3.5 (March 14 version). RESULTS: During June and July 2023, we found that the likelihood of suicide attempts as evaluated by ChatGPT-4 was similar to the norms of mental health professionals (n=379) under all conditions (average Z score of 0.01). Nonetheless, a pronounced discrepancy was observed in the assessments performed by ChatGPT-3.5 (May version), which markedly underestimated the potential for suicide attempts in comparison with the assessments carried out by the mental health professionals (average Z score of -0.83). The empirical evidence suggests that ChatGPT-4's evaluations of the incidence of suicidal ideation and psychache were higher than those of the mental health professionals (average Z scores of 0.47 and 1.00, respectively). Conversely, the level of resilience as assessed by both ChatGPT-4 and ChatGPT-3.5 (both versions) was lower than in the assessments offered by mental health professionals (average Z scores of -0.89 and -0.90, respectively). CONCLUSIONS: The findings suggest that ChatGPT-4 estimates the likelihood of suicide attempts in a manner akin to the evaluations provided by professionals. In terms of recognizing suicidal ideation, ChatGPT-4 appears to be more precise. However, regarding psychache, ChatGPT-4 showed an overestimation, indicating a need for further research. These results have implications for ChatGPT-4's potential to support the decision-making of gatekeepers, patients, and even mental health professionals. Despite the clinical potential, intensive follow-up studies are necessary to establish the use of ChatGPT-4's capabilities in clinical practice. The finding that ChatGPT-3.5 frequently underestimates suicide risk, especially in severe cases, is particularly troubling. It indicates that ChatGPT may downplay a person's actual suicide risk level.
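The Z scores reported here standardize each model's mean rating against the professionals' normative mean and standard deviation. A minimal sketch, with all numbers invented for illustration:

```python
def z_score(model_mean: float, norm_mean: float, norm_sd: float) -> float:
    """Standardized distance of a model's rating from the professional norm."""
    return (model_mean - norm_mean) / norm_sd

# Invented illustrative values (not the study's data):
# (model mean, professional norm mean, professional norm SD)
assessments = {
    "suicide attempt likelihood": (6.1, 6.0, 1.5),  # roughly matching norms
    "suicidal ideation":          (7.2, 6.5, 1.5),  # overestimate
    "resilience":                 (3.4, 4.8, 1.6),  # underestimate
}
for name, (m, mu, sd) in assessments.items():
    print(f"{name}: Z = {z_score(m, mu, sd):+.2f}")
```

Averaging such Z scores across vignette conditions yields summary figures like the 0.01 and -0.83 reported in the abstract.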
