Results 1 - 9 of 9
1.
Psychol Med ; 54(5): 886-894, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37665038

ABSTRACT

BACKGROUND: The DSM-5 features hundreds of diagnoses comprising a multitude of symptoms, and there is considerable repetition in the symptoms among diagnoses. This repetition undermines what we can learn from studying individual diagnostic constructs because it can obscure both disorder- and symptom-specific signals. However, these lost opportunities are currently veiled because symptom repetition in the DSM-5 has not been quantified. METHOD: This descriptive study mapped the repetition among the 1419 symptoms described in 202 diagnoses of adult psychopathology in section II of the DSM-5. Over a million possible symptom comparisons needed to be conducted, for which we used both qualitative content coding and natural language processing. RESULTS: In total, we identified 628 distinct symptoms: 397 symptoms (63.2%) were unique to a single diagnosis, whereas 231 symptoms (36.8%) repeated across multiple diagnoses a total of 1022 times (median 3 times per symptom; range 2-22). Some chapters had more repetition than others: For example, every symptom of every diagnosis in the bipolar and related disorders chapter was repeated in other chapters, but there was no repetition for any symptoms of any diagnoses in the elimination disorders, gender dysphoria or paraphilic disorders. The most frequently repeated symptoms included insomnia, difficulty concentrating, and irritability - listed in 22, 17 and 16 diagnoses, respectively. Notably, the top 15 most frequently repeating diagnostic criteria were dominated by symptoms of major depressive disorder. CONCLUSION: Overall, our findings lay the foundation for a better understanding of the extent and potential consequences of symptom overlap.
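The core counting step behind this mapping can be sketched in a few lines of Python. The toy data and function below are hypothetical, and they assume symptoms have already been normalized to canonical labels, which is the hard part the authors tackled with content coding and NLP:

```python
from collections import Counter

# Hypothetical toy data: diagnosis -> list of already-normalized symptom labels.
diagnoses = {
    "major_depressive_disorder": ["insomnia", "difficulty concentrating", "fatigue"],
    "generalized_anxiety_disorder": ["insomnia", "irritability", "difficulty concentrating"],
    "pica": ["eating nonfood substances"],
}

def symptom_repetition(diagnoses):
    """Count in how many diagnoses each symptom appears, then split
    symptoms into unique (one diagnosis) and repeated (several)."""
    counts = Counter()
    for symptoms in diagnoses.values():
        counts.update(set(symptoms))  # count each symptom at most once per diagnosis
    unique = [s for s, c in counts.items() if c == 1]
    repeated = {s: c for s, c in counts.items() if c > 1}
    return unique, repeated

unique, repeated = symptom_repetition(diagnoses)
```

On the real DSM-5 data this split is what yields the 397 unique versus 231 repeated symptoms reported above.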


Subject(s)
Depressive Disorder, Major; Sleep Initiation and Maintenance Disorders; Adult; Humans; Depressive Disorder, Major/diagnosis; Diagnostic and Statistical Manual of Mental Disorders; Psychopathology
2.
JMIR Public Health Surveill ; 9: e42495, 2023 09 01.
Article in English | MEDLINE | ID: mdl-37656492

ABSTRACT

BACKGROUND: The recent pandemic had the potential to worsen the opioid crisis through multiple effects on patients' lives, such as the disruption of care. In particular, adherence to medication for opioid use disorder (MOUD), recognized as important for positive outcomes, may have been disrupted. OBJECTIVE: This study aimed to investigate whether patients on MOUD experienced a drop in medication adherence during the COVID-19 pandemic. METHODS: This retrospective cohort study used Medicaid claims data from 6 US states from 2018 until the start of 2021. We compared medication adherence for people on MOUD before and after the beginning of the COVID-19 pandemic in March 2020. Our main measure was the proportion of days covered (PDC), a score that measures patients' adherence to their MOUD. We carried out a breakpoint analysis on PDC, followed by a patient-level beta regression analysis with PDC as the dependent variable while controlling for a set of covariates. RESULTS: A total of 79,991 PDC scores were calculated for 37,604 patients (age: mean 37.6, SD 9.8 years; sex: n=17,825, 47.4% female) between 2018 and 2021. The coefficient for the effect of COVID-19 on PDC score was -0.076 and was statistically significant (odds ratio 0.925, 95% CI 0.90-0.94). CONCLUSIONS: The COVID-19 pandemic was negatively associated with patients' medication adherence, which declined after the start of the pandemic.
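The PDC measure used here is straightforward to compute from claims data. The sketch below is a generic, hypothetical implementation (operational definitions of PDC vary, for example in how overlapping fills are handled; this version simply counts each calendar day in the window at most once):

```python
from datetime import date, timedelta

def proportion_of_days_covered(fills, window_start, window_end):
    """PDC: fraction of days in [window_start, window_end] covered by
    at least one prescription fill. `fills` is a list of
    (fill_date, days_supply) tuples; overlapping fills do not
    double-count a day."""
    covered = set()
    for fill_date, days_supply in fills:
        for d in range(days_supply):
            day = fill_date + timedelta(days=d)
            if window_start <= day <= window_end:
                covered.add(day)
    total_days = (window_end - window_start).days + 1
    return len(covered) / total_days

# Two 30-day fills inside a 91-day observation window (hypothetical patient).
fills = [(date(2020, 1, 1), 30), (date(2020, 2, 15), 30)]
pdc = proportion_of_days_covered(fills, date(2020, 1, 1), date(2020, 3, 31))
```

A common adherence convention, also used in study 4 below, treats PDC >= 0.8 as adherent.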


Subject(s)
COVID-19; Opioid-Related Disorders; United States/epidemiology; Humans; Female; Adult; Male; Analgesics, Opioid/therapeutic use; Pandemics; Retrospective Studies; COVID-19/epidemiology; Medication Adherence; Opioid-Related Disorders/drug therapy; Opioid-Related Disorders/epidemiology
3.
Int J Med Inform ; 177: 105122, 2023 09.
Article in English | MEDLINE | ID: mdl-37295138

ABSTRACT

BACKGROUND: Natural Language Processing (NLP) applications have developed over the past years in various fields, including application to clinical free text for named entity recognition and relation extraction. However, developments have been so rapid in the last few years that no current overview of them exists. Moreover, it is unclear how these models and tools have been translated into clinical practice. We aim to synthesize and review these developments. METHODS: We reviewed literature from 2010 to date, searching PubMed, Scopus, the Association for Computational Linguistics (ACL), and Association for Computing Machinery (ACM) libraries for studies of NLP systems performing general-purpose (i.e., not disease- or treatment-specific) information extraction and relation extraction tasks in unstructured clinical text (e.g., discharge summaries). RESULTS: We included 94 studies in the review, 30 of which were published in the last three years. Machine learning methods were used in 68 studies, rule-based methods in 5 studies, and both in 22 studies. 63 studies focused on Named Entity Recognition, 13 on Relation Extraction, and 18 performed both. The most frequently extracted entities were "problem", "test" and "treatment". 72 studies used public datasets and 22 used proprietary datasets alone. Only 14 studies clearly defined a clinical or information task to be addressed by the system, and just three reported use of their system outside the experimental setting. Only 7 studies shared a pre-trained model and only 8 an available software tool. DISCUSSION: Machine learning-based methods have dominated the NLP field on information extraction tasks. More recently, Transformer-based language models are taking the lead and showing the strongest performance. However, these developments are mostly based on a few datasets and generic annotations, with very few real-world use cases. This raises questions about the generalizability of the findings and their translation into practice, and highlights the need for robust clinical evaluation.
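As a minimal illustration of the rule-based end of the NER spectrum surveyed here, the toy dictionary matcher below tags the three entity types most frequently extracted in the reviewed studies. The lexicon, example text, and function names are invented for illustration; real clinical systems rely on trained models or far richer terminologies:

```python
import re

# Hypothetical toy lexicon for the three most common clinical entity types.
LEXICON = {
    "problem": ["hypertension", "diabetes"],
    "test": ["hba1c", "blood pressure"],
    "treatment": ["metformin", "lisinopril"],
}

def extract_entities(text):
    """Return sorted (surface form, label) pairs found by exact,
    case-insensitive dictionary lookup with word boundaries."""
    text_l = text.lower()
    found = []
    for label, terms in LEXICON.items():
        for term in terms:
            for m in re.finditer(r"\b" + re.escape(term) + r"\b", text_l):
                found.append((m.group(), label))
    return sorted(found)

ents = extract_entities("Started metformin for diabetes; repeat HbA1c in 3 months.")
```

Matchers like this are transparent but brittle, which is part of why the review finds machine learning methods dominating the field.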


Subject(s)
Machine Learning; Natural Language Processing; Humans; Language; Information Storage and Retrieval; PubMed
4.
PLoS One ; 17(12): e0278988, 2022.
Article in English | MEDLINE | ID: mdl-36520864

ABSTRACT

BACKGROUND: Opioid Use Disorder (OUD) and opioid overdose (OD) impose huge social and economic burdens on society and health care systems. Research suggests that Medication for Opioid Use Disorder (MOUD) is effective in the treatment of OUD. We use machine learning to investigate the association between patients' adherence to prescribed MOUD, along with other risk factors, and potential OD following treatment in patients diagnosed with OUD. METHODS: We used longitudinal Medicaid claims for two selected US states to subset a total of 26,685 patients with an OUD diagnosis and appropriate Medicaid coverage between 2015 and 2018. We considered patient age, sex, region-level socio-economic data, past comorbidities, MOUD prescription type and other selected prescribed medications, along with the Proportion of Days Covered (PDC) as a proxy for adherence to MOUD, as predictive variables for our model, and overdose events as the dependent variable. We applied four different machine learning classifiers and compared their performance, focusing on the importance and effect of PDC as a variable. We also calculated results based on risk stratification, where our models separate high-risk individuals from low-risk ones, to assess usefulness in clinical decision-making. RESULTS: Among the selected classifiers, the XGBoost classifier had the highest AUC (0.77), closely followed by logistic regression (LR). The LR model had the best stratification result: patients in the top 10% of risk scores accounted for 35.37% of overdose events over the following 12-month observation period. The PDC score calculated over the treatment window was one of the most important features, with better PDC lowering the risk of OD, as expected. Of the 35.37% of overdose events that the predictive model detected within the top 10% of risk scores, 72.3% of the cases were non-adherent to their medication (PDC <0.8). Targeting the top 10% of risk scores from the predictive model could decrease the total number of OD events by 10.4%. CONCLUSIONS: The best-performing models allow identification of, and focus on, those at high risk of opioid overdose. With MOUD included for the first time as a factor of interest, and identified as a significant factor, outreach activities related to MOUD can be targeted at those at highest risk.
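The risk-stratification metric reported here (the share of overdose events falling in the top 10% of predicted risk) can be computed with a short helper. The function and toy scores below are illustrative, not the study's code:

```python
def events_in_top_fraction(risk_scores, outcomes, top_frac=0.10):
    """Fraction of all positive outcomes captured among the top
    `top_frac` of individuals when ranked by predicted risk."""
    ranked = sorted(zip(risk_scores, outcomes), key=lambda p: -p[0])
    k = max(1, int(len(ranked) * top_frac))
    captured = sum(y for _, y in ranked[:k])
    total = sum(outcomes)
    return captured / total if total else 0.0

# Hypothetical predicted risks and observed overdose labels for 10 patients.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2, 0.15, 0.1, 0.05, 0.01]
labels = [1, 0, 1, 0, 1, 0, 0, 0, 0, 0]
frac = events_in_top_fraction(scores, labels)
```

In the study's terms, a `frac` of 0.3537 at `top_frac=0.10` corresponds to the reported 35.37% of overdose events concentrated in the top decile of risk.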


Subject(s)
Buprenorphine; Drug Overdose; Opiate Overdose; Opioid-Related Disorders; United States; Humans; Analgesics, Opioid/adverse effects; Opioid-Related Disorders/drug therapy; Drug Overdose/drug therapy; Medication Adherence; Machine Learning; Buprenorphine/therapeutic use; Opiate Substitution Treatment/methods
5.
Front Psychol ; 13: 1039431, 2022.
Article in English | MEDLINE | ID: mdl-36405156

ABSTRACT

Despite the challenges associated with virtually mediated communication, remote collaboration is a defining characteristic of online multiplayer gaming communities. Inspired by the teamwork exhibited by players in first-person shooter games, this study investigated the verbal and behavioral coordination of four-player teams playing a cooperative online video game. The game, Desert Herding, involved teams consisting of three ground players and one drone operator tasked to locate, corral, and contain evasive robot agents scattered across a large desert environment. Ground players could move throughout the environment, while the drone operator's role was akin to that of a "spectator" with a bird's-eye view, with access to veridical information of the locations of teammates and the to-be-corralled agents. Categorical recurrence quantification analysis (catRQA) was used to measure the communication dynamics of teams as they completed the task. Demands on coordination were manipulated by varying the ground players' ability to observe the environment with the use of game "fog." Results show that catRQA was sensitive to changes to task visibility, with reductions in task visibility reorganizing how participants conversed during the game to maintain team situation awareness. The results are discussed in the context of future work that can address how team coordination can be augmented with the inclusion of artificial agents, as synthetic teammates.
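The recurrence rate at the core of catRQA reduces, for categorical data, to counting matches between time points. A minimal, hypothetical sketch of the cross-recurrence version for two coded event streams (full catRQA also derives structure measures such as determinism from the recurrence plot, which this sketch omits):

```python
def categorical_recurrence_rate(seq_a, seq_b):
    """Recurrence rate of a categorical cross-recurrence plot:
    the fraction of all (i, j) pairs where seq_a[i] == seq_b[j]."""
    matches = sum(1 for a in seq_a for b in seq_b if a == b)
    return matches / (len(seq_a) * len(seq_b))

# Hypothetical coded utterance streams for two teammates.
rr = categorical_recurrence_rate(["move", "call", "move"], ["call", "move"])
```

Higher recurrence between teammates' coded utterances indicates more shared, coordinated communication content.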

6.
Cogn Sci ; 46(10): e13204, 2022 10.
Article in English | MEDLINE | ID: mdl-36251464

ABSTRACT

People working as a team can achieve more than when working alone due to a team's ability to parallelize the completion of tasks. In collaborative search tasks, this necessitates the formation of effective division of labor strategies to minimize redundancies in search. For such strategies to be developed, team members need to perceive the task's relevant components and how they evolve over time, as well as an understanding of what others will do so that they can structure their own behavior to contribute to the team's goal. This study explored whether the capacity for team members to coordinate effectively can be related to how participants structure their search behaviors in an online multiplayer collaborative search task. Our results demonstrated that the structure of search behavior, quantified using detrended fluctuation analysis, was sensitive to contextual factors that limit a participant's ability to gather information. Further, increases in the persistence of movement fluctuations during search behavior were found as teams developed more effective coordinative strategies and were associated with better task performance.
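Detrended fluctuation analysis itself is compact enough to sketch. The version below is a generic first-order DFA (linear detrending in non-overlapping windows), not the authors' code; for white noise the estimated exponent should sit near 0.5, with more persistent fluctuations pushing it toward 1:

```python
import math
import random

def _linfit(xs, ys):
    """Least-squares slope and intercept for y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def dfa_alpha(series, scales=(4, 8, 16, 32, 64)):
    """Estimate the DFA scaling exponent alpha of a 1-D series:
    ~0.5 for white noise, ~1.0 for persistent 1/f fluctuations."""
    mean = sum(series) / len(series)
    profile, total = [], 0.0
    for v in series:  # integrate the mean-centered series
        total += v - mean
        profile.append(total)
    log_n, log_f = [], []
    for n in scales:
        rms_vals = []
        for w in range(len(profile) // n):
            seg = profile[w * n:(w + 1) * n]
            t = list(range(n))
            a, b = _linfit(t, seg)  # remove the local linear trend
            resid = [y - (a * x + b) for x, y in zip(t, seg)]
            rms_vals.append(math.sqrt(sum(r * r for r in resid) / n))
        log_n.append(math.log(n))
        log_f.append(math.log(sum(rms_vals) / len(rms_vals)))
    alpha, _ = _linfit(log_n, log_f)  # slope of log F(n) vs log n
    return alpha

rng = random.Random(0)
alpha_white = dfa_alpha([rng.gauss(0, 1) for _ in range(2048)])
```

In the study's framing, increases in this exponent for movement fluctuations indexed more persistent, structured search behavior.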


Subject(s)
Task Performance and Analysis; Video Games; Humans; Motivation; Movement
7.
Neural Netw ; 154: 56-67, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35853320

ABSTRACT

Modern neuroimaging techniques enable us to represent human brains as brain networks or connectomes. Capturing brain networks' structural information and hierarchical patterns is essential for understanding brain functions and disease states. Recently, the promising network representation learning capability of graph neural networks (GNNs) has prompted the proposal of GNN-based methods for brain network analysis. Specifically, these methods apply feature aggregation and global pooling to convert brain network instances into vector representations encoding brain structure for downstream brain network analysis tasks. However, existing GNN-based methods often neglect that brain networks of different subjects may require different numbers of aggregation iterations, and instead use a GNN with a fixed number of layers to learn all brain networks. Fully realizing the potential of GNNs for brain network analysis therefore remains non-trivial. In our work, we propose a novel brain network representation framework, BN-GNN, which addresses this difficulty by searching for the optimal GNN architecture for each brain network. Concretely, BN-GNN employs deep reinforcement learning (DRL) to automatically predict the optimal number of feature propagations (reflected in the number of GNN layers) required for a given brain network. Furthermore, BN-GNN improves the upper bound of traditional GNNs' performance in eight brain network disease analysis tasks.


Subject(s)
Connectome; Neural Networks, Computer; Brain/diagnostic imaging; Humans
8.
Stud Health Technol Inform ; 290: 582-586, 2022 Jun 06.
Article in English | MEDLINE | ID: mdl-35673083

ABSTRACT

Data imbalance is a well-known challenge in the development of machine learning models. It is particularly relevant when the minority class is the class of interest, which is frequently the case in models that predict mortality, specific diagnoses or other important clinical end-points. Typical methods of dealing with this include over- or under-sampling the training data, or weighting the loss function to boost the signal from the minority class. Data augmentation is another frequently employed method, particularly for models that use images as input data. For discrete time-series data, however, there is no consensus method of data augmentation. We propose a simple data augmentation strategy that can be applied to discrete time-series data from the EMR. The strategy is then demonstrated using a publicly available dataset, providing proof of concept for the work undertaken in [1], where the data cannot be made open.
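Since the paper's specific strategy is not detailed in this abstract, the sketch below shows only the general shape of augmentation for a discrete event sequence: small random perturbations (here, dropping events and swapping adjacent pairs) that preserve most of the sequence's content. The operations and parameters are illustrative assumptions, not the authors' method:

```python
import random

def augment_sequence(seq, p_drop=0.1, p_swap=0.1, rng=None):
    """One generic augmentation pass over a discrete event sequence:
    each event is independently dropped with probability p_drop, then
    adjacent surviving pairs are swapped with probability p_swap.
    (Illustrative only; the paper's actual strategy may differ.)"""
    rng = rng or random.Random()
    out = [e for e in seq if rng.random() > p_drop]
    i = 0
    while i < len(out) - 1:
        if rng.random() < p_swap:
            out[i], out[i + 1] = out[i + 1], out[i]
            i += 2  # don't swap the same event twice
        else:
            i += 1
    return out

# Hypothetical coded EMR event sequence for one patient stay.
augmented = augment_sequence(list(range(20)), rng=random.Random(1))
```

Each pass yields a slightly perturbed copy of the minority-class sequence, so repeated passes can enlarge the minority class during training.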


Subject(s)
Deep Learning; Electronic Health Records; Machine Learning
9.
Suicide Life Threat Behav ; 51(3): 455-466, 2021 06.
Article in English | MEDLINE | ID: mdl-33185302

ABSTRACT

OBJECTIVE: Identifying predictors contributing to suicide risk could help prevent suicides via targeted interventions. However, using only known risk factors may not yield accurate enough results. Furthermore, risk models typically rely on suicidal ideation, even though people often withhold this information. METHOD: This study examined the contribution of various predictors to the accuracy of six machine learning models for identifying suicidal behavior in a prison population (n = 353), including borderline personality disorder (BPD) and antisocial personality disorder (APD) criteria, and compared how excluding data about suicidal ideation affects accuracy. RESULTS: Results revealed that gradient tree boosting accurately identified individuals with suicidal behavior, even without relying on questions about suicidal ideation (AUC = 0.875, F1 = 0.846). Furthermore, the model maintained this accuracy with only 29 predictors. Meeting five or more diagnostic criteria of BPD was an important risk factor for suicidal behavior. APD criteria, in the presence of other predictors, did not substantially improve accuracy. Additionally, it may be possible to implement a decision tree model to assess individuals at risk of suicide, without focusing upon suicidal ideation. CONCLUSIONS: These findings highlight that modern classification algorithms do not necessarily require information about suicidal ideation for modeling suicide and self-harm behavior.


Subject(s)
Borderline Personality Disorder; Suicide; Borderline Personality Disorder/diagnosis; Humans; Machine Learning; Suicidal Ideation; Suicide, Attempted