Results 1 - 6 of 6
1.
Nat Ment Health ; 2(5): 616-626, 2024.
Article in English | MEDLINE | ID: mdl-38746691

ABSTRACT

Pharmacogenomics could optimize antipsychotic treatment by preventing adverse drug reactions, improving treatment efficacy or relieving the cost burden on the healthcare system. Here we conducted a systematic review to investigate whether pharmacogenetic testing in individuals undergoing antipsychotic treatment influences clinical or economic outcomes. On 12 January 2024, we searched MEDLINE, EMBASE, PsycINFO and the Cochrane Central Register of Controlled Trials. The results were summarized using a narrative approach and summary tables. In total, 13 studies were eligible for inclusion in the systematic review. The current evidence base either favors pharmacogenetics-guided prescribing or shows no difference between pharmacogenetics-guided prescribing and treatment as usual for clinical and economic outcomes. In the future, randomized controlled trials with sufficient sample sizes are required that provide recommendations for patients taking antipsychotics based on a broad, multigene panel, with consistent and comparable clinical outcomes.

2.
BJPsych Open ; 9(1): e10, 2023 Jan 09.
Article in English | MEDLINE | ID: mdl-36621525

ABSTRACT

BACKGROUND: Patient and public involvement (PPI) groups can provide valuable input to create more accessible study documents with less jargon. However, it is unknown whether this procedure improves accessibility for potential participants. AIMS: We assessed whether participant information sheets were rated as more accessible after PPI review, and which aspects of information sheets and study design were important to mental health patients compared with a control group with no mental health service use. METHOD: This was a double-blind quasi-experimental study using a mixed-methods explanatory design. Patients and control participants quantitatively rated pre- and post-review documents. Semi-structured interviews were thematically analysed to gain qualitative feedback on opinions of information sheets and studies. Two-way multivariate analysis of variance was used to detect differences in ratings between pre- and post-review documents. RESULTS: We found no significant (P < 0.05) improvements in patient (n = 15) or control group (n = 21) ratings after PPI review. Patients and controls both rated PPI as being of low importance in studies and considered the study rationale the most important element. However, PPI was often misunderstood, with participants believing that it meant lay patients would take over the design and administration of the study. Qualitative findings highlight the importance of clear, friendly and visually appealing information sheets. CONCLUSIONS: Researchers should be aware of what participants want to know about so they can create information sheets addressing these priorities, for example, explaining why the research is necessary. PPI is poorly understood by the wider population and efforts must be made to increase diversity in participation.
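The study's core analysis is a two-way multivariate analysis of variance on document ratings. A minimal sketch of that design using statsmodels, with invented column names and simulated ratings (the actual variables and data are not in the abstract):

```python
# Hypothetical sketch of a two-way MANOVA: two rating dimensions as
# dependent variables, participant group and document version (pre-
# vs post-PPI review) as factors. All names and data are illustrative.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 36  # the study had 15 patients and 21 controls
df = pd.DataFrame({
    "group": rng.choice(["patient", "control"], size=n),
    "version": rng.choice(["pre_review", "post_review"], size=n),
    "clarity": rng.normal(3.5, 1.0, size=n),      # simulated ratings
    "readability": rng.normal(3.2, 1.0, size=n),  # simulated ratings
})

# Both rating dimensions modelled jointly; "*" includes the interaction.
fit = MANOVA.from_formula("clarity + readability ~ group * version", data=df)
result = fit.mv_test()
print(result)
```

The multivariate test output reports Wilks' lambda (among other statistics) for each factor and the interaction, which is how a "no significant improvement after review" result would be read off.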

3.
JMIR Form Res ; 6(9): e39813, 2022 Sep 23.
Article in English | MEDLINE | ID: mdl-36149733

ABSTRACT

BACKGROUND: As the number of mental health apps has grown, increasing efforts have been focused on establishing quality tailored reviews. These reviews prioritize clinician and academic views rather than the views of those who use them, particularly those with lived experiences of mental health problems. Given that the COVID-19 pandemic has increased reliance on web-based and mobile mental health support, understanding the views of those with mental health conditions is of increasing importance. OBJECTIVE: This study aimed to understand the opinions of people with mental health problems on mental health apps and how they differ from established ratings by professionals. METHODS: A mixed methods study was conducted using a web-based survey administered between December 2020 and April 2021, assessing 11 mental health apps. We recruited individuals who had experienced mental health problems to download and use 3 apps for 3 days and complete a survey. The survey consisted of the One Mind PsyberGuide Consumer Review Questionnaire and 2 items from the Mobile App Rating Scale (star and recommendation ratings from 1 to 5). The consumer review questionnaire contained a series of open-ended questions, which were thematically analyzed and using a predefined protocol, converted into binary (positive or negative) ratings, and compared with app ratings by professionals and star ratings from app stores. RESULTS: We found low agreement between the participants' and professionals' ratings. More than half of the app ratings showed disagreement between participants and professionals (198/372, 53.2%). Compared with participants, professionals gave the apps higher star ratings (3.58 vs 4.56) and were more likely to recommend the apps to others (3.44 vs 4.39). Participants' star ratings were weakly positively correlated with app store ratings (r=0.32, P=.01). 
Thematic analysis found 11 themes, including issues of user experience, ease of use and interactivity, privacy concerns, customization, and integration with daily life. Participants particularly valued certain aspects of mental health apps, which appear to be overlooked by professional reviewers. These included functions such as the ability to track and measure mental health and providing general mental health education. The cost of apps was among the most important factors for participants. Although this is already considered by professionals, this information is not always easily accessible. CONCLUSIONS: As reviews on app stores and by professionals differ from those by people with lived experiences of mental health problems, these alone are not sufficient to provide people with mental health problems with the information they desire when choosing a mental health app. App rating measures must include the perspectives of mental health service users to ensure ratings represent their priorities. Additional work should be done to incorporate the features most important to mental health service users into mental health apps.
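The quantitative comparison above has two parts: disagreement between binary participant and professional ratings, and a Pearson correlation between star ratings. A sketch of both computations with invented example data (the study's actual ratings are not reproduced here):

```python
# Illustrative only: comparing binary (positive=1 / negative=0) ratings
# and correlating star ratings, as described in the abstract.
from scipy.stats import pearsonr

participant_binary  = [1, 0, 0, 1, 0, 1, 0, 0]  # invented ratings
professional_binary = [1, 1, 0, 1, 1, 1, 1, 0]

disagreements = sum(p != q for p, q in
                    zip(participant_binary, professional_binary))
print(f"Disagreement: {disagreements}/{len(participant_binary)} "
      f"({100 * disagreements / len(participant_binary):.1f}%)")

# Pearson correlation between participant stars and app-store stars
participant_stars = [3.0, 4.0, 2.5, 4.5, 3.5, 2.0, 4.0, 3.0]
app_store_stars   = [4.2, 4.6, 3.9, 4.8, 4.1, 4.0, 4.4, 4.3]
r, p = pearsonr(participant_stars, app_store_stars)
print(f"r={r:.2f}, P={p:.3f}")
```

With the study's real data this yielded 198/372 (53.2%) disagreement and a weak positive correlation (r=0.32, P=.01).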

4.
J Ment Health ; 31(4): 576-584, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35786178

ABSTRACT

Background: Mental health stigma on social media is well studied, but not from the perspective of mental health service users. Coronavirus disease 2019 (COVID-19) increased mental health discussions and may have impacted stigma. Objectives: (1) to understand how service users perceive and define mental health stigma on social media; (2) to understand how COVID-19 shaped mental health conversations and social media use. Methods: We collected 2,700 tweets related to seven mental health conditions: schizophrenia, depression, anxiety, autism, eating disorders, OCD, and addiction. Twenty-seven service users rated them as stigmatising or neutral, followed by focus group discussions. Focus group transcripts were thematically analysed. Results: Participants rated 1,101 tweets (40.8%) as stigmatising. Tweets related to schizophrenia were most frequently classed as stigmatising (411/534, 77%). Tweets related to depression or anxiety were least stigmatising (139/634, 21.9%). Whether a tweet was stigmatising depended on perceived intention and context, but some words (e.g. "psycho") felt stigmatising irrespective of context. Discussion: The anonymity of social media seemingly increased stigma, but COVID-19 lockdowns improved mental health literacy. This is the first study to qualitatively investigate service users' views of stigma towards various mental health conditions on Twitter, and we show stigma is common, particularly towards schizophrenia. Service user involvement is vital when designing solutions to stigma.


Subjects
COVID-19 , Mental Health Services , Social Media , Communicable Disease Control , Humans , Social Stigma
5.
JMIR Aging ; 5(1): e30388, 2022 Jan 24.
Article in English | MEDLINE | ID: mdl-35072637

ABSTRACT

BACKGROUND: Dementia misconceptions on social media are common, with negative effects on people with the condition, their carers, and those who know them. This study codeveloped a thematic framework with carers to understand the forms these misconceptions take on Twitter. OBJECTIVE: The aim of this study is to identify and analyze types of dementia conversations on Twitter using participatory methods. METHODS: A total of 3 focus groups with dementia carers were held to develop a framework of dementia misconceptions based on their experiences. Dementia-related tweets were collected from Twitter's official application programming interface using neutral and negative search terms defined by the literature and by carers (N=48,211). A sample of these tweets was selected with equal numbers of neutral and negative words (n=1497), which was validated in individual ratings by carers. We then used the framework to analyze, in detail, a sample of carer-rated negative tweets (n=863). RESULTS: A total of 25.94% (12,507/48,211) of our tweet corpus contained negative search terms about dementia. The carers' framework had 3 negative and 3 neutral categories. Our thematic analysis of carer-rated negative tweets found 9 themes, including the use of weaponizing language to insult politicians (469/863, 54.3%), using dehumanizing or outdated words or statements about members of the public (n=143, 16.6%), unfounded claims about the cures or causes of dementia (n=11, 1.3%), or providing armchair diagnoses of dementia (n=21, 2.4%). CONCLUSIONS: This is the first study to use participatory methods to develop a framework that identifies dementia misconceptions on Twitter. We show that misconceptions and stigmatizing language are not rare. They manifest through minimizing and underestimating language. 
Web-based campaigns aiming to reduce discrimination and stigma about dementia could target those who use negative vocabulary and reduce the misconceptions that are being propagated, thus improving general awareness.
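The screening step described above, flagging tweets that contain negative search terms, can be sketched as a simple term-matching filter. The terms and tweets below are invented examples, not the study's actual word lists:

```python
# Hypothetical sketch of negative-term screening for dementia tweets.
# Real search terms were defined by the literature and by carers.
negative_terms = {"demented", "senile"}  # illustrative only

tweets = [
    "My gran lives well with dementia",
    "That policy is just demented",
    "Senile old fools running the country",
    "Dementia awareness week starts today",
]

def has_negative_term(tweet: str) -> bool:
    """Return True if any negative search term appears as a word."""
    words = tweet.lower().split()
    return any(term in words for term in negative_terms)

flagged = [t for t in tweets if has_negative_term(t)]
print(f"{len(flagged)}/{len(tweets)} tweets contain negative terms")
```

In the study, this kind of screen flagged 12,507 of 48,211 tweets (25.94%); the flagged subset was then sampled and validated by carer ratings rather than taken at face value.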

6.
JMIR Infodemiology ; 2(2): e36871, 2022.
Article in English | MEDLINE | ID: mdl-37113444

ABSTRACT

Background: Dementia misconceptions on Twitter can have detrimental or harmful effects. Machine learning (ML) models codeveloped with carers provide a method to identify these and help in evaluating awareness campaigns. Objective: This study aimed to develop an ML model to distinguish between misconceptions and neutral tweets and to develop, deploy, and evaluate an awareness campaign to tackle dementia misconceptions. Methods: Taking 1414 tweets rated by carers from our previous work, we built 4 ML models. Using 5-fold cross-validation, we evaluated them and performed a further blind validation with carers for the best 2 ML models; from this blind validation, we selected the best model overall. We codeveloped an awareness campaign and collected pre- and post-campaign tweets (N=4880), classifying them with our model as misconceptions or not. We analyzed dementia tweets from the United Kingdom across the campaign period (N=7124) to investigate how current events influenced misconception prevalence during this time. Results: A random forest model best identified misconceptions, with an accuracy of 82% from blind validation, and found that 37% of the UK tweets (N=7124) about dementia across the campaign period were misconceptions. From this, we could track how the prevalence of misconceptions changed in response to top news stories in the United Kingdom. Misconceptions significantly rose around political topics and were highest (22/28, 79% of the dementia tweets) when there was controversy over the UK government allowing hunting to continue during the COVID-19 pandemic. After our campaign, there was no significant change in the prevalence of misconceptions. Conclusions: Through codevelopment with carers, we developed an accurate ML model to predict misconceptions in dementia tweets. Our awareness campaign was ineffective, but similar campaigns could be enhanced through ML to respond to current events that affect misconceptions in real time.
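A minimal sketch of the kind of pipeline the abstract describes, a text classifier for misconception vs neutral tweets evaluated with 5-fold cross-validation. This is not the authors' code: the feature representation (TF-IDF) is an assumption, and the tweets and labels are invented:

```python
# Sketch: random forest tweet classifier with 5-fold cross-validation.
# Example tweets/labels are invented; the study used 1414 carer-rated tweets.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

tweets = [
    "dementia is a normal part of ageing",       # misconception
    "coconut oil cures dementia",                # misconception
    "visited a dementia friendly cafe today",    # neutral
    "new research on early dementia diagnosis",  # neutral
] * 10  # repeated so each CV fold contains both classes
labels = [1, 1, 0, 0] * 10  # 1 = misconception, 0 = neutral

# TF-IDF text features feeding a random forest, the model family
# the study found performed best.
model = make_pipeline(TfidfVectorizer(),
                      RandomForestClassifier(random_state=0))
scores = cross_val_score(model, tweets, labels, cv=5)
print(f"Mean 5-fold accuracy: {scores.mean():.2f}")
```

In practice the selected model would then be applied to new, unlabelled tweets (as the study did with N=4880 campaign tweets) to estimate misconception prevalence over time.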
