4.
Soc Sci Med ; 348: 116809, 2024 May.
Article in English | MEDLINE | ID: mdl-38547808

ABSTRACT

Representations of migrants influence how they are perceived by others. Hence, how children who have migrated or whose parents have migrated (Children in Migrant Families: CMFs) are represented in clinical practice guidelines (CPGs) for Swedish school health services (SHS) may influence how they are perceived by school nurses. Thus, this study aimed to explore representations of CMFs in school nurses' CPGs. Data consisted of 130 CPGs from municipalities in Sweden. Documents were analyzed using the "What is the Problem Represented to be" (WPR) approach - an analytic strategy for investigating embedded assumptions about "problems". In the analysis, Sara Ahmed's work on "strangers" and "strangeness" was applied. In the CPGs, the CMFs and their health were repeatedly mentioned in conjunction with a need for particular or additional actions, efforts, or routines when assessing or discussing their health, beyond what is "usually" provided. This need was motivated by representing the CMFs and their health as being the same as, yet different from, those of "Swedish" children in general. Thus, the children were not only represented as different; they were "foreignized". These representations of difference and foreignness placed the children on a continuum in relation to what is recognized as "familiar" in their health, and constructed elastic boundaries between the strange and the familiar. By illustrating how these boundaries were used for difference-making between "familiar" and "strange", this study showed how CMFs are alternately represented as similar and different, and foreignized even while being provided with SHS aiming to make them "familiar".


Subject(s)
Cultural Competency , Emigrants and Immigrants , Prejudice , Recognition, Psychology , School Health Services , Child , Female , Humans , Male , Cross-Cultural Comparison , Practice Guidelines as Topic , Prejudice/prevention & control , Prejudice/statistics & numerical data , School Nursing , Sweden
11.
Sci Rep ; 13(1): 10538, 2023 06 29.
Article in English | MEDLINE | ID: mdl-37386078

ABSTRACT

Everyday expression of prejudice continues to pose a social challenge across societies. We tend to assume that the more egalitarian people are, the more likely they are to confront prejudice - but this might not necessarily be the case. We tested this assumption in two countries (the US and Hungary) among majority members of society, using a behavioral paradigm for measuring confronting. Prejudice was directed at various outgroup minority individuals (African Americans, Muslims, and Latinos in the US, and Roma in Hungary). Across four experiments (N = 1116), we predicted and found that egalitarian (anti-prejudiced) values were associated only with hypothetical confronting intentions, not with actual confronting, and that stronger egalitarians were more likely to overestimate their confronting than weaker egalitarians - to the point that, while intentions differed, the actual confronting rates of stronger and weaker egalitarians were similar. We also predicted and found that such overestimation was associated with internal (and not external) motivation to respond without prejudice. We further identified behavioral uncertainty (being uncertain how to intervene) as a potential explanation for egalitarians' overestimation. The implications of these findings for egalitarians' self-reflection, intergroup interventions, and research are discussed.


Subject(s)
Human Rights , Prejudice , Humans , Black or African American , Prejudice/ethnology , Prejudice/prevention & control , United States , Hungary , Motivation , Self-Assessment , Social Behavior , Roma , Hispanic or Latino , Islam
12.
JAMA ; 329(4): 283-284, 2023 01 24.
Article in English | MEDLINE | ID: mdl-36602791

ABSTRACT

This Viewpoint discusses a proposed DHHS rule to address discrimination in clinical algorithms and the need for additional considerations to ensure the burden of liability for biased algorithms is not disproportionately placed on health care professionals.


Subject(s)
Algorithms , Delivery of Health Care , Prejudice , Social Discrimination , Bias , Prejudice/prevention & control , Social Discrimination/prevention & control , Delivery of Health Care/methods , Delivery of Health Care/standards
13.
JAMA ; 329(4): 285-286, 2023 01 24.
Article in English | MEDLINE | ID: mdl-36602795

ABSTRACT

This Viewpoint discusses recent legal directives by the DHHS and FDA that could increase health care entities' liability for possible discriminatory biases of clinical algorithms and the need for additional legal clarity to avoid adverse effects on algorithm development and use.


Subject(s)
Algorithms , Delivery of Health Care , Medical Device Legislation , Prejudice , Liability, Legal , Prejudice/legislation & jurisprudence , Prejudice/prevention & control , United States , Delivery of Health Care/legislation & jurisprudence , Delivery of Health Care/methods
14.
JAMA ; 329(4): 306-317, 2023 01 24.
Article in English | MEDLINE | ID: mdl-36692561

ABSTRACT

Importance: Stroke is the fifth-highest cause of death in the US and a leading cause of serious long-term disability, with particularly high risk in Black individuals. Quality risk prediction algorithms, free of bias, are key for comprehensive prevention strategies. Objective: To compare the performance of stroke-specific algorithms with pooled cohort equations developed for atherosclerotic cardiovascular disease for the prediction of new-onset stroke across different subgroups (race, sex, and age) and to determine the added value of novel machine learning techniques. Design, Setting, and Participants: Retrospective cohort study on combined and harmonized data from Black and White participants of the Framingham Offspring, Atherosclerosis Risk in Communities (ARIC), Multi-Ethnic Study of Atherosclerosis (MESA), and Reasons for Geographic and Racial Differences in Stroke (REGARDS) studies (1983-2019) conducted in the US. The 62 482 participants included at baseline were at least 45 years of age and free of stroke or transient ischemic attack. Exposures: Published stroke-specific algorithms from Framingham and REGARDS (based on self-reported risk factors) as well as pooled cohort equations for atherosclerotic cardiovascular disease, plus 2 newly developed machine learning algorithms. Main Outcomes and Measures: Models were designed to estimate the 10-year risk of new-onset stroke (ischemic or hemorrhagic). Discrimination (concordance index [C index]) and calibration (ratios of expected vs observed event rates) were assessed at 10 years. Analyses were conducted by race, sex, and age groups. Results: The combined study sample included 62 482 participants (median age, 61 years; 54% women; 29% Black individuals).
Discrimination C indexes were not significantly different for the 2 stroke-specific models (Framingham stroke, 0.72; 95% CI, 0.72-0.73; REGARDS self-report, 0.73; 95% CI, 0.72-0.74) vs the pooled cohort equations (0.72; 95% CI, 0.71-0.73): differences of 0.01 or less (P values >.05) in the combined sample. Significant differences in discrimination were observed by race: the C indexes were 0.76 for all 3 models in White women vs 0.69 in Black women (all P values <.001), and between 0.71 and 0.72 in White men vs between 0.64 and 0.66 in Black men (all P values ≤.001). When stratified by age, model discrimination was better for younger (<60 years) vs older (≥60 years) adults for both Black and White individuals. The ratios of observed to expected 10-year stroke rates were closest to 1 for the REGARDS self-report model (1.05; 95% CI, 1.00-1.09) and indicated risk overestimation for Framingham stroke (0.86; 95% CI, 0.82-0.89) and the pooled cohort equations (0.74; 95% CI, 0.71-0.77). Performance did not significantly improve when novel machine learning algorithms were applied. Conclusions and Relevance: In this analysis of Black and White individuals without stroke or transient ischemic attack among 4 US cohorts, existing stroke-specific risk prediction models and novel machine learning techniques did not significantly improve discriminative accuracy for new-onset stroke compared with the pooled cohort equations, and the REGARDS self-report model had the best calibration. All algorithms exhibited worse discrimination in Black individuals than in White individuals, indicating the need to expand the pool of risk factors and improve modeling techniques to address observed racial disparities and improve model performance.
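The two metrics reported in this abstract can be illustrated with a minimal sketch. This is not code from the article: the data and function names below are hypothetical, and the C index is computed here in its simple form for a binary 10-year outcome (fraction of event/non-event pairs where the event case received the higher predicted risk, ties counted as half-concordant), alongside the observed-to-expected calibration ratio, where a ratio below 1 indicates risk overestimation.

```python
def concordance_index(risks, events):
    """C index for a binary outcome: among all (event, non-event) pairs,
    the fraction where the event case has the higher predicted risk.
    Ties in predicted risk count as half-concordant."""
    pairs = concordant = 0.0
    for r_i, e_i in zip(risks, events):
        for r_j, e_j in zip(risks, events):
            if e_i == 1 and e_j == 0:  # comparable pair
                pairs += 1
                if r_i > r_j:
                    concordant += 1
                elif r_i == r_j:
                    concordant += 0.5
    return concordant / pairs

def observed_expected_ratio(risks, events):
    """Calibration: observed event rate divided by mean predicted risk.
    A ratio < 1 means the model overestimates risk on average."""
    observed = sum(events) / len(events)
    expected = sum(risks) / len(risks)
    return observed / expected

# Hypothetical predicted 10-year risks and outcomes (1 = stroke occurred)
risks = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
events = [1, 1, 0, 1, 0, 0]
print(concordance_index(risks, events))
print(observed_expected_ratio(risks, events))
```

In practice, survival-analysis versions of the C index (e.g. Harrell's C with censoring) are used for 10-year risk, but the pairwise-comparison intuition is the same.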


Subject(s)
Black People , Healthcare Disparities , Prejudice , Risk Assessment , Stroke , White People , Female , Humans , Male , Middle Aged , Atherosclerosis/epidemiology , Cardiovascular Diseases/epidemiology , Ischemic Attack, Transient/epidemiology , Retrospective Studies , Stroke/diagnosis , Stroke/epidemiology , Stroke/ethnology , Risk Assessment/standards , Reproducibility of Results , Sex Factors , Age Factors , Race Factors/statistics & numerical data , Black People/statistics & numerical data , White People/statistics & numerical data , United States/epidemiology , Machine Learning/standards , Bias , Prejudice/prevention & control , Healthcare Disparities/ethnology , Healthcare Disparities/standards , Healthcare Disparities/statistics & numerical data , Computer Simulation/standards , Computer Simulation/statistics & numerical data
15.
J Homosex ; 70(10): 1979-2010, 2023 Aug 24.
Article in English | MEDLINE | ID: mdl-35452360

ABSTRACT

Against the backdrop of the healthcare inequities and maltreatment facing LGBT patients, recommendations have been made for the inclusion of LGBT health topics in nursing curricula. Based on data collected in focus group discussions with South African nursing students, we complicate the assumption that training focused on health-specific knowledge will effectively reform providers' prejudicial practices. Findings reveal ambivalence: silence and discrimination versus inclusive humanism. Participants drew on discourses of ignorance, religion, and egalitarian treatment to justify their inadequacy regarding LGBT patients; while doing so, however, they deployed othering discourses in which homophobic and transphobic disregard is rendered acceptable and "scientifically" supported through binary, deterministic views of sexuality and gender. Such "expert" views accord with Foucault's notion of "grotesque discourse". We conclude with a discussion of the findings' implications for nursing education; we call for the recognition and teaching of binary ideology as a form of discursive violence against LGBT lives.


Subject(s)
Attitude of Health Personnel , Education, Nursing , Learning , Nurses , Patient Care , Sexual and Gender Minorities , Speech , Nurses/psychology , Education, Nursing/methods , Patient Care/methods , Humans , Male , Female , Healthcare Disparities , Prejudice/prevention & control , Prejudice/psychology , Focus Groups , South Africa , Curriculum , Interviews as Topic