Results 1 - 20 of 1,023
1.
Clin Dermatol ; 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39218323

ABSTRACT

Patient demand for procedures has increased in the evolving landscape of cosmetic dermatology. This has been fueled, in part, by social media and the growing normalization of cosmetic enhancements; however, this has led some patients to have potentially unrealistic expectations, placing undue pressure on dermatologists to meet these often unrealizable demands. This pressure is further exacerbated by patients who are seen as difficult, demanding, and time-consuming and who may require extensive counseling. Physicians may adopt dynamic or differential pricing strategies to offset the additional time and effort these patients require. We discuss the ethical concerns surrounding these pricing strategies in the cosmetic sphere, highlight the importance of transparency in pricing, and offer suggestions to promote clarity and fairness in cosmetic dermatology practices.

2.
J Clin Transl Sci ; 8(1): e94, 2024.
Article in English | MEDLINE | ID: mdl-39220818

ABSTRACT

Introduction: Patients with cystic fibrosis (CF) experience frequent episodes of acute decline in lung function called pulmonary exacerbations (PEx). An existing clinical and place-based precision medicine algorithm that accurately predicts PEx could include racial and ethnic biases in clinical and geospatial training data, leading to unintentional exacerbation of health inequities. Methods: We estimated receiver operating characteristic curves based on predictions from a nonstationary Gaussian stochastic process model for PEx within 3, 6, and 12 months among 26,392 individuals aged 6 years and above (2003-2017) from the US CF Foundation Patient Registry. We screened predictors to identify reasons for discriminatory model performance. Results: The precision medicine algorithm performed worse at predicting a PEx among Black patients than among White patients or patients of another race for all three prediction horizons. There was little to no difference in prediction accuracies between Hispanic and non-Hispanic patients for the same prediction horizons. Differences in F508del, smoking households, secondhand smoke exposure, primary and secondary road densities, distance and drive time to the CF center, and average number of clinical evaluations were key factors associated with race. Conclusions: Racial differences in prediction accuracies from our PEx precision medicine algorithm exist. Misclassification of future PEx was attributable to several underlying factors that correspond to race: CF mutation, location where the patient lives, and clinical awareness. Associations of our proxies with race for CF-related health outcomes can lead to systemic racism in data collection and in prediction accuracies from precision medicine algorithms constructed from such data.
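The group-wise discrimination check at the heart of this study can be sketched in a few lines: compute AUROC separately for each racial group and compare the values. This is a minimal illustration, not the registry's actual pipeline; the data and group labels in the test are hypothetical, and AUROC is computed via the rank-based (Mann-Whitney) formulation.

```python
# Sketch: auditing a risk model's discrimination per subgroup,
# in the spirit of the CF pulmonary-exacerbation study above.

def auroc(scores, labels):
    """AUROC via the Mann-Whitney statistic: the probability that a
    randomly chosen positive case outranks a randomly chosen negative one."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def auroc_by_group(scores, labels, groups):
    """Compute AUROC separately for each subgroup to expose performance gaps."""
    out = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        out[g] = auroc([scores[i] for i in idx], [labels[i] for i in idx])
    return out
```

A large spread between the per-group AUROC values is the kind of disparity the abstract reports.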

3.
Front Psychol ; 15: 1430492, 2024.
Article in English | MEDLINE | ID: mdl-39228874

ABSTRACT

Background: The development of a stable society is closely linked to a prevalent sense of social fairness. Participating in physical activities, which are inherently social, plays a crucial role in fostering mental stability within social contexts. Objective: This study aims to examine how physical exercise influences the sense of social fairness among college students, focusing on the potential mediating effects of perceived social support and life satisfaction. Methods: The study surveyed 496 Chinese college students using several scales: the Physical Activity Rating Scale-3 (PARS-3), Perceived Social Support Scale (PSSS), Satisfaction with Life Scale (SWLS), and Social Justice Scale (SJS). Results: (1) A positive correlation was found between physical exercise and sense of social fairness (r = 0.151, p < 0.01). A significant direct effect of physical exercise on sense of social fairness was also observed (ß = 0.151, t = 3.971, p < 0.01). (2) Physical exercise was a positive predictor of perceived social support (ß = 0.113, t = 4.062, p < 0.01), which in turn positively influenced both life satisfaction (ß = 0.333, t = 18.047, p < 0.01) and sense of social fairness (ß = 0.485, t = 6.931, p < 0.01). Additionally, life satisfaction had a positive effect on sense of social fairness (ß = 0.431, t = 3.247, p < 0.01). (3) Both perceived social support and life satisfaction significantly mediated the relationship between physical exercise and sense of social fairness through two pathways: physical exercise → perceived social support → sense of social fairness (mediating effect: 0.055); and physical exercise → perceived social support → life satisfaction → sense of social fairness (mediating effect: 0.016). Conclusion: (1) Physical exercise enhances both perceived social support and the sense of social fairness among college students, suggesting that it not only directly contributes to an enhanced sense of social fairness but also fosters supportive social relationships. 
(2) The influence of physical exercise on the sense of social fairness operates both directly and indirectly through the mediating roles of perceived social support and, sequentially, life satisfaction.
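The two mediating effects reported above follow from the standard product-of-coefficients logic of mediation analysis: multiplying the path coefficients given in the abstract reproduces the reported values (assuming rounding to three decimals).

```python
# Arithmetic check of the mediation effects, using the path
# coefficients reported in the abstract.

a1 = 0.113   # physical exercise -> perceived social support
b1 = 0.485   # perceived social support -> sense of social fairness
d1 = 0.333   # perceived social support -> life satisfaction
b2 = 0.431   # life satisfaction -> sense of social fairness

simple_indirect = a1 * b1        # exercise -> support -> fairness
serial_indirect = a1 * d1 * b2   # exercise -> support -> satisfaction -> fairness

print(round(simple_indirect, 3))  # 0.055, matching the reported effect
print(round(serial_indirect, 3))  # 0.016, matching the reported effect
```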

6.
Int J Psychophysiol ; 204: 112424, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39178992

ABSTRACT

Economic decision-making plays a paramount role in both individual and national interests. Individuals have fairness preferences in economic decision-making, but a proposer's moral-related information may affect fairness considerations. Prior ERP studies have suggested that moral identity influences fairness preferences in the Ultimatum Game (UG), but there are discrepancies in the results. Furthermore, whether role models (individuals to whom people look when deciding how to behave), who can modulate people's moral standards, can affect fairness concerns in the UG is still understudied. To address these questions, we selected moral-related statements, eliminating those with illegal information, and employed the ERP technique to explore whether the interplay of the proposer's role model and moral-related behavior influenced fairness processing in a modified UG, along with the corresponding neural mechanisms. We mainly found that this interaction effect on proposal considerations in the UG was mirrored in both rejection rates and P300 variations. The results demonstrate that the interaction between the proposer's role model and moral behavior can modulate fairness concerns in the UG. Our work provides new avenues for elucidating the time course of the mechanisms that shape fair distribution in complicated social environments.

7.
Lancet Reg Health Eur ; 41: 100804, 2024 Jun.
Article in English | MEDLINE | ID: mdl-39119096

ABSTRACT

The 2030 Sustainable Development Goals (SDG) agenda has committed to 'ensuring that no one is left behind'. Applying the right to health of non-citizens and international migrants is challenging in today's highly polarized political discourse on migration governance and integration. We explore the role of a priority setting approach in supporting better, fairer, and more transparent policy making in migration health. A priority setting approach must also incorporate migration health for a more efficient and fair allocation of scarce resources. Explicitly recognizing trade-offs as part of strategic planning would circumvent ad hoc decision-making during crises, which is ill-suited to fairness. Discussions surrounding decisions about expanding services to migrants or subgroups of migrants (which services, and to whom) should be transparent and fair. We conclude that a priority setting approach can help better inform policy making by aligning more closely with the practical challenges policy makers face in the progressive realization of migration health.

8.
Front Digit Health ; 6: 1351637, 2024.
Article in English | MEDLINE | ID: mdl-39119589

ABSTRACT

Introduction: Machine learning (ML) algorithms have been heralded as promising solutions for the realization of assistive systems in digital healthcare, due to their ability to detect fine-grained patterns that are not easily perceived by humans. Yet, ML algorithms have also been critiqued for treating individuals differently based on their demography, thus propagating existing disparities. This paper explores gender and race bias in speech-based ML algorithms that detect behavioral and mental health outcomes. Methods: This paper examines potential sources of bias in the data used to train the ML, encompassing acoustic features extracted from speech signals and associated labels, as well as in the ML decisions. The paper further examines approaches to reduce existing bias by using the features that are the least informative of one's demographic information as the ML input, and by transforming the feature space in an adversarial manner to diminish the evidence of the demographic information while retaining information about the focal behavioral and mental health state. Results: Results are presented in two domains, the first pertaining to gender and race bias when estimating levels of anxiety, and the second pertaining to gender bias in detecting depression. Findings indicate the presence of statistically significant differences in both acoustic features and labels among demographic groups, as well as differential ML performance among groups. The statistically significant differences present in the label space are partially preserved in the ML decisions. Although variations in ML performance across demographic groups were noted, results are mixed regarding the models' ability to accurately estimate healthcare outcomes for the sensitive groups.
Discussion: These findings underscore the necessity for careful and thoughtful design in developing ML models that are capable of maintaining crucial aspects of the data and perform effectively across all populations in digital healthcare applications.
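The first debiasing approach named in the abstract, keeping only the features least informative of demographics, can be sketched as a simple screening step. This is an illustrative simplification: plain correlation stands in for whatever informativeness measure the study actually used, and the feature names are hypothetical.

```python
# Sketch: rank acoustic features by how strongly they track a binary
# demographic attribute, then keep only the k least informative ones
# as the ML input. Correlation is a stand-in informativeness measure.

def corr(x, y):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def least_demographic_features(features, demographic, k):
    """Return the k feature names least associated with the demographic label."""
    ranked = sorted(features, key=lambda name: abs(corr(features[name], demographic)))
    return ranked[:k]
```

Features that survive this screen carry less evidence about the sensitive attribute, at the possible cost of some predictive signal.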

9.
J Intell ; 12(8)2024 Aug 06.
Article in English | MEDLINE | ID: mdl-39195124

ABSTRACT

The goal of this paper was to describe the context within which the PASS theory of intelligence was conceived and the reasons why this theory was used to guide the construction of the Cognitive Assessment System and the several versions of the Cognitive Assessment System, 2nd Edition. We also discuss validity issues such as equitable assessment of intelligence, using PASS scores to examine a pattern of strengths and weaknesses related to academic variability and diagnosis, and the utility of PASS scores for intervention. We provide summaries of the research that informs our suggestions that intelligence testing should be theory-based, not constrained by the seminal work of test developers in the early 1900s, and neurocognitive processes should be measured based on brain function.

10.
Sci Rep ; 14(1): 19704, 2024 08 24.
Article in English | MEDLINE | ID: mdl-39181915

ABSTRACT

The equitable allocation of resources has long been a central concern for humanity, prompting extensive research into various motivations that drive the pursuit of distributive justice. In contrast to one of the most fundamental motives, inequality aversion, a conflicting motive has been proposed: rank-reversal aversion. However, it remains unclear whether this rank-reversal aversion persists in the presence of self-rank. Here we provide evidence of rank-reversal aversion in the first-party context and explore diverse moral strategies for distribution. In a modified version of the redistribution game involving 55 online-recruited participants, we observed rank-reversal aversion only when one's rank was maintained. When participants' self-rank was altered, they tended to base their behavior on their new ranks. This behavioral tendency varied among individuals, revealing three distinct moral strategies, all incorporating considerations of rank-reversal. Our findings suggest that rank-reversal aversion can indeed influence one's distribution behavior, although the extent of its impact may vary among individuals, especially when self-rank is a factor. These insights can be extended to political and economic domains, contributing to a deeper understanding of the underlying mechanisms of distributive justice.


Subject(s)
Motivation, Social Justice, Humans, Male, Female, Adult, Social Justice/psychology, Morals, Resource Allocation, Young Adult
11.
J Neurodev Disord ; 16(1): 50, 2024 Aug 31.
Article in English | MEDLINE | ID: mdl-39217324

ABSTRACT

BACKGROUND: Sharing and fairness are important prosocial behaviors that help us navigate the social world. However, little is known about how and whether individuals with Williams Syndrome (WS) engage in these behaviors. The unique phenotype of individuals with WS, consisting of high social motivation and limited social cognition, can also offer insight into the role of social motivation in sharing and fairness when compared to typically developing (TD) individuals. The current study used established experimental paradigms to examine sharing and fairness in individuals with WS and TD individuals. METHODS: We compared a sample of patients with WS to TD children (6-year-olds) matched by mental age (MA) on two experimental tasks: the Dictator Game (DG, Experiment 1, N = 17 WS, 20 TD), in which adults modeled giving behavior, used to test sharing, and the Inequity Game (IG, Experiment 2, N = 14 WS, 17 TD), used to test fairness. RESULTS: Results showed that the WS group behaved similarly to the TD group for baseline giving in the DG and in the IG, rejecting disadvantageous offers but accepting advantageous ones. However, after viewing an adult model giving behavior, the WS group gave more than their baseline, with many individuals giving more than half, while the TD group gave less. Combined, these results suggest that social motivation is sufficient for sharing and, in particular, generous sharing, as well as for the self-focused form of fairness. Further, individuals with WS appear capable of both learning to be more generous and preventing disadvantageous outcomes, a more complex profile than previously known. CONCLUSIONS: In conclusion, the present study provides a snapshot of sharing and fairness-related behaviors in WS, contributing to our understanding of the intriguing social-behavioral phenotype associated with this developmental disorder.


Subject(s)
Motivation, Social Behavior, Williams Syndrome, Humans, Williams Syndrome/physiopathology, Williams Syndrome/psychology, Motivation/physiology, Male, Female, Child, Games, Experimental, Adult
12.
Brain Sci ; 14(8)2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39199481

ABSTRACT

To better understand individual differences in fairness, we used event-related potentials (ERPs) to explore the fairness characteristics of deaf college students through an ultimatum game task. Behaviorally, a significant main effect of proposal type was found: both deaf and hearing college students showed a lower acceptance rate for more unfair proposals. Interestingly, we found a significant interaction between group and proposal type in the early stage (N1). Moreover, in the deaf college group, the N1 induced by moderately and very unfair proposals was significantly larger than that induced by fair proposals. However, deaf college students had smaller amplitudes on P2 and P3 than hearing college students. These results suggest that deaf college students may pursue equity more strongly and are therefore more sensitive to unfair information in the early stage. In short, we should provide fairer allocations for deaf college students in a harmonious society.

13.
JMIR AI ; 3: e55820, 2024 Aug 20.
Article in English | MEDLINE | ID: mdl-39163597

ABSTRACT

BACKGROUND: Opioid use disorder (OUD) is a critical public health crisis in the United States, affecting >5.5 million Americans in 2021. Machine learning has been used to predict patient risk of incident OUD. However, little is known about the fairness and bias of these predictive models. OBJECTIVE: The aims of this study are two-fold: (1) to develop a machine learning bias mitigation algorithm for sociodemographic features and (2) to develop a fairness-aware weighted majority voting (WMV) classifier for OUD prediction. METHODS: We used the 2020 National Survey on Drug Use and Health data to develop a neural network (NN) model using stochastic gradient descent (SGD; NN-SGD) and an NN model using Adam (NN-Adam) optimizers and evaluated sociodemographic bias by comparing the area under the curve values. A bias mitigation algorithm, based on equality of odds, was implemented to minimize disparities in specificity and recall. Finally, a WMV classifier was developed for fairness-aware prediction of OUD. To further analyze bias detection and mitigation, we did a 1-N matching of OUD to non-OUD cases, controlling for socioeconomic variables, and evaluated the performance of the proposed bias mitigation algorithm and WMV classifier. RESULTS: Our bias mitigation algorithm substantially reduced bias with NN-SGD, by 21.66% for sex, 1.48% for race, and 21.04% for income, and with NN-Adam by 16.96% for sex, 8.87% for marital status, 8.45% for working condition, and 41.62% for race. The fairness-aware WMV classifier achieved a recall of 85.37% and 92.68% and an accuracy of 58.85% and 90.21% using NN-SGD and NN-Adam, respectively. The results after matching also indicated remarkable bias reduction with NN-SGD and NN-Adam, respectively, as follows: sex (0.14% vs 0.97%), marital status (12.95% vs 10.33%), working condition (14.79% vs 15.33%), race (60.13% vs 41.71%), and income (0.35% vs 2.21%).
Moreover, the fairness-aware WMV classifier achieved high performance with a recall of 100% and 85.37% and an accuracy of 73.20% and 89.38% using NN-SGD and NN-Adam, respectively. CONCLUSIONS: The application of the proposed bias mitigation algorithm shows promise in reducing sociodemographic bias, with the WMV classifier confirming bias reduction and high performance in OUD prediction.
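The two building blocks named in this abstract can be sketched under simplified assumptions: an equality-of-odds style audit (the gaps in recall and specificity between two sociodemographic groups, which the mitigation algorithm tries to shrink) and a weighted majority vote over two classifiers' predictions. This is an illustrative reading, not the paper's implementation.

```python
# Sketch: (1) equality-of-odds gap between two groups,
# (2) weighted majority voting over two binary classifiers.

def rates(y_true, y_pred):
    """Recall (sensitivity) and specificity for binary predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    recall = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return recall, specificity

def odds_gap(y_true, y_pred, groups, g0, g1):
    """Equality of odds holds when both returned gaps are zero."""
    def subset(g):
        return zip(*[(t, p) for t, p, gg in zip(y_true, y_pred, groups) if gg == g])
    r0, s0 = rates(*subset(g0))
    r1, s1 = rates(*subset(g1))
    return abs(r0 - r1), abs(s0 - s1)

def weighted_majority_vote(preds_a, preds_b, w_a, w_b):
    """Combine two classifiers' binary predictions by weighted vote."""
    return [1 if w_a * a + w_b * b >= (w_a + w_b) / 2 else 0
            for a, b in zip(preds_a, preds_b)]
```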

14.
Proc Natl Acad Sci U S A ; 121(33): e2408731121, 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39106305

ABSTRACT

AI is now an integral part of everyday decision-making, assisting us in both routine and high-stakes choices. These AI models often learn from human behavior, assuming this training data is unbiased. However, we report five studies that show that people change their behavior to instill desired routines into AI, indicating this assumption is invalid. To show this behavioral shift, we recruited participants to play the ultimatum game, where they were asked to decide whether to accept proposals of monetary splits made by either other human participants or AI. Some participants were informed their choices would be used to train an AI proposer, while others did not receive this information. Across five experiments, we found that people modified their behavior to train AI to make fair proposals, regardless of whether they could directly benefit from the AI training. After completing this task once, participants were invited to complete this task again but were told their responses would not be used for AI training. People who had previously trained AI persisted with this behavioral shift, indicating that the new behavioral routine had become habitual. This work demonstrates that using human behavior as training data has more consequences than previously thought, since it can lead AI to perpetuate human biases and cause people to form habits that deviate from how they would normally act. Therefore, this work underscores a problem for AI algorithms that aim to learn unbiased representations of human preferences.


Subject(s)
Artificial Intelligence, Decision Making, Humans, Decision Making/physiology, Male, Female, Adult, Choice Behavior/physiology, Young Adult
15.
Soc Dev ; 33(2)2024 May.
Article in English | MEDLINE | ID: mdl-38993500

ABSTRACT

This study examined children's responses to targeted and collective punishment. Thirty-six 4-5-year-olds and 36 6-7-year-olds (36 females; 54 White; data collected 2018-2019 in the United States) experienced three classroom punishment situations: Targeted (only transgressing student punished), Collective (one student transgressed, all students punished), and Baseline (all students transgressed, all punished). The older children evaluated collective punishment as less fair than targeted, whereas younger children evaluated both similarly. Across ages, children distributed fewer resources to teachers who administered collective than targeted punishment, and rated transgressors more negatively and distributed fewer resources to transgressors in Collective and Targeted than Baseline. These findings demonstrate children's increasing understanding of punishment and point to the potential impact of different forms of punishment on children's social lives.

16.
J Biomed Inform ; 157: 104692, 2024 Jul 14.
Article in English | MEDLINE | ID: mdl-39009174

ABSTRACT

BACKGROUND: An inherent difference exists between male and female bodies; the historical under-representation of females in clinical trials has widened this gap in existing healthcare data. The fairness of clinical decision-support tools is at risk when they are developed from biased data. This paper aims to quantitatively assess the gender bias in risk prediction models. We aim to generalize our findings by performing this investigation on multiple use cases at different hospitals. METHODS: First, we conduct a thorough analysis of the source data to find gender-based disparities. Second, we assess the model performance on different gender groups at different hospitals and on different use cases. Performance evaluation is quantified using the area under the receiver-operating characteristic curve (AUROC). Lastly, we investigate the clinical implications of these biases by analyzing the underdiagnosis and overdiagnosis rates and the decision curve analysis (DCA). We also investigate the influence of model calibration on mitigating gender-related disparities in decision-making processes. RESULTS: Our data analysis reveals notable variations in incidence rates, AUROC, and overdiagnosis rates across different genders, hospitals, and clinical use cases. However, it is also observed that the underdiagnosis rate is consistently higher in the female population. In general, the female population exhibits lower incidence rates, and the models perform worse when applied to this group. Furthermore, the decision curve analysis demonstrates that there is no statistically significant difference in the model's clinical utility across gender groups within the range of thresholds of interest. CONCLUSION: The presence of gender bias within risk prediction models varies across clinical use cases and healthcare institutions. Although an inherent difference is observed between male and female populations at the data source level, this variance does not affect the parity of clinical utility.
In conclusion, the evaluations conducted in this study highlight the significance of continuous monitoring of gender-based disparities, from multiple perspectives, for clinical risk prediction models.
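The audit described above, underdiagnosis and overdiagnosis rates broken down by gender, corresponds to the per-group false-negative and false-positive rates. A minimal sketch with hypothetical data (not the paper's hospital pipeline):

```python
# Sketch: per-gender underdiagnosis (false-negative rate) and
# overdiagnosis (false-positive rate) for a binary risk model.

def under_over_diagnosis(y_true, y_pred):
    """Return (underdiagnosis rate, overdiagnosis rate)."""
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    pos = sum(y_true)
    neg = len(y_true) - pos
    return fn / pos, fp / neg

def audit_by_gender(y_true, y_pred, gender):
    """Compute both rates separately for each gender group."""
    return {g: under_over_diagnosis(
                [t for t, gg in zip(y_true, gender) if gg == g],
                [p for p, gg in zip(y_pred, gender) if gg == g])
            for g in set(gender)}
```

A consistently higher first element (underdiagnosis) for one group, as the abstract reports for female patients, is the disparity of interest.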

17.
Diagn Interv Radiol ; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38953330

ABSTRACT

Although artificial intelligence (AI) methods hold promise for medical imaging-based prediction tasks, their integration into medical practice may present a double-edged sword due to bias (i.e., systematic errors). AI algorithms have the potential to mitigate cognitive biases in human interpretation, but extensive research has highlighted the tendency of AI systems to internalize biases within their model. This fact, whether intentional or not, may ultimately lead to unintentional consequences in the clinical setting, potentially compromising patient outcomes. This concern is particularly important in medical imaging, where AI has been more progressively and widely embraced than any other medical field. A comprehensive understanding of bias at each stage of the AI pipeline is therefore essential to contribute to developing AI solutions that are not only less biased but also widely applicable. This international collaborative review effort aims to increase awareness within the medical imaging community about the importance of proactively identifying and addressing AI bias to prevent its negative consequences from being realized later. The authors began with the fundamentals of bias by explaining its different definitions and delineating various potential sources. Strategies for detecting and identifying bias were then outlined, followed by a review of techniques for its avoidance and mitigation. Moreover, ethical dimensions, challenges encountered, and prospects were discussed.

18.
Article in English | MEDLINE | ID: mdl-38960729

ABSTRACT

OBJECTIVE: This study aims to develop machine learning models that provide both accurate and equitable predictions of 2-year stroke risk for patients with atrial fibrillation across diverse racial groups. MATERIALS AND METHODS: Our study utilized structured electronic health records (EHR) data from the All of Us Research Program. Machine learning models (LightGBM) were utilized to capture the relations between stroke risks and the predictors used by the widely recognized CHADS2 and CHA2DS2-VASc scores. We mitigated the racial disparity by creating a representative tuning set, customizing tuning criteria, and setting binary thresholds separately for subgroups. We constructed a hold-out test set that not only supports temporal validation but also includes a larger proportion of Black/African Americans for fairness validation. RESULTS: Compared to the original CHADS2 and CHA2DS2-VASc scores, significant improvements were achieved by modeling their predictors using machine learning models (Area Under the Receiver Operating Characteristic curve from near 0.70 to above 0.80). Furthermore, applying our disparity mitigation strategies can effectively enhance model fairness compared to the conventional cross-validation approach. DISCUSSION: Modeling CHADS2 and CHA2DS2-VASc risk factors with LightGBM and our disparity mitigation strategies achieved decent discriminative performance and excellent fairness performance. In addition, this approach can provide a complete interpretation of each predictor. These highlight its potential utility in clinical practice. CONCLUSIONS: Our research presents a practical example of addressing clinical challenges through the All of Us Research Program data. The disparity mitigation framework we proposed is adaptable across various models and data modalities, demonstrating broad potential in clinical informatics.
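One of the disparity-mitigation strategies named in this abstract, setting binary thresholds separately for subgroups, can be sketched as choosing, per group, the threshold that reaches a target recall on that group's own data. The scores, labels, and target below are hypothetical; the study's actual tuning criteria are richer than this.

```python
# Sketch: per-subgroup decision thresholds so every group meets
# the same target recall, rather than sharing one global cutoff.

def threshold_for_recall(scores, labels, target_recall):
    """Highest threshold whose recall on the positive cases meets the target."""
    pos_scores = sorted((s for s, y in zip(scores, labels) if y == 1), reverse=True)
    needed = max(1, int(round(target_recall * len(pos_scores))))
    return pos_scores[needed - 1]

def per_group_thresholds(scores, labels, groups, target_recall=0.8):
    """Pick a threshold independently for each subgroup."""
    out = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        out[g] = threshold_for_recall([scores[i] for i in idx],
                                      [labels[i] for i in idx], target_recall)
    return out
```

A group whose scores run systematically lower simply receives a lower cutoff, equalizing recall across groups by construction.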

19.
Data Brief ; 55: 110598, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38974007

ABSTRACT

In online food delivery apps, customers write reviews to reflect their experiences. However, certain restaurants use a "review event" strategy to solicit favorable reviews from customers and boost their revenue. A review event is a marketing strategy in which a restaurant owner gives free services to customers in return for a promise to write a review. Nevertheless, current datasets of app reviews for food delivery services neglect this situation. Furthermore, there appears to be an absence of datasets with reviews written in Korean. To fill this gap, this paper presents a dataset of reviews obtained from restaurants on a Korean app which use a review event strategy. A total of 128,668 reviews were gathered from 136 restaurants by crawling reviews with the Selenium library in Python. The dataset consists of detailed information for each review, including the ordered dishes, the time each review was written, whether a food image is included in the review, and various star ratings such as total, taste, quantity, and delivery ratings. This dataset supports an innovative process of preparing AI training data for achieving fair AI by offering a bias-free dataset of food delivery app reviews, with data poisoning attacks as an example. Additionally, the dataset is beneficial for researchers who are examining review events or analyzing the sentiment of food delivery app reviews.

20.
Diagnostics (Basel) ; 14(13)2024 Jun 26.
Article in English | MEDLINE | ID: mdl-39001244

ABSTRACT

Primary Immune Thrombocytopenia (ITP) is a rare autoimmune disease characterised by the immune-mediated destruction of peripheral blood platelets in patients leading to low platelet counts and bleeding. The diagnosis and effective management of ITP are challenging because there is no established test to confirm the disease and no biomarker with which one can predict the response to treatment and outcome. In this work, we conduct a feasibility study to check if machine learning can be applied effectively for the diagnosis of ITP using routine blood tests and demographic data in a non-acute outpatient setting. Various ML models, including Logistic Regression, Support Vector Machine, k-Nearest Neighbor, Decision Tree and Random Forest, were applied to data from the UK Adult ITP Registry and a general haematology clinic. Two different approaches were investigated: a demographic-unaware and a demographic-aware one. We conduct extensive experiments to evaluate the predictive performance of these models and approaches, as well as their bias. The results revealed that Decision Tree and Random Forest models were both superior and fair, achieving nearly perfect predictive and fairness scores, with platelet count identified as the most significant variable. Models not provided with demographic information performed better in terms of predictive accuracy but showed lower fairness scores, illustrating a trade-off between predictive performance and fairness.
