Results 1 - 20 of 1,052
1.
Ophthalmol Sci ; 5(1): 100596, 2025.
Article in English | MEDLINE | ID: mdl-39386055

ABSTRACT

Objective: Despite advances in artificial intelligence (AI) for glaucoma prediction, most work lacks a multicenter focus and does not consider fairness with respect to sex, race, or ethnicity. This study examines the impact of these sensitive attributes on developing fair AI models that predict whether glaucoma will progress to the point of requiring incisional surgery.

Design: Database study.

Participants: Thirty-nine thousand ninety patients with glaucoma, identified by International Classification of Diseases codes from 7 academic eye centers participating in the Sight OUtcomes Research Collaborative.

Methods: We developed XGBoost models using 3 approaches: (1) excluding sensitive attributes as input features, (2) including them explicitly as input features, and (3) training separate models for each group. Model input features included demographic details, diagnosis codes, medications, and clinical information (intraocular pressure, visual acuity, etc.) from electronic health records. The models were trained on patients from 5 sites (N = 27,999) and evaluated on a held-out internal test set (N = 3,499) and 2 external test sets of N = 1,550 and N = 2,542 patients.

Main Outcomes and Measures: Area under the receiver operating characteristic curve (AUROC) and equalized odds on the internal test set and external sites.

Results: Six thousand six hundred eighty-two (17.1%) of 39,090 patients underwent glaucoma surgery; mean age was 70.1 (standard deviation 14.6) years, and the cohort was 54.5% female, 62.3% White, 22.1% Black, and 4.7% Latinx/Hispanic. Excluding the sensitive attributes led to better classification performance (AUROC: 0.77-0.82) but worse fairness on the internal test set. On the external test sites, the opposite held: including the sensitive attributes yielded better classification performance (AUROC: external #1, 0.73-0.81; external #2, 0.67-0.70) but varying degrees of fairness for sex and race as measured by equalized odds.

Conclusions: AI models predicting whether patients with glaucoma progress to surgery demonstrated bias with respect to sex, race, and ethnicity. The effect of including or excluding sensitive attributes on fairness and performance differed between internal and external test sets. Before deployment, AI models should be evaluated for fairness on the target population.

Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
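The three modeling strategies and the equalized-odds evaluation described in this abstract can be illustrated with a minimal sketch. This is not the authors' code: it uses synthetic stand-in data (the SOURCE registry is not public), assumes the xgboost and scikit-learn packages, and all feature names are hypothetical.

```python
# Minimal sketch of the three modeling strategies on synthetic stand-in data.
import numpy as np
import xgboost as xgb
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 10))              # stand-in clinical features (IOP, VA, ...)
g = rng.integers(0, 2, size=n)            # stand-in binary sensitive attribute
y = (X[:, 0] + 0.3 * g + rng.normal(size=n) > 1).astype(int)  # synthetic outcome

def equalized_odds_gap(y_true, y_pred, group):
    """Largest between-group difference in TPR or FPR (0 = perfectly fair)."""
    gaps = []
    for outcome_mask in (y_true == 1, y_true == 0):   # TPR, then FPR
        rates = [y_pred[outcome_mask & (group == k)].mean()
                 for k in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

Xtr, Xte, gtr, gte, ytr, yte = train_test_split(X, g, y, random_state=0)

m_excl = xgb.XGBClassifier(n_estimators=50).fit(Xtr, ytr)              # (1) exclude
m_incl = xgb.XGBClassifier(n_estimators=50).fit(np.c_[Xtr, gtr], ytr)  # (2) include
m_grp = {k: xgb.XGBClassifier(n_estimators=50).fit(Xtr[gtr == k], ytr[gtr == k])
         for k in (0, 1)}                                              # (3) per group

print("AUROC (excluded):", roc_auc_score(yte, m_excl.predict_proba(Xte)[:, 1]))
print("EO gap (excluded):", equalized_odds_gap(yte, m_excl.predict(Xte), gte))
```

Equalized odds compares true-positive and false-positive rates across groups; a gap of 0 means the classifier's error rates are independent of the sensitive attribute.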

2.
Sci Eng Ethics ; 30(5): 46, 2024 Oct 09.
Article in English | MEDLINE | ID: mdl-39384600

ABSTRACT

The popularisation of Artificial Intelligence (AI) technologies has sparked discussion about their ethical implications. This development has forced governmental organisations, NGOs, and private companies to react and draft ethics guidelines for the future development of ethical AI systems. While many ethics guidelines address values familiar to ethicists, they tend to lack ethical justification. Furthermore, most neglect the impact of AI on democracy, governance, and public deliberation. Existing research suggests, however, that AI can threaten key, ethically relevant elements of Western democracies. In this paper, Rawls's theory of justice is applied to draft a set of guidelines for organisations and policy-makers to steer AI development in a more ethical direction. The goal is to broaden the discussion on AI ethics by exploring the possibility of constructing AI ethics guidelines that are philosophically justified and take a wider perspective on societal justice. The paper discusses how Rawls's theory of justice as fairness and its key concepts relate to ongoing developments in AI ethics, and proposes what principles offering a foundation for operationalising AI ethics in practice could look like if aligned with Rawls's theory of justice as fairness.


Subject(s)
Artificial Intelligence , Ethical Theory , Social Justice , Artificial Intelligence/ethics , Humans , Democracy , Guidelines as Topic
3.
Future Healthc J ; 11(3): 100177, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39371535

ABSTRACT

Artificial intelligence (AI) is a technology that enables computers to simulate human intelligence and has the potential to improve healthcare in a multitude of ways. However, there is also the possibility that it may perpetuate, or even exacerbate, current disparities. We discuss the problem of bias in healthcare and in AI, and highlight some of the ongoing and future solutions being researched in this area.

4.
Digit Health ; 10: 20552076241277705, 2024.
Article in English | MEDLINE | ID: mdl-39372817

ABSTRACT

Digitalization in medicine offers a significant opportunity to transform healthcare systems by providing novel digital tools and services to guide personalized prevention, prediction, diagnosis, treatment, and disease management. This transformation raises a number of novel socio-ethical considerations for individuals and society as a whole, which must be appropriately addressed to ensure that digital medical devices (DMDs) are widely adopted and benefit all patients as well as healthcare service providers. In this narrative review, based on a broad literature search in PubMed, Web of Science, and Google Scholar, we outline five core socio-ethical considerations in digital medicine that intersect with the notions of equity and digital inclusion: (i) access, use, and engagement with DMDs; (ii) inclusiveness in DMD clinical trials; (iii) algorithm fairness; (iv) surveillance and datafication; and (v) data privacy and trust. By integrating literature from multidisciplinary fields, including the social, medical, and computer sciences, we shed light on challenges and opportunities related to the development and adoption of DMDs. We begin with an overview of the different types of DMDs, followed by in-depth discussions of the five socio-ethical implications associated with their deployment. We conclude with evidence-based, multilevel recommendations aimed at fostering a more inclusive digital landscape, so that the development and integration of DMDs in healthcare mitigate, rather than cause, maintain, or exacerbate, health inequities.

5.
Br J Soc Psychol ; 2024 Oct 08.
Article in English | MEDLINE | ID: mdl-39377471

ABSTRACT

In this article, we investigate how being socially excluded (vs. included) affects people's distributive fairness judgements and their willingness to cooperate with others in subsequent interactions. For this purpose, we conducted three experiments in which we assessed individual differences in prior experiences of social exclusion (Experiment 1, N = 164) and manipulated social exclusion (Experiment 2, N = 120; Experiment 3, N = 492). We studied how this affected fairness judgements of three different outcome distributions (disadvantageous inequality, advantageous inequality, and equality), both within participants (Experiments 1 and 2) and between participants (Experiment 3). To assess behavioural consequences, we also measured participants' cooperation in a social dilemma game. Across the three experiments, we consistently found that social exclusion influenced fairness judgements: compared with included participants, excluded participants judged disadvantageous inequality as more unfair and advantageous inequality as less unfair. Moreover, socially excluded participants were more willing than socially included participants to cooperate after experiencing advantageous rather than disadvantageous inequality, and feelings of acceptance mediated these associations.

6.
Sci Rep ; 14(1): 22822, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39354030

ABSTRACT

Sense of agency (SoA) describes the feeling of control over one's actions and their consequences. One proposed index of implicit SoA is temporal compression, the phenomenon whereby voluntary actions and their outcomes are perceived as closer in time than they actually are. The present study measured temporal compression in situations involving social norm violations. In two experiments, participants played an Ultimatum Game (UG) in which they were presented with offers varying in fairness and could accept or reject them by pressing buttons. A neutral sound occurred after each choice, and participants estimated the time interval between their button press and the sound while EEG signals were recorded. Experiment 1 demonstrated that rejecting unfair offers decreased the perceived interval between action and outcome compared with accepting fair offers, suggesting a higher level of SoA after rejecting unfair offers. Experiment 2 replicated these results and further revealed an attenuated N1 in response to the sound following rejections of unfair offers. Taken together, these results highlight the importance of social norms in shaping people's behaviors and agency experiences.


Subject(s)
Brain , Electroencephalography , Humans , Male , Female , Adult , Young Adult , Brain/physiology , Evoked Potentials/physiology
7.
Psych J ; 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39351915

ABSTRACT

Previous studies have highlighted the critical role that belief in a just world (BJW) plays in maintaining and promoting prosocial behaviors. BJW, considered a stable personality trait, rests on the conviction that individuals receive what they deserve and deserve what they receive. The relationship between BJW and prosocial behavior is also shaped by an individual's sense of fairness or unfairness. However, past research has primarily focused on real-life prosocial behavior, with limited exploration of the relationship between BJW and online prosocial behavior. This study, comprising a survey and an experiment, aimed to examine this relationship more closely. The survey randomly sampled 4,212 college students to examine how BJW correlates with online prosocial behavior, and found a significant positive correlation between the two. The survey also explored how gender and place of origin influence these behaviors: male students and those from urban areas exhibited significantly higher online prosocial behavior. The experiment investigated differences in college students' online prosocial behavior under different fairness scenarios, revealing that online prosocial behavior in an unfair situation was significantly higher than in fair or neutral situations. Furthermore, in unfair situations, a significant correlation was observed between BJW and online prosocial behavior. These findings advance our understanding of the dynamics between BJW and online prosocial behavior among college students, showing that perceived injustices can markedly enhance prosocial behaviors in virtual settings, and underscore the modulating effects of gender and geographical background on online interactions.

8.
Neural Netw ; 181: 106781, 2024 Oct 05.
Article in English | MEDLINE | ID: mdl-39388994

ABSTRACT

Graph Neural Networks (GNNs) play a key role in efficiently learning node representations of graph-structured data through message passing, but their predictions are often correlated with sensitive attributes and can thus discriminate against some groups. Given the increasingly widespread application of GNNs, solutions are urgently required to prevent algorithmic discrimination, protect the rights of vulnerable groups, and build trustworthy artificial intelligence. To learn fair node representations of graphs, we propose a novel framework, the Fair Disentangled Graph Neural Network (FDGNN). Within this framework, we enhance data diversity through augmentation, generating instances that have identical sensitive attribute values but different adjacency matrices. Additionally, we design a counterfactual augmentation strategy that constructs instances with varying sensitive attribute values while preserving the same adjacency matrices, thereby balancing the distribution of sensitive values across groups. We then employ a disentangled contrastive learning strategy to acquire disentangled representations of non-sensitive attributes, so that sensitive information does not affect the prediction of node information. Finally, the learned fair representations of non-sensitive attributes are used to build a fair predictive model. Extensive experiments on three real-world datasets demonstrate that FDGNN achieves the best fairness results among the compared baseline methods, and illustrate the potential of disentanglement for learning fair representations.
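As a rough illustration of the two augmentation ideas this abstract describes (structure-varying augmentation with fixed sensitive values, and counterfactual augmentation with fixed structure), consider the sketch below. It is not the FDGNN implementation; the edge-dropping choice and all names are assumptions, and the disentangled contrastive learning stage is omitted.

```python
# Sketch of the two augmentations described in the FDGNN abstract (assumed,
# not the authors' code): (a) perturb adjacency with sensitive values fixed,
# (b) flip sensitive values with adjacency fixed.
import numpy as np

rng = np.random.default_rng(0)
n_nodes = 6
A = rng.random((n_nodes, n_nodes)) < 0.3
A = np.triu(A, 1)
A = (A | A.T).astype(float)               # symmetric adjacency, no self-loops
X = rng.normal(size=(n_nodes, 4))         # non-sensitive node features
s = rng.integers(0, 2, size=n_nodes)      # binary sensitive attribute

def structure_augment(A, drop_prob=0.1):
    """Same sensitive values, different adjacency: randomly drop edges."""
    keep = rng.random(A.shape) >= drop_prob
    keep = np.triu(keep, 1)
    keep = keep | keep.T                  # drop each edge symmetrically
    return A * keep

def counterfactual_augment(s):
    """Same adjacency, flipped sensitive values."""
    return 1 - s

A_view = structure_augment(A)      # positive pair for contrastive learning
s_cf = counterfactual_augment(s)   # counterfactual pair to balance groups
```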

9.
AI Soc ; 39(5): 2183-2199, 2024.
Article in English | MEDLINE | ID: mdl-39309255

ABSTRACT

Artificial intelligence (AI) is increasingly relied upon by clinicians for making diagnostic and treatment decisions, playing an important role in imaging, diagnosis, risk analysis, lifestyle monitoring, and health information management. While research has identified biases in healthcare AI systems and proposed technical solutions to address these, we argue that effective solutions require human engagement. Furthermore, there is a lack of research on how to motivate the adoption of these solutions and promote investment in designing AI systems that align with values such as transparency and fairness from the outset. Drawing on insights from psychological theories, we assert the need to understand the values that underlie decisions made by individuals involved in creating and deploying AI systems. We describe how this understanding can be leveraged to increase engagement with de-biasing and fairness-enhancing practices within the AI healthcare industry, ultimately leading to sustained behavioral change via autonomy-supportive communication strategies rooted in motivational and social psychology theories. In developing these pathways to engagement, we consider the norms and needs that govern the AI healthcare domain, and we evaluate incentives for maintaining the status quo against economic, legal, and social incentives for behavior change in line with transparency and fairness values.

10.
Foods ; 13(18), 2024 Sep 22.
Article in English | MEDLINE | ID: mdl-39335931

ABSTRACT

This work examines consumers' perceptions of products containing bee propolis, using the theory of planned behavior as its theoretical foundation. As antecedents of attitude, it employs price fairness, healthiness, eco-friendliness, and ease of use. A survey was administered to participants with experience using bee propolis products, recruited via the Clickworker platform service; in total, 305 valid observations were collected for analysis. A maximum likelihood-based structural equation model was used to test the research hypotheses, and the results show that price fairness, healthiness, eco-friendliness, and ease of use positively affected attitude. Moreover, intention to use was positively affected by attitude, subjective norms, and behavioral control. This research contributes to the literature by demonstrating the explanatory power of the theory of planned behavior with respect to bee propolis products.

11.
Crit Care ; 28(1): 301, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39267172

ABSTRACT

In the high-stakes realm of critical care, where daily decisions are crucial and clear communication is paramount, understanding the rationale behind Artificial Intelligence (AI)-driven decisions is essential. While AI has the potential to improve decision-making, its complexity can hinder comprehension of, and adherence to, its recommendations. "Explainable AI" (XAI) aims to bridge this gap by enhancing the confidence of patients and doctors. It also helps meet regulatory transparency requirements, offers actionable insights, and promotes fairness and safety. Yet defining explainability and standardising its assessment remain ongoing challenges, and trade-offs between performance and explainability may be required, even as XAI continues to grow as a field.


Subject(s)
Artificial Intelligence , Humans , Artificial Intelligence/trends , Artificial Intelligence/standards , Critical Care/methods , Critical Care/standards , Clinical Decision-Making/methods , Physicians/standards
15.
Brain Res Bull ; 217: 111082, 2024 Oct 15.
Article in English | MEDLINE | ID: mdl-39307435

ABSTRACT

Costly third-party punishment (TPP) is an effective way to enforce fairness norms and promote cooperation. Recent studies have shown that, when making punishment decisions, the third party considers not only the proposer's suggested allocation but also the receiver's response to it, a factor typically ignored in traditional TPP studies. However, it remains unclear whether and how varying unfair allocations and receivers' responses are integrated into third-party punishment. The current study addressed these issues at the behavioral and electrophysiological levels, employing a modified third-party punishment task involving proposers' highly or moderately unfair allocations and receivers' acceptance or rejection responses. At the behavioral level, participants punished proposers more often when receivers rejected, rather than accepted, unfair allocations. This effect was further modulated by the degree of unfairness of the allocations, with a more pronounced rejection-sensitive effect when participants observed moderately unfair offers. Electrophysiologically, when the receiver rejected moderately unfair allocations, we found a stronger late-stage P300/LPP component, thought to reflect the allocation of attentional resources. Meanwhile, separate from the P300/LPP, the P200, associated with early attention capture, also demonstrated a rejection-sensitive effect. In costly TPP studies, the receiver is typically cast as passive and silent, and her/his responses to unfairness are conventionally ignored; our results indicate that, beyond the proposer's distribution behavior, the receiver's response does influence third-party punishment, in a way that interacts with the unfairness of the allocations.


Subject(s)
Punishment , Humans , Male , Female , Young Adult , Adult , Electroencephalography/methods , Decision Making/physiology , Evoked Potentials/physiology , Cooperative Behavior , Brain/physiology , Attention/physiology
16.
Behav Brain Res ; 476: 115272, 2024 Sep 24.
Article in English | MEDLINE | ID: mdl-39326637

ABSTRACT

Cooperation is a universal human principle reflecting working with others to achieve common goals. The rational decision-making model contends that cooperation is the best strategy for maximizing benefits in an iterated prisoner's dilemma. However, the motivations for cooperation (or betrayal) are complex and diverse, and often involve reflections on fairness. In this study, we used functional magnetic resonance imaging to examine differences in the activity of fairness-related brain regions, at different decision-making stages, when people interact with an opponent who tends to cooperate or to betray. Results based on 40 university students (25 women) indicate that experiences of cooperation or betrayal affect people's fairness perception, with distinct neural activity in the expectation, decision, and outcome phases of decisions. In the expectation phase, participants in the cooperative condition exhibited increased activation in the anterior cingulate gyrus, medial superior frontal gyrus, and caudate nucleus compared with those in the uncooperative condition. During the decision phase, those in the cooperative condition showed greater activation in the middle frontal gyrus, caudate nucleus/frontal insula, inferior frontal gyrus, and cingulate gyrus. In the outcome feedback phase, the caudate nucleus, insula, cingulate gyrus, and orbital part of the inferior frontal gyrus were more active in the uncooperative condition than in the cooperative condition. Results also showed a significant correlation between caudate activity and the perception of fairness when expecting uncooperative behavior.

18.
Front Psychol ; 15: 1253831, 2024.
Article in English | MEDLINE | ID: mdl-39315034

ABSTRACT

Fairness constitutes a cornerstone of social norms, emphasizing equal treatment and equitable distribution in interpersonal relationships. Unfair treatment often leads to direct responses and can also spread to others through a phenomenon known as pay-it-forward (PIF) reciprocity. This study examined how unfairness spreads in interactions with new partners of higher, equal, or lower status than the participants. Participants (N = 47, all Korean) were given either fair or unfair treatment in the first round of a dictator game (DG1) and then, in a second round (DG2), allocated monetary resources among partners positioned at various hierarchical levels. Our main goal was to determine whether the severity of inequity inflicted on new partners was influenced by their hierarchical status. The results revealed an inclination among participants to act more generously towards higher-ranking partners despite prior unfair treatment, whereas harsher treatment was directed towards lower-ranking partners. The interaction between fairness in the first round (DG1) and the hierarchical status of the partner in the second round (DG2) was significant, indicating that the effect of previous fairness on decision-making depended on the ranking of the new partners. This study thus confirms the presence of unfairness PIF reciprocity within hierarchical contexts.

19.
Trends Ecol Evol ; 2024 Sep 26.
Article in English | MEDLINE | ID: mdl-39333000

ABSTRACT

Biodiversity is declining at alarming rates. Some negative impacts are caused by activities necessary to meet basic human needs, while others should be avoided to prevent ecological collapse. Avoiding biodiversity impacts is costly, and these costs must be distributed fairly. Principles of fair allocation, which are grounded in longstanding theories of justice and are mathematically operationalizable, are rarely used in biodiversity decision-making but can help deliver procedural and distributive justice alongside biodiversity outcomes. We show how incorporating rules of fair allocation into biodiversity decision-making could advance policy formulation towards a safe and just future. Such rules provide a means to operationalize equity and create space for cooperatively and constructively negotiating avoidance liabilities within biodiversity impact mitigation.
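For a concrete sense of what "mathematically operationalizable" allocation rules look like, the following sketch (not from the article; the claims values are hypothetical) implements two classical rules from the fair-division literature that could, in principle, be applied to splitting avoidance costs among actors.

```python
# Two classical allocation rules, illustrated on hypothetical liabilities.
def proportional_rule(claims, total):
    """Each actor pays in proportion to its claim."""
    s = sum(claims)
    return [total * c / s for c in claims]

def constrained_equal_awards(claims, total, tol=1e-9):
    """Equal shares, but no actor pays more than its claim (CEA rule)."""
    lo, hi = 0.0, max(claims)
    while hi - lo > tol:                  # bisect on the common per-actor cap
        cap = (lo + hi) / 2
        if sum(min(c, cap) for c in claims) < total:
            lo = cap
        else:
            hi = cap
    return [min(c, lo) for c in claims]

claims = [10.0, 30.0, 60.0]                    # hypothetical impact liabilities
print(proportional_rule(claims, 50.0))         # [5.0, 15.0, 30.0]
print(constrained_equal_awards(claims, 50.0))  # approx. [10.0, 20.0, 20.0]
```

The two rules encode different theories of justice: proportionality ties each actor's burden to its claim, while CEA equalizes burdens subject to no actor paying more than its claim.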

20.
Cogn Dev ; 70, 2024.
Article in English | MEDLINE | ID: mdl-39328307

ABSTRACT

Previous research has shown that morally-relevant theory of mind enables children to avoid blaming a peer for an accidental transgression. The current study investigated whether this form of theory of mind helps children recognize that gender inequalities are unfair and create negative emotional experiences. Further, the study examined this ability from three perspectives: the children's own, that of those advantaged by inequality, and that of those disadvantaged by inequality. Participants were 141 children (M age = 6.67 years, 49% female, 32% ethnic/racial minority) recruited from the mid-Atlantic region of the U.S. Experience with the negative consequences of gender bias and more advanced mental state understanding were associated with more negative evaluations of gender inequalities and more neutral attributions of others' emotions. These findings shed light on the role of different forms of mental state understanding in children's evaluations of gender-based inequalities.
