Results 1 - 14 of 14
1.
Med J Aust ; 220(8): 409-416, 2024 May 06.
Article in English | MEDLINE | ID: mdl-38629188

ABSTRACT

OBJECTIVE: To support a diverse sample of Australians to make recommendations about the use of artificial intelligence (AI) technology in health care. STUDY DESIGN: Citizens' jury, deliberating the question: "Under which circumstances, if any, should artificial intelligence be used in Australian health systems to detect or diagnose disease?" SETTING, PARTICIPANTS: Thirty Australian adults recruited by Sortition Foundation using random invitation and stratified selection to reflect population proportions by gender, age, ancestry, highest level of education, and residential location (state/territory; urban, regional, rural). The jury process took 18 days (16 March - 2 April 2023): fifteen days online and three days face-to-face in Sydney, where the jurors, both in small groups and together, were informed about and discussed the question, and developed recommendations with reasons. Jurors received extensive information: a printed handbook, online documents, and recorded presentations by four expert speakers. Jurors asked questions and received answers from the experts during the online period of the process, and during the first day of the face-to-face meeting. MAIN OUTCOME MEASURES: Jury recommendations, with reasons. RESULTS: The jurors recommended an overarching, independently governed charter and framework for health care AI. The other nine recommendation categories concerned balancing benefits and harms; fairness and bias; patients' rights and choices; clinical governance and training; technical governance and standards; data governance and use; open source software; AI evaluation and assessment; and education and communication. CONCLUSIONS: The deliberative process supported a nationally representative sample of citizens to construct recommendations about how AI in health care should be developed, used, and governed. Recommendations derived using such methods could guide clinicians, policy makers, AI researchers and developers, and health service users to develop approaches that ensure trustworthy and responsible use of this technology.


Subject(s)
Artificial Intelligence , Humans , Australia , Female , Male , Adult , Delivery of Health Care , Middle Aged , Aged
2.
J Med Ethics ; 2023 Feb 23.
Article in English | MEDLINE | ID: mdl-36823101

ABSTRACT

BACKGROUND: There is growing concern about artificial intelligence (AI) applications in healthcare that can disadvantage already under-represented and marginalised groups (eg, based on gender or race). OBJECTIVES: Our objectives are to canvass the range of strategies stakeholders endorse for mitigating algorithmic bias, and to consider the ethical question of responsibility for algorithmic bias. METHODOLOGY: The study involves in-depth, semistructured interviews with healthcare workers, screening programme managers, consumer health representatives, regulators, data scientists and developers. RESULTS: Findings reveal considerable divergence in views on three key issues. First, views on whether bias is a problem in healthcare AI varied: most participants agreed that bias is a problem (which we call the bias-critical view), a small number believed the opposite (the bias-denial view), and some argued that the benefits of AI outweigh any harms or wrongs arising from the bias problem (the bias-apologist view). Second, there was disagreement about which strategies should be used to mitigate bias, and about who is responsible for such strategies. Finally, there were divergent views on whether to include or exclude sociocultural identifiers (eg, race, ethnicity or gender-diverse identities) in the development of AI as a way to mitigate bias. CONCLUSION/SIGNIFICANCE: Based on the views of participants, we set out responses that stakeholders might pursue, including greater interdisciplinary collaboration, tailored stakeholder engagement activities, empirical studies to understand algorithmic bias, and strategies to modify dominant approaches in AI development, such as the use of participatory methods and increased diversity and inclusion in research teams and research participant recruitment and selection.

3.
J Med Philos ; 47(6): 735-748, 2022 12 23.
Article in English | MEDLINE | ID: mdl-36562842

ABSTRACT

Pathologizing ugliness refers to the use of disease language and medical processes to foster and support the claim that undesirable features are pathological conditions requiring medical or surgical intervention. Primarily situated in cosmetic surgery, the practice appeals to the concept of "aesthetic pathology", which is a medical designation for features that deviate from some designated aesthetic norms. This article offers a two-pronged conceptual analysis of aesthetic pathology. First, I argue that three sets of claims, derived from normativist and naturalistic accounts of disease, inform the framing of ugliness as a disease. These claims concern: (1) aesthetic harms, (2) aesthetic dysfunction, and (3) aesthetic deviation. Second, I introduce the notion of a hybridization loop in medicine, which merges the naturalist and normative understanding of the disease that potentially enables pathologizing practices. In the context of cosmetic surgery, the loop simultaneously promotes the framing of beauty ideals as normal biological attributes and the framing of normal appearance as an aesthetic ideal to legitimize the need for cosmetic interventions. The article thus offers an original discussion of the conceptual problems arising from a specific practice in cosmetic surgery that depicts ugliness as the disease.


Subject(s)
Medicine , Surgery, Plastic , Humans , Esthetics
4.
Health Care Anal ; 30(2): 163-195, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34704198

ABSTRACT

This article provides a critical comparative analysis of the substantive and procedural values and ethical concepts articulated in guidelines for allocating scarce resources in the COVID-19 pandemic. We identified 21 local and national guidelines written in English, Spanish, German and French; applicable to specific and identifiable jurisdictions; and providing guidance to clinicians for decision making when allocating critical care resources during the COVID-19 pandemic. US guidelines were not included, as these had recently been reviewed elsewhere. Information was extracted from each guideline on: 1) the development process; 2) the presence and nature of ethical, medical and social criteria for allocating critical care resources; and 3) the membership of and decision-making procedure of any triage committees. Our analysis shows that the majority of guidelines appealed primarily to consequentialist reasoning in making allocation decisions, tempered by a largely pluralistic approach to other substantive and procedural values and ethical concepts. Medical and social criteria included medical need, co-morbidities, prognosis, age, disability and other factors, with a focus on seemingly objective medical criteria. There was little or no guidance on how to reconcile competing criteria, and little attention to internal contradictions within individual guidelines. Our analysis reveals the challenges in developing sound ethical guidance for allocating scarce medical resources, highlighting problems in operationalising ethical concepts and principles, divergence between guidelines, unresolved contradictions within the same guideline, and use of naïve objectivism in employing widely used medical criteria for allocating ICU resources.


Subject(s)
COVID-19 , COVID-19/epidemiology , Critical Care , Health Care Rationing , Humans , Intensive Care Units , Pandemics , Triage/methods
5.
Bioethics ; 34(4): 431-441, 2020 05.
Article in English | MEDLINE | ID: mdl-32036617

ABSTRACT

Pathologizing ugliness refers to the framing of unattractive features as a type of disease or deformity. By framing ugliness as pathology, cosmetic procedures are reframed as therapy rather than enhancement, thereby potentially avoiding ethical critiques regularly levelled against cosmetic surgery. As such, the practice of pathologizing ugliness and the ensuing therapeuticalization of cosmetic procedures require an ethical analysis that goes beyond that offered by current enhancement critiques. In this article, I propose using a thick description of the goals of medicine as an ethical framework for evaluating problematic medical practices. I first describe the goals of medicine based on Daniel Callahan's account. I then propose that the goals work best in conjunction with ancillary ethical concepts, namely medical knowledge and skills, standards of practice and medical duties and virtues. Next, I apply the thick description of the goals of medicine in critiquing the practice of framing ugliness as disease. Here, I demonstrate ethical conflicts between aesthetic judgments that underpin the practice of pathologizing ugliness and medical judgments that inform ethical medical practices. In particular, the thick description of the goals of medicine helps reveal ethical conflicts in at least three key domains common to clinical practices, which include (a) disease determination, (b) diagnostic evaluation and (c) establishing clinical indications. My analysis offers a novel way of critiquing the practice of pathologizing ugliness in cosmetic surgery, which tends to be neglected by enhancement critiques.


Subject(s)
Ethical Analysis , Medicalization/ethics , Physical Appearance, Body , Surgery, Plastic/ethics , Esthetics , Ethics, Medical , Goals , Humans
7.
Med Health Care Philos ; 19(3): 431-41, 2016 Sep.
Article in English | MEDLINE | ID: mdl-26983846

ABSTRACT

This review aims to identify (1) the sources of knowledge and (2) the important themes of the ethical debate on the surgical alteration of facial features in East Asians. The article integrates narrative and systematic review methods. In March 2014, we searched databases including PubMed, Philosopher's Index, Web of Science, Sociological Abstracts, and Communication Abstracts using the key terms "cosmetic surgery," "ethnic*," "ethics," "Asia*," and "Western*." The study included all types of papers written in English that discuss the debate on rhinoplasty and blepharoplasty in East Asians, with no limit on date of publication. A total of 31 articles were critically appraised for their contribution to ethical reflection on the debates regarding the surgical alteration of Asian features. Sources of knowledge were drawn from four main disciplines: the humanities, medicine or surgery, communications, and economics. Focusing on cosmetic surgery perceived as a westernising practice, the key debate themes included authenticity of identity, interpersonal relationships, and socio-economic utility in the context of Asian culture. The study shows how cosmetic surgery of ethnic features plays an important role in understanding female identity in the Asian context. Based on the debate themes of authenticity of identity, interpersonal relationships, and socio-economic utility, this article argues that identity should be understood as less individualistic and more relational and transformational in the Asian context. The article also proposes considering cosmetic surgery of Asian features as an interplay of cultural imperialism and cultural nationalism, both of which can be a source of social pressure to modify one's appearance.


Subject(s)
Asian People , Beauty , Face , Surgery, Plastic , Asian People/psychology , Blepharoplasty/ethics , Female , Humans , Male , Rhinoplasty/ethics , Surgery, Plastic/ethics , Surgery, Plastic/psychology
8.
Int J Med Inform ; 186: 105417, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38564959

ABSTRACT

OBJECTIVE: With the recent increase in research into public views on healthcare artificial intelligence (HCAI), the objective of this review is to examine the methods of empirical studies on public views on HCAI. We map how studies provided participants with information about HCAI, and we examine the extent to which studies framed publics as active contributors to HCAI governance. MATERIALS AND METHODS: We searched 5 academic databases and Google Advanced for empirical studies investigating public views on HCAI. We extracted information including study aims, research instruments, and recommendations. RESULTS: Sixty-two studies were included. Most were quantitative (N = 42). Most (N = 47) reported providing participants with background information about HCAI. Despite this, studies often reported participants' lack of prior knowledge about HCAI as a limitation. Over three quarters (N = 48) of the studies made recommendations that envisaged public views being used to guide governance of AI. DISCUSSION: Provision of background information is an important component of facilitating research with publics on HCAI. The high proportion of studies reporting participants' lack of knowledge about HCAI as a limitation reflects the need for more guidance on how information should be presented. A minority of studies adopted technocratic positions that construed publics as passive beneficiaries of AI, rather than as active stakeholders in HCAI design and implementation. CONCLUSION: This review draws attention to how public roles in HCAI governance are constructed in empirical studies. To facilitate active participation, we recommend that research with publics on HCAI consider methodological designs that expose participants to diverse information sources.


Subject(s)
Artificial Intelligence , Delivery of Health Care , Humans , Health Facilities
9.
Int J Med Inform ; 169: 104903, 2023 01.
Article in English | MEDLINE | ID: mdl-36343512

ABSTRACT

BACKGROUND: Alongside the promise of improving clinical work, advances in healthcare artificial intelligence (AI) raise concerns about the risk of deskilling clinicians. The purpose of this study is to examine the issue of deskilling from the perspective of a diverse group of professional stakeholders with knowledge and/or experience in the development, deployment and regulation of healthcare AI. METHODS: We conducted qualitative, semi-structured interviews with 72 professionals with AI expertise and/or professional or clinical expertise who were involved in the development, deployment and/or regulation of healthcare AI. Data analysis, using a combined constructivist grounded theory and framework approach, was performed concurrently with data collection. FINDINGS: Our analysis showed participants had diverse views on three contentious issues regarding AI and deskilling. The first involved competing views about the proper extent of AI-enabled automation in healthcare work, and which clinical tasks should or should not be automated; we identified a cluster of characteristics of tasks considered more suitable for automation. The second involved expectations about the impact of AI on clinical skills, and whether AI-enabled automation would lead to worse or better quality of healthcare. The third tension implicitly contrasted two models of healthcare work, a human-centric model and a technology-centric model, which assumed different values and priorities for healthcare work and its relationship to AI-enabled automation. CONCLUSION: Our study shows that a diverse group of professional stakeholders involved in healthcare AI development, acquisition, deployment and regulation are attentive to the potential impact of healthcare AI on clinical skills, but hold different views about the nature and valence (positive or negative) of this impact. Detailed engagement with different types of professional stakeholders allowed us to identify relevant concepts and values that could guide decisions about AI algorithm development and deployment.


Subject(s)
Artificial Intelligence , Humans , Delivery of Health Care
10.
BMJ Health Care Inform ; 30(1)2023 May.
Article in English | MEDLINE | ID: mdl-37257921

ABSTRACT

Objectives: Applications of artificial intelligence (AI) have the potential to improve aspects of healthcare. However, studies have shown that healthcare AI algorithms can also perpetuate existing inequities in healthcare, performing less effectively for marginalised populations. Studies of public attitudes towards AI outside the healthcare field have tended to show higher levels of support for AI among socioeconomically advantaged groups, which are less likely to suffer algorithmic harms. We aimed to examine the sociodemographic predictors of support for scenarios related to healthcare AI. Methods: The Australian Values and Attitudes toward AI survey was conducted in March 2020 to assess Australians' attitudes towards AI in healthcare. An innovative weighting methodology involved weighting a non-probability web-based panel against results from a shorter omnibus survey distributed to a representative sample of Australians. We used multinomial logistic regression to examine the relationship between support for AI and a suite of sociodemographic variables in various healthcare scenarios. Results: While support for AI in general was predicted by measures of socioeconomic advantage such as education, household income and Socio-Economic Indexes for Areas index, the same variables were not predictors of support for the healthcare AI scenarios presented. Variables associated with support for healthcare AI included being male, having computer science or programming experience, and being aged between 18 and 34 years. Other Australian studies suggest that these groups may have a higher level of perceived familiarity with AI. Conclusion: Our findings suggest that while support for AI in general is predicted by indicators of social advantage, these same indicators do not predict support for healthcare AI.


Subject(s)
Artificial Intelligence , Delivery of Health Care , Male , Humans , Adolescent , Young Adult , Adult , Female , Australia , Socioeconomic Factors
11.
Soc Sci Med ; 338: 116357, 2023 12.
Article in English | MEDLINE | ID: mdl-37949020

ABSTRACT

INTRODUCTION: Despite the proliferation of artificial intelligence (AI) technology over the last decade, clinician, patient, and public perceptions of its use in healthcare raise a number of ethical, legal and social questions. We systematically review the literature on attitudes towards the use of AI in healthcare from the perspectives of patients, the general public and health professionals, to understand these issues from multiple viewpoints. METHODOLOGY: A search for original research articles using qualitative, quantitative, and mixed methods published between 1 Jan 2001 and 24 Aug 2021 was conducted in six bibliographic databases. Data were extracted and classified into themes representing views on: (i) knowledge and familiarity of AI; (ii) AI benefits, risks, and challenges; (iii) AI acceptability; (iv) AI development; (v) AI implementation; (vi) AI regulations; and (vii) the human-AI relationship. RESULTS: The final search identified 7,490 records, of which 105 publications were selected based on predefined inclusion/exclusion criteria. While the majority of patients, members of the general public and health professionals had a generally positive attitude towards the use of AI in healthcare, all groups indicated some perceived risks and challenges. Commonly perceived risks included data privacy, reduced professional autonomy, algorithmic bias, healthcare inequities, and greater burnout arising from the need to acquire AI-related skills. While patients had mixed opinions on whether healthcare workers would suffer job losses due to the use of AI, health professionals strongly indicated that AI would not be able to completely replace them in their professions. Both groups shared similar doubts about AI's ability to deliver empathic care. The need for AI validation, transparency, explainability, and patient and clinician involvement in the development of AI was emphasised. To help implement AI in healthcare successfully, most participants envisioned that investment in training and education campaigns would be necessary, especially for health professionals. Lack of familiarity, lack of trust, and regulatory uncertainties were identified as factors hindering AI implementation. Regarding AI regulations, key themes included data access and data privacy. While the general public and patients exhibited a willingness to share anonymised data for AI development, concerns remained about sharing data with insurance or technology companies. One key question under this theme was who should be held accountable in the case of adverse events arising from the use of AI. CONCLUSIONS: While attitudes and preferences toward AI use in healthcare remain broadly positive, some prevalent problems require more attention. There is a need to go beyond addressing algorithm-related issues and to examine how legislation and guidelines translate into practice, to ensure fairness, accountability, transparency, and ethics in AI.


Subject(s)
Algorithms , Artificial Intelligence , Humans , Educational Status , Emotions , Empathy
12.
Syst Rev ; 11(1): 142, 2022 07 15.
Article in English | MEDLINE | ID: mdl-35841073

ABSTRACT

BACKGROUND: In recent years, innovations in artificial intelligence (AI) have led to the development of new healthcare AI (HCAI) technologies. Whilst some of these technologies show promise for improving the patient experience, ethicists have warned that AI can introduce and exacerbate harms and wrongs in healthcare. It is important that HCAI reflects the values that are important to people. However, involving patients and publics in research about AI ethics remains challenging due to relatively limited awareness of HCAI technologies. This scoping review aims to map how the existing literature on publics' views on HCAI addresses key issues in AI ethics and governance. METHODS: We developed a search query to conduct a comprehensive search of PubMed, Scopus, Web of Science, CINAHL, and Academic Search Complete from January 2010 onwards. We will include primary research studies which document publics' or patients' views on machine learning HCAI technologies. A coding framework has been designed and will be used to capture qualitative and quantitative data from the articles. Two reviewers will code a proportion of the included articles, and any discrepancies will be discussed amongst the team, with changes made to the coding framework accordingly. Final results will be reported quantitatively and qualitatively, examining how each AI ethics issue has been addressed by the included studies. DISCUSSION: Consulting publics and patients about the ethics of HCAI technologies and innovations can offer important insights to those seeking to implement HCAI ethically and legitimately. This review will explore how ethical issues are addressed in the literature examining publics' and patients' views on HCAI, with the aim of determining the extent to which publics' views on HCAI ethics have been addressed in existing research. This has the potential to support the development of implementation processes and regulation for HCAI that incorporates publics' values and perspectives.


Subject(s)
Artificial Intelligence , Delivery of Health Care , Health Facilities , Humans , Machine Learning , Review Literature as Topic
13.
Theor Med Bioeth ; 38(3): 213-225, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28105531

ABSTRACT

The popularity of surgical modifications of race-typical features among Asian women has generated debates on the ethical implications of the practice. Focusing on blepharoplasty as a representative racial surgery, this article frames the ethical discussion by viewing Asian cosmetic surgery as an example of medicalization, which can be interpreted in two forms: treatment versus enhancement. In the treatment form, medicalization occurs by considering cosmetic surgery as remedy for pathologized Asian features; the pathologization usually occurs in reference to western features as the norm. In the enhancement form, medicalization occurs by using medical means to improve physical features to achieve a certain type of beauty or physical appearance. Each type of medicalization raises slightly different ethical concerns. The problem with treatment medicalization lies in the pathologization of Asian features, which is oppressive as it continues to reinforce racial norms of appearance and negative stereotypes. Enhancement medicalization is ethically problematic because cosmetic surgery tends to conflate beauty and health as medical goals of surgery, overemphasizing the value of appearance that can further displace women's control over their own bodies. I conclude that in both forms of medicalization, cosmetic surgery seems to narrowly frame a complex psychosocial issue involving physical appearance as a matter that can be simply solved through surgical means.


Subject(s)
Asian People/psychology , Blepharoplasty/ethics , Surgery, Plastic , Humans , Medicalization , Surgery, Plastic/ethics , Surgery, Plastic/psychology