Results 1 - 20 of 53
2.
Health Promot Int ; 39(2), 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38558241

ABSTRACT

Although digital health promotion (DHP) technologies for young people are increasingly available in low- and middle-income countries (LMICs), there has been insufficient research investigating whether existing ethical and policy frameworks are adequate to address the challenges and promote the technological opportunities in these settings. In an effort to fill this gap and as part of a larger research project, in November 2022, we conducted a workshop in Cape Town, South Africa, entitled 'Unlocking the Potential of Digital Health Promotion for Young People in Low- and Middle-Income Countries'. The workshop brought together 25 experts from the areas of digital health ethics, youth health and engagement, health policy and promotion, and technology development, predominantly from sub-Saharan Africa (SSA), to explore their views on the ethics, governance and potential policy pathways of DHP for young people in LMICs. Using the World Café method, participants contributed their views on (i) the advantages and barriers associated with DHP for youth in LMICs, (ii) the availability and relevance of ethical and regulatory frameworks for DHP and (iii) the translation of ethical principles into policies and implementation practices required by these policies, within the context of SSA. Our thematic analysis of the ensuing discussion revealed a willingness to foster such technologies if they prove safe, do not exacerbate inequalities, put youth at the center and are subject to appropriate oversight. In addition, our work has led to the potential translation of fundamental ethical principles into a policy roadmap for ethically aligned DHP for youth in SSA.


Subject(s)
Digital Health, Health Policy, Humans, Adolescent, South Africa, Health Promotion
3.
Nat Commun ; 15(1): 1619, 2024 Feb 22.
Article in English | MEDLINE | ID: mdl-38388497

ABSTRACT

The Consolidated Standards of Reporting Trials extension for Artificial Intelligence interventions (CONSORT-AI) was published in September 2020. Since its publication, several randomised controlled trials (RCTs) of AI interventions have been published, but the completeness and transparency of their reporting are unknown. This systematic review assesses the completeness of reporting of AI RCTs following publication of CONSORT-AI and provides a comprehensive summary of RCTs published in recent years. Sixty-five RCTs were identified, mostly conducted in China (37%) and the USA (18%). Median concordance with CONSORT-AI reporting was 90% (IQR 77-94%), although only 10 RCTs explicitly reported its use. Several items were consistently under-reported, including algorithm version, accessibility of the AI intervention or code, and references to a study protocol. Only 3 of 52 included journals explicitly endorsed or mandated CONSORT-AI. Despite a generally high concordance amongst recent AI RCTs, some AI-specific considerations remain systematically poorly reported. Further encouragement of CONSORT-AI by journals and funders may enable more complete adoption of the full guidelines.
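For readers unfamiliar with how a concordance figure such as the 90% median above is typically derived, the sketch below shows the basic arithmetic: each trial's concordance is the share of applicable checklist items it reports, and the per-trial values are then summarised as a median with an IQR. The item names and scores are hypothetical and are not taken from the review.

```python
# Sketch of a checklist-concordance calculation (illustrative item names and
# data only; this is not the review's extraction dataset or scoring code).
import statistics

def concordance(reported: set[str], applicable: set[str]) -> float:
    """Percentage of applicable checklist items that a trial report addresses."""
    return 100 * len(reported & applicable) / len(applicable)

# One hypothetical trial: 3 of 4 applicable AI-specific items reported -> 75%.
applicable = {"algorithm version", "input/output handling",
              "human-AI interaction", "error analysis"}
reported = {"input/output handling", "human-AI interaction", "error analysis"}
print(f"trial concordance: {concordance(reported, applicable):.0f}%")

# Summarising hypothetical per-trial concordance values as median (IQR).
scores = [94, 90, 77, 88, 92, 81]
q1, median, q3 = statistics.quantiles(scores, n=4)
print(f"median {median:.0f}% (IQR {q1:.0f}-{q3:.0f}%)")
```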


Subject(s)
Artificial Intelligence, Reference Standards, China, Randomized Controlled Trials as Topic
4.
Rev. panam. salud pública ; 48: e13, 2024. tab, graf
Article in Spanish | LILACS-Express | LILACS | ID: biblio-1536672

ABSTRACT

The CONSORT 2010 statement provides minimum guidelines for reporting randomized trials. Its widespread use has been instrumental in ensuring transparency in the evaluation of new interventions. More recently, there has been a growing recognition that interventions involving artificial intelligence (AI) need to undergo rigorous, prospective evaluation to demonstrate impact on health outcomes. The CONSORT-AI (Consolidated Standards of Reporting Trials-Artificial Intelligence) extension is a new reporting guideline for clinical trials evaluating interventions with an AI component. It was developed in parallel with its companion statement for clinical trial protocols: SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials-Artificial Intelligence). Both guidelines were developed through a staged consensus process involving literature review and expert consultation to generate 29 candidate items, which were assessed by an international multi-stakeholder group in a two-stage Delphi survey (103 stakeholders), agreed upon in a two-day consensus meeting (31 stakeholders) and refined through a checklist pilot (34 participants). The CONSORT-AI extension includes 14 new items that were considered sufficiently important for AI interventions that they should be routinely reported in addition to the core CONSORT 2010 items. CONSORT-AI recommends that investigators provide clear descriptions of the AI intervention, including instructions and skills required for use, the setting in which the AI intervention is integrated, the handling of inputs and outputs of the AI intervention, the human-AI interaction and provision of an analysis of error cases. CONSORT-AI will help promote transparency and completeness in reporting clinical trials for AI interventions. It will assist editors and peer reviewers, as well as the general readership, to understand, interpret and critically appraise the quality of clinical trial design and risk of bias in the reported outcomes.



5.
Rev. panam. salud pública ; 48: e12, 2024. tab, graf
Article in Spanish | LILACS-Express | LILACS | ID: biblio-1536674

ABSTRACT

The SPIRIT 2013 statement aims to improve the completeness of clinical trial protocol reporting by providing evidence-based recommendations for the minimum set of items to be addressed. This guidance has been instrumental in promoting transparent evaluation of new interventions. More recently, there has been a growing recognition that interventions involving artificial intelligence (AI) need to undergo rigorous, prospective evaluation to demonstrate their impact on health outcomes. The SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials-Artificial Intelligence) extension is a new reporting guideline for clinical trial protocols evaluating interventions with an AI component. It was developed in parallel with its companion statement for trial reports: CONSORT-AI (Consolidated Standards of Reporting Trials-Artificial Intelligence). Both guidelines were developed through a staged consensus process involving literature review and expert consultation to generate 26 candidate items, which were consulted upon by an international multi-stakeholder group in a two-stage Delphi survey (103 stakeholders), agreed upon in a consensus meeting (31 stakeholders) and refined through a checklist pilot (34 participants). The SPIRIT-AI extension includes 15 new items that were considered sufficiently important for clinical trial protocols of AI interventions. These new items should be routinely reported in addition to the core SPIRIT 2013 items. SPIRIT-AI recommends that investigators provide clear descriptions of the AI intervention, including instructions and skills required for use, the setting in which the AI intervention will be integrated, considerations for the handling of input and output data, the human-AI interaction and analysis of error cases. SPIRIT-AI will help promote transparency and completeness for clinical trial protocols for AI interventions. Its use will assist editors and peer reviewers, as well as the general readership, to understand, interpret and critically appraise the design and risk of bias for a planned clinical trial.



6.
Patterns (N Y) ; 4(11): 100864, 2023 Nov 10.
Article in English | MEDLINE | ID: mdl-38035190

ABSTRACT

Artificial intelligence (AI) tools are of great interest to healthcare organizations for their potential to improve patient care, yet their translation into clinical settings remains inconsistent. One of the reasons for this gap is that good technical performance does not inevitably result in patient benefit. We advocate for a conceptual shift wherein AI tools are seen as components of an intervention ensemble. The intervention ensemble describes the constellation of practices that, together, bring about benefit to patients or health systems. Shifting from a narrow focus on the tool itself toward the intervention ensemble prioritizes a "sociotechnical" vision for translation of AI that values all components of use that support beneficial patient outcomes. The intervention ensemble approach can be used for regulation, institutional oversight, and for AI adopters to responsibly and ethically appraise, evaluate, and use AI tools.

7.
Nat Med ; 29(11): 2929-2938, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37884627

ABSTRACT

Artificial intelligence as a medical device is increasingly being applied to healthcare for diagnosis, risk stratification and resource allocation. However, a growing body of evidence has highlighted the risk of algorithmic bias, which may perpetuate existing health inequity. This problem arises in part because of systemic inequalities in dataset curation, unequal opportunity to participate in research and inequalities of access. This study aims to explore existing standards, frameworks and best practices for ensuring adequate data diversity in health datasets. Exploring the body of existing literature and expert views is an important step towards the development of consensus-based guidelines. The study comprises two parts: a systematic review of existing standards, frameworks and best practices for healthcare datasets; and a survey and thematic analysis of stakeholder views of bias, health equity and best practices for artificial intelligence as a medical device. We found that the need for dataset diversity was well described in the literature, and experts generally favored the development of a robust set of guidelines, but there were mixed views about how these could be implemented practically. The outputs of this study will be used to inform the development of standards for transparency of data diversity in health datasets (the STANDING Together initiative).


Subject(s)
Artificial Intelligence, Delivery of Health Care, Humans, Consensus, Systematic Reviews as Topic
8.
J Nucl Med ; 64(12): 1848-1854, 2023 12 01.
Article in English | MEDLINE | ID: mdl-37827839

ABSTRACT

The development of artificial intelligence (AI) within nuclear imaging involves several ethically fraught components at different stages of the machine learning pipeline, including during data collection, model training and validation, and clinical use. Drawing on the traditional principles of medical and research ethics, and highlighting the need to ensure health justice, the AI task force of the Society of Nuclear Medicine and Molecular Imaging has identified 4 major ethical risks: privacy of data subjects, data quality and model efficacy, fairness toward marginalized populations, and transparency of clinical performance. We provide preliminary recommendations to developers of AI-driven medical devices for mitigating the impact of these risks on patients and populations.


Subject(s)
Artificial Intelligence, Machine Learning, Humans, Data Collection, Advisory Committees, Molecular Imaging
9.
JAMA Netw Open ; 6(9): e2335377, 2023 09 05.
Article in English | MEDLINE | ID: mdl-37747733

ABSTRACT

Importance: Artificial intelligence (AI) has gained considerable attention in health care, yet concerns have been raised around appropriate methods and fairness. Current AI reporting guidelines do not provide a means of quantifying overall quality of AI research, limiting their ability to compare models addressing the same clinical question. Objective: To develop a tool (APPRAISE-AI) to evaluate the methodological and reporting quality of AI prediction models for clinical decision support. Design, Setting, and Participants: This quality improvement study evaluated AI studies in the model development, silent, and clinical trial phases using the APPRAISE-AI tool, a quantitative method for evaluating quality of AI studies across 6 domains: clinical relevance, data quality, methodological conduct, robustness of results, reporting quality, and reproducibility. These domains included 24 items with a maximum overall score of 100 points. Points were assigned to each item, with higher points indicating stronger methodological or reporting quality. The tool was applied to a systematic review on machine learning to estimate sepsis that included articles published until September 13, 2019. Data analysis was performed from September to December 2022. Main Outcomes and Measures: The primary outcomes were interrater and intrarater reliability and the correlation between APPRAISE-AI scores and expert scores, 3-year citation rate, number of Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) low risk-of-bias domains, and overall adherence to the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD) statement. Results: A total of 28 studies were included. Overall APPRAISE-AI scores ranged from 33 (low quality) to 67 (high quality). Most studies were moderate quality. The 5 lowest scoring items included source of data, sample size calculation, bias assessment, error analysis, and transparency. Overall APPRAISE-AI scores were associated with expert scores (Spearman ρ, 0.82; 95% CI, 0.64-0.91; P < .001), 3-year citation rate (Spearman ρ, 0.69; 95% CI, 0.43-0.85; P < .001), number of QUADAS-2 low risk-of-bias domains (Spearman ρ, 0.56; 95% CI, 0.24-0.77; P = .002), and adherence to the TRIPOD statement (Spearman ρ, 0.87; 95% CI, 0.73-0.94; P < .001). Intraclass correlation coefficient ranges for interrater and intrarater reliability were 0.74 to 1.00 for individual items, 0.81 to 0.99 for individual domains, and 0.91 to 0.98 for overall scores. Conclusions and Relevance: In this quality improvement study, APPRAISE-AI demonstrated strong interrater and intrarater reliability and correlated well with several study quality measures. This tool may provide a quantitative approach for investigators, reviewers, editors, and funding organizations to compare the research quality across AI studies for clinical decision support.
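As a rough illustration of the kind of analysis described above (not the published APPRAISE-AI rubric or its data), the sketch below sums per-domain points into an overall score out of 100 and computes a Spearman rank correlation between tool scores and expert ratings. The domain names follow the abstract, but the point values, item weighting and expert ratings are invented.

```python
# Sketch: aggregate domain points into an overall quality score and rank-correlate
# it with expert ratings. Domain names follow the abstract; all values are
# hypothetical and the real tool's per-item weighting is not reproduced here.
from scipy.stats import spearmanr

DOMAINS = ["clinical relevance", "data quality", "methodological conduct",
           "robustness of results", "reporting quality", "reproducibility"]

def overall_score(domain_points: dict[str, float]) -> float:
    """Sum the points awarded in each domain (maximum overall score: 100)."""
    return sum(domain_points[d] for d in DOMAINS)

studies = [  # hypothetical per-domain points for three AI studies
    dict(zip(DOMAINS, [12, 10, 18, 8, 11, 8])),
    dict(zip(DOMAINS, [9, 7, 12, 5, 8, 4])),
    dict(zip(DOMAINS, [14, 12, 20, 10, 13, 9])),
]
tool_scores = [overall_score(s) for s in studies]
expert_scores = [62, 41, 75]  # hypothetical independent expert ratings

rho, p = spearmanr(tool_scores, expert_scores)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```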


Subject(s)
Artificial Intelligence, Clinical Decision Support Systems, Humans, Reproducibility of Results, Machine Learning, Clinical Relevance
10.
J Nucl Med ; 64(10): 1509-1515, 2023 10.
Article in English | MEDLINE | ID: mdl-37620051

ABSTRACT

The deployment of artificial intelligence (AI) has the potential to make nuclear medicine and medical imaging faster, cheaper, and both more effective and more accessible. This is possible, however, only if clinicians and patients feel that these AI medical devices (AIMDs) are trustworthy. Highlighting the need to ensure health justice by fairly distributing benefits and burdens while respecting individual patients' rights, the AI Task Force of the Society of Nuclear Medicine and Molecular Imaging has identified 4 major ethical risks that arise during the deployment of AIMD: autonomy of patients and clinicians, transparency of clinical performance and limitations, fairness toward marginalized populations, and accountability of physicians and developers. We provide preliminary recommendations for governing these ethical risks to realize the promise of AIMD for patients and populations.


Subject(s)
Nuclear Medicine, Physicians, Humans, Artificial Intelligence, Advisory Committees, Molecular Imaging
11.
Am J Bioeth ; 23(9): 55-56, 2023 09.
Article in English | MEDLINE | ID: mdl-37647467
12.
Arch Dis Child ; 108(11): 929-934, 2023 11.
Article in English | MEDLINE | ID: mdl-37419673

ABSTRACT

OBJECTIVE: Spinal muscular atrophy (SMA) is a neuromuscular disorder that manifests with motor deterioration and respiratory complications. The paradigm of care is shifting as disease-modifying therapies including nusinersen, onasemnogene abeparvovec and risdiplam alter the disease trajectory of SMA. The objective of this study was to explore caregivers' experiences with disease-modifying therapies for SMA. DESIGN: Qualitative study including semistructured interviews with caregivers of children with SMA who received disease-modifying therapies. Interviews were audio recorded, transcribed verbatim, coded and analysed using content analysis. SETTING: The Hospital for Sick Children (Toronto, Canada). RESULTS: Fifteen family caregivers of children with SMA type 1 (n=5), type 2 (n=5) and type 3 (n=5) participated. There were two emerging themes and several subthemes (in parentheses): (1) inequities in access to disease-modifying therapies (variable regulatory approvals, prohibitively expensive therapies and insufficient infrastructure) and (2) patient and family experience with disease-modifying therapies (decision making, hope, fear and uncertainty). CONCLUSION: The caregiver experience with SMA has been transformed by the advent of disease-modifying therapies. Consistent and predictable access to disease-modifying therapies is a major concern for caregivers of children with SMA but is influenced by regulatory approvals, funding and eligibility criteria that are heterogeneous across jurisdictions. Many caregivers described going to great lengths to access therapies, highlighting issues related to justice, such as equity and access. This diverse population reflects contemporary patients and families with SMA; their broad experiences may inform the healthcare delivery of other emerging orphan drugs.


Subject(s)
Spinal Muscular Atrophy, Spinal Muscular Atrophies of Childhood, Child, Humans, Caregivers, Spinal Muscular Atrophy/drug therapy, Spinal Muscular Atrophies of Childhood/drug therapy, Qualitative Research, Uncertainty
13.
JAMA Netw Open ; 6(5): e2310659, 2023 05 01.
Article in English | MEDLINE | ID: mdl-37126349

ABSTRACT

Importance: Understanding the views and values of patients is of substantial importance to developing the ethical parameters of artificial intelligence (AI) use in medicine. Thus far, there is limited study on the views of children and youths. Their perspectives contribute meaningfully to the integration of AI in medicine. Objective: To explore the moral attitudes and views of children and youths regarding research and clinical care involving health AI at the point of care. Design, Setting, and Participants: This qualitative study recruited participants younger than 18 years during a 1-year period (October 2021 to March 2022) at a large urban pediatric hospital. A total of 44 individuals who were receiving or had previously received care at a hospital or rehabilitation clinic contacted the research team, but 15 were found to be ineligible. Of the 29 who consented to participate, 1 was lost to follow-up, resulting in 28 participants who completed the interview. Exposures: Participants were interviewed using vignettes on 3 main themes: (1) health data research, (2) clinical AI trials, and (3) clinical use of AI. Main Outcomes and Measures: Thematic description of values surrounding health data research, interventional AI research, and clinical use of AI. Results: The 28 participants included 6 children (ages, 10-12 years) and 22 youths (ages, 13-17 years) (16 female, 10 male, and 3 trans/nonbinary/gender diverse). Mean (SD) age was 15 (2) years. Participants were highly engaged and quite knowledgeable about AI. They expressed a positive view of research intended to help others and had strong feelings about the uses of their health data for AI. Participants expressed appreciation for the vulnerability of potential participants in interventional AI trials and reinforced the importance of respect for their preferences regardless of their decisional capacity. A strong theme for the prospective use of clinical AI was the desire to maintain bedside interaction between the patient and their physician. Conclusions and Relevance: In this study, children and youths reported generally positive views of AI, expressing strong interest and advocacy for their involvement in AI research and inclusion of their voices for shared decision-making with AI in clinical care. These findings suggest the need for more engagement of children and youths in health care AI research and integration.


Subject(s)
Artificial Intelligence, Medicine, Humans, Male, Child, Female, Adolescent, Qualitative Research, Emotions, Shared Decision Making
15.
J Adolesc Health ; 72(6): 827-828, 2023 06.
Article in English | MEDLINE | ID: mdl-37032212
16.
Front Public Health ; 11: 968319, 2023.
Article in English | MEDLINE | ID: mdl-36908403

ABSTRACT

In this work, we examine magnetic resonance imaging (MRI) and ultrasound (US) appointments at the Diagnostic Imaging (DI) department of a pediatric hospital to discover possible relationships between selected patient features and no-show or long waiting room time endpoints. The chosen features include age, sex, income, distance from the hospital, percentage of non-English speakers in a postal code, percentage of single caregivers in a postal code, appointment time slot (morning, afternoon, evening), and day of the week (Monday to Sunday). We trained univariate Logistic Regression (LR) models using the training sets and identified predictive (significant) features that remained significant in the test sets. We also implemented multivariate Random Forest (RF) models to predict the endpoints. We achieved Area Under the Receiver Operating Characteristic Curve (AUC) of 0.82 and 0.73 for predicting no-show and long waiting room time endpoints, respectively. The univariate LR analysis on DI appointments uncovered the effect of the time of appointment during the day/week, and patients' demographics such as income and the number of caregivers on the no-shows and long waiting room time endpoints. For predicting no-show, we found age, time slot, and percentage of single caregiver to be the most critical contributors. Age, distance, and percentage of non-English speakers were the most important features for our long waiting room time prediction models. We found no sex discrimination among the scheduled pediatric DI appointments. Nonetheless, inequities based on patient features such as low income and language barrier did exist.
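To make the modelling workflow above concrete, here is a minimal sketch of the general approach: univariate logistic-regression screening of candidate features followed by a multivariate random forest evaluated by AUC. The file name, column names and the use of held-out AUC as a stand-in for the paper's train/test significance checks are all assumptions; this is not the authors' pipeline.

```python
# Sketch of the general workflow: univariate logistic-regression screening of
# candidate features, then a multivariate random forest scored by AUC.
# File and column names are hypothetical; this is not the study's actual code.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("di_appointments.csv")  # hypothetical appointment extract
features = ["age", "sex", "income", "distance_km", "pct_non_english",
            "pct_single_caregiver", "time_slot", "day_of_week"]
X = pd.get_dummies(df[features], columns=["sex", "time_slot", "day_of_week"])
y = df["no_show"]  # 1 = missed appointment, 0 = attended

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Univariate screening: one logistic regression per feature; held-out AUC is
# used here as a simple proxy for the paper's significance testing.
for col in X_train.columns:
    lr = LogisticRegression(max_iter=1000).fit(X_train[[col]], y_train)
    auc = roc_auc_score(y_test, lr.predict_proba(X_test[[col]])[:, 1])
    print(f"{col}: univariate AUC = {auc:.2f}")

# Multivariate model for the no-show endpoint.
rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(X_train, y_train)
print("Random forest AUC:",
      round(roc_auc_score(y_test, rf.predict_proba(X_test)[:, 1]), 2))
```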


Subject(s)
Appointments and Schedules, Magnetic Resonance Imaging, Humans, Child, Magnetic Resonance Imaging/methods, Logistic Models, Hospitals, Machine Learning
19.
J Med Ethics ; 49(8): 573-579, 2023 08.
Article in English | MEDLINE | ID: mdl-36581457

ABSTRACT

Researchers are studying how artificial intelligence (AI) can be used to better detect, prognosticate and subgroup diseases. The idea that AI might advance medicine's understanding of biological categories of psychiatric disorders, as well as provide better treatments, is appealing given the historical challenges with prediction, diagnosis and treatment in psychiatry. Given the power of AI to analyse vast amounts of information, some clinicians may feel obligated to align their clinical judgements with the outputs of the AI system. However, a potential epistemic privileging of AI in clinical judgements may lead to unintended consequences that could negatively affect patient treatment, well-being and rights. The implications are also relevant to precision medicine, digital twin technologies and predictive analytics generally. We propose that a commitment to epistemic humility can help promote judicious clinical decision-making at the interface of big data and AI in psychiatry.


Subject(s)
Mental Disorders, Psychiatry, Humans, Artificial Intelligence, Mental Disorders/diagnosis, Precision Medicine, Clinical Decision-Making
20.
Lancet Child Adolesc Health ; 7(1): 69-76, 2023 01.
Article in English | MEDLINE | ID: mdl-36206789

ABSTRACT

Treatment of anorexia nervosa poses a moral quandary for clinicians, particularly in paediatrics. The challenges of appropriately individualising treatment while balancing prospective benefits against concomitant harms are best highlighted through exploration and discussion of the ethical issues. The purpose of this Viewpoint is to explore the ethical tensions in treating young patients (around ages 10-18 years) with severe anorexia nervosa who are not capable of making treatment-based decisions and describe how harm reduction can reasonably be applied. We propose the term AN-PLUS to refer to the subset of patients with a particularly concerning clinical presentation (poor quality of life, lack of treatment response, medically severe and unstable, and severe symptomatology) who might benefit from a harm reduction approach. From ethics literature, qualitative studies, and our clinical experience, we identify three core ethical themes in making treatment decisions for young people with AN-PLUS: capacity and autonomy, best interests, and person-centred care. Finally, we consider how a harm reduction approach can provide direction for developing a personalised treatment plan that retains a focus on best interests while attempting to mitigate the harms of involuntary treatment. We conclude with recommendations to operationalise a harm reduction approach in young people with AN-PLUS.


Subject(s)
Anorexia Nervosa, Humans, Adolescent, Child, Anorexia Nervosa/therapy, Quality of Life, Decision Making
...