Results 1 - 6 of 6
1.
J Am Med Dir Assoc ; : 105105, 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38909630

ABSTRACT

This article proposes a framework for examining the ethical and legal concerns raised by using artificial intelligence (AI) in post-acute and long-term care (PA-LTC). It argues that established frameworks on health, AI, and the law should be adapted to specific care contexts. For residents in PA-LTC, their social, psychological, and mobility needs should act as a gauge for examining the benefits and risks of integrating AI into their care. Using those needs as a gauge, four areas of particular concern are identified. First, the threat that AI poses to the autonomy of residents can undermine their core needs. Second, discrimination and bias in algorithmic decision-making can undermine Medicare coverage for PA-LTC, causing doctors' recommendations to be ignored and denying residents the care they are entitled to. Third, privacy rules concerning data use may undermine developers' ability to train accurate AI systems, limiting their usefulness in PA-LTC contexts. Fourth, consent should be obtained before AI is used, and discussions are needed about how care should continue if there are concerns about an ongoing decline in cognition. Together, these considerations elevate existing frameworks and adapt them to the context-specific case of PA-LTC. It is hoped that future research will examine the legal implications of each of these specific cases.

3.
J Law Med Ethics ; 51(2): 287-300, 2023.
Article in English | MEDLINE | ID: mdl-37655571

ABSTRACT

This article examines the legal and ethical challenges for the provision of healthcare in the metaverse. It proposes that the issues arising in the metaverse are an extension of those found in telehealth and virtual health communities, albeit with greater complexity. It argues that international collaboration between policymakers, lawmakers, and researchers is required to regulate this space and facilitate the safe and effective development of meta-medicine.


Subject(s)
Telemedicine, Humans, Health Facilities, Research Personnel
4.
Stud Health Technol Inform ; 305: 640-643, 2023 Jun 29.
Article in English | MEDLINE | ID: mdl-37387113

ABSTRACT

The growing accessibility of large health datasets and AI's ability to analyze them offer significant potential to transform public health and epidemiology. AI-driven interventions in preventive, diagnostic, and therapeutic healthcare are becoming more prevalent, but they raise ethical concerns, particularly regarding patient safety and privacy. This study presents a thorough analysis of ethical and legal principles found in the literature on AI applications in public health. A comprehensive search yielded 22 publications for review, revealing ethical principles such as equity, bias, privacy, security, safety, transparency, confidentiality, accountability, social justice, and autonomy. Additionally, five key ethical challenges were identified. The study emphasizes the importance of addressing these ethical and legal concerns and encourages further research to establish comprehensive guidelines for responsible AI implementation in public health.


Subject(s)
Artificial Intelligence, Public Health, Humans, Social Responsibility, Health Facilities, Patient Safety
5.
Article in English | MEDLINE | ID: mdl-36743720

ABSTRACT

Background: The rates of mental health disorders such as anxiety and depression are at an all-time high, especially since the onset of COVID-19, and the need for readily available digital health care solutions has never been greater. Wearable devices have increasingly incorporated sensors that were previously reserved for hospital settings. The availability of wearable device features that address anxiety and depression is still in its infancy, but consumers will soon have the potential to self-monitor moods and behaviors using everyday commercially available devices. Objective: This study aims to explore the features of wearable devices that can be used for monitoring anxiety and depression. Methods: Six bibliographic databases (MEDLINE, EMBASE, PsycINFO, IEEE Xplore, ACM Digital Library, and Google Scholar) were searched for this review. Two independent reviewers performed study selection and data extraction, while two other reviewers cross-checked the extracted data. A narrative approach was used to synthesize the data. Results: From 2408 initial results, 58 studies met our inclusion criteria and were assessed. Wrist-worn devices were identified in the bulk of our studies (n = 42, 71%). For the identification of anxiety and depression, we reported 26 methods for assessing mood, with the State-Trait Anxiety Inventory being the joint most common along with the Diagnostic and Statistical Manual of Mental Disorders (n = 8, 14%). Finally, 26 studies (46%) highlighted the smartphone as a host device for the wearable. Conclusion: The emergence of affordable, consumer-grade biosensors offers the potential for new approaches to support mental health therapies for illnesses such as anxiety and depression. We believe that purposefully designed wearable devices that combine the expertise of technologists and clinical experts can play a key role in self-care monitoring and diagnosis.

6.
Am J Law Med ; 49(2-3): 250-266, 2023 Jul.
Article in English | MEDLINE | ID: mdl-38344795

ABSTRACT

Artificial intelligence (AI) is being tested and deployed in major hospitals to monitor patients, leading to improved health outcomes, lower costs, and time savings. This uptake is in its infancy, and new applications are being considered. In this Article, the challenges of deploying AI in mental health wards are examined with reference to AI surveillance systems, suicide prediction, and hospital administration. The examination highlights risks surrounding patient privacy, informed consent, and data governance. Overall, these risks indicate that AI should be used in a psychiatric ward only after careful deliberation, with caution and ongoing reappraisal.


Subject(s)
Artificial Intelligence, Mental Health, Humans, Psychiatric Department, Hospital, Informed Consent