Results 1 - 20 of 119
7.
Hastings Cent Rep; 50(3): 18-21, 2020 May.
Article in English | MEDLINE | ID: mdl-32596887

ABSTRACT

Artificial intelligence surveillance can be used to diagnose individual cases, track the spread of Covid-19, and help provide care. The use of AI for surveillance purposes in a pandemic (such as detecting new Covid-19 cases and gathering data from healthy and ill individuals) raises multiple concerns, ranging from privacy to discrimination to access to care. Fortunately, several existing frameworks can help guide stakeholders, especially physicians but also AI developers and public health officials, as they navigate these treacherous shoals. While these frameworks were not explicitly designed for AI surveillance during a pandemic, they can be adapted to address concerns regarding privacy, human rights, due process, and equality. At a time when the rapid implementation of all available tools is critical to ending a pandemic, physicians, public health officials, and technology companies should understand the criteria for the ethical implementation of AI surveillance.


Subject(s)
Artificial Intelligence/ethics; Coronavirus Infections/epidemiology; Pneumonia, Viral/epidemiology; Population Surveillance/methods; Betacoronavirus; Human Rights/ethics; Humans; Pandemics; Privacy; Racism/ethics
9.
Article in German | MEDLINE | ID: mdl-32410053

ABSTRACT

Digitization offers considerable potential for strengthening prevention in the healthcare system. Data from various clinical and nonclinical sources can be collected in a structured way and systematically processed using algorithms. Prevention needs can thus be identified more quickly and precisely, and interventions can be planned, implemented, and evaluated for specific target groups. At the same time, data processing must meet not only high technical standards but also ethical standards and legal data protection requirements in order to avoid or minimize risks. This discussion article examines the potentials and risks of digital prevention from an ethical and legal point of view: first from a "data perspective," which deals with the use of health-related data for the purpose of prevention, and second from an "algorithm perspective," which focuses on the use of algorithmic systems, including artificial intelligence, to assess needs and evaluate preventive measures. Finally, it formulates recommendations for the framework conditions that should be created to strengthen the further development of prevention in the healthcare system.


Subject(s)
Algorithms; Artificial Intelligence; Delivery of Health Care/ethics; Electronic Health Records/ethics; Morals; Artificial Intelligence/ethics; Artificial Intelligence/legislation & jurisprudence; Bioethics; Datasets as Topic/ethics; Delivery of Health Care/methods; Germany; Humans
10.
AJOB Neurosci; 11(2): 120-127, 2020.
Article in English | MEDLINE | ID: mdl-32228385

ABSTRACT

The ethics of robots and artificial intelligence (AI) typically centers on "giving ethics" to as-yet imaginary AI with human levels of autonomy in order to protect us from their potentially destructive power. It is often assumed that to do so, we should program AI with the true moral theory (whatever that might be), much as we teach morality to our children. This paper argues that the focus on AI with human-level autonomy is misguided. The robots and AI that we have now and in the near future are "semi-autonomous": their ability to make choices and to act is limited along a number of dimensions. Further, it may be morally problematic to create AI with human-level autonomy, even if doing so becomes possible. As such, any useful approach to AI ethics should begin with a theory of giving ethics to semi-autonomous agents (SAAs). In this paper, we work toward such a theory by evaluating our obligations to and for "natural" SAAs, including nonhuman animals and humans with developing and diminished capacities. Drawing on research in neuroscience, bioethics, and philosophy, we identify the ways in which AI semi-autonomy differs from semi-autonomy in humans and nonhuman animals. On the basis of these comparisons, we conclude that when giving ethics to SAAs, we should focus on principles and restrictions that protect human interests, and that we can permissibly maintain this approach only so long as we do not aim at developing technology with human-level autonomy.


Subject(s)
Artificial Intelligence/ethics; Bioethics; Personal Autonomy; Animals; Humans; Robotics/ethics
11.
AJOB Neurosci; 11(2): 105-112, 2020.
Article in English | MEDLINE | ID: mdl-32228383

ABSTRACT

This article examines the ethical and policy implications of using voice computing and artificial intelligence to screen for mental health conditions in low-income and minority populations. Mental health conditions are unequally distributed among these groups, a disparity further exacerbated by heightened barriers to psychiatric care. Advances in voice computing and artificial intelligence promise broader screening and more sensitive diagnostic assessments. Machine learning algorithms can identify vocal features that screen for depression. To screen for mental health pathology, however, these algorithms must first account for the fundamental differences in vocal characteristics between low-income minority populations and other groups. While researchers have envisioned this technology as a beneficent tool, it could be repurposed to scale up discrimination or exploitation. Studies on the use of big data and predictive analytics demonstrate that low-income minority populations already face significant discrimination. This article urges researchers developing AI tools for vulnerable populations to consider the full ethical, legal, and social impact of their work. Without a coherent national framework of legal regulations and ethical guidelines to protect vulnerable populations, it will be difficult to limit AI applications to solely beneficial uses. Without such protections, vulnerable populations will rightfully be wary of participating in such studies, which will in turn undermine the robustness of these tools. Thus, for research involving AI tools such as voice computing, it is in the research community's interest to demand more guidance and regulatory oversight from the federal government.


Subject(s)
Artificial Intelligence/ethics; Bioethics; Mental Disorders/diagnosis; Mentally Ill Persons; Minority Groups; Poverty; Speech Recognition Software/ethics; Humans
12.
AJOB Neurosci; 11(2): 77-87, 2020.
Article in English | MEDLINE | ID: mdl-32228387

ABSTRACT

Clinical neuroscience is increasingly relying on the collection of large volumes of differently structured data and the use of intelligent algorithms for data analytics. In parallel, the ubiquitous collection of unconventional data sources (e.g. mobile health, digital phenotyping, consumer neurotechnology) is increasing the variety of data points. Big data analytics and approaches to Artificial Intelligence (AI) such as advanced machine learning are showing great potential to make sense of these larger and heterogeneous data flows. AI provides great opportunities for making new discoveries about the brain, improving current preventative and diagnostic models in both neurology and psychiatry and developing more effective assistive neurotechnologies. Concurrently, it raises many new methodological and ethical challenges. Given their transformative nature, it is still largely unclear how AI-driven approaches to the study of the human brain will meet adequate standards of scientific validity and affect normative instruments in neuroethics and research ethics. This manuscript provides an overview of current AI-driven approaches to clinical neuroscience and an assessment of the associated key methodological and ethical challenges. In particular, it will discuss what ethical principles are primarily affected by AI approaches to human neuroscience, and what normative safeguards should be enforced in this domain.


Subject(s)
Artificial Intelligence/ethics; Big Data; Bioethics; Neurosciences/ethics; Neurosciences/methods; Humans
13.
AJOB Neurosci; 11(2): 113-119, 2020.
Article in English | MEDLINE | ID: mdl-32228384

ABSTRACT

The human species is combining an increased understanding of our cognitive machinery with the development of technology that can profoundly influence our lives and our ways of living together. Our sciences enable us to see our strengths and weaknesses, and to build technology accordingly. What would future historians think of our current attempts to build increasingly smart systems, the purposes for which we employ them, the almost unstoppable gold rush toward ever more commercially relevant implementations, and the risk of superintelligence? We need a more profound reflection on what our science shows us about ourselves, what our technology allows us to do with that knowledge, and what, apparently, we aim to do with those insights and applications. As the smartest species on the planet, we don't need more intelligence. Since we appear to possess an underdeveloped capacity to act ethically and empathically, we instead require the kind of technology that enables us to act more consistently upon ethical principles. The problem is not to formulate ethical rules; it is to put them into practice. Cognitive neuroscience and AI provide the knowledge and the tools to develop the moral crutches we so clearly require. Why aren't we building them? We don't need superintelligence; we need superethics.


Subject(s)
Artificial Intelligence/ethics; Bioethics; Cognitive Neuroscience/ethics; Empathy; Humans
14.
J Surg Res; 253: 92-99, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32339787

ABSTRACT

Surgeons perform two primary tasks: operating and engaging patients and caregivers in shared decision-making. Human dexterity and decision-making are biologically limited. Intelligent, autonomous machines have the potential to augment or replace surgeons. Rather than regarding this possibility with denial, ire, or indifference, surgeons should understand and steer these technologies. Closer examination of surgical innovations and lessons learned from the automotive industry can inform this process. Innovations in minimally invasive surgery and surgical decision-making follow classic S-shaped curves with three phases: (1) introduction of a new technology, (2) achievement of a performance advantage relative to existing standards, and (3) arrival at a performance plateau, followed by replacement with an innovation featuring greater machine autonomy and less human influence. There is currently no level I evidence demonstrating improved patient outcomes using intelligent, autonomous machines for performing operations or surgical decision-making tasks. History suggests that if such evidence emerges and if the machines are cost effective, then they will augment or replace humans, initially for simple, common, rote tasks under close human supervision and later for complex tasks with minimal human supervision. This process poses ethical challenges in assigning liability for errors, matching decisions to patient values, and displacing human workers, but may allow surgeons to spend less time gathering and analyzing data and more time interacting with patients and tending to urgent, critical, and potentially more valuable aspects of patient care. Surgeons should steer these technologies toward optimal patient care and net social benefit using the uniquely human traits of creativity, altruism, and moral deliberation.


Subject(s)
Artificial Intelligence/trends; Decision Support Systems, Clinical/instrumentation; Inventions/trends; Robotic Surgical Procedures/trends; Surgeons/ethics; Artificial Intelligence/ethics; Artificial Intelligence/history; Decision Support Systems, Clinical/ethics; Decision Support Systems, Clinical/history; Diffusion of Innovation; History, 20th Century; History, 21st Century; Humans; Inventions/ethics; Inventions/history; Liability, Legal; Patient Participation; Robotic Surgical Procedures/ethics; Robotic Surgical Procedures/history; Surgeons/psychology
15.
Soins; 65(842): 41-45, 2020.
Article in French | MEDLINE | ID: mdl-32245558

ABSTRACT

The shift in our healthcare system towards organisational models based on patient care management is one of the structural changes of recent years. Digital technology is a major lever for supporting this transformation, with high stakes for improving the quality and efficiency of patient care. The associated ethical issues can be positively regulated through the principle of a human guarantee for digital technology and artificial intelligence in health care, a principle currently gaining recognition in the revision of the bioethics law.


Subject(s)
Biomedical Technology/ethics; Delivery of Health Care/organization & administration; Artificial Intelligence/ethics; Humans
16.
Anesth Analg; 130(5): 1234-1243, 2020 May.
Article in English | MEDLINE | ID: mdl-32287130

ABSTRACT

Artificial intelligence-driven anesthesiology and perioperative care may just be around the corner. However, its promises of improved safety and patient outcomes can only become a reality if we take the time to examine its technical, ethical, and moral implications. The aim of perioperative medicine is to diagnose, treat, and prevent disease. As we introduce new interventions or devices, we must take care to do so with a conscience, keeping patient care as the main objective, and understanding that humanism is a core component of our practice. In our article, we outline key principles of artificial intelligence for the perioperative physician and explore limitations and ethical challenges in the field.


Subject(s)
Algorithms; Artificial Intelligence/ethics; Big Data; Conscience; Perioperative Medicine/ethics; Humans; Perioperative Medicine/trends; Physicians/ethics
18.
Zhongguo Yi Xue Ke Xue Yuan Xue Bao; 42(1): 128-131, 2020 Feb 28.
Article in Chinese | MEDLINE | ID: mdl-32131952

ABSTRACT

As an important branch of artificial intelligence, the emerging field of medical artificial intelligence (MAI) faces many ethical issues. MAI may offer optimal diagnosis and treatment for patients but may also bring adverse effects on society and human beings. This article discusses the ethical problems raised by MAI and elucidates how its development can proceed in a direction that meets ethical principles and requirements.


Subject(s)
Artificial Intelligence/ethics; Ethics, Medical; Humans
19.
Radiology; 295(3): 675-682, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32208097

ABSTRACT

In this article, the authors propose an ethical framework for using and sharing clinical data for the development of artificial intelligence (AI) applications. The philosophical premise is as follows: when clinical data are used to provide care, the primary purpose for acquiring the data is fulfilled. At that point, clinical data should be treated as a form of public good, to be used for the benefit of future patients. In their 2013 article, Faden et al argued that all who participate in the health care system, including patients, have a moral obligation to contribute to improving that system. The authors extend that framework to questions surrounding the secondary use of clinical data for AI applications. Specifically, the authors propose that all individuals and entities with access to clinical data become data stewards, with fiduciary (or trust) responsibilities to patients to carefully safeguard patient privacy, and to the public to ensure that the data are made widely available for the development of knowledge and tools to benefit future patients. According to this framework, the authors maintain that it is unethical for providers to "sell" clinical data to other parties by granting access to clinical data, especially under exclusive arrangements, in exchange for monetary or in-kind payments that exceed costs. The authors also propose that patient consent is not required before the data are used for secondary purposes when obtaining such consent is prohibitively costly or burdensome, as long as mechanisms are in place to ensure that ethical standards are strictly followed. Rather than debate whether patients or provider organizations "own" the data, the authors propose that clinical data are not owned at all in the traditional sense, but rather that all who interact with or control the data have an obligation to ensure that the data are used for the benefit of future patients and society.


Subject(s)
Artificial Intelligence/ethics; Diagnostic Imaging/ethics; Ethics, Medical; Information Dissemination/ethics; Humans