Results 1 - 13 of 13
1.
Sci Eng Ethics; 29(6): 38, 2023 Oct 26.
Article in English | MEDLINE | ID: mdl-37882881

ABSTRACT

The convergence of human and artificial intelligence is currently receiving considerable scholarly attention. Much debate about the resulting Hybrid Minds focuses on the integration of artificial intelligence into the human brain through intelligent brain-computer interfaces as they enter clinical use. In this contribution we discuss a complementary development: the integration of a functional in vitro network of human neurons into an in silico computing environment. To do so, we draw on a recent experiment reporting the creation of silico-biological intelligence as a case study (Kagan et al., 2022b). In this experiment, multielectrode arrays were plated with stem cell-derived human neurons, creating a system which the authors call DishBrain. By embedding the system in a virtual game-world, neural clusters were able to receive electrical input signals from the game-world and to respond appropriately with output signals from pre-assigned motor regions. Using this design, the authors demonstrate how DishBrain self-organises and successfully learns to play the computer game 'Pong', exhibiting 'sentient' and intelligent behaviour in its virtual environment. The creation of such hybrid, silico-biological intelligence raises numerous ethical challenges. Adopting the neuroscientific framework embraced by the authors themselves, we discuss these challenges in the context of Karl Friston's Free Energy Principle, focusing on the risk of creating synthetic phenomenology. Following the DishBrain creators' neuroscientific assumptions, we highlight how DishBrain's design may risk bringing about artificial suffering and argue for a correspondingly cautious approach to such synthetic biological intelligence.
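As an illustration of the quantity at the heart of the Free Energy Principle invoked here, the following toy sketch computes the variational free energy F = E_q[ln q(s) - ln p(o, s)] for a single discrete observation. All numbers are illustrative assumptions, not values from the DishBrain experiment.

```python
import numpy as np

# Toy sketch (illustrative values only, not DishBrain data): variational
# free energy F = E_q[ln q(s) - ln p(o, s)] for one discrete observation o.
q = np.array([0.7, 0.3])            # approximate posterior over hidden states
p_s = np.array([0.5, 0.5])          # prior over hidden states
p_o_given_s = np.array([0.9, 0.2])  # likelihood of the observed o under each state

p_joint = p_o_given_s * p_s         # p(o, s) = p(o | s) * p(s)
F = float(np.sum(q * (np.log(q) - np.log(p_joint))))
print(f"variational free energy F = {F:.3f}")
# Under the Free Energy Principle, adaptive systems act so as to reduce F,
# which is the reading Kagan et al. give of DishBrain's learning to play Pong.
```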


Subjects
Artificial Intelligence, Silicon, Humans, Brain, Intelligence, Learning
2.
Bioethics; 36(2): 154-161, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34142373

ABSTRACT

Trust constitutes a fundamental strategy for dealing with risk and uncertainty in complex societies. In line with the vast literature stressing the importance of trust in doctor-patient relationships, trust is therefore regularly suggested as a way of dealing with the risks of medical artificial intelligence (AI). Yet this approach has been challenged from different angles. At least two lines of thought can be distinguished: (1) that trusting AI is conceptually confused, that is, that we cannot trust AI; and (2) that it is also dangerous, that is, that we should not trust AI, particularly if the stakes are as high as they routinely are in medicine. In this paper, we defend a notion of trust in the context of medical AI against both charges. To do so, we highlight the technically mediated intentions manifest in AI systems, rendering trust a conceptually plausible stance for dealing with them. Based on literature from human-robot interaction, psychology, and sociology, we then propose a novel model for analysing notions of trust, distinguishing between three aspects: reliability, competence, and intentions. We discuss each aspect and make suggestions regarding how medical AI may become worthy of our trust.
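To make the proposed three-aspect model concrete, here is a hypothetical sketch that encodes an assessment along the paper's three dimensions. The field names follow the abstract, while the 0-to-1 scoring scale and the aggregation rule are our own assumptions.

```python
from dataclasses import dataclass

# Hypothetical encoding of the three-aspect trust model; the aspects come
# from the abstract, the 0..1 scores and threshold rule are illustrative.
@dataclass
class TrustAssessment:
    reliability: float  # does the system perform consistently over time?
    competence: float   # does it perform the clinical task well?
    intentions: float   # are its technically mediated intentions benign?

    def warranted(self, threshold: float = 0.7) -> bool:
        # Illustrative rule: trust is warranted only if no aspect falls short.
        return min(self.reliability, self.competence, self.intentions) >= threshold

print(TrustAssessment(reliability=0.9, competence=0.85, intentions=0.8).warranted())
```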


Subjects
Artificial Intelligence, Medicine, Humans, Physician-Patient Relations, Reproducibility of Results, Trust
3.
Camb Q Healthc Ethics; 1-10, 2022 Oct 20.
Article in English | MEDLINE | ID: mdl-36263755

ABSTRACT

Artificial intelligence (AI) plays a rapidly increasing role in clinical care. Many of these systems, for instance deep learning-based applications using multilayered artificial neural networks, exhibit epistemic opacity in the sense that they preclude comprehensive human understanding. In consequence, voices from industry, policymakers, and research have suggested trust as an attitude for engaging with clinical AI systems. Yet, in the philosophical and ethical literature on medical AI, the notion of trust remains fiercely debated. Trust skeptics hold that talking about trust in nonhuman agents constitutes a category error and worry about the concept being misused for ethics washing. Proponents of trust have responded to these worries from various angles, disentangling different concepts and aspects of trust in AI, potentially organized in layers or dimensions. Given the substantial disagreement across these accounts of trust and the serious worries about ethics washing, we embrace a diverging strategy here. Instead of aiming for a positive definition of the elements and nature of trust in AI, we proceed ex negativo, that is, we look at cases where trust or distrust is misplaced. Comparing these instances with the trust placed in doctor-patient relationships, we systematize them and propose a taxonomy of both misplaced trust and misplaced distrust. By inverting the perspective and focusing on negative examples, we develop an account that provides useful ethical constraints for decisions in clinical as well as regulatory contexts and that highlights how we should not engage with medical AI.
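Purely as an illustration of the ex negativo strategy, the sketch below enumerates failure modes of trust and distrust. The category names are our hypothetical reading, not labels taken from the paper's taxonomy.

```python
from enum import Enum, auto

# Hypothetical failure modes of trust and distrust in clinical AI;
# the labels illustrate the ex negativo strategy and are not the
# paper's own taxonomy.
class MisplacedAttitude(Enum):
    OVERTRUST_UNVALIDATED = auto()    # trusting a system with no clinical evidence
    OVERTRUST_OUT_OF_SCOPE = auto()   # trusting beyond the validated population or task
    DISTRUST_WELL_EVIDENCED = auto()  # rejecting a system despite robust validation
    DISTRUST_CATEGORICAL = auto()     # refusing any AI involvement on principle

def flag(attitude: MisplacedAttitude) -> str:
    kind = "trust" if attitude.name.startswith("OVERTRUST") else "distrust"
    return f"misplaced {kind}: {attitude.name.lower()}"

print(flag(MisplacedAttitude.OVERTRUST_OUT_OF_SCOPE))
```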

4.
Hist Philos Life Sci; 44(4): 50, 2022 Oct 25.
Article in English | MEDLINE | ID: mdl-36282442

ABSTRACT

The aim of this study is to encourage a critical debate on the use of normality in the medical literature on disorders of sex development (DSD) or intersex. For this purpose, a scoping review was conducted to identify and map the various ways in which "normal" is used in the medical literature on DSD between 2016 and 2020. We identified 75 studies, many of which were case studies highlighting rare cases of DSD; others, mainly retrospective observational studies, focused on improving diagnosis or treatment. The most common use of the adjective normal was in association with phenotypic sex. Overall, appearance was the most commonly cited criterion for evaluating the normality of sex organs. More than one-third of the studies also included medical photographs of sex organs. This persistent use of normality in reference to phenotypic sex is worrisome given the long-term medicalization of intersex bodies in the name of a "normal" appearance or leading a "normal" life. Healthcare professionals should be more careful about the ethical implications of using photographs in publications, given that many intersex persons describe their experience with medical photography as dehumanizing.


Subjects
Disorders of Sex Development, Metaphor, Humans, Retrospective Studies, Disorders of Sex Development/diagnosis, Disorders of Sex Development/therapy
5.
Psychol Med; 51(15): 2515-2521, 2021 Nov.
Article in English | MEDLINE | ID: mdl-32536358

ABSTRACT

Recent advances in machine learning (ML) promise far-reaching improvements across medical care, not least within psychiatry. While no psychiatric application of ML constitutes standard clinical practice to date, it seems crucial to get ahead of these developments and address their ethical challenges early on. Following a short general introduction concerning ML in psychiatry, we do so by focusing on schizophrenia as a paradigmatic case. Based on recent research employing ML to further the diagnosis, treatment, and prediction of schizophrenia, we discuss three hypothetical case studies of ML applications with a view to their ethical dimensions. Throughout this discussion, we follow the principlist framework of Tom Beauchamp and James Childress to analyse potential problems in detail. In particular, we structure our analysis around their principles of beneficence, non-maleficence, respect for autonomy, and justice. We conclude with a call for cautious optimism concerning the implementation of ML in psychiatry, provided that close attention is paid to the particular intricacies of psychiatric disorders and that its success is evaluated in terms of tangible clinical benefit for patients.


Subjects
Machine Learning, Psychiatry/methods, Schizophrenia, Algorithms, Bioethics, Diagnosis, Computer-Assisted/ethics, Diagnosis, Computer-Assisted/methods, Humans, Machine Learning/ethics, Schizophrenia/diagnosis, Schizophrenia/therapy
6.
Med Health Care Philos; 24(3): 341-349, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33713239

ABSTRACT

Machine learning (ML) is on the rise in medicine, promising improved diagnostic, therapeutic, and prognostic clinical tools. While these technological innovations are bound to transform health care, they also bring new ethical concerns to the forefront. One particularly elusive challenge concerns discriminatory algorithmic judgements based on biases inherent in the training data. A common line of reasoning distinguishes between justified differential treatments that mirror true disparities between socially salient groups and unjustified biases which do not, leading to misdiagnosis and erroneous treatment. When curating training data, however, this strategy runs into severe problems, since distinguishing between the two can be next to impossible. We thus plead for a pragmatist approach to dealing with algorithmic bias in healthcare environments. Drawing on a recent reformulation of William James's pragmatist understanding of truth, we recommend that, instead of aiming at a supposedly objective truth, outcome-based therapeutic usefulness should serve as the guiding principle for assessing ML applications in medicine.
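One established way to operationalise "outcome-based therapeutic usefulness" rather than raw accuracy is decision-curve net benefit (Vickers & Elkin), sketched below on simulated data. The paper itself does not prescribe this metric, so take it as one hedged illustration.

```python
import numpy as np

# Decision-curve net benefit: scores a model by the clinical trade-off
# between true and false positives at a decision threshold, instead of
# raw accuracy. Data below are simulated for illustration only.
def net_benefit(y_true, y_prob, threshold):
    treat = y_prob >= threshold
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    n = len(y_true)
    # Each false positive is weighted by the odds of the threshold,
    # reflecting how much harm over-treatment causes at that cut-off.
    return tp / n - fp / n * threshold / (1 - threshold)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)                             # toy outcomes
p = np.clip(0.6 * y + rng.normal(0.3, 0.2, 200), 0, 1)  # toy risk scores
print(f"net benefit at t=0.3: {net_benefit(y, p, 0.3):.3f}")
```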


Subjects
Education, Medical, Machine Learning, Bias, Delivery of Health Care, Humans, Morals
9.
Front Psychiatry; 14: 1209862, 2023.
Article in English | MEDLINE | ID: mdl-37692304

ABSTRACT

Harnessing the power of machine learning (ML) and other artificial intelligence (AI) techniques promises substantial improvements across forensic psychiatry, supposedly offering more objective evaluations and predictions. However, AI-based predictions about future violent behaviour and criminal recidivism pose ethical challenges that require careful deliberation due to their social and legal significance. In this paper, we shed light on these challenges by considering externalist accounts of psychiatric disorders, which stress that the presentation and development of psychiatric disorders are intricately entangled with the outward environment and social circumstances. We argue that any use of predictive AI in forensic psychiatry should not be limited to neurobiology but must also consider social and environmental factors. This thesis has practical implications for the design of predictive AI systems, especially regarding the collection and processing of training data, the selection of ML methods, and the determination of their explainability requirements.
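To illustrate the design implication about training data, here is a hypothetical feature schema for a recidivism-risk model whose inputs deliberately span both neurobiological and social/environmental factors. Every field name is our own assumption, not a validated feature set.

```python
from dataclasses import dataclass

# Hypothetical input schema for a recidivism-risk model, illustrating the
# paper's point that features must not be restricted to neurobiology.
# All field names are assumptions for illustration only.
@dataclass
class RiskFeatures:
    # neurobiological / clinical
    primary_diagnosis: str
    symptom_severity: float
    neuroimaging_marker: float
    # social and environmental (stressed by externalist accounts)
    housing_stable: bool
    employment_status: str
    social_support_score: float
    neighbourhood_deprivation: float
```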

10.
Digit Health; 8: 20552076221074488, 2022.
Article in English | MEDLINE | ID: mdl-35173981

ABSTRACT

Using artificial intelligence to improve patient care is a cutting-edge approach, but its implementation in clinical routine has been limited by significant concerns about understanding its behavior. One major barrier is the explainability dilemma: how much explanation is required to use artificial intelligence safely in healthcare? A key issue is the lack of consensus on the definition of explainability among experts, regulators, and healthcare professionals, resulting in a wide variety of terminology and expectations. This paper aims to fill this gap by defining minimal explainability standards that serve the views and needs of essential stakeholders in healthcare. In that sense, we propose minimal explainability criteria that can support doctors' understanding, meet patients' needs, and fulfill legal requirements. Explainability therefore need not be exhaustive but sufficient for doctors and patients to comprehend an artificial intelligence model's clinical implications so that it can be integrated safely into clinical practice. Minimally acceptable standards for explainability are thus context-dependent and should respond to the specific needs and potential risks of each clinical scenario for a responsible and ethical implementation of artificial intelligence.
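A minimal sketch of what context-dependent explainability standards could look like in practice follows; the scenario names, tiers, and required explanation types are all our assumptions, not the paper's proposal.

```python
# Hypothetical mapping from clinical scenario to minimal explainability
# requirements; scenario names and tiers are illustrative assumptions.
MINIMAL_EXPLAINABILITY = {
    "triage_support":      ["global feature importance"],
    "diagnosis_support":   ["global feature importance",
                            "per-case feature attribution"],
    "treatment_selection": ["per-case feature attribution",
                            "counterfactual explanation",
                            "uncertainty estimate"],
}

def meets_standard(scenario: str, provided: set) -> bool:
    # A system is minimally explainable for a scenario if it supplies
    # every explanation type that scenario requires.
    return set(MINIMAL_EXPLAINABILITY[scenario]) <= provided

print(meets_standard("diagnosis_support", {"per-case feature attribution"}))  # False
```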

11.
Front Psychiatry; 13: 1063238, 2022.
Article in English | MEDLINE | ID: mdl-36733415

ABSTRACT

Introduction: Threat processing, enabled by threat circuits, is supported by a remarkably conserved neural architecture across mammals. Threatening stimuli relevant for most species include the threat of being attacked by a predator or an aggressive conspecific and the threat of pain. Extensive studies in rodents have associated the threats of pain, predator attack, and aggressive conspecific attack with distinct neural circuits in subregions of the amygdala, the hypothalamus, and the periaqueductal gray. Bearing in mind the considerable conservation of both the anatomy of these regions and defensive behaviors across mammalian species, we hypothesized that distinct brain activity corresponding to the threats of pain, predator attack, and aggressive conspecific attack would also exist in human subcortical brain regions.

Methods: Forty healthy female subjects underwent fMRI scanning during aversive classical conditioning. In close analogy to rodent studies, threat stimuli consisted of painful electric shocks, a short video clip of an attacking bear, and a short video clip of an attacking man. Threat processing was conceptualized as the expectation of the aversive stimulus during the presentation of the conditioned stimulus.

Results: Our results demonstrate differential brain activations in the left and right amygdala as well as in the left hypothalamus for the threats of pain, predator attack, and aggressive conspecific attack, for the first time showing distinct threat-related brain activity within the human subcortical brain. Specifically, the threat of pain showed increased activity in the left and right amygdala and the left hypothalamus compared to the threat of conspecific attack (pain > conspecific), and increased activity in the left amygdala compared to the threat of predator attack (pain > predator). Threat of conspecific attack revealed heightened activity in the right amygdala, both in comparison to threat of pain (conspecific > pain) and threat of predator attack (conspecific > predator). Finally, for the threat of predator attack we found increased activity in the bilateral amygdala and the hypothalamus when compared to threat of conspecific attack (predator > conspecific). No significant clusters were found for the contrast predator attack > pain.

Conclusion: These results suggest that the threat type-specific circuits identified in rodents might be conserved in the human brain.
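For readers who want the reported comparisons in compact form, the sketch below writes the pairwise contrasts as standard GLM contrast vectors over the three conditions; the condition ordering and the use of simple contrast coding are our assumptions, not the study's exact design.

```python
import numpy as np

# The pairwise contrasts reported above as GLM contrast vectors over the
# three condition regressors; the order (pain, predator, conspecific) and
# plain contrast coding are our assumptions.
conditions = ["pain", "predator", "conspecific"]
contrasts = {
    "pain > conspecific":     np.array([ 1,  0, -1]),
    "pain > predator":        np.array([ 1, -1,  0]),
    "conspecific > pain":     np.array([-1,  0,  1]),
    "conspecific > predator": np.array([ 0, -1,  1]),
    "predator > conspecific": np.array([ 0,  1, -1]),
}
# Applied to first-level condition betas, e.g. betas @ contrasts["pain > predator"]
# estimates the pain-minus-predator effect for one voxel.
betas = np.array([1.2, 0.4, 0.7])  # toy per-condition betas for one voxel
for name, c in contrasts.items():
    print(f"{name}: {betas @ c:+.2f}")
```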

12.
Soc Cogn Affect Neurosci; 15(5): 561-570, 2020 Jul 1.
Article in English | MEDLINE | ID: mdl-32415970

ABSTRACT

The reduction of aversive emotions by a conspecific's presence, called social buffering, is a universal phenomenon in the mammalian world and a powerful form of human social emotion regulation. Animal and human studies on the neural pathways underlying social buffering have typically examined physiological reactions or regional brain activations. However, direct links between emotional and social stimuli, distinct neural processes, and behavioural outcomes are still missing. Using data from 27 female participants, the current study delineated a large-scale process model of social buffering's neural underpinnings, connecting changes in neural activity to emotional behaviour by means of voxel-wise multilevel mediation analysis. Our results confirmed that three processes underlie human social buffering: (i) social support-related reduction of activity in the orbitofrontal cortex, ventromedial and dorsolateral prefrontal cortices, and anterior and mid-cingulate cortex; (ii) downregulation of aversive emotion-induced brain activity in the superficial, cortex-like amygdala and the mediodorsal thalamus; and (iii) downregulation of reported aversive feelings. The results of the current study provide evidence for a distinct neural process model of aversive emotion regulation in humans through social buffering.
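To convey the core logic of voxel-wise mediation without the neuroimaging machinery, here is a toy single-voxel sketch on simulated data. The variable roles follow the abstract (support -> voxel activity -> reported feeling), but the data, effect sizes, and the simple bootstrap are our assumptions, not the study's pipeline.

```python
import numpy as np

# Toy single-voxel mediation: X (social support) -> M (voxel activity)
# -> Y (reported aversive feeling). Simulated data only.
rng = np.random.default_rng(1)
n = 27
X = rng.integers(0, 2, n).astype(float)      # support present vs. absent
M = -0.8 * X + rng.normal(0, 1, n)           # voxel activity (toy effect)
Y = 0.6 * M + 0.1 * X + rng.normal(0, 1, n)  # reported aversive feeling

def indirect(X, M, Y):
    a = np.polyfit(X, M, 1)[0]               # path a: X -> M
    design = np.column_stack([np.ones(len(M)), M, X])
    b = np.linalg.lstsq(design, Y, rcond=None)[0][1]  # path b: M -> Y given X
    return a * b                              # indirect (mediated) effect

boot = [indirect(X[i], M[i], Y[i])
        for i in (rng.integers(0, n, n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {indirect(X, M, Y):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```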


Subjects
Brain/diagnostic imaging, Emotional Regulation/physiology, Emotions/physiology, Adult, Affect, Brain Mapping, Fear/physiology, Female, Humans, Magnetic Resonance Imaging/methods, Neural Pathways/diagnostic imaging, Photic Stimulation, Young Adult