Your browser doesn't support javascript.
loading
Mostrar: 20 | 50 | 100
Resultados 1 - 5 de 5
Filtrar
Más filtros

Banco de datos
Tipo de estudio
Tipo del documento
País de afiliación
Intervalo de año de publicación
1.
Proc Natl Acad Sci U S A ; 121(24): e2317967121, 2024 Jun 11.
Artículo en Inglés | MEDLINE | ID: mdl-38833474

RESUMEN

Large language models (LLMs) are currently at the forefront of intertwining AI systems with human communication and everyday life. Thus, aligning them with human values is of great importance. However, given the steady increase in reasoning abilities, future LLMs are under suspicion of becoming able to deceive human operators and utilizing this ability to bypass monitoring efforts. As a prerequisite to this, LLMs need to possess a conceptual understanding of deception strategies. This study reveals that such strategies emerged in state-of-the-art LLMs, but were nonexistent in earlier LLMs. We conduct a series of experiments showing that state-of-the-art LLMs are able to understand and induce false beliefs in other agents, that their performance in complex deception scenarios can be amplified utilizing chain-of-thought reasoning, and that eliciting Machiavellianism in LLMs can trigger misaligned deceptive behavior. GPT-4, for instance, exhibits deceptive behavior in simple test scenarios 99.16% of the time (P < 0.001). In complex second-order deception test scenarios where the aim is to mislead someone who expects to be deceived, GPT-4 resorts to deceptive behavior 71.46% of the time (P < 0.001) when augmented with chain-of-thought reasoning. In sum, revealing hitherto unknown machine behavior in LLMs, our study contributes to the nascent field of machine psychology.


Asunto(s)
Decepción , Lenguaje , Humanos , Inteligencia Artificial
2.
Nat Comput Sci ; 3(10): 833-838, 2023 Oct.
Artículo en Inglés | MEDLINE | ID: mdl-38177754

RESUMEN

We design a battery of semantic illusions and cognitive reflection tests, aimed to elicit intuitive yet erroneous responses. We administer these tasks, traditionally used to study reasoning and decision-making in humans, to OpenAI's generative pre-trained transformer model family. The results show that as the models expand in size and linguistic proficiency they increasingly display human-like intuitive system 1 thinking and associated cognitive errors. This pattern shifts notably with the introduction of ChatGPT models, which tend to respond correctly, avoiding the traps embedded in the tasks. Both ChatGPT-3.5 and 4 utilize the input-output context window to engage in chain-of-thought reasoning, reminiscent of how people use notepads to support their system 2 thinking. Yet, they remain accurate even when prevented from engaging in chain-of-thought reasoning, indicating that their system-1-like next-word generation processes are more accurate than those of older models. Our findings highlight the value of applying psychological methodologies to study large language models, as this can uncover previously undetected emergent characteristics.


Asunto(s)
Intuición , Solución de Problemas , Humanos , Lenguaje , Lingüística , Sesgo
3.
PLOS Digit Health ; 1(2): e0000016, 2022 Feb.
Artículo en Inglés | MEDLINE | ID: mdl-36812545

RESUMEN

Explainability for artificial intelligence (AI) in medicine is a hotly debated topic. Our paper presents a review of the key arguments in favor and against explainability for AI-powered Clinical Decision Support System (CDSS) applied to a concrete use case, namely an AI-powered CDSS currently used in the emergency call setting to identify patients with life-threatening cardiac arrest. More specifically, we performed a normative analysis using socio-technical scenarios to provide a nuanced account of the role of explainability for CDSSs for the concrete use case, allowing for abstractions to a more general level. Our analysis focused on three layers: technical considerations, human factors, and the designated system role in decision-making. Our findings suggest that whether explainability can provide added value to CDSS depends on several key questions: technical feasibility, the level of validation in case of explainable algorithms, the characteristics of the context in which the system is implemented, the designated role in the decision-making process, and the key user group(s). Thus, each CDSS will require an individualized assessment of explainability needs and we provide an example of how such an assessment could look like in practice.

4.
Minds Mach (Dordr) ; 31(4): 563-593, 2021.
Artículo en Inglés | MEDLINE | ID: mdl-34602749

RESUMEN

Machine behavior that is based on learning algorithms can be significantly influenced by the exposure to data of different qualities. Up to now, those qualities are solely measured in technical terms, but not in ethical ones, despite the significant role of training and annotation data in supervised machine learning. This is the first study to fill this gap by describing new dimensions of data quality for supervised machine learning applications. Based on the rationale that different social and psychological backgrounds of individuals correlate in practice with different modes of human-computer-interaction, the paper describes from an ethical perspective how varying qualities of behavioral data that individuals leave behind while using digital technologies have socially relevant ramification for the development of machine learning applications. The specific objective of this study is to describe how training data can be selected according to ethical assessments of the behavior it originates from, establishing an innovative filter regime to transition from the big data rationale n = all to a more selective way of processing data for training sets in machine learning. The overarching aim of this research is to promote methods for achieving beneficial machine learning applications that could be widely useful for industry as well as academia.

5.
J Eur CME ; 10(1): 1989243, 2021.
Artículo en Inglés | MEDLINE | ID: mdl-34804636

RESUMEN

Health data bear great promises for a healthier and happier life, but they also make us vulnerable. Making use of millions or billions of data points, Machine Learning (ML) and Artificial Intelligence (AI) are now creating new benefits. For sure, harvesting Big Data can have great potentials for the health system, too. It can support accurate diagnoses, better treatments and greater cost effectiveness. However, it can also have undesirable implications, often in the sense of undesired side effects, which may in fact be terrible. Examples for this, as discussed in this article, are discrimination, the mechanisation of death, and genetic, social, behavioural or technological selection, which may imply eugenic effects or social Darwinism. As many unintended effects become visible only after years, we still lack sufficient criteria, long-term experience and advanced methods to reliably exclude that things may go terribly wrong. Handing over decision-making, responsibility or control to machines, could be dangerous and irresponsible. It would also be in serious conflict with human rights and our constitution.

SELECCIÓN DE REFERENCIAS
DETALLE DE LA BÚSQUEDA