Results 1 - 3 of 3
1.
Stud Health Technol Inform; 316: 853-857, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176927

ABSTRACT

Clinical notes contain valuable information for research and for monitoring quality of care. Named Entity Recognition (NER) is the process of identifying relevant pieces of information, such as diagnoses, treatments, and side effects, and bringing them into a more structured form. Although recent advances in deep learning have facilitated automated recognition, particularly in English, NER can still be challenging due to limited specialized training data. This is exacerbated in hospital settings, where annotations are costly to obtain without appropriate incentives and often depend on local specificities. In this work, we study whether this annotation process can be effectively accelerated by combining two practical strategies. First, we convert the usually passive annotation task into a proactive contest to motivate human annotators to perform a task often considered tedious and time-consuming. Second, we provide pre-annotations to the participants to evaluate how the recall and precision of the pre-annotations can boost or deteriorate annotation performance. We applied both strategies to a text de-identification task on French clinical notes and discharge summaries at a large Swiss university hospital. Our results show that a proactive contest combined with average-quality pre-annotations can significantly speed up annotation and increase annotation quality, enabling us to develop a high-performing text de-identification model for French clinical notes (F1 score 0.94).
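The abstract frames pre-annotation quality in terms of recall and precision and reports the final model's F1 score, so a minimal sketch of entity-level precision/recall/F1 may help make those metrics concrete. This is not the authors' code; the (start, end, label) span format and the toy annotations are assumptions for illustration.

```python
# Minimal sketch: micro-averaged entity-level precision, recall and F1,
# the metrics the abstract uses to characterise pre-annotation quality
# and the final de-identification model (F1 = 0.94).

def entity_prf(gold, predicted):
    """gold, predicted: lists of sets of (start, end, label) tuples, one set per note."""
    tp = fp = fn = 0
    for gold_ents, pred_ents in zip(gold, predicted):
        tp += len(gold_ents & pred_ents)   # exact span + label matches
        fp += len(pred_ents - gold_ents)   # spurious predictions
        fn += len(gold_ents - pred_ents)   # missed entities
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Made-up annotations for two notes (hypothetical labels):
gold = [{(0, 5, "NAME"), (10, 20, "DATE")}, {(3, 9, "ID")}]
pred = [{(0, 5, "NAME")}, {(3, 9, "ID"), (15, 22, "NAME")}]
print(entity_prf(gold, pred))  # -> (0.666..., 0.666..., 0.666...)
```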


Subjects
Electronic Health Records, Natural Language Processing, Humans, Data Anonymization, Switzerland
2.
PLoS Comput Biol; 17(6): e1009070, 2021 Jun.
Article in English | MEDLINE | ID: mdl-34081705

ABSTRACT

Classic reinforcement learning (RL) theories cannot explain human behavior in the absence of external reward or when the environment changes. Here, we employ a deep sequential decision-making paradigm with sparse reward and abrupt environmental changes. To explain the behavior of human participants in these environments, we show that RL theories need to include surprise and novelty, each with a distinct role. While novelty drives exploration before the first encounter of a reward, surprise increases the rate of learning of a world-model as well as of model-free action-values. Even though the world-model is available for model-based RL, we find that human decisions are dominated by model-free action choices. The world-model is only marginally used for planning, but it is important to detect surprising events. Our theory predicts human action choices with high probability and allows us to dissociate surprise, novelty, and reward in EEG signals.
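A schematic sketch of the two distinct roles the abstract assigns to novelty (driving exploration before the first reward) and surprise (increasing the learning rate of model-free action-values when the environment changes), expressed as a plain Q-learning update. This is not the authors' model; the state/action counts, the count-based novelty signal, and all parameter values are illustrative assumptions.

```python
import numpy as np

n_states, n_actions = 10, 4
Q = np.zeros((n_states, n_actions))       # model-free action-values
visit_counts = np.ones(n_states)          # basis for a simple novelty signal

def novelty(state):
    # Less-visited states are more novel (count-based bonus, an assumption here).
    return 1.0 / np.sqrt(visit_counts[state])

def update(state, action, reward, next_state, surprise,
           base_lr=0.1, gamma=0.95, novelty_weight=0.5):
    """One Q-learning step with a surprise-modulated learning rate and a novelty bonus."""
    visit_counts[next_state] += 1
    lr = base_lr * (1.0 + surprise)                     # surprise speeds up learning
    shaped_reward = reward + novelty_weight * novelty(next_state)
    td_error = shaped_reward + gamma * Q[next_state].max() - Q[state, action]
    Q[state, action] += lr * td_error

update(state=0, action=1, reward=0.0, next_state=2, surprise=0.0)  # routine step
update(state=2, action=3, reward=1.0, next_state=5, surprise=2.0)  # surprising, rewarded step
```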


Subjects
Psychological Adaptation, Exploratory Behavior, Psychological Models, Psychological Reinforcement, Algorithms, Choice Behavior/physiology, Computational Biology, Decision Making/physiology, Electroencephalography/statistics & numerical data, Exploratory Behavior/physiology, Humans, Learning/physiology, Neurological Models, Reward
3.
Elife; 8, 2019 Nov 11.
Article in English | MEDLINE | ID: mdl-31709980

ABSTRACT

In many daily tasks, we make multiple decisions before reaching a goal. In order to learn such sequences of decisions, a mechanism to link earlier actions to later reward is necessary. Reinforcement learning (RL) theory suggests two classes of algorithms solving this credit assignment problem: in classic temporal-difference learning, earlier actions receive reward information only after multiple repetitions of the task, whereas models with eligibility traces reinforce entire sequences of actions from a single experience (one-shot). Here, we show one-shot learning of sequences. We developed a novel paradigm to directly observe which actions and states along a multi-step sequence are reinforced after a single reward. By focusing our analysis on those states for which RL with and without an eligibility trace makes qualitatively distinct predictions, we find direct behavioral (choice probability) and physiological (pupil dilation) signatures of reinforcement learning with eligibility traces across multiple sensory modalities.
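A minimal sketch contrasting the two algorithm classes named in the abstract: one-step temporal-difference learning, which credits only the state immediately preceding a single reward, versus TD with an eligibility trace, which reinforces the whole visited sequence in one shot. This is not the authors' paradigm or analysis code; the tabular setup and parameter values are assumptions for illustration.

```python
import numpy as np

def td_episode(sequence, reward, values, lr=0.5, gamma=1.0, lam=0.9, use_trace=True):
    """Update state values along one episode that ends in a single reward."""
    trace = np.zeros_like(values)
    for i, s in enumerate(sequence):
        last = i == len(sequence) - 1
        r = reward if last else 0.0
        next_v = 0.0 if last else values[sequence[i + 1]]
        td_error = r + gamma * next_v - values[s]
        trace[s] += 1.0                        # mark the visited state as eligible
        if use_trace:
            values += lr * td_error * trace    # credit the whole visited sequence
            trace *= gamma * lam               # decay eligibility over time
        else:
            values[s] += lr * td_error         # credit only the current state
    return values

states = [0, 1, 2, 3]                          # a 4-step sequence ending at a goal
print(td_episode(states, reward=1.0, values=np.zeros(5)))                   # one-shot credit along the path
print(td_episode(states, reward=1.0, values=np.zeros(5), use_trace=False))  # only the final state is credited
```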


Subjects
Cognition/physiology, Decision Making/physiology, Learning/physiology, Memory/physiology, Pupil/physiology, Psychological Reinforcement, Reward, Algorithms, Humans, Markov Chains, Neurological Models, Psychomotor Performance/physiology