Results 1 - 2 of 2
1.
Cortex; 179: 62-76, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39141936

ABSTRACT

The quantification of cognitive powers rests on identifying a behavioural task that depends on them. Such dependence cannot be assured, for the powers a task invokes cannot be experimentally controlled or constrained a priori, resulting in unknown vulnerability to failure of specificity and generalisability. Evaluating a compact version of Raven's Advanced Progressive Matrices (RAPM), a widely used clinical test of fluid intelligence, we show that LaMa, a self-supervised artificial neural network trained solely on the completion of partially masked images of natural environmental scenes, achieves representative human-level test scores a prima vista, without any task-specific inductive bias or training. Compared with cohorts of healthy and focally lesioned participants, LaMa exhibits human-like variation with item difficulty, and produces errors characteristic of right frontal lobe damage under degradation of its ability to integrate global spatial patterns. LaMa's narrow training and limited capacity suggest matrix-style tests may be open to computationally simple solutions that need not invoke the substrates of reasoning.
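The abstract does not spell out how an inpainting model's completion is turned into a multiple-choice answer, so the following is a minimal sketch of one plausible scoring scheme: inpaint the masked cell of the matrix, then pick the candidate answer closest to the completion in pixel space. The `mean_inpaint` function is a toy stand-in for LaMa, not the real model, and `pick_answer` is a hypothetical name.

```python
import numpy as np

def pick_answer(grid, candidates, inpaint_fn):
    """Score a matrix-completion item: inpaint the masked final cell,
    then choose the candidate whose pixels are closest (L2) to the
    model's completion. The comparison rule is an assumption, not
    taken from the paper."""
    completion = inpaint_fn(grid)  # model's fill-in for the masked cell
    dists = [np.linalg.norm(completion - c) for c in candidates]
    return int(np.argmin(dists))

def mean_inpaint(grid):
    """Toy stand-in for LaMa: predict the masked last cell as the
    mean of the other cells. A real model would exploit the spatial
    pattern, not just the average intensity."""
    return np.mean(grid[:-1], axis=0)

# A 3x3 "matrix" of 8x8 cells with values cycling 1, 2, 3; the last
# cell (value 3) is treated as masked.
cells = [np.full((8, 8), v, dtype=float) for v in (1.0, 2.0, 3.0)]
grid = np.stack(cells * 3)
candidates = [np.full((8, 8), v, dtype=float) for v in (0.0, 3.0, 9.0)]
print(pick_answer(grid, candidates, mean_inpaint))  # → 1 (the cell of value 3)
```

The point of the sketch is only that no explicit reasoning machinery appears anywhere: a generic completion plus a nearest-candidate rule suffices to produce an answer.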


Subjects
Intelligence, Neural Networks (Computer), Humans, Intelligence/physiology, Male, Female, Adult, Middle Aged, Cognition/physiology, Young Adult, Intelligence Tests, Aged, Neuropsychological Tests
2.
Patterns (N Y); 3(5): 100483, 2022 May 13.
Article in English | MEDLINE | ID: mdl-35607619

ABSTRACT

The value of biomedical research-a $1.7 trillion annual investment-is ultimately determined by its downstream, real-world impact, whose predictability from simple citation metrics remains unquantified. Here we sought to determine the comparative predictability of future real-world translation-as indexed by inclusion in patents, guidelines, or policy documents-from complex models of title/abstract-level content versus citations and metadata alone. We quantify predictive performance out of sample, ahead of time, across major domains, using the entire corpus of biomedical research captured by Microsoft Academic Graph from 1990 to 2019, encompassing 43.3 million papers. We show that citations are only moderately predictive of translational impact. In contrast, high-dimensional models of titles, abstracts, and metadata exhibit high fidelity (area under the receiver operating characteristic curve [AUROC] > 0.9), generalize across time and domain, and transfer to recognizing papers of Nobel laureates. We argue that content-based impact models are superior to conventional, citation-based measures and sustain a stronger evidence-based claim to the objective measurement of translational potential.
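The headline comparison rests on AUROC, which can be read as the probability that a randomly chosen translated paper is scored above a randomly chosen untranslated one. A minimal sketch of that rank-based (Mann-Whitney) computation, on hypothetical scores rather than the paper's data:

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the rank (Mann-Whitney U) formulation: the fraction of
    positive/negative pairs in which the positive outranks the negative
    (ties count as half a win)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Hypothetical scores for six papers; label 1 = later included in a
# patent, guideline, or policy document.
labels = [1, 1, 1, 0, 0, 0]
content_scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]  # content-based model
citation_scores = [30, 5, 50, 40, 2, 1]           # raw citation counts
print(auroc(content_scores, labels))   # → 1.0 (perfect separation)
print(auroc(citation_scores, labels))  # → ~0.78 (moderate)
```

Note that AUROC is threshold-free and invariant to monotone rescaling of the scores, which is what makes it a fair yardstick for comparing a learned content model against raw citation counts.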
