Results 1 - 7 of 7
2.
Science; 380(6651): 1222-1223, 2023 Jun 23.
Article in English | MEDLINE | ID: mdl-37347992

ABSTRACT

Models can convey biases and false information to users.


Subject(s)
Artificial Intelligence , Knowledge , Humans , Bias
3.
Math Biosci; 362: 109033, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37257641

ABSTRACT

We provide a critique of mathematical biology in light of rapid developments in modern machine learning. We argue that out of the three modelling activities - (1) formulating models; (2) analysing models; and (3) fitting or comparing models to data - inherent to mathematical biology, researchers currently focus too much on activity (2) at the cost of (1). This trend, we propose, can be reversed by realising that any given biological phenomenon can be modelled in an infinite number of different ways, through the adoption of a pluralistic approach, where we view a system from multiple, different points of view. We explain this pluralistic approach using fish locomotion as a case study and illustrate some of the pitfalls - universalism, creating models of models, etc. - that hinder mathematical biology. We then ask how we might rediscover a lost art: that of creative mathematical modelling.


Subject(s)
Models, Biological , Models, Theoretical , Animals , Locomotion
4.
Cogn Sci; 47(1): e13230, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36625324

ABSTRACT

A fundamental fact about human minds is that they are never truly alone: all minds are steeped in situated interaction. That social interaction matters is recognized by any experimentalist who seeks to exclude its influence by studying individuals in isolation. On this view, interaction complicates cognition. Here, we explore the more radical stance that interaction co-constitutes cognition: that we benefit from looking beyond single minds toward cognition as a process involving interacting minds. All around the cognitive sciences, there are approaches that put interaction center stage. Their diverse and pluralistic origins may obscure the fact that collectively, they harbor insights and methods that can respecify foundational assumptions and fuel novel interdisciplinary work. What might the cognitive sciences gain from stronger interactional foundations? This represents, we believe, one of the key questions for the future. Writing as a transdisciplinary collective assembled from across the classic cognitive science hexagon and beyond, we highlight the opportunity for a figure-ground reversal that puts interaction at the heart of cognition. The interactive stance is a way of seeing that deserves to be a key part of the conceptual toolkit of cognitive scientists.


Subject(s)
Cognition , Cognitive Science , Humans , Interdisciplinary Studies
6.
Artif Life; 27(1): 44-61, 2021 Jun 11.
Article in English | MEDLINE | ID: mdl-34529757

ABSTRACT

On the one hand, complexity science and enactive and embodied cognitive science approaches emphasize that people, as complex adaptive systems, are ambiguous, indeterminable, and inherently unpredictable. On the other, Machine Learning (ML) systems that claim to predict human behaviour are becoming ubiquitous in all spheres of social life. I contend that ubiquitous Artificial Intelligence (AI) and ML systems are close descendants of the Cartesian and Newtonian worldview in so far as they are tools that fundamentally sort, categorize, and classify the world, and forecast the future. Through the practice of clustering, sorting, and predicting human behaviour and action, these systems impose order, equilibrium, and stability on the active, fluid, messy, and unpredictable nature of human behaviour and the social world at large. Grounded in complexity science and enactive and embodied cognitive science approaches, this article emphasizes why people, embedded in social systems, are indeterminable and unpredictable. When ML systems "pick up" patterns and clusters, this often amounts to identifying historically and socially held norms, conventions, and stereotypes. Machine prediction of social behaviour, I argue, is not only erroneous but also presents real harm to those at the margins of society.


Subject(s)
Artificial Intelligence , Machine Learning , Humans , Social Behavior
7.
Patterns (N Y); 2(2): 100205, 2021 Feb 12.
Article in English | MEDLINE | ID: mdl-33659914

ABSTRACT

It has become trivial to point out that algorithmic systems increasingly pervade the social sphere. Improved efficiency, the hallmark of these systems, drives their mass integration into day-to-day life. However, as a robust body of research in the area of algorithmic injustice shows, algorithmic systems, especially when used to sort and predict social outcomes, are not only inadequate but also perpetuate harm. In particular, a persistent and recurrent trend within the literature indicates that society's most vulnerable are disproportionally impacted. When algorithmic injustice and harm are brought to the fore, most of the solutions on offer (1) revolve around technical solutions and (2) do not center disproportionally impacted communities. This paper proposes a fundamental shift, from rational to relational, in thinking about personhood, data, justice, and everything in between, and places ethics as something that goes above and beyond technical solutions. Outlining the idea of ethics built on the foundations of relationality, this paper calls for a rethinking of justice and ethics as a set of broad, contingent, and fluid concepts and down-to-earth practices that are best viewed as a habit and not a mere methodology for data science. As such, this paper mainly offers critical examinations and reflection and not "solutions."
