ABSTRACT
Globally, there has been a recent surge in 'citizens' assemblies' [1], which are a form of civic participation in which a panel of randomly selected constituents contributes to questions of policy. The random process for selecting this panel should satisfy two properties. First, it must produce a panel that is representative of the population. Second, in the spirit of democratic equality, individuals would ideally be selected to serve on this panel with equal probability [2,3]. However, in practice these desiderata are in tension owing to differential participation rates across subpopulations [4,5]. Here we apply ideas from fair division to develop selection algorithms that satisfy the two desiderata simultaneously to the greatest possible extent: our selection algorithms choose representative panels while selecting individuals with probabilities as close to equal as mathematically possible, for many metrics of 'closeness to equality'. Our implementation of one such algorithm has already been used to select more than 40 citizens' assemblies around the world. As we demonstrate using data from ten citizens' assemblies, adopting our algorithm over a benchmark representing the previous state of the art leads to substantially fairer selection probabilities. By contributing a fairer, more principled and deployable algorithm, our work puts the practice of sortition on firmer foundations. Moreover, our work establishes citizens' assemblies as a domain in which insights from the field of fair division can lead to high-impact applications.
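The deployed selection algorithm is more elaborate than can be shown briefly, so the following is only a minimal sketch, under invented assumptions, of the underlying idea: enumerate every panel that satisfies the demographic quotas, then solve a linear program for a lottery over those panels that maximizes the minimum selection probability across pool members (a maximin objective; the paper's algorithms also support other measures of closeness to equality). The pool, the single gender feature, and the quotas below are hypothetical.

```python
# A minimal sketch (not the authors' implementation) of maximin panel
# selection: enumerate quota-feasible panels, then solve an LP for a lottery
# over panels that maximizes the minimum selection probability.
from itertools import combinations
from scipy.optimize import linprog

# Hypothetical pool of volunteers with one feature each; the quota demands a
# panel of size 2 containing exactly one 'F' and one 'M'.
pool = ["F", "F", "F", "M"]
panel_size = 2

def satisfies_quotas(panel):
    genders = [pool[i] for i in panel]
    return genders.count("F") == 1 and genders.count("M") == 1

panels = [p for p in combinations(range(len(pool)), panel_size)
          if satisfies_quotas(p)]

n, m = len(pool), len(panels)
# Variables: p_1..p_m (panel probabilities) and t (minimum selection
# probability). Maximize t, i.e. minimize -t.
c = [0.0] * m + [-1.0]
# One constraint per person i:  t - sum_{panels j containing i} p_j <= 0.
A_ub = [[-1.0 if i in panels[j] else 0.0 for j in range(m)] + [1.0]
        for i in range(n)]
b_ub = [0.0] * n
# The panel probabilities must form a distribution.
A_eq = [[1.0] * m + [0.0]]
b_eq = [1.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.0, 1.0)] * (m + 1))

probs = res.x[:m]
for i in range(n):
    sel = sum(probs[j] for j in range(m) if i in panels[j])
    print(f"person {i} ({pool[i]}): selection probability {sel:.3f}")
```

On this toy pool of three women and one man, every feasible panel must include the lone man, so he is selected with probability 1, and the maximin lottery splits the remaining seat equally, giving each woman probability 1/3: exactly the tension between representativeness and equal selection probabilities that the abstract describes.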
Subjects
Administrative Personnel/organization & administration, Algorithms, Democracy, Policy Making, Probability, Datasets as Topic, Female, Humans, Male, Random Allocation

ABSTRACT
We present two models, grounded in machine learning theory, of how people form beliefs. We illustrate how these models give insight into observed human phenomena by showing how polarized beliefs can arise even when people are exposed to almost identical sources of information. In our first model, people form beliefs that are deterministic functions that best fit their past data (training sets). In that model, their inability to form probabilistic beliefs can lead people to hold opposing views even if their data are drawn from distributions that only slightly disagree. In the second model, people pay a cost that increases with the complexity of the function that represents their beliefs. In this second model, even with large training sets drawn from exactly the same distribution, agents can disagree substantially because they simplify the world along different dimensions. We discuss what these models of belief formation suggest for improving people's accuracy and agreement.
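As a concrete illustration of the second model, here is a toy sketch in which the setup and numbers are my own assumptions, not the paper's: the complexity cost is caricatured as a hard restriction to single-feature predictors, and because two features explain the outcome equally well, two agents trained on large samples from the same distribution can simplify along different dimensions and end up with persistently different predictions.

```python
# A toy sketch (assumptions mine, not the paper's exact setup) of belief
# disagreement under a complexity cost: each agent is restricted to a
# single-feature predictor, and the two agents attend to different features.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
x = rng.integers(0, 2, size=(N, 2)).astype(float)  # two binary features
y = x[:, 0] + x[:, 1]                               # outcome depends on both

def fit_single_feature(X, Y, feature):
    """Least-squares fit of Y on one feature plus an intercept."""
    A = np.column_stack([X[:, feature], np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
    return lambda row: coef[0] * row[feature] + coef[1]

# Both single-feature models fit equally well, so the choice between them is a
# tie-break; suppose agent A attends to dimension 0 and agent B to dimension 1.
agent_a = fit_single_feature(x, y, feature=0)
agent_b = fit_single_feature(x, y, feature=1)

probe = np.array([1.0, 0.0])  # x1 = 1, x2 = 0; true outcome is 1
print("agent A predicts", agent_a(probe))  # ~1.5: sees the active feature
print("agent B predicts", agent_b(probe))  # ~0.5: misses it entirely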