1 - 20 of 21
2.
Sci Rep ; 13(1): 16088, 2023 09 26.
Article En | MEDLINE | ID: mdl-37752210

Attribute inference-the process of analyzing publicly available data in order to uncover hidden information-has become a major threat to privacy, given the recent technological leap in machine learning. One way to tackle this threat is to strategically modify one's publicly available data in order to keep one's private information hidden from attribute inference. We evaluate people's ability to perform this task, and compare it against algorithms designed for this purpose. We focus on three attributes: the gender of the author of a piece of text, the country in which a set of photos was taken, and the link missing from a social network. For each of these attributes, we find that people's effectiveness is inferior to that of AI, especially when it comes to hiding the attribute in question. Moreover, when people are asked to modify the publicly available information in order to hide these attributes, they are less likely to make high-impact modifications compared to AI. This suggests that people are unable to recognize the aspects of the data that are critical to an inference algorithm. Taken together, our findings highlight the limitations of relying on human intuition to protect privacy in the age of AI, and emphasize the need for algorithmic support to protect private information from attribute inference.
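As a toy illustration of the kind of attribute inference at stake here (not the models evaluated in the study), a uniform-prior naive Bayes classifier can guess an author attribute from word counts alone; the two "authors" and their training sentences below are invented:

```python
from collections import Counter
from math import log

def train(docs_by_class):
    """Per-class word counts plus the shared vocabulary."""
    counts = {c: Counter(w for d in docs for w in d.split())
              for c, docs in docs_by_class.items()}
    vocab = {w for c in counts for w in counts[c]}
    return counts, vocab

def predict(counts, vocab, text):
    """Most likely class under a uniform-prior naive Bayes model
    with add-one smoothing."""
    def log_likelihood(c):
        total = sum(counts[c].values())
        return sum(log((counts[c][w] + 1) / (total + len(vocab)))
                   for w in text.split())
    return max(counts, key=log_likelihood)

# Invented training data: two authors with distinctive vocabularies.
counts, vocab = train({
    "author_x": ["lovely brunch today", "such a lovely walk"],
    "author_y": ["deploy the patch", "patch the build server"],
})
```

Strategically editing one's text then amounts to removing or replacing the words that dominate these per-class likelihoods, which is precisely the step the study finds people perform poorly.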


Algorithms , Intuition , Humans , Privacy , Machine Learning
3.
PNAS Nexus ; 2(8): pgad264, 2023 Aug.
Article En | MEDLINE | ID: mdl-37601308

With over two billion monthly active users, YouTube currently shapes the landscape of online political video consumption, with 25% of adults in the United States regularly consuming political content via the platform. Considering that nearly three-quarters of the videos watched on YouTube are delivered via its recommendation algorithm, the propensity of this algorithm to create echo chambers and deliver extremist content has been an active area of research. However, it is unclear whether the algorithm may exhibit political leanings toward either the Left or Right. To fill this gap, we constructed archetypal users across six personas in the US political context, ranging from Far Left to Far Right. Utilizing these users, we performed a controlled experiment in which they consumed over eight months' worth of videos and were recommended over 120,000 unique videos. We find that while the algorithm pulls users away from political extremes, this pull is asymmetric, with users being pulled away from Far Right content more strongly than from Far Left. Furthermore, we show that the recommendations made by the algorithm skew left even when the user does not have a watch history. Our results raise questions about whether the recommendation algorithms of social media platforms in general, and YouTube in particular, should exhibit political biases, and about the wide-reaching societal and political implications that such biases could entail.

4.
Sci Rep ; 13(1): 12187, 2023 08 24.
Article En | MEDLINE | ID: mdl-37620342

The emergence of large language models has led to the development of powerful tools such as ChatGPT that can produce text indistinguishable from human-generated work. With the increasing accessibility of such technology, students across the globe may utilize it to help with their school work-a possibility that has sparked ample discussion on the integrity of student evaluation processes in the age of artificial intelligence (AI). To date, it is unclear how such tools perform compared to students on university-level courses across various disciplines. Further, students' perspectives regarding the use of such tools in school work, and educators' perspectives on treating their use as plagiarism, remain unknown. Here, we compare the performance of the state-of-the-art tool, ChatGPT, against that of students on 32 university-level courses. We also assess the degree to which its use can be detected by two classifiers designed specifically for this purpose. Additionally, we conduct a global survey across five countries, as well as a more in-depth survey at the authors' institution, to discern students' and educators' perceptions of ChatGPT's use in school work. We find that ChatGPT's performance is comparable, if not superior, to that of students in a multitude of courses. Moreover, current AI-text classifiers cannot reliably detect ChatGPT's use in school work, due both to their propensity to classify human-written answers as AI-generated and to the relative ease with which AI-generated text can be edited to evade detection. Finally, there seems to be an emerging consensus among students to use the tool, and among educators to treat its use as plagiarism. Our findings offer insights that could guide policy discussions addressing the integration of artificial intelligence into educational frameworks.


Artificial Intelligence , Communication , Humans , Universities , Schools , Perception
5.
Nat Commun ; 14(1): 3108, 2023 05 30.
Article En | MEDLINE | ID: mdl-37253759

With the progress of artificial intelligence and the emergence of global online communities, humans and machines are increasingly participating in mixed collectives in which they can help or hinder each other. Human societies have had thousands of years to consolidate the social norms that promote cooperation; but mixed collectives often struggle to articulate the norms that hold when humans coexist with machines. In five studies involving 7917 individuals, we document the way people treat machines differently from humans in a stylized society of beneficiaries, helpers, punishers, and trustors. We show that helpers and punishers gain different amounts of trust when they follow norms than when they do not. We also demonstrate that the trust-gain of norm-followers is associated with trustors' assessments of the consensual nature of cooperative norms governing helping and punishing. Lastly, we establish that, under certain conditions, informing trustors about the norm consensus on helping tends to decrease the differential treatment of both machines and people interacting with them. These results allow us to anticipate how humans may develop cooperative norms for human-machine collectives, specifically, by relying on already extant norms in human-only groups. We also demonstrate that this evolution may be accelerated by making people aware of their emerging consensus.


Cooperative Behavior , Trust , Humans , Artificial Intelligence , Consensus , Social Norms
6.
Proc Natl Acad Sci U S A ; 120(13): e2215324120, 2023 03 28.
Article En | MEDLINE | ID: mdl-36940343

Disparities continue to pose major challenges in various aspects of science. One such aspect is editorial board composition, which has been shown to exhibit racial and geographical disparities. However, the literature on this subject lacks longitudinal studies quantifying the degree to which the racial composition of editors reflects that of scientists. Other aspects that may exhibit racial disparities include the time spent between the submission and acceptance of a manuscript and the number of citations a paper receives relative to textually similar papers, but these have not been studied to date. To fill this gap, we compile a dataset of 1,000,000 papers published between 2001 and 2020 by six publishers, while identifying the handling editor of each paper. Using this dataset, we show that most countries in Asia, Africa, and South America (where the majority of the population is ethnically non-White) have fewer editors than would be expected based on their share of authorship. Focusing on US-based scientists reveals Black as the most underrepresented race. In terms of acceptance delay, we find, again, that papers from Asia, Africa, and South America spend more time between submission and acceptance than other papers published in the same journal in the same year. Regression analysis of US-based papers reveals that Black authors suffer from the greatest delay. Finally, by analyzing citation rates of US-based papers, we find that Black and Hispanic scientists receive significantly fewer citations than White scientists doing similar research. Taken together, these findings highlight significant challenges facing non-White scientists.
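The headline comparison of editorship against authorship share can be sketched as a simple ratio of shares; the country labels and counts below are invented for illustration:

```python
def representation_ratio(editors_by_country, authors_by_country):
    """Each country's share of editors divided by its share of authorship.

    Values below 1.0 mean the country has fewer editors than its
    authorship share would predict (underrepresentation)."""
    total_editors = sum(editors_by_country.values())
    total_authors = sum(authors_by_country.values())
    return {
        country: (editors_by_country.get(country, 0) / total_editors)
                 / (authors / total_authors)
        for country, authors in authors_by_country.items()
    }

# Invented counts: country "B" supplies half the authors but a tenth of editors.
ratios = representation_ratio({"A": 90, "B": 10}, {"A": 50, "B": 50})
```

Under these made-up numbers, country "B" scores 0.2, i.e. five times fewer editors than its authorship would predict.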


Authorship , Publications , Humans , Asia , Black People , Hispanic or Latino
7.
Nat Hum Behav ; 7(3): 353-364, 2023 03.
Article En | MEDLINE | ID: mdl-36646836

Scientific editors shape the content of academic journals and set standards for their fields. Yet, the degree to which the gender makeup of editors reflects that of scientists, and the rate at which editors publish in their own journals, are not entirely understood. Here, we use algorithmic tools to infer the gender of 81,000 editors serving more than 1,000 journals and 15 disciplines over five decades. Only 26% of authors in our dataset are women, and we find even fewer women among editors (14%) and editors-in-chief (8%). Career length explains the gender gap among editors, but not editors-in-chief. Moreover, by analysing the publication records of 20,000 editors, we find that 12% publish at least one-fifth, and 6% publish at least one-third, of their papers in the journal they edit. Editors-in-chief tend to self-publish at a higher rate. Finally, compared with women, men have a higher increase in the rate at which they publish in a journal soon after becoming its editor.


Gender Equity , Publishing , Female , Humans , Male
8.
Sci Rep ; 13(1): 1213, 2023 01 21.
Article En | MEDLINE | ID: mdl-36681708

A fundamental question in social and biological sciences is whether self-governance is possible when individual and collective interests are in conflict. Free riding poses a major challenge to self-governance, and a prominent solution to this challenge has been altruistic punishment. However, this solution is ineffective when counter-punishments are possible and when social interactions are noisy. We set out to address these shortcomings, motivated by the fact that most people behave like conditional cooperators-individuals willing to cooperate if a critical number of others do so. In our evolutionary model, the population contains heterogeneous conditional cooperators whose decisions depend on past cooperation levels. The population plays a repeated public goods game in a moderately noisy environment where individuals can occasionally commit mistakes in their cooperative decisions and in their imitation of the role models' strategies. We show that, under moderate levels of noise, injecting a few altruists into the population triggers positive reciprocity among conditional cooperators, thereby providing a novel mechanism to establish stable cooperation. More broadly, our findings indicate that self-governance is possible while avoiding the detrimental effects of punishment, and suggest that society should focus on creating a critical amount of trust to harness the conditional nature of its members.
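A minimal toy dynamic (far simpler than the paper's evolutionary model, and invented here purely for illustration) shows the mechanism: injecting unconditional altruists, i.e. agents with threshold 0, can trigger a cascade among conditional cooperators who would otherwise never start cooperating:

```python
import random

def mean_cooperation(thresholds, rounds=200, noise=0.0, seed=0):
    """Repeated public goods interaction among conditional cooperators.

    Agent i cooperates if last round's cooperator count met thresholds[i];
    with probability `noise`, a decision is flipped by mistake. Returns
    the average cooperation level over the final 50 rounds."""
    rng = random.Random(seed)
    n = len(thresholds)
    coop_last = 0  # pessimistic start: nobody has cooperated yet
    history = []
    for _ in range(rounds):
        decisions = [coop_last >= t for t in thresholds]
        if noise:
            decisions = [not d if rng.random() < noise else d
                         for d in decisions]
        coop_last = sum(decisions)
        history.append(coop_last / n)
    return sum(history[-50:]) / 50

# Without an altruist (all thresholds >= 1), cooperation never starts;
# giving a single agent threshold 0 triggers a full cascade.
stuck = mean_cooperation(list(range(1, 21)))   # -> 0.0
cascade = mean_cooperation(list(range(20)))    # -> 1.0
```

In the cascade case, the lone altruist cooperates in round one, which satisfies the threshold-1 agent in round two, and so on until the whole population cooperates: the "critical amount of trust" idea in miniature.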


Cooperative Behavior , Punishment , Humans , Game Theory , Altruism , Social Interaction
9.
Proc Natl Acad Sci U S A ; 120(3): e2212649120, 2023 01 17.
Article En | MEDLINE | ID: mdl-36623193

The World Wide Web (WWW) empowers people in developing regions by eradicating illiteracy, supporting women, and generating economic opportunities. However, their reliance on limited bandwidth and low-end phones leaves them with a poorer browsing experience compared to privileged users across the digital divide. To evaluate the extent of this phenomenon, we sent participants to 56 cities to measure the cost of mobile data and the average page load time. We found the cost to be orders of magnitude greater, and the average page load time to be four times slower, in some locations compared to others. Analyzing how popular webpages have changed over the past years suggests that they are increasingly designed with high processing power in mind, effectively leaving the less fortunate users behind. Addressing this digital inequality through new infrastructure takes years to complete and billions of dollars to finance. A more practical solution is to make the webpages more accessible by reducing their size and optimizing their load time. To this end, we developed a solution called Lite-Web and evaluated it in the Gilgit-Baltistan province of Pakistan, demonstrating that it transforms the browsing experience of a Pakistani villager using a low-end phone to almost that of a Dubai resident using a flagship phone. A user study in two high schools in Pakistan confirms that the performance gains come at no expense to the pages' look and functionality. These findings suggest that deploying Lite-Web at scale would constitute a major step toward a WWW without digital inequality.


Employment , Internet , Humans , Female , Pakistan
11.
Sci Rep ; 12(1): 21461, 2022 12 12.
Article En | MEDLINE | ID: mdl-36509790

Nations worldwide are mobilizing to harness the power of Artificial Intelligence (AI) given its massive potential to shape global competitiveness over the coming decades. Using a dataset of 2.2 million AI papers, we study inter-city citations, collaborations, and talent migrations to uncover dependencies between Eastern and Western cities worldwide. Beijing emerges as a clear outlier, as it has been the most impactful city since 2007, the most productive since 2002, and the one housing the largest number of AI scientists since 1995. Our analysis also reveals that Western cities cite each other far more frequently than expected by chance, East-East collaborations are far more common than East-West or West-West collaborations, and migration of AI scientists mostly takes place from one Eastern city to another. We then propose a measure that quantifies each city's role in bridging East and West. Beijing's role surpasses that of all other cities combined, making it the central gateway through which knowledge and talent flow from one side to the other. We also track the center of mass of AI research by weighing each city's geographic location by its impact, productivity, and AI workforce. The center of mass has moved thousands of kilometers eastward over the past three decades, with Beijing's pull increasing each year. These findings highlight the eastward shift in the tides of global AI research, and the growing role of the Chinese capital as a hub connecting researchers across the globe.
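The center-of-mass tracking described above can be sketched by weighting each city's position by its impact and averaging on the sphere via 3D unit vectors (naive averaging of latitudes and longitudes fails near the antimeridian); the three hubs and their weights below are invented:

```python
from math import radians, degrees, cos, sin, atan2, sqrt

def center_of_mass(cities):
    """Weighted geographic center of mass.

    `cities` is a list of (lat, lon, weight) triples; coordinates are
    converted to 3D unit vectors so the averaging is valid on a sphere."""
    x = y = z = total = 0.0
    for lat, lon, w in cities:
        la, lo = radians(lat), radians(lon)
        x += w * cos(la) * cos(lo)
        y += w * cos(la) * sin(lo)
        z += w * sin(la)
        total += w
    x, y, z = x / total, y / total, z / total
    lat = degrees(atan2(z, sqrt(x * x + y * y)))
    lon = degrees(atan2(y, x))
    return lat, lon

# Invented impact weights for three research hubs:
hubs = [(39.9, 116.4, 5.0),   # Beijing
        (37.4, -122.1, 3.0),  # Bay Area
        (51.5, -0.1, 2.0)]    # London
center = center_of_mass(hubs)
```

Recomputing this centroid year by year, with weights taken from impact, productivity, or workforce counts, is what traces the eastward drift the abstract reports.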


Artificial Intelligence , Cities , Beijing
12.
Sci Rep ; 12(1): 22582, 2022 12 30.
Article En | MEDLINE | ID: mdl-36585429

As the COVID-19 pandemic has demonstrated, identifying the origin of a pandemic remains a challenging task. The search for patient zero may benefit from the widely used and well-established toolkit of contact tracing methods, although this possibility has not been explored to date. We fill this gap by investigating the prospect of performing the source detection task as part of the contact tracing process, i.e., the possibility of tuning the parameters of the process in order to pinpoint the origin of the infection. To this end, we perform simulations on temporal networks using a recent diffusion model that recreates the dynamics of the COVID-19 pandemic. We find that increasing the budget for contact tracing beyond a certain threshold can significantly improve the identification of infected individuals but has diminishing returns in terms of source detection. Moreover, disease variants of higher infectivity make it easier to find the source but harder to identify infected individuals. Finally, we unravel a seemingly intrinsic trade-off between the use of contact tracing to either identify infected nodes or detect the source of infection. This trade-off suggests that focusing on the identification of patient zero may come at the expense of identifying infected individuals.
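As a hedged illustration of what "source detection" means here (a Jordan-center-style estimator on a small static graph, not the paper's temporal-network procedure), one can pick the node minimizing the total graph distance to the observed infected nodes:

```python
from collections import deque

def distance_center(adj, infected):
    """Toy source-detection heuristic: return the node minimizing the
    sum of shortest-path distances to the observed infected nodes.

    `adj` maps each node to its neighbor list; the graph is assumed
    connected and unweighted."""
    def distances_from(src):
        d = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    queue.append(v)
        return d

    best, best_cost = None, float("inf")
    for node in adj:
        d = distances_from(node)
        cost = sum(d[i] for i in infected)
        if cost < best_cost:
            best, best_cost = node, cost
    return best
```

On a star graph whose three leaves are infected, the hub is (correctly) the estimated source; the paper's contribution is studying how well such estimates survive when the observations come from a budget-limited contact tracing process.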


COVID-19 , Humans , COVID-19/epidemiology , Contact Tracing/methods , Pandemics , Budgets
13.
iScience ; 25(9): 104956, 2022 Sep 16.
Article En | MEDLINE | ID: mdl-36093057

Influencing others through social networks is fundamental to all human societies. Whether this happens through the diffusion of rumors, opinions, or viruses, identifying the diffusion source (i.e., the person that initiated it) is a problem that has attracted much research interest. Nevertheless, existing literature has ignored the possibility that the source might strategically modify the network structure (by rewiring links or introducing fake nodes) to escape detection. Here, without restricting our analysis to any particular diffusion scenario, we close this gap by evaluating two mechanisms that hide the source-one stemming from the source's actions, the other from the network structure itself. This reveals that sources can easily escape detection, and that removing links is far more effective than introducing fake nodes. Thus, efforts should focus on exposing concealed ties rather than planted entities; such exposure would drastically improve our chances of detecting the diffusion source.

14.
PNAS Nexus ; 1(5): pgac256, 2022 Nov.
Article En | MEDLINE | ID: mdl-36712321

Recent breakthroughs in machine learning and big data analysis are allowing our online activities to be scrutinized at an unprecedented scale, and our private information to be inferred without our consent or knowledge. Here, we focus on algorithms designed to infer the opinions of Twitter users toward a growing number of topics, and consider the possibility of modifying the profiles of these users in the hope of hiding their opinions from such algorithms. We ran a survey to understand the extent of this privacy threat, and found evidence suggesting that a significant proportion of Twitter users wish to avoid revealing at least some of their opinions about social, political, and religious issues. Moreover, our participants were unable to reliably identify the Twitter activities that reveal one's opinion to such algorithms. Given these findings, we consider the possibility of fighting AI with AI, i.e., instead of relying on human intuition, people may have a better chance at hiding their opinion if they modify their Twitter profiles following advice from an automated assistant. We propose a heuristic that identifies which Twitter accounts the users should follow or mention in their tweets, and show that such a heuristic can effectively hide the user's opinions. Altogether, our study highlights the risk associated with developing machine learning algorithms that analyze people's profiles, and demonstrates the potential to develop countermeasures that preserve the basic right of choosing which of our opinions to share with the world.

15.
Sci Rep ; 11(1): 5329, 2021 03 05.
Article En | MEDLINE | ID: mdl-33674635

Disinformation continues to raise concerns due to its increasing threat to society. Nevertheless, the threat of a disinformation-based attack on critical infrastructure is often overlooked. Here, we consider urban traffic networks and focus on fake information that manipulates drivers' decisions to create congestion at a city scale. Specifically, we consider two complementary scenarios, one where drivers are persuaded to move towards a given location, and another where they are persuaded to move away from it. We study the optimization problem faced by the adversary when choosing which streets to target to maximize disruption. We prove that finding an optimal solution is computationally intractable, implying that the adversary has no choice but to settle for suboptimal heuristics. We analyze one such heuristic, and compare the cases when targets are spread across the city of Chicago vs. concentrated in its business district. Surprisingly, the latter results in more far-reaching disruption, with its impact felt as far as 2 km from the closest target. Our findings demonstrate that vulnerabilities in critical infrastructure may arise not only from hardware and software, but also from behavioral manipulation.

17.
Nat Commun ; 11(1): 5855, 2020 11 17.
Article En | MEDLINE | ID: mdl-33203848

We study mentorship in scientific collaborations, where a junior scientist is supported by potentially multiple senior collaborators, without them necessarily having formal supervisory roles. We identify 3 million mentor-protégé pairs and survey a random sample, verifying that their relationship involved some form of mentorship. We find that mentorship quality predicts the scientific impact of the papers written by protégés post mentorship without their mentors. We also find that increasing the proportion of female mentors is associated not only with a reduction in the post-mentorship impact of female protégés, but also with a reduction in the gain of female mentors. While current diversity policies encourage same-gender mentorships to retain women in academia, our findings raise the possibility that opposite-gender mentorship may actually increase the impact of women who pursue a scientific career. These findings add a new perspective to the policy debate on how to best elevate the status of women in science.


Academic Success , Mentors , Serial Publications , Female , Humans , Male , Science , Serial Publications/statistics & numerical data , Surveys and Questionnaires
18.
PLoS One ; 15(8): e0236517, 2020.
Article En | MEDLINE | ID: mdl-32785250

Social media has made it possible to manipulate the masses via disinformation and fake news at an unprecedented scale. This is particularly alarming from a security perspective, as humans have proven to be one of the weakest links when protecting critical infrastructure in general, and the power grid in particular. Here, we consider an attack in which an adversary attempts to manipulate the behavior of energy consumers by sending fake discount notifications encouraging them to shift their consumption into the peak-demand period. Using Greater London as a case study, we show that such disinformation can indeed lead to unwitting consumers synchronizing their energy-usage patterns, and result in city-scale blackouts if the grid is heavily loaded. We then conduct surveys to assess the propensity of people to follow through on such notifications and forward them to their friends. This allows us to model how the disinformation may propagate through social networks, potentially amplifying the attack impact. These findings demonstrate that in an era when disinformation can be weaponized, system vulnerabilities arise not only from the hardware and software of critical infrastructure, but also from the behavior of the consumers.


Communication , Information Dissemination , Social Media , Social Networking , Cities , Computer Systems , Deception , Humans , London , Software , Surveys and Questionnaires
19.
PLoS One ; 15(1): e0227049, 2020.
Article En | MEDLINE | ID: mdl-31923244

We consider a demand response program in which a block of apartments receive a discount from their electricity supplier if they ensure that their aggregate load from air conditioning does not exceed a predetermined threshold. The goal of the participants is to obtain the discount, while ensuring that their individual temperature preferences are also satisfied. As such, the apartments need to collectively optimise their use of air conditioning so as to satisfy these constraints and minimise their costs. Given an optimal cooling profile that secures the discount, the problem that the apartments face then is to divide the total discounted cost in a fair way. To achieve this, we take a coalitional game approach and propose the use of the Shapley value from cooperative game theory, which is the normative payoff division mechanism that offers a unique set of desirable fairness properties. However, applying the Shapley value in this setting presents a novel computational challenge. This is because its calculation requires, as input, the cost of every subset of apartments, which means solving an exponential number of collective optimisations, each of which is a computationally intensive problem. To address this, we propose solving the optimisation problem of each subset suboptimally, to allow for acceptable solutions that require less computation. We show that, due to the linearity property of the Shapley value, if suboptimal costs are used rather than optimal ones, the division of the discount will be fair in the following sense: each apartment is fairly "rewarded" for its contribution to the optimal cost and, at the same time, is fairly "penalised" for its contribution to the discrepancy between the suboptimal and the optimal costs. Importantly, this is achieved without requiring the optimal solutions.
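The Shapley value the paper applies can be sketched directly from its definition; the three-apartment cost function below is invented. Note that every coalition's cost is evaluated, which is exactly the exponential blow-up that motivates the paper's use of suboptimal coalition costs:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, cost):
    """Shapley value of each player in a coalitional cost game.

    `cost` maps a frozenset of players to that coalition's cost
    (with the empty coalition costing 0). Each player's value is its
    marginal contribution averaged over all join orders."""
    n = len(players)
    values = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(len(others) + 1):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            for subset in combinations(others, k):
                s = frozenset(subset)
                values[p] += weight * (cost(s | {p}) - cost(s))
    return values

# Invented example: a shared fixed cost plus per-apartment cooling load.
loads = {"A": 2.0, "B": 1.0, "C": 1.0}
def coalition_cost(s):
    return (5.0 + sum(loads[p] for p in s)) if s else 0.0

division = shapley_values(list(loads), coalition_cost)
```

The resulting division is efficient (the shares sum to the grand coalition's cost of 9.0) and symmetric (the identical apartments B and C pay the same), two of the fairness properties the abstract refers to; the linearity property is what lets the paper decompose the division into an optimal-cost reward and a suboptimality penalty.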


Air Conditioning/economics , Cooperative Behavior , Game Theory , Group Processes , Independent Living/economics , Models, Economic , Cost-Benefit Analysis , Electricity , Humans , Reward
20.
Sci Rep ; 9(1): 12208, 2019 08 21.
Article En | MEDLINE | ID: mdl-31434975

Our private connections can be exposed by link prediction algorithms. To date, this threat has only been addressed from the perspective of a central authority, completely neglecting the possibility that members of the social network can themselves mitigate such threats. We fill this gap by studying how an individual can rewire her own network neighborhood to hide her sensitive relationships. We prove that the optimization problem faced by such an individual is NP-complete, meaning that any attempt to identify an optimal way to hide one's relationships is futile. Based on this, we shift our attention towards developing effective, albeit not optimal, heuristics that are readily-applicable by users of existing social media platforms to conceal any connections they deem sensitive. Our empirical evaluation reveals that it is more beneficial to focus on "unfriending" carefully-chosen individuals rather than befriending new ones. In fact, by avoiding communication with just 5 individuals, it is possible for one to hide some of her relationships in a massive, real-life telecommunication network, consisting of 829,725 phone calls between 248,763 individuals. Our analysis also shows that link prediction algorithms are more susceptible to manipulation in smaller and denser networks. Evaluating the error vs. attack tolerance of link prediction algorithms reveals that rewiring connections randomly may end up exposing one's sensitive relationships, highlighting the importance of the strategic aspect. In an age where personal relationships continue to leave digital traces, our results empower the general public to proactively protect their private relationships.
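The "unfriending" finding can be illustrated with the common-neighbors score, one standard link-prediction heuristic (the paper evaluates several; the toy graph below is invented):

```python
from itertools import combinations

def common_neighbors_scores(edges):
    """Score each non-adjacent node pair by its number of common
    neighbors, a classic link-prediction heuristic: a high score flags
    the pair as a likely (possibly hidden) link."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return {(u, v): len(adj[u] & adj[v])
            for u, v in combinations(sorted(adj), 2)
            if v not in adj[u]}

# "u" and "v" hide their tie, but two shared friends still expose it;
# unfriending one shared friend halves the sensitive pair's score.
edges = [("u", "a"), ("u", "b"), ("v", "a"), ("v", "b")]
before = common_neighbors_scores(edges)[("u", "v")]   # 2
after = common_neighbors_scores(edges[1:])[("u", "v")]  # 1
```

Removing the single edge ("u", "a") lowers the hidden pair's score more than adding a new friendship could, which is the intuition behind the paper's result that unfriending carefully chosen individuals beats befriending new ones.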


Algorithms , Interpersonal Relations , Models, Theoretical , Social Media , Female , Humans , Male
...