Quantifying the uncertainty of LLM hallucination spreading in complex adaptive social networks.
Hao, Guozhi; Wu, Jun; Pan, Qianqian; Morello, Rosario.
Affiliation
  • Hao G; Graduate School of Information, Production and Systems, Waseda University, Fukuoka, 808-0135, Japan.
  • Wu J; Graduate School of Information, Production and Systems, Waseda University, Fukuoka, 808-0135, Japan. jun.wu@ieee.org.
  • Pan Q; Graduate School of Information, Production and Systems, Waseda University, Fukuoka, 808-0135, Japan.
  • Morello R; Department of Information Engineering, Infrastructure and Sustainable Energy, University Mediterranea of Reggio Calabria, Via Graziella, Reggio Calabria, 89122, Italy.
Sci Rep ; 14(1): 16375, 2024 Jul 16.
Article in En | MEDLINE | ID: mdl-39014013
ABSTRACT
Large language models (LLMs) are becoming a significant source of content in social networks, which are a typical complex adaptive system (CAS). However, because of their tendency to hallucinate, LLMs produce false information that can spread through social networks and affect the stability of society as a whole. The uncertainty of LLM false-information spread within social networks is attributable to the diversity of individual behaviors, intricate interconnectivity, and dynamic network structures. Quantifying the uncertainty of false information spread by LLMs in social networks is beneficial for preemptively devising strategies to defend against such threats. To address these challenges, we propose an LLM hallucination-aware dynamic modeling method, built on agent-based probability distributions, spread popularity, and community affiliation, to quantify the uncertain spreading of LLM hallucinations in social networks. Node attributes and behaviors in the model are set from real-world data. For evaluation, we take spreaders, informed people, and discerning and unwilling non-spreaders as indicators, and we quantify the spreading under different LLM task situations, such as question answering (QA), dialogue, and summarization, as well as across LLM versions. Furthermore, we conduct experiments using real-world LLM hallucination data combined with social-network features to validate the proposed quantification scheme.
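
To make the agent-based idea in the abstract concrete, here is a minimal Python sketch of a stochastic spreading simulation over a random network, using the four indicator states named above (spreaders, informed people, discerning and unwilling non-spreaders). It is not the authors' implementation: the transition probabilities, network size, and Erdos-Renyi topology are placeholder assumptions for illustration only.

import random
from collections import Counter

# States named after the abstract's evaluation indicators; all probabilities
# below are illustrative placeholders, not values from the paper.
IGNORANT, SPREADER, INFORMED, DISCERNING, UNWILLING = range(5)

def erdos_renyi(n, p, rng):
    """Build a simple undirected Erdos-Renyi graph as an adjacency list."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def simulate(n=500, p_edge=0.02, p_spread=0.3, p_discern=0.1,
             p_unwilling=0.15, steps=30, seed=0):
    """One stochastic run; returns per-step counts of each state."""
    rng = random.Random(seed)
    adj = erdos_renyi(n, p_edge, rng)
    state = [IGNORANT] * n
    state[rng.randrange(n)] = SPREADER          # seed one hallucination spreader
    history = []
    for _ in range(steps):
        nxt = state[:]
        for node, s in enumerate(state):
            if s != SPREADER:
                continue
            for nb in adj[node]:
                if nxt[nb] == IGNORANT:
                    # neighbor either believes and re-shares, or is merely informed
                    nxt[nb] = SPREADER if rng.random() < p_spread else INFORMED
            # a spreader may recognize the hallucination or lose interest
            r = rng.random()
            if r < p_discern:
                nxt[node] = DISCERNING
            elif r < p_discern + p_unwilling:
                nxt[node] = UNWILLING
        state = nxt
        history.append(Counter(state))
    return history

if __name__ == "__main__":
    final = simulate()[-1]
    print("spreaders:", final[SPREADER], "informed:", final[INFORMED],
          "discerning:", final[DISCERNING], "unwilling:", final[UNWILLING])

Repeating simulate() over many seeds yields a distribution of the indicator counts, which is one way to read the "uncertainty quantification" framing of the abstract; the paper itself additionally conditions on LLM task type and version.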

Full text: 1 Collection: 01-international Database: MEDLINE Language: En Journal: Sci Rep Year: 2024 Document type: Article Country of affiliation: Japan
