The role of trust in the use of artificial intelligence for chemical risk assessment.
Wassenaar, Pim N H; Minnema, Jordi; Vriend, Jelle; Peijnenburg, Willie J G M; Pennings, Jeroen L A; Kienhuis, Anne.
Affiliation
  • Wassenaar PNH; National Institute for Public Health and the Environment (RIVM), P.O. Box 1, 3720 BA, Bilthoven, the Netherlands. Electronic address: pim.wassenaar@rivm.nl.
  • Minnema J; National Institute for Public Health and the Environment (RIVM), P.O. Box 1, 3720 BA, Bilthoven, the Netherlands.
  • Vriend J; National Institute for Public Health and the Environment (RIVM), P.O. Box 1, 3720 BA, Bilthoven, the Netherlands.
  • Peijnenburg WJGM; National Institute for Public Health and the Environment (RIVM), P.O. Box 1, 3720 BA, Bilthoven, the Netherlands; Institute of Environmental Sciences (CML), Leiden University, P.O. Box 9518, 2300 RA, Leiden, the Netherlands.
  • Pennings JLA; National Institute for Public Health and the Environment (RIVM), P.O. Box 1, 3720 BA, Bilthoven, the Netherlands.
  • Kienhuis A; National Institute for Public Health and the Environment (RIVM), P.O. Box 1, 3720 BA, Bilthoven, the Netherlands.
Regul Toxicol Pharmacol; 148: 105589, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38403009
ABSTRACT
Risk assessment of chemicals is a time-consuming process and needs to be optimized to ensure that all chemicals are evaluated and regulated in a timely manner. This transition could be stimulated by valuable applications of in silico Artificial Intelligence (AI)/Machine Learning (ML) models. However, implementation of AI/ML models in risk assessment is lagging behind. Most AI/ML models are considered 'black boxes' that lack mechanistic explainability, causing risk assessors to have insufficient trust in their predictions. Here, we explore 'trust' as an essential factor towards regulatory acceptance of AI/ML models. We provide an overview of the elements of trust, including technical and beyond-technical aspects, and highlight the elements that risk assessors consider most important for building trust. The results provide recommendations for risk assessors and computational modelers for future development of AI/ML models, including 1) Keep models simple and interpretable; 2) Offer transparency in the data and data curation; 3) Clearly define and communicate the scope/intended purpose; 4) Define adoption criteria; 5) Make models accessible and user-friendly; 6) Demonstrate the added value in practical settings; and 7) Engage in interdisciplinary settings. These recommendations should ideally be acknowledged in future developments to stimulate trust in and acceptance of AI/ML models for regulatory purposes.
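As a loose illustration of recommendation 1 (not taken from the paper), the sketch below trains a simple linear classifier on synthetic data so that its behaviour can be read directly from its coefficients. The descriptor names, labels, and data are hypothetical placeholders, chosen only to show how an interpretable baseline supports the kind of transparency the abstract calls for.

```python
# Minimal sketch, assuming a QSAR-style binary classification task.
# All descriptor names and data below are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)

# Hypothetical molecular descriptors for 500 substances.
descriptors = ["logKow", "mol_weight", "n_aromatic_rings", "tpsa"]
X = rng.normal(size=(500, len(descriptors)))
# Synthetic binary label ("of concern" vs "not of concern") with a known signal.
y = (0.8 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# A linear model keeps the relation between descriptors and prediction inspectable.
model = LogisticRegression().fit(X_train, y_train)

print("Balanced accuracy:", balanced_accuracy_score(y_test, model.predict(X_test)))

# The coefficients give a direct, communicable account of what drives each
# prediction, which is the sense of "simple and interpretable" intended here.
for name, coef in zip(descriptors, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

In practice such a baseline would be compared against more complex models; the recommendation is to prefer the simpler model whenever its performance is adequate for the stated regulatory purpose.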
Full text: 1 Database: MEDLINE Main subject: Artificial Intelligence / Trust Language: English Year of publication: 2024 Document type: Article
