DISCRET: Synthesizing Faithful Explanations For Treatment Effect Estimation.
Wu, Yinjun; Keoliya, Mayank; Chen, Kan; Velingker, Neelay; Li, Ziyang; Getzen, Emily J; Long, Qi; Naik, Mayur; Parikh, Ravi B; Wong, Eric.
Affiliation
  • Wu Y; School of Computer Science, Peking University, Beijing, China.
  • Keoliya M; Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, United States.
  • Chen K; School of Public Health, Harvard University, Boston, MA, United States.
  • Velingker N; Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, United States.
  • Li Z; Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, United States.
  • Getzen EJ; Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States.
  • Long Q; Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States.
  • Naik M; Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, United States.
  • Parikh RB; Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States.
  • Wong E; Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, United States.
Proc Mach Learn Res; 235: 53597-53618, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39205826
ABSTRACT
Designing faithful yet accurate AI models is challenging, particularly in the field of individual treatment effect (ITE) estimation. ITE prediction models deployed in critical settings such as healthcare should ideally (i) be accurate and (ii) provide faithful explanations. However, current solutions are inadequate: state-of-the-art black-box models do not supply explanations, post-hoc explainers for black-box models lack faithfulness guarantees, and self-interpretable models greatly compromise accuracy. To address these issues, we propose DISCRET, a self-interpretable ITE framework that synthesizes faithful, rule-based explanations for each sample. A key insight behind DISCRET is that explanations can serve dually as database queries to identify similar subgroups of samples. We provide a novel reinforcement learning (RL) algorithm to efficiently synthesize these explanations from a large search space. We evaluate DISCRET on diverse tasks involving tabular, image, and text data. DISCRET outperforms the best self-interpretable models and achieves accuracy comparable to the best black-box models while providing faithful explanations. DISCRET is available at https://github.com/wuyinjun-1993/DISCRET-ICML2024.
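As a rough illustration of the "explanation as database query" idea described in the abstract (not the authors' implementation), the Python sketch below shows how a rule-based explanation might be evaluated as a filter over a tabular cohort to retrieve a similar subgroup, and how a simple treatment-effect estimate could then be read off that subgroup. The rule format, column names, and toy data are all hypothetical.

```python
import pandas as pd

# Hypothetical rule-based explanation: a conjunction of
# (column, operator, threshold) predicates, e.g. "age >= 60 AND tumor_stage <= 2".
RULE = [("age", ">=", 60), ("tumor_stage", "<=", 2)]

def apply_rule(df: pd.DataFrame, rule) -> pd.DataFrame:
    """Treat the explanation as a database query: keep rows satisfying every predicate."""
    mask = pd.Series(True, index=df.index)
    for col, op, thresh in rule:
        if op == ">=":
            mask &= df[col] >= thresh
        elif op == "<=":
            mask &= df[col] <= thresh
        elif op == "==":
            mask &= df[col] == thresh
    return df[mask]

def estimate_effect_from_subgroup(df: pd.DataFrame, rule) -> float:
    """Plug-in treatment-effect estimate for the retrieved subgroup:
    difference in mean outcomes between treated and untreated members."""
    subgroup = apply_rule(df, rule)
    treated_mean = subgroup.loc[subgroup["treatment"] == 1, "outcome"].mean()
    control_mean = subgroup.loc[subgroup["treatment"] == 0, "outcome"].mean()
    return treated_mean - control_mean

# Toy cohort for demonstration only.
cohort = pd.DataFrame({
    "age":         [65, 58, 72, 61, 45],
    "tumor_stage": [1, 2, 2, 3, 1],
    "treatment":   [1, 0, 1, 1, 0],
    "outcome":     [0.8, 0.5, 0.7, 0.4, 0.6],
})

print(estimate_effect_from_subgroup(cohort, RULE))
```

Because the prediction for a sample is computed directly from the subgroup its explanation retrieves, the explanation is faithful by construction; DISCRET's contribution, per the abstract, is an RL procedure for synthesizing such rules efficiently from a large search space.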

Full text: 1 Collection: 01-international Database: MEDLINE Language: English Journal: Proc Mach Learn Res Year: 2024 Document type: Article Country of affiliation: China Country of publication: United States
