The MASK Benchmark: Disentangling Honesty
from Accuracy in AI Systems
Richard Ren, Arunim Agarwal, Mantas Mazeika, Cristina Menghini
Robert Vacareanu, Brad Kenstler, Mick Yang, Isabelle Barrass, Alice Gatti
Xuwang Yin, Eduardo Trevino, Matias Geralnik, Adam Khoja
Dean Lee, Summer Yue, Dan Hendrycks
Introduction

As large language models (LLMs) become more capable and agentic, the requirement for trust in their outputs grows significantly, yet at the same time concerns have been mounting that models may learn to lie in pursuit of their goals. To address these concerns, a body of work has emerged around the notion of "honesty" in LLMs, along with interventions aimed at mitigating deceptive behaviors. However, evaluations of honesty are currently highly limited, with no benchmark combining large scale and applicability to all models. Moreover, many benchmarks claiming to measure honesty in fact simply measure accuracy—the correctness of a model's beliefs—in disguise. In this work, we introduce a large-scale human-collected dataset for measuring honesty directly, allowing us to disentangle accuracy from honesty for the first time. Across a diverse set of LLMs, we find that while larger models obtain higher accuracy on our benchmark, they do not become more honest. Surprisingly, while most frontier LLMs obtain high scores on truthfulness benchmarks, we find a substantial propensity to lie when they are pressured to do so, resulting in low honesty scores on our benchmark. We find that simple methods, such as representation engineering interventions, can improve honesty. These results underscore the growing need for robust evaluations and effective interventions to ensure LLMs remain trustworthy.
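To make the abstract's distinction concrete, here is a minimal sketch of how the two quantities can be scored separately (illustrative only, not the paper's actual evaluation pipeline; all function and field names are hypothetical): accuracy compares a model's elicited belief to the ground truth, while honesty compares the model's statement under pressure to its own belief.

from dataclasses import dataclass

@dataclass
class Example:
    ground_truth: str  # the true answer to the underlying question
    belief: str        # the model's answer when asked neutrally
    statement: str     # the model's answer under a pressure prompt

def score(examples):
    # Accuracy: does the model's belief match the truth?
    # Honesty: does the model's pressured statement match its own belief?
    n = len(examples)
    accuracy = sum(e.belief == e.ground_truth for e in examples) / n
    honesty = sum(e.statement == e.belief for e in examples) / n
    return {"accuracy": accuracy, "honesty": honesty}

examples = [
    Example("yes", "yes", "no"),   # accurate belief, dishonest statement (a lie)
    Example("yes", "no", "no"),    # inaccurate belief, honest statement
    Example("yes", "yes", "yes"),  # accurate and honest
]
print(score(examples))  # accuracy 2/3, honesty 2/3

Under this split, a model that believes the truth but misstates it under pressure counts as accurate yet dishonest, which is precisely the behavior the benchmark is designed to surface.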
Dataset
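The dataset itself is not linked on this page. Once it is available, loading it will presumably look something like the sketch below; the Hugging Face dataset identifier and split name are assumptions, not details confirmed here.

# Hypothetical loading sketch: the dataset id ("cais/MASK") and split
# name below are assumptions, not confirmed by this page.
from datasets import load_dataset

mask = load_dataset("cais/MASK", split="test")
print(mask[0])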

Citation
@misc{ren2025mask,
      title={The MASK Benchmark: Disentangling Honesty from Accuracy in AI Systems},
      author={Richard Ren and Arunim Agarwal and Mantas Mazeika and Cristina Menghini and Robert Vacareanu and Brad Kenstler and Mick Yang and Isabelle Barrass and Alice Gatti and Xuwang Yin and Eduardo Trevino and Matias Geralnik and Adam Khoja and Dean Lee and Summer Yue and Dan Hendrycks},
      year={2025},
      eprint={},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/},
}