Making AI-Assisted Grant Evaluation Auditable without Exposing the Model

ArXiv cs.AI

arXiv:2604.25200v1 Announce Type: cross Abstract: Public agencies are beginning to consider large language models (LLMs) as decision-support tools for grant evaluation. This creates a practical governance problem: the model and scoring rubric should not be exposed in a way that allows applicants to optimize against them, yet the evaluation process must remain auditable, contestable, and accountable. We propose a TEE-based architecture that helps reconcile these requirements through remote attestation.
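One ingredient of the auditable-but-hidden property the abstract describes can be illustrated with a cryptographic commitment: the agency fixes the rubric before applications open and publishes only a hash, then discloses the rubric to auditors after decisions are made. This is a minimal sketch under stated assumptions, not the paper's actual protocol; the TEE design in the abstract would additionally bind such a commitment to an attested enclave, which is not shown here. The `commit`/`verify` helpers and the sample rubric are hypothetical.

```python
# Sketch (not from the paper): a salted hash commitment lets an agency prove
# after the fact that the rubric was fixed in advance, without publishing the
# rubric while applications are open.
import hashlib
import os


def commit(rubric: bytes) -> tuple[bytes, bytes]:
    """Return (commitment, nonce). The random nonce prevents dictionary
    attacks against a low-entropy rubric."""
    nonce = os.urandom(32)
    commitment = hashlib.sha256(nonce + rubric).digest()
    return commitment, nonce


def verify(commitment: bytes, nonce: bytes, rubric: bytes) -> bool:
    """Auditor checks the later-disclosed rubric against the commitment
    that was published before the call opened."""
    return hashlib.sha256(nonce + rubric).digest() == commitment


# Publish `c` before the call opens; disclose (nonce, rubric) to auditors later.
rubric = b"criterion 1: feasibility (40%); criterion 2: impact (60%)"
c, n = commit(rubric)
assert verify(c, n, rubric)
assert not verify(c, n, b"tampered rubric")
```

Note that the commitment alone shows the rubric did not change; attesting that the *model* actually applied it is the harder part the paper's TEE architecture targets.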

Published 29 Apr 2026