Interpretative Interfaces


Ongoing



Interpretative Interfaces is a research-through-design project that explores new ways of interacting with language models by making their internal representations manipulable. The project reframes explainability as a readerly endeavour, not unlike annotating a book while reading it: interpretation leaves material traces in the information architecture of the model.

Situated at the intersection of design fiction and mechanistic interpretability, Interpretative Interfaces aims not to build tools but to expand our imaginaries of what it means to work with an AI system.

Repository

Gabrielle Benabdallah. Interpretative Interfaces: Designing for AI-Mediated Reading Practices and the Knowledge Commons. Proceedings of the CHI 2026 Workshop: Ethics at the Front-End: Responsible User-Facing Design for AI Systems. arXiv:2603.15863.
