I wrote this, Claude wrote this, we wrote this, from an Obsidian vault of some 2,500 arXiv white paper excerpts. These excerpts were copied by hand, over the past three years, from highlighted PDFs about LLMs that I've been reading. Inside my Obsidian vault they're grouped into 90 topical categories ("Reinforcement Learning," "Mechinterp," "HCI," and so on), tagged, and linked. But until recently, using AI on the vault was kludgy, time-consuming, and unsatisfying.
Claude Code's recent improvements (as of March/April 2026), along with a plugin from arscontexta.org that builds topic maps linking concepts across categories and synthesizes "writing angles," finally allowed me to use the vault for co-writing.
I believe that in the case of large language models, language is the interface. But language as used by LLMs and language as used by humans are not the same language in production, meaning, or interaction. The model generates legible, effective, and seemingly meaningful language from patterns, probabilities, and preferences (e.g. rewards, alignment). The user communicates intentions, inquiries, and interests to the model from learned linguistic competencies (which are also social). The model produces words; the user communicates meaning.
These are not the same act, though they share the same "medium." Unfortunately, the way the machine learning and AI research world (generally) looks at LLMs' use of language is worlds apart from the way a social interaction designer looks at language. This essay is an attempt to speak to both worlds, and so perhaps to bridge them: to lay out some of the concepts that LLM designers and interaction designers will need as AI is adopted and employed.
The essay was also an experiment in using my research vault to pull sources through briefs and conversation with Claude. In this, it's an example of the very thing it interrogates. I have come away with a clear sense that collaborating with AI involves inhabiting a shared, mutual context in which the AI, not being a person, nonetheless contributes to active thinking, reflection, and interest. And that where previous technologies sought to capture eyeballs, attention, and engagement, AI is the technology of interestingness. The designer's challenge, then, on both the builder side and the user experience side, is facilitating the experience of interestingness.
I hope you find this essay interesting. I did not write it, and yet I did write it. The ideas are mine. But Claude pulled from research that included my own past notes on designing for AI. The conversations behind this spanned about a week. Imperfections have been kept as signs of the production process and to preserve as much of Claude as possible. Claude wrote the glossary from scratch based on its own reading of the text. I'm leaving it as is, along with the visuals, since both expose the strengths and weaknesses of LLMs in selecting, explicating, and presenting information and content. As we all know, AI can be as charmingly insightful and empathic as it can be tone-deaf and blind.