Articles on AI
The design community has been slow to adopt the changes wrought by generative AI, which is unusual but perhaps not entirely surprising — and not, as is sometimes supposed, because designers harbor a specific fear or distaste for AI's synthetic “creativity.” The more honest explanation is that language models do not present a design surface, a navigable information space, or an interactive user experience of the kind the field has spent decades learning to work with and on. Conversation, though arguably a valid experience in its own right — navigation and negotiation of functions, tasks, and actions, with a qualitative stretch of time and cognitive investment of attention and interest — is not easily diagrammed or wireframed. What formalization has been applied to conversation as a form of action and interaction has largely been handled by NLP rather than by UX.
And yet the success or failure of generative AI commercially, particularly in consumer and customer-facing applications, will likely come down to user experience. How well do users understand the AI, what do they expect it to do, how well does it perform, and does it satisfy their needs easily? These and many more AI-specific user-experience questions will shape the reception of AI more generally. What is needed, in other words, is for design theory to catch up to a technology that does not look like any of the artifacts or platforms design theory was built around.
A concern that comes up in almost every conversation with designers about AI, and one worth engaging with directly, is the fear that AI will take design jobs. The truth is closer to the opposite. AI needs design, and the design community has not yet meaningfully shown up at the table where AI is being built. What designers bring to this work is not, primarily, visual polish, though we are good at that. It is the user's perspective: the habit of thinking about how people actually experience a product, where friction shows up, how expectations collide with what the system can do, and why the happy path so rarely describes real use.
It is worth being careful here, because the model labs are in fact deeply focused on how AI behaves. Sycophancy, hallucination, misalignment, deception, the coherence breakdowns some are beginning to call AI psychosis — these get serious attention in alignment research, safety evaluation, and post-training. What is less developed, across the field, is the habit of treating model flaws and failure modes as user-experience issues as well as engineering ones. For every technical failure there tends to be a corresponding experience on the user's side, and that correspondence deserves its own methods, heuristics, and design vocabulary. The gap is not that labs are indifferent to users. It is that user-experience designers have not yet built out the toolkit for approaching what labs produce as a design problem alongside the engineering one.
Something of this view carries over from earlier work. What I called social interaction design — the years spent on likes, follows, shares, comments, feeds, and threads as the features through which people interacted with each other online — was never about the interface itself so much as about the experience it mediated. The interface was the venue; the interaction between people was the point. The lens carries over to AI; the object of design has shifted.
AI shifts that frame, because it is a different kind of mediation. It is not social in the conventional sense, at least not yet — we do not share our chats with each other, AI agents are not really conversation partners on social media, and the interaction is between a user and a system rather than between users. But the habit that carries over is useful: design the experience being mediated, not merely the interface that hosts it. The chat window is the venue; the emerging interaction between user and generating system is the point. That framing will become more interesting as AI agents begin to show up as participants on social platforms, and as models are increasingly designed as agents, actors, subjects — quasi-persons with roles to play, skills, and workflows, more coworker than tool.
Designers ought, therefore, to be invested in framing our own perspectives on generative AI. We should have heuristics ready for different kinds and uses of it. We should have customer journeys, personas, and service-design blueprints that reflect the unique character of AI interactions rather than treating them as another species of software. Our very design theories need updating to reflect the intrusion of AI into the digital domain, where generative models are disrupting long-standing applications and services from search to customer service chats, and from image and video to music creation.
The core concepts of human-centered design — utility, use, need, requirement, satisfaction, ease-of-use — all bear reconsidering in an AI-era setting. It is possible that AI will challenge us to treat it as human-adjacent, or even post-human. When intelligence is post-subjective, non-human but human-informed and human-steered, what concepts will we have for design and knowledge, design and trust, design and relationship? These are not idle questions. They get answered, one way or another, by the choices engineers and founders are making about how AI behaves, and they would be better answered with designers in the room.
While design theory at the macro level needs to accommodate AI, conversational AI in particular calls for new design concepts. One useful move is to take seriously the idea that conversational AI should mirror, reflect, and make use of learned human linguistic competencies as much as possible, rather than require users to speak to machines in a machine-shaped way. The challenge is anything but straightforward: humans often do not communicate clearly, and language models do not understand their own use of language. And yet language models perform adequately in many situations, and can, with careful design, sustain long and reasonably successful interactions with users.
A communication-centric approach to the design of language models takes the view that AI should at least try to approximate the two-sided, intersubjective nature of human communication. Much current research into reasoning in language models focuses on the model's own reasoning and its reflections on that reasoning — a monological view. In human communication, we reflect not only on our own reasons but on those of the Other as well. Sociologists have called this double contingency — captured in the image of “I know that you know that I know.” Language models have been pre-trained to assume a common linguistic ground with users, but much research and design practice still takes a monological rather than a dialogical view of user interaction. Design theory could be useful here if it can explicate the ways in which user experience might be better reflected in language-model customization and behavior. Understanding, empathy, emotion, common ground, interest and interestingness, relationship and trust, even co-presence — all of which belong to human social interaction — can be of use in shaping conversational AI design.
One of the theoretical moves I have been advocating for is that the goal of LLM-based interaction is not search, not attention, and not engagement, but something closer to interestingness. Search is about finding what one already knows one wants. Attention and engagement are what platforms optimize for, often at the user's expense. Interestingness is different in that it engages the mind, invites exploration, and helps someone discover and work through what they did not already know. LLMs are, arguably, the first tool we have built that can plausibly do this kind of work at scale. Whether a given product actually realizes the possibility depends on decisions about shared context, persona, grounding, and continuity — which is to say, on decisions that belong to design. I have been working this out in more depth in my Conversation Is the Medium project.
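To make that concrete, here is a minimal sketch, purely hypothetical and not any real product's API, of what it might look like to treat those four decisions as explicit, reviewable design parameters rather than incidental engineering details. Every name in it is invented for illustration.

```python
# Hypothetical sketch: shared context, persona, grounding, and continuity
# expressed as first-class design decisions for a conversational AI product.
# All names here are illustrative inventions, not a real library or API.

from dataclasses import dataclass, field


@dataclass
class ConversationDesign:
    """Design-owned decisions that shape whether an LLM interaction
    tends toward interestingness rather than mere engagement."""
    persona: str                                              # voice and role the model sustains
    shared_context: list[str] = field(default_factory=list)   # what both sides are assumed to know
    grounding_sources: list[str] = field(default_factory=list)  # what the model may draw on or cite
    continuity_turns: int = 0                                 # how much prior conversation carries forward
                                                              # (enforced by the application layer, not the prompt)

    def to_system_prompt(self) -> str:
        """Render these decisions as a system prompt: one plausible
        place where design intent meets model behavior."""
        lines = [f"You are {self.persona}."]
        if self.shared_context:
            lines.append("Assume the user already knows: " + "; ".join(self.shared_context) + ".")
        if self.grounding_sources:
            lines.append("Ground your answers in: " + "; ".join(self.grounding_sources) + ".")
        lines.append("Favor responses that open the topic up rather than close it down.")
        return "\n".join(lines)


# Example: a design team's choices, made explicit and reviewable.
design = ConversationDesign(
    persona="a thoughtful research companion",
    shared_context=["the user is a designer new to LLMs"],
    grounding_sources=["the product's own documentation"],
    continuity_turns=10,
)
print(design.to_system_prompt())
```

The point is not the code but the ownership: once these choices are named as parameters, they become something a design team can critique, test, and iterate on, rather than defaults inherited silently from engineering.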
Much of what AI products lack right now is what the design field has spent decades developing: an interpretive capability, a habit of holding multiple user perspectives in mind at once, and a willingness to sit with ambiguity long enough to find the shape of a genuine need rather than reaching for the first solution. Designers also bring a sense for the experience under the hood of an interaction — the layered nature of conversation, the emotional texture of tone and voice, the subtle cues that shape whether a user trusts what they are hearing. All of this is rich material for AI design, even if the surface we work on is no longer visual.
The invitation to the design field, then, is to reclaim this territory. Design theory has always evolved when new technologies have unsettled its assumptions — from print to interaction, from interaction to service, from service to social. The current unsettling, from social to linguistic-generative, calls for the same kind of evolution. Designers need not fear AI so much as recognize that AI, as it is currently being built, desperately needs what design brings.
If any of this resonates with a project you are working on, feel free to get in touch — I am always interested in talking through specific design problems in AI.