Designing for AI Collaboration: What I Learned About Designing a Multi-Agent User Experience


This article is kindly contributed by UntilNow, a StudioSpace agency.

When a startup approached our team about designing a multi-agent experience, I knew we were in for an interesting challenge. The goal? Create an interface where humans could work effectively with a team of AI agents, each handling different aspects of complex business tasks. Here’s what I learned about designing for this emerging frontier of human-AI interaction.

Human-AI cues
I’ll start with my own experience. I use Claude regularly to help think through problems and generate material I can use in my everyday work. Working with Claude is different from working with the person sitting next to me or the person on the other side of a video call: Claude won’t give you the non-verbal cues a person gives to signal what they’re thinking or feeling. With AI, those natural signals are missing, so when something goes unexpectedly wrong you’re left trying to diagnose where the communication broke down or what was misunderstood. This observation led to our first major design decision.

Objective transparency
I realised that what I wanted in those moments was to see what the agent thought it was supposed to achieve with its response. I wanted transparency into the agent’s thinking. So in my design I included an objective indicator that shows users exactly what the AI is thinking and doing at each step. Imagine checking out on an e-commerce site, but instead of seeing “Shipping → Payment → Confirmation,” you see the AI’s current understanding, planned steps, and immediate objectives. This gives users constant clarity about where they are in the collaboration process and whether they’re aligned.
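
To make that concrete, here is a minimal sketch of the kind of state an objective indicator could surface. The shape and names (AgentObjective, currentUnderstanding, and so on) are assumptions made for this sketch, not the actual implementation:

```typescript
// Hypothetical shape of the data an objective indicator might render.
// Field names are assumptions for this sketch, not the real system.
interface AgentObjective {
  currentUnderstanding: string; // how the agent has interpreted the user's goal
  plannedSteps: string[];       // the steps it intends to take, in order
  immediateObjective: string;   // what the current response is trying to achieve
}

// Render the indicator as plain text, so the user can see where the
// agent believes it is and check whether that matches their intent.
function renderObjectiveIndicator(o: AgentObjective): string {
  return [
    `Understanding: ${o.currentUnderstanding}`,
    `Right now: ${o.immediateObjective}`,
    `Plan: ${o.plannedSteps.join(" → ")}`,
  ].join("\n");
}
```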

Multi-agent Kanban
The system we designed works like this: A lead AI agent, after understanding the user’s goals, creates an implementation plan broken down into discrete tickets (similar to Jira). These tickets are then assigned to specialised sub-agents, who develop their own execution strategies. When sub-agents hit roadblocks, they flag their tickets for user attention, enabling guidance through focused comments.
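
As a rough illustration of that workflow, here is one way the ticket model could look in code. The types, field names, and helper below are assumptions made for the sketch rather than the product’s real data model:

```typescript
// Illustrative ticket model for the lead-agent / sub-agent workflow.
// All names here are hypothetical.
type TicketStatus = "todo" | "in_progress" | "blocked" | "done";

interface Ticket {
  id: string;
  title: string;
  assignedAgent: string;      // the specialised sub-agent working this ticket
  status: TicketStatus;
  executionStrategy?: string; // the sub-agent's own plan for the ticket
  flaggedForUser: boolean;    // true when the sub-agent needs guidance
  comments: { author: "user" | "agent"; text: string }[];
}

// When a sub-agent hits a roadblock it flags the ticket and leaves a
// focused question, so the user can respond with a comment rather than
// taking over the whole task.
function flagForGuidance(ticket: Ticket, question: string): Ticket {
  return {
    ...ticket,
    status: "blocked",
    flaggedForUser: true,
    comments: [...ticket.comments, { author: "agent", text: question }],
  };
}
```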

What I learned
Here’s what I wish I’d known when I started:
Design for uncertainty. Since we were working with an in-development AI framework, we had to think in terms of possible behaviours rather than guaranteed ones. This taught me to create flexible interfaces that could accommodate various AI decision paths.
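
One way to design for that uncertainty, sketched below under assumed names, is to model the agent behaviours the interface understands as an explicit set and always keep a fallback for anything the framework sends that the design didn’t anticipate:

```typescript
// Illustrative sketch: the actions the UI knows how to present, plus a
// catch-all case. All names here are hypothetical.
type AgentAction =
  | { kind: "ask_user"; question: string }
  | { kind: "update_plan"; steps: string[] }
  | { kind: "produce_output"; content: string }
  | { kind: "unknown"; raw: unknown }; // anything we didn't plan for

function describeAction(action: AgentAction): string {
  switch (action.kind) {
    case "ask_user":
      return `Agent needs input: ${action.question}`;
    case "update_plan":
      return `Plan updated (${action.steps.length} steps)`;
    case "produce_output":
      return "Agent produced a result";
    default:
      return "Agent did something the interface doesn't recognise yet";
  }
}
```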

Balance control and autonomy. Users need to feel in control while still allowing AI agents to leverage their capabilities fully. Our ticket-based system provides this balance: users can monitor and guide without micromanaging.

Consider different interaction modes. We initially struggled with why the lead agent would be conversational while the sub-agents weren’t. Although this came down to technical constraints, it opened up interesting questions about optimal interaction patterns for different types of AI tasks.

Looking ahead, I’m excited about the possibilities this project has unveiled. As more companies explore multi-agent AI systems, the need for thoughtful UX design in this space will only grow. We’re continuing to refine our approach with our client, learning new lessons about human-AI collaboration every day.

  • Article by Berty Bhuruth, Principal Product Designer @ UntilNow