Multi-modal AI Experiences
UX Design, Design Strategy
Overview
Over the course of three months, we were tasked with exploring potential use cases, touchpoints, channels, and interaction models for a branded AI assistant. The goal was to determine where an assistant experience could meaningfully improve the customer experience and to define best practices for AI-powered experiences. Due to the confidential nature of the project, I can't share specifics, but I can talk about the process.
Role
I co-led a team of five Product Designers on the design and prototyping track for this project. I was responsible for project planning, defining high- and low-level tasks, defining areas of focus, guiding and giving feedback to the design team, building exec-ready presentations, and interfacing with client stakeholders on a regular basis. I also worked closely with my Experience Strategy and Content Strategy colleagues on the strategy and content tracks to ensure we were building on each other's work.
Process
Multi-modal AI experiences are new enough that use cases and interaction patterns are still being established, and the industry as a whole is still figuring out what "good" looks like. Because of this, I took a learning-by-making approach instead of depending on user research to generate ideas.
This meant rapidly generating prototypes and pressure-testing them with SMEs on a weekly basis before putting them in front of users. This approach allowed us to develop dozens of prototypes and arrive at our strongest directions in a short period of time.
Batch 0
Our first batch of prototypes was selected based on our client's hypotheses about promising areas to explore and on the scenarios we believed would help answer our biggest questions. We knew this batch would get many things wrong, so we kept the fidelity low and moved fast. Within a few weeks we'd learned an incredible amount and had a much clearer picture of the important questions our following batches needed to answer.
Batch 1 and 2
These were the "real" batches, where we took the prototypes to higher fidelity and put them through a round of user testing to get real customer feedback. We selected them based on what we'd learned from Batch 0 and on findings coming in from the strategy track, where colleagues were diving deep into market research and academic literature and talking to a range of stakeholders across the client's business.
User testing
After each batch was complete, we worked with our research partners to design and run user testing studies to gather feedback on our concepts. In total we ran three studies, getting feedback as quickly as possible so we could synthesize the findings for our clients.
Outcome
By the end of the project we had built 9 prototypes spanning 11 touchpoints and 10 products. The user testing and the learning-by-making process generated valuable insights into where customers expected and preferred conversational assistant experiences to appear, and helped inform where the business would invest resources to build production-ready experiences.