Product Design & Strategy

Conversational Assistant


UX Design, Design Strategy


Overview

Over the course of three months, we were tasked with exploring potential use cases, touchpoints, and channels for a branded conversational assistant (think Google Assistant or Alexa). The goal was to determine where an assistant experience could improve the customer experience and to begin defining best practices for assistant experiences. Due to the confidential nature of the project, I can't share specifics, but I can talk about the process.

Role

I co-led a team of 5 Product Designers on the design and prototyping track for this project. I was responsible for project planning, defining the high- and low-level tasks, defining the areas of focus, guiding and providing feedback to the design team, building exec-ready presentations, and interfacing with client stakeholders on a regular basis. I also worked closely with my Experience Strategy and Content Strategy colleagues on the strategy and content tracks of work to ensure we were building on each other’s work.

Process

Because the goal of this project was to explore what worked and didn’t work in a space no one has truly figured out yet, we took a learning-by-making approach rather than relying on traditional user research methods. We also made sure a few members of the design team had experience with our client’s existing conversational assistant and chatbot, so we had domain expertise from the beginning.

Batch 0

Our first batch of prototypes was selected based on our client’s hypotheses about promising areas to explore and on thinking through which scenarios would help answer some of our big questions. We knew this batch would get many things wrong, so we kept the fidelity low and moved fast. Within a few weeks we’d learned an incredible amount and had a much clearer picture of the questions our following batches needed to answer.

Batch 1 and 2

These were the “real” batches of prototypes: we took them to higher fidelities and put them through rounds of user testing to get real customer feedback. We selected these prototypes based on what we’d learned from Batch 0 and on findings coming in from the strategy track, where the team was diving deep into market research and academic literature and talking to a range of stakeholders across the client’s business.

User testing

After each batch was complete, we worked with our research partners to design and run user testing studies to get feedback on our concepts. In total we ran three studies, which let us gather feedback as quickly as possible and synthesize the findings for our client.

Outcome

By the end of the project we had built 9 prototypes spanning 11 touchpoints and 10 products. The user testing and the process of learning by making generated valuable insights into where customers expected and preferred conversational assistant experiences to appear, and helped inform where the business would invest resources to build production-ready experiences.