Service design research FAQ

When you work on a project with Harmonic, it is very likely you will have the opportunity to participate in design research. This is something many people have never done before, and we find that people are often curious, and sometimes anxious, about what to expect.

The following is a list of questions I’ve been asked more than a few times, and the answers I’ve given that seem to help people new to service design research feel informed and prepared, expressed in my own voice. Some of my fellow Harmonicas have expressed concern over how some of my answers are worded, so please know that anything here that strikes you as overstated or impolitic has likely been left in despite the advice of my colleagues.

Why are we doing so much research?

Short answer: Understanding the people involved in the service is, by far, the most important thing a team can do to ensure the success of a service.

Services are provided by people, for people. If we understand the people who receive the service, provide the service, and support the service behind the scenes (what we call, in general terms, the “actors” in a service), our chances of designing a service people value are far greater. Our goal is to design services that people find useful, convenient and emotionally satisfying — the kinds of experiences that generate brand loyalty.

But people are surprising. Often what we think we know about them (even what we think we know about people in general) is wrong, and in ways that obscure real opportunities to improve their lives. Having a unique understanding of people gives a team access to new perspectives and new ways of thinking about serving them, and can drive innovation and differentiation that is not only different but remarkable and relevant. (We call this “precision inspiration”.)

What kinds of research do you do?

To put it in the simplest terms, we think of our research in three categories. Foundational research helps us understand the actors who receive the service and those who provide and support it: their needs, attitudes, behaviors, contexts and worldviews. Generative research helps us discover opportunities and conceive new ideas for improving service experiences. Evaluative research helps us see which ideas are most valuable to actors and how they can be made even more valuable. All these methods are qualitative, which means they are conducted with small numbers of people with the goal of gaining deep insight into not only what people do and how they do it, but why they think, feel and behave the way they do. Most of the research we do contains elements of all three, but toward the beginning of most projects the emphasis is on foundational and generative research, so most of what follows will focus on those.

Why do we need to do research with our own front-line employees?

Often when a company’s services fall short, it has little to do with the competence or attitudes of the employees who deliver the service. It has much more to do with how employees are evaluated or compensated, or with policies that limit what they can do for customers or require them to do things customers don’t want. Or employees lack the information needed to help. Or the systems they use get in their way. Or employees are starting from behind, trying to salvage an already damaged experience, and (in extreme cases) numbed from constant exposure to customer anger. In other words, often a bad experience is not the employees’ fault, or anyone’s fault. The service has just organically evolved into something that isn’t working out for everyone.

And more often than not, no one person understands every factor that is contributing to the problem. The people on the front line who know the problems well are not always in a position to change the situation. And the people with the power to make changes are often far from the sites where the service is delivered and are operating with incomplete and sometimes incorrect information.

We do research with employees to help understand the whole service delivery system, so we can find ways to make everyone’s lives easier. And we try to talk with them in ways that encourage them to tell us the full, unfiltered truth as they experience it, which is why we favor individual sessions, or sessions with small teams who collaborate closely, especially when we are talking with front-line employees. We can only get the full truth if they are relaxed enough to speak freely and naturally.

How do we decide who we are going to talk with?

In order to ensure we are designing a service that works for everyone, we talk with a representative cross-section of people who use or might use the service we are designing. We are interested in both what they have in common and any important differences that might need to be considered.

Using what we learn from members of the client team, the stakeholders we interview and the existing research we study, we list all the factors that might change the needs, attitudes or use contexts of the people involved in the service. These are developed into criteria we will use to determine the kinds of people we need to talk to.

Then we develop quotas for each of the criteria. We try to get at least three participants who meet each criterion we identify, so we can get a sense of how that factor might affect the service. We try to get three because this is the minimum number at which we can tell the difference between idiosyncratic answers and typical ones.

These quotas are used to recruit our research participants.

Why do we call them “participants”?

We call the people we invite to our sessions “participants” because they play such an active role in the session. “Participant” might even be an understatement. Effectively, we are asking them to play the role of teacher, and to help us understand who they are, what their life is like, and what they need and want in a service. We do design activities meant to help them do this teaching, but these are tools to aid the collaborative process of generating understanding.

Aren’t your sample sizes awfully small?

Typically, we will talk with twelve to twenty-four customers and with the same number of employees. To a person accustomed to marketing research, yes, this looks like a puny sample. The small samples make more sense, though, when you realize that what we are after is not only answers to questions — questions we think are the right ones to ask — but an understanding of how people see the world. This can sometimes help us see new non-obvious questions or even help us see that we have been asking the wrong questions!

The kind of research we do, especially early in the process, is designed to help people teach us about their lives, their needs, their way of seeing the world, and the significance of the service and its context to what they care about. This is very different from surveying them or asking them a list of questions. Imagine if you were learning a new subject at school, but instead of allowing the teacher to explain the subject to you, you tried to survey the teacher to get the factual data you think you need to pass your tests. To let someone teach, it is important to allow them to present information in their own way and to convey the material on its own terms. (I like to believe we use the word “subject” to refer both to academic subjects and human subjects because we come to understand them by allowing ourselves to be taught.)

Instead of asking our participants a long list of questions, we invite people to tell us stories, to help us understand how they see the world, to help them communicate what is most relevant to them, and in general to help us understand what questions we should ask them to learn what we need to know to design the best service for them.

By the time we are done with the first round of research, we have the information needed to design much better quantitative research. If you start with quantitative research, you’ll have a bigger sample, but you risk having statistically significant answers to insignificant, irrelevant questions.

How do we decide what we will do in the sessions?

We start our process by identifying the research objective: what does the research need to do in order to support the design? This is often mostly a re-statement of the project goal. We need to inform the design of the service by understanding all the people involved in it, their lives and how they might engage the service.

Then we identify areas of inquiry: in order to achieve the research objective, what will we need to learn about? Areas of inquiry are not questions — they are topics the team will ask to be taught about in various ways.

Once the areas of inquiry are defined and agreed upon, the team designs the research approach. It will include interview questions and interactive exercises designed to give research participants opportunities to teach us about the areas of inquiry.

The team then writes up a research protocol (sometimes called a “research guide”) and designs the materials used in the session.

It is important to note that the protocol is not a script. Some parts of it might be read like a script at some points, but the facilitator will use the protocol loosely to pace the session and to ensure the areas of inquiry are covered. But it is only a guide, and facilitators will often deviate from it. The goal of a session is to get the participant to teach, and that means keeping it conversational and giving the participant enough space to tell us things we might not have anticipated. Our sessions are designed for uncovering the unexpected — because this is where the biggest opportunities come to light.

Who from my organization should attend sessions?

Ideally, everyone would attend. Realistically, at least one person from any department or role that will be involved in shaping or delivering the service based on the research should be involved in the research.

We recommend this for two reasons. First, people in different roles notice different things in the sessions, and interpret them in different ways. Having a wide range of disciplinary lenses present in the sessions enriches the team’s understanding of what it hears.

Second, service design touches many roles throughout the organization. We believe people have a right to help shape their own futures. But pragmatically, it is a great way to build alignment, credibility, ownership and enthusiasm for initiatives when respected members of the teams who will contribute to the service are directly involved in shaping it, and can explain why the service was designed the way it is.

What do I need to do to prepare for a session?

Generally, very little preparation is needed. If you are observing a session, the team will give you everything you need to know beforehand. Often the team will schedule an orientation session to help everyone understand the purpose of the research and the flow of the session. Anyone who is playing an active support role will get additional training. Prior to each session the facilitator will remind everyone of what they need to know.

Generally, for in-person field research, everyone involved in a session will be given all the equipment they need. Whether the session is remote or in person, you’ll want to have a notebook and something to write with; we ask that you not use your laptop for note-taking during in-person sessions.

What are the sessions like?

Typically sessions last ninety minutes, and are followed by a debrief that lasts between thirty minutes and an hour. Please plan to attend the debrief for your session, because this is a very important part of our process.

Normally, the session starts with an overview. The facilitator thanks the participant and explains the purpose of the session. The people attending the session are quickly introduced. Then the facilitator gives an overview of the session and sets the participant at ease by telling them that we are here to learn from them, that there are no wrong answers, that it is okay if they don’t remember everything, and most of all to please tell us the full, unfiltered truth about their experiences without worrying about hurting anyone’s feelings. We will sometimes joke with them and generally do whatever it takes to make them feel comfortable and ready to converse naturally with us. We answer whatever questions they have, make sure they have filled out the required release forms and understand the compensation, and confirm they give us permission to record.

Then we usually start with an interview, leading with some easy warm-up questions. We learn about who they are. We will sometimes ask their opinion on something tangential and fun to answer: where do they like to eat, or what is their current favorite service? Then we get background on how they use the service, what they value about it, how it compares to other services, and so on. We also sometimes touch on their current brand perceptions.

Then we shift into a more interactive mode, and do some collaboration. We will almost always ask them to tell us stories that help us understand their needs, while we visually capture the story, step by step. We are interested in hearing about their whole experiences, not only the part where they might use the service. And we ask them to tell us not only what they did, but how they felt, what they were expecting, what they were thinking about, and what they wanted to accomplish. We also might visualize their service ecosystem, inventorying the people, places, tools, and related services that make up their lives. Frequently the team will design other interactive exercises to help us get at needs, behaviors, attitudes and preferences.

Another activity we often do is show prototypes to participants and ask them to respond. By prototype, we mean any kind of artifact that allows the participant to imagine what it would be like to engage the service. It might be storyboards or screens, or we might ask them to act out service scenarios with us.

If all goes well, the facilitator will build enough rapport that the participant loosens up and feels free to express their feelings in their natural voice. We like to get these moments on video so we can show them to people who were not in the session. We want to help everyone in an organization relate to the humanity of their customers and the people who serve them, and to see the human impacts of decisions.

How do these sessions differ from pre-Covid times?

Online sessions are very similar to in-person ones. The big difference is how the activities are done. In person, we hand our participants the pen and ask them to interact with the materials. This is more difficult remotely, so we often use virtual surfaces and electronic sticky notes, and do some of the interactions for the participants under their direction.

The dynamics are also a little different, especially for customers. When we do in-person sessions we often visit them in their homes or offices — in their spaces. We work hard to make them comfortable, but sometimes it takes a few minutes for them to get used to us being there. It seems to be a little easier to adjust to a video conference. The downside is that the connection made in person, and the insight you get from being in their space, doesn’t happen with the same intensity in a remote session.

The biggest positive tradeoffs are probably geographic flexibility and the simplification of logistics. With remote sessions it becomes affordable to recruit participants from many diverse regions instead of limiting sessions to a small set of locations. In-person sessions require a lot of coordination of people traveling to the market where the research is being conducted and ensuring they have transportation to and from the session location. Remote sessions remove most of this complexity.

Once Covid is overcome and we return to relative normalcy, some of the methods developed to cope with the pandemic will stay in our toolbox and continue to be used to fit client needs and make optimal tradeoffs.

What am I supposed to do in a session?

There are multiple roles in a session. Generally, one person facilitates and one or more people assist. When we do sessions in-person we normally limit the number of people in the session to three, not counting the participant. Participants are not used to research, and they can get stage-fright if too many people are staring at them. With remote sessions it is possible, though not desirable, to have more attendees.

If you are assisting with the research, you will get special instructions and training from the team on how to use the research tools. With remote sessions we sometimes ask for help operating our virtual whiteboards. With in-person sessions we sometimes need assistance with organizing materials or operating cameras. It is never terribly complicated, and you will never be put in any situations for which you were not prepared.

The primary thing to keep in mind is that we are trying to create a conversational dynamic. This requires some conditions that we do our best to set up and maintain. What we do not want to happen is for the session to feel or look like a meeting where multiple people are talking together, and this tends to be the default unless steps are taken to prevent it. When the session is in-person, we usually try to arrange ourselves so the facilitator and participant are facing each other and others present sit to the side out of the direct line of sight. With remote sessions, we often ask everyone to turn off their cameras and microphones except the facilitator, until it is time to open the session up for questions.

Taking notes is very important. Write down anything that speaks to the areas of inquiry, strikes you as relevant to how the service should be designed, or surprises you. And if the participant offers a great quote, capture as much of it as you can, or at least jot down some of the key words and roughly when it was said so we can find it in the transcript later.

Your notes will be useful during the debrief.

What should I know about asking questions?

During the session, try to hold your questions or comments until the facilitator opens the floor. It can be helpful to write questions down as they occur to you.

Sometimes the facilitator has a specific way to ask the question in mind. And sometimes the facilitator will leave more silence after a question than is comfortable. Trust the facilitator, and resist the urge to jump in and clarify questions, or to try to help the participant answer or break awkward silences. It’s hard to do, but it is important.

When the floor is opened for questions, try to ask open-ended questions. The trick is to start the question the right way. Starting with “Can you talk to us about…” or “Please tell us about when…” almost always finishes well. Questions that start with “Do you…” or “Would you…” are risky. If you notice your question has devolved into multiple choice and you find yourself stringing together a bunch of “or” options, your question is on the wrong track.

The good news is you can always interrupt yourself and say “Actually, let me try asking this question another way.”

What am I supposed to do in a debrief?

After the session ends, the team will reconvene for a debrief. This is one of the most important activities we do during field research. The purpose is to capture what was learned in the session while it is fresh in everyone’s mind.

The debrief facilitator interviews the team on each area of inquiry, documenting what was learned in a format that makes it easy to compare findings between different participants.

Often there are disagreements or differing interpretations of what was said, and this is good. The discussions around differing understandings are central to the process and help the extended team align on what has been learned.

One thing to keep in mind: a debrief is not meant to be an exhaustive compilation of everyone’s notes in a single document. The debrief is meant to be a summary of what the group learned. Someone not in the session should be able to pick up a debrief and learn who was interviewed and what was learned from that participant about each of the areas of inquiry. The debriefs are a powerful tool the team will use during analysis.

How do we make sense of what we hear?

When the field research is done, the team analyzes the debrief forms and the outputs from the activities, supported by video footage and/or transcripts of the session.

The analysis is done partly in collaboration with the people who helped do the research. Sometimes we will conduct an internal team interview to outline the high level findings of the research. We then use the debriefs and transcripts to guide discussions and exercises to find patterns and themes in what we learned.

We will also compare the stories we heard and look for commonalities and variants, which are documented in an experience map: a visual record of the experience customers and employees have in receiving and delivering the service.

When possible, somewhere toward the middle of research analysis, the team will invite employees of the organization into the analysis process. We call this Research Open Studio. We show people our raw research materials, including the stories we gathered and selected footage from the sessions. We share the findings in their current rough state, along with the questions we are asking ourselves, and bring them into our conversation so they can share the thought process and the excitement of discovery.

When do we get a readout?

Often within a few days of completing the research the team will send out an informal top-line summary of findings, and sometimes will include links to the debriefs. But the full presentation of what the team has learned usually comes at the beginning of the next workshop, when the research is digested and interpreted to identify opportunities to improve or even reinvent the service and to generate new ideas.

What do we do with what we learn?

The research outputs are designed for multiple purposes. First, they are designed to communicate what was learned as clearly and compellingly as possible, and to help the organization align around a single version of the truth created by the extended team.

The second purpose of the research outputs is to serve as workshop tools, to help with opportunity identification and prioritization, idea generation, concept assessment and concept prioritization. The experience maps, the themes we identify and the other artifacts generated during research analysis become ideation canvases workshop participants use to think about experiences from a customer’s or front-liner’s perspective.

Am I going to love research and want to be involved in it as much as possible in the future?

