Recently, I spent a week preparing for and moderating concept tests for one of my largest clients. Concept testing—which is different from usability testing—puts two different website design treatments in front of representative users to see if one concept is perceived to be more usable or engaging than the other. Once a design is chosen and client-approved, the build phase can begin.
Users’ opinions about graphic design are obviously very subjective. Just as people have different tastes in music, food and home décor, so too do their preferences differ online. When asked to compare two designs side by side, participants may have a difficult time articulating what it is that they like about a design. Even if they can find the words, people’s rationale for their tastes is likely not related to the business or brand goals at the core of a design concept.
The fact that someone dislikes bright colors doesn’t help us as researchers to assess the emotional impact of an interface or how it aligns with intended brand attributes. Therefore, we must create tests that structure participant feedback and remove some of the subjectivity that accompanies varying tastes.
First, begin with participant recruitment. As in other user research, it’s important to make sure you’re testing with a representative user population. If possible, recruit from a customer list, or develop a screener to filter out participants who are wildly dissimilar from your typical user base.
Next, draft a discussion guide. Your discussion guide sets the objectives for your test and then outlines the activities and questions that the moderator will ask the participant during the course of the research. Having a script will ensure that each research session stays on track and that each participant experiences roughly the same stimulus.
I generally like to break the discussion guide down into components so that each piece feels like a separate activity or game, keeping participants engaged and cooperative throughout the session. This is particularly important in concept testing where all questions are repeated for both concepts. (To control for bias, don’t forget to alternate which concept is shown first to each participant.)
Background and Rapport
With the tests I ran, I started by asking a series of background questions to help the participant feel more comfortable: What do you do for work? For fun? How much time do you spend on the computer each week? What sites do you visit most regularly? It’s important to be friendly and to establish a rapport right from the outset. You’ll find your users to be more open and talkative if you’re not too stiff or business-like.
As part of this warm-up, I also ask participants to write on an index card a list of the information they’d likely be trying to find—or the tasks they’d be trying to complete—when coming to this type of website. We then set the list to the side for later.
Next, we move forward with a 15-second test where I show the participant a mocked-up homepage design (on screen) before turning off the monitor. Then I ask the participant what he remembers about the page, what stood out and what types of tasks could be accomplished given the page’s functionality. This exercise helps confirm whether or not users are seeing and internalizing your page’s main messages and top calls to action. Because participants have limited interaction with the design, they are providing you with a glimpse into their first impressions.
After turning the monitor back on, the participant and I talk further about the homepage design—things he particularly likes or dislikes, content that doesn’t make sense or seems misplaced, font sizes and readability, and modules that should be added or removed. By asking if anything should be added or removed, you’re also asking about the amount of content on the page. If it feels too busy, participants will let you know.
From this point, I recommend you move away from the screen and give users a printout of the design they’ve been looking at. Ask them to circle everything on the page that they believe to be clickable. By doing so, you’re gauging whether or not users appropriately understand the conventions established in your design concept. You’re also opening the door to start talking about navigation and information architecture.
During this portion of the conversation, I ask the participant to talk a bit about the links in the main navigation area—where would he anticipate each link would go? What type of content would he expect to see on that next page? Does he find the button labels clear and meaningful? This is also where the user’s index card list comes back into play.
After discussing each of the main navigation items, I ask the user to look back at the list of information he said he’d be interested in on a site like this one. How would he find that information given this website? Where would he click? This is a solid way to ensure your design is user-centered, and that the content users most commonly seek is appropriately placed.
Finally, in this portion of the concept test I give the participant a set of index cards, each one printed with one of the concept’s main navigation labels, and ask him to order the cards from most important to least important. As we refine our designs, we may reorder the navigation items to better align with users’ stated priorities.
Before repeating the entire sequence with the second design concept, I ask the participant to choose five or six adjectives (from a list of 50 descriptors, both positive and negative) that best describe his feelings about the design he’s been working with. By analyzing the resulting data across participants, researchers can align certain adjectives with each visual design option and assess how each option aligns with a business’s intended emotional response and brand attributes.
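Tallying those adjective selections across participants is straightforward to automate. As a rough sketch (the concept names, adjectives, and counts below are invented for illustration, not data from the study), each participant contributes a short list of chosen adjectives per concept, and a simple frequency count surfaces the words most associated with each design:

```python
from collections import Counter

# Hypothetical adjective selections: one list of chosen adjectives
# per participant, per design concept. All names and values here
# are illustrative placeholders, not real study data.
selections = {
    "Concept A": [
        ["modern", "clean", "trustworthy"],
        ["clean", "simple", "modern"],
        ["cluttered", "modern", "clean"],
    ],
    "Concept B": [
        ["dated", "busy", "professional"],
        ["busy", "dated", "formal"],
        ["professional", "busy", "dated"],
    ],
}

def adjective_frequencies(data):
    """Count how often each adjective was chosen for each concept."""
    return {
        concept: Counter(adj for participant in lists for adj in participant)
        for concept, lists in data.items()
    }

freqs = adjective_frequencies(selections)
for concept, counter in freqs.items():
    print(concept, counter.most_common(3))
```

Comparing each concept’s most frequent adjectives against the brand attributes the client is targeting gives a quick, semi-quantitative read on which design evokes the intended response.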
For clients, choosing between design directions can sometimes be difficult. Feedback from a concept test can be helpful input in that decision-making process. In the case of my current client, not only will this study help in selecting a design direction, but the insights gained will also challenge our assumptions as designers and inform revisions of our chosen design concept.
Cristin Siegel is the director of user experience and research at Chicago-based interactive agency Designkitchen. Reach her at firstname.lastname@example.org.