There has been a lot of interest in data visualization within the marketing research industry, and why shouldn’t there be? We know from our co-workers in the marketing department that visualizations help people retain more information. We see from publications that images are being used not just to complement the story but to tell the story on their own.
What does the adoption of data visualization mean for us as marketing researchers? If our deliverables do not have interesting images, does that mean our research results will not be properly digested and utilized? To answer these questions, I devised a piece of research that would ask B2B panelists to evaluate different deliverables.
The deliverable that I chose was a “topline” or executive summary. This was practical for testing as it was only one page in length with a limited number of data points. Results would also be most relevant to marketing researchers as this is a typical deliverable that, depending on who designs it, may or may not have some visual component.
The research question was straightforward. Should we as researchers include data visualization in our topline deliverables? And if the answer to that is “it depends,” then the question becomes, what does it depend on?
Who did we talk to?
We interviewed 1,813 business people. Forty-three percent were owners/partners in a business, 37 percent were middle management (directors/department heads), and 19 percent were senior management (CEO/VP/managing director).
What was tested?
Three deliverables were created. The first was a basic topline that contained a bullet point summary but no visuals. The second was an infographic. For the infographic deliverable, I gave the data to our graphic designer, who did a wonderful job giving the data a very visually appealing format. Finally, I added a third deliverable that included simple charts as the visual component.
The bullet point summary and the infographic both contained the same information and the exact same number of data points. The chart summary did include four extra data points, but this was only to complete a bar chart. For the most part, all three deliverables had the same information.
How was the test conducted?
In order to test these deliverable types, I asked members of our B2B panel to take part in a role-playing exercise. I wanted to use information that everyone would be able to relate to on some level and, for that reason, I chose human resources (HR) as the topic. I asked our B2B participants to pretend that they were an HR manager for a large firm who had asked one of their HR employees to conduct an employee satisfaction survey and create a topline summary to be sent to the company’s CEO. As the manager in charge of the project, it was our survey participants’ responsibility to review the employee’s topline and provide feedback before it went to the CEO.
Each survey participant was initially shown only one image and asked to evaluate it. Later in the survey, they were told that another employee had also created a topline from the same data and were asked to evaluate this topline as well. The images were rotated. Finally, they were asked to choose between the two toplines they had seen. This created six unique pathways through the survey (three deliverables, with each participant seeing two of them, in an order that was rotated across surveys).
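The pathway design described above can be sketched in a few lines: with three deliverables and each participant seeing an ordered pair of two, the pathways are simply the permutations of three items taken two at a time. (The labels below are mine, not the study’s.)

```python
from itertools import permutations

# Hypothetical labels for the three deliverables tested.
deliverables = ["bullet summary", "infographic", "chart summary"]

# Each participant saw two of the three, and the order was rotated,
# so a pathway is an ordered pair: (shown first, shown second).
pathways = list(permutations(deliverables, 2))

for first, second in pathways:
    print(f"shown first: {first}; shown second: {second}")

print(len(pathways))  # 6 unique pathways
```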
Our participants were then given a list of the primary goals of the research, the key questions it needed to answer:
- How satisfied are employees with working for the company?
- How satisfied are employees with their retirement plan?
- What do they like best about working at the company?
- Do employees feel they have enough training to do their job effectively?
- Do employees feel they receive enough recognition for the work that they do?
- Do employees feel they have a clear path to a promotion?
Did the data visualizations make a difference in the initial evaluation?
When we asked our survey participants if the primary goals of the research were accomplished based on the deliverable they saw, we found a similar level of agreement across all deliverable types. (Chart, next page.)
However, when we asked participants to rate the deliverable (the key measure that was actually asked first), the rating was somewhat higher among those who saw the infographic. (Chart, next page.)
It is worth pointing out that the ratings in general were lower than the proportion of the sample who agreed that the primary goals of the research were met. This is an indication that our B2B participants expect more from a deliverable than simply having the primary goals addressed.
Additionally, we asked people to rate the layout of the deliverable. As might be expected, the bullet point summary rated lowest (19 percent “excellent” versus 27 percent for the infographic and 24 percent for the chart summary).
Feedback on the deliverables.
Participants were given a list of adjectives and asked to select the ones they would use to describe the deliverables. The infographic was most likely to be described as “interesting” (42 percent) and “artistic” (24 percent) but least likely to be viewed as “simplistic” (29 percent). The open-ended responses suggest that simplicity is seen as a plus for this type of deliverable: some participants disliked the “busy-ness” of the infographic because it delayed them in finding the information they wanted.
The chart summary was seen as the “easiest to read” (60 percent versus 50 percent for the infographic, which was lowest).
All three topline deliverables were seen as “to the point” (49–53 percent).
In an open-ended question, participants were asked what they would do to make the deliverable better.
Thirty-one percent of respondents who received the infographic told us that it “looked good and needed no changes” compared to 27 percent who received the chart summary and 25 percent who received the bullet point summary. Thirteen percent of participants who received the bullet point summary specifically mentioned “adding more/better visuals” compared to only 7 percent for the infographic summary and 9 percent for the chart.
After seeing the second image, participants were asked to choose which of the two deliverables they preferred.
The table above shows that the infographic was chosen over the bullet point summary and that the chart summary was chosen over the bullet point summary, but that panelists were equally divided between the infographic and the chart summary.
Were there differences by management style?
Fifty-nine percent of our sample told us that they were the type of managers who “give their employees freedom to execute their tasks as they see fit.” These managers were more likely to select the infographic (55 percent) as a deliverable over the chart summary (45 percent).
However, senior management preferred the chart summary (55 percent) over the infographic (45 percent).
So which deliverable format should we use for our toplines?
If this were a concept test for the general consumer, it would be clear that both the infographic and the chart summary should be used over the bullet point summary. However, 20 percent or more of the sample selected the bullet point summary when given the choice. The important thing to remember is who the topline summary is being created for. After all, we may be preparing it for just one person. What is their personal preference?
In fact, this study shows us that a deliverable that meets the needs of all employers/clients is probably not possible. The open-ended feedback we received was contradictory. For example, some participants liked toplines that were simple and to the point, while others expected more detail than they had asked for. Some wanted more visuals along with a written summary, while others wanted just the facts without visual clutter.
This study reminds us that different clients will have different preferences, too, and that not all of our deliverables should be the same for each client.
The plot thickens...
One of the most interesting aspects of this piece of research came when I reviewed the choice data by pathway, which takes into account which deliverable was shown first.
When looking at the data this way, you can see that an image shown in the second position had a clear advantage. For instance, when the bullet summary was shown first, 11 percent chose it over the chart summary. But when the bullet summary was shown second, 31 percent chose it over the chart summary. Similarly, when the infographic was shown first, 38 percent chose it over the chart summary. But when the infographic was shown second, 66 percent chose it over the chart summary.
This is a clear difference, larger than other order biases I have noticed in the past, where the item shown first had a small advantage (one overcome through rotation).
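Using only the head-to-head percentages quoted above, the size of the second-position advantage is easy to quantify:

```python
# Percent choosing the named deliverable over the chart summary,
# split by whether it was shown first or second (figures from the study).
shown_first = {"bullet summary": 11, "infographic": 38}
shown_second = {"bullet summary": 31, "infographic": 66}

for name in shown_first:
    gain = shown_second[name] - shown_first[name]
    print(f"{name}: +{gain} points when shown second")
# The bullet summary gains 20 points in the second position;
# the infographic gains 28.
```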
After further examination of the open-ended responses, we determined that the act of evaluating the first deliverable led to a higher score for the second. If the second deliverable improved upon a criticism the participant had of the first, the participant would rate the second deliverable higher.
For example, when evaluating the bullet point summary, one participant recommended that we “use specific percentages instead of half and quarter.” When asked why they chose the chart summary in the forced choice exercise later in the survey over the bullet point summary, they mentioned that “it uses exact percentages. The graphics make it easier to visualize the percentages.”
One participant who evaluated the infographic told us that it was “too brief, does not give enough information for review.” Later, when asked why they chose the bullet point summary, they said, “more detailed information so more usable, plus differences between managers and non-managers.” However, both deliverables had the exact same number of data points and both deliverables had the manager breakdown by segment.
This pattern continued for many panelists. They evaluated the first deliverable and then, when they saw a change in the second deliverable that met their need, they were more likely to rate that deliverable higher. In the chart below, you can see this manifest itself in the overall ratings from when the deliverable was shown in the first position versus when it was shown in the second.
Many of you may be thinking that this scenario was created because the role-playing exercise forced participants to evaluate the first deliverable. However, I would argue that this is the exact scenario many managers face: receiving a deliverable and then evaluating it. It suggests that discussing improvements to a deliverable with your clients and then changing it to meet their suggestions will result in a deliverable that is perceived to be better, even if the net result is not necessarily better.
So what have we learned?
Managers want more than just the primary goals met.
Although graphics do add value to topline deliverables, an infographic may not be necessary. Additionally, executives prefer toplines to be simple and will choose a chart summary over an infographic.
Everyone has different preferences, so learn what your client’s or boss’s are.
Changes to deliverables will more than likely be well received, provided they address a client’s pain point or observation.