1 Topic, 5 Blogs – “Impact of Rich Media Question Types in MR”
I am privileged to be one of five bloggers who, on the 15th of each month, will produce a POV on an issue facing the Marketing Research industry. You’ll also be hearing from Annie Pettit (organizer), Josh Mendelsohn, Joel Rubinson and Brandon Bertelsen. Links to their posts will follow.
Our first topic is something that has been of utmost importance and urgency to element54 this year, and one where my partners and I have invested considerably to advance the issue in our business (some previous posts on this site offer practical and prescriptive solutions).
We are witnessing an explosion of engaging new question types, intended to captivate respondents and ensure the long-term viability of online survey platforms.
Research to date has described “respondent engagement” as the notion that improving the usability and interest of online question types will enhance respondents’ enjoyment and the thoughtfulness of their survey experience. However, there has been little work to validate the myriad of new question types, and specifically, how response patterns vary across each type.
element54 conducted two transparent studies in 2009, which have been presented at several North American conferences (most recently, The Market Research Event, Las Vegas, October 2009).
Study #1 – “Sexy Questions, Dangerous Results?” (Data Consistency):
In January 2009, element54 and ResearchNow conducted the first-ever fully transparent research study (full dataset available to MR peers), which examined response patterns across various online question types. The findings from this 2,000-interview study were clear: changing the visual layout of an online survey leads to differences in the data.
- A 36% variance in how often people say they brush their teeth.
- Up to 8% overclaiming of behavioral product category consumption.
- Up to 10% understating of attitudinal “issue importance”.
Study #2 – “Eyes Don’t Lie” (Respondent Usability):
In May 2009, element54, along with partners UX Research & Consulting and MBA Research, conducted the largest transparent eye-tracking study on survey usability, with 100 one-on-one qual/quant interviews, using SMI Vision eye-tracking software to validate respondent eye movements and patterns. The findings from this second study demonstrated how usability insights and applications can be applied to designing better surveys:
- Inference vs. instruction: up to 40% of “2.0” survey questions aren’t read when visuals dominate the screen. It is certainly concerning if respondents are inferring our intent rather than reading it.
- Error prevention and recovery: ever tried to get off an elevator after missing your floor? Yes, you push other buttons to get off. When survey error messages are not linked to where the problem actually is, respondents are likely to change their answers simply to escape. This issue raises a host of related questions about how we treat “error” data once the respondent successfully clicks “Next”.
Technology is headed in the right direction. To sustain the long-term viability of online surveys, we do need to create appropriately engaging platforms for respondents. However, in the current space race, with its range of available DIY software platforms, there’s no glory in getting to the moon first if you crash-land. This is where best practices and standards can, and must, catch up with all this exciting technology.
Here are the links to Annie’s, Joel’s, Josh’s and Brandon’s posts (Brandon’s coming soon).
Annie Pettit of Lovestats: http://lovestats.wordpress.com/2009/12/21/1-topic-5-blogs-rich-media-in-surveys/
Joel Rubinson of the ARF: http://blog.joelrubinson.net/2009/12/getting-the-most-out-of-online-research/
Josh Mendelsohn of Chadwick Martin Bailey: http://betterresearch.blogspot.com/2009/12/1-topic-5-blogs-interactive-questions.html
Brandon Bertelsen: link posted shortly at http://bertelsen.ca/