Best Practices for Crowdsourcing Consumer Research


As the 21st century moves right along, marketers and market research professionals have found that the internet is among the best places to poll consumers to better understand market trends and consumer behavior. But this ready supply of survey subjects comes with new problems to watch out for: sources of bias that can call the whole endeavor into question.

A 2017 study in the Journal of Consumer Research (J. Goodman & G. Paolacci) argues that while the benefits of crowdsourcing consumer opinions can vastly outweigh the negatives, there are still pitfalls that researchers should watch for when designing their surveys.

Specifically, the article deals with the use of the crowdsourcing site Amazon Mechanical Turk (MTurk) as a platform for sourcing survey respondents. MTurk is a site designed by Amazon that allows users the world over to complete small tasks for compensation, including, but not limited to, filling out surveys and data entry. These are tedious, time-consuming tasks not well suited to automation that a business or organization would otherwise have to staff itself. With MTurk, these organizations gain access to a vast pool of human beings ready and willing to complete such tasks 24/7.

MTurk solves the problem of volume for researchers. For a modest price, they can source the opinions of thousands of respondents who meet the required geographic and demographic criteria. According to the study, this is a big improvement for academics, who have historically relied on undergraduates as subjects, a practice some have argued can skew results.

But sourcing survey participants from MTurk has its own issues. Sometimes people who do not meet the criteria participate anyway to make a quick buck. What strategies can researchers use to design surveys and recruit respondents that best suit their needs?

Researchers should design studies and questionnaires that minimize the risk of self-selection, where participants who do not fit the desired profile choose to participate anyway. One workaround is to write screening questions vague enough that they do not divulge the specific attributes being sought, for example, attitudes toward a brand. Instead of announcing that they are looking for respondents with specific attitudes, researchers can simply ask what respondents' attitudes are, without making it clear which attitudes they intend to poll. They can then recruit only those participants best suited to the study.
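This "blinded" screening approach can be sketched in a few lines of Python. Everything here is illustrative, not from the study: the worker IDs, brand names, rating scale, and cutoff are all assumptions. The point is that the screener asks about several brands, so respondents cannot guess (and fake) which attitude is being recruited for.

```python
def screen_respondents(responses, target_brand, min_attitude=5):
    """Select respondents whose attitude toward target_brand meets the
    cutoff, without the screener ever naming the brand of interest."""
    eligible = []
    for worker_id, attitudes in responses.items():
        # `attitudes` rates several brands on a 1-7 scale, hiding which
        # single brand the researchers actually care about.
        if attitudes.get(target_brand, 0) >= min_attitude:
            eligible.append(worker_id)
    return eligible

# Hypothetical screener data: three workers each rate three brands.
responses = {
    "W1": {"BrandA": 6, "BrandB": 2, "BrandC": 4},
    "W2": {"BrandA": 3, "BrandB": 7, "BrandC": 5},
    "W3": {"BrandA": 7, "BrandB": 1, "BrandC": 2},
}
print(screen_respondents(responses, "BrandA"))  # ['W1', 'W3']
```

Only the post-hoc selection step reveals the criterion, and it happens on the researcher's side, after the screener has been submitted.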

Furthermore, the authors of the study suggest that researchers avoid both nonselective and selective attrition by being upfront that participants are indeed signing up for a study, while keeping the study's description vague. This helps investigators ensure they are polling a random selection of the population and also lowers the odds that participants will quit midway through. To the same end, the study recommends increasing the effort required during the screening phase (and increasing compensation accordingly) so that workers are more committed to seeing the survey through.
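One simple diagnostic for the attrition problem is to compare dropout rates across experimental conditions: roughly equal rates suggest nonselective attrition, while a large gap hints that something about one condition is driving particular people away. The sketch below is a heuristic check under assumed numbers, not a method from the study or a formal statistical test.

```python
def attrition_rates(started, finished):
    """Per-condition dropout rate: share of starters who did not finish."""
    return {cond: 1 - finished[cond] / started[cond] for cond in started}

def looks_selective(started, finished, threshold=0.10):
    """Flag possible selective attrition when dropout rates differ across
    conditions by more than `threshold` (a rough heuristic only)."""
    rates = attrition_rates(started, finished)
    return max(rates.values()) - min(rates.values()) > threshold

# Hypothetical counts: 200 workers start each condition.
started = {"control": 200, "treatment": 200}
finished = {"control": 180, "treatment": 150}

print(attrition_rates(started, finished))   # ~10% vs 25% dropout
print(looks_selective(started, finished))   # flags a possible problem
```

A flagged gap does not prove selective attrition, but it tells the researcher to look harder before trusting comparisons between conditions.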

Aside from these factors, the authors suggest that by making the most of MTurk's quality filters to identify an appropriate pool of participants, and by paying those participants a fair wage, researchers can go a long way toward increasing the validity of their survey results.
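In practice, MTurk's quality filters are expressed as qualification requirements attached to a task (for example, via the `QualificationRequirements` parameter of the MTurk `CreateHIT` API). The fragment below is one plausible configuration, not the study's setup: the qualification type IDs are AWS's documented system qualifications, but the thresholds and locale are illustrative choices.

```python
# Illustrative worker-quality filters for an MTurk survey HIT.
qualification_requirements = [
    {   # Approval rate of at least 95% across the worker's past work.
        "QualificationTypeId": "000000000000000000L0",  # PercentAssignmentsApproved
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [95],
    },
    {   # At least 500 previously approved HITs.
        "QualificationTypeId": "00000000000000000040",  # NumberHITsApproved
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [500],
    },
    {   # Restrict the pool to workers located in the United States.
        "QualificationTypeId": "00000000000000000071",  # Worker_Locale
        "Comparator": "EqualTo",
        "LocaleValues": [{"Country": "US"}],
    },
]
```

A requester would pass this list when creating the HIT, alongside a reward amount chosen to work out to a fair hourly wage for the expected completion time.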

The internet has been a boon to consumer researchers, providing a nearly unlimited supply of study participants, historically one of the biggest bottlenecks in designing good studies. It is not, however, immune to human nature. With these and other strategies outlined in this and other reports, researchers can design surveys and questionnaires that are truly representative of their target audiences, making the most of this bounty of willing participants and getting the most valid results.