
Implementing cost-efficient quality control measures in online data collection

We at OpinionRoute spend a lot of time talking about the correlation between the method of respondent recruitment and the ultimate quality of the data those respondents provide in our surveys. As the dynamics of online data collection continue to morph and evolve, the opportunity to work within the survey program itself to help drive better data can’t be overlooked. Today, data collection firms have a tremendous suite of technologies available to help with this need. From automatic verbatim comment parsing to data consistency algorithms, we have never had it so great.

While we have access to some of these technologies, I have found that there are simple, cost-efficient ways to implement effective quality control mechanisms without adding technology expense.

Here is a brief overview of techniques you can use to markedly improve the data you’ll be reviewing.

Attention Checks

In my 15 or so years in this industry, I have come across two general approaches to designing questions to test a respondent’s attention level.

  1. Questions designed to “trap” the respondent
  2. Questions designed to “remind” the respondent

While in my experience both can be equally effective, I tend to utilize the “remind” method for the following reasons:

  • Time and effort were spent getting the respondent into the survey; it is better practice to push them to provide quality responses than to replace them
  • It reinforces a professional relationship between the data collection company and the respondent
  • It is a softer approach, which makes for a better respondent experience
  • It is easier to implement globally (less opportunity for cultural misunderstandings)

These questions should be designed to openly communicate to the respondent, “Hey, we are paying you for your honest and sincere feedback, please ensure that is what you are providing.” They should not include any condescending language, nor should they attempt to trick the respondent into selecting an incorrect answer. KEEP IT SIMPLE! For example, “Please select ‘5’ from the scale below.” It is simple and straightforward enough that anyone who selects a value other than 5 either did so by honest mistake or was randomly clicking to try and speed through the survey.
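As a rough illustration, a reminder-style check like the one above can be scored with a single comparison once the data is exported. The column names and expected value in this sketch are hypothetical placeholders, not tied to any particular survey platform.

```python
# Minimal sketch: flag respondents who failed a "Please select '5'" reminder check.
# The column names ("resp_id", "attn_check_1") and expected value are hypothetical.

def flag_attention_check(responses, question="attn_check_1", expected=5):
    """Return the set of respondent IDs whose answer differs from the expected value."""
    return {
        row["resp_id"]
        for row in responses
        if row.get(question) != expected
    }

# Example usage with a couple of mocked-up respondent rows.
sample = [
    {"resp_id": "r001", "attn_check_1": 5},   # passed
    {"resp_id": "r002", "attn_check_1": 2},   # failed: mistake or random clicking
]
print(flag_attention_check(sample))  # {'r002'}
```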

We typically recommend one attention check question be added for every 5 minutes of survey length. Attention check questions should be placed at points in the survey where fatigue is expected to set in. For example:

  • in between long batteries of ranking or rating exercises
  • at the start of new lines of questioning
  • after more engaging methodologies (i.e., conjoint, MaxDiff, Gabor Granger, etc.)

When collecting data internationally, it is very important to be aware of any cultural differences that would impact how an attention check question would be interpreted and answered.

Speed Traps

Speeders, or respondents who complete surveys in less time than it should take if they were reading all of the content, are fairly straightforward with regard to quality control measures. We typically approach this by converting each survey question into a response-equivalent time measure. Those measures are added together to come up with estimated average and minimum respondent survey lengths. One important item to keep in mind is that some questions may not be asked of all respondents, so the minimum survey length must be used for estimation purposes. We then take 50% of that minimum and use it as the recommended threshold for removal.
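As a rough sketch of that arithmetic, the example below sums per-question time estimates along the minimum path (the questions every respondent will see) and uses 50% of the total as the removal threshold. The question names and timings are invented purely for illustration.

```python
# Sketch of the speeder threshold described above: sum per-question time
# estimates for the minimum path, then use 50% of that minimum as the
# removal threshold. All time estimates below are invented for illustration.

question_times = {               # estimated seconds to answer each question
    "screener_1": 10,
    "grid_brand_ratings": 60,
    "open_end_feedback": 45,
    "conditional_followup": 30,  # only shown to some respondents
}

minimum_path = ["screener_1", "grid_brand_ratings", "open_end_feedback"]

minimum_length = sum(question_times[q] for q in minimum_path)  # 115 seconds
speeder_threshold = 0.5 * minimum_length                        # 57.5 seconds

def is_speeder(completion_seconds, threshold=speeder_threshold):
    """Flag a complete that came in faster than the recommended threshold."""
    return completion_seconds < threshold

print(minimum_length, speeder_threshold, is_speeder(40))  # 115 57.5 True
```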

During the soft launch of a project, we will disable automatic speeder removal, compare the times of those who completed to the threshold and adjust as necessary before full launch.

Again, when collecting data internationally, different cultures take surveys at different speeds. It is important for these projects to be able to institute speeder checks by country. Be sure to rely on the soft launch data to confirm realistic minimum time thresholds.

Open Text Analysis

For surveys that include questions which prompt the respondent to provide open-ended text, we implement a mechanism that counts both the words and characters of the respondent’s answer. This is helpful in identifying respondents to review; however, it does not always function as a true quality control mechanism and is typically not utilized within an automatic removal algorithm. In rare circumstances where we can anticipate a minimum number of words required, we can use this data point within an automated system.
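For illustration, the word and character counts are only a few lines of code. The five-word minimum in this sketch is a hypothetical value and, as noted above, only makes sense when a minimum can genuinely be anticipated.

```python
# Sketch: count words and characters in an open-ended answer, and optionally
# flag it against a minimum word count. The minimum of 5 words is hypothetical.

def open_end_metrics(text, min_words=5):
    words = text.split()
    return {
        "word_count": len(words),
        "char_count": len(text),
        "below_minimum": len(words) < min_words,
    }

print(open_end_metrics("good"))                                   # flagged for review
print(open_end_metrics("The checkout flow was confusing on mobile."))
```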

Straight-lining

Straight-lining, or the event where a respondent selects the same response for all attributes in a grid, is an area I approach with extreme caution. There are a few factors we look at when assessing whether this metric can be included in the quality control mechanism:

  • how many attributes are being evaluated?
  • how many answers are available to select from?
  • is it possible for a respondent to select all the same answer?

There are very rare circumstances where I feel straight-lining is a legitimate quality control measure. One such instance is when there are attributes in the same question using the same answer list that have opposing meaning. For example, “how likely are you to recommend…” vs. “how likely are you to not recommend…”.
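As a sketch of that narrow case, the example below flags a respondent only when every attribute in the grid received the same answer and the grid contains a pair of opposing attributes, per the example above. The attribute names are hypothetical.

```python
# Sketch: only treat straight-lining as a failure when the grid contains
# opposing attributes, as in the "recommend" vs. "not recommend" example above.
# Attribute names are hypothetical.

def is_straight_lined(grid_answers):
    """True if every attribute in the grid received the same answer."""
    return len(set(grid_answers.values())) == 1

def flag_straight_liner(grid_answers, opposing_pairs):
    """Flag only when the grid is straight-lined AND includes an opposing pair."""
    if not is_straight_lined(grid_answers):
        return False
    return any(a in grid_answers and b in grid_answers for a, b in opposing_pairs)

answers = {"recommend": 4, "not_recommend": 4, "repurchase": 4}
print(flag_straight_liner(answers, [("recommend", "not_recommend")]))  # True
```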

Again, keep in mind cultural differences when implementing straight-lining as a quality control measure.

Automatic Removal Algorithm

Each survey is unique, as is the sample that will be going through it. For many projects we can implement an automatic removal algorithm based on the total number of quality control mechanisms and the number of failed ones. For example, if we have 3 quality control mechanisms, I would recommend a threshold to automatically terminate respondents who fail 2 or more.
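That rule can be expressed as a simple tally. The sketch below assumes each quality control mechanism has already produced a pass/fail flag for the respondent; the check names are hypothetical.

```python
# Sketch of the removal rule described above: terminate a respondent who fails
# 2 or more of the 3 quality control mechanisms. Check names are hypothetical.

def should_remove(check_results, min_failures=2):
    """check_results maps check name -> True if the respondent failed that check."""
    failures = sum(1 for failed in check_results.values() if failed)
    return failures >= min_failures

respondent = {"attention_check": True, "speeder": True, "straight_line": False}
print(should_remove(respondent))  # True: failed 2 of 3, so flag for removal
```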

For surveys that are 10 minutes or less in length, I would not recommend implementing an automatic removal algorithm. In these cases, we would typically have between 0 and 2 quality control mechanisms in the survey. To err is human! It is common for an engaged respondent to accidentally fail a single quality control mechanism. Failing 2 is far less likely and is therefore a more reliable signal. However, it is still possible that removal based on 2 fails could remove good-quality respondents. For this reason, I do not recommend implementing an automatic algorithm, but rather a manual data review at 10%, 50%, 90%, and 100% of fielding.

For some projects, and for some clients, we only flag the respondents in the data whom we feel should be removed and allow the client to make the final cuts.

Setting the expectation

The most important item in successfully implementing good quality control mechanisms is to clearly communicate to the respondent at the beginning of the engagement that the survey contains quality control mechanisms and that failure will result in ineligibility for the incentive.

Stating what you are implementing, why you are implementing it, and what will happen when adherence is not met will give you a solid foundation when respondents are not compliant and find themselves being denied incentives. I recommend that this communication appear on the first page of the survey.

In conclusion

So often we forget how important the respondent is to what we do in market research. There are absolutely people out there who are always finding new ways to cheat the system, and they unfortunately make it that much harder for the good ones to participate. So it is very important to be able to weed out those bad apples, but just as important not to alienate the good ones in the process. Implementing a simple and clear quality control program in your survey can be a cost-efficient way to achieve both objectives.