Blog | April 30, 2024
Researcher Data Review Is Vital, and So Are Deep Tech Diagnostics
Researchers need the right tools to complement their expertise during data reviews.
We often get asked some variation of the question, “What can happen to a dataset that I won’t catch in my manual cleaning?” We figured we’d answer that for you with some real project results.
We took a sample of 12 recent projects that used the ResponseID™ API within the survey to compare fraud caught by researchers against fraud that goes undetected in manual review. But before we get started, here are some background concepts.
ResponseID™ is OpinionRoute’s in-survey fraud detection technology. It attaches to open ends and monitors a respondent’s device and behaviors.
Fraud Levels: ResponseID™ uses a risk-factored scoring algorithm that flags definitively bad responses (i.e., fraud) outright, and otherwise surfaces risk factors to aid the researcher’s decision-making during cleaning. Some fraud characteristics are grayer than we’d like, so in those cases we “warn” the researcher rather than making the call automatically.
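To make that two-tier idea concrete, here is a minimal sketch of what a risk-factored scorer could look like. To be clear, this is not ResponseID™’s actual algorithm; the signal names, weights, and threshold below are our own illustrative assumptions.

```typescript
// Illustrative sketch only: the signal names, weights, and threshold are
// our own assumptions, not ResponseID's actual scoring logic.

type Verdict = "fraud" | "warn" | "pass";

interface RiskSignal {
  name: string;        // e.g. "dev_console_open" (hypothetical)
  weight: number;      // contribution to the cumulative risk score
  definitive: boolean; // true if this signal alone proves fraud
}

function scoreResponse(signals: RiskSignal[]): Verdict {
  // Absolute bad responses: any single definitive signal means fraud.
  if (signals.some((s) => s.definitive)) {
    return "fraud";
  }

  // Gray areas: accumulate weighted risk and warn the researcher,
  // leaving the final cleaning decision to them.
  const risk = signals.reduce((sum, s) => sum + s.weight, 0);
  const WARN_THRESHOLD = 0.5; // hypothetical cutoff
  return risk >= WARN_THRESHOLD ? "warn" : "pass";
}
```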
Outcomes
By the numbers
“Unseen”
Of the 1,477 fraudulent responses identified, 87% were “unseen” fraud. These respondents used technologies to complete the survey in ways consistent with fraudulent behavior: opening a developer console, running an on-screen translator, or achieving typing rates impossible for humans (a hallmark of pasted ChatGPT answers).
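None of those behaviors leave a trace in the exported dataset, which is why in-survey monitoring catches them and manual review cannot. As one illustration, a browser-side check for inhuman typing rates might look like the sketch below; the thresholds and event wiring are our assumptions, not ResponseID™’s proprietary detection.

```typescript
// Illustrative sketch: the thresholds here are hypothetical, not
// ResponseID's proprietary detection.

function watchTypingRate(
  field: HTMLTextAreaElement,
  onFlag: (charsPerSecond: number) => void,
): void {
  let lastTime = performance.now();
  let lastLength = field.value.length;

  field.addEventListener("input", () => {
    const now = performance.now();
    const added = field.value.length - lastLength;
    const seconds = Math.max((now - lastTime) / 1000, 0.001);
    const charsPerSecond = added / seconds;

    // Dozens of characters arriving in one input event (a paste), or a
    // sustained rate far beyond human typing, is worth flagging.
    if (added > 50 || (added > 0 && charsPerSecond > 10)) {
      onFlag(charsPerSecond);
    }

    lastTime = now;
    lastLength = field.value.length;
  });
}
```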
Easy to Spot
13% contained attributes commonly identified through manual data review: duplicate answers, misaligned answers, gibberish answers, and the like.
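For contrast, here is the kind of check a researcher can run against the exported data itself, such as flagging open ends that multiple respondents submitted verbatim. The field names below are hypothetical.

```typescript
// Illustrative sketch: field names are hypothetical. This mirrors a check
// a researcher might run manually on the exported dataset.

interface SurveyResponse {
  respondentId: string;
  openEnd: string;
}

function findDuplicateOpenEnds(
  responses: SurveyResponse[],
): Map<string, string[]> {
  const byAnswer = new Map<string, string[]>();
  for (const r of responses) {
    const key = r.openEnd.trim().toLowerCase();
    if (key.length === 0) continue;
    const ids = byAnswer.get(key) ?? [];
    ids.push(r.respondentId);
    byAnswer.set(key, ids);
  }
  // Keep only answers submitted verbatim by more than one respondent.
  return new Map([...byAnswer].filter(([, ids]) => ids.length > 1));
}
```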
Recommendation
Researchers commonly believe the dataset tells the whole story when it comes to quality. In today’s high-tech world of survey fraud, however, many violations require deeper diagnostics than the dataset alone can provide. OpinionRoute advocates for a marriage of Researcher + Technology in all data reviews: not only will it improve the accuracy of your data cleaning, it will also save you significant time as you work.
Hit us up to have a chat about protecting your surveys!