
Opinion: Imposter Participants Are Compromising Qualitative Research

When recruiting study volunteers online, how can researchers deal with participants who fake their identities?

Days after posting my ad for research participants on Facebook, someone finally replied. As a Ph.D. student offering a small financial reward to volunteers for a study on fertility care access, I was thrilled to get a response. But then the emails kept coming. Soon, my inbox was flooded with enthusiastic but strangely stilted participation requests. Over the weekend, I received hundreds of emails, all from different addresses and each professing a different motivation for joining the study.

Imposter participants — people who fake their identities or embellish their experiences so they can take part in online studies — are becoming a common concern for qualitative researchers like me, who use techniques such as interviews to gain insight into people’s perspectives and experiences. Imposters might obscure their identities with fake names, email addresses, and phone numbers, and could use IP address proxies to hide their computers’ locations. A typical entry point is a social media post: Qualitative researchers often look for interviewees by posting in a range of public online groups used by the study population.

Since the Covid-19 pandemic began, researchers have increasingly viewed online recruitment as a convenient way to gain access to research participants. Travel costs can be reduced, and online interviews are perceived as safer to conduct for both researcher and participant compared to in-person interviews. Social media can also help reach people over a wider geographical area, or who might be hidden from society’s view due to social stigma.

As a researcher, it feels violating to have someone blatantly lie to you about their identity and to seemingly not care about polluting your painstakingly collected data. Because qualitative research often focuses on participants’ experiences, such subterfuge can make it difficult to say definitively if someone is part of the study population or not. In some cases, researchers have had to halt their studies or discard part of their analysis, wasting the valuable time and public money invested into their project.

Identity verification is not simple, according to Emma Waite, a research associate at the University of the West of England Bristol who has encountered suspected imposters in her work. In one study Waite worked on, which focused on the experiences of young people with visible differences (such as burn scars, a cleft palate, or amputated limbs), the researchers offered a shopping voucher for taking part.

Waite explained to me that even with a video screening call, participants who didn’t appear to have a visible difference, and who looked to be around 30 years old, would still claim to be 17 and eligible for the study.


But are imposters really malevolent actors, determined to sabotage our work? Researchers engaging with potential imposters have heard babies crying in the background or suspected that calls are being taken in a busy call center, suggesting that imposters are people in complex personal situations.

Although the compensation that researchers offer varies depending on factors such as the length of the study, it’s generally capped by policies from major funding bodies such as the National Institutes of Health. Such funders stipulate that compensation must not be unreasonably large, as eligible participants must not be coerced or unduly influenced into taking part.

So if these individuals decide that the calculated effort of deception is worth a small amount of study compensation, one argument is that they need our empathy and understanding.

But we also need to consider this problem in the wider global context. A 2023 United Nations report on people in Southeast Asia who were forced into executing online scams found that inequality, poverty, lack of decent work opportunities, and climate change all contribute to the conditions that allow these human rights violations. This is an extremely complex and harrowing problem, and researchers are limited in what they can do to help the people behind the scams. But it is still worth remembering that when recruiting digitally, our research activities also inadvertently enter an unknown and potentially troubling online space. Jacob Sims, a visiting expert on transnational crime at the U.S. Institute of Peace, told me that although he has not encountered research study scams specifically in his work on forced scamming, it is plausible that such scams could be emanating from industrial-scale sites.


However, others point out that fake participants can also be motivated by ideological beliefs. Mari Greenfield, a research fellow at King’s College London, was studying the experiences of new and expectant LGBTQ+ parents during the Covid-19 pandemic in 2020. Greenfield found that some participants had not meaningfully filled in the survey; instead, they had focused on criticizing gender identity-related questions at the end of the survey and left offensive comments. Those data were excluded.

Caro Cruys and LB Klein, researchers at the University of Wisconsin-Madison’s Sandra Rosenbaum School of Social Work, encountered both trolls and imposters when conducting an interview study on LGBTQ+ survivors of sexual and relationship violence. In a thread on Twitter (now X), Klein highlighted the discomfort felt when put in the difficult position of interrogating a participant’s legitimacy, particularly when they belong to a marginalized, seldom-heard population. Klein wrote: “Queer and especially trans folks are often not compensated for sharing stories” and “I don’t want potential participants to think I don’t trust or believe them.”

Because identification is such a tricky process, Cruys and Klein are developing an imposter participant protocol to help others navigate this problem. They suggest that the impact of imposters can be mitigated by involving lived experience advisory groups and members of the population of interest as co-researchers, as well as by building relationships with community groups.

And datasets containing imposter narratives may not be the end of the world. Such datasets do not have to be thrown out completely, Anna Dowrick, a medical sociologist at the University of Oxford who has recently encountered scam emails, said in an interview. Quantitative scientists often have large, complex numerical datasets to manage, but procedures are in place to polish these imperfect data and maintain research integrity. Similarly, qualitative researchers might be able to carefully clean up datasets, as long as the aims of the research are kept in mind, Dowrick told me.

Or, alternatively, why not find value in imposter cases? Imposters get their information from somewhere — fake narratives are probably not plucked from imagination alone, especially with the advent of AI chatbots. Their narratives might still speak to wider themes or theories, and completely out-of-place data could bring interesting contrast and might drive discussion in a focus group setting.

I am not arguing that imposter participants should just be ignored or accepted. But the real issue is that researchers can’t usually know who is behind the screen, and building a hostile research culture won’t protect against the wider troubling trend of online scams. Rather, because of this uncertainty, and the ethical and moral questions left unanswered, researchers like me need to be more cognizant of the online ecosystem we are entering when posting about our studies. We need to take steps early on to build safeguards into our online recruitment and interviewing — and reckon with the wider implications.


Becca Muir is a Ph.D. candidate at Queen Mary University of London researching fertility care access. She has written for New Scientist, The Guardian, Prospect magazine, and other outlets.
