Google’s Jigsaw unit sponsors a RAND report that recommends infiltrating and subverting online conspiracy groups from within while planting authoritative messaging wherever possible.
With a focus on online chatter relating to alien visitations, COVID-19 origins, white genocide, and anti-vaccination, the Google-sponsored RAND report published last week shows how machine learning can help detect and understand the language used by “conspiracy theorists.”
While the 108-page report is highly technical in describing machine learning approaches for identifying and making sense of conspiracy language online, we’re not going to focus on any of that here.
Instead, we will zoom in on the report’s “Policy Recommendations for Mitigating the Spread of and Harm from Conspiracy Theories” section and consider how those recommendations might be received in the real world.
“Conspiracists have their own experts on whom they lean to support and strengthen their views […] One alternative approach could be to direct outreach toward moderate members of those groups who could, in turn, exert influence on the broader community” — RAND report
The report’s policy recommendations all have one thing in common — each seeks to plant authoritative messaging wherever possible while making it seem more organic, or at the very least to make the messaging more relatable to the intended audience.
The four policy recommendations are:
- Transparent and Empathetic Engagement with Conspiracists
- Correcting Conspiracy-Related False News
- Engagement with Moderate Members of Conspiracy Groups
- Addressing of Fears and Existential Threats
The original narrative from authoritative sources always stays the same, but the message is usually filtered through intermediaries that act as marketing, advertising, and PR firms.
What follows doesn’t have anything to do with the validity of any conspiracy theory, but rather focuses on the Google-sponsored RAND report’s messaging strategy through the following lens:
Are ‘conspiracy theorists’ more likely to believe an authoritative message when it comes from someone else?
Or are they more likely to focus on the validity of the message itself without placing all their trust in the messenger?
The Google-sponsored RAND report recommends that the government bet on the former.
But could such a move actually encourage the latter?