Turning the Tables: Using BigTech community standards as friction strategies

How can easily implementable solutions such as the CAPTCHA test and other friction strategies limit the production and proliferation of harmful online content while fostering constructive online spaces and dialogue?

Banner image: Shutterstock/IMC11

This article is part of a series in which OECD experts and thought leaders, from around the world and all parts of society, address the COVID-19 crisis, discussing and developing solutions now and for the future. Aiming to foster the fruitful exchange of expertise and perspectives across fields to help us rise to this critical challenge, opinions expressed do not necessarily represent the views of the OECD.

To keep updated on all of the OECD's work supporting the fight against COVID-19, visit our Digital Content Hub. 


Politicians and lawmakers across the United States and Europe are eager to criticise BigTech, from Meta and Facebook to Google and YouTube, but they lack concrete and easily implementable measures to limit the proliferation of misinformation, hate speech, cyber-bullying and malevolent content in general. Here is an idea that turns BigTech's own community standards and terms of use into so-called "friction strategies" to curb harmful content.

Friction strategies are behavioural design measures that could make the production and proliferation of harmful content on social platforms cumbersome and time-consuming, for humans as well as bots. Such strategies are considered among the most promising for battling subprime information products and bad behaviour online.

Friction may take the form of micro-payments made by users to share or receive information, or of mental effort expended on quizzes, puzzles or other micro-exams that must be passed before a user action can proceed. A concrete example tech platforms already use is the CAPTCHA: the Completely Automated Public Turing test to tell Computers and Humans Apart. As digital gateways, CAPTCHAs may be easy enough for humans to pass but present more of a difficulty for bots. They have proved quite effective in validating users, limiting inappropriate opportunities for anonymity, diminishing the spread of misinformation and harmful content and, in general, changing the incentives for information sharing.

A more sophisticated CAPTCHA test is the familiar exercise of identifying street signs and traffic infrastructure.

It is no accident that users are asked to identify road features, from stop signs, pedestrian crossings and bike paths to highway overpasses and bridges. Google and other companies using these sorts of CAPTCHAs long ago worked out how to cannibalise and capitalise on the gateways designed to sort human users from bots: the answers users provide are useful machine-learning datasets for training artificial intelligence, which may later find its way into fully automated cars, drones and other vehicles.

A friction strategy that stimulates reflection and circumspection among a platform's users could be actuated by a CAPTCHA-like initiative: give users randomly repeated micro-exams (e.g. multiple choice) when they are about to log in, share, quote, upvote etc. on social media, but with questions pertaining to the governing community standards of the platform in question. Such arbitrarily recurring tests may act as bulwarks against information vandalism, hate speech, misinformation, digital harassment, harmful content, exploitation of intellectual property rights and all the other items mentioned in platforms' very own community standards. As gateways, they may ensure that users have read and understood, or at least reflected on, what they agreed to when they created a social media profile. Once users familiarise themselves with these standards, it should ease the platform's own burden of enforcing them.
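
As a thought experiment, the mechanism could be as simple as a pool of multiple-choice items, each keyed to a section of a platform's community standards. The sketch below is purely illustrative: the `StandardsQuestion` structure, the example questions and the `is_correct` helper are assumptions made for this article and do not reflect any platform's actual standards or systems.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class StandardsQuestion:
    """One multiple-choice micro-exam item, keyed to a community-standards section."""
    section: str        # e.g. "Hate speech": the standards section the item tests
    prompt: str         # the question shown to the user
    choices: List[str]  # the answer options
    correct_index: int  # index of the right answer in `choices`

# Illustrative pool; a real deployment would draw items from the platform's own standards text.
QUESTION_POOL = [
    StandardsQuestion(
        section="Hate speech",
        prompt="Which of the following is permitted under the hate speech policy?",
        choices=["Targeted slurs", "Criticism of ideas", "Threats of violence"],
        correct_index=1,
    ),
    StandardsQuestion(
        section="Misinformation",
        prompt="What may happen to posts repeating debunked health claims?",
        choices=["They are promoted", "They may be labelled or demoted", "Nothing"],
        correct_index=1,
    ),
]

def is_correct(question: StandardsQuestion, answer_index: int) -> bool:
    """Grade a single answer against the item's correct choice."""
    return answer_index == question.correct_index
```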

However, such tests may also change the platforms' own behaviour and encourage them to:

  • Enforce their own adopted principles of content moderation consistently
  • Create transparency in their practice
  • Stimulate the public debate and political conversation pertaining to the online public sphere, democratic ambition, freedom of expression, privacy, user rights, and the product declaration and labelling of information products, all themes to which the tech giants themselves routinely demand increasing attention

Here is the concrete blueprint for how governments, politicians, lawmakers and global NGOs interested in the CAPTCHA approach can move forward:

  • Ask the platform the rhetorical question of whether they agree it would be beneficial for all parties (themselves, their users and society more broadly) to be reasonably well-versed in their community standards. To avoid a Frances Haugen moment, chances are that the platform would (or should) agree.
  • Ask the platform to install an arbitrary friction strategy (that is, one not triggered by every single action), such that when users log on, are about to share a post, upvote a post etc., they are asked three to five randomly generated questions pertaining to the relevant community standards (see the sketch after this list).
  • If the user answers correctly, they may immediately proceed to their profile or take the desired action.
  • If not, the user has to study the standards in order to answer the questions correctly, after which they may proceed.
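
A minimal sketch of the gate itself, reusing the illustrative question pool and `is_correct` helper from the earlier sketch. The trigger probability, the retry behaviour and the `ask_user` callback are assumptions made for the example; the blueprint only specifies that the check is arbitrary (not applied to every action) and asks three to five questions.

```python
import random

TRIGGER_PROBABILITY = 0.05   # assumed share of actions that trigger a micro-exam
QUESTIONS_PER_EXAM = 3       # the blueprint suggests three to five questions

def friction_gate(ask_user, question_pool) -> bool:
    """Decide whether a log-in, share or upvote may proceed right away.

    `ask_user(question)` stands in for the platform's UI: it presents the
    multiple-choice item and returns the index the user selects.
    """
    if random.random() > TRIGGER_PROBABILITY:
        return True  # most actions pass through without any friction
    exam = random.sample(question_pool, k=min(QUESTIONS_PER_EXAM, len(question_pool)))
    if all(is_correct(q, ask_user(q)) for q in exam):
        return True  # all answers correct: the action proceeds immediately
    # Otherwise the user is pointed to the relevant standards sections to study,
    # and must retake the exam before the action goes through.
    return False
```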

It should be easy for a governing body of lawmakers, NGO representatives and other relevant parties to check whether the platform and its other services and apps have implemented the friction strategy and are keeping it in force. By way of example, members of government (read: politicians) would likely be set the task of answering these very questions themselves at arbitrary times.

Read more on the Forum Network: Put the public back in tech policy, by Marietje Schaake, President, CyberPeace Institute & International Policy Director, Cyber Policy Center, Stanford University

The working hypothesis is that such a friction strategy may actually work in terms of reducing misinformation, hate speech and the like. The price platforms would pay for visibly enforcing their own community standards and administering their terms of use accordingly is reduced engagement and data harvesting, and with them reduced profits. On the other hand, the significant resources they currently spend on content moderation could be saved, which may make up for these losses and for lower advertising sales.

Such a friction strategy, based on the social platforms' own community standards as sketched above, would further harness their self-proclaimed logos, ethos, pathos and raison d'être of "bringing the world closer together". Hence, it should be a win-win for everybody playing ball on BigTech's courts: at their own game and by the standards with which they rule the digital public square.

It should be an offer—or eventually a piece of legislation—BigTech can’t refuse. It’s in their own best interest…right? There is only one way to find out: politicians, lawmakers and NGOs should take this proposal to BigTech right away.

Listen to the OECD podcast Getting Big Tech to play fair: Denmark's Casper Klynge, which examines how competition in the digital economy can be improved.

