While democracy has taken some body blows of late, it is clear that an invisible alternative regime has been on the rise. Welcome to datacracy: you are a participating citizen without even knowing it, and you didn’t have to join a caravan or scale a border wall to get here – all you needed to do was join half the world’s inhabitants by going on the Internet and leaving a digital trace. In a datacracy, you are part of a societal organization ruled by the analysis of your and every other citizen’s digital traces. In fact, over the course of 2018, we got to know just how wide, deep and inter-woven the “data state” is – and how much more we have yet to learn. Consider a recent revelation from a New York Times investigation: one of the largest data states, the nation of Facebookistan, has not only accumulated over 2.2 billion monthly users but has, simultaneously, assembled a network of data-sharing deals with dozens of partners, from shadowy outfits, such as political consultants Cambridge Analytica, to Yandex, a large Kremlin-connected tech company, to some of the best-known companies in the US, such as Amazon, Netflix, Spotify and Yahoo. The deals gave partners more intrusive access to user data than Facebook had ever disclosed, while the partners funneled more users back to Facebook. Misinformation campaigns aimed at undermining democracy could now spread further and be reinforced by this gigantic mesh.
An activist wearing a Facebook CEO Mark Zuckerberg mask stands in front of Portcullis House in Westminster as an international committee of parliamentarians met for a hearing on the impact of disinformation on democracy in London, Tuesday, Nov. 27, 2018. Lawmakers from nine countries grilled a Facebook executive who came in place of Zuckerberg as part of an international hearing at Britain's parliament on disinformation and "fake news." (AP Photo/Frank Augstein) ASSOCIATED PRESS
With revelations tumbling out even as the lights go out on 2018, the upcoming new year has to be one in which democracy’s capacity to defend itself gets tested. And there will be no shortage of opportunities; 2019 promises to be a politically energetic year. There is no doubt that the pressure to get ahead of these problems in advance of the 2020 US elections will be intense. As they brace for the year ahead, I would strongly urge the major platforms – Facebook, Google, Twitter and others – to keep a few things in view:
All Data Politics is Global: Yes, other countries have elections as well. Data crosses borders faster than airplanes or intercontinental ballistic missiles. The spread of datacracy has extended well beyond the borders of the US. In fact, since data moves so swiftly, and since it can be analyzed and fresh information fed back to a distant user so readily, it is easy enough to be cocooned in one’s comfy Silicon Valley offices, filled with free sunshine and sushi, and have profound effects on the lives of citizens far away. Gone are the days when the salespeople of Coke or Colgate or Unilever had to trudge through outlying villages and small towns to get to their markets. As a handy reference for all those who are working to protect democracy, note several important countries that will be holding elections in 2019: Nigeria, February 16th; Ukraine, March 31st; India, April/May; Indonesia, April 17th; Afghanistan, April 20th; European Parliament, May 23rd-26th; South Africa, May-August; Argentina, October; Canada, October 21st; Israel, before November 5th. While all eyes are on 2020, make sure not to take your eyes off of 2019.
The Potential for Digital Damage is the Greatest to Democracies in the “Digital South”: Our Digital Evolution Index, published in the Harvard Business Review, demarcates two distinct zones on the digital atlas of the world. A disproportionate number of the countries that have not yet reached their full digital potential are embracing the technology very fast – much faster than human beings, institutions and other cultural, regulatory and policy guardrails can evolve to provide adequate safeguards. These countries are also ones where we found that users exhibit more trust in digital systems and embrace them with fewer questions asked. This is the “digital south”. Here, the dangers of misinformation are greater. Here, lives can be lost, as was the case in India; entire ethnic groups can be hounded and driven from their homes, as was the case in Myanmar; and “unprecedented industrial use of disinformation as a campaign strategy” can become a hallmark of an election, as was the case in Brazil.
The Fake War on Fake News May Have Worked in 2018, But It’s Time to Get Smarter in 2019: The masters of datacracy in the past year have mostly postured with some mixture of supposed cooperation, compliance and contrition. They have issued “fixes”, lavishly advertised good intent, both digitally and via the traditional full-page ad, and eaten humble pie while suppressing smiles in front of lawmakers who haven’t the foggiest idea about how the whole digital thing works: how Facebook makes money, or why the leader of the free world’s face shows up when one searches for “idiot” on Google Images. However, at the close of 2018, we also know that the posturing has not been accompanied by energetic action. As new reports produced for the Senate Intelligence Committee suggest, the major tech platforms were evasive, obstructive and disingenuous in their response to Russian interference in the 2016 US elections. YouTube failed to disclose how many people watched or shared videos created by Russian trolls; Facebook failed to release user comments on Russian content; Twitter failed to provide specific details on Russian-origin accounts. It is hard to analyze the true impact of the violation of democracy by Russian manipulators in the absence of good data from the datacrats.
Of course, on the surface, it appeared that the platforms had taken steps to stay ahead of election manipulators. Google claimed it had taken algorithmic action, having “trained our systems to recognize these events and adjust our signals toward more authoritative content.” Twitter partnered with UNESCO to promote information literacy. Facebook, for example, had assembled a “war room” to monitor misinformation campaigns ahead of the 2018 mid-term elections in the US and the Brazilian presidential elections.
How effective were these initiatives? Sure, on November 5, the day before the US mid-terms, Facebook announced it had blocked around 30 Facebook accounts and 85 Instagram accounts for “coordinated inauthentic behavior.”
For companies that have mastered the art of changing behaviors through data analytics and design thinking, I must say these remedies seem a bit, um, industrial-age; this is hardly the stuff of disruption-worthy action to be rehashed in business school classrooms. It is hard to tell what the effect has been on misinformation leading up to the US midterms; different groups have come up with differing analyses of the volume of fake news in the US. In the case of the Brazilian elections, the verdict is more decisive. According to one analyst, who leads Brazil’s respected fact-checking organization, Aos Fatos, the 2018 elections were marked by “an unprecedented industrial use of disinformation as a campaign strategy and by the lack of prompt, structured institutional answers against it” – the hallmark noted earlier.
How could this happen? It seems to me that the efforts in the Facebook war room did not extend to the most critical platform in the Facebook stable – WhatsApp. The winning side in Brazil had free rein on WhatsApp as a transmission mechanism for delivering a barrage of daily misinformation to millions of Brazilians. The extent of the damage even compelled an apologetic op-ed by WhatsApp’s vice president, Chris Daniels. “Every day, millions of Brazilians trust WhatsApp with their most private conversations,” he wrote, acknowledging that this trust had been violated and admitting: “Because both good and bad information can go viral on WhatsApp, we have a responsibility to amplify the good and mitigate the harm.”
This begs the question: why didn’t all the brilliant analysts at Facebook check the data (duh?) on the two platforms? WhatsApp had 120 million Brazilians on it in May 2017 – 57% of the population. 83% of urban Brazilians used WhatsApp vs 75% who used Facebook for all purposes – and WhatsApp’s use for news had been increasing, while Facebook’s had been falling. What happened was entirely predictable. Facebook’s efforts were successful in chasing out fake posts on its main platform, so the manipulators simply migrated to greener pastures: WhatsApp. A bonus for manipulators in the new pasture was that the messages are encrypted, making them hard to track and kill.
Of course, given the avalanche of revelations and an ever louder drumbeat of demands for regulation, 2019 will mean more seriousness in the way the datacrats approach the future of democracy. Already, the election monitors in the countries with 2019 election dates have given notice. In a no-holds-barred international committee hearing in Westminster, London in late November, where Mark Zuckerberg was represented by an empty chair, Leopoldo Moreau, from Argentina's Freedom of Expression Commission, asked Zuckerberg’s stand-in what Facebook’s plans were for fixing its WhatsApp problem. He pointedly said that officials from Argentina had already asked to meet with Facebook, but the company’s Argentine office had ducked a response and its Latin American representative never showed up.
In the case of Nigeria, the Poynter Institute has documented numerous false rumors about local political candidates on the major social media platforms. What is even more troubling is the capacity for such news to travel widely. A recent survey revealed that over 25% of Nigerians acknowledge having shared news stories that turned out to be fabricated. To make matters worse, 20% of Nigerians said they had shared a political news story online even though they thought it was a fabrication.
India has had a rather horrible year, with mob killings and attacks fueled by rumors over WhatsApp. At least 83% of Indian news consumers are concerned about fake news, and nearly 72% of them have a hard time distinguishing real from fake news, according to the BBC. Additional studies by the BBC revealed that nearly 37% of messages on WhatsApp were “scams and scares”, while about 30% were nationalist and religious propaganda designed to play to a populist base.
Onward to 2019. The platforms are on notice. They are continuing to push ahead with some campaigns that appear to be in the right direction. WhatsApp imposed a forward limit in India and Brazil to stop mass forwarding of messages, and appointed a grievance officer and a company head for WhatsApp in India, along with newspaper ads, theater and three sixty-second films in which the protagonist teaches someone important in their lives to help prevent the spread of rumors. It is time the platforms added to all these old-fashioned PR techniques the core element that got them where they are today: data and persuasive design. Follow the data; understand where the bad stuff originates; develop pattern recognition and early warning signals; kill the bad stuff before it goes viral; design the apps to deter bad behavior. Come on; it cannot be this hard to get there.
If we, indeed, get there, we might even note that 2019 was the year when datacracy saved democracy. Now, that would be a wonderful reason to celebrate the arrival of a new year.
Bhaskar Chakravorti is the Dean of Global Business at Tufts University’s Fletcher School and Founding Executive Director of Fletcher’s Institute for Business in the Global Context.