Here's Why Neither Mark Zuckerberg Nor EU Regulators Will Save Us From the Digital Apocalypse


This article is part of the Forum Network series on International Co-operation & Digitalisation.

This article first appeared in the Harvard Business Review.

The digital industry is riding an important—and turbulent—wave of change right now. As Facebook and others grapple with tough questions about data privacy and security practices, trust in social platforms appears to be plummeting. Companies and analysts are scrambling to figure out how to make privacy rules clear, protect user data, and evolve the business models that made them successful in the first place.

A rising chorus of voices suggests that there is a ready-made solution to these pressures around data—and it has already been prepared by regulators in Europe. The EU’s upcoming General Data Protection Regulation (GDPR) will put in place the world’s most demanding set of rules on how user data can be collected and used. Many observers expect that when these regulations go into effect in May 2018, they will address consumers’ concerns.

GDPR could become a model for the rest of the world, the argument goes, since many global companies serve users in the EU. Firms will have to adapt to these regulations anyway, and it could make business sense for them to implement these digital privacy principles worldwide. Facebook itself has hinted that it might extend some of the EU-driven protections globally; others may follow its lead. Either way, it’s tempting to think that the biggest challenges to privacy in the digital age will be addressed and that will be the end of the matter.

I disagree with this argument. The U.S. and other countries cannot free-ride on Europe’s policy decisions, just as consumers cannot rely on companies to “self-regulate.” First consider some basic details about GDPR.

***

Wait, What's GDPR Again?

A central tenet of the EU regulation is that individuals have the right to give or withhold consent for how their personal data are used, shared, or analyzed; individuals also have the right to retrieve their data, take it elsewhere, and even ask that their information be “forgotten.” Even the definition of what constitutes “personal data” is expansive, covering IP addresses, location data, encrypted data where the encryption key is held separately, and other metadata.

Company responsibilities are also defined expansively in the GDPR; they extend to the chain of data-processing partners, buyers, brokers, and subcontractors. A major principle is transparency in how a company informs users about the specific purposes of data collection and about how its practices comply with the regulation. Not only are the burdens on companies high; breaches and violations must be reported within 72 hours of a company becoming aware of them (many details of that requirement remain to be worked out). Companies in violation could face fines of up to 4% of global revenues.
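To see how that 4% ceiling scales, here is a minimal illustrative sketch in Python. The 4% figure comes from the regulation as described above; the €20 million floor is the other prong of GDPR’s maximum-fine formula for the most serious violations (whichever of the two is higher applies), and the revenue figures below are hypothetical, chosen only to show scale.

```python
# Illustrative sketch of GDPR's maximum-fine formula for the most
# serious violations: the higher of EUR 20 million or 4% of global
# annual revenue. Revenue figures are hypothetical, for scale only.

FLOOR_EUR = 20_000_000   # statutory floor
REVENUE_SHARE = 0.04     # 4% of global annual turnover

def max_gdpr_fine(global_annual_revenue_eur: float) -> float:
    """Theoretical maximum fine for the most serious violations."""
    return max(FLOOR_EUR, REVENUE_SHARE * global_annual_revenue_eur)

for label, revenue in [("small firm", 5e6),
                       ("mid-size firm", 1e9),
                       ("global platform", 40e9)]:
    print(f"{label}: revenue EUR {revenue:,.0f} "
          f"-> max fine EUR {max_gdpr_fine(revenue):,.0f}")
```

Note that the fixed floor means a small firm’s theoretical exposure can exceed its entire annual revenue, one reason compliance burdens weigh disproportionately on smaller businesses.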

***

For one thing, support for regulation varies widely from country to country—and, of course, within countries. Public opinion in some EU member states favors stringent rules, but that support is not always shared elsewhere: in response to a Pew survey question, 85% of Germans favored the more stringent European data privacy standards, while only 29% of American respondents felt the same.

A Dell-EMC study reveals significant differences in willingness to trade privacy for services across countries and across different uses of digital applications; of the 15 countries studied, Germans were the most concerned about privacy and Indians the least.

In the U.S., the approach to digital privacy has been more piecemeal: in effect, it is predicated in part on the idea that letting companies collect, analyze, sell, and monetize user data with minimal restrictions is the basis for an innovative digital industry; new users are drawn by free services, and companies make money from the data collected. The goal of protecting U.S. competitiveness and its position in technology development will likely be central to the lobbying that surrounds any effort to change laws or boost federal regulation. Moreover, many privacy laws are set by the states and differ considerably from place to place. California lawmakers, for example, have proposed legislation to establish a data-protection authority, while other states may offer very little regulatory protection.

Overall, it is clear that societal demands and the willingness to “pay” by trading off privacy for other benefits vary significantly. Both consumers and companies will likely have to manage different rules for different markets and different technologies.

Emerging markets are often overlooked in these conversations, and they bring up a host of different issues. Some of Facebook’s biggest markets are in the developing world, and Facebook is experiencing its fastest growth in Asia and Africa. Of the top 10 countries with the most Facebook users, only two are in the developed world. Those two nations, the U.S. and the U.K., collectively account for 13% of all Facebook users. The remaining eight account for 41% of all Facebook users. What’s more, of the top 10 cities with the largest number of active Facebook users as of July 2017, all are in the developing world.

Our research on digital trust around the world, reported earlier in HBR, has found that users in the developing world are more trusting of online content, and—combined with fewer sources of objective information or little access to a free press—more vulnerable to manipulation by false information.

In Myanmar, for instance, Facebook is the dominant internet site because of its Free Basics program, which lets mobile-phone users connect to a few selected internet sites, including Facebook, without paying extra fees or using up allotted data in their mobile plans. In 2014, Facebook had 2 million users in Myanmar; after Free Basics arrived in 2016, that number climbed to 30 million. Recent rumor campaigns targeting the Rohingya minority ethnic group were spread in part on Facebook, helping to incite systematic persecution and violence.

Facebook-owned WhatsApp has been identified as a primary carrier of fake news and divisive rumors in India, where its users’ messages have been described as a “mix of off-color jokes, doctored TV [clips], wild rumors and other people’s opinions, mostly vile.” Kenya has identified 21 hate-mongering WhatsApp groups. Data from WhatsApp can be harvested for a variety of socially harmful purposes.

While the developing world should be given the same information safeguards that are likely to appear in the West, I believe governments there should be wary of regulations as extensive as GDPR. Such regulations would impose costs on the mostly small businesses that operate in these regions, and a heavy compliance burden on fledgling local data industries could stifle those companies’ chances to grow and compete.

Facebook, for its part, is taking steps to ensure that the 1.5 billion users who live mostly in developing nations will not be able to file complaints under the EU’s GDPR and will instead be governed by U.S. privacy laws. Collectively, these factors raise the specter of a world balkanized into digital “safe zones” in the advanced nations and digital “red zones” in the developing nations. Far from being a force for equalization and inclusion, the combination of digital technology penetration and the degree of data protection could become a new form of inequality.

So if GDPR isn’t the answer for companies outside of Europe, what about self-regulation? Some are hopeful that CEOs will put privacy protections in place out of a sense of social responsibility, something Zuckerberg himself has discussed. During Zuckerberg’s recent testimony before the U.S. Congress, this theme of “responsibility” was repeated—by lawmakers, by Zuckerberg, and by commentators. The challenge with leaving digital privacy to corporate responsibility and self-regulation, of course, is that the digital industry has been enormously successful precisely because it has collected and monetized data with few constraints. Despite a challenging year, Facebook’s profits grew 61% at the end of 2017: its revenues were a handsome $12.7 billion for the last three months of the year, with a profit of $4.26 billion.

To get a sense of how privacy restrictions could bite into the bottom line, consider that, according to analyses by Goldman Sachs, Facebook could “potentially see a negative impact of up to 7% from GDPR.” With this much at stake, self-regulation is not a tenable way to ensure the full safeguards digital privacy requires. Since data is the currency at the core of the industry’s business models—and there are no viable alternative business models in sight—a company’s own sense of “digital social responsibility” will be moderated by the negative economic impact of limiting its use of consumer data; companies can be expected to do just enough to keep consumers and political entities from revolting, while building goodwill through other means. If citizens demand even greater digital privacy, it will have to come through forward-looking public policy, consumer activism, and regulation.
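For rough scale, here is a back-of-the-envelope sketch. The source does not specify what the 7% figure applies to, so the assumptions that it applies to revenue and that the quarterly figures above can simply be annualized are mine, purely for illustration.

```python
# Back-of-the-envelope scale of an "up to 7%" GDPR impact on Facebook,
# using the Q4 2017 figures cited above. Assumptions (mine, not the
# source's): the 7% applies to revenue; Q4 revenue is annualized (x4).

q4_2017_revenue_usd = 12.7e9     # reported, last three months of 2017
q4_2017_profit_usd = 4.26e9      # reported quarterly profit
gdpr_impact_upper = 0.07         # Goldman Sachs: "up to 7%"

annualized_revenue = 4 * q4_2017_revenue_usd   # crude annualization
worst_case_hit = gdpr_impact_upper * annualized_revenue

print(f"Annualized revenue (rough):       ${annualized_revenue / 1e9:.1f}B")
print(f"Worst-case 7% impact:             ${worst_case_hit / 1e9:.1f}B per year")
print(f"Quarterly profit, for comparison: ${q4_2017_profit_usd / 1e9:.2f}B")
# ~$3.6B a year: under these assumptions, the worst case is comparable
# in size to an entire quarter's profit.
```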

It’s important to get this right, and not just free-ride on other countries’ regulations or hopes that CEOs will make their own rules. The development of new technologies, such as AI, may hang in the balance. Consider that Facebook, much like fellow digital giants Google, Amazon, and Apple, is betting on artificial intelligence (AI) as the next source of innovation and competitive advantage. The more constraints there are on data collection and processing, the slower is the ability to capitalize on advances in AI. This, too, would create an opportunity cost—revenue and market share losses in the future, especially in competition with the rising tech companies in China that have access to data from a vast local market with few data privacy concerns and rules and a more intrusive government. Policymakers will have to grapple whether increased regulation and rules about data protection will hinder future competitiveness and present a society with a crucial tradeoff in the services that AI and machine learning could create.

The bottom line is that neither European regulators nor Mark Zuckerberg alone will secure our digital futures around the world: ensuring privacy, transparency, and innovation takes work. There are no shortcuts. Regulators, consumer advocates, and technology policymakers will have to do the hard work of developing an independent vision that offers checks and balances. Governments will need the political will to institute regulations that strike a balance between local realities and global competitiveness. The EU regulations and Facebook’s serial stumbles and apologies can, at best, be a good place to start this essential conversation. But that conversation has a long way to go.

Bhaskar Chakravorti is the Senior Associate Dean of International Business & Finance at The Fletcher School at Tufts University and founding Executive Director of Fletcher’s Institute for Business in the Global Context. He is the author of The Slow Pace of Fast Change.



