This article is part of the Forum Network series on Digitalisation
In the age of targeted advertising and algorithmic discrimination, digitalisation has come to pose significant risks to our most fundamental civil liberties. In the words of Sir Tim Berners-Lee, the challenge is therefore not only to get the other half of the world connected, but also to ensure that the rest of the world wants to connect to the web we have today. Could a set of universal digital rights be the only viable way out of this 21st century dilemma?
Released just days before personal data abuses made global headlines, Sir Tim Berners-Lee’s annual letter took on a particular resonance this year. As the inventor of the web celebrated the 29th anniversary of its creation, it became clear to millions that personal data could be leveraged not only for commercial but also for political purposes. In the words of Sir Tim, the power concentrated among a handful of dominant platform companies had made it possible to weaponise the web at scale, and we could not count on big tech to stop it.
Harnessing the benefits of digitalisation, while managing the risks it poses to our most fundamental civil liberties, has become one of the greatest regulatory challenges of our time. The Forum session Universal Digital Rights & Digital Inclusion therefore explored the pathways to forestall a dystopian future and to ensure that digitalisation remains a force for good. Indeed, Rebecca MacKinnon, Director of Ranking Digital Rights, stressed from the start that digital rights are not a set of special rights, but rather an extension of human rights into the digital realm. And in this regard, concerns appear to have grown ever more pertinent.
Waking up to digital woes
Data-driven innovation is now surpassing physical trade as a driving force of the global economy. But for users of digital services, the monetisation of their personal data has often become the hefty price to pay for free access. In the words of Julia Hobsbawm, author of Fully Connected, it should not be forgotten that databases are really “people-bases”. And Philip J. Jennings, General Secretary of UNI Global Union, stressed that when an entity possesses the digital profiles of millions of people – profiles which can be used to manipulate their judgements and impulses – we have something to worry about.
Important initiatives such as the Ranking Digital Rights index foster greater transparency in the practices of leading players. Yet when assessing the policies and practices of the world’s most powerful internet, mobile and telecommunications companies, Ms. MacKinnon’s index found that the best performers scored only a “D”. It would nevertheless be a mistake to conclude that human rights are doomed to be sacrificed on the altar of technological progress. On the contrary, panelists gave grounds for hope, suggesting important pathways toward a digital future more respectful of civil liberties.
“Move fast and break things”: waking up to the consequences
If digital technologies reshape society, it should first be noted that technologies can themselves be shaped so as to steer innovation in a direction beneficial to society. Ms. Hobsbawm diagnosed an inherent set of contradictions in our current internet era, where something which is structurally democratic and free has nevertheless become a monopolistic vehicle. However, the author of Fully Connected stressed that technology should not be demonised.
To curtail human rights abuses in the digital realm, it is necessary to first look at the culprits and – sometimes inadvertent – enablers. One does not need to believe in a Silicon Valley-run conspiracy to acknowledge that humans make mistakes. Whilst the construction of a road or building is subject to environmental impact assessments, no such procedures are in place when it comes to digital infrastructures. As Ms. MacKinnon observed, entrepreneurs and CEOs start out with good intentions, but often fail to anticipate the potential abuses of the services they provide. Ms. Hobsbawm concurred, arguing that Mark Zuckerberg was probably sincere when he claimed he did not know how powerful Facebook’s advertising was going to be. Yet, now that this risk is better understood, it is clear that complacency is no longer an option. In the words of Ms. MacKinnon, we may have gotten here without thinking, but we can now do something about it.
Tech remedies to digital woes
Part of the solution may lie in the design of our information technology services. Targeted advertising – and the business models and data collection processes built around it – has emerged as the primary fuel of our digital economies. In the process, it has, however, enabled private interests and malicious actors to surreptitiously shape our behaviours on an unprecedented scale. For our information ecosystem to remain compatible with the kind of fair and democratic society citizens want, moving our business models away from targeted advertising may therefore prove indispensable. Is such a scenario realistic? Sir Tim himself appears to think so, and has already started developing products to that end, making use of peer-to-peer connectivity and blockchain technology.
Whether such solutions manage to take off in the face of powerful network effects remains to be seen. As the Forum explored, there may also be instances, notably in healthcare, where our data can form part of a public good and should thus be harnessed in a respectful manner for the benefit of society as a whole.
Thankfully, other technological solutions more suited to the shorter term have already emerged. Panelist Rand Hindi, who founded a company developing AI-powered voice assistants, contends that the growing demand for privacy makes it possible for newly created companies to build their business models around technologies guaranteeing privacy-by-design. Whilst acknowledging the existence of a trade-off between convenience and privacy when it comes to digital services, the entrepreneur argues that it would be a mistake to believe that a choice must be made between privacy and efficiency. It is a point he strives to prove through his own company, which has developed ways to train its algorithms with fake data, thereby avoiding the need to collect users’ private data to develop powerful AI software.
Toward a cultural shift?
Beyond privacy-by-design, the ability of technological fixes to remedy digital woes may ultimately depend on the capacity of innovators to embrace “ethics-by-design”. Influential voices such as Azeem Azhar note that digital technologies wield unprecedented influence over societal changes due to their scale and speed. As a result, innovators can no longer afford to simply apologise for the unintended consequences of their creations, and truly need to become conscious of the potential effects of the services they design. In the words of Ms. Hobsbawm, there is a dire need for a cultural shift. A cultural shift which, fortunately, may well be underway.
The General Data Protection Regulation (GDPR) constituted a tremendous leap forward in giving individuals control over their personal data. And whilst it remains to be seen how smoothly some of its provisions will work in practice, its role as a driving force for change around the world should not be underestimated. Indeed, a number of countries acknowledge it as an important source of inspiration for updating their own national privacy laws, while none other than leading tech actors such as Apple and Facebook have come to voice support for similar rules elsewhere.
Regulating the digital world: the journey that awaits
For all its merits, it is clear that within the regulatory framework needed for our increasingly digital world, the GDPR remains a milestone, not a finishing line. As researchers from the Oxford Internet Institute note, the greatest risks of Artificial Intelligence and Big Data analytics do not stem from how private information is used, but rather from the inferences that are drawn about us from the collected data. Such inferences, which can be derived from interactions as mundane as search engine queries, can impact our private lives, identities, reputations and self-determination. Yet the researchers find that current data protection laws in Europe fail to protect data subjects from what may constitute the greatest risks in terms of privacy and discrimination.
Whether by reproducing the unconscious bias of their developers or learning particular social preferences from biased data, decision-making algorithms often reflect and amplify pre-existing real-life discriminations under a veil of objectiveness. From predictive justice to the “digital poorhouses” documented by Virginia Eubanks in Automating Inequality, technology is too often used to profile, police and punish the populations in need of support. Ms. Eubanks further warns that William Gibson’s quote “the future is already here – it’s just not very evenly distributed” may well be true, but not for the reasons its author envisioned. Rather than the wealthiest people gaining access to the latest technologies first, she cautions that our digital future may well resemble what is already being experimented on marginalised communities, who have little choice but to give away their privacy and accept the rule of algorithms biased against them if they are to gain access to indispensable resources.
Squaring the digital circle: universal digital rights for digital inclusion
The stakes are high, and in the words of Sir Tim, the challenge is therefore not only to get the other half of the world connected, but also to ensure that the rest of the world wants to connect to the web we have today. To this end, the web’s inventor recently launched a call for a set of fundamental online rights to be supported by public institutions, government officials and corporations alike. For all its pitfalls, connectivity has indeed become an essential dimension of modern life. To be offline today is to be excluded from opportunities to learn and earn, to access valuable services and to participate in democratic debates. And whilst digitalisation may exacerbate some existing inequalities, so would the perpetuation of the digital divide. For the web to remain a public good which connects and serves humanity, a set of universal digital rights may well be the only viable way out of this 21st-century dilemma.