Strengthening Democracy Through Algorithm Regulation: From the Periphery to the Center of EU Rhetoric

Across the digital economy, Europe has been largely absent. The biggest tech companies are not based in Europe, and even European companies often run their businesses on infrastructure provided by non-European companies.

Yet Europe has put the goal of technological sovereignty on the agenda. The EU may not yet be capable of technological sovereignty in the production of technology, but it can further establish itself as the world leader in Internet regulation, especially when it comes to data and privacy, seeking the golden mean between regulation and connectivity. Whether this pursuit will be accompanied by an effort on the part of the EU to nurture its own tech ecosystem remains to be seen.

What we can say with certainty is that the EU has managed to reposition data protection law from the periphery of legal consciousness to the center of intense legal and media attention. And it has done so primarily through the enactment of the General Data Protection Regulation (GDPR) and its related case law. While such regulation is indeed necessary, we highlight a tendency for EU data protection law to focus on legalistic mechanisms governing data transfers rather than on protection in practice, particularly with regard to the exploitation of data and microtargeting in the commercial and political context.

Regulation of data transfers needs to go beyond formalistic measures and legal fictions, so that the EU adopts a pre-emptive rather than a firefighting role. And, perhaps, we should start thinking of solutions that go beyond the digital ecosystem to tackle the problems within it.

Data protection through legal means

The European Union has celebrated European Data Protection Day on the 28th of January since 2006. On that very day, in 2014, Viviane Reding, Vice-President of the European Commission responsible for Justice, Fundamental Rights and Citizenship, spoke on “A data protection compact for Europe”, arguing that “data collection by companies and surveillance by governments are connected, not separate”. She went on: “Data should not be kept simply because storage is cheap. Data should not be processed simply because algorithms are refined. Safeguards should apply and citizens should have rights.”

Those statements can be read as a precursor to the GDPR, EU law’s regulatory tile in the mosaic of data protection and privacy. The GDPR, in force since May 2018, has two unique elements.

Firstly, through the GDPR, the EU enshrines data protection as a fundamental human right (Recital 1). Article 8(1) of the Charter of Fundamental Rights of the European Union and Article 16(1) of the Treaty on the Functioning of the European Union (TFEU) also provide support for the assertion that everyone has the right to the protection of personal data concerning them.

Secondly, the GDPR also applies to data controllers and processors outside of the European Economic Area (EEA) if they are engaged in the “offering of goods or services” (regardless of whether a payment is required) to data subjects within the EEA, or are monitoring the behaviour of data subjects within the EEA (Article 3(2)) – regardless of where the processing takes place. This has been interpreted as intentionally giving the GDPR extraterritorial jurisdiction over non-EU establishments if they are doing business with people located in the EU.

The GDPR acknowledges that it would make no sense for the EU to assert fundamental rights for EU nationals, or for a particular geographic region, but not for anyone else. Given the open nature of the internet, there had to be one data protection act to rule them all. It was a warning to companies everywhere that they would not evade the reach of European law simply by being located outside the EU. And the extent to which the EU’s vision as a global rule-maker in the context of data regulation is fulfilled is constantly tested judicially in the European public order.

Data-related concerns beyond the ambit of the law: promoting democracy or auctioning democracy off to the highest bidder?

While the European legal order has succeeded in recognizing data protection as a fundamental human right and has indeed established itself as the regulatory leader in the digital sphere, this does not mean that everyone has suddenly grown complacent about digital platforms’ growing interference with data.

There is something unique about data – the simple rule that the higher the consumption of a good, the lower its reserves become, does not hold for it. Data does not run the risk of becoming scarce. Not in the sense that we might have endless data – the lack of efficient compression algorithms means that a storage shortage could be imminent in the next decade, unless large and costly data centers are erected. But in the sense that one piece of information about someone can be used again and again by different stakeholders without losing its value. The political implications of the infinitely reusable nature of data are of great importance and of potentially great value. And what is the main threat? Microtargeting, i.e. the platforms’ unique capacity to launch ultra-successful campaigns through direct-marketing data mining techniques that involve predictive market segmentation.

I. Multiplying fake news

Modern media have multiplied the possibilities for the (cross-border) dissemination of marginal, uninformed voices, since the great achievement of the democratization of information has abolished the filters that once prevented false information from gaining access to the public sphere. The yellow press and vulgar “reality” shows have always existed. But their audiences were relatively homogeneous, and the multiplicity of interaction and the socialization of believers were absent; an unbalanced conspiracy theorist had difficulty finding interlocutors. The internet solved this problem. It brought together people with marginal views, who created communities and groups of like-minded people, within which the confidence and dynamism of marginal views are multiplied.

AI fact-checking could be a solution – one way of interpreting the comments of the President of the Commission in her State of the Union Address: “We want a set of rules that puts people at the center. Algorithms must not be a black box and there must be clear rules if something goes wrong”.

But even statements that could be debunked in a seemingly straightforward manner – for instance, the now-infamous Brexit campaign claim that the UK would save £350m per week by leaving the European Union – present a thorny challenge for automated verification. Two risks lurk. On the one hand, there is bias – the regulators’ own stereotypes, prejudices, and partialities could be projected onto automated fact-checking. On this front, we can only hope for more sophisticated deep learning mechanisms.

On the other hand, there is the dilemma of balancing free speech and access to accurate information. The argument is that unless they cross specific legal red lines – such as those barring defamation – fake news stories are not illegal, and as such, regulatory bodies have no legitimacy in prohibiting or censoring them. The basis for such an argument is often found in Article 10 of the ECHR (freedom of expression), the First Amendment, and international free expression safeguards. Nevertheless, the superficial protection that free speech rhetoric offers to fake news does not nullify the danger it poses to open discourse, freedom of opinion, and democratic governance.

The rise of fraudulent news and the related erosion of public trust in mainstream journalism pose a looming crisis for free expression. Usually, free expression advocacy centers on the defense of contested speech from efforts at suppression, but it also demands steps to fortify the open and reasoned debate that underpins the value of free speech in our society and our lives. The championing of free speech must not privilege any immutable notion of the truth to the exclusion of others. But this does not mean that free speech proponents should be indifferent to the quest for truth, or to deliberate attempts to undermine the public’s ability to distinguish fact from falsehood. The European Democracy Action Plan, to be unveiled in late 2020, represents the next step in the EU’s regulatory fight against fake news, in the spirit of countering precisely these dangers.

II. Polarizing content

If the reproduction of fake news is the first symptom, the second one would be militancy. An internal Facebook study documented that the platform’s algorithms exploit the human brain’s attraction to divisiveness and polarization. In his fascinating book Thinking, Fast and Slow, Daniel Kahneman distinguishes two “systems” of mental function: “fast” thinking works automatically, spontaneously, uncritically and impulsively; “slow” thinking requires effort, rational assessment and strategic reasoning. “Fast thinking” is the function that flourishes on social media, where reaction is decisively influenced by the image, the context, the group dynamics, the echo chamber, the mass. Without filters and balances, “fast thinking” spreads false and divisive speech like fire through dry grass. Social media then becomes a weapon for conspiracy theorists and demagogues.

Democracy needs the resistance of “slow thinking”. It requires exhaustive dialogue, negotiation, the seeking of consensus, and compromise. These are the ingredients of liberal democracy, which becomes devoid of meaning when arguments are replaced by lies and insults, and the understanding of the other’s position by mob e-lynching and the “cancelling” of ideological opponents. Liberal pluralism has always rested on an optimistic premise: if you allow all views to express themselves freely, to compete, to clash with each other, the truth will emerge and prevail. But just as free markets require regulation to work effectively, so does pluralism require rules. When all the flowers are left free to bloom, the parasites grow bigger and smother them.

III. Creating echo-chambers

But a third, and perhaps even more concerning, reality that hinders our democracies is neither the magnifying effect that Facebook has on fake news, nor the divisive rhetoric, but rather its contribution to the creation of echo chambers.

Before the House Financial Services Committee, Alexandria Ocasio-Cortez asked Mark Zuckerberg whether she would be able to run advertisements on Facebook targeting Republicans in primaries, claiming that they had voted for the Green New Deal. His answer was: “Yes, in most cases, in a democracy, I believe that people should be able to see for themselves what politicians, that they may or may not vote for, are saying or think so they can judge their character for themselves.”

And herein lies the problem. Politics used to be part of the public sphere. When something is said in public, people may judge for themselves, but they also judge together with others. When a piece of information is displayed to the public, we assume that if it is incorrect, illegal, or fake, someone with the knowledge, interest, or incentive to debunk it (e.g. the party being damaged by it, or the authorities if it violates a law) will do so. We rely on public scrutiny and public discourse to counter the asymmetry of information and of power between the broadcaster and the recipients of a message. The public sphere creates an informal system of checks and balances, and thus of accountability – notions that are lost in the echo chambers of digital platforms. Due to microtargeting, there is no one to jump in and doubt the accuracy of a message, simply because that message would never have been sent to anyone who would care to react, or who would have the knowledge to react. Microtargeting thus creates an asymmetry that strongly favors the broadcaster of the message and puts the recipient at a disadvantage.

Reform or regulate?

It is true that digital platforms have taken steps to self-regulate. But self-regulation is driven by a “we know we have more work to do” mentality, a line that platforms seem to repeat whenever they are found at fault. Such rhetoric surfaced, for instance, after the #StopHateForProfit advertising boycott campaign launched against Facebook. The phrase is both a promise and a deflection. It is a plea for unearned trust – give us time, we are working toward progress. And it cuts off meaningful criticism – yes, we know this isn’t enough, but more is coming.

Platforms frequently use the unfathomably vast amounts of content they host as an excuse for inaction. But this defence is also an admission: they are too big to govern responsibly. There will always be more work to do because Twitter’s or Facebook’s design will always produce more hate than anyone could monitor. How do you reform that? Or, perhaps an even more pertinent question: can you reform that, or do all signs point to a system beyond reform?

The EU has opted for the path of regulation. And indeed, the EU’s steps towards regulating digital ecosystems are welcome, as we cannot rely on self-regulation for something so powerful and so (potentially) dangerous to the pillars of our democracies.

Tech companies might resist, but negative externalities will always justify such efforts. A recent example of platforms and regulators reaching this equilibrium comes from Facebook’s threats to leave Europe over proposals for new data-sharing regulations. Complying with those new regulations would be complicated, restrictive, and expensive. But with a complete pull-out, Facebook would lose considerable money and market share. As such, the most likely scenario would eventually see Facebook forced to establish EU-only data centers.

But at the same time, we have to be aware of the limits of the regulatory effort. Perhaps we have to face the reality that no matter how much we regulate, something will be lacking. It will always be the case that another platform will emerge in a different jurisdiction, or a new technology will make its appearance, rendering existing legal regimes outdated and redundant. Finding ways to live with these technologies is just as important as finding ways to regulate them. As such, policies beyond the digital ecosystem that could indirectly strengthen our collective immunity against its negative implications should be welcome. Investing in public education is a good first step, particularly when it comes to demonstrating the difference between passionate argument and hate speech, between heterodox views and public paranoia, between good journalism and fake news trash. Admittedly, easier said than done.