Evolving digital environments and their effects on citizens’ welfare in the EU and Hungary

2021-12-10

About the project

Political Capital, in the framework of a joint project with the Heinrich Böll Foundation, has launched a substantive debate on the regulation of social media, with a special focus on its legal and ethical implications, the spread of disinformation, radicalization, and their impact on democracy. Given that new approaches to regulating the digital environment can play an important role in improving democratic processes, media consumption habits and the quality of the rule of law, Political Capital aims to improve understanding of these issues among politicians, experts, academics and the general public.

Through this project, Political Capital explored new approaches to and contemporary debates about the digital space, as well as possible ways of regulating it. As part of this, we initiated a debate with a number of national and international experts who have been working on digital regulation and digital rights for years. The invited guest authors' articles were published on Political Capital's website and blog, followed by a roundtable discussion with experts and a podcast on the topic.

To conclude the project, this paper summarizes the developments in social media regulation in 2021, incorporating the experts' views.

Background

In recent years, experts' and academics' approaches to how social media platforms and big tech companies should and should not operate have changed considerably. Deregulation has been the default mode since the rise of the internet in the late 1990s, and the results of this experiment speak for themselves. Initial optimism about social media has turned into an open expression of growing political, ethical, legal and economic concerns about tech companies. In the past year alone, nearly 50 countries have introduced new regulations affecting the tech sector. Various countries, including Germany, Turkey and Russia, have already adopted social media regulations, but there is more to come.

In 2020, leaders of the big four – Amazon, Apple, Facebook, Google – had to testify before the United States House of Representatives and Senate over their potentially anti-competitive practices. At the same time, the events that unfolded at the United States Capitol on 6 January 2021 highlighted what we already suspected: social media platforms are a hotbed of conspiracy theories, disinformation and extremist movements, and can lead to violent action in a matter of moments. While we have known this from previous cases (for example, the wave of violence against the Rohingya in Myanmar clearly involved social media, particularly Facebook), the impact of violence in the center of the Western world was sobering, as was the role of social media in fueling the infodemic around the coronavirus outbreak. These developments have given new impetus to the regulation of social media platforms.

2021 will most probably mark the start of a new era in which big tech and social media face ever greater scrutiny. The year started with Australia passing a new law forcing big tech platforms to pay publishers for their content. During the preparation of the law, there was a serious conflict between the Australian government and Facebook, which eventually ended with the adoption of a compromise version. The precedent-setting legislation encourages the tech giants and news agencies to negotiate payment agreements with each other and obliges Facebook and Google to invest tens of millions of dollars in local digital content.

In 2021, OECD member states successfully moved forward with the overhaul of the global digital tax structure, with more than 130 countries joining the reform proposal. The two-pillar package - the result of negotiations coordinated by the OECD for much of the past decade - aims to ensure that large multinational corporations pay tax where they operate and generate profits, while providing the international tax system with the certainty and stability it needs. But this is only the beginning of reforming the international tax system.

The debate around social media has reached Hungary too, but it is mostly limited to the false myth of the "conservative bias" of social media. Many conservative and far-right politicians accuse social media platforms of unjustly and disproportionately restricting right-wing posts and sites. No research has yet backed up these accusations (indeed, research by New York University has explicitly refuted the theory), but there are still plenty of problems.

Minister of Justice Judit Varga announced in January 2021 that the ministry was working on legislation to regulate Facebook, to be introduced in the spring. The terms and objectives of the planned regulation are still not clear, and the minister has since seemingly left the push for regulation of digital platforms to EU experts and the task of regulating Facebook in Hungary to the Competition Office, which has already imposed billions of forints in consumer protection fines on the company - although these were later annulled by the Curia.

Executive summary and recommendations

The global debate on the regulation of big tech companies - especially Facebook - has been going on for years and has already reached Hungary, but less attention is paid to the fact that the various restrictive measures should be examined in light of the domestic political conditions and the state of democracy in the countries concerned. For example, limiting the spread of fake news and misleading information is undoubtedly a noble goal, but the same restrictions that have a moderate effect in a developed democracy may have much more extreme effects in a less democratic state, and may be aimed at silencing voices critical of the government. Although the issue of regulating the digital environment – and social media – is extremely complex, it can be traced back to three basic problems:

  1. the transnational challenges presented by the internet,
  2. the business model of Big Tech,
  3. the psychological vulnerability of content users.

Although trust in social media is declining, the opportunities these platforms offer remain in great demand, so eliminating them cannot be a serious objective; making them more ethical and less risky can be.

In our view, the desirable principle is to reconcile the legitimate financial interests of technological platforms with the interests of the community, with minimal interference with freedom of expression and with the least possible leeway for manipulation. Contrary to popular perception, the legal, political, and technological tools to do this are already, or will soon be, at our disposal, and existing media regulations can help us develop them. However, content regulation is based on human principles and human judgment (which is desirable), and therefore will never be perfect and cannot serve the needs of all stakeholders.

The starting point is to make sure that our laws are equally enforceable in the online space, and international harmonization of digital services legislation can ensure a transparent and predictable operating environment for these companies and their users. Enforcing transparency should be the key element of state intervention: making the operation of content regulation, algorithms, and targeted advertising transparent would make companies' decisions and actions more accountable. Researchers, journalists and civil society should have access to the platforms' databases through appropriate procedures, so that they can comprehensively study the processes in the digital space and develop suitable responses to the challenges.

“The greater transparency would also put public pressure on the companies to do a better job at content moderation by putting more resources into the work, refining their standards, and improving the quality of their enforcement actions. There is very little downside to transparency.” – Mark MacCarthy

Based on our research, we offer the following suggestions:

  1. Reform of the content regulation system should be undertaken in a way that respects freedom of expression. Platforms should be required to remove clearly illegal content within a short timeframe, and, in the event of a failure to remove or a wrongful removal, courts should have the power to order the platform to correct its decision. Given the likely high volume of complaints, e-courts should be set up, possibly at the EU level, where both users and platforms can appeal. Users and authorities could report content that is not in breach of the law but is considered harmful, and the platform could then decide to restrict or remove it under its own set of rules. Internal and external redress should also be available.
  2. Decisions on content regulation should not be automated but, as far as possible, be made by a real human being – even if assisted by algorithms. This does not mean that everything should be done by humans, but that there should be meaningful human control over decision-making. Algorithms and automated solutions (or artificial intelligence) can assist the human operator in detecting, identifying, and flagging problematic content; the operator can then decide how to sanction it. The starting point, however, is that the judgement of human activity should be undertaken by humans, not AI; any deviation from this should be carefully justified. For example, a capacity limitation due to high case volume may not be a sufficient reason, and the costs involved must be borne by the company.
  3. More should be spent on fact-checking and on moderating content that endangers national security or is potentially violent. This is a manpower-intensive activity: Facebook's fact-checking in Hungary and Slovakia, for example, is carried out by one-person editorial teams - clearly inadequate, even if the individuals involved seem to be doing a good job. Policymakers can encourage technology companies to build up their fact-checking capacity in order to avoid further state interference.
  4. Reducing psychological harm: in recent years, several studies have concluded that passive social media use harms users' self-esteem, and thus their mental health, through social comparison, while active use has positive effects. More intensive public discourse and targeted education are needed on the negative psychological effects of social media on individuals and communities, so that users can make informed choices about how much and for what purpose they use these platforms. In addition to the public sector, the civil sector can do much to raise awareness: inform, educate, and encourage caution, both in the general use of social media and in critical thinking.

A global, or at least European, regulation is needed to develop the right framework. Regulation at the national level has not proved effective due to the transnational nature of the challenges, and the issue has recently been rightly elevated to the international level. In the case of Hungary, after discussions with the companies concerned, national authorities and EU bodies, the government has also come to the conclusion that only common European regulation can provide a framework that ensures the supremacy of national and European legal institutions and the transparency and accountability of these companies.

At the very least, it is worth waiting for the proposed EU legislation to be finalized and then, if the legal basis allows, regulating the area further. In addition, transatlantic harmonization or cooperation should be considered in order to create a more coherent international environment.

With the draft Digital Services Act (DSA) and Digital Markets Act (DMA), we believe that EU legislation is on the right track on most issues (transparency, differentiated responsibilities and multi-level auditing) and can contribute to a truly comprehensive and modern regulatory environment. However, there are also a number of questionable points - notably the already difficult task of keeping government regulators separate from decisions on harmful but not illegal content - where national and corporate lobbying risks shaping what counts as the 'right way'. The corporate lobby is interested in reducing the scope of the legislation and minimizing the resulting loss in revenue. Conversely, some governments, taking advantage of the securitization of the issue and of public sentiment, want to increase the influence of national authorities and give them a more active role in the content moderation decisions of platforms.

“The social media regulator should not be allowed to second-guess the content moderation decisions of social media companies and it should have no role in determining the content standards the companies use. (…) This is needed to prevent a government regulator from imposing a partisan political perspective on the content moderation decisions of social media companies.” – Mark MacCarthy

EU legislation should limit the state's involvement in the content management of platforms, while also requiring companies to be as transparent as possible and to base content moderation on human decisions. This could create a digital environment in which companies do not under-regulate and governments do not over-regulate these social platforms.

The latter risks pushing the EU away from a regulatory, normative model towards an actively interventionist one, and therefore towards authoritarian practices. This is not to say that the DSA in its current form is building the Russian or Chinese model, but ad hoc interference and active state monitoring of content moderation decisions could easily lead to considerable restrictions on internet freedom.

The role of civil society is therefore now to push national governments and EU bodies in the EU legislative process in a direction that results in comprehensive but moderate regulation which strikes a balance between security and the protection of fundamental rights.

Please find the full study (pdf) here.