Our Rights in Social Media – Or the Impact of Facebook & Co. on Democratic Discourse

Text and Concept: Dr. Judit Bayer / Editing: Dr. Astrid Burgbacher, Alison Seiler / Illustrations: Dr. Judit Bayer

Social media platforms appeared only relatively recently in the history of public communication, around 2004. Before platforms, publishing on the internet was possible only for tech-savvy people, or only within smaller circles (such as private bulletin boards).

The convenience of easy publishing and sharing quickly made platforms a beloved application all over the world. Their technology allows people around the globe to express themselves so that, potentially, the entire world can hear them. Platforms aggregate and redistribute content, thereby performing a task that used to be played by agencies. They connect supply and demand, as demonstrated by commercial and booking platforms such as eBay or Airbnb. The same mechanism connects users – those who write with those who read, share, and like – on social networking platforms.

The Problem of Ranking

The abundance of content needs to be ordered somehow. Many commercial platforms let users choose the logic of ordering, offering options such as most recent, price, or relevance. In contrast, the largest social media platforms have not yet offered such ranking options, although they do allow searches, setting preferences, and subscribing to certain fellow users. The bulk of the content, however, is ordered according to the preferences of each individual user – or rather, according to what platforms assume those preferences to be. Platforms base their assumptions on observed user behaviour, on information shared by users themselves, and on data inferred about each user. This is, of course, intrusive to user privacy, but the current legal rules allow platforms to ask for and receive permission from users in a way that ensures neither conscious awareness nor fair treatment. In 2022, the European Union passed a package of regulations that apply directly to online platforms in all Member States. My focus is specifically on the Digital Services Act (DSA), which requires very large online platforms to offer at least one alternative ranking option that is not based on user behaviour. Additionally, all platforms must be transparent about the ranking criteria used in their recommender systems, meaning they must explain to users why they are seeing the content that they see.
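To make this concrete, here is a minimal, purely illustrative sketch in Python of what offering a non-profiling ranking option alongside a behaviour-based one could look like. All names (`Post`, `rank_personalised`, `rank_chronological`, `explain`) and data are invented for illustration; this is not the text of the DSA, nor any platform's actual system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    predicted_engagement: float  # score inferred from observed user behaviour


def rank_personalised(posts: list[Post]) -> list[Post]:
    """Behaviour-based ranking: order posts by what the platform
    predicts this user will engage with (profiling)."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)


def rank_chronological(posts: list[Post]) -> list[Post]:
    """An alternative ranking not based on profiling, of the kind the
    DSA requires very large platforms to offer: newest posts first."""
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)


def explain(post: Post, mode: str) -> str:
    """Transparency duty: tell the user why this post is being shown."""
    if mode == "personalised":
        return (f"Shown because our model predicts you will engage with it "
                f"(score {post.predicted_engagement:.2f}).")
    return f"Shown because it was posted on {post.posted_at.date()}."


feed = [
    Post("alice", "Local election debate tonight",
         datetime(2023, 5, 2, 9, 0, tzinfo=timezone.utc), 0.31),
    Post("bob", "Cute cat video",
         datetime(2023, 5, 1, 18, 30, tzinfo=timezone.utc), 0.92),
]

# The user has selected the non-profiling option:
for post in rank_chronological(feed):
    print(post.author, "-", explain(post, "chronological"))
```

Note how the two orderings diverge: the engagement-based feed would put the cat video first, while the chronological option surfaces the election debate simply because it is newer, with an explanation attached in both cases.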

Beyond private content, commercial and political content has also found its way onto platforms. Blending the private sphere with the professional and commercial spheres exposes specific vulnerabilities of users. By touching upon these vulnerabilities, social media platforms can engage viewers' attention with content that appears, at first sight, personally relevant. This practice, however, is not only intrusive to privacy but also damaging to public discourse.

The Opinion Power of Platforms

The biggest platforms have significantly more users than any media company ever had: Facebook alone has 2.85 billion monthly active users. Some platforms have become so central to communication and daily use that in some parts of the world they have taken over the role of the internet as such. This happens when, due to an agreement between the giant platform and a mobile internet access provider, free access to Facebook is included in the basic subscription fee, whereas browsing the open web remains costly.

Even though the content they make accessible is not their own, platforms exert power over users' access to information through the ranking of that content. They increasingly resemble classic media service providers, which proactively decide about the content they transmit even when they did not create it themselves. For example, when a broadcaster airs shows, movies, or talk shows, it is legally responsible for the entire content, even if it was produced not by the broadcaster itself but by invited contributors.

While platforms were initially more neutral towards transmitted content, they have gradually increased their role in determining which content gets priority and which gets suppressed. Through this consistent activity, platforms wield considerable opinion power. True, they represent only one element of the informational environment, and not every individual uses them. However, they are available almost everywhere around the globe, and the number of their users is staggering.

The Political Power of Social Media

Since 2016, several instances have come to light in which public discourse was dramatically influenced by social media communication: the Brexit referendum, the 2016 American presidential election and subsequent election campaigns around the world, and the informational controversies surrounding the COVID-19 pandemic (also called an “infodemic”). These episodes demonstrated that public opinion can be influenced effectively and rapidly, with less rational content than had previously been typical in political communication. All over the world, this was interpreted as a prompt for regulation.

Regulatory Initiatives for Social Media Platforms

One of the most prominent of these initiatives is the DSA cited above. The DSA requires that platform providers owe “due diligence” to users, which, among other things, includes adhering to self-regulatory codes that cover areas state regulation could not encompass. Such codes are to be adopted to tackle disinformation, online advertising, accessibility, and protocols for addressing crisis situations. The original Code of Practice on Disinformation was strengthened in 2022 and now includes concrete measures whose implementation is tracked through Qualitative Reporting Elements and Service Level Indicators – which will foster voluntary compliance and allow consistent oversight.

Regulation of Value-Based Content Ranking in Social Media?

What has not yet been addressed through such regulatory instruments is whether platforms may distinguish (or, in other words, discriminate against) content based on values. On the one hand, they already do this when they apply their terms of service, for example by rejecting nudity, violence, and the like. On the other hand, they are supposed to be neutral on other matters, such as religion or political opinion. However, we have no guarantees that this will remain the case, and there have already been accusations that neutrality is not always a given. In the absence of legal restraints, platforms could apply value-laden criteria for the ranking and presentation of content. If that happens, they should at least be transparent about it in their terms of service.

The research within my WiRe project focuses on whether it would be constitutionally possible to oblige platforms to refrain from discrimination and to ensure viewpoint diversity in their services. Would this require actively tailoring their content-ranking algorithms, allowing users more options, or merely refraining from discrimination?

Ongoing Legal Initiatives and Their Weak Point

In response to this question, there have already been sporadic regulatory reactions. Most importantly, the 2020 amendment of the German Media State Treaty introduced a requirement for social media platforms (called “media intermediaries” in the Treaty): when transmitting journalistic content, they may not systematically discriminate. This does not apply to user-generated content, and it is very questionable whether such an obligation could realistically be expected and technically realised for user-generated content, given its high variability. Further, it is not entirely clear whether the prohibition extends only to systematic removal or also to systematic downranking.

Another policy instrument also addresses this question: the proposal for a European Media Freedom Act (EMFA), the most recent element of the European regulatory package envisaged in the European Union's 2020 Democracy Action Plan, which intends to lay the grounds for a robust democracy, especially by strengthening the information and media environment. It aims to create a level playing field for media freedom and pluralism across the European Member States.

The proposal outlines privileges and obligations for media service providers. Online platforms that remove professional media content for being contrary to their terms of service would be obliged to consider the impact of that decision on freedom of expression, including from the perspective of media pluralism. If the online platform is “very large” in European terms (i.e. it has more than 45 million users within the European Union, roughly 10% of the EU's population), it would additionally be required to communicate the reasons to the media service provider before the removal or suspension.

If, in the opinion of a media service provider, such removal takes place frequently and without sufficient grounds, the media service provider is entitled to engage the online platform in a dialogue to find a common solution. To summarize the effect of this envisaged regulation: removing media content from online platforms would not be prohibited, but an enhanced level of fairness would be required of platforms when they do so. (The current text of the proposal is subject to meaningful changes before the final version is adopted.)

Nevertheless, the EMFA does not address content that is not removed but systematically deprioritised. Whether removed or hidden, the outcome is very similar: users will not access the content.

To date, none of the cited legal instruments requires providers to refrain from discriminating among users on the basis of political viewpoint, gender, race, nationality, or other protected characteristics. For private corporations, this obligation is not self-evident, because only states are obliged to respect international human rights (although there are some exceptions).

What is the Legal Basis for Restricting a Platform’s Freedom?

The second part of my research takes a step further: what exactly is the legal justification for regulating this new area of digitalisation (specifically platforms here, but with implications for AI regulation and data policy), and can we find a sound and solid legitimate basis that can serve digital society for a long time?

The theoretical underpinning of this regulatory programme has not yet been elaborated. From their own perspective, the private enterprises merely make their best effort to serve their customers (the platform users). They claim that everything they do happens with user consent, and that nobody is compelled to use their services.

However, from the perspective of society, we can observe micro human rights violations in the form of privacy infringements and infringements of informational rights. When users do not get access to relevant or accurate information on public matters from a source they expect to provide it, their right to information is violated. This right is the passive side of freedom of expression, and the social goal behind the right to freedom of expression.

Users are not always aware of these micro violations, and they often even support these structures with their consent and cooperation. That consent, however, is frequently not given in full knowledge of the circumstances.

The Bigger Picture: Why We Need a Different Perspective On Micro Human Rights Violations

We are also aware that these micro distortions accumulate on a global scale and lead to massive societal impact, such as the infodemic (disinformation about the pandemic). Big data also allows us to estimate the number of affected persons and shows us that each minor move has an amplified impact.

However, the violations are so minor at the individual level that pursuing them individually does not lead to individual remedies within the human rights system. Classic Western human rights theory prefers individual rights to group rights, because throughout the history of mankind, individual rights were often sacrificed for a so-called social interest. Today’s technology, however, helps us to concretise the effects of wrongdoing at the individual level and to connect these with a greater social interest: we can provide evidence that a general transgression is composed of violations of the rights of many people.

In sum, individuals in the online environment can and should be treated as a collective group. This would open a horizon onto the mutual interconnections between individuals and direct our attention to the subtle rights and interests that are elusive at the personal level, while an ongoing awareness of the diversity of these collective groups is also needed.

Achieving a Productive Role for Technology to Advance Democracy and Human Rights

This phenomenon has wider repercussions in the networked environment, where all persons are mutually interconnected. The consequences of digital actions spread like waves in a sea: they expand and escalate – or sometimes taper off, for that matter. But they always leave traces, which enables us to follow up on them. The same logic applies to other digital phenomena, such as personal data management and the use of AI. Legal scholarship must therefore prepare for the coming era, which will be defined by digitalisation and AI, by rethinking its concepts.

Social Media Platforms Shape Opinions Even in the Private Realm

In a diverse informational environment, people access a variety of different sources, and they also consult their friends, family members, or colleagues about the information. They then shape their opinions about controversial issues after having reflected on them in their social environment (see Lazarsfeld’s two-step flow theory). Today, however, even personal and social interaction among friends and family often occurs through social media, which further filters this social environment by creating closer connections with those family members or friends who hold similar views. The posts and comments of these like-minded friends and relatives are shown to users more often than other, new perspectives. People also tend to unfriend those who post and share views sharply conflicting with their own. This phenomenon has been described as living in a filter bubble.

The concept of a bubble has since become disputed, because it is possible to break out of the bubble and consult independent information if one actively searches for other sources. Nevertheless, we can still describe this as a filtered information environment (even if not a closed bubble), in which people’s ideas and reflections about what reality is are mirrored back by like-minded friends (including online “friends”).