Digital Authoritarianism and the Censorship Bubble in Erdoğan’s Turkey

Image source: Haaretz

According to qualitative data gathered from 161 countries for Article 19’s 2018/2019 Global Expression Report, many countries, including Turkey, display a sophisticated form of digital authoritarianism. Digital authoritarianism is used as a tool for taking control of online infrastructure and of what is shared on social media platforms, enabling surveillance and limiting the freedom of expression that protects the public display of opinion.

More than 400,000 websites are blocked in Turkey as of 2020, among them more than 130 large social media websites and applications that are unavailable to Turkish citizens. These numbers are the result of changes to Turkey’s 2007 Internet Act, which was updated to maintain stronger control over what is posted on social media networks. Over the years, YouTube has faced several bans in Turkey, beginning with the 2007 ban, which Turkey justified as preserving national security, blaming YouTube for “insulting Turkishness” by allowing videos defaming Atatürk to remain online. The ban was briefly reinstated in 2010, and again in 2016 during Turkish involvement in Syria, after videos showing the immolation of Turkish soldiers by Jihadist militants were posted.

Image source: VPN Overview

As of July 2020, the new Social Media Law gave even more powers to the executive to curb the influence of big social media platforms, pressuring them to perform self-censorship or face consequences. The Social Media Law obliges social media companies to store their user data in Turkey and to maintain permanent representatives. Platforms that refuse to instate a permanent representative tasked with content moderation and overseeing the work of moderators could be subjected to advertising bans, bandwidth reduction, and financial fines, effectively rendering them useless. The Law states that requests for removal of content must be resolved within 48 hours; if not, the platform is liable for damages. Non-compliance could thus mean losing a large market of more than 70 million internet users.

For comparison, a similar law exists in Germany; however, there an independent judiciary often acts as a corrective factor in petitions for censorship. In Turkey, the situation is completely different: social media moderators cannot rely on an independent judiciary to evaluate content, as it is the Turkish Criminal Court that often petitions for content takedowns on the basis of defamation of Turkish historical figures or political leaders. Content moderation is thus further shaped by what the Turkish authorities deem necessary to remove.

What are the options for the big social media platforms? They are developing artificial intelligence software to perform content moderation. However, such software would either have to be “tweaked” to censor local-, region-, or country-specific content, or the companies would have to rely on human content moderators who are trained to detect and moderate such content and who are aware of the demands of the governments in question.

The “Online Iron Curtain” or “Social Media Censorship Bubbles”?

While some social media platforms, like Facebook, are reacting by introducing the Oversight Board, which will have a major say in what content is modified, others are reacting by establishing “Social Media Censorship Bubbles”. There are indicators that some platforms are tailoring their censorship to local content, either by hiring a locally-sourced workforce or by creating local-, region-, or country-specific guidelines for content moderation. The emerging trend of outsourcing content moderation to countries where the industry is booming, such as the Philippines, does not offer a solution for locally specific moderation such as Turkey’s, for a simple reason: the politically and societally sensitive context of what is being evaluated.

Social media platforms have admitted that they are aware of and are banning Turkey-specific content, such as content that could be deemed defamatory towards Turkish President Recep Tayyip Erdoğan or LGBT-friendly content. TikTok, an emerging platform that is experimenting with allowing its users to post short political videos in Turkey and China, has admitted that it has had Turkey-specific guidelines targeting the removal of content aimed at President Erdoğan or LGBT content.

YouTube’s penetration in Turkey has been decreasing since the bans started in 2007. Most recently, over the past two years, it fell from 10.47% in September 2019 to 3.85% in August 2020. As a result, Turkey stands among the top ten countries in which YouTube censored its content in 2020. Turkey also usually ranks near the top of Facebook’s annual reports of countries in which the most content is censored, topping the list in 2018.

Image source: Human Rights Watch

YouTube is owned by Google Inc. Since 2010, Google has published an annual Transparency Report for Turkey detailing the country’s requests for content moderation. Defamation accounts for around 50% of all annual removal requests, followed by national security and by privacy and security. This trend has remained almost unchanged over the past decade. On average, Google approves around 35% of removal requests concerning Turkish military-related content, and in 2019 it approved the removal of 98% of content said to defame government officials. There is a noticeable trend in Google’s content removal practices concerning alleged defamation of Turkish officials: they are on a steep rise.

Constant bans of social media in Turkey have led these corporations to exercise self-censorship. The example of Google’s content removal practices indicates that, rather than an “Online Iron Curtain” being cast over the online space, it is more probable that many different “Social Media Censorship Bubbles” are being tailor-made for specific regions or countries.

The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of The Kootneeti Team


Nemanja Dukic

Nemanja Dukic is a teaching assistant at the Corvinus University of Budapest, Hungary where he also works as a research assistant at the Cold War History Research Centre. He can be contacted at
