Harry Fear's Blog

This article was written on 14 Aug 2024, and is filed under Journalism.

The Illusion of Free Speech: How Deep State and Big Tech Collude to Shape Public Discourse

[Header image: a cartoon-style illustration of a social media interface with blurred and redacted content, watched by a crowd of confused and frustrated users, with AI gears and circuit patterns woven into the platform.]

In the wake of the 2024 UK riots, a chorus of voices has risen, calling for stricter regulation of online speech. As a former Westminster political correspondent and foreign correspondent who has covered the rise of the far right across Europe, I find myself compelled to cut through the noise and expose the deeply troubling realities that underpin our current debates on free speech, misinformation, and censorship.

The Facade of an Unregulated Digital Frontier

Let’s dispel a dangerous myth right from the outset: the idea that we’re dealing with an unregulated online world is patently false. In reality, we already live in an era of extensive censorship, facilitated by a cozy collaboration between social media giants and what some might call the ‘deep state’ (or permanent government institutions).

This unholy alliance shapes our understanding of conflicts from Ukraine to Israel-Palestine. The social media platforms we naively regard as bastions of free speech are, in fact, aligning themselves with the national defence interests of the West, dancing to the tune of NATO communications policies.

The practical implications of this allegiance are far-reaching. We’ve witnessed everything from shadow banning to the addition of friction to particular users, posts, hashtags and accounts. Direct censorship of specific images, content, and language has become commonplace. The notion of these platforms as neutral arbiters of information is a comforting illusion – one that’s increasingly difficult to maintain.
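To make the mechanics concrete, here is a purely hypothetical sketch (the function, flag names, and penalty weights are all invented for illustration, not any platform's actual code) of how 'adding friction' can work: a ranking step silently multiplies down the reach of posts carrying opaque policy flags.

```python
# Hypothetical illustration of reach "friction" / shadow banning.
# The flag names and penalty weights are invented for explanation only.

def rank_score(base_engagement: float, flags: set[str]) -> float:
    """Silently down-weight a post's ranking score for each opaque policy flag."""
    penalties = {
        "flagged_topic": 0.5,    # halve reach for a sensitive topic
        "flagged_account": 0.2,  # near-invisibility for a listed account
        "flagged_hashtag": 0.7,
    }
    score = base_engagement
    for flag in flags:
        score *= penalties.get(flag, 1.0)
    return score

# A post from a flagged account on a flagged topic is quietly throttled
# from a score of 100 down to 10, with no notice to the user.
print(rank_score(100.0, {"flagged_topic", "flagged_account"}))
```

The point of the sketch is that nothing is deleted: the post still exists, but its distribution is quietly multiplied toward zero, which is why such measures are so hard for users to detect or contest.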

The Shallow State’s Struggle

While the ‘deep state’ (permanent government institutions) may be satisfied with the current level of control, the ‘shallow state’ (local law enforcement) often finds itself frustratingly powerless. Domestic police forces have hit walls when trying to access technical details from social media companies, particularly when it comes to identifying users.

The refusal of Silicon Valley tech giants to hand over crucial data, such as IP addresses, has repeatedly left police hamstrung.

The Naivety of Our Current Debate

Against this backdrop, our contemporary debates about online speech regulation appear woefully naive. The idea that we can simply police or censor individuals spreading false information – whether intentionally or not – is nothing short of fantasy.

Moreover, we must confront an uncomfortable truth: our allies, as well as our own ‘deep state’, routinely implement organized policies of misinformation and disinformation to suit their needs. The moral and ethical hypocrisies involved in advocating for democratic, free, open, fair, and honest platforms are staggering when viewed against this reality.

Our discussions, as seen on programmes like BBC Any Questions, barely scratch the surface of these complex issues. They remain reactive, shallow, and devoid of the nuanced understanding required to genuinely address the problems at hand.

The Technical Ignorance of Policymakers

Adding insult to injury is the stark reality that many policymakers and politicians lack even a basic understanding of how these social platforms actually function. This technological illiteracy allows them to maintain a level of naivety that is as remarkable as it is dangerous.

Take, for instance, the recent push for AI-driven content moderation. A New York Post article from February 2024 revealed Biden’s AI plan to develop censorship tools, justified by the claim that Americans can’t ‘tell fact from fiction’. This approach naively assumes that AI can objectively determine truth, ignoring the reality that these systems are inherently biased, reflecting the prejudices and agendas of their creators.
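To see why, consider a deliberately crude, invented sketch (no real moderation system works on keyword lookups like this, but the dependence on human-curated training labels is the same): whoever writes the labels decides what the 'objective' classifier calls misinformation.

```python
# Deliberately crude sketch: a "truth classifier" is only as neutral as the
# labels it was trained on. The phrases and verdicts below are invented;
# real moderation models are far more complex, but the dependence on
# human-curated training labels is the same.

TRAINING_LABELS = {
    # Whoever curates these labels decides what counts as "misinformation".
    "official statement": "allowed",
    "leaked document": "misinformation",
}

def moderate(post: str) -> str:
    """Return the verdict of whichever curated phrase matches the post."""
    for phrase, verdict in TRAINING_LABELS.items():
        if phrase in post.lower():
            return verdict
    return "allowed"

print(moderate("New leaked document on defence spending"))   # misinformation
print(moderate("An official statement from the ministry"))   # allowed
```

Swap the two verdicts and the 'objective' system flips its rulings: the bias lives entirely in the training data, not in the algorithm.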

The Hypocrisy of Selective Censorship

The collaboration between social media platforms and state interests has led to blatant instances of politically motivated censorship. A prime example is the suppression of the Hunter Biden laptop story during the 2020 US election. Mark Zuckerberg has admitted that Facebook restricted the story on the basis of FBI warnings about misinformation, effectively interfering in the domestic US political process.

This is not just a free speech violation; it’s a fundamental democratic violation. It exposes the partisan nature of these platforms’ censorship practices, contradicting their carefully cultivated image as neutral arbiters of information. Subsequent testimony has revealed that FBI employees who warned social media companies about a potential ‘hack and leak’ operation knew that the Hunter Biden laptop wasn’t Russian disinformation.

The Illusion of Oversight

Attempts to address these issues through oversight boards have proven to be little more than window dressing. Meta’s oversight board, for instance, is stacked with advisors aligned with think tanks that follow transatlantic alliance NATO policies. These boards serve more as a fig leaf for continued censorship than as genuine arbiters of free speech.

The Suppression of Dissenting Voices

The effects of this system are particularly chilling when it comes to voices challenging prevailing narratives. Take, for example, the case of Declassified UK, an independent outlet that exposes UK defence links with Israel. Despite their presence on social media, their explosive work seems mysteriously limited in reach. While there's no smoking-gun evidence of manual restriction, it's a fair guess that they're being 'reach restricted', especially given their previous official blacklisting by the UK Ministry of Defence.

The policing of Israel-Palestine content has been particularly aggressive since the October 7th massacres, with posts mentioning Gaza effectively delisted or having friction added to their sharing on platforms like Instagram.

Moving Forward: A Call for Realism

So, where do we go from here? First and foremost, we need to get real about the way the world works. We must acknowledge the self-interests of nation-states and the existing levels of censorship, suppression, and control already applied by both our allies and our adversaries.

Policymakers must become more technically literate, and we should prioritize the input of those with genuine technical expertise in policy-shaping forums. We need to have honest conversations about the use of bot farms by state actors to create inauthentic influence campaigns, and the challenges this poses to the concept of one person, one voice in our digital public squares.

Most importantly, we must resist the temptation to believe that more censorship is the answer to our current crises. The events of 2024 have shown us the dangers of unchecked misinformation, but they’ve also highlighted the perils of entrusting our right to free expression to corporate entities aligned with state interests.

As we navigate these murky waters, we must hold fast to the principles of genuine free speech and open debate. Only by confronting the uncomfortable realities of our current system can we hope to build a truly democratic digital public sphere. The stakes are too high for us to continue sleepwalking through this critical juncture in our history.

Q1: could you please tell me about the topic you’d like to discuss and share a bit of your background? Specifically, what brings you to this topic, and what stance or perspective do you have on it?

Well, I’m a former UK political correspondent based in Westminster, as well as a roving foreign correspondent who has covered the rise of the far right in the UK and throughout Europe. That’s my background. The topic is the contemporary debate over free speech, online misinformation and disinformation, policing, and censorship, in light of the UK’s 2024 riots and civil unrest, including the racist online posting and incitement to racial violence that accompanied them, and the nature of the debates taking place in that context.

Q2: could you elaborate on your perspective regarding these debates over free speech, misinformation, and censorship, especially in light of the 2024 UK riots and the broader European context?

So my hot take is that [1] we already have extensive censorship and collaboration between the social media platforms and what some would call the deep state. This affects things like the Ukraine conflict, the Israel-Palestine conflict, and core fundamental defence questions in the West. The West-based social media platforms seek to align themselves broadly with the defence interests of the West, and over the last decades we have already seen censorship of various forms directly implemented as a result of this allegiance and cozying up to Western and NATO communications policies. In practical reality, that has meant everything from shadow banning to adding friction to particular users, posts, and accounts, as well as the direct banning and censorship of particular images, content, and language. So [2] the idea that we have an unregulated field at present is wrong. Moreover, under particular domestic legislation, incitement to racial hatred, for example, is already illegal, even online. But there has been an issue with domestic police sometimes being unable to obtain the technical details that social media companies hold; the companies’ failure to disclose data such as IP addresses has allowed users to remain anonymous. So there are points of contention where, although the deep state is satisfied, the shallow state, that is to say police forces, might not be satisfied with social media. Those are two important components of context, which I think should erect our tent of background context for this discussion. Further, [3] my main stance is that our debate and conversation in the wake of the 2024 racial riots is therefore deeply naive, and the idea that we’re able to police against or censor people spreading false information advertently or inadvertently is extremely naive.

Moreover, our allies, as well as our own deep state, implement organised policies of misinformation or disinformation as suits their needs. So we have to be realistic about the moral and ethical hypocrisies involved in suggesting that we want a set of platforms that are democratic, free, open, fair and honest. In actual reality, the play-out of our deep state interests is much more of a moral grey area, much more morally complicated. The rather shallow, superficial debates we see on programmes like BBC Any Questions show that we are not having a very serious discussion about the actual reality that illuminates these topics; our contemporary debate is very reactive to the current news framing of the racial riots and doesn’t show a sufficient level of depth and awareness about the complexity of these issues. Moreover, [4] to add a fourth column of background and context, a lot of the time policymakers and politicians lack an understanding of the technical aspects of how these social platforms work, which allows them to maintain a level of naivety that is remarkable.

Q3: You’ve highlighted the existing collaboration between social media platforms and state interests, particularly regarding censorship. Can you expand on specific examples where you’ve seen this dynamic play out, and how it has influenced public discourse?

Yeah, so if we look over the last 10 years, we can see how organizations like Meta or Twitter have had things like their oversight boards, and these have had advisors who are themselves aligned with think tanks that basically follow transatlantic-alliance NATO policies. That is to say, the regulatory frameworks in which these two big American social media companies operate make them sensitive to, and keen to keep on side, these powerful Western governments, particularly the US and NATO allies. This is a fundamental operating principle, one of the red lines of capitalism in our era: you don’t cross these particular actors and these particular values. So we’ve seen how censorship decisions and the application of shadow banning and other such means of censorship, direct, covert, and indirect, have been aligned with these deep state permanent interests. Of course, it’s worth noting, cursorily, that this is contradictory on its face with the notion that these platforms are free, fair, and openly democratic. They’re not. They are very much rooted in Western biases and power structures. The public debate about fake news is where the free speech conversation vis-a-vis social media has mainly been rooted: questions of how to deal with fake news. We’ve seen measures like labeling, community notes, suppression of fake material, delisting, and shadow banning used in alignment with the interests of these Western states. The Biden laptop story is a good example, where the spreading of the story was deliberately suppressed in order not to influence American voters, even though it was of partisan relevance. That is to say, it was effectively interference in the domestic US political process. Not only is this a free speech violation, it’s a fundamental democratic violation, because it was partisan interference.

It goes to show how these platforms have partaken in forms of censorship, using their technological power and publishing influence, that have effectively contradicted their free speech image, even to the detriment of US democracy.

Q4: You’ve pointed out the suppression of the Biden laptop story as a key example of this dynamic. Do you think this kind of interference has long-term consequences for how these platforms are perceived globally, especially by nations that are not aligned with Western interests?

Well, obviously, China has its own ecosystem of applications; in terms of great power competition, they already have the Great Firewall of China. In India, I know that WhatsApp is used much more, but that also has its own underlying Meta censorship context. In terms of the image of social media in Europe, we can see that people are moving away from Twitter since Elon Musk took it over, because they have a sense that the platform has become full of far-right and racialist hate. There’s even talk of UK Labour MPs starting to delist themselves from the platform in recognition that it has sunk to a new low since the takeover, when Musk disbanded the various teams that oversaw things like compliance, free speech policy, and safeguarding. The cynical takeaway is that populations are going to start to see that these once-celebrated democratising tools are actually not that democratising, because they’re open to corporate control and influence, and that Signal and other private, uncensored, and unsurveilled forms of communication, collaboration, and organisation may be more powerful. Which is why, of course, it’s worth mentioning that the UK government wants to ban encryption.

Q5: Given the growing distrust and shift toward encrypted, private communication, how do you think this will impact the future of public discourse and political mobilization, particularly in Western democracies?

I don’t have much interest in the topic illuminated by that follow-up question, but we can see the growth of Signal as people become cynical about the big tech platforms, including predominantly private communication apps like WhatsApp, even while they support groups and communities. We will see the use of technologies like Signal by protest groups like Palestine Action in the UK that seek to disrupt Israeli apartheid arms manufacturers on UK soil. But I see that the majority of activists are under-aware of the extent to which the Facebook, Twitter, Instagram, etc. of our time are controlled essentially by core American deep-state interests. They’re not as democratic as they might feel. We can see, for example, how policing of Israel-Palestine content has occurred, where posts mentioning Gaza have been effectively delisted or have had friction added to their sharing on, for example, Instagram. It has meant that Palestinian voices feel literally banned, blacklisted, shadow banned, or censored, especially since the October 7th massacres. That is to say, the red lines of Western states on things like foreign policy and defence questions often translate into censorship on the social media platforms of those states. Another example is how Declassified UK exposes UK defence links with Israel, ongoing even despite the Gaza genocide. These stories never make it into the mainstream. And although Declassified UK is an independent outlet that is on Twitter, they never seem to get the reach you would expect. Now, this sounds like a conspiracy theory, and there’s no evidence to suggest that they are manually having their reach limited, but it would be a fair guess to me that they are being blacklisted and having their reach limited and restricted, given that they have already been blacklisted by the UK Ministry of Defence, as was later exposed. It’s these sorts of collaborations that keep free minds and free thinking in check.

I don’t feel that the wider population is really aware of this. And so I don’t think, looping back to your direct question, that this is going to have an immediate, large effect on the UK population’s behaviours and attitudes towards…

Q6: In the wake of the 2024 UK riots, there’s been a surge in calls for stricter regulation of online speech. Given your stance, what do you think is the most misunderstood aspect of this debate by the general public and policymakers?

Well, people seem to think that you can prevent people spreading misinformation and disinformation. I figure you can’t, because this is human nature, and if people are able to communicate untruths in private, broadly speaking, they’re going to be able to do the same publishing on their own feeds, on platforms like Twitter. However, if they are instigating or inciting some kind of crime, that is usually prohibited by domestic law, and so long as there is no friction created by these platforms, these people should be able to be prosecuted by domestic, shallow-state law enforcement. So the idea that there isn’t already policing available is wrong, because there actually is. That’s a core misunderstanding. The second is that you can’t scientifically and easily say that the same thing is always true or false in a binary way. You might want to be able to, but often you can’t, because of the nature and complexity of nuance and language. Third, it’s more likely that social media companies will be told to use AI to dynamically filter and add friction on their platforms, flagging messages if something is actually dubious, rather like automated community notes. We can see that in a New York Post article from February 2024, ‘Biden’s AI plan to censor you revealed’. That is to say, the use of AI to basically take on the workload of censorship, free speech, compliance, and safeguarding questions. Of course, these models can be trained to do this, and yes, in some cases there will be errors that could be challenged. But the point is that AI is not impartial; it is subject to its inputs, configuration, and biases. And so the idea that AI will be doing an objective and fair job is, again, naive. So even if we think about how these quite naive tools of policing truth online would be executed, even if we’re generous and say we’ll use AI, we’re still going to hit a wall: it’s not going to be perfect, and some people are going to have free speech or censorship infringements imposed on them that are not really fair, not really in keeping with democratic human rights principles. The idea that we’re getting Silicon Valley companies to use big Silicon Valley AI to police free speech, or to add friction to individuals’ free speech and publication capability, is also something we’re going to have to think about.

Q7: You’ve talked about the ‘deep state’ being satisfied with current levels of censorship, but the ‘shallow state,’ such as local police forces, sometimes struggles with enforcement. Could you delve deeper into this tension and its implications for law enforcement and civil liberties?

Yeah, if police can’t identify who has posted incitement to racial hatred because the American Silicon Valley tech companies refuse to hand over details like IP addresses, then this is obviously a friction and a frustration for domestic law enforcement, for example in the UK. So although at a deeper level there may be compliance with state interests, at a shallower level there might not be. And just as the UK has a one-sided extradition-request relationship with the US, data sharing with respect to law enforcement is often also considered asymmetric. So we can see that there are different levels of compliance and alignment here, further adding to the new…

Q8: Given the moral and ethical complexities you’ve outlined, particularly with Western states sometimes engaging in disinformation themselves, what do you think should be the guiding principles for any future regulation of online speech?

Unfortunately, a lot of social platforms are subject to bot farms that the Saudis, the Turkish, the British, and the Americans use to create influence. These organized campaigns are fundamentally non-democratic because they use accounts that are not actual citizens, fundamentally invalidating the concept of one person, one voice. These inauthentic accounts are used maliciously by state parties to exert oversized influence on social media, and spam accounts are one of the fundamental techniques of social media platform manipulation. However, this is very difficult to overcome from a technological point of view without infringing on people’s privacy rights. Meanwhile, the dirty-play tactics of these states lead to an atmosphere where there’s a race to the bottom to manipulate these platforms, which degrades the online free space.
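A toy sketch (every number and name below is invented) of why bot farms invalidate ‘one person, one voice’: counting accounts and counting real operators give wildly different pictures of the same engagement tally.

```python
# Toy model of inauthentic amplification; every number and name is invented.

# Each "vote" is (operator_id, account_id): one operator may run many accounts.
votes = [("citizen_1", "a1"), ("citizen_2", "a2"), ("citizen_3", "a3")]
votes += [("bot_farm", f"fake_{i}") for i in range(97)]  # one operator, 97 accounts

by_account = len(votes)                     # the platform sees 100 "voices"
by_operator = len({op for op, _ in votes})  # but only 4 real actors

print(by_account, by_operator)  # 100 4
```

Detecting that the 97 fake accounts share one operator requires exactly the kind of identity and device data whose collection raises the privacy concerns mentioned above, which is the technological bind described here.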

Q9: You mentioned that current debates, especially in mainstream media, lack depth and technical understanding. How do you propose we move the conversation forward to address the real complexities involved?

Policymakers should be more realistic about the technical nuances, and there should be more meritocratic use of those who are technically minded in policy-shaping forums. We should also get more realistic about the way the world really works: the self-interests of nation-states and the existing levels of censorship, suppression, control, and dirty, unfair tactics already applied by us and our allies as well as our enemies. Getting real is the starting point.