The recent online discussion on "freedom of speech" that flared up after Pavel Durov's arrest has forced me to dig this topic out of the backlog. To be precise, "this topic" means the eternal conflict between two opposing forces: (1) protection of free speech and (2) respect for the rule of law online. Funnily enough, even with freedom of speech being a so-called wicked problem, the biggest challenge here may not be the topic itself but how to make this post concise enough for you all to read - entire massive books have already been written on this topic alone ...
Anyway, let's try.
First, I have nothing against "freedom of speech", defined as the right to speak one's mind. In my opinion (which you don't have to agree with), words are not a form of violence (unlike physical harm) unless they express a direct threat. I don't feel guilty for the emotional distress someone experiences because they disagree with my opinions. I'm also against the concept of a "thoughtcrime" - or anything that even remotely resembles it. However, such freedom of expression IMHO applies to voicing opinions, questions, and hypotheses (clearly marked as such).
Issue 1: free speech and facts (vs opinions)
But not everything we talk about belongs to these categories - there are also facts, which are supposed to be grounded entirely in objective truth. And here we hit the first challenge (of many): is there even an "objective truth"? We humans live in (different) contexts, shaped by the sum of our individual (and very subjective) experiences. That's why we follow different guiding values in our lives. What is "truthy" for me (male, early Millennial, Western-civilization atheist, growth mindset, pragmatist, limited EQ, inquisitive centrist) may be questionable for someone else (e.g., female, Gen Z, Middle East, conservative Muslim, high-EQ extrovert).
The keyword to focus on here is "truthy". Yes, we can disagree - and on many topics. But facts are facts - they are supposed to be absolute, immutable, and universal. Unfortunately, we human beings are terrible at separating facts from opinions (TBH, our languages do not help much with that). On top of that, opinions are often more important to us (than facts) simply because they are strongly tied to our identity and personal beliefs (while facts can be hard to accept and bear). As a result, we value interpretations (of facts) much more than the facts themselves.
What does this mean in practice? That radical free speech (everyone says whatever they want) is potentially very harmful (hence unacceptable), as it can lead to the falsification of facts (e.g., history, scientific knowledge) - the very foundation of our viewpoints, principles, and everyday decisions. This issue is even more critical for global Internet platforms that (as we know) have all the means to selectively amplify (or filter out) messages (& faster than ever before).
That's why, instead of quarreling over whether speech should be free and to what degree, we ought to focus on some more fundamental problems to solve first.
Real problems to solve: How could we separate facts from opinions, so that facts get scrutinized while opinions fly freely? Given how fast information spreads, is it possible to keep facts and opinions "tagged" as such (by tracing a fact's ancestry and source)? Is it possible without real-time arbitration? Could such arbitration be trusted and commonly accepted? Is it possible to avoid duplication and ambiguity (where facts partially overlap with opinions or other facts)?
Issue 2: free speech ... beyond speech
Wait, but our activity online is about more than just sharing facts and opinions. We haven't considered "actions" yet. What if someone distributes forbidden materials (classified documents obtained through espionage), trades prohibited goods (drugs, weapons), or performs other illegal activities (insider trading, doxing, revenge porn)? Well, all these actions have one thing in common - they violate already existing law. Sadly, that obvious truth doesn't make things simpler. Why?
- Whose law? Platforms are global, but laws are local, and sometimes these laws vary significantly (between countries, or even between their constituent states).
- Laws are so complex that, through the ages, we've evolved entire legal systems with tons of dedicated roles (policeman, judge, prosecutor, ...) & specializations (criminal law, labor law, corporate law, ...) to tackle all that complexity.
- Identities (the entities that perform actions) on the Internet work differently than they do in "real" life: (a) in many cases, an identity's veracity is not validated by any authority, (b) identities may be stolen/hijacked/cloned/shared/transferred/..., (c) a physical person may have several identities, (d) an identity doesn't have to belong to a physical person at all, etc.
The aforementioned legal systems were designed (& evolved) to work in the physical world - which is governed by rules (of physics) that are common, identical everywhere & generally understood. That's why, e.g., defining suitable limits on police powers (and making them understood by the general population) was relatively easy.
The digital space is much more challenging - it's far more malleable, it has lower inertia, and you can stack many more abstractions on top of one another. It also evolves at an astounding pace, outpacing law enforcement's ability to adapt - they have neither (sufficient) skills nor the necessary (& up-to-date!) legal tools (legislative acts) to keep up.
All the potential ways out of this situation (disregarding forced deceleration) conceptually fall somewhere between two extremes:
- Finding a way to accelerate the adaptation/expansion of existing legal systems to the realities of the Internet era
- Forcing online platforms to self-govern by creating their own (internal) law enforcement systems (yikes!) - they supposedly know their platforms best and have nearly unlimited control over them
Real problems to solve: Neither end of the spectrum above is realistic - so where's the sweet spot? If, to be available in a given country, a platform has to commit to respecting that country's law, wouldn't that kill the universal Internet as we know it? How can we make sure that the operational burden of cooperating with law enforcement doesn't effectively kill the platforms? Where's the borderline between not preventing someone from committing a crime on your platform and complicity? What should happen if you have (unknowingly) benefited from someone committing a crime on your platform (e.g., because of ads displayed alongside something illegal)?
Issue 3: free speech and privacy
Let's get back to dealing with facts alone. Verifying (1) what is and isn't a fact, and (2) whether a given fact is true, isn't trivial. Especially if we want to do it as quickly as possible (in near-real-time). Yes, such monitoring & processing capabilities could be (to some degree) automated, but that still leaves many challenges unsolved:
- A planet-scale platform would still require planet-scale monitoring (and that means enormous costs, a lot of additional complexity, and ... great power to tempt you)
- Monitoring public communication (with unlimited access) is one thing, but what about private conversations (1:1, closed groups, etc.)? That's a significant threat to confidentiality and privacy, one that could easily be abused (by dictators/regimes, corrupt officials, etc.)
- What should the consequences be if we detect fact distortion or toxic behaviors that go beyond discussion/sharing opinions (e.g., direct threats)? Penalize the perpetrators to disincentivize such behavior? Hide/censor the problematic content?
Doesn't that sound like a path towards the (in)famous social scoring system from China? Even if it were governed by democratically elected governments - would that make it more acceptable? I don't think so ... Yes, we could consider using fully automated mechanisms (like DAOs or AI) - but let's be honest, we're not there yet, and it's not a matter of a year or five.
Real problems to solve: What has to be monitored, and by whom? Who will watch the watchmen (Quis custodiet ipsos custodes?)? Where are the borderlines of anonymity (since a feature of the law should be the inevitability of punishment in the event of its violation, and that can't be enforced when the perpetrator is fully anonymous)? How far can we go in invading privacy for the sake of prevention - e.g., is it OK to intervene and penalize the intent before the (potential) harm is done?
OK, enough
Frankly, I have more issues on the list. Free speech and asymmetric access to information, free speech and the flow of time (we acquire new information with a delay), free speech and different appetites for detail (black-or-white simplicity versus shades of grey), free speech and challenging the status quo (getting out of fossilized yet inaccurate truths), free speech and the greater good (e.g., raison d'état). And so on and so forth.
But enough is enough (to prove my point). I believe that the binary discussion about "free speech" (yes or no, we either have it or we don't) is pointless. It's far too high-level, too naive, too focused on principles without diving deep into practicalities. To move it at least an inch in any helpful direction, we need to focus on those pesky little "real problems to solve" that I've listed above.
YES, "freedom of speech" is essential, but as with almost everything else - extremes are bad. Extremely bad. For now, it seems that 99% of discussions regarding free speech online are just about protecting the statements of individuals from our tribe that are heavily opposed & criticized by zealots of the opposing mob. I don't consider it productive and beneficial in any way. And it's definitely not about real freedom of expression.