
Nexus and algorithmic complacency

·5487 words·26 mins
Privacy Philosophy AI
Luĉjo
Student and your local Esperantist

Every little increase in human freedom has been fought over ferociously between those who want us to know more and be wiser and stronger, and those who want us to obey and be humble and submit. - Philip Pullman

Yuval Noah Harari’s books are some of my favourite books of all time - they are written in an accessible yet incredibly logical way and have blown my mind multiple times with Harari’s outstanding big-picture thinking. I love both Homo Deus and Nexus and am currently reading Sapiens, but in this essay I wish to specifically address the problems presented in Nexus, which humanity will have to solve in the near future.

Nexus addresses the structure of human information networks throughout history: how they evolved, how they enabled democracy but also totalitarian dictatorships, and how they will likely change due to the rise of digital networks, automation (bots), large language models (LLMs) and generative artificial intelligence (AI). It also addresses the many dangers posed by non-human intelligence (in our case AI) to modern democratic discussion. Harari proposes, for example, that social media platforms ban bots.

This book is one of the reasons why I don’t use social media and instead rely on online newspapers like the Guardian. I do not wish my opinion to be moulded by bots or by algorithms. Yes, not even Mastodon, even though its decentralised and open-source nature fixes some of the issues.

Introduction
#

Fundamentally, human societies are based on fictional realities (such as rules, laws, customs and even ethics), which don’t exist in the physical world. This ability to create fictional realities that many people believe in has enabled humans, unlike any other animal, to establish modern civilisation. Harari argues that if you simply tell the unvarnished truth, no one will pay attention. Consequently, you would have no power, as power comes from the stories you convince other people to believe. There will never be a society that values truth over power. Nevertheless, he insists that facts do exist, separate from myths and propaganda. Scientists and journalists need to do their best to spin stories around facts rather than derive “facts” from stories.

Some stories are able to create a third level of reality: intersubjective reality. Whereas subjective things like pain exist in a single mind, intersubjective things like laws, gods, nations, corporations and currencies exist in the nexus between large numbers of minds. More specifically, they exist in the stories people tell one another. The information humans exchange about intersubjective things doesn’t represent anything that existed prior to the exchange; rather, the exchange of information creates these things.

The book Nexus presents two different views on information: the naive and the populist view. The core tenet of the naive view is that information is an essentially good thing, and the more we have of it, the better. Given enough information and enough time, we are bound to discover the truth about things ranging from viral infections to racist biases. This naive view justifies the pursuit of ever more powerful information technologies and has been the semi-official ideology of the computer age and the internet. Populists are suspicious of the naive view: they claim that information is merely a tool for power, so no source can be trusted except their wise leader, who seeks to overthrow the corrupt elites. Populists are eroding trust in institutions like universities, journalism and international cooperation just when humanity faces the existential challenges of ecological collapse, global war and out-of-control technology.

Information isn’t the raw material of truth, but it isn’t a mere weapon, either. There is enough space between these extremes for a more nuanced and hopeful view of human information networks and of our ability to handle power wisely.

Features of democratic and authoritarian information networks
#

Firstly, it’s important to understand that in a democracy, there are:

  • Plentiful and decentralised sources of information (for example several news sources, various authors, bloggers etc.)
  • Strong fact-checking and corrective mechanisms (when falsehoods are spread by one information source, they are called out by another)
  • Debate (various paradigms and narratives are evaluated using reason)

These properties enable democracies to constantly evolve: if a democratic society makes a mistake, it can be corrected. This has made democratic societies the most flexible and resilient in human history. Democratic information networks work in a similar fashion to the scientific method. Our rules and ethics are like a paradigm: they are assumed to be optimal given our collective knowledge, and if a better paradigm is found, it gradually replaces the old one. Democracies are essentially utilitarian: they seek to maximise the long-run happiness and well-being of as many people as possible.

In authoritarian information networks however we see the following properties:

  • Centralised and limited information sources
  • Suppression and censorship of dissenting narratives

One merely needs to think of dictatorships like Nazi Germany and the USSR, or modern-day countries like Iran, Putin’s Russia or China.

AI, democracy and privacy
#

In the past total surveillance of people was simply practically impossible. It was not possible to watch every person all the time, because there just weren’t enough officers and data processing centres to facilitate such a police state. Even in the USSR during the Stalin era, the NKVD couldn’t surveil the whole population 24/7, simply because the number of agents was limited.

With machine learning, AI, and increased storage and data-processing capabilities, the surveillance state of the 21st century really can observe the entire population 24/7.

Governments and corporations are now using the latest technologies to keep track of us online and offline. From facial recognition systems and biometric passport data to online data tracking, these technologies leave a significant information trail. This trail can be used to solve crimes, but it also opens the door to totalitarian nightmares. In places like Iran, for example, facial recognition is used to enforce dress codes, leading to severe privacy invasions and punishments.

With AI, it might become truly impossible to resist a totalitarian dictatorship, as it could recognise dissent at its roots and attend to every dissident individually. Resistance requires dissidents to connect, but this is impossible if an all-knowing program isolates every Andersdenker (dissenter).

Humans don’t have free will, and our patterns of behaviour are highly predictable, though not deterministic. Algorithms can even find out things about us that we don’t know ourselves, such as a person’s sexuality.

Even in a democracy, if people’s privacy isn’t protected, political parties, foreign actors (such as Russia; I can really recommend this article by the Süddeutsche Zeitung, or the Mueller Report on Russian interference in the 2016 US election) and especially corporations (above all big tech) can dramatically influence the outcome of an election by personalising the propaganda of political parties that further their interests. This way, special interest groups not only get more attention from internet users but can also adapt AI-based bots to change users’ opinions based on the available information about them. Whoever controls the most AI bots controls the narrative:

Equally alarmingly, we might increasingly find ourselves conducting lengthy online discussions about the Bible, about QAnon, about witches, about abortion, or about climate change with entities that we think are humans but are actually computers. This could make democracy untenable. Democracy is a conversation, and conversations rely on language. By hacking language, computers could make it extremely difficult for large numbers of humans to conduct a meaningful public conversation. When we engage in a political debate with a computer impersonating a human, we lose twice. First, it is pointless for us to waste time in trying to change the opinions of a propaganda bot, which is just not open to persuasion. Second, the more we talk with the computer, the more we disclose about ourselves, thereby making it easier for the bot to hone its arguments and sway our views. In the 2010s social media was a battleground for controlling human attention. In the 2020s the battle is likely to shift from attention to intimacy. One analysis estimated that out of a sample of 20 million tweets generated during the 2016 U.S. election campaign, 3.8 million tweets (almost 20 percent) were generated by bots. By the early 2020s, things got worse. A 2020 study assessed that bots were producing 43.2 percent of tweets. A more comprehensive 2022 study by the digital intelligence agency Similarweb found that 5 percent of Twitter users were probably bots, but they generated “between 20.8% and 29.2% of the content posted to Twitter.” Now a political party, or even a foreign government, could deploy an army of bots that build friendships with millions of citizens and then use that intimacy to influence their worldview.

Privacy is not only essential to protect humanity from the totalitarian potential of nonhuman intelligence; it is also needed to give the members of a democracy sufficient distance from opinion-altering AI to make independent decisions.

Epistemological danger
#

I would like to touch on the epistemology of AI. In Iran, facial recognition algorithms are now being used to identify women who do not adhere to the stringent dress codes of the Islamic theocracy; these women are then sent for psychiatric “help” by the automated Iranian surveillance state. In truly Black Mirror fashion, the women first receive warning SMSs, followed by detention.

But wait: if an artificial intelligence says that Black people are inherently more likely to commit crime and should thus be ostracised to prevent crime, that women should be denied the right to dress as they want because an all-knowing god wills so, or that Jews are inherently evil and thus need to be exterminated, does this mean that the AI is right? After all, isn’t the AI supposed to make objective decisions?

The answer is a clear no: the outputs and decisions of an AI are inherently shaped by its training data. The weights of a neural network are trained on real-life data sets (in the case of LLMs, texts), and its knowledge is limited to the knowledge present in the training set. However, the training set can easily contain falsehoods, cultural norms, religious nonsense, prejudice and hatred. Just imagine training a chatbot on 4chan posts.
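This point can be made concrete with a toy model. The following sketch (all data and group labels are invented for illustration; a real LLM is vastly more complex) “trains” a trivial word scorer on a deliberately biased corpus. The model faithfully reproduces the prejudice it was fed, because it has no access to anything beyond its training set:

```python
from collections import Counter

# Invented, deliberately biased training data: texts labelled
# negative (-1) or positive (+1).
biased_corpus = [
    ("group_a people are dangerous", -1),
    ("group_a people cause trouble", -1),
    ("group_b people are wonderful", +1),
    ("group_b people are kind", +1),
]

def train(corpus):
    """Each word accumulates the labels of the texts it appears in."""
    scores = Counter()
    for text, label in corpus:
        for word in text.split():
            scores[word] += label
    return scores

def score(scores, text):
    """Sum the learned word scores; unseen words count as 0."""
    return sum(scores[w] for w in text.split())

model = train(biased_corpus)
# The model has no concept of fairness or truth; it mirrors its inputs.
print(score(model, "group_a"))  # negative: inherited prejudice
print(score(model, "group_b"))  # positive: inherited favouritism
```

Nothing in the algorithm is malicious; the bias lives entirely in the data, which is exactly the danger when training sets are scraped from the real world.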

Logical reasoning, as introduced in the most recent large language models (such as o1, which mimics the slow reasoning of human brains described by the psychologist Daniel Kahneman), sadly does not fix this issue. Logic can only be applied within the scope of mathematical and scientific problems. Beyond mathematics, logic does not tell us much about the world we live in, with the sole exception of the fact that we must exist because we think (as established by René Descartes). From logic alone we cannot derive ethics, and we cannot even derive many facts about our world, such as whether evolution is real or whether we live in a simulation. In the case of evolution, it is empirical data that confirms the hypothesis (the theory must make logical sense, but its confirmation depends on what we observe in the real world). Even physics is fundamentally built on empirical observations. The universe is not inherently deontological or utilitarian; there is no natural property which implies that levying interest is a sin. So if an AI uses logical reasoning to make an informed decision based on (for example) perfectly logical utilitarian reasoning, this reasoning is not objective: it was a human that made the AI use utilitarian logic. Likewise, it is humans who pass all their prejudices and falsehoods on to the AI. A society controlled by AI is not an objectively moral or optimal society.

Diagram showcasing epistemology

In terms of this diagram, AI is trained on beliefs which also contain nonsense, false positives and denials.

The Social Network Algorithms
#

I would like to begin this chapter with the following video:

Technology Connections correctly identifies that nowadays algorithms are responsible for the content we see. People, especially in the realm of social media, are barely in control of the posts, videos and content they see (control that used to be exercised by following and subscribing). He terms this phenomenon algorithmic complacency: people completely give up control over the content they see to the recommendation algorithm of a platform.

Recommendations can have a significant impact on people’s opinions. The Bible, for example, was born as a recommendation list compiled by a council of priests. By choosing the misogynist 1 Timothy over the more egalitarian Acts of Paul and Thecla when the Bible was compiled, Athanasius and other priests changed the course of history. In the case of the Bible, ultimate power lay not with the authors who composed the various religious documents but with the curators who made a selection among them and turned them into compiled books like the Vedas, the Torah, the Bible or the Quran. This was the power wielded in the 2010s by social media algorithms.

This brings me to the most important excerpt from the book Nexus:

In 2016–17 a small Islamist organization known as the Arakan Rohingya Salvation Army (ARSA) carried out a spate of attacks aimed to establish a separatist Muslim state in Rakhine, killing and abducting dozens of non-Muslim civilians as well as assaulting several army outposts. In response, the Myanmar army and Buddhist extremists launched a full-scale ethnic-cleansing campaign aimed against the entire Rohingya community. They destroyed hundreds of Rohingya villages, killed between 7,000 and 25,000 unarmed civilians, raped or sexually abused between 18,000 and 60,000 women and men, and brutally expelled about 730,000 Rohingya from the country. The violence was fueled by intense hatred toward all Rohingya. The hatred, in turn, was fomented by anti-Rohingya propaganda, much of it spreading on Facebook, which was by 2016 the main source of news for millions and the most important platform for political mobilization in Myanmar.

An aid worker called Michael who lived in Myanmar in 2017 described a typical Facebook news feed: “The vitriol against the Rohingya was unbelievable online—the amount of it, the violence of it. It was overwhelming…. [T]hat’s all that was on people’s news feed in Myanmar at the time. It reinforced the idea that these people were all terrorists not deserving of rights.” In addition to reports of actual ARSA atrocities, Facebook accounts were inundated with fake news about imagined atrocities and planned terrorist attacks. Populist conspiracy theories alleged that most Rohingya were not really part of the people of Myanmar, but recent immigrants from Bangladesh, flooding into the country to spearhead an anti-Buddhist jihad. Buddhists, who in reality constituted close to 90 percent of the population, feared that they were about to be replaced or become a minority.

Without this propaganda, there was little reason why a limited number of attacks by the ragtag ARSA should be answered by an all-out drive against the entire Rohingya community. And Facebook algorithms played an important role in the propaganda campaign. While the inflammatory anti-Rohingya messages were created by flesh-and-blood extremists like the Buddhist monk Wirathu, it was Facebook’s algorithms that decided which posts to promote. Amnesty International found that “algorithms proactively amplified and promoted content on the Facebook platform which incited violence, hatred, and discrimination against the Rohingya.” A UN fact-finding mission concluded in 2018 that by disseminating hate-filled content, Facebook had played a “determining role” in the ethnic-cleansing campaign.

Readers may wonder if it is justified to place so much blame on Facebook’s algorithms, and more generally on the novel technology of social media. If Heinrich Kramer used printing presses to spread hate speech, that was not the fault of Gutenberg and the presses, right? If in 1994 Rwandan extremists used radio to call on people to massacre Tutsis, was it reasonable to blame the technology of radio? Similarly, if in 2016–17 Buddhist extremists chose to use their Facebook accounts to disseminate hate against the Rohingya, why should we fault the platform? Facebook itself relied on this rationale to deflect criticism. It publicly acknowledged only that in 2016–17 “we weren’t doing enough to help prevent our platform from being used to foment division and incite offline violence.” While this statement may sound like an admission of guilt, in effect it shifts most of the responsibility for the spread of hate speech to the platform’s users and implies that Facebook’s sin was at most one of omission—failing to effectively moderate the content users produced. This, however, ignores the problematic acts committed by Facebook’s own algorithms.

The crucial thing to grasp is that social media algorithms are fundamentally different from printing presses and radio sets. In 2016–17, Facebook’s algorithms were making active and fateful decisions by themselves. They were more akin to newspaper editors than to printing presses. It was Facebook’s algorithms that recommended Wirathu’s hate-filled posts, over and over again, to hundreds of thousands of Burmese. There were other voices in Myanmar at the time, vying for attention. Following the end of military rule in 2011, numerous political and social movements sprang up in Myanmar, many holding moderate views. For example, during a flare-up of ethnic violence in the town of Meiktila, the Buddhist abbot Sayadaw U Vithuddha gave refuge to more than eight hundred Muslims in his monastery. When rioters surrounded the monastery and demanded he turn the Muslims over, the abbot reminded the mob of Buddhist teachings on compassion. In a later interview he recounted, “I told them that if they were going to take these Muslims, then they’d have to kill me as well.”

In the online battle for attention between people like Sayadaw U Vithuddha and people like Wirathu, the algorithms were the kingmakers. They chose what to place at the top of the users’ news feed, which content to promote, and which Facebook groups to recommend users to join. The algorithms could have chosen to recommend sermons on compassion or cooking classes, but they decided to spread hate-filled conspiracy theories. Recommendations from on high can have enormous sway over people. Sometimes the algorithms went beyond mere recommendation. As late as 2020, even after Wirathu’s role in instigating the ethnic-cleansing campaign was globally condemned, Facebook algorithms not only were continuing to recommend his messages but were auto-playing his videos. Users in Myanmar would choose to see a certain video, perhaps containing moderate and benign messages unrelated to Wirathu, but the moment that first video ended, the Facebook algorithm immediately began auto-playing a hate-filled Wirathu video, in order to keep users glued to the screen. In the case of one such Wirathu video, internal research at Facebook estimated that 70 percent of the video’s views came from such auto-playing algorithms. The same research estimated that, altogether, 53 percent of all videos watched in Myanmar were being auto-played for users by algorithms. In other words, people weren’t choosing what to see. The algorithms were choosing for them.

But why did the algorithms decide to promote outrage rather than compassion? Even Facebook’s harshest critics don’t claim that Facebook’s human managers wanted to instigate mass murder. The executives in California harbored no ill will toward the Rohingya and, in fact, barely knew they existed. The truth is more complicated, and potentially more alarming. In 2016–17, Facebook’s business model relied on maximizing user engagement in order to collect more data, sell more advertisements, and capture a larger share of the information market. In addition, increases in user engagement impressed investors, thereby driving up the price of Facebook’s stock. The more time people spent on the platform, the richer Facebook became. In line with this business model, human managers provided the company’s algorithms with a single overriding goal: increase user engagement. The algorithms then discovered by trial and error that outrage generated engagement. Humans are more likely to be engaged by a hate-filled conspiracy theory than by a sermon on compassion or a cooking lesson. So in pursuit of user engagement, the algorithms made the fateful decision to spread outrage.

Events in Myanmar in the late 2010s demonstrated how decisions made by nonhuman intelligence are already capable of shaping major historical events. We are in danger of losing control of our future. A completely new kind of information network is emerging, controlled by the decisions and goals of an alien intelligence. At present, we still play a central role in this network. But we may gradually be pushed to the sidelines, and ultimately it might even be possible for the network to operate without us. Some people may object that my above analogy between machine-learning algorithms and human soldiers exposes the weakest link in my argument. Allegedly, I and others like me anthropomorphize computers and imagine that they are conscious beings that have thoughts and feelings. In truth, however, computers are dumb machines that don’t think or feel anything, and therefore cannot make any decisions or create any ideas on their own.

This objection assumes that making decisions and creating ideas are predicated on having consciousness. Yet this is a fundamental misunderstanding that results from a much more widespread confusion between intelligence and consciousness. I have discussed this subject in previous books, but a short recap is unavoidable. People often confuse intelligence with consciousness, and many consequently jump to the conclusion that nonconscious entities cannot be intelligent. But intelligence and consciousness are very different. Intelligence is the ability to attain goals, such as maximizing user engagement on a social media platform. Consciousness is the ability to experience subjective feelings like pain, pleasure, love, and hate. In humans and other mammals, intelligence often goes hand in hand with consciousness. Facebook executives and engineers rely on their feelings in order to make decisions, solve problems, and attain their goals.

But it is wrong to extrapolate from humans and mammals to all possible entities. Bacteria and plants apparently lack any consciousness, yet they too display intelligence. They gather information from their environment, make complex choices, and pursue ingenious strategies to obtain food, reproduce, cooperate with other organisms, and evade predators and parasites. Even humans make intelligent decisions without any awareness of them; 99 percent of the processes in our body, from respiration to digestion, happen without any conscious decision making. Our brains decide to produce more adrenaline or dopamine, and while we may be aware of the result of that decision, we do not make it consciously.

The process of radicalization started when corporations tasked their algorithms with increasing user engagement, not only in Myanmar, but throughout the world. For example, in 2012 users were watching about 100 million hours of videos every day on YouTube. That was not enough for company executives, who set their algorithms an ambitious goal: 1 billion hours a day by 2016. Through trial-and-error experiments on millions of people, the YouTube algorithms discovered the same pattern that Facebook algorithms also learned: outrage drives engagement up, while moderation tends not to. Accordingly, the YouTube algorithms began recommending outrageous conspiracy theories to millions of viewers while ignoring more moderate content. By 2016, users were indeed watching 1 billion hours every day on YouTube.

YouTubers who were particularly intent on gaining attention noticed that when they posted an outrageous video full of lies, the algorithm rewarded them by recommending the video to numerous users and increasing the YouTubers’ popularity and income. In contrast, when they dialed down the outrage and stuck to the truth, the algorithm tended to ignore them. Within a few months of such reinforcement learning, the algorithm turned many YouTubers into trolls.

The social and political consequences were far-reaching. For example, as the journalist Max Fisher documented in his 2022 book, The Chaos Machine, YouTube algorithms became an important engine for the rise of the Brazilian far right and for turning Jair Bolsonaro from a fringe figure into Brazil’s president. While there were other factors contributing to that political upheaval, it is notable that many of Bolsonaro’s chief supporters and aides had originally been YouTubers who rose to fame and power by algorithmic grace.

As the excerpt showcases, the problem with modern social media is that recommendation algorithms (the algorithms that select the content users see, usually separate from their subscriptions and friends’ posts) are:

  1. actively encouraging outrage, dissatisfaction, artificial division and disinformation
  2. being manipulated by increasingly sophisticated bots for geopolitical or corporate gain

Before the 2025 German Federal Election the following interesting patterns could be observed:

Feed algorithm of X showing clear preference for AfD

Image of the Bundestag according to likes
The image shows how the distribution of parties in the German parliament would look according to TikTok followers. Whether caused by bots or by the recommendation algorithm alone, the composition is completely warped.

It is very clear that many platforms, especially TikTok and X (formerly known as Twitter), massively distort attention in political debate, boosting fringe parties and preventing the constructive democratic dialogue that a functioning democracy requires.

The graph above shows that older people in Germany were the least likely to vote for the AfD and BSW. This strongly suggests that social media is indeed responsible for the rise of populist parties and the weakening of non-populist parties, because:

  1. older people are generally more conservative (and thus the AfD/BSW share should have been higher among them)
  2. older people are generally less technically skilled and have had the least contact with social media

Thus we are facing a phenomenon where the recommendation algorithms of major platforms manipulate the attention economy and facilitate the spread of artificial outrage, division, disinformation and other negative phenomena such as religious radicalisation. These recommendation algorithms form a positive feedback loop: the more attention people give to artificial outrage and disinformation, the more the algorithm pushes outrage (usually for profit reasons), with a destabilising effect on democracies worldwide. The reality is that democracy cannot base itself on cynical people who constantly wish to tear down whatever system is in place. Democracy requires reforms, not constant revolutions; expertise, not hobby contrarians; and compromise, not constant division. A society where people cannot even agree on basic facts and vilify each other is an open invitation to authoritarianism.
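The feedback loop can be sketched numerically. In this minimal simulation (all rates are invented; real recommenders are far more complex), outrage content engages only slightly better per impression, but because the recommender reallocates exposure toward whatever engaged last round, the small edge compounds until outrage dominates the feed:

```python
# Minimal sketch of an engagement-maximising recommender as a
# positive feedback loop. The engagement rates are invented numbers.
def simulate(rounds=20, outrage_rate=0.12, moderate_rate=0.10):
    # Start with a balanced feed: half outrage, half moderate content.
    share = {"outrage": 0.5, "moderate": 0.5}
    for _ in range(rounds):
        # Engagement this round: exposure times per-impression rate.
        engagement = {
            "outrage": share["outrage"] * outrage_rate,
            "moderate": share["moderate"] * moderate_rate,
        }
        total = engagement["outrage"] + engagement["moderate"]
        # The recommender reallocates exposure proportionally to
        # engagement - this is the feedback step.
        share = {k: v / total for k, v in engagement.items()}
    return share

final = simulate()
print(final)  # outrage ends up with the overwhelming majority of the feed
```

A 20% per-impression advantage (0.12 vs 0.10) grows geometrically, leaving outrage with well over 90% of exposure after 20 rounds; no one ever chose that outcome, it simply falls out of the objective “maximise engagement”.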

Democracy requires self-correction mechanisms: the distribution of power, a strong judiciary, a strong opposition, decentralised information sources, and fact-checking of one’s own sources and those of others. Fact-checking used to be instrumental to being seen as a credible, professional news source. Social media algorithms have changed this dynamic: credible sources simply cannot compete with social media influencers, whose coverage sways a great many people, because a significant portion of the population only comes into contact with the news through social media. Additionally, social media accelerates the swaying of public opinion by letting bots saturate the feed with posts and push false or disproportionate perceptions.

Exactly for this reason, the removal of fact-checkers from Meta is so dangerous: it will exacerbate manipulation. On social media, fact-checking cannot compete with disinformation, because controversy is interesting. Eroding the consensus on even the most basic facts and institutions lays the ground for “a world that’s right for a dictator”, as the Nobel Peace Prize winner Maria Ressa puts it.

Additionally, Elon Musk’s comments that “legacy media” (non-social-media news organisations such as the BBC or newspapers like Die Zeit, the Guardian, Le Monde or the New York Times) are obsolete because of social media platforms are a very dangerous sentiment, for two reasons:

  1. News on social media is full of misinformation and disinformation. While social media news is real-time, you can be much more certain about accuracy on the BBC than on Facebook: it’s the job of journalists to fact-check their sources. A notable example is how Storm-1516 spread fabricated videos of ballot shredding in Leipzig, which turned out to be false. If we base our perception exclusively on real-time news, we surrender ourselves to manipulation. The truth is worth the time delay. As AI progresses, it will become harder to differentiate between real and fake videos, which is exactly why we need professional journalists to process information for us reliably.
  2. Social media websites are effectively a news outlet of their own, and centralising information sources on one platform is a very dangerous precedent, because democracy requires decentralisation. While users’ posting is decentralised, the recommendation algorithm that determines what users get to see is not; it is effectively equivalent to the editor of a newspaper. Yes, news outlets are often opinionated (my favourite newspaper, the Guardian, is a good example), but that’s why one should never read only one source. If democratic society obtains its news from only a few social media networks, these networks gain unprecedented power over our society.

The attention economy can be manipulated extremely effectively through bots. The best example is the Russian disinformation campaign after the invasion of Ukraine. Liberal democracies and their societies were unanimous in condemning the Russian aggression and war of conquest, but as time went on, the Kremlin narrative and general ambivalence became increasingly widespread. This should not be surprising, as Kremlin bots artificially saturated the discussion and took advantage of the algorithms. The campaign was very successful, especially on the political fringes (both left and right), and its narrative even reached governments in countries like the US and Slovakia. If yet another democracy is attacked by a dictatorship in the future, will other democracies allow the dictatorships to divide and conquer them?

Ameliorations?
#

Harari makes a bold proposal to improve digital democratic discourse: a ban on bots on social media. It’s a fairly simple solution that would prevent internal and external special interest groups from manipulating a country’s political debate, and I completely agree. Banning bots is not a restriction on freedom of speech, because bots are not humans and don’t have any rights. Social media should first and foremost be about connecting people and human interaction. Would you want to be active on a social media platform where the majority of users and posts are computers?

However, while I think a bot ban would be a first major step in the right direction, it wouldn’t fix social media’s fundamental problem. The Rohingya genocide did not happen because a foreign power or company intentionally flooded Facebook with bots that spread hatred. It happened because the recommendation algorithms themselves spread hatred and outrage-maximising content out of a simple desire to maximise screen time and thus ad revenue, giving hatred and outrage a disproportionate and unfair advantage in content recommendation. Political polarisation is rising across the world, and even if we ban bots and the majority post only polite and mutually respectful content, the divisive posts will always be favoured by the algorithm, continuing the vicious cycle of polarisation and the global democratic backsliding ongoing since about 2010.

Conclusion
#

The internet has enabled many dispersed people to connect; the Esperanto community is one example that has massively benefited from it. We have Wikipedia and can get in touch almost instantaneously with people from Australia or Brazil, enabling international collaboration on many projects. Journalists in authoritarian countries use it to bypass censorship. It is a fundamental technology, and it’s hard to believe it is relatively novel. Unlike the internet, social media is not a fundamental technology of the 21st century; it is an entertainment product, a slot machine. Mass dependence on recommendation algorithms has worsened both psychological health and democratic discourse. While regulation may improve certain aspects, I believe it is a technology that is simply not necessary for most people and that will always remain a slot machine. I am happier and calmer without social media, and I still get to connect with people and use the internet for the things it was meant for.

Maybe more people should know that you can turn this black mirror off.
