The Age of Brainless: an essay on the impact of unregulated social media pivoting into new-generation AI in an age dominated by bots and fake accounts; its effects on social cohesion, historical perspectives and potential solutions.
By Eddie Hobbs, August 8th 2023.
Introduction
The internet is fast becoming lifeless and brainless, a village green devoid of critical thinking, engagement and debate. Nearly two thirds of activity is now by non-humans, while algorithms dictate what is read, who is nudged into which flash mob and when. Asleep at the wheel, are we losing the anchor of strong shared bonds and shared imaginations about democracy and how society ought to function? And this is before the imminent further transformation of the internet by next-generation Artificial Intelligence.
Anybody now armed with new-generation AI can produce content on just about anything at the same visual standard and fluency as conventional newspapers and broadcasters. Everybody with an agenda is in on the game, not just extremists, such as those on both sides of the climate-change debate. It includes power blocs like the Pharma-State that have clearly sponsored their own brand of misinformation, the military-industrial complex, which profits from continuous wars, and technology behemoths that press for a digital world without rules or frontiers. When big powers are allied to the capacity to shut down dissenting voices on platforms and instead flood them with the tools of psychological warfare, truth is no longer deemed valuable. When truth is lost, human society gets into deep trouble; that is the lesson of history.
Most of the Fourth Estate has already succumbed and aligned itself with tribal agendas. Save for in-house lawyers curating content for defamation, there is no longer any major difference between mainstream and social media: each reports on and echoes the other. Media has become a competition for sensationalism, for outrage and for populism, in which editors and journalists adhere to the propaganda of their owners. Facts and truth no longer matter; what matters is who wins. Trump was the first to grasp the shift, bend it to his will and win the White House, before reshaping it once more into insurrection after losing high office.
Why is it happening?
Over the past decade there has been a profound shift in the dynamics of human interaction. The rise of artificial intelligence (AI) and the proliferation of bots have transformed the way people engage with online platforms. Simultaneously, the prevalence of anonymous and fake accounts on social media has contributed to a deterioration in dialogue, resulting in the propagation of anger, division, and tribalism.
This essay delves into the consequences of these developments, explores the implications for societal cohesion, democratic discourse and the future of truth, and looks for potential solutions.
The truth is that the digital landscape, dominated by AI, bots, and anonymous accounts, has transformed the nature of human interaction and societal cohesion. The prevalence of anger, division, and tribalism eats at the bedrock of democracy, undermining the capacity for open debate, the search for common ground and the establishment of a collective wisdom. To ensure a thriving democratic society, it is crucial to foster responsible AI usage, promote critical thinking, and prioritize journalistic integrity. Getting the balance right will not be easy, but it can be done, and it has been done before.
What lessons from history?
During the time of the American Revolution in the 18th century, newspapers played a significant role in shaping public opinion and disseminating information. Analogous to fake accounts today, there was widespread use of pen names or pseudonyms by writers and editors. These pen names were often used to conceal the identity of the authors, allowing them to write controversial, often malicious and defamatory content about public figures without fear of personal repercussions. Anonymity was used widely as a means of persuasion and propaganda, and if that sounds familiar, it should.
The fledgling USA eventually dealt with the toxic atmosphere of charge and counter-charge between political opponents through the development of libel laws at State level, which made publications accountable for what they published. State libel laws were counterbalanced at Federal level by the First Amendment, ratified in 1791, which enshrined the principles of free speech and press freedom. This protected newspapers’ right to publish dissenting opinions and criticize government officials, but it also emphasized the importance of truth and accuracy in reporting.
The balance between libel laws and the First Amendment has been the subject of ongoing legal interpretation and court decisions. The landmark case New York Times Co. v. Sullivan (1964) significantly influenced the relationship between libel laws and the First Amendment. The Supreme Court ruled that public officials could only win a libel suit if they could prove “actual malice” on the part of the publisher. The “actual malice” test means that the publisher knew the statement was false or acted with reckless disregard for the truth. This higher standard of proof was established to protect the media’s ability to criticize and report on public officials without fear of unfounded lawsuits. It was an effort to strike a balance between protecting free speech and ensuring accountability for false and defamatory statements.
New Generation AI is the Pivotal Moment
Bots Dominate the Internet:
AI-powered bots have already become increasingly pervasive on the internet, but new-generation AI will raise their impact to an entirely new level. These bots, programmed to simulate human behaviour, account for a significant portion of online activity, with recent estimates suggesting they make up two-thirds of internet traffic. Bots serve various purposes, from automating tasks to engaging with users on social media platforms. While some bots are harmless tools used for customer service or news aggregation, others are malicious, designed to spread misinformation, including on behalf of State, corporate and supranational entities, to manipulate public opinion and, in the hands of rogue players, to engage in cyber-attacks.
Decline in Human-to-Human Engagement:
The growing presence of bots and automated systems has contributed to a decline in authentic human-to-human engagement on the internet. As bots simulate human behaviour, they can deceive users into believing they are interacting with genuine individuals, leading to a false sense of community. Consequently, the emotional connections that once fuelled meaningful conversations and debates are being replaced by automated responses and pre-programmed narratives.
During elections, extensive networks of AI-powered bots can be deployed on social media platforms to influence public opinion. These bots spread disinformation, amplify divisive content, and artificially inflate the popularity of certain political candidates. As a result, genuine human voices can be drowned out and the democratic process marred by manipulated narratives.
Impact of Anonymous and Fake Accounts on Social Media:
The anonymity offered by social media platforms has given rise to a proliferation of fake accounts. These anonymous accounts, combined with the ability to like, dislike, retweet, or share content, have created echo chambers that reinforce individuals’ pre-existing beliefs and values. This self-reinforcement of ideas has contributed to an erosion of the ability to engage in meaningful debate and has fostered a culture of divisiveness.
Online discussion within social movements is vulnerable to becoming infected with bots and fake accounts that impersonate activists. These AI-driven entities flood comment sections and forums with generic and repetitive responses, creating an illusion of massive support for their cause. As a consequence, genuine human activists struggle to have their voices heard, and meaningful dialogue can be stifled by artificial interactions.
The Rise of Anger, Division, and Tribalism:
The use of anonymous accounts and the spread of misinformation have intensified the polarizing nature of online discourse. Social media platforms, originally intended to foster connections and promote healthy discussions, have now become battlegrounds for ideological warfare. Users are more inclined to attack and vilify opponents rather than seek common ground, leading to a society increasingly divided along political, social, and cultural lines.
In response to a controversial news event, a hashtag campaign launched on social media platforms can be hijacked by a multitude of anonymous trolls and bots who flood the hashtag with offensive and inflammatory content. As a result, what could have been an opportunity for constructive discourse turns into a hostile battleground of anger and animosity.
Psychopaths in the Noise and Chaos:
In this chaotic landscape, psychopathic individuals, about one in every hundred people, have found an ideal breeding ground to manipulate and exploit emotions for personal gain or pleasure. These individuals, commonly referred to as “trolls,” can exert a disproportionate influence on online engagement. They thrive on sowing discord and instigating conflicts, further escalating the division and animosity within online communities.
Public figures often face relentless online harassment and abuse from anonymous psychopaths leveraging multiple fake accounts. These individuals orchestrate campaigns of vitriolic attacks to undermine their target’s credibility, incite hatred among their followers, and create an atmosphere of fear and division in the online community.
Media’s Role in the Post-Truth Era:
Traditional media has also faced challenges as the digital age progresses. The quest to attract audiences and increase ratings has led some media outlets to lower journalistic standards. The rise of “clickbait” and sensationalism has contributed to the blurring of lines between fact and opinion, heralding an era where truth becomes subjective and malleable.
During major elections, a few mainstream media outlets resort to sensational headlines and misrepresent facts to attract more viewership and readership, even alleging widespread corruption of a national vote. This not only distorts the public’s perception of events but also triggers heated debates on social media, where misinformation and conspiracy theories thrive, further contributing to the post-truth era.
Societal Cohesion and Democracy at Risk:
The effects of AI, bots, and social media on societal cohesion and democracy are alarming. With public discourse becoming increasingly toxic, the capacity to engage in constructive dialogue and find common ground is diminished. The erosion of trust in information sources undermines the foundations of democracy, where informed decision-making relies on reliable and factual information.
The polarization caused by bots, anonymous accounts, and misinformation can reach a tipping point in highly contested and close elections. The increasing division in society makes it difficult for political leaders to reach consensus and collaborate on crucial policies, leading to legislative gridlock and hampering the democratic process.
AI’s Role and Future Implications:
As AI continues to advance, it has the potential to further exacerbate the challenges posed by bots, fake accounts, and misinformation. AI algorithms can easily amplify echo chambers and target individuals with tailored content, deepening the divisions in society. However, AI also offers the potential to detect and mitigate harmful content, making responsible AI deployment essential for preserving democratic values.
In response to the rise of AI-powered misinformation, tech companies and social media platforms invest in AI-driven algorithms to detect and remove harmful content. However, especially during the Covid pandemic, these algorithms also faced criticism for inadvertently suppressing legitimate free speech and opinions, raising concerns about the potential for AI to become a tool of censorship in the digital sphere.
What are the Solutions for a Healthy Digital Ecosystem?
The trajectory of increased AI, bots, fakes, psychological attacks, and manipulation, coupled with censorship, poses significant challenges to humanity and democracy itself. If left unchecked, these trends could further exacerbate divisions, erode trust in institutions, and undermine the foundations of democratic societies.
However, recognizing the risks and taking decisive action can pave the way for solutions and countermeasures that promote meaningful dialogue and preserve democratic values. To safeguard those values and ensure a healthy digital ecosystem, proactive measures must be taken to combat these challenges while preserving the integrity of human engagement and public discourse.
Preserve Democracy and Free Speech:
One of the fundamental principles of democracy is the protection of free speech and the exchange of diverse ideas. To counter the rise of bots, fake accounts, and censorship, it is essential to promote transparency and accountability on social media platforms. Implementing strict regulations to identify and eliminate bots and fake accounts, while ensuring that algorithms do not inadvertently suppress legitimate content, can help restore trust in digital platforms as spaces for authentic engagement.
This is complex ground, requiring a wide range of expertise to get the balance right. Content creators publishing under their own names must be afforded the full protection of free speech laws, but anonymous content creators clearly must not. These must become identifiable, even if only through credible and rapid curated processes, so that they can be held to account through civil suits and, where relevant, prosecution for spreading malicious, wilfully reckless content designed to defame persons or to egregiously mislead the public into believing something that they know, or ought to know, to be untrue.
Enhance Digital Literacy:
Promoting digital literacy is crucial to empower individuals to critically assess information and distinguish between reliable sources and misinformation. Schools, universities, and public institutions should prioritize educating people on how AI and algorithms influence online interactions and how to identify and combat deceptive practices. Digital literacy programs can arm users with the tools to protect themselves against manipulation and enable them to engage in constructive discussions.
Foster Civic Engagement:
Society ought to prioritize fostering civic engagement to encourage active participation in democratic processes. Creating online forums and platforms that facilitate civil and respectful discussions on key issues can help bridge divides and promote understanding among individuals with different viewpoints. Initiatives that encourage people to engage in meaningful dialogue can shift the focus from broadcasting and shouting to listening and understanding.
Regulate AI and Algorithms:
Governments and tech companies need to collaborate on developing ethical guidelines for AI and algorithms. AI should be designed to serve the public interest and uphold democratic values. Striking the right balance between personalisation and exposure to diverse perspectives can ensure that users are not sealed into echo chambers, reducing the risks of ideological isolation.
Encourage Fact-Checking and Verification:
Good fact-checking organizations play a critical role in countering misinformation, but bad ones sow a toxic distrust. Governments and media outlets should support and promote fact-checking initiatives to increase public awareness of fake news and deceptive narratives. Collaborative efforts between these organizations and social media platforms can provide users with real-time fact-checking tools to verify the accuracy of information, with regulatory oversight to police the ownership, fitness and integrity of fact checkers.
Strengthen Media Literacy:
Media literacy should be incorporated into education curricula to equip individuals with critical thinking skills necessary for evaluating news and media content. Understanding media biases, recognizing clickbait, and questioning the sources of information can foster a more discerning public, less susceptible to manipulation.
Emphasise Responsible AI Development:
Tech companies should prioritize the ethical development and deployment of AI technologies. Ensuring that AI systems are transparent, accountable, and unbiased can mitigate the risks of using AI for harmful purposes. Collaborative research and open dialogue among stakeholders can lead to the responsible integration of AI into our digital lives.
What Role can Balanced Regulation Play?
Encouraging engagement, debate, and relationships while minimizing algorithmic manipulation requires a multifaceted approach that combines legal, technical, and ethical measures. Regulation will need to be reflected globally and strike a careful balance between preserving freedom of expression and protecting users from harmful content and polarization. Here are some practical considerations.
Transparency and Accountability:
Require social media platforms to be transparent about their algorithms and how they prioritize content. Users should know why they see certain posts and how the platform curates their feed. Establish independent oversight bodies to audit platforms’ algorithmic processes and assess their impact on public discourse and polarization.
Algorithmic Fairness and Diversity:
Encourage platforms to diversify content presentation and avoid creating filter bubbles by showing users content from various perspectives, even those with whom they may not agree. Implement strict guidelines against discriminatory algorithms that perpetuate biases based on race, gender, or other sensitive characteristics.
Empower User Control:
Give users more control over their social media experience, such as allowing them to customize their feeds, opt-out of algorithmic recommendations, and mute or block certain content. Provide users with access to clear and simple explanations of how their data is used to personalize their experience.
Fact-Checking and Disinformation Mitigation:
Collaborate with fact-checking organizations to verify the accuracy of content and label misinformation. Provide warnings for flagged or disputed information. Limit the reach and visibility of content found to be false or misleading, without resorting to full-scale censorship.
Combatting Bot and Fake Account Influence:
Implement measures to detect and remove bots and fake accounts that contribute to algorithmic manipulation and viral spread of misinformation. Set stricter verification requirements for accounts with significant reach or influence.
Promoting Civil Discourse and Empathy:
Reward positive engagement and constructive comments to create an online culture that values empathy and respectful discussion. Establish mechanisms to penalize or limit the reach of accounts that engage in toxic behaviour, harassment, or spreading hate speech.
User Education and Media Literacy:
Promote digital literacy and critical thinking skills through educational programs to help users understand the impact of algorithms and the importance of seeking diverse perspectives. Teach users how to identify and respond to manipulative content effectively.
Interoperability and Data Portability:
Encourage interoperability between platforms to foster competition and give users the freedom to move their data and connections to different services. Facilitate data portability to empower users to switch platforms without losing their social networks and relationships.
Collaboration and Global Standards:
Foster international cooperation to set global standards for social media regulation to ensure consistent and effective measures across platforms and regions.
How to Thwart ‘Cancel Culture’
Thwarting Cancel Culture through regulation is a complex issue that requires careful consideration of individual rights, freedom of speech, and societal values. While addressing the negative aspects of Cancel Culture, we must be cautious not to inadvertently stifle legitimate free expression or discourage whistleblowing. Here are some potential approaches to address the issue without damaging free speech:
Transparency and Due Process:
Social media platforms can establish clear guidelines on what constitutes acceptable behaviour and what actions may lead to content removal or account suspension. Users should have access to transparent appeal processes to contest decisions.
Accountability for Anonymous Accounts:
Requiring users to verify their identity when registering on social media platforms can help reduce the abuse of faceless, untraceable accounts.
Absent civil laws, regulation and processes that can unveil anonymous accounts, a two-tier system could be established, allowing users to post anonymously on non-controversial topics but requiring identity verification for more sensitive discussions or potentially harmful content.
Differentiating Cancel Culture from Constructive Criticism:
Encourage platforms to distinguish between genuine concerns raised by users and coordinated campaigns that seek to silence individuals or groups. Social media platforms should be vigilant in identifying and addressing harassment and organized bullying.
Safeguarding Whistleblowing:
Implement a specific mechanism to protect whistleblowers and their identities when reporting corruption, abuse, or other illegal activities. Anonymous reporting channels can be established to ensure safety and confidentiality.
Media Literacy and Digital Citizenship:
Promote media literacy and digital citizenship through education and public awareness campaigns. Empower users to critically evaluate information, engage in constructive discussions, and be responsible digital citizens.
Empower Users to Moderate Their Own Experience:
Allow users to customize their social media experience by providing tools to filter or mute certain content, accounts, or keywords that they find offensive or harmful.
Collaborate with Civil Society:
Social media platforms can collaborate with civil society organizations, human rights groups, and experts to develop best practices for content moderation and handling contentious issues.
Independent Oversight and Audits:
Establish independent oversight bodies or auditing mechanisms to evaluate platforms’ content moderation policies and their impact on free speech and individual rights.
Global Standards and Multilateral Cooperation:
Foster international cooperation to set common standards for handling Cancel Culture-related issues, ensuring consistent policies and protections across platforms and regions.
Protecting individuals from social media mobbing and mitigating the damage caused by Cancel Culture requires further steps, involving both preventive measures and responsive actions. Here are some practical ideas that can be considered:
Prevention and Education:
Promote digital literacy and media education to raise awareness of the potential consequences of mobbing and the impact it can have on individuals’ lives.
Teach users about respectful online behaviour, empathy, and the value of constructive criticism over online bullying. Encourage individuals to verify information before sharing or engaging in mob-like behaviour.
Reporting Mechanisms:
Social media platforms should provide straightforward and efficient reporting mechanisms for individuals who are victims of mobbing or harassment. Establish dedicated teams to review and respond to reports promptly to prevent further escalation.
Algorithmic Safeguards:
Social media algorithms should be designed to minimize the virality of harmful content and discourage mob behaviour. Platforms can prioritize showing content from trusted sources or verified accounts to reduce the influence of mob-like campaigns.
User Empowerment:
Allow users to control their privacy settings, manage who can interact with their content, and choose who can follow them. Provide options for users to moderate comments and replies on their posts to avoid harassment.
Independent Mediation:
Create an independent body or ombudsman to mediate and resolve disputes arising from mobbing incidents on social media. This body could work with platforms to implement appropriate remedies, such as content removal or public apologies.
Legal Protections:
Explore legislative measures that provide protections against online harassment and mobbing. Establish clear definitions and guidelines to differentiate between free speech, legitimate criticism, and harmful mob behaviour.
Media and Public Awareness:
Media outlets and public figures can play a role in raising awareness of the consequences of mobbing and in promoting responsible online behaviour. Encourage prominent figures to call for civility and constructive engagement in online discussions.
Support Networks:
Establish support networks and organizations to provide assistance and advice to individuals affected by online mobbing. These networks can offer emotional support and guidance on navigating the challenges of addressing mob-like attacks.
Swift and Effective Responses:
Social media platforms should respond quickly to reports of mobbing to prevent further harm. In cases of harassment or organized campaigns, platforms should take immediate action to address the issue and protect the individual.
Conclusion
The internet is heading towards lifelessness, a non-human ecosystem where people do not engage directly with each other anymore. This is not just another ho-hum moment in the age of technology, but one that poses serious risks to long established social ecosystems. If we can no longer tell truth from fiction, how can we maintain the protocols that bind us?
This isn’t a fanciful question; it is playing out in the world’s most powerful country, where a large minority is convinced the 2020 Presidential Election was corrupt and the attack on Capitol Hill justified. This is the moment of danger, of falling for what is not, that Macbeth understood before falling for it anyway:
“This supernatural soliciting
Cannot be ill, cannot be good. If ill,
Why hath it given me earnest of success,
Commencing in a truth? I am Thane of Cawdor.
If good, why do I yield to that suggestion
Whose horrid image doth unfix my hair
And make my seated heart knock at my ribs,
Against the use of nature? Present fears
Are less than horrible imaginings:
My thought, whose murder yet is but fantastical,
Shakes so my single state of man
That function is smothered in surmise,
And nothing is but what is not.”