
Bots, censorship and the death of the internet

Who’s to say what we see online?

Social media in all its forms - text, audio, and video - has provided the globe with an intercultural experience.


This is an opinion piece written by Honorary Professor Katina Michael, a lecturer in the Faculty of Business and Law and an expert in politics and emerging technologies. The tenor of the article was edited by The Stand with Professor Michael's consent.

We can see ourselves in many of the posts, and we can relate to the human experience, whether by laughing at ourselves, learning about things we were unaware of, or contributing something that might help another. Social media was meant to allow for the formation of a social network, one not limited by geography, and to provide near-instantaneous communication. When was the last time anyone wrote a letter by hand? These have been replaced by short messages over IP-based platforms.

But rather than simply being a place to share photos, videos and short messages within one's family, between families, between workers and by advocates, some aspects of social media have become a venue for unmoderated propaganda, harm, bullying, violence, hate speech, indecent exposure and more. And this is without pointing to the disinformation campaigns backed by highly skilled social engineers with technical know-how.

Social media allows intelligence gathering at the crucial meso layer, where organisations and communities congregate to make decisions. As the meso layer feeds into both the micro (everyday citizens and employees) and the macro (society at large), the splintering of thought and the polarisation of communities further impacts any government's ability to control the masses.

The introduction of AI has meant that social media users are presented with potentially undetected approaches from bots, deepfakes and disinformation. The authenticity of a news source is immediately brought into question if a persona cannot be identified because they are not known to the user, or if the source is not corroborated by mainstream media. We have seen the manipulation of online platforms like X (formerly Twitter) during election campaigns, referendums, conflicts and more. Campaigns can be launched with precision, taking into consideration microtargeting techniques personalised to individual user sentiment.

Is the dead internet theory coming to life?

The dead internet theory is an online conspiracy theory which holds that the Internet is now predominantly run by bots and by algorithms that can influence the sentiment of everyday citizens. While the infiltration of bots on the Internet, in addition to deepfakes, is acknowledged, bot activity is detectable, and simple techniques exist to determine whether it is a human, a software bot or a more sophisticated agent interacting with the Web.
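To make the idea of "simple techniques" concrete, below is a minimal sketch in Python of a request-level heuristic. The field names, keywords and thresholds are illustrative assumptions for demonstration, not a description of any platform's actual detection system.

```python
# Illustrative sketch only: a naive heuristic for flagging likely bot traffic.
# The keywords and thresholds are assumptions for demonstration purposes.

KNOWN_BOT_KEYWORDS = ("bot", "crawler", "spider", "curl", "python-requests")

def classify_visitor(user_agent: str, requests_per_minute: float,
                     solved_challenge: bool) -> str:
    """Return a rough label: 'bot', 'suspicious' or 'human'."""
    ua = (user_agent or "").lower()

    # Self-declared automation is the easiest case to spot.
    if any(keyword in ua for keyword in KNOWN_BOT_KEYWORDS):
        return "bot"

    # Sustained request rates far above human browsing speed are a red flag.
    if requests_per_minute > 120:
        return "suspicious"

    # Passing an interactive challenge (e.g. a CAPTCHA) is weak evidence of a human.
    return "human" if solved_challenge else "suspicious"

if __name__ == "__main__":
    print(classify_visitor("Mozilla/5.0 (Windows NT 10.0)", 6.0, True))   # human
    print(classify_visitor("python-requests/2.31", 300.0, False))         # bot
```

In practice, detection and evasion are an arms race: sophisticated agents mimic human timing and browser fingerprints, which is why simple bots and "more sophisticated agents" are worth distinguishing.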

We need to remember that the Internet is where about 25% of commerce takes place today. If humans lose trust in the online realm, it will have huge impacts on consumers and their shopping habits, not to mention the economy.

While we may see bots as negatively impacting us at present, agentic systems will complicate matters further when each consumer launches an agent to work on their behalf, scouting for the lowest prices, aiding in decision-making, and doing business when certain defined thresholds are met (see the sketch below). This is just as important for consumer-to-business transactions as it is for business-to-business transactions.
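As a loose illustration of such threshold-based agentic behaviour, here is a small Python sketch. The merchants, offers and price threshold are entirely hypothetical; a real consumer agent would query live merchant systems rather than a hard-coded list.

```python
# A minimal sketch of a consumer-side agent that scouts prices and acts
# only when a defined threshold is met. All data here is hypothetical.

from dataclasses import dataclass

@dataclass
class Offer:
    merchant: str
    price: float

def best_offer(offers: list[Offer]) -> Offer:
    """Scout the available offers and return the cheapest one."""
    return min(offers, key=lambda o: o.price)

def maybe_purchase(offers: list[Offer], max_price: float) -> str:
    """Buy only if the best offer falls at or under the consumer's threshold."""
    offer = best_offer(offers)
    if offer.price <= max_price:
        return f"Purchased from {offer.merchant} at ${offer.price:.2f}"
    return f"No purchase: best price ${offer.price:.2f} is above ${max_price:.2f}"

if __name__ == "__main__":
    offers = [Offer("Store A", 129.00), Offer("Store B", 118.50), Offer("Store C", 124.00)]
    print(maybe_purchase(offers, max_price=120.00))
```

The same pattern extends to business-to-business dealings, where both sides of a transaction may eventually be delegated to agents negotiating against each other's thresholds.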

Social media can be considered a form of open-source intelligence (OSI), if it is real users who are posting real sentiments. But what if those sentiments were flooded with targeted disinformation campaigns, generated by virtual agents on a mission to disrupt natural order, which then led to misinformation between human users?

Additionally, generative AI is being used to help people edit or create catchy, comprehensible posts on social media. Some users blindly trust GenAI with their messaging, without deeper scrutiny. This can lead to a loss of one's recognisable persona in individualised messaging, which further exacerbates distrust.

This is not to say that some state and non-state actors are not using GenAI capabilities, either to flood territories and specific communities with differing sentiment or, in fact, to positively influence the actions of specific communities through rhetoric that is crafted by design.

Professor Katina Michael is an academic in the Faculty of Business and Law. Picture: Paul Jones

Power, programming and politics

We used to call this kind of news propaganda, but now there is the capability to enact change, in positive and negative ways, using online means rather than dropping leaflets from planes. The messaging is also personalised, not just to a community or even a given neighbourhood, but to an individual.

It is controversial that Meta or any other organisation would encourage what might be considered hyper-partisanship. Any organisation that meddles with a user's freedom to access the information they require is curbing their right to know. Whether it is allowing for online searches of one party over another, the suppression of an opposing party, or the automatic rolling over of official accounts to the newly established government without user consent, all can be seen as directly meddling with political processes.

Meta has also ended third-party fact-checking on its platforms, including Facebook, WhatsApp and Instagram, beginning with the US, and is systematically loosening other content-related restrictions. While industry and government ties have always been pronounced throughout history, the ability for any government to control the flow of information on the Internet provides a new type of denial of service: one that is not propagated by hackers, but is systematically delivered.

While advertising campaigns are allowed for political parties, they come with explicit declarations. Unfortunately, the almost overt manipulation occurring today is being passed off as "unintentional errors" or conducted in the name of "safety, security and stability". Whether an error or not, the outcome can be considered dubious.

 

As people rely on the Internet on a minute-to-minute basis, their response to these kinds of manoeuvres might be a chilling effect, likely continued cynicism, or, even worse, complete apathy. But this is not what Tim Berners-Lee had in mind when he invented the World Wide Web, which was meant to be "open" and "accessible". The Internet was not meant to be monopolised by BigTech, but to be a place to share and to bring the world closer together.

 

While social media still exists in its current form, OSI can provide a lens into what is happening in the world; but the minute social media is undermined by the introduction of billions of fake images or fake news stories, the OSI side of it, for intelligence, marketing and education, will become useless. The problem with this? While some people will categorically say this scenario is the worst thing that could possibly happen in the modern world, others will celebrate the demise of the Internet because it would equate to a toppling of the public surveillance mechanism ingrained in social media.

I believe public interest technology will prevail: the people will create their own infrastructure for their own communities; regulation will be introduced in the interests of all stakeholders, not just the powerful; and BigTech and advertising will not have the imprint they have today on ecommerce, akin to a revolving door in an endless loop.

The next prompt might well be “Hey GPT, build me a social media platform that’s better than Meta, where my community can thrive.”