How Digital Media Shapes Our Perception of Chaos

I recently saw someone ask the following question on a social media platform:

“Is the world more of a mess at the moment or is it that because of digital media reach we are just exposed to how much mess there is in the world?”

It is an interesting question, because we are often quick to believe that things like corruption, crime or even the cost-of-living crisis only happen where we are, and in prior years we would only have been exposed to local occurrences of them. Now, though, we are exposed to the whole world’s “bad things” on a minute-by-minute and sometimes – depending on your choice of social media platform – second-by-second basis.

On the one hand it can be reassuring to know that we are not alone in experiencing certain negative things, but on the other hand, as the psychological practice Elevate Counselling + Wellness puts it, “We weren’t designed to carry the weight of war updates from distant countries, environmental catastrophes, political dramas and endless tragedies about people we’ll never meet.” Constant exposure of this kind damages our mood: a Harvard study showed that negative news not only raises our stress levels immediately, but that just a few minutes of it in the morning can leave us in a negative mood up to eight hours later. That is quite literally your entire day ruined.

What makes the situation even worse is that, increasingly, the “people” serving us this bad news are not even real. The 2025 Imperva Bad Bot Report revealed that, for the first time in history, bot traffic (both good and bad) exceeded human traffic, making up 51% of all web activity, while research company Statista reports that fraudulent traffic from bad bot actors accounted for 37% of all global web traffic in 2024. Facebook reported in the same year that at least 25% of its user base consisted of fake or automated accounts, and while Elon Musk claims to have reduced the number of bots on his platform dramatically, recent figures suggest that up to about 15% of all X profiles are fake – a percentage that equates to about 48 million profiles.

Of course, not all bots are bad. Some, like customer service chatbots, content management bots or data-harvesting bots, are mostly benign. However, with the rise of artificial intelligence (AI) technology, the number of bad bots is growing.

Numerous studies have shown how bots are used to influence public opinion, create emotional or even real-life reactions in groups of people, sow panic, cause chaos, spread false information or even drown out good sources of information. For example, US news channel KARE 11 reported that, following the 13 July 2024 assassination attempt on then former US president Donald Trump, AI disinformation detection company Cyabra found that 45% of the thousands of profiles promoting various conspiracy theories – including that Trump had staged the incident to boost support ahead of the election – were bots. The disinformation those bots were spreading had reached “a potential 595 million people”.

AI makes it easier for cyber criminals and other “bad actors” not only to create bots but to improve their evasion mechanisms once they are detected, making them harder to detect the next time they are deployed. Cyber Press reports that these tools have allowed bots to evolve in sophistication, with 55% of current bot attacks classified as “moderate or advanced”. More worryingly, though, AI is also “democratising” bot development, making it far easier for “less-skilled attackers to launch high-volume, low-complexity attacks”.

It might seem that the only sure way to avoid being manipulated is to avoid social media altogether, but there are ways to spot a bot that are a lot less drastic. Some of the clues to look out for are strange or generic names, and photographs that either look like stock images or do not match the username (for example, a photograph of a young blonde woman on a profile named “David Bishop”). The language used in posts is also an important clue, especially if it is overly hyperbolic and designed to elicit an immediate reaction, while the time of a post is often another giveaway – not too many people are up and posting on social media at 2:30am, or at least not intelligibly.

Other clues include the date a profile was created, the number of previous posts made, the number of followers it has and the number of profiles it follows, as well as whether it posts about a variety of topics or if it posts repetitively about the same topic.
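
For the technically curious, these clues can even be combined into a rough score. The short Python sketch below is purely illustrative – the profile fields, thresholds and example account are all hypothetical assumptions of mine, not a real detection system – but it shows how a handful of the signals above might be tallied.

```python
# Illustrative sketch only: fields, thresholds and the example profile
# are made up; real bot-detection systems are far more sophisticated.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Profile:
    username: str
    created: datetime
    post_count: int
    followers: int
    following: int
    post_hours: list[int]      # hour of day (0-23) for each recent post
    recent_topics: list[str]   # coarse topic label for each recent post


def red_flag_count(p: Profile) -> int:
    """Count how many of the clues described above a profile trips."""
    flags = 0

    # Strange or generic name, e.g. a plain word ending in a run of digits.
    if sum(ch.isdigit() for ch in p.username) >= 4:
        flags += 1

    # Brand-new account that is already posting at high volume.
    if (datetime.now() - p.created).days < 30 and p.post_count > 500:
        flags += 1

    # Follows far more profiles than follow it back.
    if p.following > 10 * max(p.followers, 1):
        flags += 1

    # Most posts made in the small hours (the 2:30am problem).
    night = sum(1 for h in p.post_hours if 1 <= h <= 4)
    if p.post_hours and night / len(p.post_hours) > 0.5:
        flags += 1

    # Repetitive posting about a single topic.
    if p.recent_topics and len(set(p.recent_topics)) == 1:
        flags += 1

    return flags


# A made-up profile that trips several of the clues at once.
suspect = Profile(
    username="david84726193",
    created=datetime.now() - timedelta(days=10),
    post_count=2400,
    followers=12,
    following=4800,
    post_hours=[2, 2, 3, 3, 4, 2],
    recent_topics=["election"] * 6,
)
print(f"red flags tripped: {red_flag_count(suspect)} of 5")
```

Tripping several flags at once is not proof of automation, of course, but a profile that does deserves extra scepticism.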

To answer the initial question, then: the best way to avoid being manipulated into believing that the world is more of a mess today is to remain sceptical of what we read and of the sources we get it from.

Until next time, enjoy your journey.

David Bishop
