We used to fear that AI would create the perfect lie.
Now, the real threat is something worse: that everyone believes what the bots want them to believe.
This is not about a rogue deepfake.
This is about a coordinated machine – bot farms – churning out emotional, divisive content at a scale that no human network can match.
They’re not just impersonating us.
They’re outnumbering us.
They’re manipulating us.
They’re getting us to think (exactly) how they want.
No… this is NOT a conspiracy theory.
Imagine thousands of phones in racks, each running social media accounts powered by AI-generated faces, voices, bios, and political opinions.
These AI-generated profiles spend months following real people (and other AI bots), slowly accumulating followers of their own, commenting on other profiles and adding “value” to fake “engagement”… all while amplifying and sharing the “message” their owners are being paid to publish.
Now imagine those accounts liking, sharing, commenting – in perfect sync – to make a fringe idea feel mainstream… to normalize it.
That’s not just misinformation.
That’s manufactured belief.
You’ve probably fallen for it and felt it.
That moment when something weird or extreme pops into your feed and you think, “Is this actually popular?”
It might not be.
It might just be a synthetic consensus, algorithmically juiced by an army of digital avatars.
And then it’s shared by someone you know, like and trust.
Again, this isn’t conspiracy theory territory.
This is a productized industry.
Bot farms are real.
They’re operated by nation states, PR firms, “influencers” and political actors.
They operate at a penny-per-action… likes, shares, comments, views and followers.
That’s all it takes to hack attention.
And once something looks like it’s trending, it gets boosted by the platforms themselves.
This is “coordinated inauthentic behavior” – Meta’s term, not mine – and it’s eating away at the internet from the inside out.
So here’s my provocation:
What happens when fake accounts outnumber real ones?
When virality becomes a hoax?
When the signal-to-noise ratio is so broken that truth sounds like a minority opinion?
The bots aren’t here to persuade you.
They’re here to exhaust you.
To make everything feel suspect.
Because when truth becomes blurry, power wins by default.
That’s the part we don’t talk about enough.
This isn’t a war on facts.
It’s a war on belief infrastructure.
If democracy depends on a shared reality, how do we defend it when the algorithm can’t tell a troll from a citizen?
We don’t need a panic button.
We need a playbook… one that prioritizes digital literacy, algorithmic transparency and emotional self-defense.
Because in a world where identity is programmable and outrage is profitable, your ability to think clearly – and pause before you share – might be the last real act of resistance.
This is what Elias Makos and I discussed on CJAD 800 AM.
Before you go… ThinkersOne is a new way for organizations to buy bite-sized and personalized thought leadership video content (live and recorded) from the best Thinkers in the world. If you’re looking to add excitement and big smarts to your meetings, corporate events, company off-sites, “lunch & learns” and beyond, check it out.