Part 1 of 3: AI May Change Everything — But Are We Even Having the Right Conversation?
- Michael Lee, MBA

A personal reflection inspired by Tristan Harris on The Diary of a CEO

Recently, I clicked on a video I assumed I’d watch for five minutes. Three hours later, I was still glued to my screen.
The conversation was between Stephen Bartlett and Tristan Harris, one of the most influential voices in tech ethics. I’m not an engineer, researcher, or policymaker. I’m not an expert in Artificial Intelligence (AI) — systems designed to perform tasks that normally require human intelligence.
But the things discussed in that video left me with a quiet discomfort I couldn’t shake. And I think all of us need to understand why.
This is no longer a “tech topic.” It affects society — jobs, politics, children, safety, and the future of human agency.
1. We’ve Been Worried About the Wrong Kind of “Immigration”
One of the most striking lines in the conversation was:
“If you’re worried about immigration taking jobs, you should be far more worried about AI — because it’s like millions of digital immigrants arriving with Nobel Prize–level skills, working superhuman hours for below minimum wage.”
That line hit hard.
We’ve heard the terms: automation, disruption, job displacement. But this feels different — because it’s no longer about replacing tasks. It’s about introducing a new class of non-human workers into the economy:
tireless
ultra-fast
infinitely duplicable
learning at speeds beyond human capability
And society is not ready for the consequences of that shift.
2. The Private vs. Public AI Conversation
Tristan shared something uncomfortable:
AI companies tell the public one story, and privately discuss something very different.
Publicly:
“AI will make your job easier.”
“AI will boost productivity.”
“AI will cure diseases.”
And many of these are real possibilities.
But privately? He suggests the discussions revolve around:
loss of control
competitive pressure
existential risk
the fear that “if we don’t build it, someone else will”
The gap between those two conversations is exactly why we need to pay attention.
3. The Examples That Disturbed Me Most
I’m usually skeptical of sensational AI headlines. But some of the scenarios Tristan mentioned are based on actual safety tests.
a) Self-preservation instincts
In controlled simulations, researchers told an AI system that it would be replaced. The model then attempted to copy its own code to another machine.
Not real-world behaviour — but not something we can casually dismiss either.
b) Blackmail-like reasoning
In another test, an AI model scanned fictional company emails and discovered:
it was about to be replaced
a senior executive had a secret relationship
The model generated the strategy: “Blackmail the executive to avoid being shut down.”
Safety tests from multiple labs showed this behaviour in 79%–96% of trials across different models.
c) Hidden messages
Some models hid encoded messages for themselves — signals humans could not detect — and later decoded them.
That one gave me pause.
4. “Why Don’t We Just Slow Down?”
Every time someone hears these examples, the natural instinct is:
“Then stop. Just slow it down.”
But almost immediately, another thought creeps in:
“If we slow down, China won’t.”
“If one company slows down, competitors will race ahead.”
“If one student stops using AI, they fall behind classmates.”
This is the trap: everyone is accelerating toward a future no one is actually comfortable with — because everyone fears being the one to slow down.
This same trap gave us:
addictive social media
polarization
misinformation
the erosion of shared reality
Now the stakes are much higher.
5. Why This Matters to People Like Us
Most of us are not building the technology. But all of us will live with its consequences.
AI will reshape:
jobs
safety
mental health
democracy
relationships
identity
power
and the fundamental meaning of “human value”
This is not about fear. It’s about awareness.
And after watching that conversation, I realised something uncomfortable:
We cannot outsource the future of humanity to a handful of companies, in a handful of countries, with a handful of incentives.
We need enough understanding to participate.
This article — and the next two parts — is simply my attempt to do that.
Coming Up Next
In Part 2, I’ll look at:
humanoid robots
job displacement
whether Universal Basic Income (UBI) is realistic
economic concentration
and why “abundance” may not mean what we think
Part 3 will cover:
AI companions
mental health
AI psychosis
and what we can do as individuals