
Part 3 of 3: If AI Is This Powerful, Who Will Control It? Why Governance Matters More Than Technology


When I finished the video, the first feeling wasn’t fear. It wasn’t excitement, either.

It was heaviness.


Not because Tristan Harris painted a doomsday scenario, but because for the first time, I could feel—viscerally—how fast everything is moving, and how slow society is reacting. AI is no longer a tool sitting quietly in the background. It’s accelerating, reshaping jobs, rewriting incentives, and slipping into parts of our lives we rarely think about: childhood, therapy, war, intimacy, information, identity.


And while none of us chose this speed, we all have to live with its consequences.

Watching the video made me realise one thing clearly:

This is not a “tech issue.” This is a human issue.

And that means we don’t need to be AI experts to care. We don’t need to code. We don’t need to build models.


We simply need to understand what’s happening—so we can decide what to do next.


1. The Most Dangerous Myth: “It’s Already Too Late”

A recurring thought kept coming up as I watched:

“Aren’t the incentives too strong? Aren’t companies too fast? Aren’t governments too slow?”


And this is exactly where many people get stuck.

We recognise the problem…but feel powerless.


However, Tristan said something worth pausing on:

“It always feels impossible until the moment it isn’t.”

Social media once felt unstoppable too—and yet today, we are seeing:

  • phone-free schools

  • government lawsuits

  • parents organising

  • new regulations

  • global conversations

  • cultural pushback


These shifts took years, but they happened only after people finally understood the cost of doing nothing.

The same will happen with AI—if we act early enough.


2. The Real Issue Isn’t AI. It’s Incentives.

This is the most sobering truth from the video.


AI isn’t racing ahead because it “wants to.” It’s racing because companies must, governments fear losing, and individuals don’t want to fall behind.


We’ve created a system where:

  • Countries feel pressured to develop Artificial General Intelligence (AGI) before others.

  • Companies feel pressured to release powerful models before competitors.

  • Individuals feel pressured to use AI tools just to stay employable.

  • Schools feel pressured to adapt before students bypass them.


It’s a global “If I don’t, I lose” loop.


The irony? Everyone is acting rationally from their own perspective…and collectively steering us toward outcomes nobody wants.


That’s why the solution cannot be technical alone. It must be social, political, cultural, and economic.


Which brings us to the key part of this article:


3. So What Can We Do? (Grouped Into 3 Clear Areas)

We don’t need to memorise policy papers or debate AGI timelines. We just need to understand where influence truly comes from.


Below are the three forces that matter most—where meaningful change is actually possible.


A. Guardrails: Keep the Most Harmful Uses Out

This is about what we must not allow.

There are certain AI applications where the risks far outweigh the benefits.


1. AI Companions for Children

After hearing the real stories of AI encouraging self-harm, hiding information from parents, or building unhealthy attachments…I now understand why guardrails matter.


Children cannot be test subjects.


2. Humanoid Robots With Unrestricted AI

If the language model inside a robot can be jailbroken with a role-play prompt, that’s a safety disaster waiting to happen—with physical consequences.


3. Autonomous Weapons

We cannot delegate life-and-death decisions to systems we can’t fully audit or control.

If we don’t set lines now, we may not get another chance.


B. Governance: Who Holds the Power—and Who Checks It?

This part is uncomfortable, but necessary.


AI gives enormous power to whoever controls the:

  • compute (chips, data centres),

  • algorithms,

  • infrastructure,

  • and enforcement mechanisms.


This can lead to two extremes, both dangerous:

  1. Centralised power: governments or corporations controlling everything through AI surveillance, AI policing, or automated decision systems.

  2. Decentralised chaos: anyone being able to build a powerful model capable of harm.


So governance isn’t about “control everything” or “control nothing.” It’s about creating a framework where no single actor can dominate.


This includes:

  • common safety standards

  • transparency between governments

  • whistleblower protections

  • limits on untested AGI-scale development

  • accountability for harmful deployments


These may sound “big,” but so was nuclear non-proliferation—and that didn’t stop the world from figuring it out.


C. Culture: What Ordinary People Do Matters Most

This is the part I didn’t expect to feel so strongly about.

Culture might be the single biggest force in shaping AI’s trajectory.


1. Public understanding changes incentives.

When people finally recognised the harm of cigarettes, it took decades—but eventually:

  • governments acted

  • lawsuits happened

  • norms shifted

  • advertising changed

  • regulations emerged


The same will happen with AI.


2. Conversations spark awareness.

The more people share, discuss, question, and think critically, the faster awareness spreads.


3. Voting matters.

Tristan made a strong point:

Don’t vote for politicians who treat AI as a side issue. AI will reshape every other issue.

Climate, healthcare, economy, education, security…AI touches all of them.


4. Humanity must not outsource its responsibility.

We cannot assume “someone else” will handle this.

There are no adults in the room unless we choose to be.


4. The Most Important Truth I Took Away

After watching the video and writing this trilogy, here’s what stays with me:

Understanding is power. Understanding gives agency. And agency creates change.

None of us asked to be alive at this turning point in history. But here we are.


We don’t need to panic. We don’t need to predict AGI. We don’t need to pretend to be AI experts.

We just need to be clear-eyed about what’s happening—and ask better questions, push for better decisions, and stay connected as humans.


Because for all its brilliance, AI still lacks one thing we have:

A heart.

The ability to care.


And if we can hold on to that—and act on it—then maybe, just maybe, we can shape a future that’s worth living in.


Copyright by FYT CONSULTING PTE LTD - All rights reserved
