When the “$40M Computer” Says It’s Magma — and Your Best Analyst Disagrees


In a classic scene from The Hunt for Red October, a sonar operator (“Jonesy”) brings his captain something awkward: the ship’s expensive classification software has labelled an acoustic trace as “magma displacement” (essentially, a seismic event).

Jonesy doesn’t dismiss the software. He explains why it might be doing what it was built to do (it was written for seismic detection), and why that same strength becomes a weakness when it meets a novel signal. Then he does something the computer can’t: he interprets the sound in context, uses bearings over time to infer intent, and projects where the submarine is likely headed.

That’s already a useful analogy for AI.

But there’s an equally important detail many people miss: Jonesy doesn’t just think better. He communicates better. He gives the captain a concise, coherent, and compelling “data story” — one that makes action feel justified, not reckless.

And that’s exactly where many AI deployments fall short in real organisations.


AI is often excellent at the “narrow job”

Most AI systems are not general decision-makers. They are pattern engines trained to optimise a specific objective based on the data they’ve seen.

That’s why AI can be genuinely powerful at tasks like:

  • triaging and classifying requests

  • extracting fields from unstructured text

  • summarising documents and meetings

  • forecasting in stable environments

Like the submarine software, it’s fast and consistent when the signal looks like what it expects.


The hidden problem: models have “home bases”

Jonesy jokes that the system is “running home to mama.” It’s a great way to describe what models do when they’re uncertain: they often fall back on the closest familiar category — not because it’s correct, but because it’s statistically comfortable.

That “home base” can come from:

  • training data bias (what it saw a lot of)

  • objective bias (what it’s rewarded to optimise)

  • context gaps (what the model cannot know about your reality)

This is why “AI risk” is often not about one dramatic failure. It’s about quiet, plausible outputs becoming the foundation for decisions that weren’t properly challenged. That’s also why structured approaches like the NIST AI Risk Management Framework emphasise governance, validation, and human oversight rather than blind adoption.
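
One common guard against “running home to mama” is to treat low confidence as a reason to stop, not a reason to guess; this is often called abstention or selective prediction. The Python sketch below is a minimal illustration of the idea, not a production pattern; the labels, scores, and threshold are hypothetical.

```python
# Minimal sketch of an abstention guard. The classifier scores, labels,
# and threshold below are hypothetical placeholders, not a real system.

def classify_with_abstention(scores: dict, threshold: float = 0.75) -> dict:
    """Accept the top label only when the model is confident enough;
    otherwise route the case to a human instead of the 'home base'."""
    label, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return {
            "label": "NEEDS_HUMAN_REVIEW",
            "confidence": confidence,
            "note": f"model leaned towards '{label}' but was not confident",
        }
    return {"label": label, "confidence": confidence, "note": "auto-accepted"}

# A novel signal often produces a flat, uncertain score distribution:
novel_signal = {"magma displacement": 0.41, "whale song": 0.31, "unknown contact": 0.28}
print(classify_with_abstention(novel_signal))
# -> routed to a human, rather than quietly reported as "magma displacement"
```

The point of the threshold is not the number itself; it is that someone deliberately decided where “statistically comfortable” stops being good enough.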


The bigger trap is not AI bias — it’s our bias around AI

Even when people know AI can be wrong, teams can still over-trust it in the moment — especially when the output is confident, polished, and quick.

This is closely related to automation bias: our tendency to rely on automated recommendations even when contradictory evidence exists.

In the film, the risk wouldn’t just be the “magma vs submarine” call itself. The real risk would be the captain ending the conversation with: “The computer said magma, so it’s magma.”


What AI often struggles to do: contextualise, then project

The most valuable move in that scene isn’t that Jonesy disagrees with the computer.

It’s what he does next:

  1. Contextualises the signal (what else is happening, what’s unusual)

  2. Explains the limitation (why the software would mislabel this)

  3. Projects forward (bearings + time + intent → likely trajectory)

AI can support pieces of this chain. But it often lacks the organisational context and responsibility that make projection meaningful.

Which is why at FYT we’re careful about how we frame AI: it replaces tasks, not judgement.


The overlooked lesson: Jonesy wins because he tells a better story

In many workplaces, analysts assume that if the analysis is correct, the decision will follow.

But that’s not how decisions work.

Jonesy’s value isn’t only that he spots a pattern. It’s that he translates it into something the captain can act on. He does the “last mile” work:

  • Concise: he doesn’t flood the captain with technical noise

  • Coherent: he links signal → explanation → implication

  • Compelling: he makes the alternative (“it’s magma”) feel less plausible, without overclaiming certainty

This matters because decision-makers don’t just need “more data.” They need confidence that the interpretation is anchored, and that the risk is understood.

That’s a textbook example of what FYT calls the progression from Insights → Decisions — the step that breaks when analysis isn’t communicated in a way that supports action.


And there’s a second ingredient: the captain trusts Jonesy

Even the best story can fail if the storyteller has no credibility.

The scene also implies something organisational leaders understand instinctively: trust is cumulative. It’s built over many interactions where the person has:

  • shown technical competence

  • been honest about uncertainty

  • demonstrated good judgement under pressure

  • communicated clearly and responsibly

So when Jonesy says, in effect, “I think the system is wrong,” the captain doesn’t hear ego. He hears a professional who has likely earned the right to be taken seriously.

That distinction matters in AI adoption too.

Because when organisations introduce AI, they’re not just introducing a tool. They’re changing who and what people trust in the workflow.


A practical way to use AI without “magma displacement” moments

Here’s a grounded approach that works across functions:


1) Define the decision before the tool

If you can’t clearly name the decision and what “better” looks like, AI will happily produce outputs that sound insightful but don’t move anything.


2) Separate “classification” from “interpretation”

Let AI do the first pass (triage, tagging, summarising). But make humans accountable for interpretation and consequences.
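
A minimal sketch of what that separation can look like in a workflow, with a hypothetical ai_tag function standing in for a real model call: the AI’s label is stored as a suggestion, and the decision that carries consequences is recorded against a named person.

```python
# Sketch: the AI output is a suggestion; a named human owns the decision.
# `ai_tag`, the labels, and the field names are hypothetical placeholders.
from dataclasses import dataclass
from typing import Optional

def ai_tag(text: str) -> str:
    """Stand-in for a real model call doing the first pass."""
    return "billing_issue" if "invoice" in text.lower() else "general_query"

@dataclass
class TriagedItem:
    text: str
    ai_suggestion: str                    # first pass: fast, cheap, revisable
    human_decision: Optional[str] = None  # interpretation: consequential
    decided_by: Optional[str] = None      # who signed off: accountability is explicit

item = TriagedItem(
    text="Invoice 4417 was charged twice",
    ai_suggestion=ai_tag("Invoice 4417 was charged twice"),
)

# The decision is a separate, human step recorded against a name:
item.human_decision = "refund_and_audit"
item.decided_by = "j.tan"
```

Keeping the suggestion and the decision in separate fields makes the division of labour auditable: you can always see what the model proposed and who chose what to do about it.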


3) Build a deliberate challenge step

To counter automation bias, require at least one of the following (a minimal sketch of the second-method option follows the list):

  • a second method / second model

  • a subject matter review

  • a “disconfirming evidence” check
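
As an illustration of the first option, the sketch below compares two independent methods and escalates any disagreement rather than silently trusting either one; both model functions are hypothetical stand-ins.

```python
# Sketch of a deliberate challenge step: if two independent methods disagree,
# escalate instead of silently trusting either one. `model_a` and `model_b`
# are hypothetical stand-ins for a real model and a second method.

def model_a(signal: str) -> str:
    return "magma displacement"   # the pattern engine's comfortable answer

def model_b(signal: str) -> str:
    return "possible submarine"   # e.g. a rules-based check or second model

def challenged_classification(signal: str) -> str:
    a, b = model_a(signal), model_b(signal)
    if a != b:
        return f"ESCALATE: methods disagree ({a!r} vs {b!r})"
    return a  # agreement is necessary, though still not sufficient

print(challenged_classification("anomalous acoustic trace"))
```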


4) Treat projection as human work

Projection requires reasoning about constraints, incentives, and second-order effects: things AI can miss unless they are explicitly modelled and continually validated.


5) Invest in storytellers, not just systems

If you want AI to create value, you need people who can do what Jonesy did:

  • frame the problem

  • interpret responsibly

  • communicate a compelling narrative

  • state uncertainty without paralysis

  • earn trust over time

That’s not optional “soft stuff.” It’s the delivery mechanism for decision quality.


The point isn’t that humans are always better

Jonesy isn’t anti-software. He uses it. He respects it. He just doesn’t outsource judgement to it.

That’s the posture organisations need with AI:

  • use it where it’s strong

  • expect blind spots

  • design processes where humans provide context, challenge, and projection

  • build storytelling capability so insights actually become decisions

Because in the moments that matter, the question isn’t “Is the AI smart?”

It’s: Do we have people who can turn outputs into a trustworthy, actionable story — and do leaders know who to trust when the model confidently runs home to mama?
