Job design in the age of AI: it’s not “jobs vs. machines” — it’s workflows vs. reality

A lot of the loudest commentary about AI and work starts with the same assumption: AI will replace jobs.
A more useful starting point is simpler (and less dramatic):
AI replaces (or reshapes) tasks, not whole jobs.
Most roles are bundles of tasks, stitched together into a workflow. When technology changes, the bundle changes. This is exactly why task-based research tends to find that “automation risk” is overstated when we treat occupations as all-or-nothing.
So the practical question for leaders isn’t “Which jobs will disappear?”
It’s: which tasks in our value chain should be done by which combination of humans and machines?
A practical way to think about tasks
One helpful lens is to classify work tasks into two broad categories:
1) Tasks with one right answer (mostly deterministic)
These are tasks where the output is objectively checkable: reconcile a ledger, validate a field, apply a policy rule, match records, calculate a metric.
This is where automation and traditional ML tend to shine: you can define the rules, the constraints, and the acceptance criteria.
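To make that concrete, here is a minimal sketch of a deterministic task expressed as code. The function name, fields, and tolerance are illustrative assumptions, not a real system; the point is that the acceptance criterion is explicit and testable.

```python
from decimal import Decimal

def reconcile(ledger_total: Decimal, bank_total: Decimal,
              tolerance: Decimal = Decimal("0.00")) -> bool:
    """Deterministic check: the acceptance criterion is explicit.

    Either the totals match within the agreed tolerance or they don't;
    there is exactly one right answer, so the result can be automated
    and audited without per-run human judgement.
    """
    return abs(ledger_total - bank_total) <= tolerance

# The outcome is objectively checkable:
assert reconcile(Decimal("1042.50"), Decimal("1042.50"))
assert not reconcile(Decimal("1042.50"), Decimal("1040.00"))
```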
2) Tasks with more than one acceptable answer (judgement-heavy)
These are tasks where context matters and the “best” answer depends on trade-offs: drafting, summarising, synthesising options, designing a message, exploring scenarios, customer responses, policy interpretation, coaching conversations.
This is where GenAI is especially useful — because it can generate drafts, options, and structured reasoning quickly. But it also means you need to be honest about what GenAI is doing: it’s producing plausible output, not guaranteed truth.
And this is where many organisations will trip:
Deploying the wrong kind of AI to the wrong kind of task can be disastrous. Not because AI is “bad”, but because the control model doesn’t match the work.

The uncomfortable truth: GenAI touches a lot of the day-to-day
If you look at modern work through a task lens, a surprisingly large share of what knowledge workers do is language, judgement, and coordination: documentation, stakeholder alignment, drafting, review, supervision, decision support.
That’s consistent with major research pointing out that generative AI increases automation potential specifically for knowledge work and language-heavy activities — and that a meaningful portion of work activities require at least moderate natural-language understanding.
So yes: GenAI can “do” a lot of tasks in category (2). But how you deploy it is where strategy lives.
The fork in the road: autonomy at scale vs. human-in-the-loop
Once you accept that GenAI can contribute to judgement-heavy tasks, organisations face a real design choice:
Option A: Deploy AI autonomously at scale
If the task has clear evaluation criteria and feedback loops (and the organisation can monitor outcomes), autonomous deployment can be powerful. It may even outperform humans in consistency.
But the stakes rise fast:
A small error rate, scaled across thousands of decisions, becomes a systemic issue (the quick arithmetic after this list makes the point).
Edge cases aren’t rare at scale — they’re inevitable.
If the failure mode is “confidently wrong”, reputational and operational costs can be high.
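The scale effect is easy to put numbers on. A back-of-the-envelope calculation, with volumes and an error rate invented purely for illustration:

```python
# Hypothetical numbers, purely for illustration.
decisions_per_day = 10_000
error_rate = 0.005                 # 0.5%: excellent by most human benchmarks

errors_per_day = decisions_per_day * error_rate
errors_per_year = errors_per_day * 250      # roughly 250 working days

print(f"{errors_per_day:.0f} bad decisions per day")     # 50
print(f"{errors_per_year:,.0f} bad decisions per year")  # 12,500
```

An error rate that would be a rounding error for a single human reviewer becomes thousands of concrete failures once it runs unattended at volume.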
Option B: Deploy GenAI with humans in the loop
This is slower than full autonomy — but usually faster than “humans only.”
The point isn’t to “double work.” It’s to change the work, as the sketch after this list shows:
AI drafts; humans judge.
AI proposes options; humans decide trade-offs.
AI summarises; humans validate and contextualise.
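A minimal sketch of that division of labour, in Python. The `generate_draft` function here is a hypothetical stand-in for whatever GenAI service you actually call; the part that matters is the explicit human sign-off gate.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    text: str
    approved: bool = False

def generate_draft(prompt: str) -> Draft:
    # Hypothetical stand-in for a real GenAI call. Treat the result
    # as a plausible draft, never as a finished decision.
    return Draft(text=f"[model output for: {prompt}]")

def human_review(draft: Draft, edits: Optional[str] = None) -> Draft:
    """The human owns the judgement: edit if needed, then approve."""
    if edits is not None:
        draft.text = edits
    draft.approved = True
    return draft

def publish(draft: Draft) -> None:
    # Nothing leaves the workflow without explicit human sign-off.
    if not draft.approved:
        raise ValueError("unreviewed drafts cannot be published")
    print(draft.text)

publish(human_review(generate_draft("customer apology email")))
```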
This is also why a lot of serious commentary is moving away from extremes (utopia vs. doom) and toward the harder topic: what it takes to deploy AI in ways that workers and organisations can actually live with.
Workflow redesign is the real work (and it’s where most organisations stall)
Here’s the part that doesn’t fit nicely into a viral headline:
You don’t get AI value by adding a tool. You get AI value by redesigning the workflow.
That redesign requires organisations to do four unglamorous things well (a sketch of what this can look like in practice follows the list):
Deconstruct jobs into tasks (and map task dependencies)
Decide what “good” looks like for each task (quality, speed, risk, compliance)
Match AI type to task type (deterministic vs. judgement-heavy)
Re-bundle the remaining tasks into new roles (with clear accountability)
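One way to make this concrete is a simple task inventory that captures steps 1–3 and forces the accountability question in step 4. Everything below (task names, categories, criteria, roles) is a hypothetical illustration, not a prescribed schema.

```python
from dataclasses import dataclass
from enum import Enum

class TaskType(Enum):
    DETERMINISTIC = "one right answer"
    JUDGEMENT = "multiple acceptable answers"

class Executor(Enum):
    AUTOMATION = "rules / traditional ML"
    GENAI_HITL = "GenAI draft + human review"
    HUMAN = "human-led"

@dataclass
class Task:
    name: str
    task_type: TaskType
    acceptance_criteria: str   # what "good" looks like for this task
    executor: Executor         # who (or what) does the work
    accountable_role: str      # a named human stays accountable

# Hypothetical slice of a claims-handling workflow:
inventory = [
    Task("validate policy number", TaskType.DETERMINISTIC,
         "matches the policy registry exactly", Executor.AUTOMATION, "ops lead"),
    Task("draft customer response", TaskType.JUDGEMENT,
         "accurate, on-tone, compliant", Executor.GENAI_HITL, "case handler"),
    Task("approve goodwill payment", TaskType.JUDGEMENT,
         "within delegated authority", Executor.HUMAN, "team manager"),
]

for t in inventory:
    print(f"{t.name}: {t.executor.value} (accountable: {t.accountable_role})")
```

Re-bundling (step 4) then becomes a question you can actually inspect: which tasks remain human-led or human-accountable, and do they still add up to a coherent role?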
And this is where the hardest question emerges:
After AI takes a slice of tasks across the workflow, is there still a coherent job left for a human — and what is that human accountable for?
Sometimes the answer is yes: the human role shifts toward judgement, stakeholder management, exception handling, and decision ownership. Sometimes the answer is no: the role fragments, and organisations need fewer people doing that bundle.
But notice what’s happening: this isn’t “AI replacing jobs.” This is job design catching up to a new production function.
A calmer analogy: the PC era, not the extinction era
At FYT, we’re sceptical of narratives that assume AI “takes over all jobs.”
A more grounded analogy is the age of the PC and the spreadsheet:
It didn’t eliminate accounting.
It changed what accountants did.
It raised expectations for speed, clarity, and analytical ability.
It rewarded people who learned how to work with the tool — and punished organisations that didn’t redesign processes.
GenAI is likely to follow a similar pattern: not “no humans,” but more productive humans — if we do the redesign work honestly.
And if we don’t? We’ll see a lot of expensive pilots, noisy adoption, and disappointing outcomes: not because AI is weak, but because the workflow stayed the same.

We're curious where you land on this:
Do you think organisations should push harder toward autonomy at scale, or keep humans in the loop for most GenAI use cases?
What tasks in your organisation feel most “AI-ready” — and which ones should remain firmly human-led?
If you’d like to explore this in a structured way, FYT can help teams map work into tasks, identify AI-ready opportunities (and the right type of AI), and redesign workflows so adoption actually translates into outcomes — not just tool usage.
Drop me a note or DM — happy to compare notes and share how we approach job + workflow redesign in the age of AI.