Data Doesn't Create Insight. Thinking Does.

Most people believe that better analytics comes from better tools — more sophisticated software, cleaner dashboards, or more recently, artificial intelligence. These things matter. But after working with professionals across industries, from public service teams making sense of policy data to corporate groups analysing operational performance, I have come to one consistent conclusion.
The tools are rarely the reason analysis succeeds or fails. The thinking behind them is.
You see it in the way questions get framed too quickly, in assumptions that go unchallenged, and in conclusions that look convincing on the surface but don't quite hold up when examined closely. And because the work looks analytical — the charts are there, the numbers add up, the report is polished — it rarely gets questioned.
That is where the real problem hides.
The Busy Work of Analytics
In many organisations, analytics looks productive. Data is collected, dashboards are built, reports are presented with confidence. Yet when decisions need to be made, something feels off. Leaders sense it. Analysts feel it too, though they may struggle to name it.
I once worked with a team that had spent six weeks analysing customer data to understand why sales were declining. The work was thorough. The visualisations were excellent. But when we sat down to review the findings, a simple question surfaced: what did we actually mean by "declining"? Revenue or volume? Across all segments or just one? Compared to last quarter or last year?
The team had been answering a question they had never properly defined. Six weeks of rigorous analysis, built on a foundation that had never been examined.
This is not unusual. It is, in fact, the norm.

What Socrates Has to Do With Data
Long before analytics became a discipline, Socrates built his entire philosophy around one deceptively simple practice — asking better questions. Not more questions, but better ones. Questions that clarify meaning, surface assumptions, and open up alternatives that hadn't been considered.
When you look at the analytics process through this lens, something shifts. The six steps we typically teach — defining the problem, developing hypotheses, collecting data, testing, interpreting, and communicating — stop looking like a technical workflow. They start looking like a structured way of thinking.
And that changes how each step should be approached.

Step 1: Define the problem — or risk solving the wrong one
Most analytics begins with a statement that sounds clear. "Sales are dropping." "Customer engagement is declining." "Costs are too high." Pause and examine these carefully, though, and they turn out to be observations, not problems. Each can mean very different things depending on how it is read.
This is where one question does enormous work: what exactly do we mean by this?
Are we talking about revenue or volume? A short-term dip or a structural shift? One segment or the whole business? One region or everywhere?
Without this clarity, teams move forward carrying different assumptions. By the time the analysis is done, the results — however accurate — do not quite answer the question that mattered. Defining the problem precisely is not a formality. It is the foundation everything else depends on.
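To make this concrete, here is a minimal sketch in Python, assuming a hypothetical sales.csv with date, segment, units, and revenue columns (all names illustrative). The point is not the code itself, but that every choice the vague statement hides, which metric, which comparison window, which breakdown, has to be written down before a single chart is drawn.
```python
import pandas as pd

# Hypothetical data: one row per transaction, with illustrative columns.
sales = pd.read_csv("sales.csv", parse_dates=["date"])

# Force the vague statement "sales are dropping" into explicit choices.
metric = "revenue"       # revenue, or units?
freq = "Q"               # a quarterly dip, or an annual shift?
breakdown = ["segment"]  # the whole business, or one segment?

quarterly = (
    sales.groupby(breakdown + [pd.Grouper(key="date", freq=freq)])[metric]
         .sum()
         .unstack(breakdown)
)

# Change versus the same quarter a year earlier, per segment.
print(quarterly.pct_change(periods=4).round(3))
```
Nothing here is sophisticated. Its value is that the team's definition of "dropping" now exists somewhere it can be disagreed with.
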
Step 2: Develop hypotheses — but stay genuinely open
Once the problem is defined, explanations tend to come quickly. Pricing. Competition. Market conditions. Seasonality. There is nothing wrong with having early hunches — the risk lies in settling on one too soon.
Every hypothesis is, at its core, an assumption dressed in reasonable clothing. The role of good analysis is not to confirm what already seems likely, but to ask what else might be true.
What if the issue isn't pricing, but customer mix? What if the decline isn't widespread, but isolated to a single channel? What if the trend you are seeing is simply noise?
The quality of your analysis depends not just on the strength of your hypothesis, but on your willingness to hold it lightly — and to actively look for alternatives before committing to one.
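One inexpensive way to hold a hypothesis lightly is to check a rival explanation before settling. A sketch, reusing the same hypothetical sales data with an assumed channel column: is the decline broad-based, or concentrated in one channel? If only one channel moves, the pricing story gets much weaker.
```python
import pandas as pd

# Same hypothetical sales.csv as above; "channel" is an assumed column.
sales = pd.read_csv("sales.csv", parse_dates=["date"])

# Year-over-year revenue change per channel.
yoy_by_channel = (
    sales.assign(year=sales["date"].dt.year)
         .groupby(["channel", "year"])["revenue"]
         .sum()
         .unstack("year")
         .pct_change(axis=1)
)
print(yoy_by_channel.round(3))
```
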
Step 3: Collect data with purpose, not habit
Data collection is often treated as a mechanical step. Gather everything available, clean it, prepare it for analysis. But without direction, this quickly becomes overwhelming and, more importantly, misleading, because sheer volume of data can create a false sense of completeness.
A better question to ask is not what data is available, but what data is needed.
What evidence would support your hypothesis? What evidence would contradict it? Which variables are genuinely relevant to the question you have defined?
When data collection is driven by purpose, it becomes focused and efficient. When it is driven by habit, it produces complexity that obscures more than it reveals.
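A sketch of what purpose-driven collection can look like in practice: write the evidence plan down first, then let it decide which fields to pull. The hypotheses, field names, and file here are all illustrative.
```python
import pandas as pd

# Per hypothesis: what evidence would support it, and what would kill it?
evidence_plan = {
    "pricing drove the decline": {
        "needs": ["date", "price", "units"],
        "would_contradict": "volume fell even where prices were unchanged",
    },
    "decline is isolated to one channel": {
        "needs": ["date", "channel", "revenue"],
        "would_contradict": "every channel shows a similar drop",
    },
}

# Pull only the fields the plan actually calls for, nothing else.
needed = sorted({col for spec in evidence_plan.values() for col in spec["needs"]})
df = pd.read_csv("sales.csv", usecols=needed, parse_dates=["date"])
```
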
Step 4: Test the hypothesis — and expect to be wrong
Testing is where ideas meet reality. If the hypothesis holds, the data will show it. If it does not, the results will push back — and this is where many analysts become uncomfortable, because there is often an unspoken expectation to confirm rather than challenge.
But contradiction is not failure. It is information. It tells you that something in your original thinking needs revisiting — and that is exactly what good analysis is supposed to do.
In practice, this is where the most valuable insights often emerge — not when a hypothesis is confirmed, but when it is disproven in a way that opens up something unexpected.
Strong analysis is not designed to prove you right. It is designed to show you where you are wrong, early enough to adjust.
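Here is a minimal sketch of letting the data push back, using a permutation test on invented daily revenue figures (the numbers exist only for illustration). The question it asks is simple: if the split between "before" and "after" were arbitrary, how often would a drop this large appear by chance?
```python
import numpy as np

rng = np.random.default_rng(0)

# Invented daily revenue for the 90 days before and after the suspected
# decline; in practice these would come from the data collected above.
before = rng.normal(100, 15, size=90)
after = rng.normal(94, 15, size=90)

observed_drop = before.mean() - after.mean()

# Permutation test: relabel the days at random many times and see how
# often a drop at least this large appears by chance alone.
pooled = np.concatenate([before, after])
n_perm = 10_000
count = 0
for _ in range(n_perm):
    rng.shuffle(pooled)
    count += int(pooled[:90].mean() - pooled[90:].mean() >= observed_drop)

print(f"observed drop: {observed_drop:.2f}, p ~ {count / n_perm:.3f}")
```
A large p-value here does not prove the hypothesis wrong. It tells you the evidence is too weak to distinguish the decline from noise, which is exactly the kind of early warning worth having.
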
Step 5: Interpret results carefully — especially when they seem obvious
Once the analysis is complete, conclusions seem to follow naturally. Trends emerge, patterns form, and the story appears to tell itself.
This is precisely the moment to slow down.
What looks obvious may not be complete. A trend could be seasonal. A correlation could be coincidental. A conclusion could be shaped by assumptions that were never made explicit at the start.
I have seen teams confidently present insights that later fell apart under simple questioning — not because the analysis was flawed, but because the interpretation was rushed.
The most dangerous insights are not the ones that are clearly wrong. They are the ones that feel so right that no one thinks to question them.
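A crude but honest first control for one of these traps, seasonality, is to compare each month with the same month a year earlier rather than with the month before. A sketch, again on the hypothetical sales data:
```python
import pandas as pd

# Same hypothetical sales.csv as above.
sales = pd.read_csv("sales.csv", parse_dates=["date"])
monthly = sales.set_index("date").resample("M")["revenue"].sum()

comparison = pd.DataFrame({
    # Looks alarming whenever the business is seasonal.
    "vs previous month": monthly.pct_change(periods=1),
    # Controls for the seasonal pattern.
    "vs same month last year": monthly.pct_change(periods=12),
})
print(comparison.tail(6).round(3))
```
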
Step 6: Communicate to guide decisions — not just to present findings
The final step is where analytics either earns its place or quietly fades into the background. Most presentations focus on showing results — charts, summaries, numbers. But effective communication goes further.
It makes the reasoning visible. It shows how the conclusion was reached, what assumptions were made, where the analysis is strong and where it is uncertain. It acknowledges alternative interpretations and invites honest scrutiny.
Because the goal is not to impress. It is to give others enough clarity and confidence to act.
A finding that no one acts on is just a document.
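One way to make the reasoning visible is to let the finding travel with its assumptions, its uncertainties, and the alternatives that were considered, rather than as a bare conclusion. A sketch, with an illustrative structure and invented values:
```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    conclusion: str
    evidence: str
    assumptions: list[str] = field(default_factory=list)
    uncertainties: list[str] = field(default_factory=list)
    alternatives_considered: list[str] = field(default_factory=list)

# Invented values, for shape only.
finding = Finding(
    conclusion="The revenue decline is concentrated in the online channel",
    evidence="Year-over-year revenue by channel; permutation test",
    assumptions=["'decline' means year-over-year revenue, all segments"],
    uncertainties=["only two full years of channel-level history"],
    alternatives_considered=["pricing", "seasonality", "customer mix"],
)
```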

The shift worth making
Step back and look at the six steps together, and something becomes clear. This is not a technical process. It is a thinking process — one that determines how problems are understood, how questions are framed, and how decisions are ultimately made.
This also explains why two teams with access to the same data and the same tools can arrive at very different outcomes. The difference is not in their software. It is in the quality of their questions, the rigour of their assumptions, and their willingness to challenge their own conclusions.
Tools can process data. They cannot think about what the data means or why it matters. That part has always been, and will remain, a human responsibility. Socrates never built a dashboard. But the method holds.