The problem with the “outcomes-first” obsession in UX hiring


There’s a current trend in UX hiring: if your CV doesn’t have stats, percentage uplifts or bottom-line results, you’re told you’re doing it wrong. It’s not enough to show what you worked on; you need to show how it “moved the needle”.

Framing impact through outcomes sounds more compelling. It’s become almost a non-negotiable for SaaS and startup roles. But here’s the question: does it always make sense? And should we be applying the same thinking everywhere?

Recently my friend Danny posted on LinkedIn about this very topic. He argued that for most in-house roles, especially outside the startup bubble, the obsession with quantifiable outcomes is misplaced. He’s right, and it’s worth exploring why.


Why startups love outcome metrics

The startup and SaaS world thrives on pace, investor expectations and growth targets. Those environments want evidence they can see: uplift percentages, user growth numbers, hours saved, revenue increased.

It’s a culture shaped by venture capital and short runways where measurable impact is a key signal for investors and leadership. And to be fair, if you’re the founding designer at a startup or directly owning a product it’s reasonable to expect your CV to show direct, attributable results.

But here’s the rub: not every metric tells the full story. Without the how, “10% growth”, “£X million saved” or “Y% increase in conversion” are almost meaningless. A great metric without context can be as misleading as no metric at all.


A different lens in the public sector

In GDS-aligned teams and other public sector contexts we still care about outcomes, but we define them differently. The focus is on citizen-centred measures: increased accessibility compliance, improved experience based on research and data, reduced failure demand or reduced time on task for a service.

And crucially we put as much weight on how those outcomes were achieved as on the outcome itself. Did the person work in the open? Could they collaborate across disciplines in large teams? Could they adapt to shifting politics or policy priorities?

There’s often a perception that public sector work is slower or less demanding. The pace is different, and that’s another conversation entirely, but the complexity is just as high, if not higher. The constraints are often political, policy-driven or legacy-technology-related. Delivering within that reality requires a completely different set of muscles from a startup sprint.


The myth of clean attribution

In large organisations, whether government or corporate, individual impact is rarely a matter of simple cause and effect. Many of my own biggest results came not from tweaking a UI that suddenly spiked conversion, but from steering teams back to a shared vision, unblocking constraints, introducing delivery rhythm, shaping team design, running critiques and asking the uncomfortable questions.

Those things rarely make it into a neat metric, but they fundamentally shape whether a service succeeds.

This is why, when interviewing for in-house public sector roles, I’d rather hear a candidate walk me through a messy, ambiguous project where they had to influence outcomes without full control than see a perfectly isolated metric they “delivered”.


The danger of importing the wrong mindset

If we import the VC or startup outcomes obsession wholesale into contexts like the public sector, we risk overvaluing people who can market a metric over those who can actually navigate complex delivery.

And in public services or in any very large organisation, navigating complexity is the job.


When metrics matter

A well-chosen, honest metric absolutely has its place. But for it to mean anything, it needs framing.

If someone tells me they improved completion rates by X%, I want to hear:

  • The constraints they worked within
  • How they removed or reduced those constraints
  • Who they collaborated with
  • The evidence they gathered along the way
  • And, most importantly, what they personally did to enable that lift

That’s the difference between a number that’s replicable and one that was just good timing.


Finding the balance

The best recruiters, hiring managers and candidates understand this balance. Job descriptions, CVs, case studies, portfolios and interview processes should work together to show both the tangible outcomes and the human adaptive skills that made them possible in the first place.

If we only hire for numbers, we’ll get people who can talk about them, not necessarily people who can deliver them.


Final thought: Outcomes matter. Context matters more.