Fidelity increases as uncertainty decreases


No, this is not a couples counselling site.

Others on my team and I have had a lot of discussion recently about what we're doing, why we're doing it, what we can test, and why we're testing it.

In an agile team, this is healthy. As long as the discussion remains respectful, talking is great, because we need to reach a consensus, agree a course of action, and commit to it. As a team.

We may not all be 100% convinced that what we're doing is the right thing to do. But we all agree to do our best to achieve it, even if (sometimes) our motivation is to demonstrate that it was the wrong thing.

But you can only demonstrate that by doing the wrong thing right (if you see what I mean).

I saw this on Twitter this evening.

Then I did this recalculation.

[Graph: a redrawn version of the chart, with uncertainty falling over time and fidelity rising, the two lines crossing shortly after the MVP]

The point is that I broadly agree with David Bland – as you learn more, you can develop more confidently and get ever closer to what the solution looks like.

Reducing uncertainty is broadly linear. You know what you don't know, so you can test your way out of that, one thing at a time. You might uncover things you didn't know you didn't know (© Donald Rumsfeld) along the way, but it's unusual for there to be many of them.

So you can just chip away at it and get to a point where uncertainty is minimal.

But my quibble is with the other side of the graph: how fidelity increases.

In my experience, this is not something that increases in a linear fashion. Instead, as you learn a little, you get a tiny bit better. But each improvement multiplies, rather than adds to, the result. By the time you get to build an MVP (around 5.5 in this example), the rate at which you close in on a good solution rises exponentially.

By the time these lines cross, you've reduced uncertainty by 75% or so. You know most of what you need to know to build the right thing, and build it right. That's why the fidelity line begins to head for the vertical at this point. And as you continue iterating on what you learned from the MVP, there is only one way it can go.
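To make that shape concrete, here's a minimal sketch in Python. The step count, starting values and growth factor are my own illustrative assumptions – they're not taken from David Bland's graph – but they reproduce the behaviour I'm describing: uncertainty falls linearly, fidelity compounds multiplicatively, and the lines cross once uncertainty has dropped by roughly three quarters.

```python
# A minimal sketch of the shape described above. The step count, starting
# values and growth factor are illustrative assumptions, not numbers taken
# from the original graph.

STEPS = 10           # learning iterations along the x-axis
GROWTH = 1.6         # each lesson multiplies fidelity rather than adding to it

uncertainty = 100.0  # start out knowing very little
fidelity = 1.0       # start with only a very rough idea of the solution
crossed = False

for step in range(1, STEPS + 1):
    uncertainty -= 100.0 / STEPS   # linear: test one known unknown at a time
    fidelity *= GROWTH             # multiplicative: improvements compound
    note = ""
    if not crossed and fidelity >= uncertainty:
        note = "  <- the lines cross around here"
        crossed = True
    print(f"step {step:2d}: uncertainty {uncertainty:5.1f}  fidelity {fidelity:6.1f}{note}")
```

Change the growth factor and the crossing point moves, but the overall shape – a slow start followed by a near-vertical climb – stays the same.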

At that point, the only question left to ask is: do we stop now? Is this good enough? If it is, we can choose to stop.

At the other end of the spectrum, if your hypothesis was wrong in the first place, you'll discover that pretty quickly too. Probably before the lines even cross. You'll learn enough to know it's the wrong thing, or that you're building it wrong, and change your approach.

Coming back to the point

I started this post talking about discussion (opinion).

We ended it with data (evidence).

And that's the point. We all have our opinions. We all have expertise, experience. We all have principles, and knowledge of things we've done previously that worked well.

What none of us have is a crystal ball. We don't know what the future holds. We can't predict, with absolute certainty, that if we change x, y will happen.

In an uncertain environment, all we can do is try, in a small way, and decide whether or not it's worthwhile to continue trying. Every day.

That can be one LinkedIn post or tweet at a time. It can cost almost nothing.

Conclusion

  • It's OK to disagree at the start
  • Even if you think it's wrong, do it right
  • Get data, and get it as fast as you can
  • Be prepared to be proven wrong …
  • … or right …

Pro tip

I labelled the 2 lines on that graph 'Uncertainty' and 'Fidelity'.

They could also be called 'Disagreement' and 'Consensus'.


I'm a service designer in Scottish Enterprise's unsurprisingly-named service design team. I've been a content designer, editor, UX designer and giant haystacks developer on the web for (gulp) over 25 years.
