Faking it


This week, I did something I thought I’d never do: I deceived our users. Our visitors.

Our customers.

That’s pretty much a sacking offence, so maybe I should explain.

As part of the digital first team, we’re looking for ways to get user feedback. After all, how will we know if customers think our products and services are valuable unless we ask them?

We can’t always do that face to face. Is there another way?

Maybe, we deliberated, if there’s a way for users to tell us, quickly and effortlessly, what they think about a piece of content, they will. But what if we develop something, which will cost money, and they don’t use it?

Well, here’s what we did. We cannibalised some code from another website we run and decoupled it from the functions that run on the server (because they don’t exist on our server).

The code is a ratings widget: a very basic, Amazon-style five-star widget. You can click to add your own rating, and the average changes (because you rated it, unless you rated it exactly in line with the average).

Implementing this, or something like it, on our website will take quite a lot of time and effort. The rating data will need to be stored in a database, and that information will need to be retrieved from the database whenever a page loads.
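To make the effort concrete: at minimum, the real version needs a server endpoint to record each vote and another to serve the running average on every page load. A rough sketch, assuming an Express-style Node server; the routes and the in-memory stand-in for a database are illustrative, not our actual stack:

```typescript
// Sketch of what the "real" version needs on the server: persist each
// vote, and serve the running average whenever a page loads.
import express from "express";

const app = express();
const votes: number[] = []; // stands in for a real database table

// Store a rating when a user clicks a star (real code would validate 1 to 5)
app.post("/api/ratings/:stars", (req, res) => {
  votes.push(Number(req.params.stars));
  res.sendStatus(204);
});

// Every page load fetches the current average and vote count
app.get("/api/ratings", (_req, res) => {
  const average = votes.reduce((sum, v) => sum + v, 0) / Math.max(votes.length, 1);
  res.json({ average, count: votes.length });
});

app.listen(3000);
```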

We don’t want to spend money on something that won’t work. So, we – I – faked it.

Not entirely, I have to add. If you rate the article, the average rating goes up or down, depending on what you chose. It won’t change for the next visitor – the initial state is always the same – but it feels like you’ve changed things.
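If you’re curious, the client-side logic is roughly this shape. It’s a simplified sketch of the idea rather than the deployed code; the element IDs and the seeded ratings are my illustration:

```typescript
// A sketch of the fake ratings widget. The seeded state lives only in
// this script, so every fresh page load starts from the same average.
const SEED_RATINGS = [3, 3, 3]; // illustrative seed, not the real numbers
const ratings = [...SEED_RATINGS];

function averageOf(values: number[]): number {
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

function render(): void {
  // "average-rating" is an illustrative element ID
  const display = document.getElementById("average-rating");
  if (display) {
    display.textContent = averageOf(ratings).toFixed(1);
  }
}

// Clicking a star adds a rating in memory and re-renders the average;
// nothing is sent to, or stored on, a server.
for (let stars = 1; stars <= 5; stars++) {
  document.getElementById(`star-${stars}`)?.addEventListener("click", () => {
    ratings.push(stars);
    render();
  });
}

render();
```

Refresh the page and you’re back at the seeded average, which is exactly the point.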

And we are actually capturing data, in a roundabout way, through analytics. So I’ll manually calculate the average and change it, in line with how people actually vote, every day.
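The capture side is just an analytics event per click, plus a little arithmetic for the daily chore. A sketch, assuming a Google Analytics-style gtag() call is already on the page; the event name and the helper are my illustration:

```typescript
// Assumption: a Google Analytics-style gtag() is already loaded on the page.
declare function gtag(command: "event", eventName: string, params: Record<string, unknown>): void;

// Fire an analytics event per click; the widget itself stores nothing,
// but each vote still ends up in our analytics reports.
function recordRating(stars: number): void {
  gtag("event", "rate_article", { value: stars }); // event name is illustrative
}

// The daily chore: pull the vote counts out of an analytics report and
// work out the new average to paste back into the widget's seed.
function weightedAverage(tallies: Record<number, number>): number {
  let votes = 0;
  let total = 0;
  for (const [stars, count] of Object.entries(tallies)) {
    votes += count;
    total += Number(stars) * count;
  }
  return votes === 0 ? 0 : total / votes;
}

// e.g. five 5-star, two 4-star, four 3-star, one 2-star and three 1-star votes
console.log(weightedAverage({ 5: 5, 4: 2, 3: 4, 2: 1, 1: 3 }).toFixed(2)); // "3.33"
```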

It’ll take me a few minutes. And it’ll be a chore. But it’s OK, because if our fake rating works, it will be worth automating it completely.

[Screenshot of the fake ratings widget, headed “Rate this widget”]

A few days ago, I deployed some code that looks like this, as far as our users are concerned. (I’ll put the code on GitHub if anyone’s interested; it’s not that innovative.)

I deliberately seeded it with an average of 3 stars. There’s another hypothesis here we can test, namely that releasing with an average 5-star or 1-star rating will alter people’s voting behaviour, but we can get to that.

After 1 day, apart from some testing, we had no data on the web page I’d deployed it to. (I was being cautious: mistake #1. Lesson learned: be bold.)

So it appeared that no-one was using our rating widget. But our sample was very small – only 6 people had visited the page I’d deployed it to.

So I deployed it, nervously, to another, higher-traffic page.

Result.

The page I deployed it on has had 625 visits so far today. 15 of those visitors have interacted with the widget.

[Chart of the votes so far: five 5-star ratings, four 3-star, three 1-star, two 4-star, and one 2-star; a weighted average of about 3.3]
Yup, they do …

Fifteen doesn’t seem like many. But it’s 15/625 = 2.4% of all those visits.

Our website attracts around 45,000 visitors per month. If we deployed this across the site and that 2.4% rate held, we could expect around 1,080 feedback interactions every month.

This was great. That was OK. This was rubbish.

Wouldn’t it be good to know that, 1,080 times a month?

And isn’t it nice to know all of that for, effectively, nothing?

I'm a service designer in Scottish Enterprise's unsurprisingly-named service design team. I've been a content designer, editor, UX designer and giant haystacks developer on the web for (gulp) over 25 years.

3 Replies to “Faking it”

  1. Interesting stuff, David. Well said. It would be great to have that level of consistent feedback. Challenge will be how we use that to iterate and prioritise our content development. Interested to hear views on how best to tackle that systematically.
