Continuing the Labs tradition of sharing what’s been learnt along the way, in this final post from MindTech we reflect on our evaluation process, in the hope that it can help similar studies in the future (and so you don’t make the same mistakes we did!).
Analytics are powerful tools, but…
As we saw in the Analytics Game, site traffic analytics told us a lot about how people were using the Labs products. But if we were running this evaluation again, we’d do a few things differently.
If we had looked at the analytics first, we’d have realised a few important things.
Set your analytics up right
First, we’d have spotted that not all the products’ analytics packages were set up to capture the really useful information. For example, with Moodbug we could see how many people were using the app, and for how long, but we couldn’t find out what they did once they’d opened it, as the developers hadn’t activated that part of the analytics package. This is easy enough to fix, but it isn’t possible to capture the data retrospectively, so when we came to collect it towards the end of the project, this really limited what we could learn about how people were using Moodbug.
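To make that concrete, here’s a minimal sketch of the difference between session-level tracking (which Moodbug had) and event-level tracking (which it didn’t). The API, event names, and labels here are invented for illustration, not Moodbug’s actual code:

```typescript
// Hypothetical sketch of session-level vs event-level analytics.
// In a real app, trackEvent would forward to the analytics package
// (for example, a Google Analytics-style event call).

type AnalyticsEvent = { category: string; action: string; label?: string };

function trackEvent(event: AnalyticsEvent): void {
  console.log(`analytics: ${event.category}/${event.action}`, event.label ?? '');
}

// What Moodbug's setup could already tell us: the app was opened.
trackEvent({ category: 'session', action: 'app_open' });

// What it couldn't tell us, because event-level tracking wasn't switched on:
trackEvent({ category: 'mood', action: 'logged', label: 'anxious' });
trackEvent({ category: 'mood', action: 'shared' });
```

The point is that the second kind of call has to be wired in before launch: if those events were never recorded, no amount of later analysis can recover them.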
Identify emerging patterns early
Second, looking at the early analytics might have helped us design the surveys differently. Spotting emerging patterns early would have let us better cross-reference what people do (as shown by the analytics) with what they say (explored in the user surveys). In the event, much of the data from the surveys and analytics showed similar patterns of usage and engagement, which is really helpful, but with a bit more forward planning we could have made more of this.
Running the surveys
We knew analytics wouldn’t be able to tell us how the products helped users’ mental wellbeing, which is why we chose to ask users of two products, Doc Ready and In Hand, to tell us about their experience directly. Both product teams agreed we could embed links to the surveys in their apps. This was great, as it meant people using the products would only be one click away from the survey. As an added incentive, we offered participants entry into a prize draw to win 6 x £50 high street shopping vouchers.
We launched the surveys at the end of August, hoping for at least 100 complete replies to each one. Given there were an average of 2,500 Doc Ready user sessions and 1,000 In Hand downloads per month, we’d only need a response rate of a few per cent – that’d be easy, right? Well, not quite…
Different survey response patterns
The In Hand survey got off to a good start, with 20 responses in the first week and similar numbers in the weeks that followed, so we were on target. By the end of the eight weeks we had surpassed our target, receiving 131 completed surveys.
The Doc Ready survey was a different story. After three weeks we had only nine responses! So we got some help in. Ben from FutureGov drummed up more social media coverage and got all the project partners promoting the survey. Rupert and Harry from Neontribe made a banner to go across all the web pages, so that wherever you were in Doc Ready the invitation to complete the survey was waving at you.
Both of these helped enormously, and the drip, drip of responses turned into a firm trickle. Leaving the survey running as long as we could (11 weeks) meant we managed to get 56 responses. Though this was some way off the target, it was, we decided, OK.
Why different patterns?
Why did we see this different pattern across the two apps?
We think it’s probably because the two apps are used in quite different ways, and people have different relationships with them. In Hand is intended to be used often, and hopes to gather a community of regular users who develop a deep relationship with the app. This is backed up by analytics showing that 75% of sessions are by returning users. This kind of community of users is more likely to give feedback.
Doc Ready, however, has been designed as more of a single-use tool (described as ‘ephemeral’ and ‘disposable’ by team members). Once you’ve used it to prepare for your appointment, you may not need to use it again. This pattern of use was again reflected in the analytics: 84% of sessions were from new users. And if users don’t come back, they don’t see the invitation to take the survey. While we didn’t get the number of responses we would have liked from Doc Ready users, those 56 people still told us some fantastically useful stuff.
Understanding how people engage with a tool is really important when considering how to design the best way to get feedback. Next time, for a product like Doc Ready, we would consider adding other ways of understanding users’ experiences, like user groups.
Understanding impact on wellbeing
Mental wellbeing is a complex and subjective area that people describe in different ways. For the research to be high quality, we needed a standardised way of asking survey participants about the tools’ impact on their wellbeing.
We decided to use the Warwick-Edinburgh Mental Wellbeing Scale, a measuring tool developed and tested as a valid and reliable measure of mental wellbeing. Usually the scale is used to measure a person’s wellbeing at the point of survey; we used it a bit differently, taking the scale’s seven dimensions of wellbeing and asking users whether they thought using Doc Ready or In Hand had helped them in relation to each one.
When we showed these seven areas to young people involved with Innovation Labs, they suggested one rewording and three additional areas to ask about. This gave us a total of ten mental wellbeing areas and corresponding survey questions. Here’s what they looked like on the Doc Ready survey.
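As a side note for anyone planning a similar analysis, tallying this kind of per-area “did it help?” data is straightforward. Here’s a rough sketch; the area names, response format, and numbers are invented for illustration, not our actual survey data:

```typescript
// Hypothetical sketch: aggregating "did using the product help you with X?"
// answers per wellbeing area. All data below is made up.

const responses: Record<string, boolean>[] = [
  { 'think clearly': true, 'feel relaxed': false, 'have a positive outlook': true },
  { 'think clearly': true, 'feel relaxed': true, 'have a positive outlook': false },
  { 'think clearly': false, 'feel relaxed': true, 'have a positive outlook': true },
];

const areas = Object.keys(responses[0]);

for (const area of areas) {
  // Count how many respondents said the product helped in this area.
  const helped = responses.filter((r) => r[area]).length;
  const pct = Math.round((helped / responses.length) * 100);
  console.log(`${area}: ${helped}/${responses.length} (${pct}%) said it helped`);
}
```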
Looking in more detail at the response data we were able to identify a few patterns that give us added confidence in these results.
The tables below show us that In Hand users reported that the app helped them to have a positive outlook, feel less stressed, feel relaxed and think clearly.
* survey respondents who had used the app at least once
On the other hand, Doc Ready users reported that it helped them mainly in relation to thinking clearly, being ready to talk to someone else, and being more able to take control and make up their own mind about things.
This suggests that the products are supporting people’s mental wellbeing in different ways, and in the ways each one intended. It also gives us confidence that the research approach we used was useful and able to differentiate between the two products.
Thanks and cheerio…
To sum up then, we’ve had some successes with the evaluation – notably, the In Hand survey, and a way of assessing the effectiveness of the products that proved useful and workable within the scope of this evaluation. But we’ve also learnt things along the way that we’d take into account next time. Huge thanks to all the Innovation Labs and product teams for their support and guidance during this project, and especially to all the young people who took the time to fill in the surveys.
To access the full report, click here.