
Mid-November MVP testing need for analytics #14

Closed
tiffany-tang opened this issue Oct 5, 2018 · 10 comments
tiffany-tang commented Oct 5, 2018

Analytics/metrics we’ll want to measure:

  • How often do YP log in? e.g. every day, multiple times a day
    • This assumes they have to log in again every time they use the app? If that isn't the case, can we measure how many times they open it whilst logged in?
  • How often do YP use a coping strategy?
    • Which were most popular?
  • How often do YP use the emotions log?
  • What emotions are YP clicking on?
  • What reasons are they giving for their emotions?
  • How often do YP click 'I don't know'?
  • How do YP describe how they're feeling when they cannot identify the emotion?
  • How often do YP click 'other'?
  • What other emotions are YP feeling? [other]
  • How often do YP view the calendar?
  • How often do YP switch the calendar view and to what?
  • How often do YP share how they feel with their trusted adult?
  • How often do YP access the help screen?
  • How often do YP access the coping toolbox from the navigation?
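Each of the questions above maps to a discrete event the app could record. A minimal sketch of one possible event-naming scheme — all names here are illustrative assumptions, not what was actually implemented:

```javascript
// Illustrative mapping from the metrics above to trackable event names.
// None of these names come from the real codebase; they are a sketch only.
const EVENTS = {
  login: 'session_login',                 // how often do YP log in?
  copingStrategyUsed: 'coping_strategy_used',
  emotionLogged: 'emotion_logged',        // which emotion was clicked
  emotionReasonGiven: 'emotion_reason_given',
  dontKnowClicked: 'emotion_dont_know',
  otherClicked: 'emotion_other',
  calendarViewed: 'calendar_viewed',
  calendarViewSwitched: 'calendar_view_switched',
  sharedWithTrustedAdult: 'shared_with_trusted_adult',
  helpScreenViewed: 'help_screen_viewed',
  copingToolboxOpened: 'coping_toolbox_opened',
};

// Counting occurrences of each event in a log then answers the
// "how often?" questions directly.
function countEvents(log, eventName) {
  return log.filter((entry) => entry.event === eventName).length;
}
```
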

Why measure these:

  • To gather quantitative evidence that the product is liked/in use
  • To gather qualitative feedback on user journey (what users look to access when, and in what order), future content (e.g. the physical symptoms logic requirement, what other emotions should be added), and future dev requirements (e.g. are weekly/monthly calendars even needed, how should coping strategies be accessed in the future)

In addition to the above, we’ll be gathering qualitative feedback each day of the test, and will conduct interviews at the end of the week – but these won’t impact analytical requirements.

iteles commented Oct 16, 2018

@tiffany-tang If we can take one step back here, I have a couple of questions around analytics.

  • What does success look like to AfC for this app?
    • Note: What does success look like for AfC vs for the donors/funders? Are these congruent with each other?
  • What metrics can we measure to determine that success?
    • Is there anything we need to measure for the sustainability of the app going forward?

You have a number of primary content types:

And a few secondary content types (whether these are primary or secondary is totally up for debate!):

In essence, you want each page to be able to 'justify its existence'.

What does success look like for each? What metrics can you track for them?

We won't have enough time to build in a huge level of detail in these first couple of sprints, as we discussed previously. But we've found in the past that this thought exercise really helps everyone understand the project better: it sets things up for future analytics, informs user testing, and helps us see what we can do to get you useful and actionable insights from the high-level analytics we can provide.

@iteles iteles added the question Further information is requested label Oct 16, 2018
tiffany-tang commented:

@iteles Thanks for this. I'm going into a meeting this Friday at which the topic of success will definitely be discussed. Could I give you a more detailed response on Friday, rather than playing a guessing game here?

iteles commented Oct 17, 2018

@tiffany-tang Absolutely, I look forward to it!

@tiffany-tang tiffany-tang added discuss and removed priority-2 question Further information is requested labels Oct 19, 2018
@tiffany-tang tiffany-tang changed the title AfC admin and testing - analytics Mid-November MVP testing need for analytics Oct 19, 2018
tiffany-tang commented:

@iteles In response to justifying the pages: this will likely be analysed from the qualitative feedback of users participating in the testing. From those results, and from the justification for further development, we will then know which pages will be kept and which won't.

tiffany-tang commented:

@iteles @Cleop

A question on how we can download the data from you after testing in November. For now, we are looking at testing between 12th–21st Nov. As this period is after our agreed dev period, what will I need to do to retrieve the analytics data?

@tiffany-tang tiffany-tang added the question Further information is requested label Oct 19, 2018
iteles commented Oct 20, 2018

@tiffany-tang I've just noticed you updated the top comment here, thank you!

In our experience it would also be really useful (for AfC) if you could define success for each of the content types as well, so that you (and we!) know what you're aiming for.

We'll take a look through this and determine the best analytics tool for the job. Once we've determined what to put in place, we'll be sure to reserve some time to talk you through how to use it!

tiffany-tang commented:

@iteles
We had a discussion on success factors during our meeting last week, and the conclusion is that, since we don't yet know how we will approach the future of this app, it's pretty hard for us to define 'success' per se at this stage.

Right now, we are planning to use a combination of analytics and user feedback from worksheets and debrief sessions to give our higher-ups some initial data to prove that this project is worth pursuing.

We already have a meeting scheduled for 26th Nov, so I guess my next question is how quickly we can get the analytics downloaded from your end, or whether you can give me access so I can download them myself?

tiffany-tang commented:

@iteles A further question on analytics: when you mentioned you would talk me through how to use it, does that mean I can access the data myself?

Especially for the first couple of days of the final testing, we are very keen to get daily analytics reports to ensure the app is actually being used meaningfully, so that if we find it isn't, we can stop the testing immediately.

If I can access the data myself, that would be very convenient for getting the daily data and then the summary for the week. If not, I might need to request it from you. Happy to discuss this further.

iteles commented Oct 23, 2018

@tiffany-tang Sometimes the easiest way to define success is to think through the motivations for building the application in the first place and the reasons you picked each of these features to build. Then go from there in terms of seeing whether those reasons are justified!

We'll do what we can to give you access to the data directly.

@Cleop You're right, let's get Google Analytics in straight away. Could you please open the issues on this?
The next step will be to go through the list Tiffany has provided and create the list of what we'll need to find the time to track beyond what we get 'out of the box' (e.g. anything that requires click events, like sharing with a trusted adult).
Step 3 will be to determine what else we might need for now (e.g. Redash) to cover the remaining points.
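The custom click events mentioned above could be wired up roughly like this — a sketch assuming the standard `gtag.js` event API; the event, category, and label names are illustrative, not what was actually built:

```javascript
// Sketch of sending a custom click event via Google Analytics' gtag.js.
// gtag('event', name, params) is the standard API; the 'share_feelings'
// event and its category/label values are illustrative assumptions only.

// Build the payload separately so it can be inspected and unit-tested
// without a browser or a live Analytics property.
function buildShareEvent(withWhom) {
  return {
    name: 'share_feelings',
    params: { event_category: 'engagement', event_label: withWhom },
  };
}

function sendEvent(gtagFn, evt) {
  gtagFn('event', evt.name, evt.params);
}

// In the browser, the "share with trusted adult" button handler would call:
//   sendEvent(window.gtag, buildShareEvent('trusted_adult'));
```

Keeping the payload builder separate from the `gtag` call is just a testability convenience; the real implementation could equally call `gtag('event', …)` inline in each click handler.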

@iteles iteles assigned Cleop and unassigned tiffany-tang Oct 23, 2018
@Cleop Cleop removed their assignment Nov 5, 2018
Cleop commented Nov 5, 2018

The relevant issues were created and Google Analytics was implemented for the primary requirements. Then Redash was also added for more detailed analytics.

@Cleop Cleop removed the question Further information is requested label Nov 5, 2018