In Part 1 we listed the metrics that would best serve email marketing and marketing automation providers. In Part 2, we will review the metrics that should be consulted in productivity SaaS applications.
We are longtime fans and users of Trello, so it seems only natural to take it as our productivity software test case. Productivity tools are typically used with high, regular frequency, matching email and word processors in how often we open them. Whether they are personal life-streamlining apps to “remember the milk” or shared lists for improving team productivity, we quickly find ourselves using them every day, all day.
What does this mean for the metrics?
- B2C – while many businesses use productivity software, it is not a B2B service: your customers’ customers do not benefit from it directly. That means we don’t get to use end-user analytics as part of our measuring system.
- High usage rates – most of the pressure on retention for productivity software is applied during the onboarding stage. Once users are properly onboarded, usage should settle into a daily, possibly even hourly, rhythm.
So the usage graph would look something like this:
So how would we go about measuring usage for hyper-used software?
For starters, measuring logins may not be as effective as it would be for, say, marketing software – Trello is almost always open in a browser tab. So instead of mapping login frequency, your finger should be on the activity pulse: simply scan for activity (ANY activity) several times a day.
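The activity-pulse idea above can be sketched in a few lines. This is a minimal illustration, assuming we have access to a raw event log of `(user_id, timestamp)` pairs; the scan schedule and window size are assumptions, not anything Trello actually does:

```python
from datetime import datetime, timedelta

SCAN_HOURS = [9, 13, 17, 21]   # example scan schedule (an assumption)
WINDOW = timedelta(hours=4)    # look-back covering the gap between scans

def active_users_on(day, events):
    """events: iterable of (user_id, timestamp) tuples.
    Returns the user_ids that produced ANY activity (a card move,
    a comment, an edit...) in at least one scan window on `day` --
    no login required."""
    active = set()
    for hour in SCAN_HOURS:
        scan_time = day.replace(hour=hour, minute=0, second=0, microsecond=0)
        for user_id, ts in events:
            if scan_time - WINDOW <= ts <= scan_time:
                active.add(user_id)
    return active
```

The point of scanning several times a day rather than once is granularity: for hyper-used software, a single daily check can't distinguish an all-day user from a one-minute visitor.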
Trello essentially works on 3 levels of task management:
- Boards – the broadest category, intended to divide tasks into arc-topics (e.g.: product dev., marketing, sales, etc.)
- Lists – mid-level categories, either assigned to the individuals in the team (e.g.: Eyal, Oded, Noa, Nizar, Bob, Sarah, etc.) or else for different degrees of urgency or implementation (e.g.: For Tomorrow, In Progress, Completed).
- Cards – single tasks that can be moved from one list to the other and can contain lists of items within them. These cards can do a whole bunch of other cool stuff that isn’t relevant to our case study, so we will regrettably refrain from expounding on it.
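The three levels above form a simple hierarchy, which can be sketched as a toy data model. The names here are illustrative only, not Trello's actual API objects:

```python
from dataclasses import dataclass, field

@dataclass
class Card:
    """A single task; may contain a checklist of items within it."""
    title: str
    checklist: list = field(default_factory=list)

@dataclass
class TaskList:
    """Mid-level category: a teammate's name or a stage like 'In Progress'."""
    name: str
    cards: list = field(default_factory=list)

@dataclass
class Board:
    """Broadest category, dividing tasks into arc-topics."""
    topic: str
    lists: list = field(default_factory=list)

def move_card(card, source, target):
    """Cards are single tasks that move from one list to another."""
    source.cards.remove(card)
    target.cards.append(card)
```

Because list names and card contents are free-form user data, nothing in this model tells you which list means "done" – which is exactly the measurement problem discussed next.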
The point of this overview is that Trello lists and items, like those of most other productivity software, are highly customizable, meaning it is nearly impossible to measure task completion and customer success through feature usage. While many users may stick to the default “To-Do” → “In Progress” → “Done” settings, many may not – and in any case, going through your customers’ content for metrics is a breach of privacy.
So, um, how do we set up the measurement system?
Like any sound metric scale, you need a standard. The standard is set by successes – in this case, longtime users whom Trello has retained for a considerable period past the onboarding stage. Trello has the advantage of being a veteran, well-known SaaS provider with a sizable user pool, so it can afford to sample year-old users. Younger SaaS applications may have to settle for less mature users. In any case, we suggest creating a Range of Reasonable Rates around the average: user behavior is more of a spectrum than a single value.
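One simple way to derive that Range of Reasonable Rates is the cohort average plus or minus a standard deviation. A hedged sketch, assuming we already have an average-daily-actions figure per mature user (the function name and the choice of one standard deviation are ours, not a standard formula):

```python
from statistics import mean, stdev

def reasonable_range(daily_action_counts, k=1.0):
    """daily_action_counts: average daily actions per mature user.
    Returns (low, high) bounds around the cohort average. Users
    drifting below `low` may be churn risks; users above `high`
    are candidates for power-user treatment."""
    avg = mean(daily_action_counts)
    spread = stdev(daily_action_counts)
    return max(0.0, avg - k * spread), avg + k * spread
```

Widening `k` makes the range more forgiving; a young SaaS with a small, noisy sample of mature users would likely want `k` larger than an established one.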
Next, we integrate our measurable parameters into the standardized scale we created. It should look something like this: