Are customers using the features we built?
- Catherine Tang
- Mar 1, 2024
- 4 min read
Updated: Apr 5, 2024
More often than not, we build features that (almost) no one uses. But things don't have to stay that way.
In this post, we share a case study of how, step by step, we levelled up feature adoption (activation) and feature engagement (retention) with product auditing.
The Painful Truth
When we audited product usage, the product had more than 20 features. Each solved a unique pain point, but the usage numbers were astonishingly low: only 5 features were used by more than 1% of customers.

However, it was also a product with millions of daily active users. The excitement was real: with that much user data, there was so much interesting analysis we could do. Every week, we held a deep-dive data review, paying close attention to every up and down in our numbers and digesting the themes from the hundreds of Google Play reviews we received. We loved playing detective, figuring out why our numbers moved the way they did.
The hard work paid off with great outcomes. Our north star metric, product stickiness (DAU/MAU), climbed to top-tier industry levels, and DAU grew 4x within a year.
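As a quick aside, stickiness is simply the ratio of daily to monthly active users. Here's a minimal sketch of how it could be computed from an active-user log; the data layout and the trailing 30-day MAU window are our assumptions, not necessarily how the team measured it:

```python
import pandas as pd

# Toy active-user log; in practice this would come from your analytics store.
# Column names (user_id, date) are assumptions, not from the original product.
activity = pd.DataFrame({
    "user_id": [1, 1, 2, 3, 3, 4],
    "date": pd.to_datetime([
        "2024-03-01", "2024-03-02", "2024-03-02",
        "2024-03-02", "2024-03-20", "2024-02-15",
    ]),
})

target_day = pd.Timestamp("2024-03-02")
window_start = target_day - pd.Timedelta(days=29)  # trailing 30-day window for MAU

dau = activity.loc[activity["date"] == target_day, "user_id"].nunique()
mau = activity.loc[activity["date"].between(window_start, target_day), "user_id"].nunique()

print(f"DAU={dau}, MAU={mau}, stickiness={dau / mau:.2f}")
```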
Product Auditing: The Framework and Methods Unpacked
Looking back, what we did is now known as Product Feature Analysis and Engagement Optimization. For brevity, let's call it Product Auditing. There are three key phases of the work.
Phase 1: Narrow down which features to focus on
Since our goal was to increase product stickiness, we picked the features with natural daily or weekly use cases to focus on.
💡 Note: There are many ways to analyze features by their popularity and retention contribution. Our personal favorite is this method from Paul Levchuk, and tools like Mixpanel and Amplitude can run this kind of analysis; a simplified sketch of the idea follows.
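As a much simpler illustration of that idea (not Paul Levchuk's exact method), here's a small pandas sketch that scans a per-feature event log and reports each feature's popularity and a crude repeat-use rate. The column names and toy data are made up for the example:

```python
import pandas as pd

# Toy one-row-per-feature-use log; columns (user_id, feature, date) are assumptions.
events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3, 3],
    "feature": ["battery_saver", "battery_saver", "cleaner",
                "battery_saver", "cleaner", "cleaner", "cleaner"],
    "date": pd.to_datetime(["2024-03-01", "2024-03-12", "2024-03-02",
                            "2024-03-05", "2024-03-05", "2024-03-03", "2024-03-18"]),
})

total_users = events["user_id"].nunique()

rows = []
for feature, grp in events.groupby("feature"):
    # Popularity: share of all users who used this feature at least once.
    popularity = grp["user_id"].nunique() / total_users
    # Crude repeat-use proxy: share of the feature's users who used it
    # in two or more distinct calendar weeks (ignores year boundaries).
    weeks_per_user = (
        grp.assign(week=grp["date"].dt.isocalendar().week)
           .groupby("user_id")["week"]
           .nunique()
    )
    rows.append({
        "feature": feature,
        "popularity": popularity,
        "repeat_use_rate": (weeks_per_user >= 2).mean(),
    })

print(pd.DataFrame(rows).sort_values("popularity", ascending=False))
```

A scan like this gives a quick two-axis view: which features most users touch, and which ones they come back to.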
Phase 2: Deep-dive into the Activation and Retention of each feature
To give a concrete example, one of the features we picked was the "Battery Saver". It solved a pain point for users who couldn't easily charge their phones whenever and wherever they needed to. In some countries, people have to take their phones to a charging station because electricity at home isn't available 24/7.
Inspired by Dave McClure's AARRR framework (Acquisition, Activation, Retention, Referral, and Revenue), we focused mainly on two parts of the model: Activation and Retention.
Essentially, there are two questions to be answered:
Activation: do the users who potentially need feature A know about it, and have they tried it at least once?
Retention: do the users who have used feature A successfully at least once keep using it?
These questions then translate into the metrics we used to track our optimization progress:
Activation Rate = # [Users Who Used Feature A] / # [Users Who Potentially Need Feature A]
- Time frame: 1 day
- Definition of "Use": opened and finished the whole flow in the feature
- Definition of "Potential Need": phone battery dropped below 20% within that day
Retention Rate (7-Day Range Retention) = # [Users Who Used Feature A At Least Once on Days 1-7] / # [Users Who Used Feature A on Day 0]
- Time frame: Day 0, Days 1-7
- Definition of "Use": opened and finished the whole flow in the feature
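To make the definitions concrete, here's a minimal pandas sketch of both metrics for the Battery Saver example. The event names and columns are hypothetical, not the team's actual tracking schema:

```python
import pandas as pd

# Toy event log for the Battery Saver example. The columns and event_type
# values below are assumptions made for this sketch.
events = pd.DataFrame({
    "user_id":    [1, 1, 2, 2, 2, 3],
    "event_type": ["battery_below_20", "saver_flow_completed",
                   "battery_below_20", "saver_flow_completed",
                   "saver_flow_completed", "battery_below_20"],
    "timestamp":  pd.to_datetime(["2024-03-01 09:00", "2024-03-01 09:05",
                                  "2024-03-01 18:00", "2024-03-01 18:02",
                                  "2024-03-04 20:00", "2024-03-01 12:00"]),
})

day0 = pd.Timestamp("2024-03-01")
one_day = pd.Timedelta(days=1)

def users_with(event_type: str, start: pd.Timestamp, end: pd.Timestamp) -> set:
    """Users who logged `event_type` in the half-open window [start, end)."""
    mask = (
        (events["event_type"] == event_type)
        & (events["timestamp"] >= start)
        & (events["timestamp"] < end)
    )
    return set(events.loc[mask, "user_id"])

# Activation Rate (1-day window): of users whose battery dropped below 20% on Day 0,
# how many opened and completed the Battery Saver flow that same day?
needed = users_with("battery_below_20", day0, day0 + one_day)
used_day0 = users_with("saver_flow_completed", day0, day0 + one_day)
activation_rate = len(needed & used_day0) / len(needed)  # -> 2/3

# 7-Day Range Retention: of users who used the feature on Day 0,
# how many used it again at least once on Days 1-7?
returned = users_with("saver_flow_completed", day0 + one_day, day0 + 8 * one_day)
retention_rate = len(used_day0 & returned) / len(used_day0)  # -> 1/2

print(f"activation={activation_rate:.0%}, retention={retention_rate:.0%}")
```

One design choice worth noting: the activation numerator is restricted to users who also hit the low-battery trigger, so the rate stays at or below 100%.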
Phase 3: Use qualitative data to figure out how to optimize Activation and Retention
Now comes the critical point: putting the numbers into the context of real use-case scenarios. This is where qualitative data (i.e. user feedback and user research) plays a key part.
Quantitative data like activation and retention rates do not tell us why they are good or bad. We need qualitative data to complete the story and figure out the why.
A great example: in some Google Play reviews, users mentioned that their phones were draining battery because certain apps kept running in the background and sometimes lit up the screen with pop-ups.
Then the optimization ideas started coming up. Do the users know that the Battery Saver is designed to kill and limit background activities? Can we make it easier for them to use this feature when it's most needed?
We then designed an onboarding screen to educate users about how Battery Saver works, and prompted them with it the first time their battery dropped below 20%. For easier access, we also added a one-click battery save option for when the battery is below 10%.
After several iterations like this, the Activation Rate of the Battery Saver rose to a healthy level, and the Retention Rate also increased significantly.
To Sum Up
Feature Analysis is extremely helpful for narrowing down the focus. It gives an overview of how existing features are currently being used, and makes it straightforward to pick a few features to deep-dive into.
In the beginning, there's no need to overcomplicate usage metrics; Activation Rate and Retention Rate are great starting points.
When the activation rate and/or retention rate comes in lower than expected, leaning on qualitative user feedback or user research is a great way to find out why the feature is underperforming and how to optimize it.