Estimating the impact of features on monetization (Part 1)

Being able to think and reason about the impact of features on monetization is a useful skill that can help you prioritize backlogs and optimize resources.

Previously, we touched upon data-driven product development in articles like How reasoning about LTV, ARPDAU, and CPIs can help your product development strategy (Part 1). Data-driven product development is something that comes naturally to me, due to my training as a computer scientist and my experience building and managing my Virtual Reality startup. There are many variables we have to take into account when choosing to develop one feature over another. Some of these variables are:

  • Development cost (how much the feature costs in terms of resources);
  • Technical risk (whether there are technical unknowns that might affect the development);
  • Product impact (how the feature relates to product goals and vision, and its potential impact on KPIs).

It’s hard to estimate the impact of a feature because a large number of variables affect its performance. However, that doesn’t mean we shouldn’t try to become better at managing and reasoning about uncertainty.

In the next section, we present an example of estimating the impact of a feature on monetization as a classic back-of-the-napkin calculation. This is a skill that comes naturally to some people (who tend to make good analysts), but it can also be taught and exercised, especially in university degrees such as Physics and Engineering.

Thought Exercise on Estimating the Impact on Monetization

Let’s suppose we have a mobile F2P game and we want to estimate the impact on monetization of a Welcome Pop-Up Offer we want to implement.

The Welcome Pop-Up is going to be presented on the first day of the player experience, which means that in terms of eyeballs it is the offer with the highest potential reach. Its impact on the game’s ARPDAU will depend on the following factors (a small formula sketch follows the list):

  • The proportion of daily new users (DNU) among daily active users (DAU)
  • The price of the offer
  • The conversion rate of the offer. When guesstimating, I usually assume something between 1% and 5% and, if needed, simulate different scenarios
  • The natural percentage of spenders in the game. As with the previous point, I would try to simulate various scenarios with values between 5% and 15%
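
Under the (loose) assumption that the offer mainly converts new players who fall into the spender segment, these factors simply multiply into a single estimate. Here is a minimal Python sketch of that relationship; the function name and parameters are illustrative:

    # Back-of-the-napkin estimate of a welcome offer's impact on ARPDAU.
    # Assumption (mirroring the worked example below): the offer converts
    # only new players who belong to the spender segment.
    def welcome_offer_arpdau_impact(dnu_share, spender_share, conversion_rate, price):
        """Estimated incremental revenue per daily active user."""
        return dnu_share * spender_share * conversion_rate * price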

Making it concrete: the offer is priced at $8 and shown on D0, right after the tutorial; DNU make up 20% of DAU; on average 12% of our DAU are spenders; and the conversion rate is 3% among the kind of player that ends up becoming a spender.

Multiplying these factors together (0.20 × 0.12 × 0.03 × $8), this welcome offer has an estimated impact of $0.00576 on ARPDAU on a rolling daily basis.

What if we increased the price to $10 and the conversion rate dropped to 2%? Then the impact would be 0.20 × 0.12 × 0.02 × $10 = $0.0048.
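
To make this kind of scenario comparison easy to repeat, here is a short Python sketch that reproduces both calculations and sweeps a few price and conversion-rate combinations; the candidate values in the grids are assumptions for illustration, not real data:

    # Reproduces the two scenarios above and sweeps a few price and
    # conversion-rate combinations to explore where the sweet spot might sit.
    DNU_SHARE = 0.20      # new players as a share of DAU
    SPENDER_SHARE = 0.12  # share of DAU that are spenders

    def arpdau_impact(price, conversion_rate):
        return DNU_SHARE * SPENDER_SHARE * conversion_rate * price

    print(arpdau_impact(8, 0.03))   # ~0.00576  (the $8 / 3% scenario)
    print(arpdau_impact(10, 0.02))  # ~0.0048   (the $10 / 2% scenario)

    for price in (5, 8, 10, 15):
        for conv in (0.01, 0.02, 0.03, 0.05):
            print(f"price=${price}, conversion={conv:.0%} -> ~${arpdau_impact(price, conv):.5f} ARPDAU")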

Conclusions and insights

In the previous exercise, we tried to estimate the impact of a Welcome Pop-Up Offer on the game’s ARPDAU.

We noticed that the overall impact wasn’t very noticeable: less than one cent of ARPDAU.

However, the impact of a welcome offer on the overall ARPDAU is not the most important thing to consider. This exercise can also help us reason about the relationship between price and conversion rate, and aim to find the sweet spot.

A/B testing (you can find a great explanatory article about it here: A/B Testing) is an important tool to help find these sweet spots and optimize the game. A/B testing the price can give us very important clues about the spending behavior associated with the first purchase. It might also be relevant to A/B test different packs of resources that might be interesting to the player. However, A/B testing answers “yes or no” types of questions, and there’s a lot of thought behind asking the right ones. Having deep product knowledge will guide you to quicker and better results and enable you to prioritize using ROI (return on investment) as a metric.
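
As a rough illustration of the “yes or no” framing, the sketch below runs a standard two-proportion z-test on the conversion rates of two hypothetical price variants of the welcome offer; the sample sizes and conversion counts are made up for the example:

    # Rough sketch: two-proportion z-test comparing conversion between two
    # price variants of the welcome offer (illustrative numbers, not real data).
    from math import erf, sqrt

    def two_proportion_z_test(conversions_a, n_a, conversions_b, n_b):
        """z statistic and two-sided p-value for the difference in
        conversion rates between variants A and B."""
        p_a, p_b = conversions_a / n_a, conversions_b / n_b
        pooled = (conversions_a + conversions_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    # Hypothetical results: variant A ($8 offer) converts 300 of 10,000
    # new players, variant B ($10 offer) converts 220 of 10,000.
    z, p = two_proportion_z_test(300, 10_000, 220, 10_000)
    print(f"z = {z:.2f}, two-sided p = {p:.4f}")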


This article illustrates the back-of-the-napkin calculation method applied to estimating the impact of a product feature. While it’s a useful technique, sometimes there are so many variables and so much uncertainty that it’s not practical to use. Sometimes having a broader lens is more useful, especially when we want to compare multiple features and prioritize a backlog. In the next part, we’re going to explore frameworks to compare features in terms of impact.
