
10 product instrumentation mistakes and what we learned from them

In product management, instrumentation is the ability to measure and track key metrics, such as adoption or retention of a feature. It matters to the whole product team, and especially to the product manager, who can then make decisions based on data rather than feelings.

Here at Resultados Digitais we use Mixpanel, integrated with Segment.io, to make the data easily available to our product managers in a single place. But this post is not about Mixpanel: it generalizes to any other tool, because it focuses on the lessons we learned while instrumenting our product, the marketing automation platform RD Station.

Mistake 1: delegating the product instrumentation planning to developers

A product manager wouldn’t delegate to the development team the decision of which features to build, so it makes just as little sense to leave the planning of what to instrument for product management in their hands. The development team is not always in sync with the needs of the product manager.

 

Developers will be the ones implementing the instrumentation nine times out of ten, but they should always use the product manager’s plan as the starting point.

 

It is worth noting that product instrumentation should not be confused with engineering instrumentation; the latter is planned by the development team.

 

Pro tip: Mixpanel has a feature called Autotrack that lets you instrument certain interactions without needing a developer.

 

Mistake 2: not planning the instrumentation

If it is a mistake to leave the planning in the hands of the development team, it is just as much of a mistake not to plan at all, or to plan only in broad strokes, such as “I want to measure the retention of the feature” or “I want to know how many users landed on the feature”.

 

Handing over this sort of “planning” to the development team is a recipe for incomplete instrumentation, one that will ultimately not serve its purpose and will frustrate whoever tries to analyze the data. The instrumentation plan must be written down, so developers can implement it in the way that will best serve the product manager later.

 

What to plan:

  • The name of the events fired and their properties;
  • When, where and how each event is fired;
  • What kind and frequency of analysis will be used to make sense of the data.
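
To make this concrete, here is a minimal sketch of one entry of such a written plan, expressed as a TypeScript object; the event, its properties and the analysis are invented for illustration, not our real plan:

  interface EventPlan {
    name: string;         // event name, following a naming convention (see Mistake 4)
    properties: string[]; // the properties the event will carry
    trigger: string;      // when, where and how the event is fired
    analysis: string;     // what kind of analysis will be run, and how often
  }

  const publishEventPlan: EventPlan = {
    name: 'LandingPages: Editor Publish',
    properties: ['accountId', 'userId', 'templateName'],
    trigger: 'Fired client-side, once, when the user clicks Publish and the save succeeds',
    analysis: 'Weekly funnel: LandingPages: Editor Open -> LandingPages: Editor Publish',
  };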

 

Mistake 3: instrumenting all the things

Having loads of data might look desirable, but a product manager must ask himself “what conclusions can I draw by looking at this metric?” If the PM doesn’t know, it is probably not useful to instrument it.

 

The product manager might even think “there may come a day when I will need this, so I will instrument it now”. Besides not being very aligned with agile principles, what will probably happen is that the product manager will never look at the data or, worse, will need it in a slightly different format and won’t be able to use it anyway. It is also important to notice that instrumentation is code: it adds complexity and increases the cost of change in the software code base.

 

What should be instrumented:

 

Metrics will vary from product to product, but in general they are part of the Pirate Metrics (AARRR: Acquisition, Activation, Retention, Revenue, Referral) and of customer success metrics.

 

One example of such metrics came when we changed our product’s user interface. Users could switch between the old and the new interface, so we instrumented both activation and retention metrics:

  • Activation: the percentage of the total user base that had accessed the new interface at least once
  • Retention: of those users, the percentage who stayed in the new interface.
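
For illustration, this is roughly how such an event could be fired with Segment’s analytics.js; the event name and the newInterface property are hypothetical:

  // Assumes Segment's analytics.js snippet is already loaded on the page.
  declare const analytics: {
    track: (event: string, properties?: Record<string, unknown>) => void;
  };

  // Fired whenever the interface loads. Activation is the share of the user
  // base with at least one newInterface=true event; retention is the share
  // of those users whose later sessions still report newInterface=true.
  analytics.track('Dashboard: Interface View', {
    newInterface: true, // false when the user has switched back to the old UI
  });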

 

Mistake 4: not using conventions to name events

Once instrumentation becomes the norm, the number of events quickly escalates from 10 to 100 and beyond. Not only does this make it painful to find the event you are looking for, but soon even the product managers who created the events won’t know exactly what they measure: an event may seem to measure one thing while, because the name was not well defined, it actually measures something slightly different.

 

At Resultados Digitais we are experimenting with naming conventions, and today we are using this:

  • All names are in English
  • Names must follow CamelCase pattern
  • Events must have the following structure: Feature: ComponentOrObject Action

Examples of our use of this convention:

SocialMedia: PostEditor Add

This event is triggered when a post is added (action), using the post editor (component) of the Social Media feature of RD Station.

 

Dashboard: Home View

Triggered when a user loads the dashboard home page.

 

SuccessPlans: Plan Edit

The event is fired when a user edits a plan in the Success Plans feature.
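
These names are also easy to generate and enforce in code. A minimal sketch, assuming Segment’s analytics.js is loaded on the page; the helper and its simplistic CamelCase check are ours to imagine, not a library API:

  declare const analytics: {
    track: (event: string, properties?: Record<string, unknown>) => void;
  };

  // Builds and fires an event named "Feature: ComponentOrObject Action".
  function trackEvent(
    feature: string,
    component: string,
    action: string,
    properties: Record<string, unknown> = {}
  ): void {
    const camelCase = /^[A-Z][A-Za-z0-9]*$/;
    for (const part of [feature, component, action]) {
      if (!camelCase.test(part)) {
        throw new Error(`"${part}" does not follow the CamelCase convention`);
      }
    }
    analytics.track(`${feature}: ${component} ${action}`, properties);
  }

  trackEvent('SocialMedia', 'PostEditor', 'Add'); // "SocialMedia: PostEditor Add"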

 

Mistake 5: triggering events that don’t measure what you think they do

This was a recent lesson: when we launched the new RD Station, full of new features and with a brand new look, users were welcomed by a modal presenting the changes. The modal had 7 slides, and each of them could be closed or navigated forwards and backwards.

 

We wanted to measure whether users were seeing all the modal slides so that, if necessary, we could notify each user about an important message he hadn’t seen. So we implemented an event to be fired on each of the modal’s slides. This would not ensure the user read the content, but it would at least indicate he saw it.

 

The problem is that the modal also changed slides automatically every 7 seconds, and it was “infinite”, starting over when it reached the last slide. This means that if a user logged in and went for a coffee, he would pass through all the slides many times without really seeing any of them. The event didn’t measure what we wanted to know, so it was not that useful.
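
In hindsight, a sketch of an event that would have measured real attention: fire only on user-initiated slide changes, and only once per slide. The event name and the userInitiated flag are assumptions about how the modal could expose this:

  declare const analytics: {
    track: (event: string, properties?: Record<string, unknown>) => void;
  };

  const slidesSeen = new Set<number>();

  // Called by the modal on every slide change; the modal tells us whether the
  // change came from the user or from the 7-second auto-advance timer.
  function onSlideChange(slideIndex: number, userInitiated: boolean): void {
    if (!userInitiated || slidesSeen.has(slideIndex)) {
      return; // auto-advances and repeats say nothing about real attention
    }
    slidesSeen.add(slideIndex);
    analytics.track('Onboarding: WelcomeModal SlideView', { slideIndex });
  }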

 

Mistake 6: not calculating the potential number of events triggered

The previous lesson was not the only one our welcome modal taught us. Remember the user who went for a coffee after logging in? With only 500 users doing so (about 2% of our user base), we would have an event flood:

 

  • 500 events would be fired every 7 seconds (the rate of the automatic slide change)
  • That adds up to about 4,285 events per minute
  • And 257,100 events per hour
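
The projection is simple enough to script before shipping any recurring event; a quick sketch:

  // Rough projection of event volume for a recurring, timer-driven event.
  function projectedEventsPerHour(users: number, secondsBetweenFires: number): number {
    return Math.round(users * (3600 / secondsBetweenFires));
  }

  console.log(projectedEventsPerHour(500, 7)); // about 257,143 events per hour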

 

And this really happened: in 4 days our welcome modal event was triggered TWO MILLION TIMES. Considering our Mixpanel quota is 4 million events per month, that was a huge spike. Thankfully, Mixpanel support understood our mistake and gave us a one-month boost at no extra cost.

 

Mistake 7: forgetting to use (and plan) event properties

In Mixpanel, and in most product instrumentation software, it is possible to send a series of properties along with the event, both to better identify what the user did and to segment your data.

 

We send some default properties in all our events, like account ID and user ID. But it is possible to send any kind of information: which browser was used, which version of the feature the user was on, and so on.
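
For illustration, with Segment’s analytics.js all of this travels in a single track call; the event name and values below are made up:

  declare const analytics: {
    track: (event: string, properties?: Record<string, unknown>) => void;
  };

  analytics.track('EmailMarketing: Campaign Send', {
    accountId: 'acct_42',  // default property, sent with every event
    userId: 'user_1337',   // default property, sent with every event
    browser: 'Chrome 58',  // example extra property
    featureVersion: 'v2',  // example extra property: which version of the feature
  });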

 

But remember mistake number 3: don’t send a bunch of properties just because you can. If the data is not useful to your analysis and decisions, don’t send it as a property.

 

Mistake 8: creating too similar events

Similar events are a symptom that you are probably creating too many events. An example here at Resultados Digitais was the set of events triggered to track our Methodology content pages. When a user visited one page, it would fire 2 distinct events: one with the name of the page and the other with the time spent on the page:

 

  • Methodology: BlogContent View
  • Methodology: BlogContent ViewTime
  • Methodology: BlogCreation View
  • Methodology: BlogCreation ViewTime
  • Methodology: BlogPromo View
  • Methodology: BlogPromo ViewTime
  • Methodology: BlogVisitorsOfferPromo View
  • Methodology: BlogVisitorsOfferPromo ViewTime
  • Methodology: EmailMarketing View
  • Methodology: EmailMarketing ViewTime
  • … and 28 other events, for a total of 38.

 

All those 38 events could easily be replaced with just one event:

 

Methodology: Content View

 

With this event alone we wouldn’t know which page was visited or how long the visit lasted, but sending those 2 pieces of data as properties (shown in the previous mistake) solves it. All it took was adding 2 event properties: ContentName (e.g. BlogCreation) and ViewTime.
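
In code, the consolidation is a single track call; a sketch with Segment’s analytics.js, assuming ViewTime is measured in seconds:

  declare const analytics: {
    track: (event: string, properties?: Record<string, unknown>) => void;
  };

  // One event replaces all 38: the page name and time spent become properties.
  function trackMethodologyView(contentName: string, viewTimeSeconds: number): void {
    analytics.track('Methodology: Content View', {
      ContentName: contentName,  // e.g. 'BlogCreation'
      ViewTime: viewTimeSeconds, // time spent on the page
    });
  }

  trackMethodologyView('BlogCreation', 127);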

Mistake 9: not testing the planned analysis

After planning the instrumentation, it is very common to forget about it until it is time to measure and analyze. The product manager often discovers only then that the data he really needed can’t be extracted the way it was sent, and the instrumentation goes down the drain.

 

To avoid this, the product manager should test the intended analysis (e.g. cohort, funnel…) by simulating it with fake data, to assess whether the data can really be used the way the PM wants.
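
One way to do this is to script a batch of fake events into a test project and build the planned report on top of them. A sketch using Segment’s Node library; the write key, user IDs and the planName property are placeholders:

  import Analytics from 'analytics-node';

  // Point at a test project, never at production data.
  const analytics = new Analytics('TEST_PROJECT_WRITE_KEY');

  // Simulate 20 fake users editing plans, then build the planned funnel or
  // cohort analysis on this data before writing any production code.
  for (let i = 0; i < 20; i++) {
    analytics.track({
      userId: `fake-user-${i}`,
      event: 'SuccessPlans: Plan Edit',
      properties: { planName: `Plan ${i % 3}` },
    });
  }

  analytics.flush(); // ensure the fake events are sent before the script exits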

 

Mistake 10: not instrumenting the product

You wouldn’t fix a car without knowing what is wrong with it; in the same way, you shouldn’t introduce changes and “improvements” in the product without data to understand the correct path, what worked and what didn’t. Start slow by instrumenting a few key things, and turn instrumentation into a habit as the product evolves.
