Our Responsibility as Analytics Providers

Delivering Transparency & Building Trust in Your Product

Bugs happen. It’s a reality of running a software product, and usually it’s not the end of the world. In fact, most bugs probably go unnoticed, and fixes are pushed up before anyone bats an eyelash.

In the case of a severe bug, however, fixing it quietly and sweeping the evidence under the rug is no longer acceptable. Transparency is more critical than ever, and it actually represents a wonderful opportunity.

At HookFeed, we consider any bug that affects the accuracy of our data to be severe. With more and more products baking in their own metrics and analytics, this process is increasingly relevant.

In an analytics product, EVERY bug deserves full transparency…or none of your metrics can be trusted.

So what’s the best way to handle a situation that calls the accuracy of your metrics into question and risks eroding your customers’ trust? Fortunately, we just went through this, and refined a process for turning a negative situation into an opportunity to strengthen trust.

Recently, one of our customers showed us a glaring bug in the calculation of their Churn Cohorts Table. It took us a couple of days to find the cause and research potential solutions.

This timeframe could have been shorter, and reducing it is something we’re working on for next time.

However, we’re very proud of what happened next, and we’ve changed our bug-handling process based on what we learned from this experience.


Once we knew the cause of the bug, we did two things:

1. Email Notice: Using Customer.io, we quickly built a list of customers who not only had access to our Churn Cohorts feature, but had also actually used it. We then sent them an email briefly explaining what was happening, when it was expected to be resolved, and what the rollout of the fix would look like.

The purpose of this email is simply to give them a heads-up that the metric cannot be trusted for the time being, and that they shouldn’t take it confidently into a meeting with investors.

2. In-App Notice: For any customers who didn’t receive the email, or who tried Churn Cohorts for the first time, we added an in-app notice explaining what was going on, what to expect, and when.

These updates took less than 30 minutes, included our customers in the conversation, and let us roll up our sleeves to focus on fixing the bug.
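The segmentation behind that email can be sketched in code. This is only an illustration of the idea; the actual filtering was done in Customer.io’s segment builder, and every field name here is hypothetical.

```python
# Hypothetical sketch of the notification segment: customers who both
# have access to the Churn Cohorts feature AND have actually used it.
# (In practice this was a Customer.io segment, not hand-written code.)
def customers_to_notify(customers, usage_events, feature="churn_cohorts"):
    """Return customers with access to `feature` who have also used it."""
    users_of_feature = {e["customer_id"] for e in usage_events
                        if e["feature"] == feature}
    return [c for c in customers
            if feature in c["features"] and c["id"] in users_of_feature]
```

A customer with access but no recorded usage is excluded, which keeps the email targeted at the people who actually saw the broken numbers.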


Fast-forward about 2 days, and we had deployed a fix that had been heavily tested on affected accounts.

At this point, many teams would shoot off another email to customers letting them know the bug had been fixed and that the metrics are now accurate. But this is pretty pointless.

Without explaining the cause of the miscalculation, and how it was fixed, customers won’t trust your team.

We wrote up a detailed description of what went wrong, what changed, and why it won’t be happening again.

We emailed this to the same people who received the heads-up email a couple of days earlier, and updated our documentation for the affected metric to reflect the changes.

The purpose of this description is not just to let them know it’s been fixed, but rather to help them understand why it was incorrect and why it can be trusted now.

At the end of the day, as an analytics provider, trust is the backbone of your business.

Without trust, the numbers you show to customers are completely worthless (assuming your customers have even stuck around).

So whenever the accuracy of your analytics comes into question, it’s critical not only to respond quickly, but to make sure your customers understand why there was a problem and how it was fixed.


We tried something entirely new when we pushed the final fix live…

We left the broken version of the metric on the page.

This wasn’t an easy decision. It roughly tripled the time and effort of releasing the fix, and it means we’ll have to revisit the bug in about a week to remove the broken version for good.

As with any complex metric, we aren’t talking about 1 extra line of code here…it’s more like hundreds of lines of code, out-of-date database attributes, and expensive indexes, all to support a metric that we know is broken.

“The following version of this metric includes the calculation bug that has since been fixed (working version can be seen above). You can use it to compare the discrepancy and update any internal reports you may have produced. This version will be removed from HookFeed on July 25.”
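Keeping both versions live amounts to a sunset flag around the old calculation. A minimal sketch, with all names hypothetical (July 25 comes from the in-app notice above; the year is an assumption):

```python
from datetime import date

# Date from the in-app notice; the year is assumed for illustration.
SUNSET = date(2015, 7, 25)

def metric_versions(fixed_values, broken_values, today):
    """Build the metric payload: the fixed values always, the known-broken
    values only until the sunset date, under a 'deprecated' key."""
    payload = {"current": fixed_values}
    if today < SUNSET:
        payload["deprecated"] = broken_values
    return payload
```

After the sunset date, the deprecated branch (and the extra code, attributes, and indexes behind it) can be deleted for good.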

Why all the extra work?

Because customers don’t just look at a metric and move on with their day. They rely on those numbers to drive important decisions that seriously impact their business.

In our case, they dive deep into our metrics, pull out specific KPIs, import them into pitch decks and Excel reports, take them to company meetings, and so on.

Say your Week 3 churn rate for the November 2014 cohort (with 29 customers) was formerly 2.38%. If we push a fix that changes that number to 3.17% across 27 customers, and we don’t include a comparison of the two, you’re completely lost.

Without that comparison, all you know is that the metrics you’ve been relying on from this product are flawed, with no way to fix it on your end, or even understand exactly what’s different or why.

That’s a terrible position to be in, and it’s why we decided to include both versions of our Churn Cohorts (both right and wrong) in the app for 7 days.
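The side-by-side view that makes this reconciliation possible can be sketched as a simple diff between the broken and fixed snapshots (illustrative names, using the figures from the example above):

```python
def diff_metric_snapshots(broken, fixed):
    """List every cohort cell whose value changed between the two versions."""
    changed = []
    for key in sorted(set(broken) | set(fixed)):
        if broken.get(key) != fixed.get(key):
            changed.append((key, broken.get(key), fixed.get(key)))
    return changed

# Week 3 of the November 2014 cohort, before and after the fix:
broken = {("2014-11", "week_3"): {"customers": 29, "churn_rate": 2.38}}
fixed = {("2014-11", "week_3"): {"customers": 27, "churn_rate": 3.17}}
```

Here `diff_metric_snapshots(broken, fixed)` yields one changed cell with both the old and new values, which is exactly what a customer needs in order to update any internal reports built on the old numbers.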


As the makers of analytics products, we all have customers relying on us to deliver truth.

The second that truth comes into question, trust has been lost, and you’ve lost a customer (and potentially harmed their business, which is far worse).

Trust needs to be at the core of your customer relationships.

By following this process, we were able to turn a bad bug into a positive situation that actually brought us closer to our customers, increased their trust, and boosted their happiness with HookFeed.

About the Author

Matt Goldman (Twitter)

Co-Founder of HookFeed. Currently focused on Product.

"What's HookFeed?" It's a software product that helps your whole team understand your customers on a deeper level based on their behavior and our research about them. Check it out  >