Using Estimates, Experiments, and Evidence to Understand your Customers

This is a guest post from Manuel Weiss, Co-Founder and Director of Marketing at Codeship. We are working to bring you valuable, repeatable lessons from the absolute best in the industry. Interested in writing for the HookFeed blog? Email us!

At Codeship we have a simple rule: Everything you think is only an assumption. You need to verify your assumptions by running experiments and delivering evidence.

We look at the outcomes of our experiments on a weekly basis and keep track of the data. We want to realize quickly whether things are working, and if not, we go a different route. The aim is to avoid running in circles and wasting time, while documenting your learnings so the whole team can benefit from them. This is why we’ve developed our very own scientific approach to Marketing.

This process ensures we don’t just work on things because we think they work but because they actually do.

When building a business, you want to make sure you use your time wisely – on things that actually move the needle.

The most important thing when running experiments is the ability to analyze them. Not long ago, this was something that marketers really struggled with. You would have a great idea and use many resources (manpower and money) to make it come to life. But how would you track your target audience’s actions? How would you know how far down the funnel they actually made it? Marketers had to rely on people’s good will to respond to letters or to answer questions truthfully on cold calls.

Thankfully, it’s 2014 and we have products like Google Analytics, KISSmetrics, Mixpanel, and Optimizely that enable us to follow a user’s behavior. It’s not about the products though; it’s what they allow us to see. We have the possibility to look at the actions of specific users at any given point in time.

These tools guide us in defining funnel stages, setting email automation trigger points, and bringing attention to problems in our product. In the end, they help us create a better product for our users. And from a budget perspective, they also help us Marketers justify our marketing spend.

Numbers are people and they deserve to be understood

The experiments that we run are not intended to immediately generate X% more revenue or bring in Y% more unique visitors. Their main purpose is to help us understand our customers, where they come from, what they struggle with, and what they love!

We’ve found that with the aforementioned process in place, analyzing our experiments, gaining learnings, and consequently making data-informed decisions gets a lot easier and is actually a lot of fun. As a result, we feel more confident than ever that we are headed in the right direction.

So keep this in mind: You have to make sure that you always analyze and gain learnings from your experiments.

The easiest way to learn is by recording estimations prior to launching new campaigns.

This forces you to think about the potential outcomes and makes you commit to a goal you sincerely believe you can achieve. Don’t be too hard on yourself in the beginning. Making precise estimations is not easy, but you will improve over time.

It is important to give yourself a threshold for how far off you allow yourself to be. If you are way off, you should dig deeper and find out what didn’t go as expected. Let’s look at an example:

Say you offer German-language cooking classes for making Hamburgers. Your designer produced beautiful banners, and you spend some of your budget on paid advertising. You expect a certain number of click-throughs and signups. After the experiment, you analyze your numbers and realize that you had the expected number of unique visitors, but a very bad conversion rate for signups – it is way outside the threshold you set.

So you start digging deeper: You quickly discover that the messaging in your banners was all wrong! English-speaking cooking aficionados and Germans use the same word for this food, “Hamburgers”, so your ads attracted plenty of English speakers. But your landing page is written entirely in German! You paid for all of those clicks, and most visitors literally didn’t understand a word you were saying.

You may not have discovered this atrocity had you not analyzed your estimation and set a realistic threshold for it.
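The “how far off am I allowed to be?” check is easy to make explicit. Here is a minimal sketch (the function name and the 25% tolerance are my own assumptions, not something Codeship prescribes) that flags an experiment for deeper analysis when the actual conversion rate lands outside the band around your pre-launch estimate:

```python
# Hypothetical sketch: flag experiments whose actual conversion rate
# deviates from the pre-launch estimate by more than a set tolerance.
def needs_deeper_look(estimated_rate, actual_rate, tolerance=0.25):
    """Return True if the actual rate is off by more than `tolerance`,
    measured as a fraction of the estimated rate."""
    deviation = abs(actual_rate - estimated_rate) / estimated_rate
    return deviation > tolerance

# Cooking-class example: you estimated a 5% signup rate but got 0.5%.
print(needs_deeper_look(0.05, 0.005))  # True -> dig deeper
```

Any spreadsheet can do the same with a single formula; the point is to decide the tolerance before the campaign launches, not after.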

At Codeship we have a “game” that keeps people motivated and makes sure they actually evaluate their actions before we run an experiment. Everybody on the Marketing team has to commit to their estimations, and when the experiment is finished, the person closest to reality receives a special gift from the rest of the team.

It’s critical that as you conduct experiments, you take note of your estimations, results, and learnings. This way, your teammates (and future you) can learn from them. Information like this is invaluable to new team members, as they are able to reconstruct winning formulas and take a historical look at previous activities.

For example, I have analyzed which countries send us the most organic signups. I look at these organic numbers frequently to recognize trends as early as possible. And, of course, I document them. Now, new team members immediately know why we target certain countries when doing paid activities.

It’s important that you track everything; the method doesn’t necessarily matter

I use spreadsheets (and share an abstracted copy of mine below) but you can conduct these experiments using whatever tools you’re most comfortable with.

Firstly, it is important to define the channels you’ll be working with. For example: Blog, Newsletter, Podcast Sponsorship, LinkedIn, …

Secondly, you need to make sure you have the most important stages of your funnel represented and tracked. If you are a SaaS product like Codeship you most likely will have something like Unique Visitors, Signups, Activated Users, and Paid Customers.

Thirdly, get the numbers for these funnel stages over a certain period of time. I like to look at them on a month-by-month basis because, with a lot of activities (especially paid ones), it’s easier to map expenses and outcomes together this way.

With the previously mentioned products, all it takes is the correct setup (ask your developers to help you) and some time to create a structure that might look something like this:

August – Twitter:
2,087 Unique Visitors, 120 Signups, 50 Activated Users, 4 Customers

September – Twitter:
2,140 Unique Visitors, 125 Signups, 52 Activated Users, 4 Customers

Now that you have these metrics, you can calculate Conversion Rates. 125 Signups / 2,140 Unique Visitors, for example, gives you a Conversion Rate of 5.84%. Do the same thing for the remaining stages of your funnel. Look at them across at least the last six months and for your most important channels. This is where your main learnings will come from.
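If you prefer code to spreadsheet formulas, the stage-to-stage calculation is a few lines. This sketch uses the September Twitter numbers from above (the structure is mine; your own funnel stages may differ):

```python
# September - Twitter funnel, top to bottom.
funnel = [
    ("Unique Visitors", 2140),
    ("Signups", 125),
    ("Activated Users", 52),
    ("Customers", 4),
]

# Conversion rate between each pair of adjacent funnel stages.
for (stage_a, count_a), (stage_b, count_b) in zip(funnel, funnel[1:]):
    rate = count_b / count_a * 100
    print(f"{stage_a} -> {stage_b}: {rate:.2f}%")
```

Running this prints the 5.84% Signup rate from above, plus the Signup-to-Activated and Activated-to-Customer rates – exactly the per-stage view you want per channel, per month.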

I set up a quick spreadsheet that shows channels in a month-by-month comparison and calculates Conversion Rates for you.

Isn’t it great? The numbers (people!) are telling you what to work on.

Do you have a great Conversion Rate from Unique Visitors to Signups via LinkedIn, but very few Unique Visitors in comparison to other channels? Put your efforts into getting more visitors from LinkedIn.

Is there a huge drop-off in Conversion Rate from Signups to Activated Users from people coming via Google? This could indicate that your onboarding process is creating problems. Do you see bad Conversion Rates from Unique Visitors to Signups? Hit up some A/B tests for your Landing Page!

Actionable Tasks

Now that you’re tracking your channels month-by-month, it’s easy to calculate their average performance. If we are defining a new experiment, we now have data to base our estimations upon.

Let’s say you want to increase the Signup rate for one of your Landing Pages.

How about this experiment: Start putting Twitter testimonials on the Landing Page to provide social proof. Our assumption is that this should help create trust. And more trust should lead to more Signups.

Previously this Landing Page got 1,000 Unique Visitors and 70 Signups per month, giving you a Conversion Rate of 7%. The number of people hitting your Landing Page won’t change much by running this experiment, but your Signup rate should increase. Looking at your data, you can base your estimation on an average of 7% and commit to a predicted rise in Conversion Rate of 2.5 percentage points – a 9.5% rate, giving you 95 Signups next month.
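Spelled out, the arithmetic behind that estimate looks like this (a sketch of the calculation only; the variable names are my own):

```python
# Estimate for the testimonial experiment described above.
unique_visitors = 1000
baseline_rate = 7.0    # current Conversion Rate in %: 70 / 1,000
predicted_lift = 2.5   # committed rise, in percentage points

predicted_rate = baseline_rate + predicted_lift        # 9.5%
predicted_signups = unique_visitors * predicted_rate / 100
print(round(predicted_signups))  # 95
```

Note that the lift is added in percentage points (7% + 2.5 = 9.5%), not as a relative 2.5% increase, which would only predict about 72 Signups.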

Did this experiment work? Did it increase your Conversion Rate to Signups? Document it for your team and define actionable tasks. In this case: From now on every Landing Page will show Twitter testimonials.

We are living in exciting times, in which you can visualize the depths of your funnels and track customers on their journey from first visit to loyal, long-term customer

It’s on you to use the tools available to your advantage and gain learnings from them.

Most importantly, you need a process in place. This article describes parts of how my team at Codeship works, but your team may approach it differently. Just make sure you don’t stop at making assumptions. Always try to get learnings from what you are doing.

Looking at numbers to help you make decisions is great, but make sure you don’t get caught in the trap of only executing. Always re-evaluate your channels and their performance.

Last but not least: Don’t blindly follow your numbers. There will be times when you have to trust your gut feeling.

About the Author

Manuel Weiss (Twitter)

Co-Founder / Director of Marketing at Codeship

"What's HookFeed?" It's a software product that helps your whole team understand your customers on a deeper level based on their behavior and our research about them. Check it out!