#35 What are Actionable Insights and How to Generate Them
I work in analytics on a digital marketing team. My company sells B2B software products. Our team does analysis and reporting on which marketing programs are working and which are not. “Working” can be measured in different ways, such as acquiring new customers or driving revenue growth from existing customers.
Actionable insights are what I strive for in my analysis: not just sophisticated models, but analytics that are insightful and call marketers to action.
What are actionable insights, and how do we generate them?
I am by no means an expert, but here is what I have learned so far. I will explain with a redacted example.
The analysis: I developed a decision tree model that surfaces the combinations of marketing programs that lead to good results.
The insight: Customers who engage with three particular marketing programs are 7 times more likely to open an opportunity, i.e. they are interested in buying. For ease of explanation, let’s call them Apple E-book, Banana Webinar, and Carrot Trial.
Action 1: Engage more customers with these three programs.
Action 2: Cross-reference these programs to each other, encouraging people who engaged with one of them to engage with the other two.
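To make the “7 times more likely” figure concrete: once the decision tree surfaces a combination of programs, the lift reduces to a ratio of conditional open rates. Here is a minimal sketch on simulated data; the 10%, 35%, and 5% rates are invented for illustration, not real numbers.

```python
import random

random.seed(42)

customers = []
for _ in range(10_000):
    # Hypothetical: 10% of customers engage with all three programs
    engaged_all_three = random.random() < 0.10
    # Simulated open rates: 35% for fully engaged customers, 5% otherwise
    opened = random.random() < (0.35 if engaged_all_three else 0.05)
    customers.append((engaged_all_three, opened))

def open_rate(rows):
    return sum(opened for _, opened in rows) / len(rows)

engaged = [c for c in customers if c[0]]
others = [c for c in customers if not c[0]]
lift = open_rate(engaged) / open_rate(others)
print(f"lift: {lift:.1f}x")  # roughly 7x with these simulated rates
```

In a real pipeline the engagement flags would come from marketing activity data, and the tree would pick the split; the lift calculation itself is this simple ratio.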
I was very excited to share the “Actionable Insights” with marketers.
However, the responses were not what I expected.
Response 1: Apple E-book and Banana Webinar are very technical and quite hard to find — we don’t actively promote them on social media. So it’s not surprising that customers who engage with these programs have a higher probability of opening opportunities with us.
Response 2: Carrot Trial always shows up as high-converting. Yeah, sure, the customer is doing a trial. That action itself tells us the customer is interested.
When I asked the marketers whether we should take actions 1 and 2,
they went: “I am not sure…”
Let’s analyze this situation:
Why is the marketer not sure? Because this insight might be the result of selection bias: customers who engage with Apple E-book, Banana Webinar, and Carrot Trial are likely already very interested and determined before they engage. Interested and determined customers are more likely to open an opportunity. Thus, if we try to engage a wider range of customers who are not as interested and determined, they may not be as likely to open an opportunity.
What’s needed to make this insight actionable? Prove or disprove that engaging with these three programs causes the high probability of opening opportunities.
In other words, prove or disprove that when we engage other customers with these three programs, they are as likely (if not 7 times more likely, then at least more likely than average) to open an opportunity with us.
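The marketers’ worry can be demonstrated with a small simulation. Below, a hidden “intent” variable drives both program engagement and opportunity-opening, while engagement itself has no causal effect at all. All rates are invented. The observed lift still looks impressive, yet pushing everyone to engage changes nothing:

```python
import random

random.seed(7)
N = 20_000

def simulate(force_engagement=False):
    rows = []
    for _ in range(N):
        intent = random.random() < 0.15                        # hidden confounder
        engaged = force_engagement or (random.random() < (0.6 if intent else 0.05))
        opened = random.random() < (0.30 if intent else 0.03)  # depends ONLY on intent
        rows.append((engaged, opened))
    return rows

def rate(rows, engaged_flag):
    sel = [opened for engaged, opened in rows if engaged == engaged_flag]
    return sum(sel) / len(sel)

obs = simulate()
observed_lift = rate(obs, True) / rate(obs, False)
print("observed lift:", observed_lift)        # looks like engagement works

forced = simulate(force_engagement=True)
forced_rate = rate(forced, True)
print("open rate if we push everyone to engage:", forced_rate)
# ...which is just the population base rate: engagement itself did nothing
```

This is exactly the gap between correlation (the observed lift) and causation (what happens when we intervene) that the rest of this post is about.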
As a Data Scientist in B2B marketing, the more analytics I have done for the business, the more I have realized that the key to success is not just tuning parameters in ML models.
“The goal of measurement is not precision. It is reducing ambiguity to make a decision.”
In order to be a great data scientist for business, we need knowledge in various domains: business, economics, machine learning, statistics.
The machine learners have taught us how to automate and scale; the economists bring tools for causal and structural modeling; and the statisticians make sure that everyone remembers to keep track of uncertainty.
Matt Taddy, Business Data Science
In order to generate actionable insights, I have to be able to infer causality: doing A causes B. In other words, if we do more of A in the future, we will get more of B.
Causal problems are much more difficult than correlation problems. Indeed, this area only saw major advances in the last 30 years.
The lack of mathematical models of causality is the reason why we have made little advancement, despite the success of machine learning. The reason we don’t have mathematical models is that causality is so natural to human beings that we don’t see the necessity of building them. You know that flipping this switch will turn on the light. But it was not until we started to develop AI, until we wanted to teach machines to be smart, that we realized we barely know how to program causality.
A paraphrasing of Judea Pearl’s quote in The Book of Why
There are some great books about causal inference that are worth reading.
There are many ways to infer causality, some more advanced than others, but in this blog I will describe one of the most common.
The most straightforward way to infer causality is running randomized experiments. A common application is in clinical trials of new medicines. A carefully designed randomized experiment with treatment and control groups can infer causality.
But what if we can’t run a randomized experiment? In B2B business, it is challenging to design such experiments. Let’s say we are trying to sell cloud analytics products. The first challenge is that we need enough information about our customers to form treatment and control groups that are similar enough. In addition to the size of the company and the industry (which are easy to know), the two groups need to have similar readiness to buy our product (they are already using cloud or considering it), similar price sensitivity, similar numbers of contactable leads, and several other known and unknown factors.
Even if we successfully collect the information, form the treatment and control groups, and run the experiment, the second challenge is that the influence of marketing on the customer is relatively small. Let’s say, on average, the marketing programs from our team account for 15% of a customer’s decision-making process. It is very likely that the experiment will fail to provide statistically significant evidence that engaging with Apple E-book, Banana Webinar, and Carrot Trial causes customers to be more likely to open opportunities with us.
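A back-of-envelope power calculation shows why a small marketing effect is so hard to detect. This sketch uses the standard normal-approximation sample-size formula for a two-proportion test; the baseline rates and uplifts below are hypothetical numbers I chose for illustration.

```python
import math

def required_n_per_group(p_control, p_treatment, alpha=0.05, power=0.8):
    """Approximate sample size per group for a two-proportion z-test
    (normal approximation, two-sided alpha=5%, 80% power)."""
    z_alpha = 1.96   # critical value for two-sided 5%
    z_beta = 0.84    # critical value for 80% power
    p_bar = (p_control + p_treatment) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p_control * (1 - p_control)
                                + p_treatment * (1 - p_treatment))) ** 2
    return math.ceil(num / (p_control - p_treatment) ** 2)

# Distal outcome: 5% baseline opportunity rate, marketing nudges it to 5.75%
n_distal = required_n_per_group(0.05, 0.0575)
print(n_distal)      # on the order of 14,000 accounts per group

# Proximal outcome: 10% baseline trial rate, a tactic lifts it to 13%
n_proximal = required_n_per_group(0.10, 0.13)
print(n_proximal)    # under 2,000 per group
```

With a subtle effect on a distal outcome, the required sample can easily exceed the number of accounts a B2B team can reach, which is exactly why the next section moves to an outcome we influence more directly.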
Sad, right? Now what do we do?
One way to tackle this is to choose a more proximal outcome that we have more influence over.
Let’s say 1) we have proven that doing Carrot Trial has a causal effect on customers opening opportunities with us; and 2) 50% of customers who engage with Carrot Trial are influenced by our team’s marketing.
Now the problem has changed. Instead of finding a marketing program combination that correlates with a high likelihood of opening an opportunity, the problem becomes finding out which tactics are most effective at driving people to do Carrot Trial.
The benefits of choosing a more proximal (closer-to-action) target are:
- The result (the customer does Carrot Trial) is more explainable by our action (engaging the customer with our marketing programs). This means we are more likely to get statistically significant results.
- It is easier to form treatment and control groups. Why? Because the problem has changed. The only factors we need to control for are those that significantly influence the customer’s likelihood to do a trial, such as whether they are already a customer of ours (assuming existing customers are much more likely to buy more).
The controls required in the previous problem are not required anymore: doing a free trial does not require a company to be infrastructurally ready (at least not as strictly); price sensitivity doesn’t apply because the trial is free; and it takes no sales interactions to start a free trial.
What actionable insights have we generated from this example?
- Measuring a marketing program’s effectiveness by the opportunities it generates is too far a reach. It may include selection bias, so taking the same action (engaging customers who are not as interested and determined with the same programs) will not generate the same result (a high probability of opening opportunities).
- Marketing programs’ effectiveness should be measured against more within-reach outcomes, such as Carrot Trial.
- We should run A/B tests on randomized treatment and control groups to find out what works best at generating engagement with Carrot Trial.
A/B test ideas:
What format to use when positioning the trial at the end of an E-book:
- Put “Trial” as a button
- As a link
- As a pop-up window
What language to use to encourage people to do the trial:
- Try Carrot Analytics for Free for 30 days!
- Try Carrot Analytics for Free for 30 days! (no credit card info required)
- Here is a one-minute video about Carrot Analytics: what you can do and why it is valuable to your business. Try for free for 30 days. Call us anytime.
- Thinking about ROI: is Carrot Analytics worth it? Here is what our customers say: a one-minute illustration of the “R” and the “I”. Try for free for 30 days.
Which E-book has the highest conversion rate to Carrot Trial:
- E-book 1
- E-book 2
- E-book 3
Note: these E-books target similar customers and use the same tactics.
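Once one of these A/B tests has run, a two-proportion z-test can tell us whether a variant’s conversion rate is genuinely higher or just noise. A minimal sketch, with invented counts (the variant names and numbers are hypothetical):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: variant A = "Trial" as a button, variant B = pop-up window
z = two_proportion_z(conv_a=120, n_a=2000, conv_b=170, n_b=2000)
print(f"z = {z:.2f}")   # |z| > 1.96 -> significant at the 5% level
```

Because these trial clicks are frequent and mostly under our control, sample sizes like the 2,000 per group above are realistic, which is the whole point of testing on the proximal outcome.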
The insights from the A/B testing experiments will be much more actionable to marketers.
So far, we have divided the original problem, which marketing programs lead to a higher likelihood of customers opening opportunities (for which we cannot prove causality, and thus cannot make actionable), into three sub-problems (for which we can prove causality):
- Which tactics to use in E-book programs to generate more clicks into Trial
- Which E-book programs are most effective at converting leads to Trial
- Whether more Trials lead to more Opportunities (not covered in this blog)
- In order to generate actionable insights, we need to know not only what works, but also why it works.
- If the reason “it” works is not that “it” is great, but other noise factors, such as “it” introducing selection bias, then we cannot say that doing more of “it” will generate more results (a causal effect). Thus the insight is not actionable.
- If we can’t prove a causal effect from the action to the end result, we can use a middle ground: Action 1 has a proven causal effect on Action 2, and Action 2 has a proven causal effect on the end result. Then use Action 2 as the proximal metric for Action 1. In programming terms, this is called divide and conquer.
Thanks for reading! And thank you for bearing with my less-than-great writing.
I am very passionate about applying data science in business. Although I have tried my best to put my research and experience together, this blog is less concise, less scientifically rigorous, and less engaging than I would like. I hope you learned something useful from it. I will also update it in three months, after I have made progress in my work.
If this blog interests you, here is the Table of Contents of the rest of the blogs in my #52WeeksOfWriting challenge.