Interview: Chris Chapo, Amperity, Part 2

  

This is part 2 of my interview with Chris Chapo. Chris has never been a CMO, but he has overseen analytics at a number of organizations including Apple Retail, JCPenney, and now Amperity. In this part we explore Chris’s take on analytics and what he feels CMOs need to know.

This is the free edition of Marketing BS. Part 1 was available to subscribers on Wednesday.

Transcript

Edward: This is part two of my interview with Chris Chapo. Today we’re going to dive into his data analytics insights and how they can help CMOs. Chris, let’s start with this question: is data science the future of business?

Chris: I personally don’t believe it is, which is kind of funny given the fact that I am a data person and a self-proclaimed “data scientist.” I’d say taking an evidence-based approach to understanding and solving business problems—to me, that’s the future of business if you want to be scalable and sustainable. Data science is just one of the methods you might use to achieve that vision.

Edward: What's another example? What are examples of evidence that’s not data science?

Chris: Sometimes people look at data science as the fancy statistical model that says, let me predict the future and the outcome of what’s going to happen, which could be one approach. But I’d say another simple approach to creating evidence is experimentation. If you’ve got two different potential marketing treatments that you may want to show to consumers and you’re not sure which one is going to be the most effective, try it out.

The confidence you’ll have in that will overcome any doubt you may have in a statistical model which says, well, this person should get message A with 0.76 probability, and this person should get message A with 0.72 probability. How do you interpret that? Oftentimes, experimentation is a great example of how to do that.
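The experiment Chris describes can be sketched as a simple two-proportion comparison between treatments. This is a minimal illustration, not anything from the interview; the counts and sample sizes are invented.

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Compare the conversion rates of two marketing treatments (A/B test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# hypothetical results: treatment B converts 5.5% of users vs. A's 4.0%
p_a, p_b, z, p = two_proportion_z(conv_a=400, n_a=10_000, conv_b=550, n_b=10_000)
print(f"A={p_a:.3f}  B={p_b:.3f}  z={z:.2f}  p={p:.4f}")
```

The point of the test is exactly Chris's: instead of interpreting a model score of 0.76 vs. 0.72, you get a direct answer about which treatment performed better and how confident you can be in that difference.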

Another one that I find really helpful is bringing consumer insights, what you would call traditional research, to bear and combining it with data analytics and data science methods. An example where I’ve seen teams be really successful is creating behavioral-based segmentations, where you talk to your consumer base about attitudes and beliefs but, at the same time, build statistical models to predict which segment a person may be a member of. That’s useful when you want to personalize to people based on their segments, versus just creating personas that are interesting but not necessarily actionable.

Edward: Why is that not data science? Is it data science only when you look at purchase data? But as soon as you use external research data, it stops being data science and starts becoming something else?

Chris: I think data science by itself, honestly, is a very overloaded term. It can mean a lot of things to different people, and as such it means nothing per se, because there’s nothing specific about it. But the method most people traditionally think about for data science is the one driven by purchase-based activity or behavioral signals: did I click on something, what did I do on a website?

It’s more focused on propensities and predictions about activities than on understanding someone’s core beliefs. There are some examples where people try to model someone’s emotional state with these more “data science” methods, but it’s usually not the mainstream when people talk about data science.

Edward: Chris, what do you do for long feedback cycles? In short feedback cycles, you can run experiments; you have the inputs and the results, and you can run all sorts of data models. What about things that have long feedback cycles? Things like, hey, I influence you now and you buy a car two years from now based on the stuff I’m doing for you now. Is there anything data science can do to help understand those long feedback cycles?

Chris: I’ll give you a couple of thoughts. Of course, data science is not a panacea; it doesn’t solve all problems. But one way to approach some of those long-feedback-cycle problems is creating leading indicators. Actually do some analysis to understand that yes, to your point, maybe the outcome I want to predict is something which will take a couple of years to play out.

Customer lifetime value is a great example of that: it takes a while to actually observe it. The same goes for products with a long purchase cycle. But is there an analysis we can do first to say, usually, people who end up with higher lifetime value, or, in the case you just mentioned, who do go on to buy a car, exhibit these behaviors first?

It won’t necessarily be 100% predictive, but you can focus experimentation on those leading indicators and use that (to your point) to take your best guess, using data, that if I continue doing more of this thing, it will help drive something in the future.

I’ll give you an example of this. It wasn’t a team that I led, but it’s something I learned when I was at Intuit. One of the things they wanted to do was get people to go from trial members of QuickBooks Online to full paying members. That usually takes a period of time to actually show up: 30, 45, or 60 days.

What the team realized when they did some analysis is that if I can get people to do this one little action within their first seven days, that has a high correlation with someone converting to being a full paying member. They had teams who were focused on driving that little behavior, which was connecting your bank account.

Again, they would have entire teams focused on experimentation to drive that, and how can I get that rate higher? What are the things I could do? Because they knew that it would pay off in the future.
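The leading-indicator analysis Chris describes amounts to checking whether an early action correlates with the eventual outcome. A sketch of that check on simulated data; the user counts, rates, and the assumed effect of connecting a bank account are all invented for illustration:

```python
import random

random.seed(0)

# Hypothetical trial-user data: did they take the early action (e.g. connect
# a bank account in the first 7 days), and did they later convert to paying?
users = []
for _ in range(5000):
    took_action = random.random() < 0.30
    # assumed effect: early actors convert far more often later on
    p_convert = 0.40 if took_action else 0.08
    users.append((took_action, random.random() < p_convert))

def conversion_rate(rows):
    return sum(converted for _, converted in rows) / len(rows)

actors = [u for u in users if u[0]]
non_actors = [u for u in users if not u[0]]
print(f"took early action:    {conversion_rate(actors):.1%}")
print(f"did not take action:  {conversion_rate(non_actors):.1%}")
```

A large gap like this is what would justify pointing whole teams at driving the early behavior, with the caveat Edward raises next: the gap alone shows correlation, not causation.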

Edward: Do you run into selection effect problems? That’s correlation, not causation. I imagine that 100% of people who are paying members connected their bank account, but that does not necessarily mean that connecting your bank account leads to paying. It could be the other way around.

Chris: Yeah, I think that’s one of the challenges with this. In that example, the team would go back and validate it: those people who did take that action did have a higher spend than people who didn’t. But to your point, that’s where the question of selection bias comes in. There are a couple of different methods one could use to help address that. In this example, because it’s an online product, we can select who gets the potential treatment and who doesn’t, so you can validate it in a true A/B test fashion.

Now, other scenarios are not quite as easy. One of the things I’ve seen teams apply is propensity analysis. You may not necessarily be able to have a control group, but you can look statistically and say: for this person who takes an action that we hope is a positive one, or who experiences something different, can we find somebody who’s very similar to them in terms of their past purchase history and use that as a pseudo control group?

But it’s still not necessarily (I would say) as solid as a lot of the statisticians in the world would like. That comes back to one of the pieces of why this concept of data science is important: not getting super caught up in the 100% accurate solution, but asking what is good enough to drive a better business result than we’ve seen in the past. How can I make sure our success measures aren’t just random chance, how is this better than our other approaches, and can we get better over time? Those are a couple of examples.

Edward: I definitely buy that. I think one of the issues with academia is that they’re trying to get to the right answer. Whereas in a business, the right answer is not what you need. You don’t need to know the exact right answer; you just need a better answer than what you were doing before.

You spend a lot of your time in retail. Let’s talk a little bit about retail loyalty. How do you measure loyalty in retail?

Chris: That’s a great question, because most people will measure loyalty based on how much people spend or are likely to spend. Honestly, I’ll protect the guilty here: there’s a cable company that I use which I spend a lot of money with, but I do not like the cable company. I am not loyal to them. If I had another choice that provided the same level of service in terms of speed, bandwidth, and all those things, I would choose something else in a heartbeat.

Oftentimes, how I think about loyalty, particularly in the retail sense, is when you’re able to build a strong emotional connection between the consumer and the brand. You’ve heard of net promoter scores; that’s one surrogate example of how to measure that connection. But if you’re so connected that you say, I love this brand so much, I will choose them above others, there’s something specific about them that makes me want to buy from them. That to me is an example of when you’ve got that loyalty.

The hard part is, how do I measure that? How do I create a system to instrument that? But if you can do it, that to me is a north star for retail analytics.

Edward: You’re defining loyalty as a positive emotion rather than the lack of a negative. You’re splitting out the loyalty where I’m going out searching for this thing versus the loyalty that comes from lock-in.

Chris: Yes. I would say it’s the loyalty where I’m going out to search for this thing I love. If I had a choice, I would choose this over others. That to me is that connection. You see it, oftentimes, anecdotally when people talk about their favorite brands: they talk about how it makes them feel. I’ll give you an example. I used to love Virgin America. That was my favorite airline. I would choose them over others because of how they treated me as a flyer.

For certain people, that emotion isn’t important. They may choose something else. They may choose Southwest because they want to feel like a smart, savvy flyer, not because of an emotional connection to what they feel when they experience the brand. To me, it’s really around that emotional feeling that, quite honestly, can be difficult to measure, but it is really important, and you know when you’ve got it.

Edward: Can you measure it with the price premium? I imagine most people, when they fly, find the airlines that will take them to the place they need to go, at the time they need to go, and then choose based on price. Whether it’s $100 or $99, they end up going with the $99 one. You’d imagine the measure is how much more they’re willing to pay to go with the $100 one, the $110 one, or the $200 one. Is that the measurement of how much loyalty there is?

Chris: I might actually broaden it. Instead of it being a price point, it’s just general friction. One area of friction could be paying more. Another could be, in this example we’re talking about flights, that I’m willing to take a connection even if the prices are the same. I’m willing to have a layover because I love this brand and I want to be part of the experience, even though I know that someone flies there directly.

There could be other pieces as well. You think about the friction side of things if you have a bad experience with a company. Going back to this flight example, say that your connection flight was canceled and you’re sitting there at the airport. You may be more willing to forgive that bad experience if you have loyalty to the brand versus if you don’t.

I’d actually broaden it to the friction, the discretionary friction, you may be able to deal with. The more of it you’re able to deal with, potentially, the higher your loyalty to that brand.

Edward: Can you quantify that? Because I imagine for things like willingness to do a stopover on a flight, there’s a dollar value you can calculate: what someone is willing to accept to take that stopover versus going direct. Can you put a dollar value on most of those friction things, until you eventually get to the point where, hey, there’s a number we can put on the loyalty of any given customer?

Chris: That’s an interesting question. I haven’t thought about it in the macro sense, but I think in very specific examples one could do this. I’ll give you an example from back in my Apple retail days. One of the big challenges that people faced, particularly in the 2007–2011 timeframe, was that the stores were busy and getting help could take a long time. We were able to quantify the impact that having to wait for help, whether at the Genius Bar or for help to purchase, actually had on someone’s likelihood to recommend the store experience (we used the net promoter score methodology).

For those people who had prior great experiences, the negative effect didn’t impact them as much as people who were either newer to the store experience or had prior negative experiences.

Edward: First impressions matter. If you make a great first impression on your customer, that first impression can be sticky and get you through some bad experiences down the line.

Chris: Oftentimes, people remember how you ended the experience first, then how you began the experience, and then everything in between. There’s some research done, and I feel bad I don’t have it off the top of my head, by some folks who were studying the impact of lines and how people experienced them. The ending mattered more than the beginning of an individual experience.

Edward: In terms of multiple experiences, are you always better off investing in those early customers rather than a customer that’s been around loyally? It’s almost like if a customer is loyal, that’s the one you least need to invest in?

Chris: I think about this in three buckets. There are your best customers, your almost-best, and then everyone else. Oftentimes, I advocate spending enough on the best customers to keep them there, because if they have enough bad experiences, they’re going to fall down. We’ve all probably experienced things like, that company used to be great, but they’re not great anymore. Spend enough to keep them there, but it’s that next tier, the next best, that I personally would say you should invest more in.

Edward: Chris, is it the next best though, or is it the new customers? Because it sounds like what you were saying before is that first impressions really, really matter.

Chris: Next best could be (to your point) the first impression for folks. We know, based on the channel of acquisition and the profile of the customer, that they’re likely to be on the path toward becoming best customers, so treat them really well at the very beginning. Or it could be people who’ve been around with you for a while and are starting to increase their purchase frequency, but haven’t necessarily gotten to that loyalty phase.

I wouldn’t say I would choose either over the other. Going back to this concept of experimentation, you have to actually try it; you don’t know which one you’re going to have the most leverage with. Honestly, I would say, at least from my experience with most retailers, you’re going to need to balance the acquisition component, your first impression with customers and so forth, with those who are buying from you but aren’t necessarily at the top. You’ll need to invest in both of those.

Edward: What are those key metrics? You mentioned NPS, but what other metrics should retailers be using to predict, or to optimize against, to make sure their customers are loyal?

Chris: I think the net promoter score is definitely a simple measure to use, and it’s fairly effective, particularly in driving closed-loop operational improvements around customer experience. That’s definitely a key one. Another one that I am a big proponent of is predictive customer lifetime value. Although it can be difficult to understand and interpret, it’s really helpful as a barometer to say, here’s what we think the future spend will be.
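Predictive CLV models range from simple heuristics to full probabilistic models; the interview doesn’t specify one, so here is a deliberately crude sketch under stated assumptions: historical order frequency carried forward, geometric retention with an assumed monthly churn rate, and a simple recency penalty. All parameter values are invented.

```python
def predicted_clv(orders, avg_order_value, months_active, months_since_last,
                  horizon_months=24, monthly_churn=0.05):
    """Crude predictive CLV over a fixed horizon (all assumptions, not a
    production model): frequency from history, geometric retention,
    and a linear recency penalty for customers who look lapsed."""
    monthly_rate = orders / max(months_active, 1)   # historical order frequency
    # long-inactive customers look less "alive" to the model
    p_alive = max(0.0, 1.0 - monthly_churn * months_since_last)
    expected_orders = sum(
        monthly_rate * (1 - monthly_churn) ** m for m in range(horizon_months)
    )
    return p_alive * expected_orders * avg_order_value

# two hypothetical customers with the same history but different recency
recent = predicted_clv(orders=12, avg_order_value=50, months_active=12, months_since_last=1)
lapsed = predicted_clv(orders=12, avg_order_value=50, months_active=12, months_since_last=10)
print(f"recent buyer CLV: ${recent:.0f}   lapsed buyer CLV: ${lapsed:.0f}")
```

Even a rough barometer like this captures the point Chris makes: it turns past behavior into a forward-looking number you can rank and act on, while being open about its uncertainty.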

One area that takes a little bit of time to suss out: if you can, for an individual brand, understand the emotional benefits someone gets from your brand and find a way to measure them, that’s another great way to approach this.

Edward: What’s an example of that?

Chris: An example would be, going back to my Virgin example because they’re no longer around, imagine one of their emotional benefits is that the brand makes me feel special and welcomed. Maybe that’s the emotional benefit. I’m just supposing that.

Edward: Is it market research then? It’s a matter of asking your customers survey questions and figuring out which of those answers matter?

Chris: Which are the answers that matter. There’s an approach where you can figure that out quantitatively, through stated importance versus derived importance on these emotional benefits. You can either ask them (to your point) on an ongoing basis, or, going back to leading indicators, find a way to measure it in a different fashion. That can be interesting.

The caveat is that I don’t know how to genericize that, because a lot of it has to do with each individual customer and brand. That’s how I would think through it.
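Derived importance is typically computed as the statistical relationship between each attribute’s rating and an overall outcome rating across survey respondents, as opposed to stated importance, where you ask people directly. A minimal sketch using Pearson correlation; the survey attributes and responses are entirely invented.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# each row: (rating for "feels special", rating for "low price", overall loyalty)
responses = [
    (9, 4, 9), (8, 6, 8), (3, 9, 4), (7, 5, 7),
    (2, 8, 3), (9, 3, 9), (5, 7, 5), (6, 6, 6),
]
special = [r[0] for r in responses]
price = [r[1] for r in responses]
loyalty = [r[2] for r in responses]

print(f"derived importance of 'feels special': {pearson(special, loyalty):.2f}")
print(f"derived importance of 'low price':     {pearson(price, loyalty):.2f}")
```

In this made-up data, "feels special" tracks overall loyalty closely while "low price" does not, even though respondents might have *stated* that price matters most; that gap between stated and derived importance is the signal the approach looks for.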

Edward: But there’s a generalized way: ask a bunch of questions, run correlations, see which of those questions end up being leading indicators of lifetime value or success in the future, and then optimize toward those questions.

Chris, this has been great. Can you talk a little bit before you go about what your quake book is and how that changed the way you thought about the world?

Chris: The book that stands out for me is a book called Tribal Leadership. Why that’s important is it goes back to the example I shared about JCPenney and a few other examples I’ve had. The team that you’re working with, the tribe that’s trying to drive change in a company or drive success, matters probably more than the actual strategy of the company, at least in my opinion, because I’m a big believer that teams that are motivated and work well together can solve any problem. Again, that’s kind of [...], to some degree. It’s something that I’ve actually experienced personally in my life.

What I love about this book Tribal Leadership is that it talks about different levels of organizations, starting with what they call level one, which is like prison gangs and the tribes that form there, where they say things like “life sucks.” All the way up to the very top level, with words like nirvana and flow: that one group you work with where everybody is completing each other’s sentences and understands how to work together effectively. It’s that once-in-a-lifetime opportunity.

What I love about this book is it gives you concrete examples of, if you’re in a level two or three group, how you get to the next level, and some tips and tricks to help you grow as an organization. I first read this book when I was at Intuit. It helped me as a leader understand that the craft of my work, which is analytics and data, is important, but what was more important is how to create a tribe or team that is successful and can drive the future.

That’s my quake book.

Edward: Thank you so much, Chris. This has been fantastic. I really appreciate your time today.

Chris: You’re welcome. Thank you very much too.