Why your experimentation programme needs a risk profile

September 14, 2020
S02 E037

Today we are talking with Stephen Pavlovich, CEO of Conversion.com, and I'll be asking him how you can and why you should be using experimentation to manage risk, and how to create a risk profile for your own or your clients' business.

More info:

Book(s) recommended in this episode:

Transcript:

Please note that these transcripts are mostly generated automatically and might not exactly reflect what is spoken by the people in the interview.

Guido: [00:00:00] Welcome back to the CRO.CAFE. Before we dive into our main topic, I wanted to ask you about something you said at the Digital Elite conference in April last year. You expressed the hope that people in CRO in 2019 would start answering bigger questions.

So instead of just optimizing landing pages or just forms, or dare I say, button colors, we should all focus more on the actual product offering, pricing schemes, or the markets that companies address. How far do you think we are in this?

Stephen: [00:00:36] I would say for the most part, we're pretty early on in that journey.

I think there's a long way for us to go in what we can do to test that. At the moment, most people are typically testing changes that they would make anyway. In other words, if A/B testing didn't exist, would you still make this change? And for most people, they probably would.

So they're kind of testing something that they consider to be the best practice or a logical idea. And in doing that, it doesn't really stray too far from what they would do anyway. And I think that undervalues the role that experimentation can play.

Guido: [00:01:15] It feels more like a safeguard. Like, okay, I want to do this, but I don't want to fail too big. So let's check this before we put it live.

Stephen: [00:01:22] Yeah. I mean, sometimes it's not. A safeguard implies that you could potentially fail. Sometimes I feel like the reason why people run A/B tests is to see how clever they are. Like, you know, I got this idea, let's test it. And then I can use that to show them how much of a good idea it was to do it.

I think a lot of the time people aren't testing concepts that are bold enough to fail in a big way. And obviously you shouldn't be looking to fail in a big way, but you should be testing ideas that have the potential to go to opposite ends of the spectrum, as opposed to just oscillating around the middle.

So yeah, I think it's still a very fair point that a lot of people are testing fairly simple concepts, the kind of things they might expect to see on their competitors' websites. Whereas experimentation should give us the power to test pretty much anything. We have a whole kind of alternative reality where we get to play God: we get to create that new experience and decide what it should be, and then see whether it performs better or not. But people still keep it so close to what they have already.

Guido: [00:02:38] And why do you think that is? Is it the corporate culture? Are we not exploring enough? Do we not dare enough to do exceptional things, or extreme things?

Stephen: [00:02:52] Yeah, I certainly think that the culture is part of it, but I think the culture point can only be used as an excuse so far. I think it goes both ways, and the culture is, by its nature, the people and how they act.

And so it's on us as experimenters, I guess, to help inform and change that culture. So yes, certainly the culture is part of it. And a lot of the time, when people within a company want to run a bold experiment, they get pushback and people don't want to do it. And that's totally fair enough, but there are still ways around that.

I think a big part of it is also that we don't always, and this applies just as much to myself as anyone else, have the imagination or inspiration to put forward those bolder concepts. When you're looking to optimize a website, the temptation is always to look at what it looks like today and come up with something close to it that we can test.

And that tends to keep it fairly close to what is already there. You don't tend to come up with the bolder, more creative ideas, you know, the throw-it-out-and-start-again concepts, if you do that. So certainly culture is part of it, but it's also on us as CRO professionals to push that forward and try and change it.

Guido: [00:04:11] I want to hear more from you later in this episode about how to come up with more disruptive ideas. And this is a nice segue into the blog post you created about using experimentation to manage risk and create a risk profile. I think that's an important part of becoming more disruptive. You need to be aware of the risk you're taking. So first off, what is a risk profile? Let's start with that one.

Stephen: [00:04:39] I think everybody has a risk profile in their experimentation strategy, whether they know it or actively manage it or not. Some experiments that you do will be low risk.

They will be iterations of something that you have tested before and saw work. Others will be medium risk: that might be where you're testing a completely new concept, but it's unlikely to have an effect outside of the experiment itself. In other words, when you stop the experiment, you limit the impact.

And then there could be high risk experiments. These are ones where it's potentially more likely to have an effect even after you've stopped the experiment. So if you're testing pricing, or if you're a relatively high profile company and you're testing something that people might pick up on social media or in the press, for example, then those are likely to be higher risk experiments.

So most companies will be doing a mix of these experiments already, mostly low and medium risk, typically. My belief is that companies should be managing a fairly balanced portfolio, including some high risk experiments, because ultimately those are the ones that your competitors are less likely to run.

Those are the ones that will give you the competitive advantage. Those are the ones that will help you go from the local maximum to the global maximum. In other words, rather than helping you to optimize and iterate on what you already have, it can show you where there's a much greater potential elsewhere.

And you only get that by testing those more radical concepts.

Guido: [00:06:21] Yeah. You already mentioned, uh, those competitors, but not running an experiment is also a risk, right?

Stephen: [00:06:27] Absolutely. Yeah, I think that's entirely fair; it's something that people will often overlook. Not running an experiment can mean either changing a website without testing it, which is obviously quite a significant risk, or not changing your website and leaving it as it is because you're too afraid to touch it, which again is a significant risk, because

your competitors will be testing and optimizing and continuing to evolve, and in doing so they are gaining a significant advantage over you. So you can't do nothing, and you can't change your website without testing it. So you have to test in a logical and scalable way, and to do so means getting the balance right between those low, medium and high risk experiments.

Guido: [00:07:10] So I think many CRO practitioners already use a framework for prioritizing tests. Would you say that's very similar to a risk profile, or do we need to add something to it?

Stephen: [00:07:21] I must admit, I struggle with a lot of the prioritization methods out there. This is probably a whole other podcast.

But for me, I think the challenge with a lot of prioritization methods is that, at their core, they essentially come down to a fairly subjective belief: at some point you're going to be asked, essentially, what is the expected impact of this test? The problem with that is that it's very subjective.

You know, your idea of the potential impact might be different to mine for the same experiment. It becomes very hard to do that in a consistent, logical way. And if that is one of the main driving factors behind how you prioritize your experiments, then it starts to call into question the whole value of that prioritization method.

The approach that we use is a little bit different. We try to make it as data-driven as possible, with the data being the evidence that you've gained either from previous experiments or from customer research, and using that to inform the prioritization. We've talked about this quite a bit on our blog, where we talk about the score prioritization method. But yeah, I wouldn't have thought that risk is necessarily something that comes into most people's prioritization methods.

Even the score methodology: when we wrote that blog post, I don't think we factored risk in as a significant element at the time either. So it's certainly something that needs to evolve.

Guido: [00:09:05] For over ten years now, Online Dialogue has been advising about evidence-based conversion optimization with a focus on data and psychology. They see that analyzing data and recognizing customer behavior results in a better online dialogue with your clients and a higher ROI. Their team of strategists, analysts, psychologists and UX specialists gathers valuable insights into the online behavior of your visitors. Together with you, they optimize the different elements of your CRO program through research, expert reviews, A/B tests and behavioral analysis. For more information about their services, go to Online Dialogue.

Is that the moment that you usually do the prioritization for your experiments? Is it also the moment you evaluate those risk factors?

Stephen: [00:09:50] So it's normally during the ideation phase. Prioritization is typically not a kind of one-off event, in my belief. Most people treat it as though you come up with a list of 20 experiments,

and then you prioritize them and decide in which order you should do them. In my opinion, that's probably far too late to be prioritizing. You should start much sooner than that. Typically, you should be prioritizing at the area, audience and lever level. In other words, you should prioritize the areas on your website.

So, you know, testing on the product page is going to be higher priority than testing on the homepage, for example. You should prioritize based on the audiences too, so we should be prioritizing mobile visitors over desktop, for example. And then you can prioritize on the lever, the lever being the key hypothesis behind the experiment. It might be trust or delivery, for example, these kind of core principles, because then you can cluster your experiments around those. You can get an understanding of which work and which don't. That way you can say, okay, we know that the highest priority is to test on the product page and on mobile traffic, and the delivery lever has the highest win rate so far. So therefore we need to come up with the best next experiment to run on that page, on that audience, on that lever. And that way you've already done a lot of the prioritization work before you even start on the ideation process.

All you have to do is come up with the best next experiment. You don't have to come up with a list of 20 different concepts and then try to prioritize those based on "I think my idea is going to have a higher impact than yours" and vice versa. So it becomes much more objective as a result, if you do that.
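The area/audience/lever prioritization Stephen describes could be sketched as a small script. To be clear, this is a hypothetical illustration, not Conversion.com's actual model: the example data, the `won` flag, and ranking purely by win rate are all assumptions made for the sketch.

```python
from collections import defaultdict

# Each past experiment is tagged with an area, audience and lever,
# plus whether it won. (Hypothetical data for illustration.)
experiments = [
    {"area": "product_page", "audience": "mobile", "lever": "delivery", "won": True},
    {"area": "product_page", "audience": "mobile", "lever": "trust", "won": False},
    {"area": "homepage", "audience": "desktop", "lever": "delivery", "won": False},
    {"area": "product_page", "audience": "mobile", "lever": "delivery", "won": True},
]

def win_rates(experiments, dimension):
    """Win rate for each value of one dimension (area, audience or lever)."""
    wins, totals = defaultdict(int), defaultdict(int)
    for exp in experiments:
        totals[exp[dimension]] += 1
        wins[exp[dimension]] += exp["won"]
    return {value: wins[value] / totals[value] for value in totals}

# The "best next experiment" targets the strongest value on each dimension.
best = {dim: max(win_rates(experiments, dim), key=win_rates(experiments, dim).get)
        for dim in ("area", "audience", "lever")}
print(best)
# {'area': 'product_page', 'audience': 'mobile', 'lever': 'delivery'}
```

A real version would also weight by traffic volume and commercial impact, but the shape is the point: rank areas, audiences and levers by past evidence before ideation starts, so ideation only has to answer one question, namely what to run next on the winning combination.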

Guido: [00:11:51] And so when you're running these more disruptive tests, I can imagine you get a lot of backlash from the company, saying, okay, but if we do this, we need to overhaul our whole department, or the way we do marketing, or the way we do business in general. How do you respond to that? How do you help them understand that, hey, this is actually important for you to do, because otherwise in a year you might not have any customers?

Stephen: [00:12:19] I always look to that same phrase I mentioned a second ago: the best next experiment. What we're trying to create is not the perfect solution for any given problem; we're trying to create the best next experiment. What I mean by that is we wouldn't say to a company, you have to do this, and this means overhauling your entire department or structure or your value proposition. We want to use experimentation to inform that. So it's kind of like taking the next step on the journey.

Guido: [00:12:53] Yeah, because I see a lot of people that are familiar with experiments: if they see those experiments upfront, they already assume, okay, but if this is proven somehow, then we need to implement it, then we need to overhaul everything to accommodate whatever results from the next experiment.

Stephen: [00:13:10] But that's not necessarily a bad thing. If you run the experiment and it shows you should be doing X, and this is how much better it will be for your customers and for your business, then you have your ready-made business case there.

Guido: [00:13:23] Or at least it's food for thought.

Stephen: [00:13:26] Exactly.

Guido: [00:13:26] For more experiments, saying, hey, apparently we can do much better. Precisely: let's explore this more.

Stephen: [00:13:31] And if you decide not to do it, then you know the potential impact of that. Absolutely. One thing that we will always do, in terms of taking those baby steps, is look to start with something simple. I guess there are two ways of doing it. You can either start with a big goal and scale it down to a very simple concept that you can test next, or you can start with something fairly simple, and then you test that, and if it works, you test something else.

And again, and again, and again, until you scale it over time in terms of the potential impact that it has. For example, one of our clients was a medical client, and we knew that every time we tested delivery as a lever, it was successful. If we promoted free delivery in the navigation, for example, that was successful.

If we reminded people about it when they were on the product page and in the checkout, that was successful as well. And if we spoke about the speed of delivery, that was successful. We knew that essentially every test that we did around delivery seemed to be successful. And it got to the point where we said to the client:

delivery is obviously something that your customers really care about, so how can we actually improve the commercial proposition? Because that could be a key advantage; it's something your customers care a lot about. And that's what prompted them to test same-day in-store delivery, because they have hundreds of stores across the UK.

And so we were able to test that in a fairly simple way before they rolled it out nationwide; we were able to do it as a fairly simple proof of concept. And we could see that, sure enough, same-day delivery was very popular for them. It was something that the customers appreciated.

And so we were able to use experimentation to inform something that would become a commercial proposition. It wasn't like on day one we said to them, you need to test same-day in-store delivery, because they wouldn't have bought it. But because we had, I think, about six months' worth of A/B test data to inform it, it gave them the confidence to test it.

Guido: [00:15:39] Yeah. And those might not work out financially in the short run either. I mean, setting up the delivery network can take a lot of time and money that you need to invest. But once it scales, it starts to make money.

Stephen: [00:15:52] Yeah.

Guido: [00:15:53] So how do you go about determining what's a low versus a high risk experiment? What's your process to get more of those high risk ones in?

Stephen: [00:16:08] There are a couple of different ways that we do it. In the blog post I spoke about a fairly simple scoring matrix that you can use. It's one that works for us, and it asks you questions like, essentially, what type of experiment are you running?

Are you changing UI or text, are you changing functionality, or are you testing pricing? Each of those is likely to have a different type of risk associated with it. You can also look at the potential impact of the experiment even after you've stopped it. So suppose we were working with one of the big fast food restaurants in the UK,

and they wanted to test a completely new vegan range of products, for example, and they wanted to do that as a painted door experiment. So they didn't actually have the products; they just wanted to add them to their website and see whether people would try to buy them. That is obviously a super high risk experiment.

The likelihood is that you probably wouldn't want to run that in the first place, because as soon as people started seeing it, word would get out. People might talk about it on social media, they'd get annoyed if they couldn't buy it, it might get leaked to the press, and so on. So that was an example of a test that has an impact even after the test is finished.

That's obviously a pretty extreme example to illustrate the point, but there are plenty of tests like that that we brief on at Conversion, where the client is taking a risk beyond the cost of actually building and running the experiment; it could have an impact beyond that. So we've got a fairly detailed model that allows you to estimate whether something is low, medium or high risk, but the very simple rule of thumb is essentially to ask yourself the question:

if we couldn't A/B test this change, would we still roll it out? Because most of the time, people are testing things that they think are good ideas. And that kind of sounds counterintuitive: why would you not test something that is a good idea? Or why would you test something that you think is a bad idea?

But what I mean is that essentially we only test the things that we think are going to work. We should be testing the ideas where either it will crash and burn and tank the conversion rate, or it will have the completely opposite effect. So one of my favorite examples is when we worked with a SaaS client to change their pricing model from one where the tiers were structured on functionality to a pricing model based on usage.

And if we didn't have A/B testing, would we still make that change? Well, it's hard to say. Probably not, because what they had already was working, but we wanted to see if this was better, and it would be quite a big risk, quite a significant shift. When we did test it, it actually ended up doubling sales, doubling revenue from new customers. So it was hugely successful, but we might not have tested it, we might not have made that change, if we couldn't have A/B tested it.
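Stephen's scoring matrix isn't reproduced in the episode, so the sketch below is only a hypothetical reconstruction of the idea: score an experiment on what it changes and whether its impact can outlive the test, then bucket it into low, medium or high risk. The factors, weights and thresholds are all invented for illustration.

```python
def risk_score(change_type, outlives_test, high_profile):
    """Toy risk score: higher means riskier. All weights are illustrative."""
    change_weights = {"ui_text": 1, "functionality": 2, "pricing": 3}
    score = change_weights[change_type]
    if outlives_test:   # e.g. a painted-door test people might remember
        score += 2
    if high_profile:    # brand likely to be discussed on social media / press
        score += 1
    return score

def risk_band(score):
    """Bucket a score into the low / medium / high bands from the episode."""
    if score <= 2:
        return "low"
    if score <= 4:
        return "medium"
    return "high"

# A copy tweak vs. a pricing-model test at a well-known brand:
print(risk_band(risk_score("ui_text", outlives_test=False, high_profile=False)))  # low
print(risk_band(risk_score("pricing", outlives_test=True, high_profile=True)))    # high
```

The rule of thumb in the passage, "if we couldn't A/B test this change, would we still roll it out?", is effectively a one-question version of the same matrix: anything you would never ship untested is, by definition, further up the risk scale.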

Guido: [00:19:27] Marketing budgets have suffered, and the share for A/B testing has been impacted too. If you want to keep testing to enterprise standards but save 80% on your annual contract, you can consider Convert. With their summer release, you can take advantage of full stack and hybrid features, strong privacy compliance and enterprise-grade security. Feel good about your smart business decision and invest what you saved back in your CRO program.

So for those higher risk experiments: I can imagine that if you do that publicly, on a public website where maybe millions of people already shop, that can be high risk, maybe even when you release it in small percentages. So do you move those high risk experiments away from A/B testing to other validation methods, or maybe use those first?

Stephen: [00:20:25] I mean, we do try and find a way to A/B test it. We like the evidence pyramid that Ton and the guys at Online Dialogue introduced me to. It shows essentially that RCTs, or A/B tests, are the strongest form of evidence that you can get, especially when you do a meta-analysis across multiple A/B tests.

So I always look to try and find a way to inform it, but it might be that we essentially have to break it down into a simpler concept that's easy to A/B test. Like the medical client with the same-day delivery, for example: that might be something that would be too hard for them to do on day one, but we can take steps

towards it. And the same with the fast food restaurant: if they wanted to see what would happen if they introduced this vegan range, we might say, well, rather than testing it on an entire range, it might make sense to test it on an individual product, or even just to see whether we could better signpost existing products that are vegan, for example, and see what effect that has on customer behavior.

It's hard, because there's only so much that you can do towards it compared to the benefit of the kind of PR campaign that might go around launching a whole new product range, and so on. So it becomes harder to test at that sort of scale.

Guido: [00:21:52] So we just defined a low risk, medium risk and high risk profile for experiments.

What do you think is a good balance? Is there one way of doing this, or does it depend on the situation or company?

Stephen: [00:22:05] Yeah, I think it depends both on the company and on the situation as well. I think a company should not have the same risk profile all year round. In other words, if you are a retailer and you make 50% of your revenue in November, December and January, for example, then you might think, well, we need to have a different risk profile for that period

than for the rest of the year. It might be that when you're in peak sales, you want to run no high risk experiments at all, because you can't afford to crash the conversion rate, and you should be focusing primarily on low risk experiments: ones that iterate on previous winning concepts, for example. So those are the ones that you focus on during peak periods.
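The seasonal balancing Stephen describes could be expressed as a target portfolio mix that shifts by trading period. The percentages below are invented for illustration, not a recommendation from the episode:

```python
def target_mix(peak_season):
    """Share of the roadmap per risk band; the numbers are hypothetical."""
    if peak_season:
        # Peak trading: iterate on proven winners, avoid high risk entirely.
        return {"low": 0.7, "medium": 0.3, "high": 0.0}
    # Rest of the year: a balanced book that includes some high risk bets.
    return {"low": 0.4, "medium": 0.4, "high": 0.2}

# A toy roadmap of candidate experiments, each tagged with a risk band.
roadmap = [
    ("low", "iterate on the winning delivery messaging"),
    ("high", "test a usage-based pricing model"),
]

mix = target_mix(peak_season=True)
allowed = [name for band, name in roadmap if mix[band] > 0]
print(allowed)  # only the low-risk iteration survives peak season
```

The point of encoding it this way is that the risk profile becomes an explicit, reviewable setting per period, rather than an implicit habit of the testing team.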

Guido: [00:22:57] That's always a fun discussion, of course, because, like you said, you're making 50% of your revenue then, and it might be that that's a really important segment of your users, one that might not show up the rest of the year but is shopping in that part of the year. If you want to optimize for something, you should be optimizing for that high peak part of the year.

Stephen: [00:23:19] Yeah, absolutely. I mean, we would probably suggest in that instance to skew more towards low and medium risk and probably not to run the high risk experiments. It depends on each company's appetite. I mean, we know firsthand the difference that a higher conversion rate can make: if you have a high conversion rate, that allows you to afford a higher cost per acquisition, and then you can spend more than your competitors to acquire traffic.

Guido: [00:23:45] Yeah. Maybe also how fast your audience and your industry are moving, right? If every year is completely different, you need to keep figuring out, okay, what's working this year. Whereas if you're a very traditional business, it might be a totally different story.

Stephen: [00:24:00] Yeah, absolutely.

Guido: [00:24:01] So for those risk profiles: seasonality, the size of your business, how fast your audience moves. Am I forgetting something?

Stephen: [00:24:09] I would say also the experience that you have in your A/B testing program. In other words, if you're just getting started, if you've run only a handful of A/B tests before, then you probably want to have a much more balanced profile across low, medium and high risk. If anything, you might want to skew slightly more towards high risk experiments, because low risk is when you're iterating.

That's typically when you're focusing on building on concepts that have already won in the past. Whereas if you're just getting started with A/B testing, you don't want to narrow down your focus too early. You want to make sure that you have a pretty broad approach to experimentation, that you're testing a wide variety of levers, and that you test them in a high impact way, so that you can learn quickly whether or not they work for you.

Guido: [00:24:59] Yeah. And I would also assume that there's a high correlation between the test effect size and the risk profile, so higher risk might also mean higher reward, usually. And if you're just starting out with CRO, my assumption would be you're probably in a somewhat smaller company with less traffic, not necessarily used to that. So if you have more high risk experiments, there's a higher chance of you actually finding something.

Guido: [00:25:30] And in your blog post, you mentioned a couple of ways to come up with those disruptive ideas. I think that's a big challenge, of course, for many teams. So how on earth do I find those disruptive ideas? Do I just sit down and brainstorm with my team?

Stephen: [00:25:46] I mean, yes, absolutely, you can do. There are a lot of techniques that can work effectively in terms of brainstorming. One that I didn't mention in the blog post, in terms of ways to come up with more innovative ideas, these kind of higher risk concepts: one of my favorites is the idea of invalidating lore. Lore, spelled L-O-R-E, is the kind of anecdotal knowledge that exists in your company, the things that you do because that's the way it's always been done.

Stephen: [00:26:17] I got this concept from an interview with Chamath, who used to run growth at Facebook. He was talking about how Facebook's early success was based on this idea of invalidating lore. In other words: find out what you do in your company because that's the way it's always been done, or what you do in your industry, even, because that's the way it's always been done, and then challenge that and test that assumption to see, is that still valid? Should that still be the case? Because often people, by nature, accept that norm. That's the way it's always been; it goes unchallenged. And they don't necessarily realize that it could be holding them back.

Guido: [00:27:02] So how do you figure out what the lore is? Do you go about interviewing people and asking them what their assumptions are about their customers, or how things work in their business?

Stephen: [00:27:12] Yeah, exactly. When we first start working with a client, we want to go into the details of the product: how they sell it, how they make money from it, how it's structured. And then essentially we look to challenge it in an almost naive way. In other words, we might say, well, why do you do it like this and not like that? And ask questions that are seemingly obvious. Getting people to explain it, one, helps to improve our understanding; but secondly, it also highlights some of those opportunities, showing where people are doing something because that's the way it's always been done.

Guido: [00:28:03] So, trying to figure out the lore: I think especially as an outsider coming in, that helps. You notice things like, hey, that's weird, why on earth are they doing that? And I think interviewing new hires and asking them, hey, what stood out when you started working for this company, helps too. As an external agency you're new, but you're not necessarily working in the company, so you don't necessarily find out what the lore is. But interviewing new employees can really help, because they're doing this day in, day out, and yet they're still new. If you've been working there for four or five years, you have no idea what the lore is,

Stephen: [00:28:46] because it disappears. Yeah, that's a really good idea. I'm like: yesterday's brainstorm was so good! I really liked Steph's idea of running that test on the call to action buttons. Making them orange will really make them stand out, don't you think?

Guido: [00:29:00] Yeah, right. Do you want to design real A/B test winners and achieve enormous conversion uplift? Then stop brainstorming and take a scientific approach. If you can read Dutch, follow the steps in the bestselling management book Online Influence: get the book now, or follow the online course, and become an expert in applying proven behavioral science yourself. Go to the website for more information and free downloads.

The second one was divergent thinking.

Stephen: [00:29:30] So, divergent thinking. A lot of people will be familiar with this idea already, even if they're not familiar with the name of it. Divergent thinking is basically this: if you've ever been asked an interview question like "name as many uses as you can for a brick", that's a perfect example of divergent thinking. Normally they ask you that question and it sounds kind of obvious, but then you have two minutes to name as many ideas as you can.

And you start off by kind of saying, you know, we can build a house so he could build a wall, um, or he could use it as a paperweight on your desk, or you could prop the door open with it. And then after around 30 seconds, you've exhausted all of the. Obvious ideas that you can think of. And it, you know, I can start getting into a kind of weird and dark place in terms of, you know, what could you actually do with a brick?

Um, and then you were kind of a minute in and then it kind of gets worse from there. Yeah. That's an example of divergent thinking. This is the same kind of principle that's used in the crazy eights idea where you. Come up with one possible solution and then another, and then you keep on going. And when you get into ideas, six, seven, and eight, because you've exhausted the kind of more obvious ideas first that's when he started to get into that divergent thinking towards the end.

And that can be a very effective technique. It can work well.

Guido: [00:30:53] Is this an exercise that you do, uh, with the client team, or with the customers themselves, the customers of your clients?

Stephen: [00:31:00] So normally we would do that internally within conversion.com and with our clients. I don't believe it's one that we've ever done with clients' customers. We typically look to the customer to show us the problem, but not necessarily the solution.

Guido: [00:31:21] So that's divergent thinking. And, uh, the third one that you mentioned in your blog post for creating disruptive ideas is 2x instead of 2%.

Stephen: [00:31:30] Yes. This is, um, kind of what we were talking about a little bit earlier. Often, when you're looking to optimize a page, and I'm certainly guilty of this myself, my tendency is to look at the page and think: how could we make it a little bit better? And so from the very start you're thinking about how you can get this kind of marginal improvement in performance, as opposed to starting with a more radical question.

Guido: [00:31:57] And then you end up with experiments changing, changing buttons.

Stephen: [00:32:00] Exactly. Whereas instead we should be asking the question: how could we double the conversion rate? If you could only run tests that would either double the conversion rate or more, or do nothing, then what would you test? What would you do differently in your testing program? I think it's quite an interesting, um, thought process to go through, because it completely changes what you're doing. You're not going to stick to the kind of standard concepts and ideas that you might have tested previously.

You're going to test something much more radical. Yep. The example that I used in the blog post was from a company called, I think, Posterous; I'm not sure how you pronounce it. I don't think the company even exists anymore, actually, which is maybe not the best endorsement of this way of thinking, but essentially they,

Guido: [00:32:46] Well, they would've, they would've gone away much sooner.

Exactly.

Stephen: [00:32:50] They didn't just have this one weird trick, but what they did really well is, um, and do take a look at the screenshot in the blog post, because it's going to explain it way better than I can. Essentially, they were kind of like a microblogging platform crossed with Dropbox, I think, that sort of thing.

But basically, the way that they, um, improved their registration process: you know, normally you might think, okay, how would you improve your registration process? You could reduce the number of fields. You could tell people why you're asking for the information that you're asking for.

You could sell people again on what they're signing up for, so it's not just a functional form, you've also got that motivational content alongside it. You can do all these sorts of things. But what they did is they said, well, actually, let's just skip the registration process altogether. They literally say: step one, register, crossed out.

Um, because if you want to host content on their platform, all you have to do is email the content to a given email address, and it automatically creates an account for you based on your email address. And that's it. Yup. That's a great example of that kind of 2x thinking, you know, rather than: how could we increase the performance of our registration form by 2%?
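The mechanic Stephen describes, where the sender's email address itself becomes the account, can be sketched roughly like this. This is a hypothetical Python illustration; the function and storage names are invented, not Posterous's actual implementation:

```python
# Sketch of "post by email" onboarding: instead of a signup form,
# the first inbound email both creates the account and publishes the post.
# All names here are hypothetical, not Posterous's real code.
from email.message import EmailMessage
from email.utils import parseaddr

accounts: dict[str, list[str]] = {}  # address -> posted contents

def handle_inbound(msg: EmailMessage) -> str:
    """Create the account on first contact, then store the post."""
    _, address = parseaddr(msg["From"])
    accounts.setdefault(address, [])   # "step one: register", crossed out
    accounts[address].append(msg.get_content().strip())
    return address

# Simulate one inbound email
msg = EmailMessage()
msg["From"] = "Ada Lovelace <ada@example.com>"
msg["Subject"] = "First post"
msg.set_content("Hello world")

user = handle_inbound(msg)
print(user, len(accounts[user]))  # ada@example.com 1
```

The point of the design is that the signup step collapses into the first post: the email address already identifies the user, so there is no form left to optimize.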

Guido: [00:34:08] just to remove the registration.

Exactly.

Stephen: [00:34:11] And I love that kind of example. I mean, it's the kind of thing that makes you kick yourself, thinking: it's such a good idea, how could I have missed that? But if you do that enough times, slowly but surely you start to come up with those sorts of ideas yourself as well.

Guido: [00:34:26] Instead of just having a web shop selling flowers: just send an email with who you want to send the flowers to and to what location, and we'll, um, we'll... yeah.

Stephen: [00:34:35] Yeah,

Guido: [00:34:37] Exactly. Who needs a web shop? Thank you so much for going, um, um, through your blog post with us. I think this is really interesting to get us out of our daily, uh, daily grind, and to try to think of more extreme examples of things we can do to actually help our businesses way, way more than we currently do. Um, so you're obviously already working on evangelizing this, uh, and trying to get more companies to do this. So in the next 12 months, what are you doing to help companies adopt this even more?

Stephen: [00:35:10] We support the clients that we work with to understand the opportunities to run these high-risk experiments. Ultimately it's not on us to push them or force their hand, but essentially to show them: this is how you do it, this is why you should consider it, and these are the benefits that you could get. To persuade them, exactly. Because, I mean, they are the clients. It's our job to advise them in the best way possible, but it's on them to decide what they actually want to do. Um, so that means we have to work hard to essentially show the value in running those sorts of experiments, being prepared to test those kinds of ideas.

And just because you're running those kinds of concepts obviously doesn't mean that it's going to work the first time, or the second time, or the third time. Um, it's on us to build the business case for that, to show them the potential upside and then show them how to do it, basically how to make it easy.

I think often people think, like you said at the start, that they might have to restructure the whole team if they're going to test X. It's on us to say: this is how we take this big strategic goal that you have, this is an experiment that we think would inform it, and this is how we can do it in a really simple and easy-to-test way. That's what we need to do: show them how to make this kind of experimentation accessible.

Guido: [00:36:37] Do you already have a database of enough of those disruptive, high-risk ideas that you can say: okay, this is roughly, uh, the success rate that you can expect from high-risk experiments versus low-risk

Stephen: [00:36:50] or medium risk.

So we do, we do have a database of all the experiments that we've run, um, categorized in Airtable. One thing that we haven't done is categorize them according to risk. Uh, the challenge is that talking about risk is only a relatively recent thing for us in this kind of context. Yeah. And it's hard to attribute risk retrospectively, I think. So it's more something that we're trying to build up over time.
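The question Guido is asking, success rate per risk category, becomes a simple aggregation once each experiment in the log carries a risk tag. A minimal Python sketch; the field names and example rows are invented, not Conversion.com's actual Airtable schema:

```python
# Hypothetical risk-tagged experiment log: compute win rate per risk level.
from collections import defaultdict

experiments = [
    {"name": "button copy",         "risk": "low",    "won": True},
    {"name": "new pricing model",   "risk": "high",   "won": False},
    {"name": "remove registration", "risk": "high",   "won": True},
    {"name": "trust badges",        "risk": "medium", "won": False},
]

def win_rate_by_risk(log):
    """Fraction of winning experiments within each risk category."""
    wins, totals = defaultdict(int), defaultdict(int)
    for exp in log:
        totals[exp["risk"]] += 1
        wins[exp["risk"]] += exp["won"]
    return {risk: wins[risk] / totals[risk] for risk in totals}

print(win_rate_by_risk(experiments))
# {'low': 1.0, 'high': 0.5, 'medium': 0.0}
```

As Stephen notes, the hard part is not the query but assigning the risk tag, especially retrospectively, so the aggregation only becomes meaningful once enough experiments are labeled at design time.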

Guido: [00:37:18] And would you say that a high-risk, um, experiment by definition also takes much more effort and time?

Stephen: [00:37:23] No, not necessarily. Not necessarily. I think a high-risk experiment could be, could be very simple. If you wanted to completely change your value proposition or your price or something like that, it doesn't have to be a complex experiment to set up.

I think that's part of the problem: people think that the high-risk stuff means, oh, it's completely redesigning our shopping cart, for instance. It doesn't have to be. Risk doesn't need to be correlated to the complexity of the experiment.

Guido: [00:37:54] No, it should be easy to do. All right, let's all start looking at our risk profiles and see if we can squeeze in a bit more of those high-risk ones.

Stephen: [00:38:02] Easy to do, hard to sign off, I think, is probably the balance.

Guido: [00:38:06] Then the final question: uh, do you have any, uh, CRO-related books or other resources that you'd like to, uh, mention and tip to our audience?

Stephen: [00:38:16] A lot of the books that I've been reading recently are not CRO-related.

Guido: [00:38:21] Or, or more general, um, books that inspire you or help you with your work?

Stephen: [00:38:26] What I'm reading, or rather listening to, um, when I go running at the moment is, um, Tools of Titans by Tim Ferriss. Um, I'm almost embarrassed to say the name; I'm not sure I like the name of the book, it makes it a bit over the top. But it's really interesting and, in kind of typical Tim Ferriss style, talks about, um, I think health, wealth, and wisdom, which are the three sections of the book. And it's essentially, um, summaries of the interviews that he's done on the Tim Ferriss Show over the last five, six years, however long it's been.

And it's really interesting in terms of getting a good insight into what other people are doing, what they're thinking, how they work, that sort of thing. And it's, um, so eclectic, you know: I can go for a run, and in a couple of hours I'll have heard from seven or eight different people and, um, their beliefs.

And so I always come back from my runs with a kind of mental list of five or six things to follow up on and learn more about.

Guido: [00:39:31] Is this an actual book that he wrote, or is it just a combination of all the podcasts that he does?

Stephen: [00:39:36] Yeah, this one is just the podcast summaries. So he talks about, you know, interviews with Schwarzenegger and, um, Peter Diamandis and so on. You know, you could be listening to someone who's a business expert, and then another one from the Russian guy who popularized kettlebells, on how he trains with them. And a lot of it is on the kind of attitude behind it, and so on. So it's, it's pretty interesting.

Guido: [00:40:03] Nice. Thank you so much. I will share that one in the, in the show notes too. And of course, if you want to read more about this topic, and you'll probably want some visuals to support the story, go to Stephen's blog post on conversion.com. Uh, you'll find a lot of fascinating additional information there.

Stephen: [00:40:26] Absolutely. Thank you so much.

Guest(s) in this episode:
CRO.CAFE host:

Guido Jansen



