Guido X Jansen: [00:00:00] Welcome, and thanks for listening to another episode of CRO.CAFE. In this episode I talk to designer, developer and optimizer Matt Beischel of his own agency, Corvus CRO, and we discuss automating parts of your CRO workflow so you can keep focusing on the fun parts of the job. My name is Guido Jansen.
Welcome to CRO.CAFE, where I show you the behind the scenes of optimization teams and talk with their specialists about data and human driven optimization, and implementing a culture of experimentation and validation. In case you missed it: in the previous episode I spoke to Sean Sheppard from GrowthX about why CRO specialists like you and me should actually be thinking like venture capitalists.
You can find that episode on cro.cafe/episodes or in the podcast app you're listening with right now. This episode of CRO.CAFE is made possible by our partners Contentsquare, Convert, Online Dialogue and SiteSpect. So Matt, welcome to the show, and tell us a little bit about your background and how you ended up working in CRO.
Matt Beischel: [00:01:15] Yeah, so I have a fine arts background. I got into print design in high school through working on the yearbook and taking art classes and stuff like that, so that really got me interested in the field of design. In college I actually majored in 3D animation, because I wanted to be an animator and maybe get into video games, but I discovered I wasn't really passionate about it: you're working on something for a super long time and you get maybe a minute's worth of work. So I kept taking design classes and things on the side, and web design was a really new space then, and that really interested me.
So after leaving college, I got work in print, direct mail and catalog production for a couple of years, then transitioned into web design and web development, and landed at an eCommerce company, where I was really able to use all those design and dev skills to work on eCommerce sites.
But as a developer, I was always asking these questions: okay, we're working on our clients' sites, making changes and updates, redesigning pages, altering UIs and all those different things. They're paying us to do all this work; how do we know that it's actually beneficial?
And that's how I got into doing conversion optimization, back around, I want to say, 2013, 2014-ish. We somehow ended up discovering Optimizely and started using that as a layer on top: okay, we're delivering changes, but let's test out the changes first. So I instituted testing programs for the clients that were interested in it, and have just been doing it from there.
Guido X Jansen: [00:03:20] Do you still remember the first tests that you ran?
Matt Beischel: [00:03:24] The first ones? No, not at all.
Guido X Jansen: [00:03:27] So what were they like? Were they big redesigns, a big version A versus B change? Or were they just small elements you started out with?
Matt Beischel: [00:03:36] Nothing gigantically transformative.
We weren't doing an entire site redesign or anything like that. Because I have that background in design and UI, that's what I was focusing on more. So things like making adjustments to, say, the cart page, adding an interface feature to the product page, or making navigational adjustments, rather than promotions or aesthetic-based changes like the classic change-the-button-color kind of thing, which is a bullshit test anyway.
But yeah, a lot of it in the early stages was optimizing a checkout form: process, elimination of unnecessary fields, lining up inputs so that it would be easier to parse the form, and different things like that.
Guido X Jansen: [00:04:31] How did this make you a better designer, doing all these experiments? Did it change how you work?
Matt Beischel: [00:04:36] It did, because I had always been, not necessarily distrustful of myself, but more questioning.
Guido X Jansen: [00:04:47] It looks nice, but...
Matt Beischel: [00:04:48] Not just "looks nice". I always struggled to find a way to quantify or prove out the value of the work.
How do I know that this change is actually more beneficial? Sure, I have education and training and experience, but there's always this little bit of doubt in the back of your head: is this the right thing for this client or this website? When I discovered split testing, it was like, oh, this is how I answer that question.
And yes, I can prove out the hypothesis: okay, I think this may be better, let's try it out and see if it works. So absolutely it did make me a better designer, because I can make more informed decisions, but there's also that openness to essentially be wrong. I don't have a vested stake in the success of this particular change.
It's not that I know this change is better; I want to discover what's the best path to go down. And if it's not this design, do we stick with what is current, or do we iterate? How can we change or mutate or evolve what we're working on to perform better?
Guido X Jansen: [00:06:05] How big a part of your work is design still?
Matt Beischel: [00:06:08] It's pretty big.
The way that I work with clients, I'm basically an independent single-person consultancy, and I do end to end. So I bring on the client and I go through the entire process. I'll usually work in collaboration with them on test ideation, coming up with all the ideas and hypotheses, but then I have a prioritization framework.
So we'll run all of the experiment ideas through prioritization, and then I'll design the changes, build the tests, write the code in whatever testing platform, and execute. I do quality control, proof it out, make sure the test is set up correctly and everything, and then review the results analysis with the client.
So basically it's an end-to-end service all the way through; I do as much or as little as the client requires.
Guido X Jansen: [00:07:03] And yeah, we wanted to talk about automation today. You have some thoughts and ideas on how to do that. First off, what are the things that you are automating? The things that are tedious or annoying to you?
Matt Beischel: [00:07:15] So a lot of my work is focused on, like you said, elimination of tedium. I hate being bored with work. Sure, everybody's work is going to be boring in some way, but that doesn't mean your job as a whole has to be boring.
As an example...
Guido X Jansen: [00:07:38] Some people...
Matt Beischel: [00:07:39] ...like data entry.
Guido X Jansen: [00:07:41] Some people find joy in repetitive tasks. I can totally understand that.
Matt Beischel: [00:07:48] It may be like a Zen thing. Exactly.
Guido X Jansen: [00:07:51] But for me personally, if it's really repetitive and I need to do it all over again the next time, I'm really going to try to find a way to automate it.
Matt Beischel: [00:08:00] Yeah. I enjoy being engaged and thoughtful about my work; I don't enjoy data entry and things like that. As a designer I'm always thinking creatively, problem solving, not process execution so much. So I look at any process that I create and ask: how can I standardize this to make it repeatable?
Repeatability lends itself towards trainability, so that it can scale: if you do this thing the same way every time, you can train other people to do it. But you can also either create tools or leverage something already existing to automate that process.
As an example: when I have completed a split test, I want to record all of the results, because I collate them all to evaluate program health. Okay, if we've run 20 experiments so far this year, how many of them were revenue generators, and so on.
You want to store up all your stats, and some of the split testing tools don't really have a program management feature that lets you collate all that data easily. So you have to do it in some outside fashion, which means you have to transpose split test result data from the testing tool to an outside source,
like a spreadsheet or something. I found it would take me 30 to 45 minutes just to record the result data for one experiment. I have to do this thing the same way every time, and I'm literally just copy-pasting from the web browser into a spreadsheet.
It's so boring, and such an unnecessary thing.
Guido X Jansen: [00:09:47] [Dutch sponsor message: Online Dialogue and its team of specialists help you optimize sales funnels and customer journeys. More info at Online Dialogue.]
You're not learning anything from doing that task itself.
Matt Beischel: [00:10:19] It's just tedious busy work that has to be done as part of the process, but as a person, I shouldn't have to be doing it.
So I built a little tool that scrapes the results and just pushes all of the scraped data into my test repository. Something that took me 30 to 45 minutes now takes me 10 seconds to do. And for how often I have to do that, think of how much time I'm saving in aggregate, and how much tedium I'm eliminating. So I'm automating all those little repeatable tasks away.
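As a rough illustration, the scrape-and-push step could look something like this in Python. Everything here is hypothetical: the base ID, table name, and field names are invented stand-ins, not Matt's actual Airtable schema.

```python
import json
import urllib.request

# Hypothetical Airtable base and table; swap in your own IDs and field names.
AIRTABLE_URL = "https://api.airtable.com/v0/YOUR_BASE_ID/Experiments"


def build_record(result):
    """Shape one scraped split-test result into an Airtable record payload."""
    return {
        "fields": {
            "Experiment": result["name"],
            "Variant": result["variant"],
            "Visitors": result["visitors"],
            "Conversions": result["conversions"],
            "Conversion rate": result["conversions"] / result["visitors"],
        }
    }


def push_record(result, api_key):
    """POST the record to Airtable's REST API (one request per result)."""
    req = urllib.request.Request(
        AIRTABLE_URL,
        data=json.dumps(build_record(result)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Run once per experiment, a sketch like this turns the 30-45 minute copy-paste session into seconds, while a human still decides which results are worth recording.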
Guido X Jansen: Why didn't you just use the API of the tool to push it to a spreadsheet? Or doesn't that work? Not a lot of tools have a full API available.
Matt Beischel: [00:11:27] Sure, there's probably some way to do it that way, but this is the way that I figured it out. Now, to be fair, I use a tool called Airtable to collate all of my test results.
And I can't just write an automated script that says, hey, go grab this data; there's still a little human factor involved. That allows me to pick the points that are necessary, the human decisions, and then submit the data.
Guido X Jansen: [00:12:18] Yeah. Any other time savers that you've built?
Matt Beischel: [00:12:21] I'm also using Zapier, which is a great automation tool.
Guido X Jansen: [00:12:26] Zapier, that's for pro tools, business tools.
Matt Beischel: [00:12:30] Yeah, exactly. So in that same vein...
Guido X Jansen: [00:12:36] Which also works with Airtable.
Matt Beischel: [00:12:39] Yes, exactly. So I have an automation set up for when I push new split test result data in: it also runs a Google Analytics report to grab some data from GA for cross-referencing against the result data as well.
So that was also part of the time saving.
Guido X Jansen: [00:13:03] How do you make sure that the data matches, the data from the experiments or the segment that you used in the experiment, with Google Analytics?
Matt Beischel: [00:13:11] The type of data that I'm using from GA is really for estimation purposes
rather than result analysis. I'm really only using it to say, okay, what was the data captured in GA for the equivalent timeframe of the experiment, and then what was that data set going back a year, so that I can evaluate a percentage of, say, traffic or revenue to help determine seasonality for the timeframe of the experiment.
To say, okay, if this experiment generated an additional $50,000 in January, how heavy is the traffic load in January? Then we can get a better estimation of: is that $50,000 sustainable throughout the year? Or was that peak season,
so it's really not going to generate that much monthly.
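The seasonality estimate described here is simple arithmetic. A sketch with made-up numbers, assuming the lift scales with the month's share of annual revenue:

```python
def annualize_lift(observed_lift, month_revenue, annual_revenue):
    """Rough seasonality adjustment for a lift observed in one month.

    Assumes the lift scales with that month's share of annual revenue;
    an estimation aid, not a result analysis.
    """
    share = month_revenue / annual_revenue  # e.g. January's weight in the year
    return {
        "naive_annual": observed_lift * 12,  # pretends every month is January
        "seasonality_adjusted": observed_lift / share,
    }


# $50,000 lift in a January that carries 1/6 of annual revenue (peak season):
estimate = annualize_lift(50_000, month_revenue=200_000, annual_revenue=1_200_000)
```

Here the naive projection says roughly $600,000 a year, while the seasonality-adjusted estimate is about $300,000, which is exactly the "is that $50,000 sustainable?" question.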
Guido X Jansen: [00:14:10] Yeah. So you use the GA data to be able to extrapolate the results.
Matt Beischel: [00:14:14] Yeah, to help evaluate seasonality and stuff. It's not a hard comparison though, to your point. I've had thoughts around trying to wrap a process around
doing segmentation and stuff with it as well.
Guido X Jansen: [00:14:29] Yeah. I can imagine that's really hard to automate, because you can have infinite kinds of segments in all kinds of ways.
Matt Beischel: [00:14:36] True, but there could also be some common use cases: okay, what was the segmentation across different breakpoints, or new versus returning traffic? There are some common recurring ones that you could easily
automate.
Guido X Jansen: [00:14:53] The default ones, automate the common ones.
Matt Beischel: [00:14:57] Or even, earlier in the testing process, define what your segments are. In my ideation process I use a hypothesis builder that I constructed. It's basically hypothesis Madlibs; it has all the different data points:
we think this because of that; who's the target audience; what pages are going to be affected. It's basically a way to source all of the experiment's framework information. So if you could set that up in a way to say, these are the appropriate segments that we're targeting for the experiment,
you could carry that data through and then pull the GA segments that way as well. It's all a matter of capturing the data when you need it and then using it at a later time, and turning that into some kind of system or process.
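A Madlibs-style hypothesis record could be sketched as a small data structure. The field names below are guesses at the kinds of data points described, not the actual builder:

```python
from dataclasses import dataclass


@dataclass
class Hypothesis:
    observation: str  # the data or insight that prompted the idea
    change: str       # what we will alter
    pages: str        # where the change applies
    metric: str       # what we expect to move
    audience: str     # target segment, captured up front so the matching
                      # GA segment could be pulled automatically later

    def render(self):
        """Fill the Madlibs template into a readable hypothesis sentence."""
        return (
            f"Because we observed {self.observation}, we believe that "
            f"{self.change} on {self.pages} will improve {self.metric} "
            f"for {self.audience}."
        )
```

Because the audience is a structured field rather than free text, a later automation step could map it to a saved GA segment: the "capture the data when you need it, use it later" idea.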
Guido X Jansen: [00:15:54] Is that form something you use for the clients to enter the data in, or is it something just for yourself?
Matt Beischel: [00:16:00] Yes, I built it with the intent of getting clients to submit more hypotheses, and also to train them up on that process. I meet with my clients every week; transparency is very big for me, so I keep them informed and engaged in the process. And they'll always come with new experiment ideas, or some level of curiosity, some perspective that's different from mine. A really good example:
one of my clients has their head of customer support on the call every week, and he brings in really useful information and feedback that their phone service representatives get all the time from customers. A customer will call and be like, hey, I was trying to order this
or that product and was having trouble this way or that way. So he's a really good source of customer feedback. But then we'll be talking about things on our call or meeting and things get lost, because you're taking meeting notes and you're discussing a bunch of things.
So I was like: how can I capture all of these ideas, and also help expand the experiment maturity in the organization? Here's this easily shareable hypothesis input tool. Think of it like a virtual suggestion box; that was the intent behind it. And I've had difficulty getting traction with the clients sharing it out.
So my struggle now is: how do I do a better job of socializing it within the client's organization to really start sourcing ideas?
Guido X Jansen: [00:17:48] That's definitely something I recognize. Even if you make it just a simple form, it's apparently really hard for people to use it; it's way easier for them to just send you an email.
Because that's in their system; sending emails is fine, everyone can send emails. But by making them fill out a form, you're basically asking them to already think about the whole hypothesis building: where did it come from? What do you expect to happen? For whom do you expect it to happen?
Those are difficult questions, apparently.
Matt Beischel: [00:18:19] Yeah, and it's hard, right? It's probably tricky for it to be an organization-wide thing, but probably useful for the stakeholders. And there's also an educational component for the people that I'm directly interacting with: here's how you should be thinking of an experiment.
It starts with the hypothesis, and these are all the necessary data points to construct and execute a good experiment. So it's also a little training tool to skill up the customers.
Guido X Jansen: [00:18:52] Yeah. In my experience it depends a bit on the company. But if afterwards you can share the results and say, hey, this came from this guy or girl from customer service, because they filled in this form, so we know where it came from, that's an incentive for people: oh, maybe I should use the form, so he remembers it's mine.
Matt Beischel: [00:19:13] Yeah. And I've had thoughts that since it's an input form, you can place it anywhere, right? You could place a form in an email, or do something like my little bookmarklet overlay tool, because a client already uses that for doing test proofing.
I could just add it as a feature there as well. So when they're looking at the website and they have an idea, they just hit that, the form comes right up, and they can fill it out and submit it.
Guido X Jansen: [00:19:41] Yeah, I've added it to intranets, and I also created just the client's URL slash ab-test-idea, or just test-idea, and then only made it available to internal IPs.
Matt Beischel: [00:19:53] Yeah. I have a password-protected client portal where it sits now, where they can come in and submit stuff, but it hasn't gained sufficient traction.
Guido X Jansen: [00:20:04] [Dutch sponsor message: SiteSpect offers A/B testing without tags, so no flicker and optimal site performance. Try an A/B test with SiteSpect.]
So what's the next project that you're going to be automating? What's the next thing that you're annoyed about, or maybe an expansion of the things you already have?
Matt Beischel: [00:20:47] Lately I've been doing a lot of work around broader project management, essentially cutting down on the repeatable tasks that a project manager would traditionally do.
You have some sort of project management software that you're already using, Basecamp or Trello or Asana or whatever, and that tool is only as good as the person managing it. So I'm making it easy to manage and wrangle. I already have a process in place where every experiment is essentially broken down into six stages of progress.
When an experiment moves between stages, that's a trigger for actions to happen. As an example, if an experiment moves into the proofing, or quality control, stage, I use Zapier to automatically post into the project management platform: it creates the client sign-off task and says, hey, this is ready for preview.
And it automatically assembles the preview link that the client can click on to go directly to their site. It triggers the variations and a little variation switcher tool that I've built, so they can actually test and prove out the experiment live on their site, rather than trying to get them to log into the split testing tool and QC it that way.
So it's removing all these friction points, but also taking tedious workload off my back, like having to create that task and post it.
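The stage-change trigger could be sketched like this. The stage names, query parameters, and task shape are all hypothetical stand-ins for whatever Zapier and the testing tool actually expect:

```python
# Illustrative six-stage pipeline; real stage names will differ.
STAGES = ["ideation", "prioritized", "designed", "built", "proofing", "live"]


def preview_link(site, experiment_id, variation):
    """Assemble a link a variation-switcher tool could read to force a
    given variation on the live site (parameter names are made up)."""
    return f"{site}/?exp={experiment_id}&var={variation}"


def on_stage_change(experiment, new_stage):
    """Return the client sign-off task to post when an experiment hits
    proofing; other transitions produce no task in this sketch."""
    if new_stage != "proofing":
        return None
    return {
        "title": f"Client sign-off: {experiment['name']}",
        "note": "Ready for preview: "
        + preview_link(experiment["site"], experiment["id"], 1),
    }
```

In practice the trigger side would live in Zapier or a webhook, with logic like this deciding what gets posted into Basecamp, Trello, or Asana.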
Guido X Jansen: [00:22:35] Yeah. And the great thing about Zapier, if you do it that way, is that you can use almost whatever project management tool the client is using, whether it's Asana or JIRA or Trello; you can probably use it to post a task there.
So whatever they are using.
Matt Beischel: [00:22:52] Yeah. And my solution currently is cobbled together from Airtable, Zapier and the project management tool. Ideally, I would like to have my own internal tool that's actually custom and purpose-built.
Guido X Jansen: [00:23:12] Have you ever tried... so I worked before with Effective Experiments. I wonder if you know it.
Matt Beischel: [00:23:17] I'm familiar with it, but I have not tried the tool. But once I discovered it, in my head I was like: this is probably the thing that I'm envisioning.
Guido X Jansen: [00:23:28] Yeah, the great thing is, it's a separate tool of course, so it doesn't necessarily tie into whatever the client is using,
but definitely one of the greatest things in my experience with tools like this is indeed what you said: when something moves from one stage to another stage, that usually triggers something. I had a development stage and a design stage, which would automatically email the person responsible for designing or developing experiments.
And in the case of developing experiments, it would also automatically create a JIRA ticket for the dev team, so they can assign it to whoever they want. If you just run one experiment at a time, it's fine to coordinate that manually. But if you go up to 10 or more experiments, you don't want to message everyone personally whenever the status of an experiment changes.
A tool or workflow like yours that can handle that saves so much time.
Matt Beischel: [00:24:27] And you also have to appropriately balance timing and clutter.
Guido X Jansen: [00:24:34] You can overwhelm people with it, of course, if you run a lot of experiments.
Matt Beischel: [00:24:39] Same with project management notifications and task assignments; you have to find the right balance.
If you're notifying people too frequently, it just becomes clutter and they tune out and ignore it. Then when something important actually comes in, it's just noise, so it ends up getting deleted. I remember somewhere I used to work, they were using JIRA and had it set up so that when you created a new project with maybe a task list of 40 different assignments, everything would get assigned on project creation.
So if you had six things assigned to you, you would get assigned all of them as soon as they were created, but you didn't know when you were able to work on them, or what your blockers were, what the previous tasks were. So someone had to actually manage all of that, or you had to go in and monitor it on your own.
It was just creating too much noise, and making it more difficult for the worker to actually find their work and accomplish it.
Guido X Jansen: [00:25:46] So what are your other experiences working with clients? I'm curious how the workflow usually looks.
You said you have at least one face-to-face, or remote face-to-face, meeting with them each week?
Matt Beischel: [00:26:00] Yes. I'm of the opinion of being very transparent and collaborative with the client: building that relationship and getting them involved in the process, because it helps them become more invested and more of a stakeholder, rather than "I'll run a bunch of tests and you get a report every month". Personal communication allows you to stay involved and also successfully execute on relationship building; as a consultant, how you keep and maintain clients is that personal relationship.
It also helps socialize and demonstrate the success of the program: the client more easily shares in the wins and the feedback. And it trains them up to expect individual tests to be for information gathering. It eliminates that notion of "I'm only going to test things that I think are going to be successful".
You're testing to answer questions, right? It's not that the ROI doesn't matter, but the value is almost incidental. I have a question about something; let's find the answer,
and that will eventually lead to a lift in revenue.
Guido X Jansen: [00:27:26] Yeah. I usually tell them: if you have a success rate of over 40, 50% with your experiments, you're not experimenting enough. You might not even be experimenting, because you're just testing things you already know, for whatever reason, will work for sure.
You're being very conservative with what you're testing.
Matt Beischel: [00:27:43] There's also another aspect to, quote, success rate. I break down experiments, and I'm sure other people do this too, but I like to reiterate it: you don't have winners and losers. You have winners, which are experiments that demonstrably caused a lift on whatever metric you're tracking, and you have neutrals, which are inconclusive.
But then you also have saves. No experiment is a loser; you're not losing revenue or KPI or whatever. You're saving yourself from doing damage instead.
Guido X Jansen: [00:28:23] Or wasting development resources.
Matt Beischel: [00:28:25] Yes, you're saving yourself from wasting resources; you're optimizing spend. So instead of saying a 30% success rate, I actually break it down three ways: okay, 25% of the experiments that I run are winners, and then 10 or 15% or whatever are saves. So you have a greater
cumulative range of success. Now, it's a little harder to quantify "you're not losing money by not implementing this", but there's still value in saying: we made sure we didn't implement something that actually would have caused damage.
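Counting outcomes three ways instead of one win rate is easy to mechanize. A small sketch, where the labels follow the winner/save/neutral framing and the numbers are invented:

```python
from collections import Counter


def program_health(outcomes):
    """Fraction of experiments that were winners, saves, or neutrals.

    'winner'  = demonstrable lift on the tracked metric
    'save'    = the change would have done damage, so it was not shipped
    'neutral' = inconclusive
    """
    counts = Counter(outcomes)
    total = len(outcomes)
    return {
        label: counts[label] / total for label in ("winner", "save", "neutral")
    }


# 20 experiments: 5 winners, 3 saves, 12 neutral
health = program_health(["winner"] * 5 + ["save"] * 3 + ["neutral"] * 12)
```

This reports a 25% win rate but a 40% cumulative success range once saves are included, matching the three-way breakdown above.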
Guido X Jansen: [00:29:10] Yeah. And also with looking at things like wins:
if you only look at uplift, especially this month, it will be really hard for a lot of companies. Either you see it go up three times, or it goes down like 90%, and that's not necessarily because you did a bad job or a really good job. That already proves that maybe just conversion rate, or just revenue,
might not be the best metric for what we do.
Matt Beischel: [00:29:39] Yeah. With every experiment I look at, I don't want to quite say a top-of-funnel metric, but a point-of-change metric, and then a business result metric. They're not necessarily the same; the point of change is the action that's taken directly...
Guido X Jansen: [00:30:01] ...after adding something to the basket.
Matt Beischel: [00:30:04] Right: if you're making a change on the product page, is it increasing add-to-cart rate? But you're still also tracking order conversion and revenue to see what the endpoint business lift is as well. A really good example: I'd been testing a feature with a client.
They have the ability to filter their reviews based on different criteria, and they had done usability testing on it. This was before I engaged with them and instituted the testing program, but they'd still done usability testing on the feature and gotten some customer feedback, and it was: oh yeah, this is a great feature,
love this, this will be super useful. Users in qualitative feedback responded very well to the feature. But then, when we were doing some testing on an unrelated thing on the product page, in some of the session replays and heat maps I was noticing that after users engaged with the filtering feature,
they were abandoning a lot of the time. And so I was curious:
engagement with this feature seems to increase abandonment. Users were saying it was a beneficial feature, but it was detrimental to the business metrics. So there's a dissonance here.
Guido X Jansen: [00:31:37] [Dutch sponsor message: with A/B testing on the front end you often suffer from flicker. Convert's A/B testing software has a smart insert that prevents any flickering, plus 24/7 chat support, and the company is carbon positive. Try Convert's A/B testing software yourself.]
I can imagine that if the filters are on things that people want, but you don't have the products to match, they will abandon. So they like using the filter...
Matt Beischel: [00:32:23] It's not filtering products; it's filtering review information.
Guido X Jansen: [00:32:26] Oh, okay.
Matt Beischel: [00:32:27] But what you're saying is correct.
We were noticing the users were engaging with those filters, but they were filtering on so many criteria that they were filtering away all the reviews entirely: too many filters, and being left with nothing. So it was an unintentional negative experience of: oh, there's no useful information for me here,
so I can't make a decision, so I'm going to leave.
Guido X Jansen: [00:32:52] A small anecdote aside: I once had a site that offered filtering, but didn't show numbers besides the filters indicating how many results you would get. So if you filtered too far down, you didn't have any results,
but it was a bit of a surprise, because the site didn't have that feature yet. So people filtered too far down, and what happens when you don't get any results? You did get suggestions of other products in the same category that might be interesting,
but they didn't see that these weren't their results. They didn't want to show an empty page, to prevent that frustration, which in itself might be a good idea.
But because it didn't communicate "hey, there are no results", people were very confused: I'm filtering down on Samsung TVs and I still see Philips or Sony TVs in what I think are my results. So yeah, filtering can be tricky.
So when we were talking before this, we talked about client projects, and you also mentioned client onboarding; basically a step back from client communication. What do you do with client onboarding? How do you set the right expectations with clients on what's going to happen, how
fast results will be coming in, that kind of stuff? How do you approach that?
Matt Beischel: [00:34:37] That's an interesting question. My onboarding process typically takes about a month. I have pre-prepared onboarding documentation that outlines program requirements, expectations, et cetera,
but then also technical guides: here's how you should install the split testing tool and the heat mapping tool, get logins reconciled with all the accounts and all that stuff. And then we'll set up the weekly meeting cycle.
And then really the first month is just a lot of qualitative and quantitative data collection, and in the meetings a lot of experiment ideation. So training the client up on the hypothesis generation process: hey, here's the hypothesis builder, let's walk through it.
You have a couple of questions or ideas? Give them to me and I'll show you how I would take that and formulate it into a hypothesis.
Guido X Jansen: [00:35:47] you wouldn't be running any experiments in the first month, right?
Matt Beischel: [00:35:50] No.
Guido X Jansen: [00:35:51] Or just an A/A experiment, maybe?
Matt Beischel: [00:35:54] Yeah, I'll do a couple of validation experiments to make sure that the testing tool is set up correctly and everything, yes. But other than that, no, not really any kind of hard testing. It also depends a little bit on how fast the client gets up to speed.
If they have a couple of ideas that they're really ready to go with, and they have a good development process in place, and they can get everything installed and working correctly quickly, then... you know, a lot of it is dependent on the client. And so some of it is just wrangling of, hey, did you get that script installed?
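The A/A validation Matt describes can be sketched as a simple two-proportion z-test: both arms see the same page, so a small z-score and a large p-value suggest the tool is bucketing and counting visitors correctly. This is an illustrative sketch, not Matt's actual tooling; the function name and the numbers are made up:

```python
import math

def aa_check(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test. In an A/A test both arms serve the same
    experience, so we expect a small z-score and a large p-value."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_a - rate_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# A healthy A/A result: nearly identical conversion rates,
# p-value well above any reasonable significance threshold.
z, p = aa_check(100, 2000, 105, 2000)
```

A p-value near zero here would flag a setup problem (broken bucketing, double-counting, sample ratio mismatch) before any real test runs.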
Guido X Jansen: [00:36:32] Yeah. And the ideas you use, where are these coming from? You already mentioned customer service as input for those
Matt Beischel: [00:36:41] hypotheses? So I'll source test ideas from anywhere. The client can submit them, or a lot of the other ideas will just be me auditing the site to come up with ideas: doing some sort of UI/UX audit, looking through Google Analytics or whatever analytics tool they're using to identify particular drop-off points, evaluating high-traffic areas, segments,
Guido X Jansen: [00:37:13] et cetera. And earlier you also mentioned why sticking to planned test run-time parameters is important. I think that's a whole topic for many people working directly with clients.
Matt Beischel: [00:37:25] We were actually discussing that a little bit at the weekly standup today; someone was asking a question about that. And my starting point is really looking at the marketing or purchase cycle of the segment that you're testing, and seeing: okay, if we're testing primarily on first-time visitors, and we can determine that historically it takes them about 30 days from first site visit to make that first purchase, then the experiment needs to run at least that long to capture the first transaction of the first user that came into the experiment. So: start with your expected purchase or behavioral-change timeframe, then use that to estimate a sample size, feed that into a calculator to determine your minimum detectable effect, then evaluate again and do some kind of sanity check on the numbers you come up with there.
So if you get a sample size of 3,000 and an MDE of 80%, that's completely unreasonable. So it's: okay, we have to re-evaluate and change some of these numbers around to get a sane timeframe and a sane detectable effect.
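Matt's sanity check can be sketched with the standard two-proportion sample size approximation. This is an illustrative sketch with invented numbers, not his actual calculator; it fixes significance at 0.05 (two-sided) and power at 80%:

```python
import math

Z_ALPHA = 1.96  # two-sided alpha = 0.05
Z_BETA = 0.84   # power = 0.80

def sample_size_per_arm(baseline_rate, relative_mde):
    """Approximate visitors needed per arm to detect a relative lift
    of `relative_mde` over `baseline_rate` in a two-proportion test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_mde)
    p_bar = (p1 + p2) / 2
    numerator = (Z_ALPHA * math.sqrt(2 * p_bar * (1 - p_bar))
                 + Z_BETA * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# With a 3% baseline, detecting a modest 10% relative lift takes
# tens of thousands of visitors per arm. A small sample can only
# "detect" implausibly huge effects, which is Matt's 80%-MDE red flag.
n = sample_size_per_arm(0.03, 0.10)
```

The point of the sanity check is that sample size, MDE, and baseline rate constrain each other: plug in your achievable sample, and if only an absurd MDE comes back, the test as planned isn't worth running.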
Guido X Jansen: [00:38:57] Yeah. I was talking to a web shop owner, I think six months ago, and they sold kitchens online. But the average time to go from browsing for a new kitchen to actually buying a kitchen is multiple months.
Matt Beischel: [00:39:11] Yeah, exactly.
Guido X Jansen: [00:39:13] So you cannot run an experiment from first-time visitor to actually purchasing a kitchen if the lead time is way over whatever your cookie settings can handle. Yeah.
Matt Beischel: [00:39:28] And for something like that, it'd almost be more like looking at lead capture, or what's the higher
Guido X Jansen: [00:39:34] touch point. So for things like that you need to look at people requesting a brochure or whatever, or just not focus on first-time visitors but on repeat visitors. If someone visits your site five times, then it might be a different story,
Matt Beischel: [00:39:47] right?
Guido X Jansen: [00:39:48] So what came out of that discussion you had at the standup?
Matt Beischel: [00:39:52] Yeah, that was the focus. Someone was having a question around: how do you evaluate that from a project management standpoint? So I just exposed my process: okay, first I think about it from the purchase-cycle standpoint, then work forward from there, and then do some sanity checks on the information that pops out. Is this workable, is this reasonable? Yes, we can proceed, or do we need to make adjustments? For example, I have one client where the purchase time is usually around 60 days, so we're not always testing on transactions, because that's a pretty long purchase window.
Guido X Jansen: [00:40:31] Exactly. Yeah. And it could be one of the questions in your workflow: maybe even just building in asking those kinds of questions, like, what's the business cycle for this? Just to remind yourself.
Matt Beischel: [00:40:44] Yeah.
Guido X Jansen: [00:40:45] Okay. Yeah, that makes sense. Just to remind yourself: okay, yeah, definitely, that's something I should be checking first before we build the whole thing.
Matt Beischel: [00:40:53] Yeah. So I have that broken down into stages. The first stage is the idea stage, and that's really just coming up with an idea; there's no quantifying it or assigning numbers to it at all yet.
It's just sourcing ideas, because I want to disabuse the client of the notion of "oh, that's a dumb idea" or whatever. No, no idea is a bad idea. Just source everything first; cultivate a healthy sense of curiosity about the website, questioning things.
So coming up with ideas is just pure text, pure information: we think this, so we want to make this change for this reason. Just come up with an idea, and then we'll move it into the planning stage. And that's where we start to quantify stuff. That's the stage where we'll say: okay, what's the transaction timeframe for this? What's an estimated sample size? What's an estimated minimum detectable effect? And then we use those quantified values as a prioritization method.
Guido X Jansen: [00:41:54] Yeah. And then you need to check those practical things again. Like you said, with the sample size: can I run this experiment? Does it make sense?
Matt Beischel: [00:42:01] Right. Taking those numbers and then feeding them into a sample size and duration calculator: does the calculated duration reconcile with our purchase window? And also, is it a reasonable timeframe to run an experiment?
I remember one time, I think it was because I made a calculation error actually, we ended up running the sample size calculator and comparing against the average daily traffic of the page, and it was like: this experiment is going to take 170 days to run. And I was like, okay...
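That duration check is just the required sample divided by traffic, compared against the purchase window on one side and a practical upper bound on the other. A minimal sketch with invented numbers (the function names and the 90-day cap are assumptions for illustration):

```python
import math

def estimated_duration_days(sample_per_arm, arms, daily_visitors):
    """Days needed to fill the experiment at current traffic levels."""
    return math.ceil(sample_per_arm * arms / daily_visitors)

def duration_is_sane(duration_days, purchase_window_days, max_days=90):
    """Must cover at least one full purchase cycle, but stay practical."""
    return purchase_window_days <= duration_days <= max_days

# 42,500 visitors per arm, two arms, 500 visitors a day:
# 170 days, far too long to be a workable experiment.
days = estimated_duration_days(42500, 2, 500)
```

If the number fails the check, you loosen the MDE, pick a higher-traffic page, or move the goal metric earlier in the funnel, as in the kitchen example above.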
Guido X Jansen: [00:42:38] Yeah. What I'd also like to add, with what we already spoke about: you have the different possible outcomes of an experiment, and I like to force teams upfront to think about, okay, if we get these outcomes, what will happen? What would it mean for us? What would change? Will we implement something, will we do nothing, or will we run a follow-up experiment? Because if you're running an experiment where, whatever the outcome, nothing's going to change, maybe that's not an experiment you should run anyway.
Matt Beischel: [00:43:11] Yeah. And I classify experiments into one of two types: there's prospective and iterative. Prospective are your big, transformative "we have no idea what's going to happen, but this is something interesting" experiments. And then your iterative ones are more like: we already have some historical data around here, or it's a follow-up from a previous experiment where we want to refine our changes a little bit more.
Guido X Jansen: [00:43:38] With the big changes, you probably have an idea: okay, if this works, then, oh wow.
Matt Beischel: [00:43:43] Yeah. And
Guido X Jansen: [00:43:44] then we can do this, we can build on it.
Matt Beischel: [00:43:45] Yeah. And typically that's where the clients are relying on the experimenter's insight, because you have historical experience: oh yeah, I've tested this six other times and it worked 85% of the time, or whatever.
Guido X Jansen: [00:43:59] That's great, man. Thanks for sharing all of that. The last thing I'm interested in: you give a lot of inspiration to those clients in all the work that you do and the hypotheses that you create for them. So where do you get your inspiration from?
Oh, boy.
Besides the CRO roundtables, that is.
Matt Beischel: [00:44:15] Yeah, that's a pretty recent thing actually, but it's healthy, because having that sort of face-to-face vocal communication is way more meaningful and valuable than, say, a LinkedIn comment chain or whatever, even though those are still good.
It's interesting: general inspiration comes from lots of different sources. I'm very big on UX. I also play a lot of video games, and video games are all about user experience and user interface; interacting with the interface is basically how you play the game.
And the quality of the interface is going to determine how successful or how terrible,
Guido X Jansen: [00:44:56] how enjoyable
Matt Beischel: [00:44:57] it is, how enjoyable the gameplay is. Yeah. And I've been playing games for a long time, all the way back to the original Nintendo or even an Atari,
Guido X Jansen: [00:45:07] But it doesn't necessarily have to be state of the art. We all know Minecraft; it's not necessarily the high resolution that's blowing you away, right?
Matt Beischel: [00:45:18] No. And I was a big World of Warcraft player for probably 12 years, and, as a designer, I completely built out and customized my own interface and control scheme, which let me optimize my play.
And just building off of that experience: you'll see a lot of other people who aren't designers post screenshots of their interface, and their screen is cluttered with 50 different action buttons and stuff. And it's like, you're restricting your viewport, your ability to see what's going on in the playfield.
And so what was striking to me was really the concept of timeliness of importance, timeliness of relevant information. There's this huge amount of information that you need when you're playing a game, but not all of it is relevant all the time. You only need to know, say, that this ability is ready for you to use.
You only need to know when it's available; you don't need to know when it's not. You can hide the icon, or have some sort of highlight or glowing effect or something. So that sort of priority of relevance is a big, important feature in gaming. And so,
Guido X Jansen: [00:46:27] Yeah. So it's the interface adapting to whatever you need right then and there. That's
Matt Beischel: [00:46:31] right. Exactly. Yeah.
Guido X Jansen: [00:46:33] I have this Stream Deck right here, with all kinds of action buttons, but what they do depends on the program I have open at that time. So if I have Chrome open, there are different... well, I don't have any actions for Chrome, but theoretically I could have different action buttons for Chrome than for Photoshop, or for Premiere, or for a game.
Matt Beischel: [00:46:53] Exactly. Yeah. And then I've been doing more reading lately. I've been picking up books on consulting and A/B testing and things like that, because I feel like my statistics game is pretty weak. So I've been reading a lot more of the
Guido X Jansen: [00:47:12] statistics-based ones. Any recommendations from those?
Matt Beischel: [00:47:14] Absolutely. I've been reading through your guys' book, Statistical Methods
Guido X Jansen: [00:47:19] in Online A/B Testing.
Matt Beischel: [00:47:21] Yeah, that's been a very good, useful one.
I know a lot of less statistically minded people say it's really dry and hard to read, but I'm finding it interesting and relevant, and I'm able to incorporate those things into the process. And circling back around to, say, automation: same kind of thing.
Okay, I have all of the statistics that I'm trying to calculate, say in the planning stage again; that's a repeatable process. So having to go in and manually type a bunch of numbers into a statistical calculator, that's tedious work. It would be interesting if some of the split testing tools actually incorporated those.
And I think, in general, most of the split testing tools that I've used do a poor job around the actual underlying statistics of the test: helping to determine and evaluate the detectable effect, whether you have a reasonable sample size, things like that. They're erring too far on the side of simplicity.
And that's where, not that CRO has a bad name or anything, but that's where you get a lot of the snake oil kind of things, where you see people posting test results online or writing a case study where they have, oh, we tested 300 visitors and got a 250% uplift after two days. And I'm like, sure... the results are full
Guido X Jansen: [00:48:55] of shit. Your results are probably just fine, but they're not going to be useful for you in the future.
Yeah, that happens. And so you read books; do you also do online courses? Is that something that you do, or is reading books more useful to you?
Matt Beischel: [00:49:14] Reading has been more useful to me, and also just my own online shopping experiences. I'll go out and look at eCommerce sites and be like, oh, here, it's interesting the way they're doing this or that, and take notes and solicit feedback and do things like that. So just observational journaling.
Guido X Jansen: [00:49:39] Yeah. For me, I try to both read books and do online courses. It takes me more effort to open a book than to start an online course; that's easier for me. But I'm definitely way more distracted when I'm behind my computer.
So when I do an online course, I definitely need to shut down everything. Ideally, if possible, I'll also download the online course and then just do it from my computer without any wifi on, to minimize distractions. Otherwise I'm...
Matt Beischel: [00:50:09] Yeah. I haven't really been doing those... I haven't really gotten into them.
Guido X Jansen: [00:50:15] You said you started reading books, but is that a recent change because of the whole coronavirus thing, or was it already...
Matt Beischel: [00:50:22] I've always been a book reader, yeah. I'm more of a read-and-experience learner. So for me, it's more: oh, I'll read about something, then I'll go do it and
Guido X Jansen: [00:50:32] try it again.
Yeah. You have your notebook on the side saying, okay, this is something I should incorporate
Matt Beischel: [00:50:37] as an experimenter. Try it. Test it, fail at it. Figure out what didn't work
Guido X Jansen: [00:50:42] Yeah, exactly. Matt, thanks so much. It was lovely talking to you; our time is up. We have some links to include in the show notes for everything that we spoke about. For example the book: you'll find a link in the show notes below this podcast. Thanks again, Matt. Good luck with the reading, and with building those automations. If you have more, please share them with us. We definitely want to know.
Matt Beischel: [00:51:10] Will do.
Guido X Jansen: [00:51:14] Thanks so much, Matt, for giving us some inspiration on what we can automate to make our work lives a bit less tedious. Good luck with all that you do, and talk to you soon. This was season two, episode 17, of CRO.CAFE with Matt Beischel from Corvus CRO. And as always, the show notes can be found on our website, cro.cafe.
Although we started out as a Dutch podcast, we are putting out more and more English content. If you want to skip the Dutch content, please go to cro.cafe/english to see an overview of our English episodes and to subscribe to get notified about new English episodes. If you're interested in promoting your products or services to the best CRO podcast listeners in the world, please take a look at cro.cafe/partner to see how we can collaborate.
Next week, in our next English episode, I talk to the person that kickstarted the COVID-19 Conversion Rate Aid Package, aka COVID CRAP. We'll be talking about what COVID CRAP is and how you can join as a CRO specialist or as a business. Talk to you then, and always be optimizing.