Prioritizing experiments is part art and part science, so how do you get it right? Find out in this week’s Growth Snack: The Breakout Growth Podcast Short with Sean Ellis and Ethan Garr.
When Sean was struggling to find a good way for the teams he led to compare growth ideas across dimensions, he needed a tool that would balance the need to move the needle with the need to move fast. He came up with ICE scoring, and since then thousands of growth teams have used it as their starting point for choosing experiments.
In this Growth Snack, we share insights that will help you tune whatever approach you use for prioritizing experiments. Whether you use ICE or another system, this quick conversation can help you think about the best ways to surface and choose the high-impact ideas most likely to accelerate growth in your business.
- What’s important when picking growth ideas to test (00:57)
- The basics of Impact (02:59)
- Understanding Confidence (04:09)
- Using a little bit of creativity to assess Ease (05:13)
- Haphazard testing and other pitfalls (06:30)
And much, much more...
Ethan Garr 00:00:00 If you’re leading growth, building a startup, or looking to ladder up your skills, then you’re probably really busy. So every other week, tune into Growth Snack: The Breakout Growth Podcast Short, where Sean Ellis and I share one key growth learning to help you on your journey to breakout growth success. It’s food for thought for anyone hungry for growth. All right, Sean: ICE scoring. You came up with the idea a long time ago. What is it, and why do you think it can help teams succeed?
Sean Ellis 00:00:25 Hey Ethan. Yeah, this will be a fun, quick conversation about something I hear referenced a lot. In fact, my daughter had an internship at Noom this past summer, and she came to me one day and said, “Dad, I learned the coolest thing. They’re using this thing called ICE scoring. Have you heard of it?” And it was fun to be able to tell her, yeah, I came up with...
Ethan Garr 00:00:47 That.
Sean Ellis 00:00:49 Yeah, exactly. So, the whole purpose of ICE scoring is that it’s an easy way to compare growth ideas, experiment ideas, across the key dimensions of an idea. The dimensions I think are important are Impact, Confidence, and Ease. The best way to think of it is this: the best idea you could possibly run would be one that is super high impact, so it’s really going to help accelerate your growth; it takes almost no effort to implement; and you’re really confident it’s going to work. Unfortunately, those ideas are extremely rare, so most of the time you need to make a bit of a trade-off. The potentially really high-impact ideas might actually be pretty difficult, so they’re going to score low on the ease side, or confidence might not be real strong. But basically what you’re doing is scoring across those three dimensions on a scale of one to ten, and the best idea is going to be a ten on impact, a ten on confidence, and a ten on ease.
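Sean’s description maps to a very simple calculation. A minimal sketch in Python, assuming the common convention of averaging the three scores (the episode doesn’t specify a formula, and the idea names and numbers below are made up for illustration):

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    impact: int      # 1-10: how much could this move the needle?
    confidence: int  # 1-10: how sure are we it will achieve the test's goal?
    ease: int        # 1-10: how little effort does it take to test?

    @property
    def ice(self) -> float:
        # ICE is often taken as the simple average of the three scores.
        return (self.impact + self.confidence + self.ease) / 3

ideas = [
    Idea("Referral loop on signup", impact=9, confidence=4, ease=3),
    Idea("Shorter signup form", impact=5, confidence=7, ease=9),
    Idea("Rewrite pricing page copy", impact=6, confidence=5, ease=8),
]

# Use the ranking to shortlist ideas, not to crown a single "winner".
for idea in sorted(ideas, key=lambda i: i.ice, reverse=True):
    print(f"{idea.ice:.1f}  {idea.name}")
```

As Sean and Ethan note later in the conversation, the score is a guesstimate and a conversation starter, so treat the sorted list as a shortlist rather than a verdict.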
Ethan Garr 00:01:54 How do I come up with more ideas like that, right? Obviously it’s a prioritization tool, but what do you see as the larger goal here?
Sean Ellis 00:02:04 The larger goal is really to maximize the impact on growth. I don’t care if you’re Microsoft or an early-stage startup, you have a limited number of resources to work with, and you want to maximize the impact of those resources on growth. Jeff Bezos has said something like, our success at Amazon is a function of how many experiments we run per day, per month, per year. But if you want to run that high velocity of tests, you also want to be running tests that really move the needle. That’s what this is doing: it’s giving you a way to figure out how to move the needle the most with the resources you have.
Ethan Garr 00:02:57 How do you generally calculate impact?
Sean Ellis 00:03:00 Impact? That’s a hard one. Sometimes I’ve missed it badly. I remember we scored one idea as a four that ended up increasing the leads we collected by something like a hundred percent on a program we had, so you don’t always get it right. But I think the best way to estimate impact is to look at a model of the business. If you take part of your main funnel, for example, and ask, if we got a doubling here, what would that do to the business? Is there a chance we could get a doubling? Then that might be a really high-impact idea. Viral loops are potentially huge: if we can tip this into a viral factor above one, the impact could be enormous. The alternative might be a landing page where we only send one percent of our traffic. Even if we make a ten percent improvement on that step in the funnel, it’s probably not going to move the needle very much, because it doesn’t touch that many people. So modeling the potential impact is, I think, the best way to see it.
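Sean’s two examples can be sanity-checked with a back-of-the-envelope funnel model. A minimal sketch, assuming the improved step sits on one branch of the funnel and everything downstream scales proportionally (the function and numbers are our illustration, not anything from the episode):

```python
def overall_lift(traffic_share: float, step_lift: float) -> float:
    """Approximate relative lift to total funnel output when one step improves.

    traffic_share: fraction of users who pass through the improved step.
    step_lift: relative improvement on that step (0.10 means +10%).
    """
    return traffic_share * step_lift

# A 10% win on a page that sees only 1% of traffic barely moves the needle:
print(f"{overall_lift(0.01, 0.10):.2%}")  # 0.10% overall

# A doubling (+100%) on a step that every user hits is a different story:
print(f"{overall_lift(1.00, 1.00):.2%}")  # 100.00% overall
```

Even a crude model like this makes Sean’s point concrete: the same percentage win scores very differently on impact depending on how much of the funnel it touches.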
Ethan Garr 00:04:11 Do you see confidence as your belief in that impact?
Sean Ellis 00:04:17 Not exactly. I look at confidence as more of a binary: will it achieve the goal of the test or not? The impact is separate from that. How do you calculate confidence? A lot of times it’s a guess, but one of the places I think people forget to look is qualitative research. The more you understand the problem you’re trying to solve, or the opportunity you’re trying to tap into, through customer interviews or surveys, the more you’re contextualizing the issue, and the more likely you are to run a test that’s successful. Or look at past results: if we’ve run a couple of tests in an area and they were both successful, and we do another one, maybe I’m going to be pretty confident. Whereas if we’ve run ten tests in this area and nothing’s moved the needle, my confidence level is probably going to take a hit.
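Sean’s “past results” heuristic can be turned into a rough number. A minimal sketch, assuming you map a smoothed win rate in an area onto the 1–10 scale (the smoothing prior and the mapping are our illustrative assumptions, not anything prescribed in the episode):

```python
def confidence_score(wins: int, losses: int,
                     prior_wins: int = 1, prior_losses: int = 1) -> int:
    """Map past test results in an area to a 1-10 confidence score.

    Uses a Laplace-smoothed win rate so one or two results
    don't swing the score all the way to the extremes.
    """
    rate = (wins + prior_wins) / (wins + losses + prior_wins + prior_losses)
    return max(1, min(10, round(rate * 10)))

print(confidence_score(2, 0))   # two wins, no losses: fairly confident
print(confidence_score(0, 10))  # ten tests, nothing moved the needle
```

This is only a starting point; as Sean says, qualitative research and context should push the score up or down from whatever the history alone suggests.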
Ethan Garr 00:05:16 And then with Ease, I’ve always struggled, because how do I know how easy it’s going to be for my developers to implement? How do you look at ease, and how do you score it without getting overcomplicated?
Sean Ellis 00:05:28 Right. Ease is one that often takes quite a bit of creativity, which you wouldn’t necessarily expect. What you’re really trying to do is come up with a creative, low-resource way of figuring out what you’re trying to figure out. Sometimes we refer to that as a minimum viable test: what is the smallest way we can test this to see if our hypothesis is correct or incorrect? The first pass might be, okay, this would be really hard to test. You might get feedback from engineers if it’s something that’s potentially a heavy engineering test to run. But when you talk through it more, maybe you realize, wait, instead of trying to speed up the performance of something to see the impact of that, maybe we can actually slow it down a little and see the impact of speed on conversion rate, for example. So sometimes it just takes a bit more creativity. And I see the smile on your face, so you know what I’m
Ethan Garr 00:06:34 Referencing.
Sean Ellis 00:06:35 It’s a GoPractice question; anyone who’s gone through GoPractice will recognize it.
Ethan Garr 00:06:40 That’s a good one. You coach a lot of teams, and you’ve used ICE as part of that coaching. How do teams typically prioritize without ICE, and what happens if you don’t get good at something like this?
Sean Ellis 00:06:55 Yeah, a lot of times it’s just pretty haphazard. It’s kind of like, okay, we want to run more tests, so let’s run this one, let’s run this one. What happens is either you don’t have the resources to get those tests out, so you get frustrated, or you run the tests and they’re really low impact, because you never really thought through, okay, if it’s right, what might it do to the business? Having a good scoring system is going to help you maximize the impact of the resources you have for testing. The other thing I’ve seen is the HiPPO method, the highest paid person’s opinion, but that’s really not effective either. Even if the boss calls all the shots, the more the team is able to give insights on impact, confidence, and ease, the more likely it is the boss will ultimately make better decisions.
Ethan Garr 00:07:51 Has a lot of success helping teams, uh, use ice when ice becomes the conversation starter, where I failed with it or struggled with it and had to kind of regroup is when I got really like too religious about it, like the top score wins or, you know, um, it’s just, it’s, I don’t think it’s ever that cut and dry. I think it’s a really good place to just sort of start the conversation, you know, and for me, icing in a giant batch, like taking all of the ideas and trying to compare them to one another against a score, that’s never been as effective for me as if I could get them into batches. Like all of the ideas against this lever, in the business, all the ideas and this lever of the business, but, you know, how can teams win with ice and kind of avoid the bigger pitfalls from your, from your point of view?
Sean Ellis 00:08:34 Yeah, that’s probably a longer conversation about ideal ways to run growth meetings, so I don’t want to get too deep into it. But I would say, it is a guesstimate when you’re coming up with the ICE score, so don’t feel like you can be exact, because you can’t. Hopefully you’ve got some tips here that give you a better idea of how to do that scoring. Once I get into a growth meeting, I don’t typically talk about ICE scores anymore. I think ICE scores are good as the ideas come in: as you say, okay, this is the area we want to test, what are the best ideas we have against that? Use ICE to shortlist those ideas, and then look at them on their own merits in the growth meeting when you’re picking the two or three you might want to run.
Ethan Garr 00:09:20 Well, for any of our listeners who want to learn more about this, your book Hacking Growth is a good place to start. But I think that’s all the time we have, Sean.
Sean Ellis 00:09:29 Absolutely. We definitely cover it in Hacking Growth, but just getting started, trying it, and adjusting along the way is probably the best way to get a handle on it. So that’s it for this week’s Growth Snack. Just remember, it’s always one growth insight to help you power your team’s breakout growth success. Next week we’re back with a full Breakout Growth Podcast episode and an interview with a growth leader from another one of the world’s fastest-growing companies. If you’re hungry for growth, keep tuning in.