Measuring 'Confidence' in ICE Prioritization

In Hacking Growth, we outline a prioritization method that helps growth teams identify which growth ideas and experiments to work on next. This prioritization model is called "ICE," an acronym for Impact, Confidence, Ease. The goal of the framework is to output a stack-ranked list of growth experiments so teams can cut through the ambiguity about what to do next. Each idea is scored across the three axes on a scale of 1 to 10, with 10 being the maximal value (e.g. high impact, very easy) and 1 the minimal (e.g. very hard, very low confidence).
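As a concrete illustration (a minimal sketch, not code from the book), scoring and stack ranking could look something like the following; the idea names and scores are hypothetical, and the averaging of the three axes is one common convention (some teams multiply them instead):

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    impact: int      # 1-10, expected impact on the growth metric
    confidence: int  # 1-10, how sure we are the idea will work
    ease: int        # 1-10, how easy/cheap it is to run

    @property
    def ice_score(self) -> float:
        # One common convention: average the three axes.
        return (self.impact + self.confidence + self.ease) / 3

# Hypothetical backlog of growth ideas.
ideas = [
    Idea("Simplify signup form", impact=7, confidence=8, ease=9),
    Idea("Referral program", impact=9, confidence=4, ease=3),
    Idea("Onboarding email drip", impact=6, confidence=6, ease=7),
]

# Stack rank: highest ICE score first.
for idea in sorted(ideas, key=lambda i: i.ice_score, reverse=True):
    print(f"{idea.ice_score:.1f}  {idea.name}")
```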

One of the most contentious parts of the framework is the Confidence column. People have argued that 'confidence' is subjective and just a column that gives people wiggle room to shift the prioritization based on something that's nearly impossible to measure. On the surface, confidence does seem arbitrary and subjective, which appears to go against the rigor that the framework is supposed to bring. However, if you dig a bit deeper, confidence can (and should) be measured and held to objective criteria like the other two inputs.

Here's how to build an objective view of 'Confidence' to inform your growth prioritization. These are listed from lowest signal/confidence to highest. While not an exhaustive list, it's a good place to start when trying to bring more rigor to your 'Confidence' score.

  • Qualitative research/feedback
  • Quantitative research/data
  • Correlational data
  • Causal experimental evidence

Qualitative feedback includes individual user reports and insights from qualitative research sessions. These are often low-sample-size (small-n) inputs.

Quantitative research includes survey data and larger-sample studies that reach statistical significance and represent a broader swath of users in their responses. These are larger-n inputs.

Correlational data comes from data science and analysis that identifies correlations with user success states (e.g. a level of engagement or a behavior that correlates with higher retention).

Causal experimental evidence is a previous experiment or holdout that demonstrates a causal relationship between an intervention and a change in user behavior (e.g. a prior A/B test).

You can use these data points together or on their own to assess how confident you are that an idea is worth investing in. At the two extremes, a past experiment with causal evidence of a change in behavior should net higher confidence in a similar idea (e.g. doubling down in an area where you've had success before) than anecdotal user feedback. Grounding higher confidence in stronger data will raise your hit rate in experimentation. One way to make this ladder concrete is sketched below.
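A lightweight approach is to anchor the confidence score to the strongest evidence you have for an idea. This is only an illustrative sketch; the tiers and the anchor values assigned to them are assumptions you should calibrate to your own team's standards:

```python
# Map the strongest available evidence for an idea to a baseline
# confidence score (1-10). These anchor values are illustrative,
# not a prescribed scale.
EVIDENCE_CONFIDENCE = {
    "qualitative": 2,    # a few user reports or research sessions
    "quantitative": 4,   # surveys / larger-sample studies
    "correlational": 6,  # analysis tying a behavior to a success state
    "causal": 9,         # a prior A/B test or holdout showing causation
}

def confidence_score(evidence_types: list[str]) -> int:
    """Return the confidence anchor for the strongest evidence present."""
    if not evidence_types:
        return 1  # pure hunch, no supporting data
    return max(EVIDENCE_CONFIDENCE[e] for e in evidence_types)

# Example: an idea backed by qualitative feedback and a prior A/B test
print(confidence_score(["qualitative", "causal"]))  # -> 9
```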

We use ICE in Hacking Growth, but there are plenty of other prioritization frameworks out there. No matter which you use, you can apply this ladder of building certainty to any ideas or experiments you have. Hopefully it helps you bring more rigor to the confidence score, a stronger rationale for your growth prioritization, and faster cycle time to bigger wins.

