Write The Report Before You Run The Experiment
by Ethan Garr
Starting with the outcome in focus
If you are in growth, you should steal an idea from your engineering team: Test-Driven Development (TDD), applied to your growth experimentation process. Build the report before you run the experiment and you will win. It helps ensure that the tests you run consistently yield measurable results, and that alone can prevent culture-killing mistakes. Since we all need another three-letter acronym (TLA), let’s call this Report-Driven Testing (RDT).
In Test-Driven Development (TDD), engineers convert software requirements into test cases before writing their software. They then write minimal code, run it through the test, and prove its efficacy. With each iteration, they continue to prove their work by running all new code through these test cases. This process ensures that the code not only works correctly but actually meets the requirements with each new update.
I worked with an Engineering Director several years ago who introduced me to the Test-Driven Development concept. As he described how it was helping his team write better code and collaborate more effectively, it got me thinking about where and how we could use a similar concept in our own test/learn growth approach.
Why apply this thinking to growth experimentation?
In our quest as growth professionals to turn guesses into facts, we start with a hypothesis: “By changing X into Y, we expect to see Z”. We confirm these hypotheses with experiments and use those learnings to drive virtuous cycles of improvement. The problem for many teams is that they do not get the learnings they were hoping for because they are not instrumented properly or do not have democratized, accessible, and transparent data available to confirm their guesses.
Writing the report first and running test data through it with this Report-Driven Testing approach helps ensure you get meaningful answers to the questions you are asking. I have successfully piloted this concept with growth teams I coach and train, and I have found that it helps teams spot holes in their data, avoid mistakes, and capture valuable learnings more often.
Look, I am not an engineer, but Test-Driven Development makes a lot of sense to me because of its obvious forcing functions:
- You will write the simplest code to pass the test and avoid adding crap on top of it.
- You will build confidence in your code with each iteration because you won’t iterate on things that don’t work.
- You will inspire conversations around the specific challenges you uncover.
Why not build the same forcing functions into your growth team’s process with Report-Driven Testing?
- You will run simpler experiments without adding crap on top of them that muddies the results.
- You will only run tests where you know the hypotheses can be confirmed or disproved with the data and tools you have available.
- You will inspire conversations to overcome your data challenges and improve how you collect and analyze data as a whole.
Check out our podcast conversation with Morgan Brown here for other great insights for building successful test/learn cultures.
How to make Report-Driven Testing “A Thing”
So here is a starting definition for Report-Driven Testing. In RDT, growth teams first develop a ‘live’ report before running an experiment, which proves that the control and challenger traffic for each potential outcome will yield accurate data to confirm their hypothesis.
Now, to be fair, there is a problem with the RDT/TDD analogy you should be aware of. In Test-Driven Development you don’t write the code until you write the test, but in Report-Driven Testing you do have to set up the experiment to some degree to ensure that the data will flow through to your live report. So I suggest three steps:
Step 1 – Outline what the report should look like. I suggest doing this in your vision document for the experiment. This can be a spreadsheet with sample data and formulas or even a wireframe of what the data will look like in your product analytics platform. In this step, codify how and where you will get each data point.
Step 2 – Build the report and run live data through it. This may involve your QA team, but essentially you need to ensure that control and challenger group events are captured all the way through to your report.
Step 3 – Ask your key stakeholders to review the test data. Confirm that the results of your “test run” show that the questions laid out in the experiment hypothesis can be answered with a full data set.
If everyone is aligned, you can now go run the test with a much higher degree of confidence in your experiment design.
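To make the three steps concrete, here is a minimal sketch of a “live” report and its pre-launch check, in Python. The event names, variant labels, and helper functions are all hypothetical stand-ins for whatever your product analytics platform actually provides:

```python
from collections import defaultdict

def build_report(events):
    """Steps 1 and 2: aggregate raw experiment events into the report
    stakeholders will eventually review (exposures, conversions, rate)."""
    counts = defaultdict(lambda: {"exposed": 0, "converted": 0})
    for e in events:
        if e["event"] == "exposure":
            counts[e["variant"]]["exposed"] += 1
        elif e["event"] == "conversion":
            counts[e["variant"]]["converted"] += 1
    return {
        variant: {
            **c,
            "rate": c["converted"] / c["exposed"] if c["exposed"] else None,
        }
        for variant, c in counts.items()
    }

def validate_report(report, expected=("control", "challenger")):
    """Step 3: flag any variant whose events never reached the report --
    the failure you want to catch before launch, not after."""
    return [v for v in expected if v not in report or not report[v]["exposed"]]

# Run QA traffic through the report before the real experiment starts.
qa_events = [
    {"variant": "control", "event": "exposure"},
    {"variant": "control", "event": "conversion"},
    # note: no challenger events -- a broken pipeline we want surfaced now
]
print(validate_report(build_report(qa_events)))  # ['challenger']
```

The point is not the code itself but the dry run: if the challenger arm’s events never show up in the report during QA, you have found the problem before launch rather than in a growth meeting three weeks later.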
A few final reasons to give this a try
Nothing kills the enthusiasm for high-tempo testing more than going into a growth meeting only to find out that the experiment you launched is broken. The test-learn flywheel doesn’t gain momentum and growth teams lose faith when tests break down and fingers start to point!
This Report-Driven Testing approach avoids unforced errors, but it also helps get your team collaborating around designing good experiments and making good decisions. Looking at test data inspires questions like:
- Do we know how long it will take to get a statistically significant result?
- Can all of the experiment’s success metrics actually be captured?
- What would a good result look like vs. a great result?
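The first question on that list can be sanity-checked before launch with a standard two-proportion power calculation. A minimal sketch, assuming hypothetical baseline and target conversion rates, with z-values for a 5% two-sided significance level and 80% power; dividing the result by your daily traffic per arm gives a rough test duration:

```python
import math

def sample_size_per_arm(p_control, p_challenger, z_alpha=1.96, z_beta=0.84):
    """Approximate users needed per arm to detect a shift from p_control
    to p_challenger (normal approximation for two proportions)."""
    p_bar = (p_control + p_challenger) / 2
    numerator = (
        z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
        + z_beta * math.sqrt(
            p_control * (1 - p_control) + p_challenger * (1 - p_challenger)
        )
    ) ** 2
    return math.ceil(numerator / (p_control - p_challenger) ** 2)

# Hypothetical: 5% baseline conversion, hoping the challenger lifts it to 6%.
n = sample_size_per_arm(0.05, 0.06)
print(f"~{n} users per arm needed")
```

Even a rough number like this turns “how long will this test take?” from a guess into a conversation you can have in the planning meeting.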
Finally, this is a valuable tool to help you and your team plan ahead for what you will do with your learnings. In the simulator-based growth learning course GoPractice!, co-creator Oleg Yakubenkov explains that having a plan for what you will do with an expected result is an important element of growth team success. In my experience, the best teams are always looking at each experiment through the lens of ‘assuming we prove our hypothesis, what’s next?’
Data inspires curiosity and creativity and sparks great growth conversations. Building the report before you run the experiment with this concept of Report-Driven Testing (RDT) will help you start those discussions early and increase your testing cadence and success.
Hit me up here if you have questions about integrating Report-Driven Testing into your growth team’s approach.