Learn How to Capture Learnings, or Lose
By Ethan Garr
If you want to waste time and make mistakes twice, here is a key ingredient for your recipe: Develop your testing process without a consistent, repeatable, and effective mechanism for capturing and archiving learnings.
Of course, that is a terrible idea, but many companies fall into this trap because they do not build a well-organized mechanism for capturing learnings into their rapid experimentation process. If every test were a success and led sequentially into the next experiment, this might not be a big deal, but in reality some of the most valuable learnings come from experiments that disprove hypotheses, and it may be months or years before those learnings need to be recalled to inform a new decision.
Recently, I ran a growth alignment workshop for a B2C app developer. They were working on a holiday promotion idea when one of their senior executives mentioned that the team had tried something very similar a few years earlier. Unfortunately, since then the product manager who ran the test had left, and the company had switched away from the A/B testing platform where the test was measured. The domain knowledge, the data, and all the key learnings were gone.
Should the team have abandoned the new idea with the assumption that since the promotion was not repeated the original test was probably unsuccessful? Should they have soldiered on believing this iteration would be different? Without a record of what happened, there is no good answer. The flywheel effect of a high-tempo test/learn process depends on easy access to the learnings of each prior iteration for maximum impact.
So in this article, I provide practical advice to help you as you build your process for capturing and archiving learnings. Let’s begin with some fundamentals.
Principles for Captured Learnings
- An experiment either confirms or disproves a hypothesis. A captured learning will identify this outcome or explain why the experiment failed to net a result.
- Learnings should lay out what next steps were taken based on the test results and related data. Ideally, experimenters identify these next steps before the test is run.
- An archive of the learnings should include the data that confirms the result and a summary of the findings so that future experiments can be informed by its context.
- Important findings should be pasted or included as screenshots so that captured learnings do not depend on the persistence of other systems for availability.
- Learnings should be captured as close to the experiment's conclusion as possible to ensure anecdotal and/or qualitative findings are not forgotten.
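For teams that want a concrete starting point, the principles above can be sketched as a simple record structure. This is my own illustration, not a prescribed schema; all field names here are assumptions, and you should adapt them to your process:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Outcome(Enum):
    CONFIRMED = "confirmed hypothesis"
    DISPROVED = "disproved hypothesis"
    NO_RESULT = "failed to net a result"

@dataclass
class CapturedLearning:
    title: str              # descriptive, searchable experiment name
    hypothesis: str         # what the team was trying to turn into a fact
    outcome: Outcome        # confirmed, disproved, or no result (and why)
    summary: str            # findings plus context for future experiments
    next_steps: list[str]   # ideally drafted before the test was run
    # Pasted data or screenshot paths, so the record does not depend on
    # the persistence of other systems for availability.
    evidence: list[str] = field(default_factory=list)
    captured_on: date = field(default_factory=date.today)
```

Even if your archive is just a folder of documents, keeping every record to the same handful of fields is what makes it searchable years later.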
How Can We Make Key Learnings Useful Long-Term?
When growth teams confirm a hypothesis, they should push to quickly expand what they have learned to as many users as possible, while simultaneously considering new tests to further capitalize on the learnings. Even though the results and data are fresh, and it might be tempting to rely on memory to carry insights into the next set of experiments, this is the right time to document learnings. The consistency you build into the process protects against the unknown, and the record you build will be useful for future hires, experiments, and growth.
While an experiment that disproves a hypothesis may be a letdown for the team, it should not be. The learnings are important and may inform game-changing decisions in the future. Why did the result not meet our expectations? Should we keep moving in this direction? Should we focus efforts elsewhere? Sustainable growth is built on the answers to questions like these. The only failure when you run experiments is to not learn anything. Documentation can ensure this does not happen simply because the learnings become lost in the ether.
The other outcome of an experiment is the one we all dread: the test failed to yield a result. Maybe we made a mistake in how we set up the experiment, or an external issue forced us to abandon the test early. Who knows? There are lots of ways to ruin an experiment, and I have made all of those mistakes. Even those experiments should be documented, because the idea the team was trying to turn into a fact will probably resurface in one form or another. If you have a record of what happened, you will at least start closer to where you left off than from where you began.
So at the very least, your team should document the key result of each experiment with a good summary of what the experiment was and what you know to be true based on the data you have analyzed.
Build What’s Next Into Your Experiment Process
When you record your results, make sure to list out what next steps are either planned or in progress. If the hypothesis was proven, lay out how the learning will be implemented for maximum impact. This is really important if the original experiment was a minimum viable test and not a full-function deployment. If the hypothesis was disproved, the next step may simply be to move on to other things. Either way, get it down on the record.
Here is a pro tip: When I coach teams through the process of building high-tempo testing processes, I encourage them to build vision documents for their experiments. These are brief test plans that outline the idea, hypothesis, success metrics, etc. I have found these to be the best way to get everyone invested in experiment outcomes from the onset. Recently, I have started adding an If/Then section to my vision document template. Here, I encourage teams to briefly outline what the next steps will be if the hypothesis is confirmed. This helps the team see how the experiment can carry value forward, and as an added bonus it makes it really easy to detail out what happened next in your post-experiment documentation.
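If it helps to see the shape of such a document, here is a minimal sketch of a vision-document template with the If/Then section included. The section names and rendering helper are my own illustration, not a standard format:

```python
# Hypothetical vision-document template; section names are illustrative.
VISION_DOC_TEMPLATE = """\
Experiment Vision: {title}

## Idea
{idea}

## Hypothesis
{hypothesis}

## Success Metrics
{metrics}

## If/Then
If the hypothesis is confirmed, then: {if_confirmed}
If the hypothesis is disproved, then: {if_disproved}
"""

def render_vision_doc(**fields: str) -> str:
    """Fill the template so every experiment starts from the same outline."""
    return VISION_DOC_TEMPLATE.format(**fields)
```

Because the If/Then answers are written before the test runs, the post-experiment write-up becomes mostly a matter of noting which branch actually happened.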
What happened or didn’t happen is an important detail beyond the actual test results. The key learnings from an individual experiment may be more valuable when considered in the context of other experiments it informed at the time. Keep it simple. This step does not have to be super-detailed, just make sure anyone who views the record would know where to look to see what happened next.
Do These Things to Make Your Archive Effective
I have used a few commercially available project tracking software tools and growth-specific management tools, but so far I have not found any of these to be great repositories for key learnings. They can be good for managing and prioritizing ideas into tests, but they don’t necessarily serve as good long-term archives. While you are building out your process, I would suggest simply using documents and curating a shared folder where experiment results can be housed. Create sub-folders by lever to help organize results, and make sure you give experiments descriptive titles so that they are easy to search for and understand.
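As a sketch of what "sub-folders by lever, descriptive titles" can look like in practice, here is a small helper that builds a consistent archive path. The layout and naming convention are assumptions for illustration, not a requirement:

```python
from pathlib import Path

def archive_path(root: str, lever: str, completed: str, title: str) -> Path:
    """Return a lever-based archive location for one experiment's learnings.

    Example layout (hypothetical):
        learnings/activation/2021-11-02-holiday-promo-banner.md
    """
    # Slugify the descriptive title so files sort and search cleanly.
    slug = "-".join(title.lower().split())
    return Path(root) / lever / f"{completed}-{slug}.md"
```

Leading filenames with the completion date keeps each lever's folder in chronological order, so "what did we try last holiday season?" is a ten-second lookup.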
Whatever tool you decide to use to store your learnings, make sure it will always be available. I worked in an organization that went through four project management tracking tools in less than two years. Had we relied on any of these as our archive for experiments we would have likely lost many learnings along the way. Learnings are often the first casualties of any migration!
Along with a summation of your key learnings and decisions, it is important to bring relevant data into the archive itself. Future experiments may be informed by these learnings, but the teams running them will likely be looking to answer very different questions. Having the data available for review will allow them to draw their own insights. Bring the data into the archive even if it is just screenshots. Don't rely on links to other tools. Telling someone the data is available in Amplitude assumes that you are still using Amplitude and that they will have the access needed to pull the data for themselves. Neither may be true.
Get Your Learnings Recorded As Quickly As Possible
I suggest that you task experiment owners with the responsibility of capturing the learnings from their experiments. Making this the sole responsibility of the data team is a bad idea. They are busy enough, and this part of the process covers more than just data. Consider an experiment incomplete until its learnings are captured within your process, and surface these in your growth meetings so that they don't hang around too long and grow stale.
In fact, work to get the results archived within one week of an experiment's completion. The fresher the learnings, the easier they will be to document, and the less likely they will be to fall through the cracks. You might get away with letting time pass between the end of a test and recording your quantitative learnings, but qualitative learnings, the ones that come out of spirited discussions while the team is still passionate about what it just learned, tend to have short shelf-lives. Get them down in writing.
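The one-week rule is easy to enforce mechanically in your growth-meeting prep. A minimal sketch, assuming you track each experiment's completion date and whether its learnings have been archived (both names are hypothetical):

```python
from datetime import date, timedelta

def needs_attention(completed_on: date, learnings_archived: bool, today: date) -> bool:
    """Flag an experiment whose learnings are still uncaptured a week after it ended."""
    deadline = completed_on + timedelta(days=7)
    return (not learnings_archived) and today > deadline
```

Running a check like this before each growth meeting keeps the list of stale, undocumented experiments in front of the team instead of quietly growing.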
Now the big caution! Keep all of these processes lightweight. I think the vision document for most experiments should be a page to a page and a half, and should take half an hour to write up at most. Similarly, your captured-learnings documentation should be about a page, plus screenshots for data and key details. You are trying to build high-tempo testing processes and accelerate growth; leave the bureaucracy for your old-school competitors whose businesses you plan to disrupt.
Final Thoughts – It’s About Building the Machine
When Sean Ellis and I spoke with Artem Petakov, Co-founder and President of Noom, on The Breakout Growth Podcast, he explained that he was focused on “building the machine that gets things right.” The flywheel of sustainable growth feeds and builds on each cycle of the test/learn process. It gets things right by becoming smarter with each rotation. A mechanism for capturing your learnings is a crucial component, and one that is often overlooked in the chaos of rapid growth.
A consistent, repeatable, and effective system may help you tomorrow, two years from now, or it may even help your successor, but regardless of when, it will always be worth the investment.