Episode 62 - Multi-Experience Controlled Experiments

Welcome to episode 62 of The Retention Blueprint!
Commercial and C-suite leaders want to improve retention, because they know it drives growth.
And often they want a big impact fast.
In this episode we explore how to build multi-experience controlled experiments that drive the biggest impact, while reducing testing pollution and biases.
Marketing ideas for marketers who hate boring
The best marketing ideas come from marketers who live it. That’s what The Marketing Millennials delivers: real insights, fresh takes, and no fluff. Written by Daniel Murray, a marketer who knows what works, this newsletter cuts through the noise so you can stop guessing and start winning. Subscribe and level up your marketing game.
📰 Top Story: Multi-Experience Controlled Experiments
Retention is driven by every interaction a customer has with your brand.
Moments of truth happen across Product, CX, CRM, Marketing and Service.
Often, the problem at mid- to large-sized organisations is that different aspects of the retention experience are managed by different teams, each with its own strategies and test plans.
This can make it challenging to drive the biggest impact fast because testing and optimisation across teams and touchpoints are not integrated.
It's easy to build a CRM Marketing journey experiment or a product experiment on its own.
Of course, over time and multiple iterations, retention KPIs will improve.
However, if you want a truly significant impact, you need to execute big changes simultaneously across all touchpoints.
That said, rolling everything out at once is risky: what if the results don't match expectations? Without a proper test and control plan, you won't know whether the results were due to your changes or to an external factor, especially if you operate in a competitive or seasonal market.
So what's the solution?
The solution is to build multi-experience controlled experiments.
Begin with audience insights and research to identify areas of the entire experience that you want to improve.
Let's take the early-life stage of the customer lifecycle as an example.
Build a new CRM program with multiple variants tailored to your different cohorts' objectives and goals.
Create tailored product experiences for each of these cohorts.
Create a dedicated service team with expertise in onboarding tailored to each cohort's different needs.
Then build a multi-experience controlled experiment.
What's a multi-experience controlled experiment?
This is where you hold out a single control group that is excluded from the new CRM Marketing journey, the new product experience and the new service offering, so the same customers sit in the hold-out (control) group across all three experiences.
Effectively, 10-50% of your audience receives the same experience as before, and the rest receive your new experience across ALL touchpoints.
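To make this concrete, here's a minimal sketch of one way to keep the hold-out group consistent across every touchpoint: derive the assignment deterministically from a hashed customer ID, so CRM, product, and service all reach the same answer without sharing state. The function name, salt, and 20% hold-out share below are illustrative assumptions, not any specific platform's API.

```python
import hashlib

def assign_arm(customer_id: str, holdout_share: float = 0.2,
               salt: str = "retention-multi-exp") -> str:
    """Deterministically assign a customer to 'holdout' or 'treatment'.

    Hashing the salted customer ID gives the same answer wherever it runs
    (CRM, CDP, service routing, product), so the hold-out group stays
    consistent across all touchpoints.
    """
    digest = hashlib.sha256(f"{salt}:{customer_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return "holdout" if bucket < holdout_share else "treatment"

# The same ID always lands in the same arm, on every system and every call.
print(assign_arm("cust_000123"))
print(assign_arm("cust_000123"))  # identical result
```

Because the assignment is a pure function of the customer ID, you can recompute it in each tool rather than syncing a list between systems, which removes one common source of drift between the CRM, service, and product audiences.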
How to set up a multi-experience controlled experiment
Identify a representative sample of your base to form the hold-out group; these customers will continue to be exposed to the existing experience (not your new one).
Exclude that audience from the new CRM Marketing journey, ensuring they receive the existing CRM Marketing approach. This is simple to set up within any marketing automation platform or CDP.
Next, take the treatment audience from the CRM Marketing segmentation and load those customer records into your contact management tool, which your service team manages. Ensure that your treatment audience is routed to your new dedicated service team, while the hold-out group continues to receive the existing experience.
Finally, and this is where it can get complex, ensure your treatment group receives the new experience in your product, while the hold-out group is exposed to the existing experience. You can use a CDP, such as Segment or Hightouch, to manage the connection between your customer IDs and digital tracking IDs. To manage the front-end experience for the treatment group, you can use tools like Adobe Target, Optimizely, or Sitecore, if you use that CMS. The result: the hold-out group receives the existing product experience, and the treatment group receives the new experience across all touchpoints.
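As an illustration of that customer-ID-to-tracking-ID join, here's a small sketch using pandas. The table and column names (anonymous_id, customer_id, arm) are assumptions made for the example, not the schema of any particular CDP; in practice the identity mapping would come from your CDP export.

```python
import pandas as pd

# Illustrative CDP-style identity mapping and the CRM assignment table.
identity_map = pd.DataFrame({
    "anonymous_id": ["a-91", "a-17", "a-42"],
    "customer_id": ["cust_000123", "cust_000456", "cust_000789"],
})
assignments = pd.DataFrame({
    "customer_id": ["cust_000123", "cust_000456", "cust_000789"],
    "arm": ["treatment", "holdout", "treatment"],
})

# Join so the front-end tool can decide which experience to render from the
# tracking ID alone; unmatched visitors default to the existing experience,
# i.e. the same thing the hold-out group sees.
audience = identity_map.merge(assignments, on="customer_id", how="left")
audience["arm"] = audience["arm"].fillna("holdout")
print(audience[["anonymous_id", "arm"]])
```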
What's the benefit of this?
You can make a much bigger impact, much more quickly.
You can achieve higher statistical confidence in the outcomes because everything is tested together. If you roll out these initiatives separately, you may struggle to reach statistical confidence quickly enough. For example, if you make changes to your product, run an A/B test in Optimizely and simultaneously run a separate test of your new CRM Marketing journey, customers can end up in the hold-out group of one test and the treatment group of the other, which slows down understanding of what's working and, in the worst case, makes the results inconclusive.
It helps mitigate the issue of small effect sizes. Large differences between the treatment and hold-out groups reach statistical confidence quickly. Small differences can also reach confidence, but only with very large treatment and control groups, and even then a statistically significant but tiny uplift means your change has only made a subtle difference. Testing big, coordinated changes together makes the effect itself larger and easier to detect.
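To put rough numbers on that, here's a sketch of the standard two-proportion sample-size approximation. The retention rates and uplifts below are made-up figures purely to show how much larger the required groups get as the difference shrinks.

```python
from math import ceil
from scipy.stats import norm

def required_n_per_group(p_control: float, p_treatment: float,
                         alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate sample size per group for a two-proportion z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    return ceil(variance * (z_alpha + z_beta) ** 2 / (p_control - p_treatment) ** 2)

# A coordinated change lifting retention from 60% to 63% needs far fewer
# customers than a siloed tweak lifting it from 60% to 60.5%.
print(required_n_per_group(0.60, 0.63))    # roughly 4,100 per group
print(required_n_per_group(0.60, 0.605))   # roughly 150,000 per group
```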
How to remove biases
So your multi-experience controlled experiment worked! Congratulations. The CEO is patting you on the back, and everyone in the team is delighted.
However, have you ever run an experiment where, in one instance, the treatment group outperformed the control, and in another, the reverse happened?
You probably blamed seasonality or some other external factor unrelated to the experiment.
This can be the case, but sometimes it’s also due to pure chance that you achieved a statistically significant difference between the control group and the treatment group.
To avoid this, you can use techniques such as target shuffling, which helps you understand if the test results you have seen are real or just random luck.
Here's how it works in simple terms:
Imagine you have data that shows a relationship, like "when you deliver experience X (the new product, CRM Marketing and Service experience), retention improves by Y".
Target shuffling tests whether that relationship is meaningful or just a coincidence.
How It Works:
You keep all your input data (X) the same
But you randomly scramble all the retention outcomes (Y) - like shuffling a deck of cards
You rerun your analysis on this scrambled data
You repeat this scrambling process hundreds or thousands of times
What It Tells You:
If your original pattern was real and meaningful, it should perform way better than the scrambled versions
If your original pattern was just random luck, it'll perform about the same as the scrambled data.
Real Example: A hedge fund claimed excellent performance, but was it skill or luck? By shuffling the buy/sell signals randomly 1,000 times, they found that only 15 random versions performed as well, meaning there was only a 1.5% chance that the success was due to pure luck.
Why It's Useful: It gives you confidence that what you found in your data is real, not just a statistical fluke.
Think of it as the "coin flip test" - if your retention changes can beat random chance, it means the strategy has truly worked.
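Here's a minimal sketch of target shuffling in Python, assuming retention is recorded as a 0/1 outcome per customer. The synthetic data, group sizes, and 5,000-shuffle count are illustrative; the point is simply to compare the observed lift against lifts recomputed on shuffled outcomes.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

def lift(outcomes: np.ndarray, is_treatment: np.ndarray) -> float:
    """Retention-rate difference: treatment minus hold-out."""
    return outcomes[is_treatment].mean() - outcomes[~is_treatment].mean()

# Illustrative data: 1 = retained, 0 = churned. The first 14,000 customers
# received the new multi-experience treatment; the last 6,000 are the hold-out.
outcomes = np.concatenate([
    rng.binomial(1, 0.65, size=14_000),  # treatment retention ~65%
    rng.binomial(1, 0.62, size=6_000),   # hold-out retention ~62%
]).astype(float)
is_treatment = np.arange(20_000) < 14_000

observed = lift(outcomes, is_treatment)

# Keep the inputs fixed, scramble the retention outcomes thousands of times,
# and count how often chance alone produces a lift at least as large.
shuffled = np.array([lift(rng.permutation(outcomes), is_treatment)
                     for _ in range(5_000)])
p_value = (shuffled >= observed).mean()

print(f"Observed lift: {observed:.2%}")
print(f"Share of shuffles that match or beat it: {p_value:.2%}")
```

If only a small fraction of the shuffled versions match or beat the observed lift, the multi-experience change is unlikely to be random luck; if a large fraction do, treat the result with suspicion even if a single significance test looked convincing.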
Final Thoughts
To achieve massive retention gains, you can’t test in silos.
Your CRM, product, and service experiences don’t operate in isolation. When you coordinate controlled testing across touchpoints, you unlock the opportunity for exponential impact.
It takes more effort up front. You’ll need collaboration across teams, good segmentation, and the right infrastructure. But the ROI speaks for itself:
Faster validation of high-impact retention strategies
Higher statistical confidence, even in noisy or seasonal markets
Retention isn’t won in one moment of truth. It’s won when you make every moment count.
Until next week,
Tom
P.S. What did you think of this episode?
Do you need help with Customer Retention?
When you are ready, contact me to discuss consulting, my fast-track retention accelerator, courses, and training. Or if you are interested in sponsoring this newsletter, get in touch via [email protected]