Deep inside the new onboarding workflow for your internal messaging app, something is very wrong. Despite a high volume of customers signing up for a free trial of your software, conversions are far lower than expected. You consider adding incentives to boost conversion rates, but you don’t want to set your price point too low, and you don’t want to offer discounts to customers who already have a high likelihood of converting. When you’re feeling optimistic, you dig through data to determine the correct course of action. When pessimistic, you may feel like tearing apart the whole workflow and starting over.
Experimenting with A/B testing and workflow changes can go a long way toward identifying your issues with conversion. However, unless those experiments are targeting the correct audience, you’re potentially analyzing results skewed across cohorts irrelevant to your ultimate goal.
User segmentation, the practice of organizing groups of users by shared demographics or behaviors, can focus your experiments on only the customers you want to reach. Experiments built around user segments can produce results you can trust, enabling you to make changes with confidence in your mission to boost user experience and fuel conversions, and at a minimal cost to you and your company.
Target the Right Customers With User Segmentation
Creating experiments that address the wrong segments of customers leads to results that aren’t relevant to your initial question. In the hypothetical above, you’ve identified an issue with converting customers from trial members to paying members of your service. You decide to funnel a selection of trial members into a separate workflow with a pricing discount as a means of boosting conversions. This tactic is sound in theory, but without proper user segmentation it’s saddled with practical problems:
- Performing an experiment without proper user segmentation can cannibalize revenue. Some trial members have been using your product more often and more meaningfully than others. These customers are likely to sign up without needing an incentive, so sending an offer to them only costs you money.
- Performing an experiment without proper user segmentation can skew results. Targeting those same high-usage customers alongside minimal users can lead to misleading conclusions. For example, your experiment might suggest that the incentivized workflow increased conversions by only 1.3%. Without segmentation, the baseline includes at least some customers who were going to sign up regardless, diluting the results (see the short worked example after this list).
- Performing an experiment without proper user segmentation can ruffle feathers. Your messaging app is probably not being used in isolation—it’s built for teams. If an entire company were to adopt your product, improper segmentation might result in half of the team receiving a discount offer and the other half being asked to pay full price.
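As a rough illustration of that dilution effect, here’s a small back-of-the-envelope sketch in Python. All of the numbers (cohort sizes and conversion rates) are hypothetical and exist only to show how including likely converters shrinks the measured lift.

```python
# Hypothetical illustration of dilution: all cohort sizes and conversion
# rates below are made-up numbers, not real experiment data.

heavy_users = 400   # likely to convert with or without an incentive
light_users = 600   # the customers the incentive is actually meant to sway

heavy_rate = 0.60             # converts at 60% regardless of the offer
light_rate_control = 0.10     # light users in the normal workflow
light_rate_incentive = 0.15   # light users in the incentivized workflow

total = heavy_users + light_users

# Blended conversion rates when heavy users are (wrongly) included in both arms
control = (heavy_users * heavy_rate + light_users * light_rate_control) / total
incentive = (heavy_users * heavy_rate + light_users * light_rate_incentive) / total

print(f"true lift among light users: {light_rate_incentive - light_rate_control:.1%}")
print(f"measured lift without segmentation: {incentive - control:.1%}")
```

In this made-up scenario, a real 5-point lift among light users shows up as a 3-point lift once the always-converters are blended in, making the new workflow look weaker than it is.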
Experiments run without segmentation can range from inconclusive to disastrous. Even product marketers attempting to segment may find that simply grouping by demographic or geographic information doesn’t create the hyper-specific cohort necessary for proper testing.
Properly targeted experiments require segmentation by customer behaviors. For your hypothetical product, you might identify “more or fewer than 10 messages sent per day” as the variable that determines whether customers are funneled into the “normal” workflow or the one ending with an incentivized offer. Grouping trial members by this behavior maximizes the chance that the experiment’s results will speak to the customers you intended to address all along.
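As a concrete (and entirely hypothetical) sketch, the snippet below shows what that behavioral split might look like in Python. The field names and the 10-message threshold are illustrative assumptions rather than a reference to any particular analytics tool.

```python
# Hypothetical sketch: split trial users into workflow cohorts by messaging behavior.
# Field names and the 10-messages-per-day threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class TrialUser:
    user_id: str
    messages_per_day: float  # average messages sent per day during the trial

MESSAGE_THRESHOLD = 10  # behavioral cutoff separating light users from heavy users

def assign_workflow(user: TrialUser) -> str:
    """Light users get the incentivized workflow; heavy users stay on the normal one."""
    return "incentivized" if user.messages_per_day < MESSAGE_THRESHOLD else "normal"

users = [
    TrialUser("u1", 3.2),
    TrialUser("u2", 24.0),
    TrialUser("u3", 8.7),
]

for user in users:
    print(user.user_id, assign_workflow(user))
```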
Test (and Then Analyze) Results From the Same Segment
With a hypothesis in hand and the proper targets in mind, you’re able to start building your experiment. By this point, segmentation has helped narrow the total customer pool down to the relevant cohort. Instead of testing every single customer within the new segment, you’ll need to create two groups: an experimental group and a control group. By doing so, you’ll be able to compare the results of the experimental group with those of customers who did not experience any changes.
You don’t want to funnel every single applicable customer into your new workflow without knowing the effects it might have. Building a segment lets you use members of that segment who aren’t included in the experiment as a control group, which allows you to compare the results of your experiment against a similar cohort using the current workflow.
Such an experiment for your internal messaging platform might look like this:
- Using Amplitude Experiment to create a segment consisting of customers sending fewer than 10 messages a day
- Funneling part of that segment through the new incentivized workflow
- Comparing the conversion rates of customers in the incentivized workflow against a segment of the same cohort going through the “normal” workflow
Dividing your segmented cohort into experimental and control groups helps ensure the validity of the results by letting you compare like with like. For instance, both your experimental and control groups can be segmented specifically to include “people who send fewer than 10 messages a day AND also signed up on December 1st.” Comparing groups with matching behaviors and demographics gives you an apples-to-apples analysis of the data, the best possible foundation for future decision making.
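Here’s a tool-agnostic sketch of that division (not a reference to Amplitude’s API): one segmented cohort is randomly split into experimental and control groups, and their conversion rates are compared. The data, split ratio, and field names are assumptions made for illustration.

```python
# Illustrative sketch: split one segmented cohort into experimental/control groups
# and compare conversion rates. Data, split ratio, and field names are hypothetical.

import random

def split_cohort(user_ids, experiment_share=0.5, seed=42):
    """Randomly assign each user in the segment to the experimental or control group."""
    rng = random.Random(seed)
    shuffled = user_ids[:]
    rng.shuffle(shuffled)
    cutoff = int(len(shuffled) * experiment_share)
    return shuffled[:cutoff], shuffled[cutoff:]

def conversion_rate(user_ids, converted_ids):
    """Share of users in the group who converted to a paid plan."""
    if not user_ids:
        return 0.0
    return sum(1 for uid in user_ids if uid in converted_ids) / len(user_ids)

segment = [f"user_{i}" for i in range(200)]          # everyone under 10 messages/day
converted = {f"user_{i}" for i in range(0, 200, 7)}  # stand-in conversion events

experiment_group, control_group = split_cohort(segment)
print("experimental:", conversion_rate(experiment_group, converted))
print("control:     ", conversion_rate(control_group, converted))
```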
Segmenting customers based on one or two parameters can narrow the broader pool of customers, but over-segmenting can restrict sample sizes to unreliable levels. Only experimenting on “people who send fewer than 10 messages a day AND also signed up on December 1st AND are 40 years of age AND identify as women” might return results on only a handful of people. Analytics platforms provide better insights with more relevant data, not less—a concept to keep in mind as you design your next experiment.
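A quick back-of-the-envelope check shows why shrinking samples too far is a problem. The sketch below uses the standard two-proportion sample-size approximation at roughly 95% confidence and 80% power; the baseline conversion rate and target lift are assumptions chosen only for illustration.

```python
# Rough sketch: minimum users per group needed to detect a conversion lift,
# using the standard two-proportion sample-size approximation.
# Baseline rate, target lift, and significance/power choices are assumptions.

from math import ceil

def users_per_group(p_control, p_experiment, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per group at ~95% confidence and ~80% power."""
    variance = p_control * (1 - p_control) + p_experiment * (1 - p_experiment)
    effect = (p_experiment - p_control) ** 2
    return ceil((z_alpha + z_beta) ** 2 * variance / effect)

# e.g. detecting a lift from a 10% to a 12% conversion rate
print(users_per_group(0.10, 0.12))  # roughly 3,800 users per group
```

If a hyper-specific segment only contains a few dozen people, no workflow change, however promising, can produce a result worth acting on.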
Take What You’ve Learned and Test Again (and Again)
Results from experimentation can provide insights that suggest surprising next steps. The data may seem to suggest a fundamental flaw in your workflow that requires extensive repair. However, it would be foolish to make sweeping changes to your procedures or products on the basis of a single experiment. Making such sweeping and potentially costly changes would require a level of confidence in the results approaching certainty, something a single successful test cannot provide.
Achieving the same outcome from multiple repetitions of the same experiment is a critical facet of the scientific method. As such, a cautious project manager should run the same experiment on different segments of a cohort to verify their initial conclusions. Repeatedly achieving the same results in smaller, controlled experiments can justify large-scale changes that would otherwise seem too risky to attempt or too costly to get wrong.
Sometimes, though, an experiment can be repeated with slight changes to sharpen its conclusions. For instance, perhaps your messaging app’s new workflow experiment produces exactly what you expect. After repeat experiments confirm the boosted conversion rates of customers offered an incentive to join, you wonder: what’s the optimal discount for conversion?
And so, you start anew, segmenting another cohort of customers. This time you offer two different price points at the end of the onboarding flow. The results point you toward the better of the two, so you run another experiment with new price points, then another, and another, until the data has pinpointed the price point that drives the most conversions with minimal impact on your bottom line.
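A minimal sketch of how those repeated price-point experiments might be weighed against one another is below. The price, discount levels, and conversion rates are all made-up numbers; the point is simply that each arm’s conversion rate can be traded off against the revenue it gives up.

```python
# Hypothetical sketch: compare observed conversion rates across discount levels
# and weigh them against revenue per conversion. All numbers are made up.

FULL_PRICE = 20.0  # assumed monthly price in dollars

# discount -> observed conversion rate from that experiment arm (illustrative)
observed = {0.00: 0.10, 0.10: 0.13, 0.20: 0.15, 0.30: 0.16}

def expected_revenue_per_trial_user(discount, conversion_rate):
    """Average first-month revenue per trial user at a given discount."""
    return conversion_rate * FULL_PRICE * (1 - discount)

best = max(observed, key=lambda d: expected_revenue_per_trial_user(d, observed[d]))
for discount, rate in observed.items():
    print(f"{discount:.0%} off: {expected_revenue_per_trial_user(discount, rate):.2f} per trial user")
print("best discount by expected revenue:", f"{best:.0%}")
```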
Segment Your Way to Successful Experimentation
Is there a way to do it better? That question sits at the forefront of every product manager’s mind as they consider the success of their product and processes. As an infinitely curious creature, you have grand ideas or nagging suspicions you’ve wanted to prove or disprove, but you’ve lacked the means of doing so without taxing company resources or disrupting the customer experience.
Now, however, your curiosity is powered by analytics. Data-based experimentation can satisfy the insistent voice in your head challenging you to do more and to do it better. With user segmentation, you can more safely produce reliable results from small-scale experiments that fuel bolder, more impactful changes. In the worst case? You revert to your original approach. But in the best case? You may just improve the customer experience, boost sales, and set the foundation for your next great experiment.