In the previous article, we explained how to dig deeper into the details of an opportunity. In this article, we offer a technical addendum that provides full transparency into the calculations behind the Conversions magic. It’s especially relevant to data scientists, analysts, and the technically inclined.
Three Ways We Find Conversion Impacts for You
To get the most out of Conversions, it’s important to understand the special cases that arise when we automatically compare the performance of an “affected” and an “unaffected” group through a funnel for a given Signal. Behind the scenes, we compute several versions of the impact formula and show you only the most relevant one that applies.
Here are the three approaches we take to detect a significant conversion impact for a given funnel. Conversions first attempts the “end to end” approach and presents that impact if it finds a significant one; otherwise, it falls back to the second approach, and so on, showing the result of the first approach that succeeds.
In the simplest, end-to-end case, your affected and unaffected groups perform like two textbook funnels: every step shows some dropoff, and the end-to-end conversion rate of the affected group is low enough relative to the unaffected group to be considered statistically significant. In this case, the horizontal histogram in the analysis summary area (see the image below) shows numbers for the first and last steps of the funnel, and a red dotted box surrounds the whole funnel in the larger chart below.
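We don’t disclose the exact statistical test Conversions runs internally, but for intuition, here is a sketch of how an end-to-end difference in conversion rate could be declared significant using a standard two-proportion z-test. All counts below are made up for illustration:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for a difference in conversion rates.

    conv_*: users who completed the funnel end to end; n_*: group sizes.
    Returns the z statistic for (rate_a - rate_b) under a pooled rate.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical data: affected group converts 120/1000 end to end,
# unaffected group converts 180/1000.
z = two_proportion_z(120, 1000, 180, 1000)
significant = abs(z) > 1.96  # roughly 95% confidence, two-sided
```

With these made-up numbers the gap is large enough to be flagged; with smaller groups or a narrower gap, the same difference might not clear the threshold.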
This “mid to end” case was actually highlighted in our FruitShoppe store examples from the earlier articles of this Conversions series.
Notice that the red dotted box in the Analysis View is only around the last two steps, while the first three are grayed out. That happens when we detect that a given Signal (in this example, a dead click on a specific page) doesn't even occur until mid-way through the funnel.
Here, we've calculated the conversion impact from the first step highlighted by the red box to the end. We do this to ensure that impacts from that Signal can be brought to your attention. If we only calculated the impact end-to-end (from Step 1 to 5 in this example), we might miss significant differences in conversion rate that only apply after the event has happened.
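The “mid to end” calculation described above amounts to computing conversion from the first highlighted step to the final step, for each group, and comparing. A minimal sketch, with hypothetical step counts for a five-step funnel where the Signal first occurs at step 4:

```python
def conversion_from_step(step_counts, start):
    """Conversion rate from a given step to the end of the funnel.

    step_counts: number of users reaching each funnel step, in order.
    start: zero-based index of the first step where the Signal occurs.
    """
    return step_counts[-1] / step_counts[start]

# Hypothetical counts: the Signal first occurs at step 4 (index 3),
# so the comparison covers steps 4 -> 5 only.
affected   = [1000, 700, 520, 400, 120]
unaffected = [1000, 720, 540, 410, 290]

impact = conversion_from_step(unaffected, 3) - conversion_from_step(affected, 3)
```

Note that end to end, the two groups here differ by 12% vs. 29%, but the mid-to-end view isolates the gap (about 41 percentage points) to the steps the Signal could actually influence.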
Note: normally, the first highlighted step is the step on which the Signal occurs. Whether you can be sure of this for your funnel depends on what type of Signal you are analyzing and how much you know about its origin.
For instance, it’s clear that our dead click on a specific page must have occurred precisely at step 4 of the FruitShoppe funnel given the element involved. We wouldn’t expect to see attrition from the affected group at steps prior to visiting that page. However, you’ll notice that the affected group does not maintain 100% of its size on earlier steps! Many digital experiences involve users looping back in an app’s workflow, like returning to product pages after reviewing cart contents. Our attempts to visualize user flows linearly through a funnel don’t fully correspond to the complex reality. So, it’s reasonable to expect that some FruitShoppe users would have experienced the dead click and returned to earlier steps in the funnel, where they may have then dropped out. Conversions uses a heuristic to detect this and will ignore small early dropoffs of this kind.
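We don’t publish the exact heuristic Conversions uses to ignore small early dropoffs, but one illustrative version (entirely our own construction here, with a made-up tolerance) would check that each step before the Signal’s step retains nearly all of the previous step’s users:

```python
def early_dropoff_ignorable(affected_counts, signal_step, tolerance=0.05):
    """Illustrative heuristic, not necessarily what Conversions uses:
    early dropoff is treated as loop-back noise if every step up to
    the Signal's step retains at least (1 - tolerance) of the users
    from the step before it.
    """
    return all(
        affected_counts[i] >= affected_counts[i - 1] * (1 - tolerance)
        for i in range(1, signal_step + 1)
    )

# Small losses before the Signal's step (index 3) are tolerated...
looping_back = early_dropoff_ignorable([1000, 980, 965, 950, 300], 3)
# ...but a big early dropoff would not be.
real_dropoff = early_dropoff_ignorable([1000, 600, 550, 540, 300], 3)
```

The real heuristic may weigh dropoffs differently; the point is simply that modest attrition before the Signal’s step doesn’t disqualify the mid-to-end analysis.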
There are times when a more granular approach is needed to uncover useful insights. Even if you’ve worked with step-to-step transition rates before (sometimes known as micro-conversions), comparing these transitions between affected and unaffected groups can be complex to interpret and may be unfamiliar.
Let’s say your desktop users typically convert more frequently than mobile users, but there’s an impactful bug on the desktop version of your website that affects everyone in that cohort and reduces their chance of converting. Suppose the bug doesn’t hurt desktop conversion enough to push it below the mobile group’s rate overall; otherwise, Conversions could surface the issue using one of the previous approaches.
From an end-to-end viewpoint over all your users, the affected group (in this case, Desktop users) is still converting better overall than the unaffected group (Mobile users), so there would be nothing to report using the previous two approaches. But if there’s a significant loss of customers between two steps for Desktop users compared to the loss on the same steps for Mobile users, then Conversions might tell you that you could be converting even more desktop users if you addressed this bug.
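To make the stepwise idea concrete, here is a sketch with hypothetical counts. Desktop (the affected group) converts better end to end, yet one transition shows a large gap relative to mobile, which is exactly what a stepwise comparison surfaces:

```python
def stepwise_gaps(affected, unaffected):
    """Compare step-to-step transition rates (micro-conversions)
    between two groups. Returns the per-transition rate gaps;
    a positive gap means the affected group lags on that transition.
    """
    gaps = []
    for i in range(len(affected) - 1):
        rate_a = affected[i + 1] / affected[i]
        rate_u = unaffected[i + 1] / unaffected[i]
        gaps.append(rate_u - rate_a)
    return gaps

# Hypothetical counts: desktop converts 40% end to end vs. mobile's
# 35%, yet loses disproportionately many users between steps 2 and 3.
desktop = [1000, 800, 480, 400]
mobile  = [1000, 700, 560, 350]

gaps = stepwise_gaps(desktop, mobile)
worst_transition = max(range(len(gaps)), key=lambda i: gaps[i])
```

Here the step 2 → 3 transition (index 1) runs at 60% for desktop versus 80% for mobile, despite desktop winning end to end, so a per-cohort funnel view alone would never flag it.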
Note: We can’t guarantee that Conversions will catch every possible impact in a stepwise approach because it depends on the exact pattern of micro-conversion rates in the unaffected group’s funnel.
The stepwise type of opportunity is not easy to detect with tools that only analyze one cohort’s funnel activity at a time. Conversions always analyzes two related cohorts at the same time, which makes this comparison possible. We have seen our customers obtain valuable insights from stepwise impacts more frequently than we had initially expected.
In the unlikely situation that we find multiple stepwise impacts for one Signal in the same funnel analysis, we only surface the one with the highest lost conversions.
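One way to rank stepwise impacts by lost conversions (an illustrative definition of our own; the product’s exact formula may differ) is to apply the unaffected group’s transition rate to the affected group’s users at that step and carry the resulting shortfall to the end of the funnel:

```python
def lost_conversions(affected, unaffected, step):
    """Estimate conversions lost at one transition: how many extra
    affected-group users would have reached the end of the funnel
    if that single transition had matched the unaffected group's
    rate, with downstream steps unchanged. Illustrative only.
    """
    rate_u = unaffected[step + 1] / unaffected[step]
    shortfall = affected[step] * rate_u - affected[step + 1]
    downstream = affected[-1] / affected[step + 1]
    return shortfall * downstream

# Hypothetical counts reused from the stepwise example above.
desktop = [1000, 800, 480, 400]
mobile  = [1000, 700, 560, 350]

best_step = max(range(3), key=lambda s: lost_conversions(desktop, mobile, s))
```

Under this definition, the step 2 → 3 gap accounts for roughly 133 lost conversions in the example data, so that is the single stepwise impact that would be surfaced.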