With so many ways to configure FullStory to create the best customer experience, you may wonder which approach is best or where to begin. A/B testing can be extremely helpful in making that decision.
Here are a few suggestions to get you started:
Out of the Box Self Service
In both of the use cases below, you can compare tests side by side by saving a Segment for each experiment. Dashboards also allow you to compare multiple Segments in the same chart.
If your A/B test variations have unique URLs or URL query parameters, you can use the “visited URL” filter to find users who visited those specific pages at some point in their session.
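If your testing tool does not already expose the variation in the URL, one common convention is to tag each variation's pages with a query parameter so sessions become searchable with the “visited URL” filter. A minimal sketch, where the parameter name ab_variant and the function name are hypothetical choices, not a FullStory requirement:

```javascript
// Hypothetical sketch: append an A/B variant query parameter to a page URL
// so the resulting sessions can be found with FullStory's "visited URL" filter.
// "ab_variant" is a placeholder parameter name; use whatever your team prefers.
function tagUrlWithVariant(url, variant) {
  const u = new URL(url);
  u.searchParams.set('ab_variant', variant); // e.g. "A" or "B"
  return u.toString();
}

// tagUrlWithVariant('https://example.com/checkout', 'B')
//   -> 'https://example.com/checkout?ab_variant=B'
```

In FullStory, you would then search for users who visited URLs containing ab_variant=B to build a Segment for that variation.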
If a button or text field (CSS selector) only appears in certain tests, you can search for “clicked” or “changed” behaviors on those selectors. Note that this approach only captures users who engaged with those selectors (clicked or changed them), not everyone who saw them. If you need to measure the volume of users who saw those elements (vs. actually engaged with them), refer to the Watched Element use case below.
Admin Support Required
In this use case, a Watched Element can indicate that users saw (but did not necessarily click, type in, or otherwise interact with) a selector in your environment. If it would help to understand the behavior of users who saw (or did not see) an element unique to one test variation, this approach to A/B testing might be for you.
When using Watched Elements to assess A/B tests, it may also be helpful to compare a Segment of users who saw the Watched Element against users who did not see it, or against another control population that fits your use case.
Configuring a Watched Element in your instance of FullStory is an Admin capability that you can read more about here.
Code Change Required
If a precise count of the users who experienced a test is critical to your assessment in FullStory (vs. prioritizing behavioral insight, for example), passing the A/B test variable or ID into FullStory as a custom event (from your data layer, a third-party optimization platform, or a turnkey integration) is a great option.
Custom events are also the best approach for tests of back-end changes where nothing about the end user’s experience is materially impacted. An example could be a test that compares a legacy web framework against a new one, where no buttons, URLs, text fields, or other options presented to the user change.
Passing custom events into FullStory from your data layer or some third-party platforms requires a code change via our fs.event API. You can read more about how that works here. As with Watched Elements, your internal FullStory Admins can likely help you navigate this change, or connect you to FullStory resources that can help.
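As a hedged sketch of what that code change might look like: once your optimization platform or data layer exposes the active test and variant, you could build an event payload and send it through fs.event. The event name "AB Test Assigned" and the property names here are hypothetical placeholders, not a FullStory-defined schema (the _str suffixes follow FullStory's custom event property-typing convention):

```javascript
// Hypothetical sketch: build a custom event payload describing an A/B test
// assignment. The event name and property names are placeholders.
function buildAbTestEvent(experimentId, variant) {
  return {
    name: 'AB Test Assigned',
    properties: {
      experiment_str: experimentId, // which test the user is in
      variant_str: variant          // which variation they were served
    }
  };
}

// In the browser, after your optimization platform assigns a variant:
// const evt = buildAbTestEvent('checkout-redesign', 'B');
// if (window.FS && typeof window.FS.event === 'function') {
//   window.FS.event(evt.name, evt.properties);
// }
```

With the event recorded, you can build Segments of users who received each variant and compare their behavior in Dashboards, just as with the Segment-based approaches above.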
In some cases, depending on the platforms your organization uses, A/B testing variables or IDs can be passed into FullStory via a turnkey integration. A regularly updated list of the experience optimization platforms we currently integrate with can be found here by selecting “experience optimization platform” from the left-hand sidebar. Configuring integrations with your FullStory instance is an Admin capability.