
Creativity that converts: how to apply A/B testing to your brand storytelling

Creativity is the magic ingredient that makes a brand stand out, excite and stick in people's minds. But today, inspiration without direction is not enough. Good ideas need validation, and that's where A/B testing comes in.

On a daily basis, creatives ask themselves many questions that at first glance seem to have a subjective answer: which colour best conveys the value proposition? How does the CTA that leads to the form affect the final conversion? Does this message really connect with our target audience?

A/B testing is the tool that allows us to answer these questions without resorting to assumptions, applying real data to refine even the most creative and seemingly subjective aspects of a project, such as the digital image of a brand or product.

In this article I will tell you how we usually apply A/B tests, not only to landing pages but also to brand storytelling, because even the story you tell can be optimised without taking away from its authenticity.

What is A/B testing and how does it work?

A/B testing consists of creating two (or more) versions of the same element and showing them simultaneously to random segments of the audience, in order to see which one works best according to a measurable objective (clicks, conversions, engagement, etc.).

According to Mailchimp, A/B testing "involves creating two versions of a digital asset (website, ads, email, etc.) to see which one users respond better to. Half of your audience receives version A and the other half version B, measuring which one gets a better conversion".

Traditionally, it is used in marketing to optimise landing pages, buttons or advertising campaigns, but what if we applied this approach to brand storytelling? Instead of just changing colours or CTA locations, we can test elements that make up the brand identity, such as headlines, slogans, promotional images or videos, illustration styles, etc.

The objective is not only to increase one-off sales, but also to better connect with the audience and convey the brand message much more effectively.

How it works and its application to branding

The basic procedure remains the same: formulate a hypothesis ("If we change X to Y, we expect Z to improve"), design variant A (current) and variant B (new), divide the audience randomly and measure the response. At the end of the test you analyse which variant achieved better metrics (e.g. more clicks on an advertisement, more time spent reading or more subscribers).

With this empirical data we can make informed decisions about which option will work best. As Optimizely summarises it, A/B testing converts web optimisation from guessing into data-driven decisions, moving from "I think" to "I know".

Applied to branding, this means, for example, testing two versions of a slogan, two creative styles for a campaign, or even different product presentations. From logo to copy, everything can be put through an A/B test.

Each A/B variant should differ in a single element (to isolate its effect), keeping everything else constant: if the two variants differ in several points at once, we lose the ability to know which of those points is generating the effect.

In this way, we can see with numbers how audiences respond to different nuances of our brand narrative. 
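
To make this loop tangible, here is a minimal sketch in Python that simulates a complete test: two variants with assumed conversion rates (numbers invented purely for illustration), a random 50/50 split and a measured response.

```python
import random

random.seed(42)  # reproducible illustration

RATE_A = 0.040   # assumed conversion rate of the current version (A)
RATE_B = 0.048   # assumed conversion rate of the new version (B)

def run_test(visitors: int) -> dict:
    """Randomly assign each visitor to A or B and count conversions."""
    results = {"A": {"n": 0, "conv": 0}, "B": {"n": 0, "conv": 0}}
    for _ in range(visitors):
        variant = random.choice(["A", "B"])          # random 50/50 split
        rate = RATE_A if variant == "A" else RATE_B
        results[variant]["n"] += 1
        results[variant]["conv"] += random.random() < rate
    return results

for variant, r in run_test(20_000).items():
    print(f"Variant {variant}: {r['conv']}/{r['n']} = {r['conv']/r['n']:.2%}")
```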

Advantages and disadvantages of A/B testing in brand design

Using A/B testing has clear advantages, but also challenges. These are the points I consider most relevant:

  • Data-driven decisions: the major benefit is replacing assumptions and opinions with real metrics. VWO points out that A/B testing takes the guesswork out of website optimisation because it allows data-driven decisions to be made.

In branding, this helps support creative changes with demonstrable evidence: for example, validating whether one claim generates more interest than another, or whether a certain visual style captures attention better.

  • Iteration and continuous learning: A/B testing makes it possible to test specific hypotheses and learn what works best for the audience.

Optimizely stresses that testing allows for "small, careful changes to the user experience while collecting data on the impact". This encourages an iterative process: each test provides learnings that are applied to the next, progressively refining the brand narrative.

And if a change turns out not to be effective, the variant can simply be discarded: lessons are learned and a new idea can be tried without compromising the entire campaign.

  • Resolution of opinion debates: in creative environments there are often differences of opinion. An A/B test helps resolve these disputes by putting the versions under debate to the test.

In the words of Clearleft, testing can "bring out the most effective solution quickly" and avoid endless discussions about design or copy.

But what are the disadvantages?

There are also disadvantages and limitations to consider:

  • Focus on metrics, not the big picture: by concentrating on quantitative results, there is a risk of losing sight of qualitative or strategic aspects.

As Clearleft warns, A/B testing can "take human judgement out of the equation" by putting KPIs before the full user experience, because a "better" experience is not always the one that achieves objectives faster.

In branding, where consistency and emotion are vital, a change that seems to "win" in the short term can still harm the brand identity. Not everything is measured in clicks, and sometimes a "prettier" design might be preferred even if it converts a little less.

  • Resources and technical complexity: Implementing A/B tests requires time, tools and knowledge. 

The process is demanding: it requires well-defined metrics, audience segmentation, deploying changes to production and statistical analysis of the results.

As Clearleft puts it, "choosing and implementing A/B solutions takes time and money... who is in charge of this, managing it, paying for it?". That is why many companies do not invest in A/B testing, assuming it is not really necessary or will not have that much impact on their campaigns or brand image.

It is true that, in practice, a design team must collaborate with developers, analysts and campaign managers, which can complicate projects, increase costs and become a problem if the company has not mastered the technique.

  • Need for traffic or interactions: to obtain reliable results, a sufficiently large sample is needed.

If the site or campaign has low traffic, the test may take a long time to show significant differences (or may give misleading and inaccurate results). 

In branding, where changes can be macro (e.g. a logo redesign), it is often not feasible to split the audience into thousands of people. This limits the scope of testing: it is far more feasible in digital media (web, social networks, email) than in static media such as press or TV, where variants cannot be served and measured in the same way (see the sample-size sketch after this list).

  • Small changes rather than transformations: The very structure of A/B testing lends itself better to adjusting details (button colour, ad headline) than to re-evaluating an entire narrative. 

Changing a brand's tone of voice or rethinking an overall concept can be too risky to A/B test directly. In practice, testing is therefore used to adjust individual elements within a predefined narrative, not to reinvent the brand story from scratch.
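
Returning to the traffic point for a moment: a back-of-the-envelope calculation shows why low-traffic brands struggle. This sketch uses the standard normal-approximation formula for comparing two proportions (95% confidence, 80% power); the baseline rate, target rate and daily traffic are invented for illustration.

```python
import math

def sample_size_per_variant(p_base: float, p_target: float) -> int:
    """Visitors needed in EACH variant to reliably detect p_base -> p_target
    (normal approximation, 95% confidence, 80% power)."""
    z_alpha, z_beta = 1.96, 0.84            # 95% confidence, 80% power
    p_bar = (p_base + p_target) / 2
    delta = abs(p_target - p_base)
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p_base * (1 - p_base)
                               + p_target * (1 - p_target))) / delta) ** 2
    return math.ceil(n)

n = sample_size_per_variant(0.04, 0.05)     # detect a 4% -> 5% lift
daily_visitors = 400                        # hypothetical low-traffic site
print(f"{n} visitors per variant "
      f"(~{math.ceil(2 * n / daily_visitors)} days at {daily_visitors}/day)")
```

With these assumed numbers, the answer is thousands of visitors per variant and over a month of runtime, which is exactly why small changes on high-traffic touchpoints are usually tested first.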

Featured case studies

Leading companies have strategically incorporated A/B testing into their design and marketing culture. Some notable examples are discussed below:

Netflix: 

Netflix performs thousands of continuous tests to optimise its interface and content. 

For example, its Creative Services team tried different cover images ("artwork") for the same title. An initial experiment showed that changing the artwork of a given title increased audience interest and viewing time.

In a larger experiment, they served each user the most attractive covers (according to a pre-test) and managed to "significantly increase aggregate streaming hours", demonstrating that the right visual content drives users to discover and watch more titles.

Netflix also uses A/B testing in every corner of its product, from the menu layout (e.g. fewer rows with larger images generated 10% more navigation) to features such as the "Skip Intro" button (introduced after tests found a 15% improvement in satisfaction when users could jump straight to the content).

It also optimises its emails and notifications through testing. One cited study found that personalised emails ("Hey [Name], check this out...") achieved 25% more opens, and that notifications sent right at the end of an episode increased the likelihood of the user starting another episode by 30%.

All of this, taken together, shows that Netflix relies on A/B testing to fine-tune even its content narrative and communication, creating a hyper-personalised experience which is continually evolving.

Booking.com: 

This travel company is famous for basing all its decisions on data. Booking.com has built a large-scale experimentation platform that enables any business team to set up and run A/B tests, making "evidence-based decision making" possible.

They have been experimenting for almost a decade (thousands of tests a day!), analysing every design variation that can affect the user experience. For example, a case study in the hotel industry showed that moving from a simple fixed form to a dynamic user-friendly form increased bookings by 33%. 

Although they do not always publicise these tests, Booking.com is an example of how experimentation can be applied to optimise any brand touchpoint (search design, help messages, etc.) before rolling a change out to the whole platform.

As one former Booking designer put it: "What exactly did they test and how long ago? If the test failed, are we 110% sure it wasn't because of a bug?". This reflects their philosophy of never taking a concept for granted: if something can be improved with data, it is tested.

HubSpot: 

Best known as a marketing tool, HubSpot also applies A/B testing to itself.

A documented case (by the website Unbounce) shows how HubSpot conducted a test on the design of their weekly emails. A version with centred text was compared to a version with left-aligned text (see examples of emails below). 

Surprisingly, the variant with left-aligned text performed worse: it got far fewer clicks, and fewer than 25% of the left-aligned versions tested managed to outperform the original design.

Example of HubSpot's email marketing A/B test: variant A (centred text) vs. variant B (left-aligned text). The test revealed that the option that seemed more "readable" on paper, the left-aligned text, ended up generating worse engagement results.

This experiment illustrates how seemingly small narrative or formatting details can influence user response. HubSpot used this learning to adjust its email templates, and it also applies A/B testing to its landing pages and blog content to improve conversions.

For example, on its site and in its email campaigns it tests headlines, images and calls to action, measuring which variant generates more registrations or leads. Again, the message is that every element of brand storytelling, from the tone of an email subject line to the layout of an image, can be optimised with A/B testing.

Popular A/B testing tools

Numerous specialised platforms are available for A/B testing. Some of the best known are Google Optimize (free of charge, but withdrawn in 2023), Optimizely, VWO, AB Tasty and Adobe Target, among others.

For example, VWO stands out as a complete suite for experimentation and optimisation, Optimizely is a leader in web testing, and Adobe Target is part of the Adobe Marketing Cloud ecosystem.

In the field of advertising, Meta Experiments offers "conversion lift" studies that work through controlled testing: it divides the audience into test groups (who see the campaign) and control groups (who do not), measuring the actual incremental impact of the ads.

This methodology relies on randomised controlled trials, "the gold standard for determining causality", applied to advertising.
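
The arithmetic behind a lift study is easy to sketch. What follows is not Meta's API, just the underlying idea with invented numbers: compare the conversion rate of the exposed group against that of the held-out control.

```python
# Conversion-lift arithmetic with hypothetical figures.
test_group    = {"users": 50_000, "conversions": 1_150}  # saw the ads
control_group = {"users": 50_000, "conversions": 1_000}  # held out

rate_test = test_group["conversions"] / test_group["users"]
rate_ctrl = control_group["conversions"] / control_group["users"]

incremental_rate = rate_test - rate_ctrl           # absolute lift
relative_lift = incremental_rate / rate_ctrl       # relative lift
incremental_conversions = incremental_rate * test_group["users"]

print(f"Test: {rate_test:.2%}  Control: {rate_ctrl:.2%}")
print(f"Relative lift: {relative_lift:.1%} "
      f"(~{incremental_conversions:.0f} incremental conversions)")
```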

It is worth mentioning that Google Optimize was very popular (especially among SMEs) because it allowed A/B testing linked to Google Analytics.

However, in September 2023 Google withdrew it definitively, so anyone still using it should migrate to alternatives such as VWO or Google Analytics 4 experiments.

Fortunately, the list of options is extensive: from specialised platforms to solutions integrated into marketing suites. The choice depends on the budget, traffic volume and the complexity of the tests you want to run.

A brief technical guide to implementing an A/B test

Although our focus here is on strategy, it is important to understand the basics of how an A/B test is set up technically, without going into formulas or techniques that require prior study.

In practice, all A/B tests must follow these fundamental steps to achieve results:

We define the hypothesis and key metrics: 

Before testing anything, formulate a clear hypothesis ("If we change X to Y, we expect Z"). Associate it with a success metric (e.g. click-through rate, registrations, dwell time) that allows you to compare variant A against variant B quantitatively.
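
One way to impose that discipline is to write the hypothesis down as a structured record before touching any design. A minimal sketch; the field names and figures are illustrative, not any standard.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    change: str      # the X -> Y we are testing
    metric: str      # the single KPI that decides the test
    baseline: float  # current value of that KPI
    expected: float  # what we expect if the hypothesis holds

h = Hypothesis(
    change="Replace the generic headline with the brand tagline",
    metric="newsletter sign-up rate",
    baseline=0.031,
    expected=0.037,
)
print(f"If we apply '{h.change}', we expect {h.metric} "
      f"to move from {h.baseline:.1%} to {h.expected:.1%}.")
```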

We prepare the variants: 

Create the original version (A) and the version with the proposed change (B). Make sure that only one variable changes between the two (e.g. only the header design, not the whole email), in order to correctly attribute any effects.

You can use code, a web builder with A/B support (native or via plugin), or testing platforms (such as Optimizely or VWO) that inject the change dynamically.

We segment and split the sample: 

Randomly assign your visitors or users to version A or B. It is crucial that the two groups are statistically equivalent. 

For emails, for example, each half of the subscribers will see a different version. For web, traffic is split into two groups in real time. Today's testing platforms handle this randomness automatically.
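
For the curious, a common implementation trick behind that automatic randomness is to hash a stable user ID, so the same person always sees the same variant on every visit. A minimal sketch, not any particular platform's code.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "headline-test") -> str:
    """Return 'A' or 'B' for a user, stable across visits."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

for uid in ["user-001", "user-002", "user-003"]:
    print(uid, "->", assign_variant(uid))
```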

We run the test and collect data: 

Let the experiment run long enough to reach a sufficiently large sample size, so that any difference is not down to chance. During this period, collect data on your KPIs for each variant. Do not change anything else or give users clues that they are part of a test, to avoid bias.

We analyse results: 

Use analytics tools or the testing platform's own metrics to assess which variant wins. Look for differences that are statistically significant at your chosen confidence level (typically 95%). 

Check not only the main metric, but also secondary effects (e.g. if conversion increases, does time on page decrease?).
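
If your platform does not report significance for you, the classic check for two conversion rates is a two-proportion z-test. A minimal sketch with invented counts.

```python
import math

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z statistic, two-sided p-value) for two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)             # pooled rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF, via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z, p = two_proportion_z_test(conv_a=410, n_a=10_000, conv_b=480, n_b=10_000)
print(f"z = {z:.2f}, p-value = {p:.3f}")
print("Significant at 95%" if p < 0.05 else "Keep collecting data")
```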

We implement the learnings and repeat: 

If variant B wins, you can apply that change as the new official version. If A wins, keep the original. Either way, record the learnings. Then come up with the next hypothesis and repeat the process to keep optimising your brand.

Technically, A/B testing platforms take care of the random segmentation. As designers, it is enough to know what to test and how to read the results. 

Even so, it is advisable to coordinate with development and analytics teams to ensure the test is tracked correctly. For a brand, don't forget to include qualitative metrics (satisfaction surveys, brand lift measures) where relevant: not everything is reflected in immediate clicks, and brands also aim to transmit and arouse emotions.

Practical recommendations

A/B experimentation in brand storytelling is a creative and analytical process at the same time. To implement it successfully, here are some recommendations that have helped me and that I wish I had known before I started running this kind of test:

Start with clear objectives: 

Before starting, define what you want to improve (e.g. brand awareness, engagement with content, subscriptions) and what your KPI benchmark will be. From that point, come up with a hypothesis for change that is aligned with your brand identity and your overall strategy.

Test one element at a time: 

To maintain control, change only one element at a time (a headline, an image, a tagline, etc.). That way you will know exactly what made the difference and which of the two versions works best.

Consider the audience: 

Make sure you split your sample well, or use segmentation (by demographics, by channel, etc.) if your brand targets heterogeneous audiences. For example, you could test two different tagline messages in different market segments to see which one resonates better with each group.
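
When splitting a segmented list (a newsletter, say), shuffle within each segment and halve it, so both variants see the same audience mix (a stratified split). A minimal sketch with invented subscribers and segments.

```python
import random

random.seed(7)  # reproducible illustration

subscribers = [
    {"email": "a@example.com", "segment": "students"},
    {"email": "b@example.com", "segment": "students"},
    {"email": "c@example.com", "segment": "professionals"},
    {"email": "d@example.com", "segment": "professionals"},
    {"email": "e@example.com", "segment": "professionals"},
    {"email": "f@example.com", "segment": "students"},
]

def stratified_split(people: list) -> tuple:
    """Return (group_a, group_b) balanced across segments."""
    group_a, group_b = [], []
    for seg in {p["segment"] for p in people}:
        members = [p for p in people if p["segment"] == seg]
        random.shuffle(members)
        half = len(members) // 2
        group_a += members[:half]
        group_b += members[half:]
    return group_a, group_b

a, b = stratified_split(subscribers)
print("A:", [p["email"] for p in a])
print("B:", [p["email"] for p in b])
```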

Use good tools: 

Lean on reliable testing platforms (Optimizely, VWO, Meta Experiments, etc.) that simplify implementation and analysis of results. You can also do simple tests with Google Analytics 4 (Experiments) or even with newsletters (many email marketing tools allow you to split lists).

Measure rigorously: 

Don't just rely on intuitive indicators; calculate the statistical confidence level. If you don't have massive traffic, extend the test duration or focus on higher-impact tests. Also remember to complement the numbers with qualitative metrics (brand lift surveys, direct feedback) to understand the "why" behind the hard data.

Iterate and refine the narrative: 

Every test, successful or not, is a learning opportunity. Document what works (e.g. the visual style that appeals most) and apply it to other brand pieces. 

If something doesn't work, don't dismiss it as a bad idea outright; rethink the execution. Little by little, you can develop a brand voice and aesthetic that is more in tune with your audience.

Balance data and brand coherence: 

Although the numbers guide you, make sure the changes refine, not break, the brand identity. 

Run A/B tests within the graphic and tone-of-voice guidelines defined in your style guide. Experimentation is a tool to improve the existing narrative, not to erratically change the essence of the brand.

To conclude, introducing A/B testing into the creation of content and brand messages leads to a more rigorous and, of course, more effective creative approach. 

As designers, becoming "creativity scientists" allows us to justify every change to clients or bosses, reduce risks and enhance what really impacts the user.

Enrique Trigo

UX Designer