Explore our guide to learn what A/B testing is and how it can help you scale your publishing business by removing the guesswork.
A/B testing has been a boon for publishers and bloggers alike. It has changed the course of decision-making, turning it from intuition to data. As publishers, we tend to misinterpret or even sideline chance, probability, and randomness while making decisions. This is where the statistical rigor of A/B testing comes in.
Over the years, A/B testing for publishers has taught us one fact: If data doesn’t support it, it ain’t the right decision.
Our guide will explore all facets of split testing for publishers: what A/B testing is, its benefits and challenges, and how publishers can get started with it.

What is A/B Testing for Publishers? (A/B Testing Meaning)
Let’s first understand A/B testing. It is a form of conversion rate optimization test (CRO test) wherein two versions of a particular element are shown to portions of live traffic to identify which variation drives more conversions. In simple words, A/B split testing identifies the better-performing variant based on audience interaction data.
A/B testing is used extensively in digital marketing. Be it ad copy, newsletters, or ad creatives, it has made the decision-making process more streamlined.
When we talk about split testing for publishers, we specifically highlight the processes that publishers implement most frequently. These include ad placements, ad format selection, and website content presentation, among other things.
Now that we have discussed what A/B testing is, let’s explore examples to understand it better.
Examples of A/B Testing
A/B testing in digital marketing is an industry-wide phenomenon. Publishers and advertisers alike utilize this tool to improve their marketing processes.
Let’s explore some A/B testing examples to understand the instances where you can apply this technique.
1. HubSpot’s Newsletter Subscriber Experience
Goal: To test how newsletter content alignment can impact CTA clicks.
Variant A (Control): Center alignment for email text
Variant B: Left alignment for email text
Result: HubSpot discovered that Variant B got fewer clicks than Variant A. Of the total Variant B emails sent, fewer than 25% received more clicks than the control.
2. Highrise’s Headline and Subheadline Test
Goal: To test which headline would prompt more signups
Variant A: Start a Highrise Account – Pay as you go. 30-day free trial on all accounts. No hidden fees.
Variant B: 30-day Free Trial on All Accounts – Sign-up takes less than 60 seconds. Pick a plan to get started!
Result: The Google Analytics Experiments test revealed that the variation informing visitors that the sign-up process is quick resulted in a 30% increase in clicks.
Now that we are done answering the burning question of what A/B testing is, let’s explore how it is performed.
How to Do A/B Testing?
A/B tests, also called split tests, are conducted in a controlled manner.
Consider this: you are a publisher who is planning to run banner ads on your website. Your aim is to maximize the ad revenue without compromising the UI/UX of the website.
1. You define your objective: which banner ad placement will generate a higher eCPM – placement A or placement B?
2. Now, you create two variants. Option A (control) is your current setup and Option B (test) is the modified version.
3. Then, you randomly divide the traffic so that 50% of your visitors see the current setup and 50% see the modified version.
4. Run the test long enough to collect statistically significant data for analysis.
5. Finally, you measure the key metrics, such as CTR, impressions, bounce rate, RPM, page load speed, and user engagement duration.

Based on the findings, you select the better-performing placement.
Et voilà! Your A/B test is successful!
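To make the measurement step concrete, here is a minimal sketch in Python of how the final significance check might look, using a two-proportion z-test from statsmodels. The click and impression counts are hypothetical, purely for illustration:

```python
# Minimal sketch: comparing two ad placements with a two-proportion
# z-test. All counts below are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

clicks = [420, 510]                # clicks on placement A (control) and B (test)
impressions = [100_000, 100_000]   # impressions served to each variant

# H0: both placements have the same click-through rate.
stat, p_value = proportions_ztest(clicks, impressions)

ctr_a, ctr_b = (c / n for c, n in zip(clicks, impressions))
print(f"CTR A: {ctr_a:.4%} | CTR B: {ctr_b:.4%} | p-value: {p_value:.4f}")

if p_value < 0.05:
    print("Statistically significant: adopt the better-performing placement.")
else:
    print("Inconclusive: keep the test running or collect more traffic.")
```

CTR is only one of the metrics listed above; in practice, you would run the same kind of check on whichever metric maps to your objective.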
What can Publishers do with A/B Testing?
Publishers can use A/B testing and Multi-Armed Bandit solutions to make data-driven decisions and scale their publishing business. These techniques can also sharpen your optimization efforts and increase ad revenue. Here’s what can be done:
Optimize Ad Placement
Publishers can identify ideal ad placements for their websites through continuous split testing. They can test various locations within the website:
- Above the fold vs below the fold
- In-between the content vs sidebar
- Sticky ads vs floating ads
This can help identify the placements that bring maximum impressions or clicks without affecting the user experience.
Compare Ad Formats
Apart from ad placements, you can also compare ad formats – such as display banners, native ads, and video units – to identify the ones that suit your website best.
Content Layout and UX
The content layout plays a vital role in determining both engagement and monetization. As a publisher, you can test:
- Number of subheadings/paragraphs before displaying an ad
- Article format and length
- Mobile vs. desktop rendering
Split testing the content layout will assist you in achieving the optimal balance between content readability and ad revenue.
Evaluate Different Ad Networks or SSPs
Apart from website-centric elements, publishers can also conduct A/B tests to evaluate their ad stack. By comparing metrics like fill rate, eCPMs, and ad relevance, publishers can rotate between multiple ad networks and SSPs to maximize their ad yield.
Publishers can also conduct A/B tests to:
- Test paywall strategies or subscription prompts
- Improve SEO and user engagement
- Reduce page latency
- Segment audiences for personalization
3 Benefits of A/B Testing for Publishers
A/B testing offers a number of benefits to publishers, such as:
No Guesswork or Intuition
Conducting an A/B test removes any room for guesswork, intuition, or gut feeling. Hence, if you have any doubts, you can simply A/B test them and let the audience guide you toward the decision.
Data-driven Decisions
An A/B test strictly operates on data, i.e., it presents user engagement, CPM, and other metrics to clearly identify the better-performing variant. This way, you can make informed decisions about your yield strategies.
Smarter Content Strategy
By continuously testing meta descriptions, headlines, sub-headings, and writing style, publishers can identify traffic-driving and conversion-leading elements. Through this, they can shape a content roadmap that aligns with the audience’s interests.
3 Limitations of Current A/B Testing Software
Most A/B testing software is not made for publishers. Here’s why:
Can’t Track Ad Clicks to Measure Results
Since most ad networks serve their creatives inside an iframe, A/B testing software is unable to track ad clicks. Not to forget, directly measuring ad clicks with any analytics or testing software is also a program policy violation for many ad networks.
No Support for Creating Variations Automatically
A/B testing ads can be significantly more complicated because the number of variations rises exponentially. Typically, you would want to show 3 ad units on a page, and you have 6-7 key spots, 2-3 important ad sizes for each location, and 5-6 color themes for each spot.
The number of variations easily reaches the hundreds, and there is no reason why creating them should not be automated. Of course, creating a very large number of variations is only advisable for large publishers. The back-of-the-envelope calculation below shows how quickly the count grows.
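As a rough illustration, here is how that combinatorial explosion can be estimated in Python, using representative numbers from the paragraph above and ignoring color themes entirely to keep the count conservative:

```python
# Rough estimate of how many layout variations the numbers above imply:
# 7 candidate spots, 3 ad units per page, 3 size options per unit.
# Color themes are deliberately left out to keep the count conservative.
from math import comb

spots, units, sizes = 7, 3, 3

placements = comb(spots, units)           # ways to pick 3 of 7 spots = 35
variations = placements * sizes ** units  # 3 size choices per unit = 945

print(variations)  # 945 variations before color themes even enter the picture
```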
Manually Editing Code Does Not Help Either
Even if you manually create the new ad code that you want to compare against the control – say, in AdSense’s Ad Management panel – you can’t just edit the HTML code of the variation inside popular A/B testing tools (most of them are client-side). This will not work.
This is because client-side A/B testing tools push your updates onto the page after it loads. The AdSense JavaScript will not execute, and not to forget – tampering with the AdSense JS code is against program policies.
5 Best Practices for Effective A/B Testing
Publishers need to keep certain things in mind while conducting an A/B test, such as:
Test One Element at a Time
An A/B test is like any other experiment: you test one single element and hold all other variables constant to get accurate results. In simple words, the key to precision in A/B tests is changing one aspect at a time. By doing this, you can isolate the effect of each modification and act on it accordingly.
For example, if you want to test ad sizes, then stick to just that. Keep all other variables the same – the ad format, ad placement, copy, creative, and even the stack.
Do Not Rush
This point is important from a data-collection point of view. It is recommended to wait till you have at least 1,000 monthly website visitors before you get into split testing. While it’s not a hard-and-fast rule, a small sample size can skew your results and render the test inconclusive or even invalid. The sketch below shows one way to estimate how much traffic a test actually needs.
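For a rough sense of the traffic required, here is a minimal sketch in Python using statsmodels’ power analysis. The baseline CTR and the lift you hope to detect are assumptions you would replace with your own numbers:

```python
# Sketch: estimating how many visitors each variant needs before a
# CTR test can be trusted. The CTR figures below are assumptions.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_ctr = 0.010   # hypothetical 1.0% CTR on the control
expected_ctr = 0.012   # the lift you hope to detect: 1.2%

effect = proportion_effectsize(expected_ctr, baseline_ctr)
n = NormalIndPower().solve_power(effect, alpha=0.05, power=0.8)

print(f"~{n:,.0f} visitors per variant")  # small lifts demand large samples
```

For a lift this small, the answer lands around twenty thousand visitors per variant, which is why the 1,000-visitor figure should be treated as a bare minimum, not a target.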
Conduct A/B Tests across Devices
Your audience can access your website from any device. Hence, it makes absolute sense to conduct the test across multiple devices (laptops/mobiles/tablets). Through this, you can identify any device-specific user behavior and optimize the user experience accordingly.
Monitor Performance Continuously
While performing A/B tests, keep yourself updated on the ongoing results, preferably on a daily basis. If you notice a variation significantly underperforming, be ready with a mitigation plan or terminate the test.
Keep External Factors at Bay
A/B tests require a stable environment with no external interruptions that can disrupt the test flow or skew the performance data. Hence, ensure that website changes or marketing campaigns do not influence the test in any way. A word of advice: it’s better to put ongoing changes on hold while the test runs to get precise results.
A/B Testing Statistics
1. TrueList states that:
- The global A/B testing software market is expected to reach $1.08 billion by 2025.
- A/B testing statistical significance can be reached with a minimum of 5000 unique visitors.
- Simple subject lines receive 541 percent more responses than creative ones.
2. According to LLCBuddy:
- A/B testing is used by 60% of businesses for landing pages, and 63% say it is easy to execute.
- 77% of marketers use A/B testing, with 60% using it on landing sites, 59% on emails, and 58% on PPC.
- Bing reported that using A/B testing on display ads resulted in a 25% increase in ad revenue.
- Obama’s digital team utilized A/B testing during his presidential campaign to increase contribution conversions by 49%.
- According to HubSpot, after an A/B test, emails with the sender’s real name received 0.53% more opens than emails with the sender’s business name.
3. The most complex split testing methodologies are utilized in industries where conversions are critical, such as SaaS, IT, retail, and e-commerce. (Fibr.AI)
Towards MAB and Continuous Optimization
Technically, the ideal product for publishers is not A/B testing; what they should use instead is a multi-armed bandit (MAB) solution, simply because optimization should be seen as a continuous process.
Anybody who has changed the layout of a website that serves ads will have noticed the CTR going up right after the design/layout change. This happens primarily because users develop banner blindness toward a static layout over time.
Continuous MAB testing is the perfect solution. Building a leading indicator of changing user behavior toward ad units isn’t a difficult task, and it can help keep ad units performing well all the time.
In non-technical language: once you start the experiment on your website, continuous optimization and MAB will keep serving the majority of your impressions to the best-performing variation. However, as soon as the system notices a (statistically significant) drop in CTR, it will re-explore and find the best possible variation on its own.
It is also technically feasible to use machine learning to find the best spot for your advertisements (outside the defined variations) and adjust it as user behavior changes.
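To make the MAB idea concrete, here is a minimal sketch of Thompson sampling, one common bandit strategy, in Python. Each layout variation gets a Beta posterior over its CTR, and every impression is routed to the variation whose sampled CTR is highest. The “true” CTRs in the simulation are hypothetical:

```python
# Sketch of Thompson sampling: each ad layout variation keeps a Beta
# posterior over its CTR; each impression goes to the variation whose
# sampled CTR is highest. The simulated "true" CTRs are hypothetical.
import random

true_ctrs = [0.010, 0.014, 0.009]   # unknown in a real deployment
wins = [1, 1, 1]                    # Beta prior: start each arm with
losses = [1, 1, 1]                  # one click and one non-click

for _ in range(50_000):             # one iteration per impression
    samples = [random.betavariate(w, l) for w, l in zip(wins, losses)]
    arm = samples.index(max(samples))   # serve the most promising layout
    if random.random() < true_ctrs[arm]:
        wins[arm] += 1              # the impression got a click
    else:
        losses[arm] += 1            # no click this time

traffic = [w + l - 2 for w, l in zip(wins, losses)]
print(traffic)  # most impressions end up on the best arm (index 1)
```

Unlike a fixed 50/50 split, the bandit shifts traffic toward the winner while it is still learning; handling genuine shifts in user behavior, as described above, typically requires a variant that discounts old data.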
Key Takeaways
What is A/B testing: It is a technique where two variants of an element are exposed to live traffic to identify the conversion-driving variant.
What can publishers do with A/B testing: Optimize ad placements, compare ad formats, optimize content layout and UX, and evaluate different ad networks/SSPs, among other things.
Benefits of A/B testing: decision making with no guesswork, data-driven yield strategies, and a smarter content strategy that aligns with audience interests.
Limitations of current A/B testing software: it can’t track ad clicks to measure results, offers no support for creating variations automatically, and manually editing ad code doesn’t work with client-side tools.
Best practices of A/B testing: Test one element at a time, don’t A/B test with a small sample size, conduct split test across multiple devices, and continuously monitor the performance.
Multi-armed Bandit Solutions (MAB): A more refined form of A/B testing that uses machine learning to divert traffic to the higher-performing variant in real time, making it more cost-effective and potentially faster than traditional split testing.
Are you planning to conduct A/B testing on your website? Contact AdPushup now and get a free demo. We offer automated split testing for publishers as part of our ad layout editor. The control stays with the publishers, allowing them to test new layouts and their variations whenever they want.
FAQs on A/B Testing for Publishers
Which tools can publishers use for A/B testing?
There are many A/B testing tools for publishers, such as VWO and Optimizely. You can also consider AdPushup for conducting A/B tests on your website. With our years of expertise, we offer the best modalities to increase ad yield.
What is A/B testing?
A/B testing is a performance-measurement technique where two variants of a webpage or app are compared to identify which one performs better. Also called split testing, it is a methodology under conversion rate optimization tests that aims to increase the conversion rates for a business.
