Over the years, ad tech has become a complex environment that is difficult to navigate. The multiplicity of actors, and of ways to integrate them, adds to this complexity and drives up operational costs: tons of SSPs (open exchanges, specialized SSPs, trading desks, etc.), multiple wrappers to connect with, and many ID providers, among others. With such a vast range of options, it is often hard to figure out which ones are best for you, especially when your testing capabilities are limited.
Connecting a new partner adds many layers of complexity to both the decision-making process and the implementation, and it often comes at a high cost. You should therefore adopt a “test & learn” approach to identify the partners not worth your time and those you should focus on, in order to limit operational costs. But what drives operational costs? There is a simple rule: more partners = more operational costs.
This does not mean that you should never plug in new partners. It means that many benefits can come from selecting them carefully.
Therefore, a robust and reliable “test & learn” process should be put in place to try out and select only the best partners, as connecting a new one (a new SSP or ID provider, for example) can represent an organizational and legal challenge. There are traditional metrics to track, such as revenue, RPM, CPM, and fill rate. Still, these do not account for the possible cannibalization between partners, nor for seasonality, which can make before/after comparisons difficult.
So here are two additional, less common ways to analyze the success of a test:
Incrementality looks inside an auction and measures by how much your partner outbids the competition. For example, say that in one of your auctions, Rubicon had a winning bid of 3€, followed by the Trading Desk with a 2€ bid. The incrementality of Rubicon is therefore 1€: without Rubicon, you would have made 2€ instead of 3€.
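This per-auction calculation can be sketched in a few lines. This is a minimal illustration, assuming you have access to the list of (bidder, bid) pairs for each auction; the function name and data shape are hypothetical, not part of any real SSP or wrapper API.

```python
def incrementality(bids):
    """Return the winner and its incrementality: the winning bid minus
    the best competing bid from a different partner (0 if unchallenged)."""
    ordered = sorted(bids, key=lambda b: b[1], reverse=True)
    winner, winning_bid = ordered[0]
    # Best bid from any other partner in the same auction.
    runner_up = next((bid for name, bid in ordered[1:] if name != winner), 0.0)
    return winner, winning_bid - runner_up

# The example from the text: Rubicon wins at 3€ over a 2€ bid.
print(incrementality([("Rubicon", 3.0), ("Trading Desk", 2.0)]))
# → ('Rubicon', 1.0)
```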
This metric lets you evaluate the quality of a partner, and whether or not you should work with them, by separating partners that simply bring high revenue from those that bring both high revenue and high incrementality.
Take the example of a partner that brings in 20k€ of revenue. You will always be tempted to keep it, because it brings in a lot of money. But part of those 20k€ could have been captured by other SSPs. When you calculate the unique value, you might realize that only 3 or 4k€ out of the initial 20k€ are actually unique to this specific partner. The question to ask yourself then becomes: are these 3 or 4k€ of unique value enough to compensate for the operational costs that come with this partner? It is very important to clearly define these thresholds for each partner.
Thus, calculating incrementality helps you understand how much revenue you would lose by unplugging a given SSP, and whether you should test a new partner or simply cut your operational costs.
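Aggregated across many auctions, the same logic yields each partner's unique value, which you can then weigh against an operational-cost threshold. The sketch below is purely illustrative: the auction log format and the cost figures are assumptions, echoing the 20k€ vs. 3-4k€ example above.

```python
from collections import defaultdict

def unique_value(auctions):
    """auctions: list of auctions, each a list of (partner, bid) pairs.
    Returns each partner's total incremental (unique) revenue in €."""
    totals = defaultdict(float)
    for bids in auctions:
        ordered = sorted(bids, key=lambda b: b[1], reverse=True)
        winner, top = ordered[0]
        runner_up = ordered[1][1] if len(ordered) > 1 else 0.0
        totals[winner] += top - runner_up
    return dict(totals)

def keep_partner(unique_revenue, operational_cost):
    """Keep a partner only if its unique value covers its operational cost."""
    return unique_revenue >= operational_cost

# Hypothetical figures: 3,500€ of unique value against 5,000€ of
# operational cost → the partner does not pay for itself.
print(keep_partner(3_500, 5_000))  # → False
```

The threshold itself is a business decision: the code only makes explicit the comparison the text describes.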
A/B testing is the most precise way to measure a new partner’s added value, on an almost unlimited scope and at a very low cost. Nonetheless, you’ll need to invest in a tool, such as Pubstack, during the early stages of your decision-making process to avoid having to deal with many low-value partners in the long term.
The idea, for example, is to call your partner on 90% of your traffic and keep a control group of 10% where the SSP or ID provider is not called. You can then measure the impact on the website’s RPM, evaluate the partner’s real uplift, and make a very robust decision. This approach sidesteps any cannibalization between SSPs and lets you see the real uplift.
A/B tests are very efficient, and they are not only applicable to vetting new partners: they can apply to many different processes, such as testing new SSPs, new ID providers, or new layouts, among others.
With these smart steps, you’ll be able to carefully select the partners with the strongest strategic interest and avoid being overwhelmed by the operational costs that come with bringing in new partners.
However, you may have noticed that each of these steps requires a detailed approach, only made possible through extensive use of analytics within your ad stack. This strong use of analytics is notably what helps publishers monitor and solve discrepancies in their ad revenue, a topic we covered recently, as it seems to be in demand among publishers combining several revenue sources.
If you’re still curious about what superior data granularity can do for your ad stack, we invite you to discover our interview with Kim Skovgaards Jørgensen, Programmatic Lead at Step Network, who has been using our solution after trialing several others.