
Why Publishers Should Embrace Ad Exploratory Testing

Alex Carl
Updated on January 29, 2020
Best Practices

Let’s throw out the antiquated idea that ads exist outside of the “real” product and therefore aren’t beholden to the same quality standards. Monetization tools are real products that should deliver value not only for brands and agencies but for the end users who see the ads too.

Ads that don’t load, can’t be interacted with, break across devices, crash the app or browser, or rely on dark patterns aren’t just unengaging; they’re unacceptable. And if you’re monetizing with ads, how can you possibly justify premium rates when pitching your product to advertisers?

The era of “throw some code on a page and pray it works” ended for professional web apps before Bill Clinton left office in 2001, but this attitude is still the norm for ads - particularly for programmatic ads.

A poor ad experience is a poor user experience, period.

So with that in mind, we can identify tools to turn poor ad experiences into ones that deliver real value. Exploratory testing is one such tool.

What is exploratory testing?

Exploratory testing can be summarized as “testing like a human”. Humans don’t think in terms of test cases or test suites or even test steps. Humans like to try stuff and watch what happens. They ask themselves, “Does this work? How about this? What if I tweak that and try again? Hmm, now why is THAT happening?”

Then, when an exploratory tester finds an issue, they note what they were doing so they can reproduce it. Over time they learn to identify common problematic behaviors in the product, and their testing becomes more efficient at finding bugs.

How does exploratory testing work?

The key to successful exploratory testing is relying on a tester’s brain rather than a script. No predetermined suite of tests can capture the creativity and judgment a human brings to creating tests on the fly.

Exploratory testing may seem like “playing around” with a product, and in many ways that’s what it is. But there’s a method to the madness— the tester is trying to uncover what’s behaving according to spec and what’s not. Furthermore, the tester can discover undefined behavior that the spec didn’t cover, and that’s one of the real values of exploratory testing.

Why the QA process can fail (with jokes)

There’s a software joke that goes: “A software QA engineer walks into a bar. Orders a beer. Orders 0 beers. Orders 99999999999 beers. Orders a lizard. Orders -1 beers…”

This is your typical test-driven QA process— there’s a test suite derived from a spec that defines how to order beer. The tests cover both the “correct” and “incorrect” cases.

But there’s an appendix to that joke: “The QA engineer signs off on the bar. The first customer walks in and asks where the bathroom is. The bar explodes.”

The customer performed a perfectly reasonable activity for a bar (asking where the bathroom is), but because bathroom requests weren’t covered in the spec, the QA engineer missed something they would have caught had they tested the bar “like a human”.

How might we leverage exploratory testing for ad products?

We first need to identify the risks to quality up front. Risks exist broadly at the system level, as in Rex Black’s list of quality risk categories. They also exist at the product or feature level, like “clicking on the ad does not register an event” or “the ad breaks the styling of the header”.

Using the bar example, some functionality risks are “when interacting with the bartender, you can only order beer” and “you cannot get directions to the bathroom”.

Enumerating the risks gives us some boundaries for exploratory testing. By identifying what’s at stake, we know what could go wrong, even where the spec doesn’t cover it, and we can watch for those failure modes as we test. Other good practices include:

  • Setting the right time box is crucial for good exploratory testing. Since there’s no traditional definition of “test coverage” here, we have to use our judgment for how long something should be tested. In general, the greater the risks under test, the bigger the time box to use.
  • Using a staging environment with realistic ad data empowers testers in multiple ways - they can take risks when interacting with ads without affecting production data (or brand engagement data), and prod-like ads give an accurate simulation of what end users experience. At Kevel, we encourage our customers to create a separate dev account for this.
  • Above all, a good exploratory tester must use their imagination when interacting with the product. They need to be empathetic to the needs of users while imagining how the identified risks can manifest themselves. For ad products, this means understanding the needs and perspectives of both brands/agencies and the end user seeing the ad.

What happens if we don’t conduct ad exploratory tests?

Exploratory testing takes time that many teams struggle to carve out, but even quick, surface-level testing by a human can save you from poor ad experiences like these:

Pop-up ads you can’t click out of

If your users can’t escape your ads, thanks to tiny exit buttons or fine print, they can’t (and won’t want to) see your site content. These are infuriating ad experiences that could tarnish their view of your brand.

[Image: a pop-up ad you can’t escape]

Ads that crash mobile apps or desktop browsers

If your ads crash your users’ browsers and apps, not only do you lose the chance to monetize them during that session, but they may never come back at all.

[Image: error message from an ad that crashes browsers and apps]

Ads that obscure the page layout or content

Surrounding and embedding your organic content with ads can distract users from engaging with either. Where does the ad end and the content begin? If it takes extra brainpower for your users to figure that out, they may just jettison your site altogether for a product with a simpler, ad-free layout.

[Image: ads that take over website content]

Incorrectly sized ads

Improperly sized ads don’t serve your users, your advertisers, or your brand at all. You may have spent years making sure your site is clean and sized properly, only to undermine that work by displaying ads that make your product look amateurish.

[Image: an incorrectly sized digital ad]

Ads with broken code

Probably worse than incorrectly sized ads are ads that display raw code (or don’t display anything at all) rather than the intended creatives, leading to a terrible user experience.

[Image: an ad with broken code]

Oddly-placed ads

Again, we find it amazing that companies with teams dedicated to site design and brand presentation somehow give ads a pass. Here’s an example from The New York Times Crossword section. The ad banner is out of place and enormous, with the ad frame taking up a third of the above-the-fold space while the banner itself fills only a small portion of it.

[Image: an oddly placed ad in the NYT Crossword section]

This is likely because they use the slot for differently sized banners. But for the sake of sleek site design, they could instead implement rules via ad serving APIs that identify the banner size and adjust the placement in real time, along the lines of the sketch below.
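As a rough illustration, here’s what that kind of rule could look like on the client side. This is a minimal sketch that assumes a decision endpoint returning the chosen creative’s width, height, and markup; the endpoint path and response fields are illustrative placeholders, not any particular ad server’s actual API.

```typescript
// Fit the ad container to the creative the ad server actually returned,
// rather than reserving a fixed, oversized frame.
// The endpoint and response shape are illustrative placeholders.
interface AdDecision {
  width: number;   // creative width in pixels
  height: number;  // creative height in pixels
  html: string;    // creative markup to inject
}

async function renderBanner(container: HTMLElement): Promise<void> {
  const res = await fetch('/api/ad-decision?placement=crossword-top');
  const decision: AdDecision | null = await res.json();

  if (!decision) {
    // No ad returned: collapse the slot instead of leaving a large blank gap.
    container.style.display = 'none';
    return;
  }

  // Size the frame to the creative so smaller banners don't float in empty space.
  container.style.width = `${decision.width}px`;
  container.style.height = `${decision.height}px`;
  container.innerHTML = decision.html;
}
```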

It also leaves a lot of blank space that could instead be used to upsell their News or Recipe subscriptions, or to run polished native ads. This is a win-win: users get a crisper, cleaner experience while The New York Times can charge higher ad rates for better ads (and/or upsell other products).

Improper context or contextual targeting

Off-brand advertisers or PR-nightmare scenarios can easily occur if you’re running programmatic ad traffic. Fortunately, human testing can identify when this occurs - and you can then block certain advertisers and/or ad partners.

[Image: improper contextual targeting]

To be fair, it’s hard to catch all of these situations if you’re using an ad exchange, and if you’re displaying direct-sold ads, this is far less likely to happen in the first place.

Exploratory testing is one of many tools

Exploratory testing is not a panacea for software quality, which is why it’s just one tool in our user experience toolkit.

You shouldn’t use exploratory testing for quickly identifying regressions. That task is better suited for automated integration tests. Similarly, there are automated tools for detecting known malvertising patterns and perpetrators that are more efficient than a human.
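To make that distinction concrete, here’s a minimal sketch of the kind of scripted regression check that’s better left to automation than to an exploratory tester. It assumes a Playwright test setup; the staging URL and the “#ad-slot” selector are illustrative placeholders.

```typescript
// ad-slot.spec.ts: a scripted regression check that an ad slot renders a creative.
// The URL and selector below are illustrative placeholders, not a real configuration.
import { test, expect } from '@playwright/test';

test('ad slot renders a visible creative with real dimensions', async ({ page }) => {
  await page.goto('https://staging.example.com/article');

  const adFrame = page.locator('#ad-slot iframe');
  await expect(adFrame).toBeVisible();

  // Guard against "broken code" creatives: the frame should have non-zero size.
  const box = await adFrame.boundingBox();
  expect(box?.width).toBeGreaterThan(0);
  expect(box?.height).toBeGreaterThan(0);
});
```

A human exploring the same page would notice far more (odd placement, jarring context, dark patterns), but a check like this catches a regression within minutes of a bad deploy.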

Overall, exploratory testing comes with a time cost— the time to identify risks, the time to test, and the time for a tester to learn about the product and improve their skills. And exploratory testing cannot be done casually. An unimaginative, unempathetic tester doesn’t provide a great advantage over a test suite.

But when executed well, exploratory testing is a powerful weapon against poor ad experiences.
