Use Creative Clustering to Optimize Your Ad Creatives

Singular Engineering Blog
6 min read · Oct 19, 2021


Here at Singular, we aim to be the best at what we do: utilizing every bit of your mobile marketing data to provide meaningful campaign analytics. In this blog post, we’ll explain what creative clustering is, demonstrate several image hashing methods that power it, and show how these techniques can be used to improve ad performance.

Creatives are the campaign images, ad text, or videos displayed to the user when an ad is served. Marketers constantly test different visual combinations on ad networks to determine which creatives drive the best performance in terms of revenue and app engagement. Using that data, they can optimize their ads effectively.

Often the marketer will use identical images across multiple campaigns (e.g., NY_males_18–35) and ad channels (e.g., Facebook, Twitter, etc.). In these instances, marketers need to group creative performance data for images that are exact matches.

However, in some cases, marketers might also want to test campaign image “themes” and aggregate data for similar images. But “Pixel-Perfect” matching algorithms frequently fail to aggregate data under image themes due to small differences in the images.

These differences in similarly-themed images typically arise in two types of scenarios:

  1. Intentional changes: small tweaks to the ad group creative, such as changing the ad text language or testing different color schemes
  2. Unintentional changes: changes made by the ad channel you’re marketing on. Ad channels often resize or re-encode creatives, so the images no longer match exactly

For instance, in the example below, one Viking image underwent minor visual changes. While the marketer might want to see how Viking-themed images performed against other themes, “Pixel-Perfect” matching won’t recognize the images as belonging to the same theme:

Small differences between these two ad campaign creatives cause them to be classified under different creative themes.

In another example, images are similar, but the accompanying text below (which is part of the image) is different. One is in English and the other is in German:

The above cases make it clear that the marketer may sometimes want to see how a type of image, or campaign image theme, performs, rather than how one exact image performs. It doesn’t really matter how the English version of the image above performs against the German version; what matters is how this image compares with images of other themes. For that, we can’t use a “Pixel-Perfect” matching algorithm. Instead, we need another kind of algorithm: a “Perceptual Hash”.

Perceptual Image Hashing

While you are probably familiar with hash functions like MD5 or SHA1, perceptual image hash functions are quite different. MD5 and SHA1 are mainly used as cryptographic hash functions and are designed to exhibit the avalanche effect: changing a single bit of the input produces a dramatically different, seemingly random hash.
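You can see the avalanche effect for yourself with Python’s built-in hashlib (our own quick illustration, not from the original post; the exact digests don’t matter, only that they bear no resemblance to each other):

import hashlib

# Changing a single character ("V" to "v") produces a completely different digest
print(hashlib.md5(b"Viking").hexdigest())
print(hashlib.md5(b"viking").hexdigest())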

Perceptual image hashing, by contrast, allows two similar images to be scaled or given different aspect ratios and still produce the same resulting hash (or a very close one). Most perceptual image hashing algorithms work in much the same way:

  1. Reduce to thumbnail size
  2. Reduce color to grayscale
  3. Average the resulting pixels
  4. Calculate the hash

For this demonstration, we’ll use the Average Hash (aHash) method to calculate the perceptual image hash:

(Image taken from Wikipedia, Creative Commons license)

Reduce size

This first stage allows the following stages to run much faster: we only need to process 64 pixels instead of millions. It also “normalizes” the scale of our image to a thumbnail size. A bigger thumbnail produces a more accurate hash, so choose the size that fits you, taking into account the tradeoff between accuracy and processing time.

from PIL import Image
image = image.resize((8, 8), Image.LANCZOS)  # spelled Image.ANTIALIAS in older Pillow versions

Reduce color

In this stage, we convert our image to grayscale. Removing the color leaves us with much less data to process and puts the emphasis on the structural similarities in the image.

image = image.convert("L")

Calculate the average pixel color

image_pixels = list(image.getdata())
avg_pixel_value = float(sum(image_pixels) / len(image_pixels))

Calculate the final image hash

hash_bits = ''.join(['1' if pixel > avg_pixel_value else '0' for pixel in image_pixels])
hex_hash = '0x' + format(int(hash_bits, 2), 'X')
print(hash_bits, hex_hash)

The output is:

1111100011110010010111110011110010011110110001001111000011111000 0xF8F25F3C9EC4F0F8

Now, let’s take the following modified, yet similar image and run aHash on it:

The output is:

1111100011110010010111110011110010011110110001001111000011110000 0xF8F25F3C9EC4F0F0

As you can see, the hash changed from 0xF8F25F3C9EC4F0F8 to 0xF8F25F3C9EC4F0F0, a change of a single bit. So, to compare how similar two images are, we count the number of bits that differ between the resulting hashes. This number is also called the Hamming Distance. The lower the distance, the more similar the images are.
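For example, here is a small helper (our own addition, not from the original post) that computes the Hamming Distance between two such hex hashes:

def hamming_distance(hash_a, hash_b):
    # XOR the two values; every 1-bit in the result is a bit that differs
    return bin(int(hash_a, 16) ^ int(hash_b, 16)).count("1")

print(hamming_distance("0xF8F25F3C9EC4F0F8", "0xF8F25F3C9EC4F0F0"))  # prints 1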

This is our final aHash function, assembled from the snippets above:
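from PIL import Image

def average_hash(image):
    # 1. Reduce to an 8x8 thumbnail (64 pixels)
    image = image.resize((8, 8), Image.LANCZOS)
    # 2. Reduce color to grayscale
    image = image.convert("L")
    # 3. Average the resulting pixels
    image_pixels = list(image.getdata())
    avg_pixel_value = sum(image_pixels) / len(image_pixels)
    # 4. Calculate the hash: one bit per pixel, set when the pixel is brighter than average
    hash_bits = ''.join('1' if pixel > avg_pixel_value else '0' for pixel in image_pixels)
    return '0x' + format(int(hash_bits, 2), 'X')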

But instead of implementing aHash ourselves, we can use the fantastic imagehash Python package. With imagehash, it’s very simple to calculate the aHash:

import imagehash
imagehash.average_hash(my_image)
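imagehash also makes the comparison step easy: subtracting two ImageHash objects returns the Hamming Distance between them. A quick sketch (the file names here are hypothetical):

import imagehash
from PIL import Image

# Hypothetical file names for the original and modified Viking creatives
hash_a = imagehash.average_hash(Image.open("viking_original.png"))
hash_b = imagehash.average_hash(Image.open("viking_modified.png"))

# Subtracting two ImageHash objects yields their Hamming Distance
print(hash_a - hash_b)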

Now, let’s look at another hashing algorithm: dHash. While aHash uses the average pixel value to calculate the hash and is very fast, it can generate a lot of false positives. dHash instead uses gradients (the differences between adjacent pixels), which gives it a huge advantage: it operates at about the same speed as aHash but yields far fewer false positives. This is how dHash operates:

  1. Reduce size (commonly to 9×8 pixels, so each row of 9 pixels yields 8 differences)
  2. Convert to grayscale
  3. Calculate the differences between adjacent pixels: if a pixel is smaller than the pixel to its right, mark “1”, else “0”
  4. Calculate hash
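Here is a minimal sketch of these steps (our own illustration, assuming the common 9×8 thumbnail convention; the imagehash package’s implementation differs in its details):

from PIL import Image

def difference_hash(image):
    # 1. Reduce to a 9x8 thumbnail: each row of 9 pixels yields 8 differences
    image = image.resize((9, 8), Image.LANCZOS)
    # 2. Convert to grayscale
    image = image.convert("L")
    pixels = list(image.getdata())
    # 3. Mark "1" if a pixel is smaller than its right-hand neighbor, else "0"
    bits = []
    for row in range(8):
        for col in range(8):
            left = pixels[row * 9 + col]
            right = pixels[row * 9 + col + 1]
            bits.append('1' if left < right else '0')
    # 4. Calculate the hash
    return '0x' + format(int(''.join(bits), 2), '016X')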

For example, this is the result of running dHash on the above two images:

print(imagehash.dhash(original_image))
print(imagehash.dhash(best_plane_ever_image))
# Output
# 0b677d9a2429230d
# 0b677d9a24292305

We can see that the Hamming Distance between the two hashes is once again a single bit.

Looking ahead, we have several ideas for further developing our ad optimization analytics product, such as providing Creative Clustering for videos and offering even more valuable analytics on top of this framework.

Ultimately, you can use whatever image hashing or matching method seems suitable. While there is no “one size fits all” method, several popular methods are probably going to meet your needs.
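As a closing illustration, here is one way you might group creatives into theme clusters using a Hamming Distance threshold. This is a greedy sketch of our own, not the algorithm behind Singular’s product, and the 5-bit threshold is an arbitrary assumption you would tune on your own data:

import imagehash
from PIL import Image

def cluster_creatives(paths, threshold=5):
    # Each cluster is (representative hash, list of member paths)
    clusters = []
    for path in paths:
        h = imagehash.dhash(Image.open(path))
        for rep, members in clusters:
            # Subtracting two ImageHash objects yields their Hamming Distance
            if h - rep <= threshold:
                members.append(path)
                break
        else:
            # No close-enough cluster found; start a new one
            clusters.append((h, [path]))
    return [members for _, members in clusters]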

Found this interesting? Looking for a career? Singular is hiring!

This post was written by Nir Solomon. If you have any questions or feedback, please contact Nir at nir@singular.net.
