Latency—the delay that occurs in communication over a network—remains the enemy of ad tech, and by extension, the enemy of publishers and agencies relying on increasingly sophisticated tools to drive revenue and engage audiences.
With real-time bidding demanding sub-100-millisecond response times, advertisers are careful to avoid any process that could hinder their ability to win placements. Page load speed, meanwhile, remains a critical metric for publishers, as adding tracking pixels, tags and content reload tech to page code can inadvertently increase latency and, as a result, website bounce rates.
If you think a few dozen milliseconds here or there won’t tank user experience, note that the human brain is capable of processing images far faster than we previously thought. An image seen for as little as 13 milliseconds can be identified later, according to neuroscientists at MIT. The drive for greater speed and better performance will march on because users will demand it.
At its core, latency reduction—like the mechanics of transporting people—is governed by both physics and available technology. Unless a hyperloop breaks ground soon, you will likely never make a trip from Los Angeles to Chicago in two hours. It’s a similar story for the data traversing internet fiber optic cables across the globe. Even with a high-speed connection, your internet traffic is still bound by pesky principles like the speed of light.
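The speed-of-light bound is easy to put numbers on. Here's a rough sketch in Python, assuming light in fiber travels at roughly two-thirds its vacuum speed and using an approximate 2,800 km great-circle distance between Los Angeles and Chicago (both figures are ballpark assumptions, not measurements of any real network):

```python
# Back-of-envelope: the physical floor on network latency over fiber.
C_VACUUM_KM_S = 299_792   # speed of light in a vacuum, km/s
FIBER_FACTOR = 0.67       # light in fiber travels at roughly 2/3 c (assumed)

def min_rtt_ms(distance_km: float) -> float:
    """Lowest possible round-trip time over fiber for a given distance,
    ignoring routing, queuing and processing delays -- physics alone."""
    one_way_s = distance_km / (C_VACUUM_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000

# ~2,800 km great-circle distance, Los Angeles to Chicago (approximate)
print(f"LA-Chicago physical floor: {min_rtt_ms(2_800):.1f} ms round trip")
```

Even the theoretical best case eats roughly a quarter of a 100-millisecond bid window on that one round trip, before any real-world routing overhead is added.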
So how are ad-tech companies solving for latency?
The two most straightforward answers are to simply move data centers closer to users and exchanges or move the media itself closer via content delivery networks. The shorter the distance, the lower the latency.
A third, lesser-known tactic involves the use of internet route optimization technologies that operate much like a real-time traffic app you might use to shave minutes off your commute. Deploying this tech can significantly reduce latency, which can be directly correlated to upticks in revenue in the programmatic and digital ad space.
To understand how it works, let’s first consider how most internet traffic reaches your laptops, smartphones, and (sigh…) your refrigerators, doorbells and washing machines.
Unlike the average consumer, companies increasingly choose to blend their bandwidth with multiple internet service providers. In effect, this creates a giant, interconnected road map linking providers to networks across the globe. In other words, the cat video du jour has many paths it can take to reach a single pair of captivated eyeballs.
This blended internet service has two very real benefits for enterprises: traffic has a far better chance of always finding its way to users, and it travels by the shortest available route. But there's one very important catch: The shortest route isn't always the fastest route.
In fact, the system routing internet traffic works less like real-time GPS routing and more like those unwieldy fold-out highway road maps that were a staple of many family road trips gone awry. They are an adequate tool for picking the shortest path from point A to point B, but they can't factor in traffic delays, lane closures, accidents or the likelihood of Dad deciding a dilapidated roadside motel in central Nebraska is the perfect place to stop for the day.
In much the same way, the default system guiding internet traffic selects a route based on the lowest number of network "hops" (think: tollbooths or highway interchanges) as opposed to the route with the lowest estimated latency. While the shortest path sometimes is the fastest, traffic is always changing. Congestion can throttle speeds. The cables carrying data can be accidentally severed, stopping traffic altogether. Human error can temporarily take down a data center or network routers. But unless someone intervenes, the system will keep sending your traffic down this path, to the detriment of your latency goals and, ultimately, your clients and end users.
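The difference between the two selection rules can be sketched in a few lines of Python. The routes and timings below are entirely hypothetical, just to illustrate the idea: default routing picks the path with the fewest hops, while a route optimizer picks the path with the lowest measured latency, and the two can disagree:

```python
# Toy sketch (hypothetical routes and latencies): fewest hops != fastest.
routes = [
    {"path": ["A", "B", "Z"],      "latency_ms": 95},  # 2 hops, but congested
    {"path": ["A", "C", "D", "Z"], "latency_ms": 41},  # 3 hops, clear road
]

# Default-style selection: minimize the number of hops
by_hops = min(routes, key=lambda r: len(r["path"]) - 1)

# Optimizer-style selection: minimize real-time measured latency
by_latency = min(routes, key=lambda r: r["latency_ms"])

print("Fewest hops:   ", " -> ".join(by_hops["path"]),
      f"({by_hops['latency_ms']} ms)")
print("Lowest latency:", " -> ".join(by_latency["path"]),
      f"({by_latency['latency_ms']} ms)")
```

In this made-up example the hop-count rule happily keeps sending traffic down the congested two-hop path; a latency-aware optimizer would detour through the extra hop and arrive in less than half the time.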