Our service inspects the web clicks arriving at your landing page or affiliate offer and classifies each one as innocent or malicious. Innocent traffic is let through to the actual content, while malicious visitors are diverted to a safe page that contains nothing sensitive to expose.
We provide solid protection against a wide range of unwanted visitors: click-fraud bots, ad network moderators, web scrapers, antivirus crawlers, and more.
To detect hostile traffic, we need data about each visitor. Most solutions on the market rely primarily on simplistic blacklists of IP addresses, HTTP headers, and other superficial features.
Our approach is smarter than that: we collect thousands of in-depth facts about each visitor across the network, HTTP, and JavaScript contexts, compiling what is known as a browser fingerprint. These fingerprints are then evaluated by dozens of high-precision scanners, resulting in a confident verdict.
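To give a flavor of what a fingerprint looks like, here is a simplified sketch in Python. Every field name and check below is a hypothetical illustration of the general technique, not a description of the actual signals our scanners collect:

```python
# Illustrative only: these fields and the heuristic below are hypothetical
# examples of fingerprint signals, not the service's real checks.
fingerprint = {
    # network context
    "ip": "203.0.113.7",
    "asn": 64500,
    "tcp_ttl": 64,
    # HTTP context
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "accept_language": "en-US,en;q=0.9",
    "header_order": ["Host", "User-Agent", "Accept", "Accept-Language"],
    # JavaScript context
    "webgl_renderer": "ANGLE (Intel(R) UHD Graphics Direct3D11)",
    "canvas_hash": "9f2c41d0a3b7",
    "timezone_offset": -180,
    "navigator_webdriver": False,
}

def looks_suspicious(fp: dict) -> bool:
    # One example heuristic: an automated browser often leaks the
    # webdriver flag, and headless Chrome announces itself in the UA.
    return fp["navigator_webdriver"] or "HeadlessChrome" in fp["user_agent"]
```

A single check like this is easy to fool; the point of collecting thousands of facts is that the combination of signals must stay internally consistent, which is far harder to fake.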
Even the most precise checks are limited in scope. Emerging threats cannot be reliably detected with heuristics tailored to previous generations, and even the smartest analyst may overlook a hidden pattern in the fingerprints. But not the slightest deviation escapes the scrutiny of a machine programmed to hunt for fingerprint anomalies.
VLA™ is our state-of-the-art machine learning technology that does what our competitors cannot: detect and automatically adapt to new, previously unknown threats as the arms race in affiliate marketing goes on. The system becomes smarter and more comprehensive with every click we inspect.
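VLA itself is proprietary, but the general idea of unsupervised anomaly detection over fingerprints can be sketched in a few lines. Everything here is an illustrative stand-in (random features, an off-the-shelf isolation forest), not our production model:

```python
# Sketch of unsupervised anomaly detection over fingerprint features.
# Illustrative only: the features, data, and model are stand-ins.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: numeric features derived from one visitor's fingerprint
# (e.g. header count, timezone-mismatch flag, canvas-entropy bucket).
X_train = np.random.default_rng(0).normal(size=(10_000, 8))  # stand-in data

model = IsolationForest(n_estimators=100, contamination="auto", random_state=0)
model.fit(X_train)

new_visitor = np.zeros((1, 8))               # feature vector for a fresh click
anomaly_score = -model.score_samples(new_visitor)[0]  # higher = more anomalous
is_suspicious = model.predict(new_visitor)[0] == -1   # -1 marks an outlier
```

The appeal of this family of models is exactly what the marketing copy claims: they need no labeled examples of a new threat, because anything that deviates from the mass of normal fingerprints stands out on its own.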
Besides filtering traffic, we also collect vast amounts of statistical data, some of which is exposed through our built-in ad tracker. This data alone holds a wealth of insight into recurring traffic patterns that can help us identify malicious visitors. But, as with any big data analysis, finding patterns across billions of clicks in real time is a challenging task.
Thankfully, computer science has solutions. HyperLogLog is an advanced algorithm that estimates the cardinality of a large set, i.e. the number of distinct elements it contains, using only a tiny, fixed amount of memory. It powers the eponymous state-of-the-art filter we invented to perform pattern-based filtering in real time against our entire operation history.
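For the curious, here is a minimal textbook implementation of HyperLogLog in Python, a sketch of the published algorithm rather than our production filter:

```python
import hashlib
import math

class HyperLogLog:
    """Minimal HyperLogLog sketch: estimates the number of distinct
    items seen, using m = 2**p small registers."""

    def __init__(self, p: int = 14):
        self.p = p                    # precision: 2**p registers
        self.m = 1 << p
        self.registers = [0] * self.m
        # bias-correction constant alpha_m (valid for m >= 128)
        self.alpha = 0.7213 / (1 + 1.079 / self.m)

    def add(self, item: str) -> None:
        # 64-bit hash of the item
        x = int.from_bytes(hashlib.sha1(item.encode()).digest()[:8], "big")
        j = x >> (64 - self.p)                 # first p bits pick a register
        w = x & ((1 << (64 - self.p)) - 1)     # remaining 64-p bits
        # rank = position of the leftmost 1-bit in w (1-based)
        rank = (64 - self.p) - w.bit_length() + 1
        self.registers[j] = max(self.registers[j], rank)

    def count(self) -> float:
        est = self.alpha * self.m ** 2 / sum(2.0 ** -r for r in self.registers)
        # small-range correction: fall back to linear counting
        zeros = self.registers.count(0)
        if est <= 2.5 * self.m and zeros:
            est = self.m * math.log(self.m / zeros)
        return est

hll = HyperLogLog()
for i in range(400_000):
    hll.add(f"visitor-{i % 100_000}")  # 100,000 distinct visitor IDs
print(round(hll.count()))              # ~100,000, within about 1%
```

With p = 14 the sketch uses 16,384 registers (a few kilobytes) and has a standard error of roughly 1.04/sqrt(m), under one percent, which is why structures like this can track distinct visitors across billions of clicks in real time.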