CHEQ Enhances Brand-safety AI to Cover 200 Granular News Categories
Keyword-Killing AI understands context and 14 languages; Trained on millions of pieces of content
NEW YORK, Dec. 17, 2019 /PRNewswire/ -- Military-grade ad verification company CHEQ today announced that its AI can now understand online content across more than 200 granular categories. Brands and publishers use the AI to ensure that ads are not served next to offensive, inappropriate, or hateful content.
Unlike outdated keyword-based brand safety, the AI can detect whether online content is discussing anything from alcohol and body modification to fine art and finance. Based on pre-determined brand suitability, the AI then decides in real time whether to serve or block an advertisement. Many leading brands and agencies have achieved at least 20 percent more reach by ending their reliance on blunt keywords and whitelists in favor of AI that can discern subtle context that is entirely appropriate for campaigns.
The CHEQ AI has been in development for five years and is trained on millions of pieces of content across 14 languages, including English, German, Japanese, Chinese, and Spanish. It can understand the exact meaning of any news source to confirm, with human-level accuracy, what a piece of content is, and is not.
CHEQ has worked with top publishers and brands to provide customized brand safety, with examples including:
- Working with a top publisher to create a "good profanity" AI category, i.e., the AI has been carefully trained to not automatically block "non-offensive" profanity. (For example, it could serve ads next to premium news content that may say "I f*cking love my job," but would also accurately block hate-filled and offensive profanity as determined by brand guidelines).
- Distinguishing between "fact and fiction" — with the AI trained to distinguish between real conflicts and war zones, and entertainment such as "Game of Thrones" or "Infinity War."
CHEQ also provides a complete breakdown of every unsafe URL blocked in real time, enabling marketers to audit the SaaS platform's performance.
These advances come after a CHEQ study showed that more than half (57%) of neutral or positive stories on major news sites are being incorrectly flagged as brand unsafe because of reliance on outdated keyword brand-safety technology. The study also found that 73% of safe LGBTQ content is blocked by the use of crude keywords such as "lesbian", "same sex marriage", and "bisexual".
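The over-blocking problem described in the study is easy to see in miniature. The sketch below is purely illustrative and is not CHEQ's implementation: it shows how a crude keyword blocklist (the `BLOCKLIST` set and `keyword_blocked` function are hypothetical names chosen for this example) flags a neutral story simply because a listed term appears, with no regard for context.

```python
# Illustrative sketch of keyword-based brand safety (not CHEQ's system):
# block a page if it contains ANY term from a static blocklist.
BLOCKLIST = {"lesbian", "sex", "bisexual", "alcohol"}

def keyword_blocked(text: str) -> bool:
    """Crude keyword matching: flag content whenever a listed word appears."""
    words = set(text.lower().replace("-", " ").split())
    return not BLOCKLIST.isdisjoint(words)

# A neutral wedding story is blocked purely on a keyword trigger,
# even though nothing about it is brand-unsafe.
story = "A lesbian couple celebrated their marriage in a small ceremony"
print(keyword_blocked(story))  # True: safe content incorrectly flagged
```

A context-aware classifier, by contrast, would have to score the meaning of the whole passage rather than test set membership, which is why the study's 57% and 73% false-flag rates are inherent to the keyword approach rather than a tuning problem.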
CHEQ Founder and CEO Guy Tytunovich says: "CHEQ's AI completes sequences to understand the meaning of the text, in order to identify the exact context of a story, similar to the way a human brain works.
"For instance, the AI can detect whether a mention of 'alcohol' appears in a story about driving under the influence (DUI), likely to be damaging for many brands, or simply in a recipe featuring an alcoholic drink, which in many cases may be considered safe. The AI can also recognize that a story about a lesbian couple getting married is considered brand-safe by most progressive brands, even though most ad verification platforms would automatically block it because it triggers keywords like 'lesbian' and 'sex'. Overall, this advance enables the secure blocking of unsafe content while unlocking greater reach for all campaigns."
About CHEQ
CHEQ is a global, AI-driven cybersecurity company disrupting the ad-verification space. With offices in New York, Tokyo, and Tel Aviv, the company's mission is to help sustain the digital ecosystem by protecting leading advertisers from the risks of online advertising and helping them restore confidence in the space. Founded by former military intelligence and cybersecurity personnel, the company has introduced the first neural-network-based solution for brand safety, ad fraud, and viewability, running powerful AI, NLP, computer vision, and deep learning algorithms.
Contact:
[email protected]
SOURCE CHEQ