London, UK, May 12th, 2022 — GumGum, a contextual-first global digital advertising platform, today announced it has become an official member of the Global Alliance for Responsible Media (GARM), the cross-industry initiative established by the World Federation of Advertisers (WFA). GumGum will play a more strategic role in helping build guidelines that increase trust and transparency and shape the advertising industry across contextual targeting, brand safety and suitability.
“It is paramount for the industry at large to have standardised definitions for keeping brands safe without negatively impacting monetisation for publishers,” said Phil Schraeder, CEO of GumGum. “We are excited to be joining GARM in a more strategic capacity. We have seen the power and benefits standardisation has for all players in the industry, and we look forward to partnering with GARM to build those standards for brand safety, suitability and now contextual targeting.”
GARM was established by the WFA to address the challenges of harmful content on digital media platforms and its monetisation via advertising. It brings together advertisers, agencies, media companies, platforms, industry organisations and now ad tech providers. GARM’s mission is to get the digital media ecosystem working together on the shared priorities that will lead to the removal of harmful content from advertiser-supported social media.
“As leaders in the online ecosystem, brand advertisers play a pivotal role in key media operations, from media spend strategy and ad placement decisions to making sure that advertising supports positive content and avoids harmful content,” said Rob Rakowitz, Initiative Lead – Global Alliance for Responsible Media. “Much of our effort results in creating standards, transparency and controls for advertisers, agencies and platforms approaching the challenge – how digital content is categorised – and in providing them with controls to include or exclude content in paid media campaigns. We are excited to partner with GumGum in ensuring a safer online environment that builds trust between marketers and consumers.”
GumGum’s accredited contextual intelligence platform, Verity™, natively maps the GARM Brand Safety Floor + Suitability Framework into GumGum Threat Categories.
Verity™ leverages deep-learning artificial intelligence for threat detection and classification. The model is trained by collecting data samples, having human annotators label them, and feeding these labelled (human-verified) samples to the model so it can learn the concepts presented. To train the model accurately, there must be well-distributed samples for each supported category (class). The samples provided for Verity’s threat models reflect the categorical definitions in GARM’s Brand Suitability Framework for Low-, Medium- and High-risk representations of each class.
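For illustration only, the sketch below shows how text samples labelled with GARM-style categories and risk levels might be assembled, checked for class balance, and fed to a classifier. It is not GumGum’s Verity implementation: the sample texts and labels are hypothetical, and a simple scikit-learn bag-of-words pipeline stands in for Verity’s deep-learning model.

```python
# Illustrative sketch only -- not GumGum's Verity implementation.
# Demonstrates the general flow described above: collect samples,
# attach human-verified GARM-style labels, check class distribution,
# and train a classifier on the labelled data.
from collections import Counter

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled samples: (text, GARM category, risk level).
# Real training data would contain many well-distributed samples per class.
samples = [
    ("article describing a violent incident in detail", "Death & Injury", "High"),
    ("news report summarising a court ruling on piracy", "Online Piracy", "Medium"),
    ("op-ed discussing a contested social policy", "Debated Sensitive Social Issues", "Low"),
    ("recipe blog post about weeknight dinners", "Safe", "None"),
]

texts = [text for text, _, _ in samples]
labels = [f"{category}/{risk}" for _, category, risk in samples]

# Inspect the class distribution before training, since a skewed
# distribution would bias the classifier toward over-represented classes.
print("class distribution:", Counter(labels))

# A simple bag-of-words classifier stands in for a deep-learning model.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

print(model.predict(["blog post reviewing an action film's fight scenes"]))
```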
“Contextual is the future of brand safety and suitability – something GumGum has believed in for over a decade,” said GumGum’s Head of Verity, William Merchan. “As an industry, it will be important for us to come together not only on brand safety and suitability standards but also on how we define and implement the next generation of contextual technology.”
For more information, please reach out to pr@gumgum.com.
About GumGum
GumGum is a contextual-first global digital advertising platform that captures people’s attention, without the use of personal data. We believe that a digital advertising ecosystem based on understanding a consumer’s active frame of mind rather than behavior builds a more equitable and safer future for consumers, publishers and advertisers alike. Founded in 2008, GumGum is headquartered in Santa Monica, California and operates in 19 markets worldwide. For more information, please reach out to pr@gumgum.com.
About the Global Alliance for Responsible Media
The Global Alliance for Responsible Media (GARM) was formed to identify specific collaborative actions, processes and protocols for protecting consumers and brands from safety issues. Alliance members will work collaboratively to identify actions that better protect consumers online, working toward a media environment where hate speech, bullying and disinformation are challenged, where personal data is protected and used responsibly when given, and where everyone, especially children, is better protected online. Alliance members acknowledge their collective power to significantly improve the health of the media ecosystem.