NewsGuard and Pangram to identify AI-generated news and misinformation
EXCLUSIVE: NewsGuard Taps Startup Pangram to Identify AI-Generated News and Misinformation

The tech could help advertisers avoid appearing on AI-generated made-for-advertising sites.

By Kendra Barnett

Media rating and misinformation-tracking firm NewsGuard is trying to stop the spread of AI-generated misinformation and slop in the news ecosystem with a new project that also relies on AI.

On Thursday, NewsGuard launched an AI content farm detection tool designed to identify when news and information sites host a significant portion of content that appears to be created by large language models like ChatGPT, Claude, or Gemini. The project was launched in collaboration with the AI content detection startup Pangram Labs.

The system uses Pangram's proprietary AI models, which are specifically trained to identify AI-generated content, to evaluate not just individual webpages but broad swaths of entire domains. Once Pangram's tech has identified a site that appears to be an AI content farm, using automation to pump out digital content en masse, it flags the site to NewsGuard, whose analysts then conduct manual reviews. These experts review Pangram's findings to determine the pervasiveness of AI content across the site, look for explicit disclosures that content is AI-generated, seek indicators that human writers are involved, and reach out to site owners for additional information to avoid assigning false positives.

NewsGuard categorizes websites as AI content farms according to three criteria: a "substantial" share of the content is created by AI, as determined by Pangram; the site does not disclose that its content is AI-generated (unlike many reliable news outlets that explicitly share when their content is produced with the help of AI); and the appearance of the site could easily mislead the average user into believing its content is created by humans.
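For illustration only, the three-criteria triage described above can be sketched in a few lines of Python. NewsGuard and Pangram's actual models, APIs, and thresholds are proprietary and not public; the per-page scores, the `0.5` cutoffs, and all names below are hypothetical stand-ins.

```python
# Hypothetical sketch of the three-criteria check the article describes.
# Real Pangram/NewsGuard scoring is proprietary; thresholds here are invented.
from dataclasses import dataclass

@dataclass
class PageResult:
    ai_probability: float  # stand-in per-page score from an AI-text detector

def is_candidate_content_farm(pages, discloses_ai_use, looks_human_authored,
                              ai_threshold=0.5, share_threshold=0.5):
    """Flag a domain for human review when (1) a substantial share of sampled
    pages scores as AI-generated, (2) the site does not disclose AI use, and
    (3) its presentation could pass as human-written."""
    if not pages:
        return False
    ai_share = sum(p.ai_probability >= ai_threshold for p in pages) / len(pages)
    return (ai_share >= share_threshold
            and not discloses_ai_use
            and looks_human_authored)

# Example: two of three sampled pages score as AI-written, no disclosure.
sampled = [PageResult(0.9), PageResult(0.8), PageResult(0.2)]
flagged = is_candidate_content_farm(sampled, discloses_ai_use=False,
                                    looks_human_authored=True)
```

Note that a positive result only queues the site for manual review; per the article, NewsGuard's analysts make the final call after contacting site owners.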
This content is, at best, unreliable; at worst, it is purposeful and potentially dangerous misinformation or propaganda.

"If we can't detect AI content, then every communication space is going to be flooded with inauthentic content that's cheap to produce and difficult to impossible to differentiate [from] something authentic," said Max Spero, Pangram's CEO.

The detection system, which has been in testing for over six months, has already helped NewsGuard flag some 3,000 AI content farm sites, more than double what the organization was able to identify last year using primarily manual techniques. Many of these are branded under generic, newsy names like Times Business News or Business Post, while consistently putting out misinformation-riddled articles about real brands, political leaders, celebrities, and public health topics.

In one instance, a site called Citizen Watch Report, which bills itself as "a fine selection of independent media sources," ran a story last year asserting that two U.S. lawmakers, Senator Lindsey Graham (R-SC) and Senator Richard Blumenthal (D-CT), shelled out $814,000 on hotels in Ukraine. The false claim spread on social platforms and was amplified further by Russian state media before being debunked as fake news.

In another example, a site called News 24 falsely claimed that Coca-Cola threatened to cut its Super Bowl LIX sponsorship over the announcement that Puerto Rican rapper Bad Bunny would headline the game's halftime show. Coca-Cola was not even a sponsor of the Super Bowl. The article's webpage displayed ads from global brands including AT&T, YouTube, Expedia, Hotels.com, and Skechers.

Many of these sites can be categorized as made-for-advertising (MFA) sites: properties with low-quality content designed solely to generate ad revenue via arbitrage. NewsGuard first began monitoring AI-generated news and MFA sites a few years ago, when it was often easy to detect the use of LLMs in copy.
"Sites would publish articles with AI error messages in them, or quotes such as, 'As of my cut-off date of November 2024, I can't answer this question,'" said Matt Skibinski, NewsGuard's chief operating officer.

But in the intervening years, these sites have spread like wildfire. Today, Pangram says it is seeing between 300 and 500 of these AI content farm sites emerge each month.

"It's a way to produce low-quality content for really low cost and generate some advertising revenue—and also [bad] actors who want to spread false information have figured out that they can sort of weaponize this technology and churn out, at a really high volume, false and misleading content and still make a quick buck because they run ads on those pages, too," Skibinski said.

In one two-month observational period, NewsGuard found 141 blue-chip brands advertising on MFA AI content farms.

NewsGuard hopes the new detection tool will help both advertisers and consumers steer clear of AI-enabled misinformation and MFA sites. To protect advertisers, NewsGuard will let them license its data stream about AI content farms directly or through their agencies. It also has a direct integration with the popular demand-side platform The Trade Desk, through which advertisers can block these sites with specific pre-bid segments. NewsGuard is also considering integrating the tool into its browser extension so that everyday consumers can see when they're consuming AI-generated information, according to Skibinski.

Pangram, founded in 2023 by a former Google engineer and an ex-Tesla scientist, has already gained acclaim for the effectiveness of its tech. A report in Nature from September found Pangram highly capable of flagging research papers and peer reviews that included LLM-generated text. Some academic institutions, including Wellesley, are using Pangram's tech to combat undisclosed or unwanted AI-generated content in academia.

Spero expects demand for Pangram's tech to spike in the coming months.
"There's just going to be so much spam and bots and slop online," he said, "that it's going to be pretty unusable without technology to help you wade through the slop."

Correction, Mar. 12 at 6:42 a.m. ET: A previous version of this story inaccurately stated that the University of Maryland is using Pangram's tech in academic settings. The University of Maryland independently validated the effectiveness of the tech in a third-party study but is not a Pangram customer.

Kendra Barnett is Adweek's senior tech reporter.