Danielle Coffey, CEO and president of the News/Media Alliance, believes journalism and generative AI can play nice.
But first, generative AI companies must get real about the value journalism brings to their products.
“This doesn’t have to be a zero-sum game,” Coffey told AdExchanger. “These innovative technologies are very promising, but it doesn’t have to be at the expense of journalism.”
AI companies acknowledge the value of journalism, Coffey said, since they clearly see how much more trustworthy and higher quality it is as a data source for training their models. So, she argued, they should treat working with journalists as part of the cost of doing business – similar to how Netflix or Spotify compensates content creators for their IP.
However, Big Tech has largely resisted compensating news publishers directly for content creation (like a news blurb in a Google Search results page), and generative AI is no exception. That reality has forced creators and publishers to turn to the courts.
After a recent Senate hearing on journalism and AI, Coffey is confident lawmakers have the journalism industry’s best interests in mind. But despite Congress proposing several new laws regulating the proliferation of generative AI, she believes it would be best for the market to dictate reasonable terms for how publishers and AI companies could work together.
Coffey spoke to AdExchanger.
AdExchanger: What are the top stories you’re watching when it comes to the future of generative AI and journalism?
COFFEY: The New York Times is litigating this now [in its case against OpenAI and Microsoft]. I saw [OpenAI CEO] Sam Altman say they don’t need The New York Times. That was interesting, because if he’s already pulled all of their content [to train OpenAI’s model], is he saying he’s going to give it back?
AI does need copyright-protected content. When something is copyright protected, it’s because it has value. So you should find a way to reward that quality content.
Are licensing agreements a viable compromise between publishers and generative AI companies?
There are a lot more issues at stake than just licensing, although that’s a big piece of it, because publishers aren’t going to be around if they don’t get compensated.
Accountability and transparency are important, too.
Let’s say generative AI outputs a big, fat lie and cites The Wall Street Journal or The Washington Post, so now that lie is associated with their brands. Is that OK as long as the content is licensed?
No. Which is why licensing is not a rifle-shot solution.
To be clear, we are not asking Congress for a mandatory or compulsory license. I want to correct the record on that because some people took that away from our testimony.
What are some legislative fixes that would help publishers protect copyrighted material?
There could be amendments to the Section 512 [Title 17] DMCA safe harbor provisions and the Section 230 [Title 47] accountability provisions. Those allow the platforms to avoid liability because they can argue they’re not creating content; they’re just hosting it.
What’s really funny is the AI companies are arguing in court, “This isn’t your work. It’s new work that the AI is creating.” So, Senators Hawley and Blumenthal’s No Section 230 Immunity for AI Act would be helpful.
And under Section 1202 [Title 17, AI companies should] keep metadata attached to content, so we know how our content has been used to train their models. Identification of the inputs is important so we can enforce our rights.
There’s also the Klobuchar-Thune bill for establishing transparency and standards for disclosures around AI.
Senator Amy Klobuchar also put forth the Journalism Competition and Preservation Act, which, if passed, would allow publishers to collectively negotiate platform fees for linking their content. The ACLU argues platforms would remove news from their feeds instead of paying. Are you worried about those hardball tactics?
Canada, Australia and Europe have passed similar laws, and soon New Zealand, Indonesia and South America. How many more countries have to pass this thing before we see that these are scare tactics?
There are so many crazy arguments against these laws: The internet will break. This is an ad tax. Extreme content will be the only thing freely available. All the gains will go to [big-name publishers].
First of all, the big publishers are already paid. Second, the small and local publications will never see fair market value. They just don’t have that leverage. But in Canada, they’re being paid $21,000 per journalist. That’s pretty good. In Australia, $140 million was paid to news publications.
And the internet has not broken. Extreme content is not proliferating any worse than it already was. All of these scare tactics have proven false.
Do regulators have the powers they need to enforce existing copyright laws?
When companies get too large, there’s an inability to enforce your rights.
AI does this thing called retrieval-augmented generation (RAG), or grounding, [where search results are retrieved at query time and fed to a large language model to inform its answers]. They combined this new thing, AI, with this old thing, search, where one is dependent on the other. And because publishers were beholden to search, we’re now beholden to AI.
That’s a very clear area where the marketplace is broken and the law has to step in.
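For readers unfamiliar with the mechanism Coffey describes, retrieval-augmented generation can be sketched in a few lines. This is a toy illustration, not any vendor's actual pipeline: the keyword-overlap retriever and the hard-coded corpus are stand-ins for the vector search and hosted language model a production system would use.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase a string and split it into a set of alphabetic words."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by shared words with the query; return the top k.
    A real RAG system would use a search index or vector similarity here."""
    q = tokens(query)
    ranked = sorted(corpus, key=lambda doc: len(q & tokens(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Ground the model: prepend retrieved articles to the user's question
    before it is sent to a language model for generation."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

# Hypothetical mini-corpus standing in for a publisher's articles.
corpus = [
    "Publishers seek licensing deals with AI companies.",
    "Search engines index news articles for ranking.",
    "A recipe for sourdough bread starts with flour.",
]

prompt = build_prompt("How do AI companies use news publishers' articles?", corpus)
print(prompt)
```

The publisher's concern in the interview maps onto the `retrieve` step: the model's answer is only as good as the news content it pulls in, yet the content owner is not paid for that dependency.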
Is there anything about how the News/Media Alliance is working with legislators behind the scenes that didn’t come across in the hearing?
One of the things we hear a lot is that legislators want the content industries to get on the same page as much as possible. We’re absolutely trying to do that.
This interview has been edited and condensed.