Meta’s Clegg Wants Standards on Watermarking for Generative AI



(Bloomberg) — Regulators hoping to keep up with the rapidly evolving world of artificial intelligence should prioritize new rules and technologies for detecting AI-generated content, said Nick Clegg, president of global affairs at Meta Platforms Inc.

Aligning the industry on common standards to identify AI-generated content, such as by watermarking images and videos, should be the issue “in the front of the queue,” Clegg said, speaking on a panel at the World Economic Forum in Davos, Switzerland, on Thursday.

“That does not exist at the moment. Each company is doing their own thing,” he added. “That’s the most urgent task facing us today.”

Read more: Regulate AI? How US, EU and China Are Going About It: QuickTake

Online tools like OpenAI’s ChatGPT and DALL-E image generator are now easily accessible to millions of people who can use them to instantly create text or images based on a simple prompt. Currently, there’s no way to determine if the resulting text or photo has been auto-generated unless its creator shares that information.

These capabilities have raised a slew of questions and concerns, especially about the potential to create misinformation, or images that look real but are altered or entirely fabricated, content commonly referred to as “deepfakes.”

Meta has had a “manipulated media” policy for years. It removes AI-generated content that’s intentionally misleading or has been altered “in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say,” according to the company’s policy.

While deepfakes can pose a threat, Clegg said, AI has also helped Meta detect and remove other types of content, like hate speech, at a massive scale. “It’s a sword and shield,” he said.

The discussion in Davos also pivoted to AI access, and whether these new technologies should be built and controlled by large tech companies, or be more readily available to individual developers by making them open source. Clegg argued vehemently that open source is the more responsible path, and it’s also Meta’s approach to its own AI technologies.

There is “absolutely no reason” why the technology “should be kept under lock and key by a few handful of very rich corporations,” he said.

©2024 Bloomberg L.P.


