
OpenAI Shares Strategy to Counter Generative AI Misuse During 2024 Elections



OpenAI has laid out its strategy to limit and shut down the misuse of its generative AI technology and services this year as the U.S. and several other large democracies prepare for national elections. The company acknowledged the deceptive and malicious potential of tools like ChatGPT and DALL·E but vowed to fight back against such abuse, including by providing accurate voting information and exposing deepfakes and other synthetic media.

OpenAI Electoral Integrity

OpenAI has centralized its efforts with a new multidisciplinary team combining technical, political, and legal experts. The team is tasked with embedding accurate voting information in OpenAI products and implementing policies that favor verified facts over misinformation ahead of the upcoming elections. One of the biggest initiatives is developing preventative tools to identify and stop misleading deepfakes, scaled influence operations, and generative AI chatbots impersonating candidates. For instance, DALL·E is programmed to reject requests to generate images of real people, including political candidates. There are also subtler technical measures to improve factual accuracy, reduce bias, and decline problematic requests.
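For developers building on the API, that refusal surfaces as a standard error rather than an image. Below is a minimal sketch of how a client might handle it, assuming the openai Python SDK (v1.x) and an OPENAI_API_KEY in the environment; the error type reflects the API's documented content-policy behavior, but the helper function itself is illustrative, not OpenAI's own code.

```python
# Minimal sketch: observing DALL·E 3's refusal of a disallowed prompt.
# Assumes the openai Python SDK (v1.x) and OPENAI_API_KEY set in the
# environment; the error handling shown is based on the API's documented
# content-policy behavior.
from openai import OpenAI, BadRequestError

client = OpenAI()

def generate_image(prompt: str) -> str | None:
    """Request a DALL·E 3 image; return its URL, or None if refused."""
    try:
        result = client.images.generate(
            model="dall-e-3",
            prompt=prompt,
            size="1024x1024",
            n=1,
        )
        return result.data[0].url
    except BadRequestError as err:
        # Prompts that violate content policy (e.g., depicting real
        # political candidates) are rejected with a 400-level error.
        print(f"Request refused: {err}")
        return None
```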

“Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process,” OpenAI explained in its announcement. “As we prepare for elections in 2024 across the world’s largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency. We have a cross-functional effort dedicated to election work, bringing together expertise from our safety systems, threat intelligence, legal, engineering, and policy teams to quickly investigate and address potential abuse.”

On the policy side, OpenAI said it will keep updating the Usage Policies for ChatGPT and its APIs to counter potential abuses. The policies already prohibit using AI for political campaigning and lobbying, creating chatbots that impersonate real people or institutions, and building applications that deter voting or other democratic participation. But neither the technology nor the plans of bad actors are stagnant, so OpenAI intends to keep adding and refining rules both proactively and in response to reported issues. That includes applying the rules to customized GPTs, and users can report violations they find in custom GPTs directly to OpenAI.

OpenAI’s ambition is to eliminate inaccurate responses from its LLMs, whether they are inadvertent hallucinations or deliberately prompted. As that is essentially impossible, the company is also pursuing ways to at least make the origins of AI-created content transparent. One method is a tool to trace synthetic content and deepfakes: images produced by DALL·E 3 will carry digital credentials, a kind of virtual watermark recording details about their origin. OpenAI is also working on a new tool to identify edited DALL·E images that lack such a watermark. In the U.S., the company has partnered with the nonpartisan National Association of Secretaries of State (NASS); as a result, ChatGPT now directs users to CanIVote.org when asked where to vote or about voter eligibility.
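OpenAI has indicated these digital credentials implement the C2PA open provenance standard. As a rough illustration of how a newsroom or fact-checker might verify them, here is a hedged sketch that shells out to the C2PA project's open-source c2patool CLI; the tool and its JSON manifest output are real, but exact invocation details can vary by version, and the wrapper function is hypothetical.

```python
# Rough sketch: checking an image for C2PA Content Credentials of the
# kind OpenAI attaches to DALL·E 3 output. Assumes the open-source
# c2patool CLI is installed and on PATH; invocation details may differ
# across c2patool versions.
import json
import subprocess
import sys

def read_content_credentials(image_path: str) -> dict | None:
    """Return the C2PA manifest as a dict, or None if none is found."""
    proc = subprocess.run(
        ["c2patool", image_path],
        capture_output=True,
        text=True,
    )
    if proc.returncode != 0:
        # No manifest present, or the credentials were stripped.
        return None
    return json.loads(proc.stdout)

if __name__ == "__main__":
    manifest = read_content_credentials(sys.argv[1])
    print("Content Credentials found" if manifest else "No credentials")
```

Because such metadata is trivially stripped when an image is re-encoded or screenshotted, a credentials check can only confirm provenance when it is present, which is why OpenAI is pairing the watermark with a detector that does not rely on it.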

Despite major U.S. AI companies agreeing to a set of safety and responsibility principles announced by President Biden in July, there hasn't been much in the way of concrete regulation yet. The ease with which deepfake scams can already trick people out of their money is worrisome. Bills on AI governance are in the works, something OpenAI CEO Sam Altman encouraged at a U.S. Senate hearing, but none have become law yet.

OpenAI is not the only generative AI developer grappling with how to stop politically motivated abuse of deepfakes and generative AI content. Google recently updated its advertising policy to require political ads that use generative AI or deepfakes to disclose the presence of synthetic media clearly and conspicuously. The rule aims to combat deception as deepfakes and other AI-produced media proliferate in campaigns; the only exceptions are inconsequential edits like removing red-eye from photos.


