ChatGPT creator OpenAI is taking concrete steps to ensure that its generative artificial intelligence tools won’t be used to manipulate the outcomes of key elections coming up this year.
In a blog post today, the company announced its commitment to preventing abuse and enhancing transparency around AI-generated content, as well as the steps it will take to facilitate access to accurate information about voting.
“As we prepare for elections in 2024 across the world’s largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies and improving transparency,” the company’s blog post stated.
OpenAI explained that it has created a “cross-functional effort dedicated to election work” that combines the expertise of its safety systems with threat intelligence and the skills of its legal, engineering and policy teams. The goal of this effort is to identify and address ways in which its generative AI technology might be abused to manipulate election outcomes.
It’s fair to say that misinformation has always been present during election campaigns, with politicians making false and misleading claims for hundreds of years. However, Russian interference in the 2016 U.S. presidential election served as a wake-up call about technology’s potential to accelerate the spread of misinformation. Online platforms are uniquely able to magnify rumors, conspiracy theories and other falsehoods, and there are concerns that this could significantly sway the outcome of popular votes.
This year is poised to be a big one for elections, not only in the U.S. but globally. More than 50 countries, home to over half of the world’s population, are expected to hold democratic elections in 2024. Besides the U.S. elections set to take place in November, the U.K. is widely expected to hold a general election later this year. Big democracies including India, Indonesia, Pakistan and South Africa will also head to the polls, while countries such as Bangladesh, El Salvador, Mexico, Russia, Senegal and South Sudan are slated to vote for their leaders at some point this year. The European Union, meanwhile, is expected to hold parliamentary elections in June.
With so much at stake in so many countries, OpenAI said it’s not taking any chances and will employ a combination of red teaming, user feedback channels, safety guardrails and other safeguards to minimize the risk of its technology being abused to sway election outcomes. For instance, OpenAI’s image generation tool DALL-E has been given guardrails that are supposed to prevent it from creating images of real people, including election candidates.
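OpenAI’s post doesn’t detail how those refusals surface to developers, but as a rough sketch, a request to the public images API that trips a content-policy guardrail comes back as an error rather than an image. The Python snippet below, using the official openai client, shows one way such a refusal might be handled; the exact refusal behavior and error type are assumptions, not details confirmed by OpenAI.

```python
# Sketch only: how DALL-E's guardrails might surface through OpenAI's public
# images API. The exact error type for a policy refusal is an assumption.
from openai import OpenAI, BadRequestError

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def try_generate(prompt: str) -> str | None:
    """Request an image; return its URL, or None if guardrails refuse."""
    try:
        result = client.images.generate(model="dall-e-3", prompt=prompt, n=1)
        return result.data[0].url
    except BadRequestError as err:
        # Prompts that violate content policy (for example, one depicting a
        # real election candidate) are rejected before an image is generated.
        print(f"Request refused: {err}")
        return None
```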
The company regularly refines its user policies for ChatGPT and its API to align them with evolving insights into how its technology is being used and potentially abused. Its enhanced safety policies are designed to restrict the development of generative AI-powered applications that might be used for political campaigning and lobbying. It has also put rules in place to prevent users from creating chatbots that impersonate real people or organizations.
One of the new enhancements announced today is a provenance classifier that can detect images created using the DALL-E tool. OpenAI said it’s making this feature available to beta testers, including some journalists, platforms and researchers. ChatGPT will also integrate real-time news reporting globally to enhance transparency around information sources, the company said.
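OpenAI hasn’t published an interface for the provenance classifier, so there’s nothing concrete for developers to call yet. Purely as a hypothetical sketch of how such a check might slot into a publisher’s moderation pipeline, the endpoint, field names and response shape below are all invented:

```python
# Purely hypothetical: OpenAI has not published an API for its provenance
# classifier. The endpoint, fields and response shape below are invented.
import requests

PROVENANCE_ENDPOINT = "https://example.com/v1/provenance"  # made-up URL


def looks_like_dalle(image_path: str, api_key: str) -> bool:
    """Ask the (hypothetical) classifier whether an image came from DALL-E."""
    with open(image_path, "rb") as image_file:
        response = requests.post(
            PROVENANCE_ENDPOINT,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"image": image_file},
            timeout=30,
        )
    response.raise_for_status()
    # Assumed response body: {"dalle_generated": bool, "confidence": float}
    return response.json()["dalle_generated"]
```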
The AI firm is also working with the National Association of Secretaries of State in the U.S. to try to prevent its technology from being used to discourage participation in elections. When someone asks a GPT-powered chatbot about the U.S. voting process, for example, the user will be directed to CanIVote.org, the website operated by that organization.
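OpenAI didn’t describe how that hand-off is implemented. A minimal approximation with today’s public chat completions API could steer voting-procedure questions to CanIVote.org through a system prompt; the prompt wording below is an assumption, not OpenAI’s actual mechanism.

```python
# A minimal approximation of the CanIVote.org hand-off using the public chat
# completions API. The system prompt is an assumption, not OpenAI's mechanism.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "If the user asks about U.S. voting procedures, such as registration, "
    "polling places or deadlines, do not answer directly. Instead, point "
    "them to https://www.canivote.org, operated by the National Association "
    "of Secretaries of State, for authoritative information."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Where do I vote in my state?"},
    ],
)
print(response.choices[0].message.content)
```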
OpenAI’s efforts to ensure the integrity of elections come at a time when confidence in electoral processes appears to be eroding. In a 2022 survey by the Brennan Center for Justice, 64% of U.S. election officials said that “the spread of false information has made their jobs more dangerous.” Meanwhile, the World Economic Forum said in a report published this month that it ranks AI-powered misinformation and disinformation ahead of climate change, war and economic weakness as the biggest short-term threat to the global economy.
OpenAI Chief Executive Sam Altman has also voiced his own concerns, telling a Senate Judiciary Committee hearing last May that his worst fears are that “the technology industry [could] cause significant harm to the world.”
A number of OpenAI’s rivals, including Google LLC and Meta Platforms Inc., have also announced measures to prevent their technologies and platforms from being used to spread misinformation.
Image: Freepik