Google LLC said today it’s planning to restrict the ability of its artificial intelligence chatbot Bard and its Search Generative Experience to respond to election-related queries ahead of the U.S. presidential election taking place next year.
The restrictions will be introduced early next year, the company said.
In the U.S., primary races and campaigning by independent candidates are already underway, and a number of election ads created with AI have appeared without any disclosures. Campaigning will likely only intensify as the new year progresses, so Google has decided to take steps to prevent its generative AI tools from being used to spread misinformation.
Google announced its plans in a blog post, saying it will also insist on certain advertising disclosures and introduce labels for some types of content. However, the most interesting step it’s taking is to curtail Google Bard and the Search Generative Experience tool, which both use generative AI to create more humanlike responses to questions and inputs.
The company did not provide any specific examples of the queries it intends to restrict, but stressed that it intends to proceed with an “abundance of caution” with regard to election-related topics in 2024.
The move is timely: many fears have been raised over the potential of generative AI to manipulate election outcomes by generating false or misleading information. Moreover, it’s not just the U.S. that faces critical elections next year, but also India, the world’s most populous democracy, and South Africa, one of Africa’s most important economies.
According to Google, it will work “with an increased focus on the role artificial intelligence might play” as it strives to serve voters in these elections next year.
Although the company’s plans seem somewhat vague, they are nonetheless an important step toward limiting the possibility of AI being abused to spread misinformation in key elections. Google’s competitor OpenAI has recently integrated more robust tools to try to prevent its models from doing the same, though it has not said anything about the specific steps it might take to prevent election-related abuse of its AI models.
Meanwhile, Facebook’s parent company Meta Platforms Inc. said last month it will prevent political campaigns and advertisers in regulated industries from using generative AI-powered ads. Anyone who places an ad on Meta’s platforms will also be required to disclose when AI has been used in political, social or election-related advertisements.
Elon Musk’s X, formerly known as Twitter, has gone the other way, saying in August that it will once again allow political ads in the U.S. by candidates and parties. Political ads had been banned globally on its platform since 2019.
Across the world, governments have been moving to regulate AI tools in light of their potential to disrupt democratic processes. In the European Union, for example, technology firms are now required to clearly label political ads on their platforms, stating who paid for the ad, how much was spent on it and which election is being targeted.
Photo: Element5 Digital/Unsplash