
Google Bard, ChatGPT: Are AI chatbots suppressing information about Israel and Palestine?


When Google’s Bard AI is asked any question about Israel and Palestine, the chatbot effectively shuts down. The questions can range from the seemingly innocent (“Where is Israel?”) to those centered on the ongoing Israel-Hamas war. Bard responds to each of these inquiries with a variation of the same message: “I’m a text-based AI and can’t assist with that.”

Google’s chatbot appears to be censoring any answer related to the current crisis. This applies to a variety of questions we and many users have asked using keywords including Israel, Gaza, Palestine, and the IDF (the abbreviation for the Israel Defense Forces). Bard’s responses to us included: “I’m a language model and don’t have the capacity to help with that.”

A screenshot of Google Bard.
Credit: Screenshot / Google

Users on X first pointed out this issue, also demonstrating that the chatbot would readily answer questions about other countries, regions, and locations. Google Bard also responds to questions about other ongoing global conflicts, such as the war in Ukraine. When asked to explain that conflict, Bard provided this information: “In February 2022, Russia launched a full-scale invasion of Ukraine. The invasion was met with international condemnation and led to the imposition of severe sanctions on Russia.”

A screenshot of Google Bard answering a question about Ukraine and Russia.
Credit: Screenshot / Google

Mashable reached out to Google for comment and received confirmation from the company’s press team that it has “implemented temporary guardrails to disable Bard’s responses to associated queries.” The statement reads:

Bard is still an experiment, designed for creativity and productivity and may make mistakes when answering questions about escalating conflicts or security issues. Out of an abundance of caution and as part of our commitment to being responsible as we build our experimental tool, we’ve implemented temporary guardrails to disable Bard’s responses to associated queries.
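Google hasn’t said how these guardrails work under the hood. One plausible mechanism for this kind of blanket refusal is a keyword filter that intercepts queries before they ever reach the model; a minimal sketch of that approach, with an entirely hypothetical term list and function names, might look like this:

```python
# Hypothetical sketch of a keyword-based guardrail. Google has not disclosed
# how Bard's actual filtering works; this is an illustration only.
BLOCKED_TERMS = {"israel", "gaza", "palestine", "idf", "hamas"}

FALLBACK = "I'm a text-based AI and can't assist with that."

def guarded_response(query: str, model_fn) -> str:
    """Return a canned refusal if the query mentions a blocked term,
    otherwise pass the query through to the underlying model."""
    tokens = set(query.lower().replace("?", " ").split())
    if tokens & BLOCKED_TERMS:
        return FALLBACK
    return model_fn(query)

# A filter this coarse refuses even innocuous queries:
print(guarded_response("Where is Israel?", lambda q: "model answer"))
```

A filter this blunt would explain why even an innocuous question such as “Where is Israel?” triggers the same canned refusal as questions about the war itself.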

In a blog post from March 2023, Google VPs shared thoughts on the Bard “experiment” and cautioned users that while large language models (LLMs) are exciting, “they’re not without their faults.” The post reiterated that because such models learn from “a wide range of information,” they are susceptible to reflecting “real-world biases and stereotypes.”

People have also called into question the answers provided by OpenAI’s ChatGPT to certain prompts. Mashable tried asking questions of GPT-3.5, the default version. In response to questions about the history between Israel and Palestine, the chatbot provides detailed information but underscores that its knowledge may be outdated: “Please note that the situation can change rapidly, and it is essential to consult up-to-date news sources for the latest information on the Israel-Hamas conflict, as developments may have occurred since my last knowledge update in January 2022,” read one response to the question “what is happening between israel and hamas?”

To the question “can you tell me what’s happening in palestine?”, the chatbot again stated its information may be outdated and provided a mostly vague answer, also directing users to read “reputable news sources” instead (we can get behind that advice):

I apologize, but I am not able to provide real-time information as my knowledge was last updated in 2022. As of my last update, the situation in Palestine has long been a complex and sensitive issue, characterized by ongoing conflicts and disputes between the Palestinian territories and Israel. The conflict has deep historical and political roots, and it has resulted in significant humanitarian issues and human rights concerns, including displacement, violence, and loss of life on both sides. Efforts to find a lasting resolution have involved numerous peace negotiations, international interventions, and agreements, but a comprehensive and long-term solution has remained elusive. The issues involved encompass various elements, including the status of Jerusalem, Israeli settlements, Palestinian statehood, security concerns, and the rights of refugees, among others. For the most recent and accurate updates on the situation in Palestine, I recommend checking reputable news sources, such as international news agencies and organizations that provide coverage on the region.

People across the internet have experimented with AI-powered chatbots’ responses about the war. For instance, writer and illustrator Mona Chalabi posted the significant difference in the answers ChatGPT gave to the questions “Do Israelis deserve justice?” and “Do Palestinians deserve justice?” The answer to the former was far more concrete, while the answer to the latter stated that “the question of justice for Palestinians is a complex and highly debated issue.” Others have asked the same questions and observed a similar disparity.

Mashable has reached out to OpenAI for comment and will update this story if we hear back.

OpenAI has previously insisted that its model is impartial, but recent research has shown that AI models do, in fact, exhibit bias. On r/ChatGPT, a Reddit community dedicated to discussion about AI, the intersection of ChatGPT and Israel-Palestine came up in a since-deleted post. The comments remain, however, and one user responded: “People are biased, therefore ChatGPT will be biased.”

Since the beginning of the Israel-Hamas war in October, online platforms have faced widespread questioning from users over how they handle related content. Meta-owned platforms, including Instagram and Facebook, have been accused of shadowbanning content and other incidents of glaring bias. X and TikTok have come under similar scrutiny for their moderation policies and the spread of disinformation.




