USDA determined ChatGPT’s risk was ‘high,’ set up board to review generative AI use, documents show


As OpenAI’s ChatGPT tool broke into the mainstream earlier this year, the U.S. Department of Agriculture determined that the generative artificial intelligence tool posed too high a risk to use on its network and prohibited its use, according to documents obtained by FedScoop. 

In October, seven months after that risk determination was made, department leaders distributed interim guidance that extended that prohibition more broadly to employee and contractor use of third-party generative AI tools in their official capacities and on government equipment. The agency also established a board that’s creating a process to review proposed uses of the technology going forward, according to documents obtained through a Freedom of Information Act request and the department’s response to FedScoop.

Information about USDA’s approach comes as agencies across the federal government are grappling with creating policies for generative AI tools within their agencies and coming to different conclusions about how to handle the nascent and rapidly growing technology. 

The Department of Homeland Security, for example, recently made public its conditional approval of generative AI tools for use in the department, including ChatGPT, Bing Chat, Claude 2 and DALL-E 2. Meanwhile, NASA leaders told employees in May that the tools weren’t cleared for widespread use with “sensitive NASA data,” though they permitted use on personal accounts “following acceptable use policies.”

An Agriculture Department spokesperson told FedScoop in an emailed statement that the agency’s interim guidance, along with the White House’s AI executive order, “will help ensure that USDA, like other agencies across the federal government, is using this emerging, important technology safely, securely, and responsibly, while also delivering better results for the people who rely on its programs and services.”

According to the March 16 risk determination obtained by FedScoop, the department found that “ChatGPT displays multiple concerning indicators and vulnerabilities that will pose a risk if used in the USDA enterprise network infrastructure” and ultimately labeled that risk as “high.”

Specifically, the risk determination referenced a vulnerability documented in the National Vulnerability Database involving a WordPress plugin that appears to use ChatGPT. The determination said the vulnerability “describes a missing authorization check that allows users the ability to access data or perform actions that should be prohibited.” It also pointed to “insufficient safeguards.”

“While OpenAI alleges having safeguards in place to mitigate these risks, use cases demonstrate that malicious users can get around those safeguards by posing questions or requests differently to obtain the same results,” the risk determination said. “Use of ChatGPT poses a risk of security breaches or incidents associated with data entered [into] the tool by users, to include controlled unclassified information (CUI), proprietary government data, regulated Food and Agriculture (FA) sector data, and personal confidential data.”

In response to a FedScoop inquiry about USDA’s determination, a spokesperson for OpenAI said the company was not affiliated with the WordPress plugin the department cited. The spokesperson also pointed to DHS’s recent assessment that conditionally approved generative AI tools and noted the launch of ChatGPT Enterprise, which has additional security and privacy controls.

“We appreciate the U.S. government’s dedication to using AI safely and effectively to improve services for the public. We would be happy to discuss the safe use of our products to support the USDA’s work,” the spokesperson said. 

Under USDA’s interim guidance, which was distributed internally Oct. 16, the Generative AI Review Board includes the department’s chief data officer and chief technology officer, in addition to representatives for cybersecurity, the general counsel’s office, and two mission areas.

Since President Joe Biden’s executive order, the department’s CDO and responsible AI official, Chris Alvarez, has been elevated to serve as its chief AI officer, and he also serves on the board in that capacity, the spokesperson said. That comes as agencies are starting to name CAIOs in light of a new position created under Biden’s order and subsequent White House guidance.

The board will meet monthly, the document said, and implement a process for reviewing proposed generative AI projects within 90 days, which would be roughly mid-January. It also stipulated that “any use cases currently in development or in use at the time of this memo should be paused until reviewed by the” Generative AI Review Board, and noted specifically that using AI language translation services is prohibited.

Submitting personally identifiable or non-public information to public generative AI tools is “a prohibited release of protected information” that employees must report, the document said. The spokesperson said there haven’t been any known instances where USDA personally identifiable information has been submitted to a generative AI tool, and “USDA has not received any reports of inappropriate GenAI output.”

Rebecca Heilweil contributed to this article.

Written by Madison Alder
