
Information security statement on generative AI



Use of ChatGPT, Google Bard, and other generative artificial intelligence (AI) tools and services is growing rapidly within higher education, including on our campus. Although AI offers new and powerful capabilities for research and education, it also poses a potential risk to institutional data that UC Santa Cruz is legally and ethically obligated to protect.

Security and Privacy Guidance

The guidance below outlines the appropriate use of generative AI as it relates to data security and privacy at UCSC. Due to the rapid evolution of generative AI, this guidance is likely to change. It is the responsibility of every member of our campus community to stay informed of data security and privacy policies, security protocols, and guidance.

  1. Prohibited use: When using generative AI services, do not submit personal, confidential, or otherwise sensitive information, including Personally Identifiable Information (PII), FERPA-protected student records and information, and data classified as Moderate Risk (P3) or High Risk (P4).

  2. Permitted use: University faculty, staff, students, and affiliates may enter institutional data into generative AI tools or services only when the information is classified as Minimal Risk (P1) or Low Risk (P2). (An illustrative sketch of such a check appears after this list.)

  3. Procurement and licensing: All software licensing and supplier agreements must adhere to University of California (UC) policies for data security and privacy. The data security and privacy provisions contained in these agreements are a critical part of UC’s ongoing efforts to protect the data and information related to its operations and the personal information of its staff, faculty, and students. However, UC Santa Cruz does not currently have licenses or agreements with any generative AI services that provide similar protections. The use of these services could expose individual users and UC to the potential loss and/or abuse of highly sensitive data and information. Follow the guidance outlined in items 1 and 2 above.

  4. Cultivating awareness: Some generative AI features may be embedded within tools that you are already using (such as Zoom, Microsoft, or Otter.AI), and you may not realize they have been enabled or are in use. Please be aware and exercise caution as this landscape evolves rapidly. ITS will develop additional guidance in this area soon.
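
To make items 1 and 2 concrete, the sketch below shows one way a script or tool that calls a generative AI service could check a data classification label before submitting anything. This is a minimal, hypothetical Python example: the classification labels as string values, the function names, and the commented-out send step are assumptions for illustration only, not a UCSC-provided control.

    # Hypothetical pre-submission check: refuse to send data labeled P3 (Moderate
    # Risk) or P4 (High Risk) to an external generative AI service.
    ALLOWED = {"P1", "P2"}   # Minimal Risk, Low Risk: permitted per item 2
    BLOCKED = {"P3", "P4"}   # Moderate Risk, High Risk: prohibited per item 1

    def may_submit_to_generative_ai(classification: str) -> bool:
        """Return True only if the data's protection level permits submission."""
        return classification in ALLOWED and classification not in BLOCKED

    def submit_prompt(prompt_text: str, classification: str) -> None:
        """Refuse to forward prohibited data; otherwise hand off to a (hypothetical) client."""
        if not may_submit_to_generative_ai(classification):
            raise ValueError(
                f"Data classified {classification} may not be entered into "
                "generative AI tools or services."
            )
        # send_to_ai_service(prompt_text)  # placeholder for an actual API call

    if __name__ == "__main__":
        submit_prompt("Summarize this public press release.", "P1")   # permitted
        # submit_prompt("Student grade report ...", "P4")             # would raise ValueError

Note that the sketch permits only data explicitly labeled P1 or P2 and blocks everything else; a default-deny check of this kind is safer than trying to enumerate every prohibited data type.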

Please share this message with any student employees that you manage or supervise.

For questions about the information security guidance in this memo, please contact the UC Santa Cruz Information Security Team or your Unit Information Security Lead. Note that questions about the use of AI in human subjects research should be directed to the UCSC Institutional Review Board (IRB) for review.

Further guidance may be forthcoming.

Sincerely,
Melanie

Melanie Douglas
Acting Vice Chancellor for Information Technology

Resources

About Generative Artificial Intelligence (AI)

Generative Artificial Intelligence (AI) refers to models trained on massive amounts of data that can produce human-like responses in the form of text, code, images, audio, and video. Generative AI services, such as ChatGPT, Google Bard, Anthropic’s Claude, and DALL-E, can be used to support teaching and learning, research, and administration.

Similar to a search engine, you can submit questions or prompts to these platforms. However, unlike a search engine, which simply returns links to more information, generative AI platforms can take user-generated questions and provide comprehensive responses and, in certain instances, may even complete tasks on your behalf.

Individuals can submit additional information to a generative AI platform so that it can do more accurate work for them; however, the utility of generative AI platforms must be balanced against the inherent risks to information security and privacy. Not only will the generative AI platform retain a copy of the information you submit; its AI model may also be further trained on that information. Data submitted may be used by the AI platform in future responses to other end users of the service.


