Vanderbilt University Staff Guidance for ChatGPT, OpenAI and other Generative Artificial Intelligence (AI) tools

Guidance issued July 13, 2023

A new wave of generative AI tools, such as ChatGPT and DALL-E 2, is changing the way many institutions, including Vanderbilt University, conduct business. These technologies can help streamline processes, activities, and tasks across the university; at the same time, they may also introduce new security risks, both for staff members and for the university itself. With that in mind, the university requests that staff follow the guidelines below when interacting with new generative AI tools:

  • Explore the capabilities of new technologies: New AI technologies can simplify labor-intensive tasks. Employees are encouraged to harness the capabilities of generative AI and incorporate them into their day-to-day workflows.
  • Seek feedback from supervisors: Generative AI tools serve many purposes, but they are not meant to solve all problems. Staff should discuss any use of generative AI with their supervisors to ensure that using these tools is appropriate for the work being completed.
  • Utilize new AI training modules provided by the university: The university has launched several new training modules on generative AI, including the prompt engineering course taught by Vanderbilt University professor Jules White. Staff are encouraged to take advantage of these training modules for both their personal and professional development.
  • Understand confidentiality: The terms and conditions and privacy policies for many generative AI technologies allow the underlying companies to use inputs and outputs for their own legitimate business purposes, including sharing information about how their tool, technology, or system is used by others with potential buyers or investors. [1]
  • Don’t put anything restricted into generative AI tools: Information that is restricted by law (FERPA, HIPAA, etc.), by contract, or by other agreements should not be entered into generative AI tools.
  • Don’t put anything confidential or sensitive into generative AI tools: There is no guarantee that information entered into these AI tools will remain confidential. Sensitive information should not be shared.
  • Consider ownership of outputs: Generative AI often draws upon the work of others, including trademarked or copyrighted content, to create new images. At this point, it is not clear who owns image outputs generated by these technologies. Until it is, staff should avoid using generative AI tools for projects when the university must, for business purposes, own the final work product.
  • Don’t assume that outputs are accurate: There have been numerous, well-documented instances of AI producing results that seem realistic but are in fact entirely false: made-up ‘facts’ invented by the system. Do not trust any factual output from ChatGPT or other generative AI tools; always double-check information for accuracy.
  • Ensure that outputs are consistent with university values: Staff should review AI outputs to ensure accuracy and alignment with university values, especially in situations where human empathy and connection are required.

To report a security issue related to AI tools, please submit a report on the Office of Cybersecurity website.

The university will revisit these guidelines frequently, as the use of these AI programs evolves rapidly.


[1] As of April 2023, ChatGPT offers an ‘incognito mode’ for the chatbot, which will neither retain user-inputted data nor use it to build or improve the underlying model. Staff are encouraged to use incognito mode when available.