“What can I actually use AI for—and how do I avoid getting myself in trouble?”


That’s one of the first (and smartest) questions people ask when trying out generative AI tools. There is plenty of excitement around these tools, and no shortage of them on the market.

At the Universities of Wisconsin, we currently do not have an enterprise or educational license for tools like ChatGPT, Claude, Gemini, or DeepSeek. (The exception is Microsoft Copilot, for which some institutions hold a paid license.)

That means: while these tools can be powerful for brainstorming, writing, and automating routine tasks, it’s critical to use them responsibly—especially when it comes to handling data.

Here’s what you need to know about using public AI tools safely and wisely. (This guidance does not apply to the paid Copilot license.)

OK to Use ChatGPT (or Any Public LLM) For:

  • Drafting non-sensitive communications, like email templates or project summaries

  • Brainstorming ideas (e.g., presentation outlines, workshop formats, survey questions)

  • Creating or refining generic text (e.g., help articles, documentation)

  • Generating code snippets without user or student data

  • Exploring concepts or summarizing public information

  • Clarifying technical terms or academic topics

  • Getting grammar and tone feedback on professional writing

  • Supporting creative ideation and storytelling in student engagement materials

NOT OK to Use ChatGPT (or Any Public LLM) For:

  • Sharing personally identifiable information (PII), such as names, ID numbers, or student records

  • Discussing confidential or restricted information, including contracts, budget documents, HR cases, or sensitive governance topics

  • Uploading or pasting internal documents marked confidential or not meant for public disclosure

  • Using it to make final decisions about student admission, financial aid, or disciplinary action

  • Treating ChatGPT outputs as authoritative without verification—it’s a tool, not a source of record

The same rules apply to technical work. It is OK to use ChatGPT for general-purpose coding and debugging. It is NOT acceptable when the code contains student data, internal APIs, secure tokens, or authentication keys; in practice, that means anything tied to PeopleSoft, SIS, Workday, and similar systems. Be especially cautious about letting the model generate decisions that affect users without human oversight (e.g., scripts that automatically move money, update grades, or modify access rights).

If you wouldn’t email the code to a stranger without redacting it, don’t paste it into an LLM.
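As a concrete illustration, here is what that redaction might look like. Everything below is invented for the example (the endpoint, token format, and student ID are placeholders, not real systems or records), and the unredacted version is shown only as comments:

```python
# BEFORE (do NOT paste): hardcoded credential, internal endpoint, real ID.
# All values here are invented for this example.
#
#   resp = requests.get(
#       "https://sis.internal.example.edu/api/v1/students/9075551234",
#       headers={"Authorization": "Bearer sk-live-abc123"},
#   )

# AFTER (safe to share): placeholders preserve the structure of the problem
# without exposing anything sensitive.
import requests

API_BASE = "https://example.com/api/v1"  # placeholder endpoint
TOKEN = "<REDACTED_TOKEN>"               # never include a real key
STUDENT_ID = "<STUDENT_ID>"              # dummy value, not a real record

resp = requests.get(
    f"{API_BASE}/students/{STUDENT_ID}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
print(resp.status_code)
```

The redacted version still gives the model the full shape of the problem, while nothing sensitive ever leaves your machine.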

Tips for Responsible Use

  • Treat AI as a thought partner, not a decision-maker.

  • If in doubt, strip out sensitive details and ask generalized questions (see the sketch at the end of this list).

  • Assume anything you input is visible externally, unless you’re using a licensed instance with data protection agreements.

  • Attribute any final content or policy language to approved institutional sources, not ChatGPT.
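As a minimal sketch of what “stripping out sensitive details” can look like in practice, the helper below masks a few common identifier patterns before text is pasted into a public LLM. The patterns are illustrative only and certainly not exhaustive; no automated scrub replaces a human read-through:

```python
import re

def scrub(text: str) -> str:
    """Mask common identifier patterns before pasting text into a public LLM.

    Illustrative only: these patterns catch obvious cases, not every form
    of PII. Always review the result manually before sharing.
    """
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "<EMAIL>", text)  # email addresses
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "<SSN>", text)       # SSN-style numbers
    text = re.sub(r"\b\d{7,10}\b", "<ID_NUMBER>", text)          # long numeric IDs
    return text

print(scrub("Student 9075551 (jdoe@example.edu) can't log in to the portal."))
# -> Student <ID_NUMBER> (<EMAIL>) can't log in to the portal.
```

Treat a helper like this as a seatbelt, not a substitute for judgment; when in doubt, leave the detail out entirely.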