What Is This?

This is your quick-reference library of ready-to-use AI prompts for common workplace tasks at the Universities of Wisconsin. No need to start from scratch every time!

How to Use This Library

Step 1: Open the File

Download and open the Excel file (UW AI Prompt Library.xlsx) and explore the prompts organized by category.

Step 2: Make It Searchable

  • Use Ctrl+F (Cmd+F on Mac) to search by keyword
  • Filter by Category using Excel’s filter tools (Data > Filter)
  • Freeze the header row so you can scroll easily while keeping categories visible
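If you are comfortable with a little code, the same category filtering can be sketched in Python. This is a minimal illustration only; the rows and the "Category" and "Prompt Name" column names are made-up samples, not the actual contents of the library file.

```python
# Sketch: filtering prompt-library rows by category, mirroring
# Excel's Data > Filter. All rows below are illustrative samples.
library = [
    {"Category": "Email Communication", "Prompt Name": "Professional Email Response"},
    {"Category": "Data Analysis", "Prompt Name": "Survey Data Interpretation"},
    {"Category": "Email Communication", "Prompt Name": "Follow-Up Email"},
]

def by_category(rows, category):
    """Return only the rows whose Category matches."""
    return [row for row in rows if row["Category"] == category]

for row in by_category(library, "Email Communication"):
    print(row["Prompt Name"])
```

The same idea works if you export the spreadsheet to CSV and load it with Python's csv module.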

Step 3: Use a Prompt

  1. Find a prompt that matches your task
  2. Read the “When to Use” column to confirm it fits
  3. Copy the “Prompt Template”
  4. Replace the [BRACKETED SECTIONS] with your specific information
  5. Look at “Example Variables” for inspiration
  6. Paste into the AI tool of your choice!*
*Note: For any prompt that uses internal information, always follow responsible data classification and protection practices. Data classified as low risk under UW Administrative Policy SYS 1031, Information Security: Data Classification and Protection, can be used freely with generative AI tools such as ChatGPT, Gemini, Claude, and Grok. These tools are fine when you are using public information or sharing no institutional data.

When working with moderate- or high-risk data, use only an enterprise-approved AI tool with appropriate data protection.

Questions? Contact the IT helpdesk for assistance.

The “Persona + Task + Context” Formula

Many prompts use this proven structure:

  • Persona: “I am a…” or “You are a…” (sets the role/expertise)
  • Task: What you need the AI to do
  • Context: Background information, constraints, audience, tone

Example: “I am a program coordinator at the Universities of Wisconsin. I need to draft a follow-up email after a meeting with campus partners. Key decisions: pilot new initiative for student support. Action items: review requirements and timeline. Tone: enthusiastic but professional.”
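The formula above amounts to filling in three slots and joining them. A minimal sketch in Python (the function and variable names are illustrative, not part of the library):

```python
# Sketch: assembling a "Persona + Task + Context" prompt from its parts.
# The field names and example values here are illustrative only.
def build_prompt(persona: str, task: str, context: str) -> str:
    """Combine the three parts of the formula into one prompt string."""
    return f"{persona} {task} {context}"

prompt = build_prompt(
    persona="I am a program coordinator at the Universities of Wisconsin.",
    task="I need to draft a follow-up email after a meeting with campus partners.",
    context=("Key decisions: pilot new initiative for student support. "
             "Action items: review requirements and timeline. "
             "Tone: enthusiastic but professional."),
)
print(prompt)
```

Keeping the parts separate like this makes it easy to reuse the same persona and tone across many tasks.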

Tips for Success

Be Specific

Don’t: “Help me with emails”

Do: “I need to respond to 20 similar emails about program deadlines. Draft a template I can personalize.”

Iterate

Don’t expect perfection on the first try! If the output isn’t quite right:

  • Add more details
  • Clarify the tone
  • Provide an example
  • Ask the AI to revise specific parts

Customize Prompts

These are STARTING POINTS. Feel free to:

  • Add your own variables
  • Adjust the tone
  • Combine multiple prompts
  • Create variations that work for your style

Save Your Wins

When you find a variation that works really well:

  • Add it to your library!
  • Share it with your team
  • Note what made it effective

Categories Explained

Email Communication: High-volume correspondence, follow-ups, professional messaging

Data Analysis: Spreadsheets, surveys, identifying patterns, creating insights

Meeting Facilitation: Agendas, icebreakers, note-taking, follow-ups

Documentation: Process docs, summaries, policy language

Project Planning: Timelines, risk assessment, stakeholder communication

Student/Stakeholder Communication: FAQs, program descriptions, plain language

Content Creation: Newsletters, social media, presentations

Training & Education: Learning modules, quizzes, case studies

Research & Analysis: Literature reviews, competitive analysis, trend identification

Problem Solving: Root cause analysis, brainstorming, decision-making

Personal Productivity: Task management, email batching, meeting prep

Common Mistakes to Avoid

  1. Too Vague: “Write something about AI” → Not enough guidance
  2. No Context: AI doesn’t know your audience, constraints, or goals unless you tell it
  3. Expecting Mind-Reading: The more specific you are, the better the output
  4. Copy-Paste Without Reviewing: Always review and personalize AI outputs!
  5. Forgetting Tone: Specify if you want formal, casual, friendly, technical, etc.

Quick Reference: Most Popular Prompts

Based on common workplace needs, these are likely your most-used:

  1. Professional Email Response – For handling high-volume inquiries and correspondence
  2. Meeting Agenda Creator – For structured meetings and planning sessions
  3. Survey Data Interpretation – For making sense of feedback and assessments
  4. Meeting Notes Template – For standardizing documentation
  5. FAQ Generator – For creating support resources

Examples in Action

Example 1: Email Response

Prompt Used: Professional Email Response

Filled In: “I need to respond to an email about program application deadlines. The sender is a prospective student. Key points to address: deadline is March 1st and firm, late applications go to different process, encourage early completion. Tone should be friendly but clear.”

Result: Professional, consistent response ready to personalize

Example 2: Meeting Prep

Prompt Used: Meeting Agenda Creator

Filled In: “Create a meeting agenda for a 75-minute planning session about improving departmental processes. Attendees: 4 team members with varying experience levels. Goals: identify current challenges, explore potential solutions, gather input on priorities. Include time blocks.”

Result: Structured agenda with appropriate time allocation

Example 3: Making Data Digestible

Prompt Used: Survey Data Interpretation

Filled In: “I have survey data with 187 responses about workplace technology preferences. Key findings I’m seeing: 72% interested in training opportunities, 58% concerned about data privacy, 45% currently using tools unofficially, 89% want clearer guidelines. Help me identify patterns and craft 3-5 key insights for a summary report.”

Result: Clear narrative insights for leadership presentation

Remember: AI is a Collaborative Partner

Think of these prompts as starting conversations with an AI colleague, not commands to a robot. The best results come from:

  • Clear communication
  • Iterative refinement
  • Your human judgment and expertise
  • Treating AI as a thought partner, not a replacement

Version: 1.0 (November 2025)
Created for: Universities of Wisconsin
Wisconsin Idea in Action: Extending knowledge to serve Wisconsin

The artificial intelligence revolution isn’t coming—it’s here. From the research assistant helping draft emails to the scheduling tool optimizing meeting times, AI has quietly woven itself into the fabric of modern work life. Yet many professionals find themselves using these powerful tools without fully understanding how they work, what they can and can’t do, or how to use them effectively and safely.

As AI becomes as common as spreadsheets or email, developing AI literacy isn’t just helpful—it’s essential. Whether you’re a faculty member exploring new research methodologies, an administrator streamlining operations, or a student preparing for your career, understanding the fundamentals of AI will help you work more effectively while avoiding common pitfalls.

What Is AI, Really?

At its core, artificial intelligence refers to computer systems that can perform tasks typically requiring human intelligence. But today’s AI—particularly generative AI like Copilot, Claude, or ChatGPT—works differently than you might expect.

Think of AI as a sophisticated pattern recognition system. These tools are trained on massive datasets containing text, images, code, and other information. They learn to identify patterns in this data and use those patterns to generate responses, complete tasks, or make predictions. They’re not searching a database for the “right” answer—they’re predicting what response would be most appropriate based on the patterns they’ve learned.

This distinction matters because it explains both AI’s remarkable capabilities and its significant limitations.

Core Concepts Every Professional Should Understand

Training Data: The Foundation of AI Knowledge

Every AI system learns from training data—the information used to teach it patterns and relationships. For large language models, this typically includes books, articles, websites, and other text sources, usually with a knowledge cutoff date.

What this means for you: AI systems know what they were trained on, but they may not have information about recent events, your specific organization’s policies, or specialized knowledge in niche fields. Always verify important information, especially if it relates to current events or your specific context.

Hallucinations: When AI Gets Creative with Facts

One of the most important concepts to understand is “hallucination”—when AI generates information that sounds plausible but is actually incorrect or fabricated. This isn’t a bug; it’s an inherent characteristic of how these systems work.

Why it happens: AI generates responses by predicting what should come next based on patterns, not by accessing a reliable database of facts. Sometimes this process produces convincing-sounding information that simply isn’t true.

What this means for you: Never assume AI-generated information is accurate without verification, especially for:

  • Statistics and specific data points
  • Citations and references
  • Technical specifications
  • Historical facts or dates
  • Legal or medical information

Bias: AI Reflects Human Patterns

AI systems learn from human-created data, which means they can perpetuate or amplify human biases present in that data. This can affect everything from language translation to hiring recommendations.

What this means for you: Be particularly careful when using AI for decisions that affect people—hiring, evaluation, resource allocation, or content that will be widely shared. Consider diverse perspectives and human oversight for these applications.

Context Windows: AI’s Limited Memory

AI systems can only “remember” a limited amount of information at once, called a context window. Once you exceed this limit, the system starts “forgetting” earlier parts of your conversation.

What this means for you: For long documents or complex projects, you may need to break work into smaller chunks or periodically remind the AI of important context from earlier in your conversation.
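Breaking a long document into chunks can be sketched in a few lines of Python. This is a simplified illustration (the character limit is arbitrary; real context windows are measured in tokens, not characters):

```python
# Sketch: split a long text into chunks that fit a limited context window.
# Splits on paragraph boundaries; max_chars is an arbitrary stand-in for
# a real token limit.
def chunk_text(text: str, max_chars: int = 2000) -> list[str]:
    paragraphs = text.split("\n\n")
    chunks, current = [], ""
    for para in paragraphs:
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)   # current chunk is full; start a new one
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

# Example: a long report split into pieces for separate AI requests.
report = "\n\n".join(f"Paragraph {i}: " + "text " * 50 for i in range(20))
pieces = chunk_text(report, max_chars=1500)
print(len(pieces), "chunks, longest:", max(len(p) for p in pieces))
```

Each chunk can then be sent in its own request, with a brief reminder of the earlier context prepended.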

Evaluating AI Outputs: Your Critical Thinking Checklist

Developing the ability to critically evaluate AI responses is perhaps the most important skill for AI literacy. Here’s a practical framework:

The FACT Check:

  • Factual accuracy: Can you verify the information from reliable sources?
  • Appropriate tone and context: Does the response fit your needs and audience?
  • Complete and relevant: Does it address your actual question without unnecessary tangents?
  • Timely and current: Is the information up-to-date for your purposes?

Red flags to watch for:

  • Responses that seem too confident about uncertain topics
  • Information you can’t verify through other sources
  • Advice that contradicts established best practices in your field
  • Content that doesn’t quite match your specific context or requirements

Getting Better Results: The Art of AI Communication

Effective AI use is as much about communication as it is about understanding the technology. Here are key strategies:

Be specific and clear: Instead of “help me write something,” try “help me write a professional email declining a meeting request while suggesting alternative times.”

Provide context: Share relevant background information, your role, your audience, and your goals.

Iterate and refine: Use AI’s responses as starting points. Ask follow-up questions, request revisions, or build on initial outputs.

Set boundaries: Clearly state what you do and don’t want included in responses.

Verify and personalize: Always review outputs for accuracy and make them authentically yours.

Looking Ahead: Building Sustainable AI Practices

As AI capabilities continue to evolve rapidly, the most important skill isn’t learning specific tools—it’s developing the judgment to use AI effectively and ethically. This means:

Maintaining your expertise: Use AI to enhance your skills, not replace your critical thinking and domain knowledge.

Staying informed: Keep up with your organization’s AI policies and best practices in your field.

Being transparent: When appropriate, let others know when AI has contributed to your work.

Thinking ethically: Consider the broader implications of your AI use on colleagues, students, and your profession.

Your AI Literacy Journey Starts Now

AI literacy isn’t about becoming a technical expert—it’s about developing the knowledge and judgment to use these powerful tools effectively, safely, and ethically. Like any literacy, it develops through practice, reflection, and continuous learning.

Start small: Choose one AI tool and explore it thoughtfully. Pay attention to where it helps and where it falls short. Build your understanding gradually, always keeping human judgment at the center of your decision-making process.

The professionals who thrive in an AI-enabled workplace won’t necessarily be those who use AI the most—they’ll be those who use it most wisely. With the foundational knowledge covered here, you’re well-equipped to begin that journey.

In a world where artificial intelligence (AI) is rapidly transforming the workplace, a new global study commissioned by Workday and conducted by Hanover Research offers a refreshingly optimistic perspective: AI isn’t here to replace us—it’s here to elevate us.

Based on insights from 2,500 full-time workers across 22 countries, the report explores how AI is reshaping work by enhancing human creativity, leadership, learning, trust, and collaboration.

The Human-Centered Promise of AI

The study reveals that 83% of respondents believe AI will enhance human creativity and lead to new forms of economic value. Rather than automating people out of relevance, AI is seen as a tool that frees individuals from routine tasks, allowing them to focus on higher-order skills like ethical decision-making, emotional intelligence, and strategic thinking.

Five Principles for Thriving with AI

The report is anchored in five core principles that define how organizations can thrive in an AI-enabled future:

1. Creativity, Elevated

AI acts as a creative assistant, helping individuals generate ideas and solutions faster and more effectively. It enables people to bring imagination to their roles—whether in administrative workflows or product innovation.

2. Leadership, Elevated

AI supports empathetic leadership by providing real-time insights into team dynamics and freeing up time for human connection. It helps leaders make more objective decisions and focus on what matters most—their people.

3. Learning, Elevated

AI enhances learning by identifying skill gaps, personalizing development, and democratizing access to knowledge. It empowers organizations to build agile, future-ready teams.

4. Trust, Elevated

Transparency and responsible AI practices are essential. A striking 90% of respondents agree that AI can increase organizational accountability, but trust must be built collaboratively across sectors.

5. Collaboration, Elevated

AI breaks down data silos and enables seamless collaboration across departments and between humans and machines. It fosters a new kind of teamwork where AI augments human potential.

Key Findings at a Glance

  • 93% of AI users say it allows them to focus on strategic tasks.
  • Top irreplaceable skills: ethical decision-making, empathy, relationship-building, and conflict resolution.
  • Biggest challenges to AI adoption: uncertainty about ROI, data privacy, and integration complexity.
  • Most impactful missing skills: cultural sensitivity, adaptability, and strategic planning.

A Call to Action

The report concludes with a clear message: the AI revolution is not just technological—it’s deeply human. Organizations must:

  • Embrace human-centric leadership.
  • Foster collaboration between people and AI.
  • Invest in upskilling and reskilling.
  • Promote transparency and accountability.

AI is not the new face of work—it’s the force that allows our human talent to shine brighter.

“What can I actually use AI for—and how do I avoid getting myself in trouble?”


That’s one of the first (and smartest) questions people ask when trying out generative AI tools. There has been a lot of excitement around these tools, and there are a great many of them on the market.

At the Universities of Wisconsin, we currently do not have an enterprise or educational license for tools like ChatGPT, Claude, Gemini, or DeepSeek. (The exception is Microsoft Copilot, which has a paid license in some systems.)

That means: while these tools can be powerful for brainstorming, writing, and automating routine tasks, it’s critical to use them responsibly—especially when it comes to handling data.

Here’s what you need to know about using public AI tools safely and wisely. (This guidance does not pertain to the paid Copilot license.)

OK to Use ChatGPT (or Any Public LLM) For:

  • Drafting non-sensitive communications, like email templates or project summaries

  • Brainstorming ideas (e.g., presentation outlines, workshop formats, survey questions)

  • Creating or refining generic text (e.g., help articles, documentation)

  • Generating code snippets without user or student data

  • Exploring concepts or summarizing public information

  • Clarifying technical terms or academic topics

  • Getting grammar and tone feedback on professional writing

  • Support for creative ideation and storytelling in student engagement materials

NOT OK to Use ChatGPT (or Any Public LLM) For:

  • Sharing personally identifiable information (PII), such as names, ID numbers, or student records

  • Discussing confidential or restricted information, including contracts, budget documents, HR cases, or sensitive governance topics

  • Uploading or pasting internal documents marked confidential or not meant for public disclosure

  • Using it to make final decisions about student admission, financial aid, or disciplinary action

  • Treating ChatGPT outputs as authoritative without verification—it’s a tool, not a source of record

This includes technical communication as well. It is OK to use ChatGPT for general-purpose coding and debugging. It is NOT acceptable to use ChatGPT if the code contains student data, internal APIs, secure tokens, or authentication keys. That means anything tied to PeopleSoft, SIS, Workday, etc. Be especially cautious about using the model to generate decisions that affect users without human oversight (e.g., scripts that automatically move money, update grades, or modify access rights).

If you wouldn’t email the code to a stranger without redacting it, don’t paste it into an LLM.
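Making redaction a habit can be as simple as a small helper script run before sharing. A minimal sketch (the patterns are illustrative only and do NOT catch every kind of secret; always review manually as well):

```python
import re

# Sketch: redact obvious secrets before pasting code into a public LLM.
# These patterns are illustrative, not exhaustive; manual review still applies.
PATTERNS = [
    # key = "value" style assignments for common secret names
    (re.compile(r'(?i)(api[_-]?key|token|secret|password)\s*=\s*["\'][^"\']+["\']'),
     r'\1 = "[REDACTED]"'),
    # email addresses
    (re.compile(r'\b[\w.+-]+@[\w-]+\.[\w.]+\b'), "[EMAIL REDACTED]"),
]

def redact(code: str) -> str:
    for pattern, replacement in PATTERNS:
        code = pattern.sub(replacement, code)
    return code

snippet = 'api_key = "abc123"\nsend_report("jane.doe@example.edu")'
print(redact(snippet))
```

A script like this is a safety net, not a guarantee: structured identifiers such as student IDs or internal hostnames still need a human eye.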

Tips for Responsible Use

  • Treat AI as a thought partner, not a decision-maker.

  • If in doubt, strip out sensitive details and ask generalized questions.

  • Assume anything you input is visible externally, unless you’re using a licensed instance with data protection agreements.

  • Attribute any final content or policies to approved institutional sources, not to ChatGPT.