The artificial intelligence revolution isn’t coming—it’s here. From the research assistant helping draft emails to the scheduling tool optimizing meeting times, AI has quietly woven itself into the fabric of modern work life. Yet many professionals find themselves using these powerful tools without fully understanding how they work, what they can and can’t do, or how to use them effectively and safely.

As AI becomes as common as spreadsheets or email, developing AI literacy isn’t just helpful—it’s essential. Whether you’re a faculty member exploring new research methodologies, an administrator streamlining operations, or a student preparing for your career, understanding the fundamentals of AI will help you work more effectively while avoiding common pitfalls.

What Is AI, Really?

At its core, artificial intelligence refers to computer systems that can perform tasks typically requiring human intelligence. But today’s AI—particularly generative AI like Copilot, Claude, or ChatGPT—works differently than you might expect.

Think of AI as a sophisticated pattern recognition system. These tools are trained on massive datasets containing text, images, code, and other information. They learn to identify patterns in this data and use those patterns to generate responses, complete tasks, or make predictions. They’re not searching a database for the “right” answer—they’re predicting what response would be most appropriate based on the patterns they’ve learned.

This distinction matters because it explains both AI’s remarkable capabilities and its significant limitations.

Core Concepts Every Professional Should Understand

Training Data: The Foundation of AI Knowledge

Every AI system learns from training data—the information used to teach it patterns and relationships. For large language models, this typically includes books, articles, websites, and other text sources, usually with a knowledge cutoff date.

What this means for you: An AI system’s knowledge is limited to what it was trained on, so it may lack information about recent events, your specific organization’s policies, or specialized knowledge in niche fields. Always verify important information, especially if it relates to current events or your specific context.

Hallucinations: When AI Gets Creative with Facts

One of the most important concepts to understand is “hallucination”—when AI generates information that sounds plausible but is actually incorrect or fabricated. This isn’t a bug; it’s an inherent characteristic of how these systems work.

Why it happens: AI generates responses by predicting what should come next based on patterns, not by accessing a reliable database of facts. Sometimes this process produces convincing-sounding information that simply isn’t true.

What this means for you: Never assume AI-generated information is accurate without verification, especially for:

  • Statistics and specific data points
  • Citations and references
  • Technical specifications
  • Historical facts or dates
  • Legal or medical information

Bias: AI Reflects Human Patterns

AI systems learn from human-created data, which means they can perpetuate or amplify human biases present in that data. This can affect everything from language translation to hiring recommendations.

What this means for you: Be particularly careful when using AI for decisions that affect people—hiring, evaluation, resource allocation, or content that will be widely shared. Consider diverse perspectives and human oversight for these applications.

Context Windows: AI’s Limited Memory

AI systems can only “remember” a limited amount of information at once, called a context window. Once you exceed this limit, the system starts “forgetting” earlier parts of your conversation.

What this means for you: For long documents or complex projects, you may need to break work into smaller chunks or periodically remind the AI of important context from earlier in your conversation.
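
To make the chunking idea concrete, here is a minimal Python sketch of splitting a long document into overlapping pieces before sending each one to a model. The character-based limit is a rough stand-in: real context windows are measured in tokens and vary by model.

    def chunk_text(text, max_chars=8000, overlap=200):
        """Split a long document into overlapping chunks for separate AI passes."""
        chunks = []
        start = 0
        while start < len(text):
            end = min(start + max_chars, len(text))
            chunks.append(text[start:end])
            if end == len(text):
                break
            # A small overlap carries context across chunk boundaries.
            start = end - overlap
        return chunks

Each chunk can then be summarized separately, with the partial summaries combined in a final pass.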

Evaluating AI Outputs: Your Critical Thinking Checklist

Developing the ability to critically evaluate AI responses is perhaps the most important skill for AI literacy. Here’s a practical framework:

The FACT Check:

  • Factual accuracy: Can you verify the information from reliable sources?
  • Appropriate tone and context: Does the response fit your needs and audience?
  • Complete and relevant: Does it address your actual question without unnecessary tangents?
  • Timely and current: Is the information up-to-date for your purposes?

Red flags to watch for:

  • Responses that seem too confident about uncertain topics
  • Information you can’t verify through other sources
  • Advice that contradicts established best practices in your field
  • Content that doesn’t quite match your specific context or requirements

Getting Better Results: The Art of AI Communication

Effective AI use is as much about communication as it is about understanding the technology. Here are key strategies:

Be specific and clear: Instead of “help me write something,” try “help me write a professional email declining a meeting request while suggesting alternative times.”

Provide context: Share relevant background information, your role, your audience, and your goals.

Iterate and refine: Use AI’s responses as starting points. Ask follow-up questions, request revisions, or build on initial outputs.

Set boundaries: Clearly state what you do and don’t want included in responses.

Verify and personalize: Always review outputs for accuracy and make them authentically yours.
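
Putting these strategies together, here is a minimal sketch using the OpenAI Python client as one example of many possible tools; the model name is a placeholder, and an API key is assumed to be configured. Note how the system message provides context while the user message is specific and sets boundaries.

    from openai import OpenAI

    client = OpenAI()  # assumes an API key is set in the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            # Provide context: your role, audience, and goal.
            {"role": "system",
             "content": "You help a university administrator write "
                        "professional, concise emails to faculty."},
            # Be specific, and set boundaries on length and content.
            {"role": "user",
             "content": "Draft a polite email declining a Thursday meeting "
                        "request and suggesting Tuesday or Friday afternoon "
                        "instead. Keep it under 120 words."},
        ],
    )
    print(response.choices[0].message.content)

The same structure works in any chat interface: state who you are, what you need, and what to leave out, then iterate on the result.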

Looking Ahead: Building Sustainable AI Practices

As AI capabilities continue to evolve rapidly, the most important skill isn’t learning specific tools—it’s developing the judgment to use AI effectively and ethically. This means:

Maintaining your expertise: Use AI to enhance your skills, not replace your critical thinking and domain knowledge.

Staying informed: Keep up with your organization’s AI policies and best practices in your field.

Being transparent: When appropriate, let others know when AI has contributed to your work.

Thinking ethically: Consider the broader implications of your AI use on colleagues, students, and your profession.

Your AI Literacy Journey Starts Now

AI literacy isn’t about becoming a technical expert—it’s about developing the knowledge and judgment to use these powerful tools effectively, safely, and ethically. Like any literacy, it develops through practice, reflection, and continuous learning.

Start small: Choose one AI tool and explore it thoughtfully. Pay attention to where it helps and where it falls short. Build your understanding gradually, always keeping human judgment at the center of your decision-making process.

The professionals who thrive in an AI-enabled workplace won’t necessarily be those who use AI the most—they’ll be those who use it most wisely. With the foundational knowledge covered here, you’re well-equipped to begin that journey.

In a world where artificial intelligence (AI) is rapidly transforming the workplace, a new global study commissioned by Workday and conducted by Hanover Research offers a refreshingly optimistic perspective: AI isn’t here to replace us—it’s here to elevate us.

Based on insights from 2,500 full-time workers across 22 countries, the report explores how AI is reshaping work by enhancing human creativity, leadership, learning, trust, and collaboration.

The Human-Centered Promise of AI

The study reveals that 83% of respondents believe AI will enhance human creativity and lead to new forms of economic value. Rather than automating people out of relevance, AI is seen as a tool that frees individuals from routine tasks, allowing them to focus on higher-order skills like ethical decision-making, emotional intelligence, and strategic thinking.

Five Principles for Thriving with AI

The report is anchored in five core principles that define how organizations can thrive in an AI-enabled future:

1. Creativity, Elevated

AI acts as a creative assistant, helping individuals generate ideas and solutions faster and more effectively. It enables people to bring imagination to their roles—whether in administrative workflows or product innovation.

2. Leadership, Elevated

AI supports empathetic leadership by providing real-time insights into team dynamics and freeing up time for human connection. It helps leaders make more objective decisions and focus on what matters most—their people.

3. Learning, Elevated

AI enhances learning by identifying skill gaps, personalizing development, and democratizing access to knowledge. It empowers organizations to build agile, future-ready teams.

4. Trust, Elevated

Transparency and responsible AI practices are essential. A striking 90% of respondents agree that AI can increase organizational accountability, but trust must be built collaboratively across sectors.

5. Collaboration, Elevated

AI breaks down data silos and enables seamless collaboration across departments and between humans and machines. It fosters a new kind of teamwork where AI augments human potential.

Key Findings at a Glance

  • 93% of AI users say it allows them to focus on strategic tasks.
  • Top irreplaceable skills: ethical decision-making, empathy, relationship-building, and conflict resolution.
  • Biggest challenges to AI adoption: uncertainty about ROI, data privacy, and integration complexity.
  • Most impactful missing skills: cultural sensitivity, adaptability, and strategic planning.

A Call to Action

The report concludes with a clear message: the AI revolution is not just technological—it’s deeply human. Organizations must:

  • Embrace human-centric leadership.
  • Foster collaboration between people and AI.
  • Invest in upskilling and reskilling.
  • Promote transparency and accountability.

AI is not the new face of work—it’s the force that allows our human talent to shine brighter.

If you’ve used Microsoft Copilot in the web browser, you’ve seen the potential of AI, but that’s just the beginning. The paid license for Copilot in Microsoft 365 unlocks the full experience: seamless integration, personalization, and smart automation right inside the tools you already use every day.

Built Directly into Office Apps

The primary difference between the free version of Copilot and the paid license is integration into the everyday Microsoft tools you already use:

  • Word: Generate, summarize, or rewrite entire documents

  • Excel: Analyze data, create formulas, and find trends

  • PowerPoint: Build decks from scratch using just a prompt

  • Outlook: Write polished replies and suggest scheduling options

  • Teams: Summarize meetings (live or after the fact), track tasks, and surface key points

Copilot Free vs. Paid: What’s the Difference?

Feature                                       Free (Web Version)   Paid Copilot for M365
Commercial Data Protection                    Yes                  Yes
Access to your M365 data                      No                   Yes
Embedded in Word, Excel, etc.                 No                   Yes
Personalized replies based on your work       No                   Yes
Summary of meetings, emails, and documents    No                   Yes

The biggest difference between the free and paid licenses is what Copilot can do with your data once it is built into your Microsoft apps.

With the paid version:

  • Your data—emails, files, Teams chats, calendars—never leaves the Microsoft 365 environment.

  • Copilot runs within your secure tenant, using Microsoft’s Zero Trust architecture.

  • It does not send data to public servers or use it to train language models.

This means your interactions with Copilot stay subject to the same security and compliance rules that already govern tools like Outlook, OneDrive, and SharePoint.

The paid Copilot solution aligns with:

  • FERPA, HIPAA (where applicable), and other regulatory standards

  • UW’s internal data classification and acceptable use policies

  • Microsoft’s own Customer Data Protection Policy, with clear data residency and ownership agreements

This makes it fundamentally different from using free tools like ChatGPT, Bing AI, or Gemini, which do not reside within UW’s managed infrastructure. It’s a secure, integrated solution that respects your files, your permissions, and your data boundaries. That’s what makes it appropriate for handling work with moderate sensitivity, as long as users still follow internal data policies.

“What can I actually use AI for—and how do I avoid getting myself in trouble?”

That’s one of the first (and smartest) questions people ask when trying out generative AI tools. There has been a lot of excitement around these tools, and there are certainly plenty of them on the market.

At the Universities of Wisconsin, we currently do not have an enterprise or educational license for tools like ChatGPT, Claude, Gemini, or DeepSeek. (The exception is Microsoft Copilot, which has a paid license in some systems.)

That means: while these tools can be powerful for brainstorming, writing, and automating routine tasks, it’s critical to use them responsibly—especially when it comes to handling data.

Here’s what you need to know about using public AI tools safely and wisely. (This does not pertain to the paid Copilot license.)

OK to Use ChatGPT (or Any Public LLM) For:

  • Drafting non-sensitive communications, like email templates or project summaries

  • Brainstorming ideas (e.g., presentation outlines, workshop formats, survey questions)

  • Creating or refining generic text (e.g., help articles, documentation)

  • Generating code snippets without user or student data

  • Exploring concepts or summarizing public information

  • Clarifying technical terms or academic topics

  • Getting grammar and tone feedback on professional writing

  • Support for creative ideation and storytelling in student engagement materials

NOT OK to Use ChatGPT (or Any Public LLM) For:

  • Sharing personally identifiable information (PII), such as names, ID numbers, or student records

  • Discussing confidential or restricted information, including contracts, budget documents, HR cases, or sensitive governance topics

  • Uploading or pasting internal documents marked confidential or not meant for public disclosure

  • Using it to make final decisions about student admission, financial aid, or disciplinary action

  • Treating ChatGPT outputs as authoritative without verification—it’s a tool, not a source of record

This includes technical communication as well. It is OK to use ChatGPT for general-purpose coding and debugging. It is NOT acceptable to use ChatGPT if the code contains student data, internal APIs, secure tokens, or authentication keys. That means anything tied to PeopleSoft, SIS, Workday, etc. Be especially cautious about using the model to generate outputs that affect users without human oversight (e.g., scripts that automatically move money, update grades, or modify access rights).

If you wouldn’t email the code to a stranger without redacting it, don’t paste it into an LLM.
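
If you must share code, redact it first. The Python sketch below is an illustrative starting point only; the patterns are hypothetical, not exhaustive, and a manual review is still required before pasting anything into a public tool.

    import re

    # Illustrative patterns only; adapt to your own secret and ID formats.
    SECRET_PATTERNS = [
        (re.compile(r"(?i)(api[_-]?key|token|secret|password)\s*=\s*['\"][^'\"]+['\"]"),
         r"\1 = '<REDACTED>'"),
        (re.compile(r"\b\d{7,10}\b"), "<ID_REDACTED>"),  # e.g., ID numbers
    ]

    def redact(code: str) -> str:
        """Strip obvious secrets and identifiers before sharing code externally."""
        for pattern, replacement in SECRET_PATTERNS:
            code = pattern.sub(replacement, code)
        return code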

Tips for Responsible Use

  • Treat AI as a thought partner, not a decision-maker.

  • If in doubt, strip out sensitive details and ask generalized questions.

  • Assume anything you input is visible externally, unless you’re using a licensed instance with data protection agreements.

  • Attribute any final content or policies to approved institutional sources, not ChatGPT.