In a world where artificial intelligence (AI) is rapidly transforming the workplace, a new global study commissioned by Workday and conducted by Hanover Research offers a refreshingly optimistic perspective: AI isn’t here to replace us—it’s here to elevate us.

Based on insights from 2,500 full-time workers across 22 countries, the report explores how AI is reshaping work by enhancing human creativity, leadership, learning, trust, and collaboration.

The Human-Centered Promise of AI

The study reveals that 83% of respondents believe AI will enhance human creativity and lead to new forms of economic value. Rather than automating people out of relevance, AI is seen as a tool that frees individuals from routine tasks, allowing them to focus on higher-order skills like ethical decision-making, emotional intelligence, and strategic thinking.

Five Principles for Thriving with AI

The report is anchored in five core principles that define how organizations can thrive in an AI-enabled future:

1. Creativity, Elevated

AI acts as a creative assistant, helping individuals generate ideas and solutions faster and more effectively. It enables people to bring imagination to their roles—whether in administrative workflows or product innovation.

2. Leadership, Elevated

AI supports empathetic leadership by providing real-time insights into team dynamics and freeing up time for human connection. It helps leaders make more objective decisions and focus on what matters most—their people.

3. Learning, Elevated

AI enhances learning by identifying skill gaps, personalizing development, and democratizing access to knowledge. It empowers organizations to build agile, future-ready teams.

4. Trust, Elevated

Transparency and responsible AI practices are essential. A striking 90% of respondents agree that AI can increase organizational accountability, but trust must be built collaboratively across sectors.

5. Collaboration, Elevated

AI breaks down data silos and enables seamless collaboration across departments and between humans and machines. It fosters a new kind of teamwork where AI augments human potential.

Key Findings at a Glance

  • 93% of AI users say it allows them to focus on strategic tasks.
  • Top irreplaceable skills: ethical decision-making, empathy, relationship-building, and conflict resolution.
  • Biggest challenges to AI adoption: uncertainty about ROI, data privacy, and integration complexity.
  • Most impactful missing skills: cultural sensitivity, adaptability, and strategic planning.

A Call to Action

The report concludes with a clear message: the AI revolution is not just technological—it’s deeply human. Organizations must:

  • Embrace human-centric leadership.
  • Foster collaboration between people and AI.
  • Invest in upskilling and reskilling.
  • Promote transparency and accountability.

AI is not the new face of work—it’s the force that allows our human talent to shine brighter.

If you’ve used Microsoft Copilot in the web browser, you’ve seen the potential for AI with internal data, but that’s just the beginning. The paid license for Copilot in Microsoft 365 unlocks the full potential: seamless integration, personalization, and smart automation right inside the tools you already use every day.

Built Directly into Office Apps

The primary difference between the free version of Copilot and the paid license is its integration into the everyday Microsoft tools you already use:

  • Word: Generate, summarize, or rewrite entire documents

  • Excel: Analyze data, create formulas, and find trends

  • PowerPoint: Build decks from scratch using just a prompt

  • Outlook: Write polished replies and suggest scheduling options

  • Teams: Summarize meetings (live or after the fact), track tasks, and surface key points

Copilot Free vs. Paid: What’s the Difference?

Feature                                       Free (Web Version)   Paid Copilot for M365
Commercial Data Protection                    Yes                  Yes
Access to your M365 data                      No                   Yes
Embedded in Word, Excel, etc.                 No                   Yes
Personalized replies based on your work       No                   Yes
Summary of meetings, emails, and documents    No                   Yes

The biggest difference between the free and paid licenses of Copilot is what you can do with your data once Copilot is built into your Microsoft apps.

With the paid version:

  • Your data—emails, files, Teams chats, calendars—never leaves the Microsoft 365 environment.

  • Copilot runs within your secure tenant, using Microsoft’s Zero Trust architecture.

  • It does not send data to public servers or use it to train language models.

This means your interactions with Copilot stay subject to the same security and compliance rules that already govern tools like Outlook, OneDrive, and SharePoint.

The paid Copilot solution aligns with:

  • FERPA, HIPAA (where applicable), and other regulatory standards

  • UW’s internal data classification and acceptable use policies

  • Microsoft’s own Customer Data Protection Policy, with clear data residency and ownership agreements

This makes it fundamentally different from using free tools like ChatGPT, Bing AI, or Gemini, which do not reside within UW’s managed infrastructure. It’s a secure, integrated solution that respects your files, your permissions, and your data boundaries. That’s what makes it appropriate for handling work with moderate sensitivity, as long as users still follow internal data policies.

“What can I actually use AI for—and how do I avoid getting myself in trouble?”


That’s one of the first (and smartest) questions people ask when trying out generative AI tools. There has been a lot of excitement around these tools, and the market is crowded with options.

At the Universities of Wisconsin, we currently do not have an enterprise or educational license for tools like ChatGPT, Claude, Gemini, or DeepSeek. (The exception is Microsoft Copilot, which has a paid license in some systems.)

That means: while these tools can be powerful for brainstorming, writing, and automating routine tasks, it’s critical to use them responsibly—especially when it comes to handling data.

Here’s what you need to know about using public AI tools safely and wisely. (This guidance does not pertain to the paid Copilot license.)

OK to Use ChatGPT (or Any Public LLM) For:

  • Drafting non-sensitive communications, like email templates or project summaries

  • Brainstorming ideas (e.g., presentation outlines, workshop formats, survey questions)

  • Creating or refining generic text (e.g., help articles, documentation)

  • Generating code snippets without user or student data (see the sketch after this list)

  • Exploring concepts or summarizing public information

  • Clarifying technical terms or academic topics

  • Getting grammar and tone feedback on professional writing

  • Support for creative ideation and storytelling in student engagement materials
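
For instance, a data-free snippet like the sketch below is safe to share or to ask an LLM to write, because it operates only on placeholder values. This is a generic illustration, not UW code:

  from collections import Counter

  # Generic, data-free example of a snippet that is safe to share with a
  # public LLM: no real records, only invented placeholder values.
  def most_common_status(statuses: list[str]) -> str:
      """Return the most frequent status label in a list."""
      return Counter(statuses).most_common(1)[0][0]

  print(most_common_status(["open", "closed", "open"]))  # prints: open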

NOT OK to Use ChatGPT (or Any Public LLM) For:

  • Sharing personally identifiable information (PII), such as names, ID numbers, or student records

  • Discussing confidential or restricted information, including contracts, budget documents, HR cases, or sensitive governance topics

  • Uploading or pasting internal documents marked confidential or not meant for public disclosure

  • Using it to make final decisions about student admission, financial aid, or disciplinary action

  • Treating ChatGPT outputs as authoritative without verification—it’s a tool, not a source of record

This includes technical communication as well. It is OK to use ChatGPT for general-purpose coding and debugging. It is NOT acceptable to use ChatGPT if the code contains student data, internal APIs, secure tokens, or authentication keys; that means anything tied to PeopleSoft, SIS, Workday, etc. Be especially cautious about using the model to generate decisions that affect users without human oversight (e.g., scripts that automatically move money, update grades, or modify access rights).

If you wouldn’t email the code to a stranger without redacting it, don’t paste it into an LLM.
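
As an illustration, here is a minimal, hypothetical Python sketch of the kind of scrubbing worth doing before sharing a snippet. The patterns, names, and sample values below are invented for this example (it is not an approved UW tool), and a manual review should still follow any automated pass:

  import re

  # Hypothetical redaction pass: mask obvious secrets and identifiers in a
  # code snippet before pasting it into a public LLM. These patterns are
  # illustrative only; adapt them to your own data and review the output.
  REDACTIONS = [
      # hardcoded credentials such as api_key = "..." or token = '...'
      (re.compile(r"(api[_-]?key|token|secret)\s*=\s*['\"][^'\"]+['\"]", re.I),
       r"\1 = '<REDACTED>'"),
      # nine-digit numbers that could be campus or student ID numbers
      (re.compile(r"\b\d{9}\b"), "<ID_NUMBER>"),
      # email addresses
      (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "<EMAIL>"),
  ]

  def scrub(snippet: str) -> str:
      """Return a copy of the snippet with known sensitive patterns masked."""
      for pattern, replacement in REDACTIONS:
          snippet = pattern.sub(replacement, snippet)
      return snippet

  before = 'api_key = "sk-live-1234"  # notify jdoe@wisc.edu about 123456789'
  print(scrub(before))
  # prints: api_key = '<REDACTED>'  # notify <EMAIL> about <ID_NUMBER>

Even after a pass like this, treat the scrubbed snippet as something you are publishing: if any identifying detail survives, remove it by hand.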

Tips for Responsible Use

  • Treat AI as a thought partner, not a decision-maker.

  • If in doubt, strip out sensitive details and ask generalized questions (e.g., ask for a generic email template for a student appeal rather than pasting the actual case).

  • Assume anything you input is visible externally, unless you’re using a licensed instance with data protection agreements.

  • Attribute any final content or policies to approved institutional sources, not ChatGPT.