What If Someone Uploads Payroll to ChatGPT? Navigating AI Risk in Real Time

“What if someone on my team uploads payroll data to ChatGPT?”

That question keeps coming up—and with good reason.

If you’re worried someone in your organization might expose sensitive information, you’re not alone. I’ve heard this concern echoed again and again, and other advisors I speak with are hearing the same thing across industries.

We’re in a moment where the mandate is clear: employees need to be using GenAI tools to stay competitive and drive efficiency. But at the same time, the risks are murky, the standards are shifting, and policies are a moving target.

It’s a classic “upgrade the plane while it’s in flight” situation, and it’s not going to stabilize anytime soon.

So, how are business leaders balancing enablement and risk management?

One strategy that’s becoming non-negotiable is ongoing employee training.

Just like annual harassment prevention or cybersecurity modules, AI literacy and usage training must become part of the mix. It’s not enough to roll out a policy and hope it sticks. You need a repeatable rhythm to help employees understand:

  • What’s okay to upload

  • What’s strictly off-limits

  • How internal GPT stacks are configured

  • And how those tools evolve over time

Because those stacks are evolving, and they won’t stand still for the foreseeable future.
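
To make the “what’s okay to upload” point concrete, some teams pair the training with a lightweight screening step in front of any external GenAI tool. The sketch below is purely illustrative and not a reference to any particular vendor’s DLP or gateway product; the patterns, names, and blocking logic are my own assumptions about what a minimal pre-upload check might look like.

```python
import re

# Illustrative patterns only (an assumption about what a policy might flag).
# A real deployment would rely on a DLP or gateway layer with a much richer
# classification policy, not an ad-hoc script.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payroll_keyword": re.compile(
        r"\b(payroll|salary|compensation|direct deposit)\b", re.IGNORECASE
    ),
}

def screen_before_upload(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    prompt = "Summarize this payroll report: Jane Doe, SSN 123-45-6789, salary $95,000"
    findings = screen_before_upload(prompt)
    if findings:
        print("Blocked before upload:", ", ".join(findings))
    else:
        print("No sensitive patterns detected; prompt may proceed")
```

The point isn’t the regexes. It’s that “what’s okay to upload” becomes something you can test, log, and update as the internal stack changes, rather than a line in a policy document employees read once.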

GenAI isn’t a “set it and forget it” rollout. It’s an ongoing operational capability, and smart orgs are treating it that way.

If your team is working through this, I’d love to hear how you’re approaching it, and I’m happy to share what’s working at the companies I’ve advised.