How Plan Sponsors Can Manage AI Opportunities, Risks
Experts recommend not prohibiting the use of artificial intelligence—but controlling its dangers.
As the range of threats employers face from artificial intelligence continues to evolve—eliciting responses from public officials—plan sponsors have options to tame the beast.
Experts recommended not prohibiting the use of AI within an organization, but instead controlling the risks it poses.
“As organizations adopt AI, an emerging concern is how those tools are accessed and governed internally,” wrote David Ogg, a business information security officer at Principal Financial Group, in a response to emailed questions. “Protecting participants requires a layered approach that combines responsible AI adoption, strong identity protections and continuous education.”
Governing AI
Scott Miller, a senior consultant in Segal’s administration and technology consulting practice, wrote in an article for the National Conference on Public Employee Retirement Systems in March last year that “merely deciding not to allow AI use in the workplace is not sufficient.”
Rather than imposing blanket prohibitions on AI tools, sponsors can embrace the technology with controls, Miller wrote. For example, sponsors can limit or block access to public AI tools and instead deploy secure, enterprise-approved AI platforms with guardrails, monitoring and data protections.
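In practice, that kind of control often lives in a web proxy or firewall rule set. As a minimal sketch, using entirely hypothetical domain names rather than any vendor's actual endpoints, the logic below shows the shape of such a policy check: the enterprise-approved platform is allowed, known public tools are blocked, and anything else falls through to normal monitoring.

```python
# Illustrative sketch only: a simplified egress decision a sponsor's IT team
# might embed in a web proxy. All domain names are hypothetical placeholders.

# Hypothetical public generative AI endpoints to block at the proxy
BLOCKED_AI_DOMAINS = {"chat.public-ai.example", "api.public-ai.example"}

# Hypothetical enterprise-approved AI platform with guardrails and logging
APPROVED_AI_DOMAINS = {"ai.internal.sponsor.example"}

def ai_egress_decision(host: str) -> str:
    """Return "allow", "block" or "inspect" for an outbound request host."""
    if host in APPROVED_AI_DOMAINS:
        return "allow"    # approved platform: monitored, data-protected
    if host in BLOCKED_AI_DOMAINS:
        return "block"    # public tool: steer users to the approved platform
    return "inspect"      # unknown host: route through normal monitoring

if __name__ == "__main__":
    for host in ("chat.public-ai.example", "ai.internal.sponsor.example"):
        print(host, "->", ai_egress_decision(host))
```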
Miller recommended a four-step action plan to help plan sponsors incorporate AI into the workplace:
- Consult the legal department for guidance;
- Speak with other sponsors or consultants about their experiences implementing AI;
- Determine which AI use cases are best for a particular plan or organization; and
- Develop an AI use policy that prohibits the unauthorized use of AI tools, requires users to receive appropriate training, includes education about AI-related scams, and imposes requirements for vendors’ use of AI tools.
In an NCPERS article from April, Wayne Leipold, also a senior consultant in Segal’s administration and technology consulting practice, wrote that employers can reasonably adopt AI while “maintaining the accountability and transparency their members expect.”
When a public retirement system recently structured its AI plan, “what followed was not a single ‘AI project,’” Leipold wrote, “but a sequence of coordinated efforts—each building on the last—to ensure AI adoption was useful, well-governed and sustainable.”
Policy Development
The public retirement system began its work with the development of an organizational AI strategy, rather than focusing on specific AI technologies.
The strategy established guiding principles: AI should “augment” human judgment, it must align with organizational values, and data privacy, security and transparency remain foundational.
“A key outcome of this phase was the development of a proposed ‘Responsible Use of AI’ policy,” Leipold’s post stated. “The policy translated broad concepts into practical expectations around acceptable use, human oversight, training requirements and accountability.”
From a management perspective, the planning meant that staff knew what processes were allowed; leaders knew how AI would be governed; and future projects could advance without the need to revisit foundational questions.
Educating Employees
According to Ogg, sponsors should continue to prioritize AI-related education and preparedness, including ongoing employee training, plan participant awareness campaigns, regular testing of incident-response plans, and close oversight of vendor security practices.
“Strong outcomes require a workforce that understands how AI works, where its risks are and how to use it safely,” Ogg wrote.
At the public retirement system Leipold described, the AI training session addressed what generative AI “does well, where it falls short and how it can be used responsibly in day-to-day work.” As a result, staff and leaders gained confidence in using AI safely and efficiently.
In a recent survey conducted by information technology service provider Sagiss, 64% of respondents said an AI-generated message could likely impersonate someone they work with, and 57% said AI makes hackers’ phishing efforts harder to spot because breach attempts feel more professional.
However, there remains a critical gap between workers’ use of AI and knowledge of how to use it safely. A 2023 Salesforce study fielded among 14,000 workers across 14 countries found that 55% of surveyed employees had used unapproved generative AI tools at work. Of those who did so, 69% had never received training on how to use it safely and ethically.
A 2024 update to cybersecurity guidance from the Department of Labor’s Employee Benefits Security Administration stated that plan fiduciaries should conduct cybersecurity awareness training for employees “at least annually.” EBSA also listed cybersecurity as a priority in its January release of 2026 enforcement projects.
“Since identity theft is a leading cause of fraudulent distributions, it should be considered a key topic of training, which should focus on current trends to exploit unauthorized access to systems,” the update stated. “Be on the lookout for individuals falsely posing as authorized plan officials, fiduciaries, participants or beneficiaries.”
While identity fraud is not new, how it is carried out has changed rapidly.
“Bad actors are no longer relying solely on basic phishing attempts—they’re using AI-driven social engineering, impersonation and credential stuffing attacks to compromise participant accounts and third-party providers,” Principal’s Ogg wrote.
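Credential stuffing, in which attackers replay username-and-password pairs leaked from unrelated breaches, is typically blunted with layered controls such as multifactor authentication and throttling of failed sign-ins. The following is a minimal sketch, with hypothetical thresholds, of the kind of sliding-window throttle a recordkeeper's login service might apply per account; it is illustrative, not any provider's actual implementation.

```python
# Illustrative sketch only: sliding-window throttling of failed sign-ins,
# one common layer of defense against credential stuffing. The threshold
# and window below are hypothetical, not a recommended configuration.
import time
from collections import defaultdict, deque

MAX_FAILURES = 5        # hypothetical: failures tolerated per window
WINDOW_SECONDS = 300    # hypothetical: five-minute sliding window

_failed_attempts: defaultdict[str, deque] = defaultdict(deque)

def record_failed_login(account_id: str) -> bool:
    """Record a failed attempt; return True if the account should be locked
    pending step-up verification, such as multifactor re-authentication."""
    now = time.monotonic()
    attempts = _failed_attempts[account_id]
    attempts.append(now)
    # Discard attempts that have aged out of the sliding window.
    while attempts and now - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()
    return len(attempts) >= MAX_FAILURES

if __name__ == "__main__":
    # Six rapid failures on one account trip the throttle at attempt five.
    for attempt in range(1, 7):
        locked = record_failed_login("participant-123")
        print(f"attempt {attempt}: locked={locked}")
```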
Miller’s article stated that AI policy cannot be handled with a “set it and forget it” approach. Plan sponsors, whether or not they use AI, need to stay informed and keep their policies updated as laws change and AI tools evolve.
“AI will continue to evolve, and so will the threat landscape,” Ogg wrote. “Cybersecurity is no longer just an IT issue—it’s a core component of retirement readiness and long-term participant trust.”
The survey behind the “2026 Sagiss Managed Security Report” was conducted February 23 among 500 desk-based workers who use email or chat as part of their jobs.
