Enterprises Contain AI Agents to Balance Risk, Reward

Cybersecurity · 05 May 2026, 22:52 · 4 min read

As AI agents mature, enterprises are experimenting carefully to capture their benefits while limiting operational and data risks. Organizations are emphasizing governance, containment and targeted deployment to avoid costly mistakes.

NEW YORK — As enterprises accelerate adoption of AI agents, many are discovering that speed must be balanced with control. Early experimentation has shown both the promise of agentic AI and the operational and governance risks that accompany it.

Kevin Hearn, senior vice president and head of consumer bank development at Axos Bank, learned this firsthand. As an early adopter of AI, he initially gave 300 employees access to an AI agent without a specific goal. The result was 300 different outcomes.

Some employees used the agent to write code, others used it to fix code, and some struggled to prompt it effectively, leading to inconsistent code quality. Hearn ultimately cut the group of AI testers from 300 to a dedicated team of five to seven people focused on experimenting with, testing and refining the agent.

“As people come to me with ideas, I may give them the autonomy to go chase it, or I’ll have that team specifically focus on it,” Hearn said. “The power of that team is that once they’ve solidified an agent in a particular area, meaning they’ve worked with all the consumers of that agent to put a corporate effect on it, we’re now able to perpetuate that consistently.”

This more centralized approach reflects how enterprises are trying to capture the power of AI agents while containing their risks. Leaders face pressure to innovate quickly, but they are also wary of unintended consequences.

“Agents aren’t traditional software,” said Matt DeBergalis, CEO and co-founder of Apollo GraphQL. “On the one hand, everybody is banging on the table saying, ‘Go fast, go far, act like a startup.’ But on the other hand, this is the biggest data exfiltration threat to every enterprise.”

According to DeBergalis, enterprises need strong foundations that allow them to experiment in a measured way rather than deploying agents broadly without safeguards.

Internal Use Cases First

For Axos, the opportunity AI presented was too significant to ignore. The bank chose to mitigate risk by focusing first on internal use cases. It uses OutSystems Agent Workbench to create, deploy and manage AI agents, including internal business analyst agents, Scrum Master agents and engineering agents.

Hearn emphasized that a small, centralized team helps ensure governance and proper oversight.

“It’s all coming through that kind of centralized team that ensures the governance is there,” he said. “Governance being that we are using it appropriately. We are not feeding information we should not be. It does not have access to the outside world.”

Fintech company Netevia has taken a similar approach. It uses AI, including agentic AI, for internal processes such as customer service but avoids integrating it into forward-facing applications.

“Part of the journey is to be able to understand how you thread slowly,” said Vlad Sadovskiy, CEO of Netevia. “You cannot [mess] with people's money even though the technology is already available to others doing agentic payments, AI-to-AI payments. We are still about a year away from the actual people thinking of adoption.”

Balancing External Innovation With Controls

Other enterprises are deploying AI agents in customer-facing applications while maintaining strict oversight.

At T-Mobile, AI supports customer service through its AI-powered app, T-Life. Julianne Roberson, director of AI engineering at T-Mobile, said risk management is central to its strategy.

“We have observability on everything, so if something goes wrong, we see it,” Roberson said. “We try not to put things out if we don’t know if they’re going to work.”

Upwork has also invested heavily in containment. The company runs custom-built language models internally and routes each one through a trust system designed to prevent hallucinations and keep outputs on track.

“We built a lot of internal tech that provides the safety harness for all of this,” said Andrew Rabinovich, CTO and head of AI at Upwork. “Every language model that’s run internally — and they’re all custom-built — they’re all passed through this trust system to avoid hallucination and prevent getting off the rails.”

Upwork also focused on educating employees about how AI agents work, enabling teams to better understand appropriate use cases and limitations.

This broader containment strategy — putting governance frameworks, observability and technical guardrails in place before wider deployment — is emerging as a common theme across enterprises.

“People see performance, mistake it for confidence, then they get FOMO and it is a mess. As soon as you get into FOMO mode, it is a big mess,” said Robert Blumofe, executive vice president and chief technology officer at Akamai. “Use AI for what AI is awesome at and not try to force it into everything.”

As AI agents continue to mature, enterprises are learning that success depends not just on adoption, but on disciplined experimentation, governance and knowing where the technology delivers the most value.