The ABCs of Agentic AI Trust – What Banking Leaders Need to Know Now

Article - Banking
By Jennifer C. | 4th September 2025

In financial services, trust is everything. As AI matures from passive tools into autonomous agents – able to act, decide, and adapt – that trust is being tested like never before. 

For banks investing in AI at scale, the next challenge isn’t just about accuracy or speed – it’s about credibility. Agentic AI is redefining how work gets done, how customers engage, and how decisions are made. But in a high-stakes, highly regulated environment like banking, trust must be built into every layer. 

In a standout keynote at a recent GDS Banking Insight Summit, James Massa, Senior Executive Director of Software Engineering & Architecture at JPMorgan Chase, offered a candid and forward-thinking take on what this means in practice. His framework – The Agent, The Bank, and The Customer – lays out the ABCs of building trust in agentic AI. 

Why Trust Is the New Infrastructure 

“At JPMorgan Chase, we’ve got 60,000 technologists and an AI budget in the billions – with a B,” Massa stated. “And yet, even with all that firepower, trust remains the single most important factor.”

AI isn’t just a technical shift – it’s a cultural one. And as Massa reminded us, the banking industry must approach agentic AI not as a tool to control, but as a colleague to trust. That trust, however, doesn’t come automatically. 

Agents Are the New Workforce – But Who Manages Them? 

Agentic AI doesn’t just answer prompts – it makes decisions. It reads customer emails, infers intent, references internal documents, and executes transactions. In other words, it behaves like a member of staff. 

According to Massa, an AI agent can be “as powerful as a team of employees.” That brings new expectations: performance reviews, bias checks, and even the digital equivalent of HR processes. 

Banking leaders must ask: 

  • What data was the agent trained on? 
  • How do we evaluate its performance over time? 
  • Can we “offboard” an underperforming agent safely? 

It’s a shift from managing software to managing digital talent – and the implications span governance, compliance, and ethics. 

Agentic AI in Banking – The Role of RAG in Building Trust 

Retrieval-Augmented Generation (RAG) is emerging as a core component of trusted AI systems in banking. Rather than generating responses from general internet training data, RAG enables agents to base answers on verified, internal content – policy documents, regulatory frameworks, and recent communications. 

As Massa put it: “RAG is my favourite way – the most trusted way – because now you’re working with approved material.”

In banking use cases, this might look like: 

  • Reading a customer inquiry 
  • Retrieving guidance from internal onboarding protocols 
  • Executing the appropriate next step 
  • Citing the source of the decision 

It’s a complete loop – transparent, auditable, and aligned with compliance requirements. 
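The loop above can be sketched in a few lines of Python. This is a minimal illustration, not JPMorgan Chase's implementation: the document store, the keyword-overlap retriever, and the policy text are all invented stand-ins for a real vector store and approved internal content.

```python
# Toy RAG loop: retrieve guidance from internal documents, act on it,
# and cite the source. Document names and contents are illustrative.

INTERNAL_DOCS = {
    "onboarding-protocol-v3": "New clients must complete identity verification before account activation.",
    "complaints-policy-2025": "Complaints must be acknowledged within two business days.",
}

def retrieve(query: str) -> tuple[str, str]:
    """Return (doc_id, text) of the best-matching internal document,
    scored by crude keyword overlap (a real system would use embeddings)."""
    def score(text: str) -> int:
        return sum(1 for word in query.lower().split() if word in text.lower())
    doc_id = max(INTERNAL_DOCS, key=lambda d: score(INTERNAL_DOCS[d]))
    return doc_id, INTERNAL_DOCS[doc_id]

def handle_inquiry(inquiry: str) -> dict:
    doc_id, guidance = retrieve(inquiry)            # ground the answer in approved material
    next_step = f"Apply guidance: {guidance}"       # placeholder for the agent's action
    return {"answer": next_step, "source": doc_id}  # cite the source for the audit trail

result = handle_inquiry("How soon must we acknowledge a customer complaint?")
```

The key design point is the last line of `handle_inquiry`: every response carries the identifier of the document it was grounded in, which is what makes the loop auditable.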

Hallucination Risk – And the Myth of Perfect Accuracy 

AI hallucination remains one of the biggest risks to trust. Massa identified two categories: 

  • Factual hallucinations – incorrect data or references 
  • Faithfulness hallucinations – failures to follow the intended task or logic 

“There is no 100% test coverage. No perfect model. They will hallucinate,” he warned. 

Key mitigation strategies include: 

  • RAG grounding – anchoring responses in retrieved, trusted content rather than relying on general training data alone 
  • LLM-as-judge systems – where one AI evaluates another’s output 
  • Human-in-the-loop vs. Human-as-QC – distinguishing between active involvement and oversight 

For banks, this is about more than accuracy – it’s about governance. AI systems must not only work well – they must fail safely, explain decisions, and adapt without compromising integrity. 
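The LLM-as-judge pattern can be sketched as follows. Here the "judge" is a toy word-overlap faithfulness check standing in for a second model, and the 0.8 threshold is an arbitrary assumption; the point is the routing, where low-scoring drafts go to a human rather than out the door.

```python
# LLM-as-judge sketch: a second evaluator scores the first agent's draft
# against its source material; low-confidence outputs are escalated to a
# human reviewer. The judge logic and threshold are illustrative only.

def judge(draft: str, source_text: str) -> float:
    """Toy faithfulness score: fraction of draft words grounded in the source."""
    draft_words = draft.lower().split()
    source_words = set(source_text.lower().split())
    if not draft_words:
        return 0.0
    return sum(w in source_words for w in draft_words) / len(draft_words)

def review(draft: str, source_text: str, threshold: float = 0.8) -> str:
    """Auto-approve well-grounded drafts; route the rest to a human."""
    return "auto-approve" if judge(draft, source_text) >= threshold else "escalate-to-human"

source = "transfers above 10000 require dual approval"
faithful = "transfers above 10000 require dual approval"
unfaithful = "all transfers are approved automatically"

print(review(faithful, source))    # grounded draft passes
print(review(unfaithful, source))  # ungrounded draft goes to a human
```

This also illustrates Massa's human-in-the-loop versus human-as-QC distinction: the human is not in every transaction, only in the ones the judge flags.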

From Chatbots to Agent Orchestration – The Future of Workflows 

The next evolution isn’t smarter chatbots – it’s orchestrated, multi-agent ecosystems. Massa outlined a model where: 

  • A user initiates a task 
  • An orchestrator determines intent 
  • Specialised agents retrieve information and perform actions 
  • Outputs are validated and logged 

This system architecture mirrors modern banking operations – distributed, compliant, and customer-centric. It also unlocks true intelligent automation for use cases like KYC, client onboarding, and real-time compliance checks. 
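The four-step flow above can be sketched as a simple dispatcher. The intent rule, the two agents, and the validation check are all invented for illustration; a production orchestrator would use an intent classifier and far richer validation and logging.

```python
# Orchestration sketch: an orchestrator infers intent, routes the task to
# a specialised agent, then validates and logs the output. Agents and
# routing rules are illustrative assumptions.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orchestrator")

def kyc_agent(task: str) -> str:
    return f"KYC check initiated for: {task}"

def onboarding_agent(task: str) -> str:
    return f"Onboarding workflow started for: {task}"

AGENTS = {"kyc": kyc_agent, "onboarding": onboarding_agent}

def infer_intent(task: str) -> str:
    # Toy keyword routing; a real orchestrator would use a classifier.
    return "kyc" if "verify" in task.lower() else "onboarding"

def orchestrate(task: str) -> str:
    intent = infer_intent(task)
    output = AGENTS[intent](task)
    if not output.startswith(("KYC", "Onboarding")):   # validate before release
        raise ValueError("agent output failed validation")
    log.info("intent=%s output=%s", intent, output)    # audit log entry
    return output

result = orchestrate("Verify identity documents for new client")
```

The validation-and-log step is what keeps the "agentic telephone" problem in check: each hand-off is inspected and recorded rather than passed blindly to the next agent.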

But orchestration adds complexity. “What happens when one agent hallucinates and talks to another agent who also hallucinates?” Massa asked. “It’s a game of agentic telephone.”

What Banking Leaders Should Do Next 

As competition shifts from best-in-class models to best-in-class systems, financial institutions must act now to establish trust frameworks for AI. 

Four strategic priorities: 

  1. Treat agents like people – vet their data, track their decisions, and monitor their “performance.” 
  2. Build RAG into your stack – ground AI in trusted internal documents to reduce hallucination risk. 
  3. Design for orchestration – ensure your agents work together under strong governance and control. 
  4. Invest in explainability – create systems where AI can justify decisions and pass audit trails. 

Final Thought – Trust, By Design 

Agentic AI won’t replace your people. But it will change the shape of your organisation. Trust must be designed into the core – from the datasets you use to the outcomes you allow. 

At GDS, we help global banking leaders build the conversations – and capabilities – that make transformation real. Because in the era of agentic AI, transformation without trust is just automation. 

And in banking, trust is the business. 
