Few topics have captured executive attention like the current frenzy around enterprise AI. So how are leading firms cutting through the noise and making sense of such a fast-moving space?
Where to implement it. What it means for the workforce. How to harness it for innovation whilst remaining compliant. The questions around AI and its impact on business are seemingly endless.
Which makes for some pretty interesting conversations – and some valuable shared insights.
In a recent closed-door roundtable hosted by Meet the Boss, a group of senior executives from some of Europe’s leading firms gathered to discuss this very topic: how to harness the power of cloud, data and AI to drive innovation – without compromising on compliance, security or resilience.
The discussion revealed a common reality: while AI offers transformative potential, its adoption is far from straightforward. From financial services to manufacturing to transportation to high-tech, participants highlighted the delicate balancing act required to innovate responsibly.
Innovation vs. Compliance: A Tightrope Walk
One of the most resonant themes was the tension between innovation and compliance. One attendee, VP of Global AI Transformation at a major European bank, captured this dilemma perfectly: “Sometimes our data is so secure that we cannot always use it for innovation.”
In highly regulated sectors such as finance, the need to protect customer data and meet stringent reporting obligations can often stifle experimentation. Yet innovation is essential to remaining competitive. The solution? Creating secure sandbox environments and embedding large language models within internal infrastructure, allowing safe, controlled testing.
As the Global CISO for a multinational mobility firm put it: “You cannot set your risk tolerance at zero.” Business value must be prioritized in concert with risk management – which is why her team is developing multiple secure environments tailored to different levels of experimentation, enabling innovation without compromising on compliance.
Cultural Transformation: Upskilling for the AI Era
But AI adoption isn’t just a technical challenge – it’s also a cultural one. Several leaders stressed the importance of upskilling not just their technical teams, but also wider stakeholders across the business – including compliance officers, legal departments and cybersecurity professionals.
For instance, one executive noted a “knowledge gap” amongst governance stakeholders, which often leads to a risk-averse mindset. Bridging this gap requires targeted training and cross-functional collaboration. Another described her transformation-focused team as the “glue” between innovation and compliance, translating needs and risks across departments.
Meanwhile, the Director for Data Governance at one of Europe’s leading insurers recommended a hybrid governance approach he called “water-gile” – a blend of structured oversight and agile flexibility. This model allows for early-stage controls while preserving the autonomy needed for innovation, and provides employees with the psychological safety needed to experiment. “It’s about creating the right organizational mindset,” he explained.
Shadow AI: The Unseen Risk
However, one consequence of the rise in citizen developers and easy-to-use AI tools has been a surge in shadow AI – unapproved or insecure use of public platforms such as ChatGPT. This was a major concern across the board.
One CTO for a global technology firm warned that tools such as Copilot don’t necessarily create new risks, but rather “shine a spotlight on already bad security practices.” Another noted that “common sense is often the least common sense” amongst users, and stressed the importance of educating users on secure AI usage.
Meanwhile, the Chief Digital Officer for a leading rail firm felt that employees will always default to public LLMs unless secure internal alternatives are provided. “You need to make it easy for users to make the right decision,” he said. In response, his team is actively promoting vetted tools that mimic public platforms while maintaining control around data privacy.
Governance: Federated Models for Scalable Innovation
The concerns around data privacy and leakage mean governance is fast emerging as a differentiator between AI success and failure. But finding the right balance between control and creative freedom remains a tricky proposition. Decentralized models run the risk of inconsistency and non-compliance; too much centralized control, however, can stifle innovation.
One representative from the banking sector advocated for a bottom-up innovation model with top-down governance. Her team is building a framework that allows local experimentation while scaling successful use cases globally. “We need to prioritize strategic bets that need time versus tactical quick wins,” she explained.
The insurance industry favours a federated governance model, where domain teams own their data products but operate within a shared architectural and compliance framework. Meanwhile, another attendee from a digital services firm emphasized the need for data stewardship and audit trails, especially in hybrid environments with disparate practices.
Resilience: The New Strategic Lens
Clearly, security remains a concern, but as the group were keen to point out, striking the right balance between risk and innovation remains fundamental. As such, the concept of resilience is emerging as a means of encompassing ideas around security, sustainability, risk, innovation and operational continuity.
One participant, Chief Security Advisor at a big five tech firm, believes that reframing both security and innovation as part of a broader resilience conversation makes it easier to engage boards and business leaders. For her, resilience includes safety, sustainability and information security, and her organization is embedding resilience into its AI strategy as a core pillar.
However, another sounded a cautionary note: “By automating more and building in more machine intelligence, we’re potentially making ourselves less resilient,” he warned. His team is exploring the concept of “minimum viable companies” to assess how much automation is too much.
Conclusion: A Journey, Not a Destination
The roundtable underscored that AI transformation is a journey requiring cultural, technical and strategic alignment. And while no organization has fully cracked the code, the group did come up with a number of shared recommendations for AI success:
- Innovation versus compliance: Build secure experimentation environments in order to foster bottom-up creativity
- Cultural transformation: Upskill all stakeholders, not just the technical team, and foster collaboration across the business
- Shadow AI and security: Monitor usage, enforce guardrails, and offer secure tools as alternatives to publicly available models
- Governance: Adopt federated models, track metadata, centralize inventories and consider AI centres of excellence to share best practice
- Resilience: Integrate resilience into your AI strategy and board conversations to encourage longer-term thinking and elevate the importance of a value-based approach