According to Gallup, politicians are the world’s least trusted people. According to business leaders at the recent AI Innovation Summit, artificial intelligence is the world’s least trusted tech. Can anybody help politicians? Doubtful. But our gathered experts did have some practical insights to help with your AI implementation.
The author, poet, and minister George MacDonald said, “To be trusted is a greater compliment than to be loved.” Trust is a journey. We all love a puppy, but we wouldn’t leave one in charge of the country. Children can be lovable, but they don’t pay bills on time.
It’s tempting to say generative AI is on the same journey: a distracting kitten that will grow into a trusted cat. But GenAI was born in the 1960s. It’s a boomer. When will it start acting like one? Because it certainly needs to.
According to a recent poll of senior executives from the retail industry, the world’s largest private-sector employer, 75% view generative AI as “instrumental to their business’s revenue growth”. This is a big ask of something that still hallucinates regularly.
Never. Or Now?
In his opening keynote at our AI Innovation Summit, Lambert Hougenhout, Chief AI Officer at the United Nations, said, “The biggest thing holding back generative AI in companies is the trust that employees have in that system.”
A recent Deloitte report agrees, finding that this lack of trust stems from concerns about the system’s quality, accuracy, and reliability, and employees’ relationship to AI, including the fear of being replaced.
Not When, But How
Hougenhout believes that organizations often struggle by “not understanding how to manage and facilitate the transition required by technological shifts,” and that “if not handled correctly, the people who can least afford it will be most negatively impacted.”
This source of internal mistrust, the fear that organizational leadership or processes will not adequately support employees through changes driven by AI, can be addressed. And quickly.
Reverse Brainstorming and Timeouts
In a recent roundtable on dealing with resistance to change, Bentley Motors’ CIO, Kirsty Bennett, described how she uses techniques like reverse brainstorming (“how could we make this the worst?”) as a way to surface these concerns.
Bennett also created a “timeout” mechanism, where any team member could stop a program if they identified a problem. She found this strategy helped build trust and gave the team a sense of control.
Heads-Up
Whilst mistrust is a solvable problem today, our summit working groups also identified an issue for tomorrow.
Michael Riley, head of services at Pentaho, led a fascinating discussion about how to avoid data-related challenges. He highlighted inaccuracies, silos, and poor quality as primarily responsible for the high failure rate (70%) of AI projects.
The coming storm? Challenges with younger generations: in particular, a lack of deep understanding of how technology works, and concerns about their attitude toward data quality.
What do we do when the problem sits between the chair and the computer?
To continue exploring how your peers navigate these challenges and what’s next in artificial intelligence, join us at one of our upcoming AI Innovation Summits.
To see all our upcoming summits, visit our events page.