Firms must deal with the challenge of Responsible AI as they scale their activities with AI in general and generative AI in particular. Biases and hallucinations, security and privacy risks, and alignment with legislation, both current and forthcoming, must all be ironed out before enterprises can scale POCs and pilots. Yet only one in 10 enterprises has implemented a comprehensive policy for Responsible AI.
We spoke with four enterprise leaders to understand how they are building Responsible AI into their governance structures—just as Accenture appointed its first-ever Chief Responsible AI Officer, Arnab Chakraborty. He says enterprises must bake Responsible AI into GenAI projects from the start if they are to stand any chance of scaling.
Watch our interview with Arnab here.
However, when we asked 260 enterprise leaders how they are addressing aspects of Responsible AI such as copyright, data protection, bias mitigation, and ethical considerations, only one in 10 could confirm they had a comprehensive policy in place (see Exhibit 1). If most don't have a policy, even fewer can be ready to scale GenAI by baking Responsible AI into their GenAI programs.
Exhibit 1. Sample: 260 enterprise leaders with experience in deploying GenAI. Source: HFS Research, 2024
Legal & General’s Group Technology and Data Risk Director Stefana Brown points out that GenAI differs from previous technologies in that it creates new data, and that data can contain inaccuracies and bias that enterprises have not had to confront before.
Of course, humans have always been perfectly capable of adding their spin and making their own mistakes—but firms have learned to expect that, and our governance processes have been managing it for centuries. Now, we have to learn to manage errors and transgressions by machines, and we have next-to-zero experience of that.
Stefana believes her firm is not starting from zero because they have had experience with machine learning (ML) for many years. They have ‘model-risk’ policies to ensure their models comply with regulatory expectations.
“Our frameworks are there. We need to slightly augment them for that generative element, especially when it comes to validation, ethical implications, etc. We need to look at certain elements in more detail, but we are starting from a good place,” she said.
Watch our interview with Stefana here.
Jim Edwards, Kimberly-Clark’s Global Innovation Capabilities Leader, says it is mission-critical to have legal and security teams embedded in your processes from the outset to protect against data leakage, ethical lapses, and other risks.
Jim says GenAI opens up a new wave of possibilities that business leaders can apply across the enterprise.
The prospect of innovation breaking out throughout the organization underscores the need for embedded, repeatable innovation processes that everyone adheres to, preventing independent business units from deploying AI without considering the ‘responsibility’ issues.
“Every day, we all get messages from new providers saying they have solutions that could be game-changers in our businesses. Those messages are being received by people all over the business, creating huge excitement,” said Jim.
That means having processes in place to manage that influx of demand for innovation while safeguarding the organization.
Lawrence Ampofo, Strategy & Transformation Lead at _VOIS (Vodafone Intelligent Solutions), describes a shifting landscape regarding the governance required to deliver AI responsibly.
He raises particular concerns about the need for governance when considering the rise of ‘agentic workflows’ in which bots can make their own decisions. He says we must plan to not only have governance in place but also operationalize it to meet the speed of change in the market.
“We are working with our partners and academic institutions to work out how to develop a crawl-walk-run approach, learning from other use cases,” he said.
Watch our interview with Lawrence here.
Dr. Christina Yan Zhang is CEO of The Metaverse Institute; the metaverse is another frontier technology with its own governance challenges. She works with the UN and governments worldwide.
She, too, believes enterprises’ experience to date with AI will stand them in good stead for tackling the governance of GenAI. She says AI has been deployed across industries and regions for many decades.
But she does call out some ‘major issues’, citing IP protection in content production, legal liabilities, and environmental impact.
And, she says, with so many enterprises committed to reducing carbon emissions in their ESG mandates, they must find a way to measure and control the impact of the increasing use of AI—which is driving surges in energy use at data centers.
Watch our interview with Christina here.
Only one in 10 enterprises has established a comprehensive policy for Responsible AI, and failing to resolve the new governance challenges GenAI creates will limit the enterprise’s ability to scale with GenAI.
However, our interviews with enterprise leaders reveal a deeper understanding of what is required than the raw survey data suggests. Many enterprise leaders recognize the steps they need to take, offering a guide for those still early in their maturity.