Point of View

Enterprise voices: Make Responsible AI your foundation for GenAI growth

Firms must deal with the challenge of Responsible AI as they scale their activities with AI in general and generative AI in particular. Biases and hallucinations, security and privacy risks, and alignment with legislation, both current and forthcoming, must all be ironed out before enterprises can scale POCs and pilots. Yet only one in 10 enterprises has implemented a comprehensive policy for Responsible AI.

We spoke with four enterprise leaders to understand how they are adding Responsible AI into their governance structures—just as Accenture appointed its first-ever Chief Responsible AI Officer, Arnab Chakraborty. He says enterprises must bake Responsible AI into GenAI projects from the start if they are to stand any chance of scaling.

Watch our interview with Arnab here.

Only one in 10 big businesses has a comprehensive Responsible AI policy in place

When we asked 260 enterprise leaders how they are addressing aspects of Responsible AI such as copyright, data protection, bias mitigation, and ethical considerations, only one in 10 could confirm they had a comprehensive policy in place (see Exhibit 1). If most don't have a policy, even fewer can be ready to scale GenAI by baking Responsible AI into their programs from the start.

Exhibit 1: Responsible AI is being dealt with on an ad-hoc basis in the majority of enterprises

Sample: 260 enterprise leaders with experience in deploying GenAI
Source: HFS Research, 2024

We’ve had hundreds of years to learn how to manage human mistakes and transgressions—machines present a new challenge

Legal & General’s Group Technology and Data Risk Director Stefana Brown points out that GenAI differs from previous technologies in that it creates new data, which can include inaccuracies and biases that enterprises have not had to confront before.

Of course, humans have always been perfectly capable of adding their spin and making their own mistakes—but firms have learned to expect that, and our governance processes have been managing it for centuries. Now, we have to learn to manage errors and transgressions by machines, and we have next-to-zero experience of that.

Stefana believes her firm is not starting from zero because it has many years of experience with machine learning (ML). It already has ‘model-risk’ policies to ensure its models comply with regulatory expectations.

“Our frameworks are there. We need to slightly augment them for that generative element, especially when it comes to validation, ethical implications, etc. We need to look at certain elements in more detail, but we are starting from a good place,” she said.

Watch our interview with Stefana here.

GenAI has triggered an innovation deluge—we need processes in place to protect the enterprise

Jim Edwards, Kimberly-Clark’s Global Innovation Capabilities Leader, says it is mission-critical to have legal and security teams embedded in your processes right from the outset to protect against data leakage, ethical lapses, and other risks.

Jim says GenAI opens up a new wave of possibilities that business leaders can apply across the enterprise.

The prospect of innovation breaking out throughout the organization emphasizes the need for embedded repeatable innovation processes that everyone adheres to, preventing independent business units from deploying AI without considering the ‘responsibility’ issues.

“Every day, we all get messages from new providers saying they have solutions that could be game-changers in our businesses. Those messages are being received by people all over the business, creating huge excitement,” said Jim.

That means you must have processes in place to manage that influx of demand for innovation while safeguarding the organization.

And now the bots are making their own decisions—we must ensure they do so responsibly

Lawrence Ampofo, Strategy & Transformation Lead at _VOIS (Vodafone Intelligent Solutions), describes a shifting landscape regarding the governance required to deliver AI responsibly.

He raises particular concerns about the need for governance when considering the rise of ‘agentic workflows’ in which bots can make their own decisions. He says we must plan to not only have governance in place but also operationalize it to meet the speed of change in the market.

“We are working with our partners and academic institutions to work out how to develop a crawl-walk-run approach, learning from other use cases,” he said.

Watch our interview with Lawrence here.

GenAI brings an environmental threat we must also counter if we are to call our AI ‘responsible’

Dr. Christina Yan Zhang is CEO of The Metaverse Institute; the metaverse is another frontier technology with its own governance challenges. She works with the UN and governments worldwide.

She, too, believes enterprises’ experience to date with AI will stand them in good stead for tackling the governance of GenAI. She says AI has been deployed across industries and regions for many decades.

But she does call out some ‘major issues’, including IP protection in content production, legal liabilities, and environmental impact.

And, she says, with so many enterprises committed to reducing carbon emissions in their ESG mandates, they must find a way to measure and control the impact of the increasing use of AI—which is driving surges in energy use at data centers.

Watch our interview with Christina here.

The Bottom Line: Learn from peers who already have a plan for Responsible AI

Only one in 10 enterprises has established a comprehensive policy for Responsible AI, yet failing to resolve the new governance challenges GenAI creates will limit the enterprise’s ability to scale with it.

However, our interviews with enterprise leaders reveal a deeper understanding of what is required than the raw survey data suggests. Many enterprise leaders recognize the steps they need to take, offering a guide for those still early in their maturity.

As our enterprise experts suggest:

  • Build on foundations derived from experience with machine learning.
  • Embed repeatable innovation processes that everyone adheres to.
  • Prepare for the rise of bots making their own decisions.
  • Include ESG commitments when building your approach to Responsible AI.
