Highlight Report

Infosys’ Responsible AI cuts the fears holding the C-suite back


Infosys is convinced that enterprise artificial intelligence (AI) initiatives are about to be rolled out at production scale and that during 2024, AI initiatives will grow from handfuls of experiments into strategic, AI-led business transformations with a bounty of new value on offer. But if you want a piece of the action, you must first get over the fears paralyzing C-suite decision-making.

Many enterprise leaders are dazzled by twin headlights representing the risks of rushing to scale AI too early. The first is the fear of moving too soon and missing the benefits of the unifying, inter-business standards that emerge a little later down the line. The second is the fear of a legislative rug being pulled from under the enterprise, wiping out big bets made too early because new rules force you to rip out your tech and start again. As HFS pointed out in our blog post, Only humans can make AI “ethical.” Machines make it transparent and accurate, ethics should not be used as an excuse not to invest in AI.

Firms must stop playing the waiting game when there is so much value to unlock

Firms can’t keep playing a waiting game when there is so much value to unlock, as we outline in our research report, GenAI will reshape business economics.

Infosys’ response is a newly created Responsible AI Office, tasked with monitoring the rapidly evolving regulatory landscape: the EU AI Act, AI Liability Directive, and IP Directive; the UK’s upcoming IP Bill and its AI whitepaper; and the US’ AI Bill of Rights, Copyright Act, and legislation in 40 states due to be adopted in the next six months.

The firm is also tracking the threats wrapped up in a range of court cases and rulings, such as the UK Supreme Court’s December 2023 ruling that an AI cannot be named as the inventor on a patent and the New York Times’ lawsuit against OpenAI for copyright infringement.

You must go beyond legal due diligence and choose platforms ready to adapt

From an enterprise point of view, Infosys is doing the donkey work on privacy, security, and ethics legislation so your legal team doesn’t have to. In its approach to building AI solutions, it works from the standpoint that responsibility must be built in from the word go. Platforms should be built to manage the legislative risks the Responsible AI Office scans for, and they should be adaptable enough that models and other elements of a configurable architecture can be switched out, if necessary, to handle curveballs no one has yet predicted.

Even with platforms that build in guardrails and protections for privacy and against misuse and (for example) profanity, enterprise leaders still face a world of complexity. What level of “explainability” should you demand, for example? What is the right standard of security to build in? How can you avoid infringing copyright? Infosys cuts through that potentially lengthy round of deliberation by offering its own built-in standards and measures.

Should a third party define what your enterprise regards as “fair”?

The idea that a third party (in this case, Infosys) is offering to define “fairness” and “bias” for an enterprise may raise a few eyebrows. However, trusting such decisions to a third party is a pragmatic step you can take to accelerate progress toward unlocking the value of AI. Infosys would claim it earns that trust from customers through a strong track record and proof points such as last year being named one of the world’s most ethical companies (by Ethisphere.com) for the third year running. But, of course, like any good AI system, performance can be fine-tuned toward individual enterprise preferences. HFS believes enterprise leaders should continue to own how they describe and measure “fairness.”

Infosys is also deeply involved in ecosystem-wide initiatives to make AI responsible, with partners such as Microsoft, Nvidia, AWS, IBM, academia, the World Economic Forum, and customers including Shell, Airbus, GSK, and many others.

The Bottom Line: Initiatives like Infosys’ Responsible AI Office change the risk profile of AI. Are you ready to place your bet now?

Make no mistake, the challenges of Responsible AI have forced many enterprises to slow their progress toward scaled value with AI. It may seem prudent to wait on worldwide deliberations over policies and standards. But while you wait, some of your rivals have already weighed up how initiatives such as Infosys’ Responsible AI Office change the risk and reward profile and placed their bets.
