Point of View

To win trust, business leaders must take control of AI


Business leaders must get ahead of artificial intelligence (AI) and understand how and why their machines make decisions. By doing so, they can both maximize their data strategies’ success and give customers and employees the confidence and evidence they need to trust the technology.

To make progress, you must understand an emerging field of AI: explainable AI (XAI), in which the machine’s inner workings are transparent and its judgments are always understandable.

The pandemic has accelerated the need to advance digital, and AI is a core component

Embracing AI has new urgency in the emerging post-pandemic economy, as the majority of C-suite leaders view digital initiatives as, by far, the most critical platform for change. A major study of 900 senior executives across the Global 2000, conducted in conjunction with KPMG, shows how AI has become essential for future survival as a result of the pandemic (see Exhibit 1):

Exhibit 1: COVID-19 has elevated AI as “essential for future survival”

The top-ranked objective of investments in emerging technologies

Sample: 900 executives across Global 2000 enterprises

Source: HFS Research in conjunction with KPMG, 2020

Our great commercial challenge is to keep pace with the twists and turns of an increasing range of unknown-unknowns on both the demand and supply sides. The best way to handle such unpredictability is through experiments.

Black-box AI limits your organization’s ability to experiment toward success

Without the transparency XAI promises, we lose sight of what we can test and how we can test it, and we cannot direct the rapid, iterative test-learn-respond cycles toward our strategies’ purpose.

Such rapid test-learn-respond cycles are embedded in end-to-end OneOffice business processes and enabled by the emerging technologies that support our ability to predict and respond to needs (see Exhibit 2).

Exhibit 2: The OneOffice Emerging Tech Platform is intended to benefit from the best of human and machine intelligence

 

When machine learning algorithms are created directly from data, models emerge that are “black-box” in nature—models in which humans, even those who design the algorithm, are unable to explain the decisions the machine makes. To trust a black-box model is to trust the model’s equations and the entire database upon which it was built.
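To make the black-box problem concrete, consider a minimal sketch of our own (using scikit-learn; the dataset and model choice are illustrative assumptions, not a reference to any system discussed in this paper). The model below is learned directly from data and predicts well, yet offers no human-readable reason for any single decision it makes.

```python
# Minimal sketch of a "black-box" model learned directly from data.
# Hypothetical illustration using scikit-learn and a public dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Hundreds of trees, each splitting on different feature thresholds.
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

print("Test accuracy:", model.score(X_test, y_test))
print("Prediction for the first test case:", model.predict(X_test[:1]))
# The prediction works, but there is no single human-readable rule to point to:
# the answer is an aggregate vote across hundreds of trees and thousands of splits.
print("Trees voting on that decision:", len(model.estimators_))
```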

If you can’t explain how it works, you can’t come up with ways to improve it

A machine we cannot explain is a machine we cannot improve. It may deliver results that work, but we are left scratching our heads as to how and why, locked out of a process that human imagination should be adding rocket fuel to.

But that’s not the only drag on performance we introduce when we let AI get away with failing to explain itself. Customer and employee experience takes a hit when these vital stakeholders remain unconvinced that decisions made by artificial intelligence are fair, based on accurate and relevant data, and made with reasoning they—or any human—can make sense of. Black-box AI introduces a trust issue.

If you want customers to stick with you, they have to trust your machines

Trust in the machine is essential for customers, patients, and users in everything from better medical diagnoses to individually assessed insurance pricing, credit card and mortgage applications, and customer loyalty programs. Without trust in the AI, customers may feel cheated or exploited by their experience. For example, if no one can explain why you are charged one price for a hotel room while someone else pays another, your confidence takes a hit, and you take your money elsewhere.

Regulation is headed in the direction of a “right to explanation”

There are growing legislative requirements on businesses to provide transparency in decision making and an increasing clamor for a “right to explanation”—providing consumers with the right to an explanation of an algorithm’s decision. The EU’s General Data Protection Regulation (2016), for example, emphasizes the need for explanations to be accessible to, and testable by, humans:

“The data subject should have the right not to be subject to a decision, which may include a measure, evaluating personal aspects relating to him or her which is based solely on automated processing and which produces legal effects concerning him or her or similarly significantly affects him or her, such as automatic refusal of an online credit application or e-recruiting practices without any human intervention.” (Source: Recital 71, EU GDPR 2016)

Explainable AI is essential for employee experience and innovation

When employees are faced with a black box of unknowable variables and steps that churn out a decision, they cannot confirm or challenge what the machine knows and are therefore less able to provide explanations to customers who may be frustrated or angered by the outputs.

They are also hamstrung when it comes to generating new and testable assumptions. Without the transparency of explainability, ongoing improvements to the processes governing the acquisition, understanding, and use of data for business outcomes are all derailed.

XAI puts you back in charge—working with humans in continuous improvement

AI is increasingly embedded in the decision technologies we rely on to run our businesses. If we don’t understand the steps those technologies take, the choices they make, and the reasoning they use, we lose crucial business control levers. Explainable artificial intelligence puts us back in charge.

XAI is all about bringing transparency to what our tech has done, what it is doing right now, what it will do next, and what data those actions are based on. Relax. This does not mean leaders must take a crash course in the complexities of programming, developing, or even maintaining AI models. Instead, the emphasis is on XAI delivering transparency, interpretability, and explainability in a way that humans can make sense of.
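To show what this can look like in practice, here is a minimal sketch of our own (the open-source SHAP library and scikit-learn are illustrative tool choices on our part, not products referenced elsewhere in this paper) that attributes one individual prediction to the input features it was based on, in a form a non-specialist can scan.

```python
# Minimal sketch of per-decision explainability using the open-source SHAP library.
# Illustrative tool and dataset choices; not a reference to any vendor named in this paper.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(data.data, data.target)

# Attribute one individual prediction to the inputs it was actually based on.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(data.data[:1])[0]  # one attribution per input feature

# Rank features by how strongly they pushed this particular prediction up or down.
ranked = sorted(zip(data.feature_names, contributions), key=lambda p: abs(p[1]), reverse=True)
for name, value in ranked[:5]:
    print(f"{name}: {value:+.2f}")
```

The point is not the particular library; it is that every individual decision arrives with a ranked, human-readable account of what drove it.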

Build digital fluency in the human workforce—and drive employee experience

We need to understand the tech to embrace the tech. This concept speaks to the HFS OneOffice vision’s digital fluency—an important contributor in augmenting human performance in decision making. By virtue of XAI’s intent to make machine intelligence knowable, humans can be fully engaged, using our digital fluency to play our pivotal role in enhancing imagination and coming up with innovations even the most advanced artificial intelligence is years from competing with. We see this as a key element of the OneOffice Emerging Tech Platform (see Exhibit 2). Technology assists and complements human expertise, continuously learning from interactions and feedback, with processes in the cloud that are also improved through continuous human interaction.

What is needed to make AI “explainable”?

For transparency, XAI demands that the processes through which the AI learns and adapts from its training data remain consistent and observable to a human.

For interpretability, it must always be possible to comprehend the learning model applied, so that we can understand the basis for decision making.

For explainability, humans must be able to understand how those elements of “interpretability” connect to produce decisions in any particular context.
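To make the interpretability requirement concrete, here is a minimal sketch of our own (scikit-learn is an assumed tool; the dataset and model are purely illustrative): a shallow decision tree whose entire learned logic can be printed as rules a human can read and test.

```python
# Minimal sketch of an inherently interpretable model: a shallow decision tree
# whose learned rules can be printed and read end to end by a human.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# The entire basis for every decision the model will ever make, in plain if/then form.
print(export_text(tree, feature_names=list(data.feature_names)))
```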

XAI steps up to become a foundational principle in good data governance

Because this transparency in AI decision making is so pivotal to the relationship technology must have with humans in our OneOffice vision, and because the automated use of data in business has advanced so far, we believe we must now count XAI among the foundational default principles of good data governance (see Exhibit 3).

Exhibit 3: Explainable AI is a default principle in the development of best-fit data governance

Source: HFS Research, 2021

Machines need business domain expertise and our business nous

Note in Exhibit 3 (above) the prominence of “Knowledge enabled.” Our belief that this is foundational to good data governance is supported and reflected by recent shifts in the focus of XAI away from explainability-for-all toward end-user-relevant explainability.

As HFS’s VP Research Reetika Fleming pointed out in our Point of View introducing XAI in 2018,

“If AI is to have true business-ready capabilities, it will only succeed if we can design the business logic behind it. That means business leaders who are steeped in business logic need to be front-and-center in the AI design and management processes.”

“…we need to design AI systems that are easy for humans to explain and easy to design, without the need for deep algorithmic and programming capabilities. AI needs to be user friendly, have visual interfaces that make it simple for non-technical people to understand and modify on the fly, and have the proven scalability and security to deploy across businesses.”

A social and psychological approach is driving a new era in end-user-oriented XAI

To date, XAI development has been driven more by the capabilities of the technology than the needs of the end user. Researchers in the space are now reaching into social sciences and psychology to provide methodologies and explanations that align more genuinely to human needs.

The mission is to make it easier for humans and AI to work hand in hand and, in so doing, resolve the trust issue at the heart of adopting and accepting AI as an augmentation of humans rather than a threat.

The 2020 paper Directions for Explainable Knowledge-Enabled Systems [1] concludes that we are embarking on a new era for AI in which explainability plays a central role.

Its authors see an explosion in demand for user-centered explainability, with a wide range of approaches to meet the particular needs of different user types, contexts, and domains.

“Different situations, contexts, and user requirements demand explanations of varying complexities, granularities, levels of evidence and presentation,” they assert.

The end user is being placed at the heart of AI development to an extent not previously seen. This rising demand to take AI out of the lab and into everyday business decision-making is accelerating the need for XAI, where the X-factor is genuinely human.

[1] Shruthi Chari, Oshani Seneviratne, Deborah L. McGuinness (Rensselaer Polytechnic Institute, Troy, NY, USA), and Daniel M. Gruen (IBM Research, Cambridge, MA, USA), “Directions for Explainable Knowledge-Enabled Systems,” 2020 – available from arXiv.org.

Google, Accenture, Microsoft, and Infosys are all taking this seriously—so should you

To learn about some of the organizations championing XAI, such as OpenAI (backed by the likes of Elon Musk, Peter Thiel, AWS, Infosys, and others) and Partnership on AI (bringing together Accenture, Amazon, Apple, DeepMind, Facebook, Google, Intel, Microsoft, and others), take a look at our 2018 paper Why your firm must embrace Explainable AI… The paper also describes pioneering work in the space by simMachines, Accenture, and the US Defense Advanced Research Projects Agency (DARPA).

Roll forward to 2021, and we can add example applications such as Google’s Cloud XAI platform, which scores the factors of an ML model to reveal how each contributes to its final predictions.

Flowcast, which focuses on credit decisions in fintech, raised $3 million from ING in March 2021. Its API-based solution sets out to open up the black-box models within company systems. Fiddler Labs, a Palo Alto start-up, raised additional undisclosed funds from the Amazon Alexa investment fund last year with the stated intent of accelerating AI explainability.

On March 15, 2021, Deloitte announced a joint go-to-market arrangement with Chatterbox for its patented “ethical and trustworthy AI” software, which enables organizations to “validate and understand” their AI initiatives and reveal whether they are operating both fairly and ethically.

To reap maximum return on investment (ROI) from XAI solutions, enterprises should bear the following limitations in mind and create roadmaps to successfully navigate them.

  • The transparency-accuracy tradeoff. Making the workings of an AI model more transparent to human observers may mean sacrificing algorithmic complexity and sophistication, as can demanding a reduction in the number of variables an AI model bases its decisions on. Enterprises should engage expert third parties to give an objective assessment of an XAI solution’s ability to balance transparency and accuracy (a minimal sketch of the tradeoff follows this list).
  • IP concerns. As other commentators have pointed out, making a complex AI model explainable and understandable to a non-specialist audience may mean giving up trade secrets or a competitive edge. This may prove less of a challenge with the emerging focus of XAI on in-context and domain-specific explainability. The AI need not explain itself to everyone, just those with the need or right to know.
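To make the first of these tradeoffs tangible, here is the sketch referenced above (our own illustration, using scikit-learn on a public dataset): a deliberately shallow, fully readable model compared with a far less transparent ensemble. The gap between the two accuracy scores is the price transparency can exact on this dataset, and judging whether that price is acceptable is exactly the assessment enterprises should seek.

```python
# Minimal sketch of the transparency-accuracy tradeoff (illustrative only).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

transparent = DecisionTreeClassifier(max_depth=2, random_state=0)  # rules fit on a single page
opaque = GradientBoostingClassifier(random_state=0)                # hundreds of stacked trees

# Cross-validated accuracy for each model; the gap is what transparency costs here.
print("Transparent model accuracy:", round(cross_val_score(transparent, X, y, cv=5).mean(), 3))
print("Opaque model accuracy:     ", round(cross_val_score(opaque, X, y, cv=5).mean(), 3))
```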

The Bottom Line: Embrace end-user-oriented XAI to deliver the innovation capacity and digital fluency every organization needs to tackle the ambiguity of the post-pandemic economy.

XAI allows business leaders to regain control of critical levers to manage success through the rapid ups and downs promised by the ambiguity of the emerging post-pandemic economy.

It builds employee and customer experience through the trust it establishes; it restores the experimental capabilities, removed by black-box ML models, that are required to respond to need at pace; and it enhances digital fluency by enabling human understanding of AI.

 
