Point of View

Why your firm must embrace Explainable AI to get ahead of the hype and understand the business logic of AI


AI will only deliver true business-ready capabilities if we can design the business logic behind it. That means business leaders who are steeped in that logic need to be front and center in the AI design and management processes.

 

We may not understand the complexities of programming, developing, or even maintaining artificial intelligence (AI) models, but we must all understand an AI's purpose, its decision-process logic, what it is designed to accomplish, and how it is designed to self-remediate over time.

 

Hence, we need to design AI systems that are easy for humans to explain and easy to design, without the need for deep algorithmic and programming expertise. AI needs to be user-friendly, with visual interfaces that make it simple for non-technical people to understand and modify on the fly, and with the proven scalability and security to deploy across businesses.

 

Introducing XAI: Explainable AI models designed for the business user to manage

 

So, how do we make a technology that is built to outstrip the human brain understandable to humans? And understandable it must be: if it becomes impenetrable, the consequences can be severe. In some instances, AI models have already proven to make better decisions than humans, such as predicting revenues or making medical diagnoses, but they have also been known to fire people unjustly and cause fatal traffic accidents. Such errors are caused by biases embedded in historical training data and by the logic the algorithms develop through the 'learning' an AI undergoes, a logic opaque to humans, i.e., "black box" logic.

 

Despite this seemingly unsolvable paradox, there is a growing movement to design "explainable AI," or XAI. An XAI is an AI model programmed to explain its goals, logic, and decision making so that the average human user (a programmer, an end user, or a person impacted by the model's decisions) can understand it. XAI aims to bring transparency and accountability to the AI space to ensure that the technology benefits society, organizations, and individuals rather than harms them.

 

As enterprises delegate ever-more critical decisions to AI models in all industries, those using and buying the technology must be able to understand precisely how AI models draw their conclusions. Otherwise, enterprises expose themselves to fiscal and reputational damage as data and transparency laws grow more stringent, and public sensitivity around AI becomes more acute.

 

XAI is not yet a perfect concept and leaves some questions unanswered, but no sensible enterprise using or wanting to use AI can afford to ignore it altogether. This POV outlines the steps enterprises can start taking today toward AI transparency and the pioneers who can help them get there.

 

The technology to make XAI a reality is already here—and it’s changing the AI space

 

Attempts at creating XAI have been around since the 1970s, but they are becoming a pressing priority as the possibility of AI moving beyond human understanding and control approaches reality. In particular, deep learning algorithms, which rely on multiple neural network layers to learn how to extract patterns and make decisions, are a cause for concern: they are non-linear and take far more parameters into account than a human mind could grasp. However, several factors are enabling solutions to this dilemma to emerge:

 

  • Dedicated fora. Heavyweights from both the AI software vendor and enterprise user sides are forming consortia dedicated to bringing transparency, fairness, and accountability to the space by pooling their considerable resources. OpenAI has amassed over $1 billion in investment from the likes of Elon Musk, Peter Thiel, AWS, Infosys, and others. Partnership on AI, meanwhile, boasts Accenture, Amazon, Apple, DeepMind, Facebook, Google, Intel, Microsoft, and others as members in its quest to form AI best practices.
  • Available technology. Developers are also getting creative with existing technologies to make AI more decipherable. They are leveraging Bayesian Rule Lists (BRLs), reversed time attention models (RETAIN), and layer-wise relevance propagation (LRP) either to provide clear explanations of an AI decision after it has been made or to make the underlying model itself more understandable in the first place. Besides these options, new ones are being developed, such as similarity-based learning (a generic example of this kind of explainability is sketched below).
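To make the second of these ideas concrete, the sketch below shows one common, generic post-hoc approach, a "global surrogate": a small, human-readable decision tree trained to mimic a black-box model's predictions so its rules can be read directly. This is an illustrative assumption-laden example, not a reproduction of BRL, RETAIN, LRP, or any vendor's implementation; the dataset and model choices are placeholders.

```python
# Illustrative sketch only: a "global surrogate" -- a shallow, readable model
# trained to mimic a black-box model so humans can inspect its decision rules.
# Data, model choices, and thresholds are assumptions for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy dataset standing in for enterprise decision data
X, y = make_classification(n_samples=2000, n_features=6, n_informative=4,
                           random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

# 1) Train the opaque, high-accuracy model
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# 2) Train a small, readable tree to imitate the black box's *predictions*
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# 3) How faithfully does the surrogate track the black box?
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity to black-box predictions: {fidelity:.1%}")

# 4) The surrogate's rules are the human-readable explanation
print(export_text(surrogate, feature_names=feature_names))
```

The tradeoff is fidelity: the simpler the surrogate, the easier it is to read but the less perfectly it tracks the original model, which is exactly the tension the techniques above try to resolve.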

 

There is by now agreement on what an AI model should be able to disclose in order to qualify as XAI: its strengths and weak spots, the parameters it uses to make a decision, why it reached a particular conclusion as opposed to an alternative one, the mistakes it is most liable to commit, and how to rectify those mistakes. This level of disclosure ought to benefit developers, enterprise users, and consumers by making AI models more trainable and rectifiable; making them more transparent, controllable, and less of a liability; and giving individuals redress if they have been adversely affected by an AI model's decision. In theory, this could make AI fairer, more trustworthy, and more widely usable. Next, HFS takes a closer look at three organizations seeking to make these theoretical benefits tangible for enterprises.
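Before turning to those pioneers, here is a minimal sketch of what two of those disclosures can look like with today's open-source tooling: which input parameters most drive a model's decisions (via permutation importance) and which mistakes it is most liable to commit (via a confusion matrix). The dataset and model below are assumptions chosen purely for brevity.

```python
# Illustrative sketch: surfacing two of the disclosures named above --
# the parameters a model relies on, and the errors it is most prone to.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=8, n_informative=5,
                           random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

model = RandomForestClassifier(n_estimators=200, random_state=1)
model.fit(X_train, y_train)

# "The parameters it uses to make a decision": permutation importance measures
# how much held-out accuracy drops when each feature is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=1)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: importance {result.importances_mean[i]:.3f}")

# "The mistakes it is most liable to commit": false positives vs. false negatives
tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
print(f"false positives: {fp}, false negatives: {fn}")
```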

 

Organizations have multiple options to start transforming their AI into XAI

 

The XAI field is expanding every day as more firms devise an ever-wider array of approaches to making explainable AI a reality. We did a deep dive on just three such pioneers, selected because they represent the wide range of XAI options available to enterprises with skin in the AI game.

  • simMachines. Chicago-based simMachines, founded in 2012, offers clients a proprietary similarity-based machine learning engine specializing in customer experience optimization, explainable pattern detection and forecasting, and fraud and compliance. The software claims to make data analytics more granular, predictive, and unbiased (see Exhibit 1). More importantly, simMachines uses a similarity-based learning method, rather than decision trees or neural networks, to train its algorithms. The startup says this method makes its engine the only one that can give clients the "why" behind each prediction: the justification for each conclusion it draws, at a local level (see the sketch below). Using dynamic dimension reduction techniques, simMachines can precisely identify the variables that went into the software's decision making, making the AI's learning model more transparent while not compromising on accuracy, speed, or sophistication.
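simMachines' engine is proprietary, so the sketch below is not its implementation; it only illustrates the general idea behind similarity-based explanation: justify each prediction by pointing to the most similar historical cases and how close they are. The data and parameters are assumptions.

```python
# Illustrative sketch of similarity-based explanation (not simMachines' code):
# predict from the nearest historical cases and report them as the "why".
from sklearn.datasets import make_classification
from sklearn.neighbors import NearestNeighbors

X, y = make_classification(n_samples=1000, n_features=5, n_informative=3,
                           random_state=2)

index = NearestNeighbors(n_neighbors=5).fit(X)

def predict_with_why(query):
    """Majority-vote prediction plus the neighbors that justify it."""
    distances, neighbor_ids = index.kneighbors(query.reshape(1, -1))
    neighbor_ids = neighbor_ids[0]
    prediction = int(round(y[neighbor_ids].mean()))  # majority vote for 0/1 labels
    # The explanation: which past cases drove the call, and how close they are
    why = [
        {"case": int(i), "label": int(y[i]), "distance": float(d)}
        for i, d in zip(neighbor_ids, distances[0])
    ]
    return prediction, why

pred, why = predict_with_why(X[0])
print("prediction:", pred)
for entry in why:
    print(entry)
```

In a production-grade engine the similarity metric itself would be learned and dimensionality reduced dynamically, but the explanatory output takes the same shape: a prediction plus the concrete cases that justify it.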

 

Exhibit 1: Branding value of XAI, according to simMachines

 

 

Source: simMachines (screenshot)

 

  • Accenture. In June 2018, consulting and services heavyweight Accenture rolled out a tool to help enterprises detect and scrub embedded biases, such as gender, racial, and ethnic bias, from their AI software. The tool is aimed at organizations making high-stakes decisions about mortgages, parole, and benefits eligibility; it lets clients define sensitive data categories like race, gender, and age and track the degree to which they correlate with other data categories. For instance, race might correlate highly with postcodes, so an algorithm could be instructed to ignore both postcode and race inputs to eliminate bias. The tool also provides a dashboard to help companies track these correlations and see how uncoupling data fields affects model accuracy. Moreover, the tool measures an algorithm's "predictive parity" fairness, for instance, whether it generates the same rate of false positives and false negatives across genders and ethnicities (a minimal sketch of both checks follows below). It also gives developers insight into how their models' accuracy changes as predictive parity improves. Despite historical concerns over the trade-off between bias elimination and model accuracy, Accenture demonstrated that boosting predictive parity augmented model accuracy for its clients. Such tools are rapidly becoming more common, with IBM launching its own in September.
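Accenture's tool is proprietary, but the two checks it is described as performing are straightforward to sketch: how strongly a sensitive attribute correlates with a proxy field (e.g., race and postcode), and whether error rates diverge across groups. The toy data below is an assumption purely for illustration.

```python
# Illustrative sketch (not Accenture's tool): two fairness checks described above.
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# Toy data: a binary sensitive attribute and a "postcode" field that partly
# encodes it -- the proxy problem the dashboard is meant to expose.
group = rng.integers(0, 2, n)                    # e.g., 0/1 sensitive attribute
postcode = group * 10 + rng.integers(0, 10, n)   # postcode correlated with group

# Check 1: correlation between the sensitive attribute and a candidate proxy
corr = np.corrcoef(group, postcode)[0, 1]
print(f"correlation(sensitive attribute, postcode) = {corr:.2f}")

# Check 2: do false-positive / false-negative rates differ by group?
y_true = rng.integers(0, 2, n)
y_pred = y_true.copy()
flip = rng.random(n) < np.where(group == 1, 0.15, 0.05)  # group 1 gets more errors
y_pred[flip] = 1 - y_pred[flip]

for g in (0, 1):
    mask = group == g
    fpr = np.mean((y_pred == 1) & (y_true == 0) & mask) / np.mean((y_true == 0) & mask)
    fnr = np.mean((y_pred == 0) & (y_true == 1) & mask) / np.mean((y_true == 1) & mask)
    print(f"group {g}: false-positive rate {fpr:.2%}, false-negative rate {fnr:.2%}")
```

A real deployment would run these checks on genuine features and model outputs, and would also re-measure accuracy as proxy fields are uncoupled, as the dashboard described above is said to do.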

 

  • The Defense Advanced Research Projects Agency (DARPA). DARPA is a US Department of Defense agency with a mandate to research and create technologies for military use. Its XAI program aims to develop best practices and machine learning models that can lead to the creation of far more transparent, yet no less accurate, AI models: what it calls "glass box" models (see Exhibit 2). The goal, in DARPA's words, is to "enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners." Being able to safely delegate to an AI is particularly important in a military context. The program's product will be "a toolkit library consisting of machine learning and human-computer interface software modules that could be used to develop future explainable AI systems." When the program concludes, the toolkit will be made more widely available for a range of military and commercial use cases. DARPA plans to present the full outcomes of its first-phase evaluations in November 2018. In September, DARPA also announced a new $2 billion program to develop next-gen AI capable of contextual reasoning, to "create more trusting, collaborative partnerships between humans and machines."

 

Exhibit 2: DARPA’s vision for explainable artificial intelligence

 

 

Source: DARPA

 

As even this small sample of XAI pioneers illustrates, contributors in this field are developing and honing XAI solutions focused on solving an ever-wider range of issues associated with opaque AI models. While simMachines focuses on putting businesses firmly in control of their data, Accenture aims to allay growing public unease over seemingly arbitrary AI verdicts, thus safeguarding its clients' reputations. DARPA, meanwhile, is developing military-grade software optimized for high-stakes situations in which dependability is key. With such a range of XAI options, enterprises have a rich pick of solutions to start making their AI models and usage more accountable and transparent.

 

The XAI movement isn't perfect: here's what organizations should be aware of

 

However, despite the best efforts of simMachines, Accenture, DARPA, and their peers, there are persistent unresolved issues in the XAI space, not all of which have a technological fix. To reap maximum ROI from XAI solutions, enterprises should bear these limitations in mind and draw up roadmaps for how to successfully navigate them.

 

  • The transparency-accuracy tradeoff. Until recently, making the workings of an AI model more transparent to human observers inevitably meant sacrificing algorithmic complexity and sophistication, as it involved reducing the number of variables an AI model based its decisions on. Developers are working around this tradeoff, as shown by Accenture, and recent studies show that Bayesian Rule Lists now boast the same accuracy levels as top-level ML algorithms approximately 85% of the time. However, such success isn't guaranteed for all XAI solutions. Enterprises should engage consultants or other third parties to give an objective assessment of an XAI solution's ability to balance transparency and accuracy before implementing it.
  • IP concerns. As other commentators have pointed out, making a complex AI model explainable and understandable to a non-specialist, mass-market audience presents a quandary for the companies developing such software, as it means giving up trade secrets and competitive edge. Such IP concerns could understandably slow down progress in the XAI space and even deter vendors from making their solutions fully transparent. To smooth such friction, enterprises could consider acquiring smaller vendors with particularly promising technologies. Although this will not address the broader dilemma, it could make individual enterprises' path to XAI smoother.
  • Defining “understandable.” “Understandable” can mean very different things to different audiences, especially in a field as technically complex as AI. Making an AI model comprehensible to an audience of engineers and AI specialists is a far different bar than rendering it fully understandable to the average consumer. When ensuring the transparency of their AI models, enterprises should therefore invest not only in data scientists and AI specialists but also in seemingly more peripheral roles such as documentation writers, who are skilled at communicating complex technical concepts to non-specialist audiences.

 

There is little that enterprises can do about these issues on their own, as they depend on the evolution of the market's technological advances, regulation, and emerging industry standards. For now, enterprises depending increasingly on AI have to think not only tactically (which technology to use or buy) but also strategically: how market economics will shift to find this balance, and how to make AI both more efficient than and more comprehensible to the human brain. If enterprises are aware of these broader issues, they can involve themselves in initiatives shaping the future of the market, perhaps to their advantage. In the meantime, what they can do is actively start leveraging the technology available to begin future-proofing their AI strategy.

 

Bottom line: If your organization isn’t already thinking of how to achieve XAI, you’re not doing your homework, and will soon be called out on it

 

As advances in AI make these models so sophisticated that their logic becomes increasingly subtle and incomprehensible to humans, the need for auditable, accountable, and understandable AI becomes inevitable and might be hurried along by regulators’ (and consumers’) justified concerns. If your organization is using or looking to use AI—and by now, this should be a universal driver—you’re going to have to make sure you understand how your algorithms are working. If you don’t, you’ll leave your organization open to legal action, regulatory fines, loss of customer trust, security risks through lack of effective oversight, reputational damage, and, most fundamentally, losing full control of how your business operates.

 

As a technology that explicitly aims to not just replicate but improve upon how the human brain works, AI is hard to understand and to use, especially for organizations that do not have a technological heritage and lack specialist staff. Making AI transparent will therefore be almost impossible for them to achieve alone. They will need help, and they will need to start taking action soon. In other words, companies dabbling with AI must start doing one of three things today:

 

  • Join a program or consortium focused on achieving XAI, such as DARPA's XAI program or OpenAI.
  • Bring in third-party XAI solutions to completely outsource data management to a transparency specialist, like simMachines.
  • Leverage tools that help ensure their current AI models are as understandable and unbiased as possible, like Accenture’s.

 
