AI will only deliver true business-ready capabilities if we can design the business logic behind it. That means business leaders who are steeped in business logic need to be front and center in the AI design and management processes.
We may not understand the complexities of programming, developing, or even maintaining artificial intelligence (AI) models, but we all must understand the purpose of the AI, its decision-process logic, what it is designed to accomplish, and how it is designed to self-remediate over time.
Hence, we need to design AI systems that are easy for humans to explain and easy to design, without the need for deep algorithmic and programming capabilities. AI needs to be user-friendly, have visual interfaces that make it simple for non-technical people to understand and modify on the fly, and have the proven scalability and security to deploy across businesses.
Introducing XAI: Explainable AI models designed for the business user to manage
So, how do we make a technology built to outstrip the human brain understandable to humans? And understandable it must be; if it becomes impenetrable, the consequences can be severe. Although, in some instances, AI models have already proven to make better decisions than humans, such as predicting revenues or making medical diagnoses, they have also been known to fire people unjustly and cause fatal traffic accidents. Such errors are caused by biases embedded in historical training data and by the logic the models develop through the ‘learning’ an AI undergoes, a logic opaque to humans, i.e. “black box” logic.
Despite this seemingly unsolvable paradox, there is a growing movement to design “explainable AI,” or XAI. XAI is an AI model that is programmed to explain its goals, logic, and decision making so that the average human user (a programmer, an end user, or a person impacted by an AI model’s decisions) can understand it. XAI aims to bring transparency and accountability to the AI space to ensure that the technology benefits society, organizations, and individuals rather than harming them.
As enterprises across all industries delegate ever more critical decisions to AI models, those using and buying the technology must be able to understand precisely how those models draw their conclusions. Otherwise, enterprises expose themselves to fiscal and reputational damage as data and transparency laws grow more stringent and public sensitivity around AI becomes more acute.
XAI is not yet a perfect concept and leaves some questions unanswered, but no sensible enterprise using or wanting to use AI can afford to ignore it altogether. This POV outlines steps enterprises can start taking today toward AI transparency and the pioneers who can help them get there.
The technology to make XAI a reality is already here—and it’s changing the AI space
Attempts at creating XAI have been around since the 1970s, but they are becoming a pressing priority as the possibility of AI moving beyond human understanding and control approaches reality. In particular, deep learning algorithms, which rely on multiple neural network layers to learn how to extract patterns and make decisions, are a cause for concern: they are non-linear and take far more parameters into account than a human mind could grasp. However, several factors are enabling solutions to this dilemma to emerge:
There is by now agreement on what an AI model should be able to disclose in order to qualify as XAI: its strengths and weak spots, the parameters it uses to make a decision, why it reached a particular conclusion rather than an alternative one, the mistakes it is most liable to commit, and how to rectify those mistakes. This level of disclosure ought to benefit developers, enterprise users, and consumers by making AI models more trainable and rectifiable; making them more transparent, controllable, and less of a liability; and giving individuals redress if they have been adversely affected by an AI model’s decision. In theory, this could make AI fairer, more trustworthy, and more widely usable. A minimal, purely illustrative sketch of what per-decision disclosure can look like follows below. Next, HFS takes a closer look at three firms seeking to make these theoretical benefits tangible for enterprises.
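Before turning to those firms, the sketch below shows, in the simplest possible terms, the kind of “why did the model decide this?” disclosure described above. It is a hypothetical example, not any vendor’s product: it uses a plain logistic regression (where each feature’s contribution to a decision can be read directly from its coefficient), and the feature names and synthetic data are illustrative assumptions only. Real XAI tooling goes far beyond this, especially for deep learning models.

```python
# Hypothetical sketch of per-decision disclosure: which inputs drove a single
# prediction, and by how much. Feature names and data are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["tenure_months", "monthly_spend", "support_tickets"]  # assumed names
X = rng.normal(size=(500, 3))
# Synthetic "churn" label, loosely driven by spend and support tickets
y = (0.8 * X[:, 1] - 1.2 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(x):
    """Return each feature's signed contribution to the decision score."""
    contributions = model.coef_[0] * x                 # per-feature contribution
    score = contributions.sum() + model.intercept_[0]  # full decision score
    return dict(zip(features, contributions)), score

customer = X[0]
contribs, score = explain(customer)
print("Predicted class:", int(model.predict(customer.reshape(1, -1))[0]))
for name, c in sorted(contribs.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.2f}")
```

The point of the sketch is the output format, not the model: a non-technical user sees which factors pushed the decision one way or the other, which is the level of transparency the XAI movement is asking opaque models to match.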
Organizations have multiple options to start transforming their AI into XAI
The XAI field is expanding every day as more firms devise an ever-wider array of approaches to making explainable AI a reality. We did a deep dive on just three such pioneers, selected because they represent the wide range of XAI options available to enterprises with skin in the AI game.
Exhibit 1: Branding value of XAI, according to simMachines
Source: simMachines (screenshot)
Exhibit 2: DARPA’s vision for explainable artificial intelligence
Source: DARPA
As even this small sample of XAI pioneers illustrates, contributors in this field are developing and honing XAI solutions that address an ever-wider range of issues associated with opaque AI models. While simMachines focuses on putting businesses firmly in control of their data, Accenture aims to allay growing public unease over seemingly arbitrary AI verdicts, thus safeguarding its clients’ reputations. DARPA is developing military-grade software optimized for high-stakes situations in which dependability is key. With such a range of XAI options, enterprises have a rich choice of solutions with which to start making their AI models and usage more accountable and transparent.
The XAI movement isn’t perfect—here’s what organizations should be aware of
However, despite the best efforts of simMachines, Accenture, DARPA, and their peers, there are persistent unresolved issues in the XAI space, not all of which have a technological fix. To reap maximum ROI from XAI solutions, enterprises should bear these limitations in mind and draw up roadmaps for how to successfully navigate them.
There is little that enterprises can do about these issues, as they depend on the evolution of the market’s technological advances, regulation, and emerging industry standards. For now, enterprises that depend increasingly on AI have to think not only tactically (which technology to use or buy) but also strategically (how market economics will shift to find this balance, and how to make AI both more efficient than and more comprehensible to the human brain). If enterprises are aware of these broader issues, they can involve themselves in initiatives shaping the future of the market, perhaps to their advantage. In the meantime, what they can do is actively start leveraging the technology available to begin future-proofing their AI strategy.
Bottom line: If your organization isn’t already thinking about how to achieve XAI, you’re not doing your homework and will soon be called out on it
As advances in AI make these models so sophisticated that their logic becomes increasingly subtle and incomprehensible to humans, the need for auditable, accountable, and understandable AI becomes inevitable, and it may well be hurried along by regulators’ (and consumers’) justified concerns. If your organization is using or looking to use AI (and by now, this should be a universal driver), you’re going to have to make sure you understand how your algorithms are working. If you don’t, you’ll leave your organization open to legal action, regulatory fines, loss of customer trust, security risks through lack of effective oversight, reputational damage, and, most fundamentally, loss of full control over how your business operates.
As a technology that explicitly aims to not just replicate but improve upon how the human brain works, AI is hard to understand and to use, especially for organizations that do not have a technological heritage and lack specialist staff. Making AI transparent will therefore be almost impossible for them to achieve alone. They will need help, and they will need to start taking action soon. In other words, companies dabbling with AI must start doing one of three things today: