
Explainable AI (XAI) provides insights into the data, variables and decision points used to make a recommendation. Although machine learning is the most common application of AI, businesses are often concerned that machine learning models are opaque and non-intuitive, and that they provide limited information about their decision-making process and predictions.

The Association of Chartered Certified Accountants’ (ACCA’s) recent report, Explainable AI: Putting the User at the Core, looks into this aspect and sets out how Explainable AI (XAI) is an emerging field that addresses the opacity, or so-called black-box issue, within AI decision making.

XAI emphasizes not just the output an algorithm provides, but also how it works with the user and how the output or conclusion is reached. XAI approaches shine a light on the algorithm’s inner workings to show the factors that influenced its output. Moreover, the idea is for this information to be available in a human-readable way, rather than being hidden within code.

ACCA’s report addresses explainability from the perspective of practitioners, i.e. accountancy and finance professionals. It is in the public interest to support the thinking behind XAI, which helps to balance the protection of the consumer with innovation in the marketplace.

The complexity, speed and volume of AI decision-making often obscure what is going on in the background (the black box), which makes the model difficult to interrogate. Explainability, or the lack of it, affects the ability of professional accountants to understand and display scepticism. In a recent ACCA survey, 54% of respondents agreed with this statement, more than double the proportion who disagreed.

It’s an area that’s relevant to being able to trust technology and to be confident that it’s used ethically. It’s as much a design principle as a set of tools. This is AI decoded, designed to augment the human ability to understand and interrogate the results returned by the model.

A survey of accountancy professionals conducted for the report highlighted that more than half of respondents were unaware of XAI. This impairs the ability to engage, and the report sets out some of the key developments to help raise awareness. In accountancy, AI isn’t fully autonomous, nor is it a complete fantasy. The middle path of augmenting, as opposed to replacing, the human works best when the human understands what the AI is doing, which requires explainability. And to embed explainability into enterprise adoption, it’s important to consider the level of explainability needed, and how it can help with model performance, as well as ethical use and legal compliance.

XAI is also important for policy makers, for instance in government or at regulators. They frequently hear the developer/supplier perspective from the AI industry. But this report can complement that with a view from the user/demand side, so that policy can incorporate consumer needs. Explainability empowers consumers and regulators by reducing the deep asymmetry between experts who understand AI and the wider public. And for regulators, it can help reduce systemic risk if there is a better understanding of factors influencing algorithms that are being increasingly deployed across the marketplace.

XAI comprises many types of approaches, such as local and global explainability. The report goes into more detail, but in summary the former seeks to explain how individual data points behave within a machine learning model, while the latter seeks to explain the model as a whole. Furthermore, some techniques are model-specific, while others are model-agnostic (they can be applied to explain any model).
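To make that distinction concrete, the minimal Python sketch below (not from the report; the toy dataset and model choice are illustrative assumptions) contrasts a global, model-agnostic technique, permutation importance, with a crude local explanation that perturbs one feature of a single instance to see how that particular prediction moves.

```python
# A minimal sketch, assuming scikit-learn is available; the toy data,
# model and feature indices are illustrative, not from the ACCA report.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Global, model-agnostic: permutation importance ranks each feature by how
# much shuffling it degrades the model's accuracy across the whole dataset.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: global importance {imp:.3f}")

# Local: nudge one feature of a single instance and watch the predicted
# probability move -- a crude per-prediction (local) explanation.
instance = X[0].copy()
baseline = model.predict_proba(instance.reshape(1, -1))[0, 1]
for i in range(X.shape[1]):
    perturbed = instance.copy()
    perturbed[i] += X[:, i].std()  # shift feature i by one standard deviation
    delta = model.predict_proba(perturbed.reshape(1, -1))[0, 1] - baseline
    print(f"feature {i}: local effect on this prediction {delta:+.3f}")
```

Production tools such as LIME and SHAP implement far more principled versions of the local idea, but the contrast is the same: global methods summarise the model as a whole, while local methods explain one decision at a time.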

The wider point for accountancy and finance professionals is that to embed explainability into the process of adopting AI solutions within an enterprise, an end-to-end approach must be taken, as shown in the schematic.

AI can be polarising: some have unrealistic expectations that it will be like magic and answer all questions, while others are deeply suspicious of what the algorithm is doing in the background. XAI seeks to bridge this gap by improving understanding, both to manage unrealistic expectations and to give a level of comfort and clarity to the doubters.

 

Narayanan Vaidyanathan

Head of Business Insights, ACCA

Narayanan Vaidyanathan leads ACCA's futures research, and focuses on how future trends will impact business and the accountancy profession. With a global remit for the topics he covers, he inspires our team of subject-matter and policy experts to explore specific future trends. Particular areas of thought leadership within his research portfolio include work on the global economy, emerging technologies and business models, sustainability, social mobility, as well as sector specific insights, in relation to the public sector, financial services and oil and gas. He's also our staff expert for the Accountancy Futures Academy Global Forum, and represents ACCA on government, industry and academic forums such as an all-party parliamentary group in the UK and an international standards body.