Why making AI explainable is crucial to its success*

ChAI for MarineStartups.com

“Surprising as it may be to many, AI research has actually been carried out for decades, and concerns about explainability have been riding on its coattails for just as long.”

More affordable and accessible than ever before, Artificial Intelligence (AI) is on everyone’s lips at the moment. In theory, AI has great potential to transform businesses for the better across industries and geographies, but whether it succeeds or fails will ultimately come down to the most human of concepts – trust.

In times of rapidly decreasing trust in political institutions, corporations and financial systems, our need to understand the recommendations produced by the trio of AI, Machine Learning and algorithms is growing. What is actually going on underneath their fashionable, glossy hoods? Telling someone to “Trust me, take my word for it” is simply no longer good enough.

At ChAI, AI is at the heart of everything we do. From a front row position, we see the current gaps that AI can bridge. But to trust and adopt AI, people need to understand its verdicts. Therefore, we are passionate about moving businesses away from black-box algorithms to transparent AI that is easy to understand. In this blog, we take a closer look at Explainable AI and why it’s so important.

What is Explainable AI and why is it important?

Explainability is, simply put, the ability to understand how and why a model has produced a certain outcome. For us at ChAI, explainability means that a human being can understand a machine learning system in the same general, basic sense that they would understand another person briefly spelling out how and why they made a certain prediction.

Is explainability universally beneficial? Can a model’s opaqueness sometimes be useful? One could argue that a credit card company’s fraud-detection algorithm becomes more vulnerable to would-be fraudsters the more explainable it is. Indeed, in cases like these we advocate “intelligent openness” rather than blind, across-the-board explainability of AI models.

Furthermore, analytically orientated data scientists often claim that there is a trade-off between how easy it is to unpick a model and how accurate that model tends to be. In our view, this framing ignores a third dimension: with enough effort and resource, it is often possible to increase the transparency of a once-opaque model while preserving its accuracy.
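
As an illustration of that third dimension, the sketch below shows one common way to bolt interpretability onto an already-trained opaque model: fitting a shallow, human-readable surrogate that mimics its predictions. This is a generic, minimal example rather than a description of ChAI’s pipeline – the synthetic data, the gradient-boosted “black box” and the feature names are all assumptions.

```python
# A minimal sketch of retrofitting transparency onto an opaque model via a
# global surrogate. Illustrative only: the synthetic data, model choices and
# feature names are assumptions, not ChAI's actual approach.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=1000)

# The accurate but hard-to-unpick model
black_box = GradientBoostingRegressor(random_state=0).fit(X, y)

# A shallow decision tree trained to mimic the black box's predictions,
# giving a human-readable approximation of its behaviour
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=["input_a", "input_b", "input_c"]))
```

The surrogate will never be a perfect copy of the original model, but printing its rules gives a readable first approximation of how the opaque model behaves, without touching the opaque model’s accuracy.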

Where humans use the output of a model to support their own decision making – unless the model is known to be perfect at all times (when has that ever happened?) – users need to feel that they understand why a certain prediction was made. If you were a medical doctor, would you rather have a system that diagnosed a terminal disease with 90% accuracy, but where you couldn’t understand the basis of its predictions? Or would you prefer one that was 85% accurate, but whose inner workings you could unpick so that you could explain all the diagnostic factors to your patients? Since each iteration helps improve an AI model’s accuracy, a model that actually gets used has better future prospects than one that doesn’t, because it has more to learn from.

Our work at ChAI is far more prosaic than medical diagnoses, but no less in need of strong human buy-in. When we make predictions about which way the price of a commodity might move in the future, it is essential that our clients basing business decisions on our predictions are able to understand why they have been made — not least because they themselves might have to explain those decisions to their stakeholders. The decision not to mitigate a $100m exposure to aluminium price volatility based on a signal from ChAI is going to be a lot easier for a client to justify if it is backed up by a detailed explanation of all the factors our model has taken into account and how much it has weighted them. Take my word for it? I don’t think so.

Thinking global, acting local

We divide explainability into two areas – global and local. Global explainability gives a general description of how a model maps inputs to outputs across all the inputs it might be presented with. Local explainability, in turn, relates to understanding why a model has come up with a particular prediction for a particular data point.

For example, at ChAI we develop models that predict the price of a metal one month into the future based on inputs such as exchange rates, econometric data, satellite imagery and freight data. A global understanding of the model would show how each of those inputs explains the price of that metal in general. It would say that, on the whole, a stronger economy (as shown through econometric data) and a reduction in supply of the metal (as shown through satellite imagery of open-cast mines) indicate a higher price for that metal. A local understanding, in parallel, would show why, on any particular day, the model predicted a particular movement in the price for that specific combination of input data. Sometimes local explainability will highlight relationships that on the surface appear inconsistent with global ones – for example, a model might predict that a metal price will fall despite a generally stronger economy and reduced supply, because both factors are outweighed by evidence of a particularly weak currency in a major importing country.
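
To make the distinction concrete, here is a minimal sketch of how global and local explanations can be computed for a simple model using SHAP-style attributions. Everything in it – the synthetic data, hypothetical feature names such as usd_index and mine_activity, and the random-forest model – is an illustrative assumption, not a description of ChAI’s models.

```python
# A minimal sketch of global vs. local explainability, not ChAI's pipeline.
# The synthetic data, hypothetical feature names and the use of SHAP values
# on a random forest are all illustrative assumptions.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
features = ["usd_index", "pmi", "mine_activity", "freight_rate"]  # hypothetical inputs
X = rng.normal(size=(500, len(features)))
# Hypothetical target: a metal price driven mostly by economic and supply signals
y = 0.6 * X[:, 1] - 0.5 * X[:, 2] + 0.2 * X[:, 3] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one contribution per feature per prediction

# Global view: average absolute contribution of each input across all predictions
for name, importance in sorted(
    zip(features, np.abs(shap_values).mean(axis=0)), key=lambda pair: -pair[1]
):
    print(f"{name:>14}: {importance:.3f}")

# Local view: why the model made the prediction it did for one particular day
i = 0
print("prediction:", model.predict(X[i : i + 1])[0])
for name, contribution in zip(features, shap_values[i]):
    print(f"{name:>14}: {contribution:+.3f}")
```

The first loop is the global story (which inputs matter most overall); the second is the local one (how each input pushed one specific prediction up or down).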

Explainability is just the beginning

Taking a more holistic view, explainability doesn’t just apply to an AI model itself. It also involves considering why the inputs used in a model were selected – out of all the possible information sources that could have been included – and how the data used to train the model was sampled. Some organisations are now waking up to the fact that they need to retrofit explainability onto their opaque tech stacks, and with some effort this is often feasible.

However, it is much better to embrace the principles of explainability right at the beginning of an organisation’s machine learning journey. Underscoring this, we are starting to see signs that regulators might step in to enforce explainability across many algorithms (machine learning related or not), so it is worth staying ahead of the curve here.

At ChAI, explainable AI is our creed, which is why we work hard every day to:

  • Show an audit trail explaining why our input sources were chosen for each of the commodities that we make predictions for
  • Offer a detailed explanation of the provenance of all the training data we use
  • Select algorithms that are readily interpretable without sacrificing accuracy
  • Engage with our end users, particularly those sceptical of data science, to question the outputs being produced
  • Build a client interface that clearly articulates the decisions produced at every stage of our pipeline

Explainable AI is only one part – but a key one – of increasing users’ trust in, and adoption of, AI systems. Watch this space for more developments from ChAI.

About ChAI

ChAI helps buyers and sellers of commodities mitigate price volatility by forecasting prices over strategically useful time horizons, using both traditional and alternative data (including satellite and maritime) and the latest AI techniques. This enables purchasing managers and financial officers to control their price risk – resulting in larger margins and stronger financial control.

ChAI’s vision is to give SME manufacturers the same price risk mitigation tools that are currently available only to their larger peers, by applying state-of-the-art AI methods to a large number of alternative and established data sources in order to make price predictions across a full cross-section of commodities.

*Article courtesy of ChAI, first appeared here.
