Sophia Zitman, 9 December 2021

Explainable AI (XAI) in real life

If you do anything AI-related, you’ve most likely heard about it: Explainable AI (XAI). It’s often paired with terms like ‘black box’, ‘transparency’, ‘fairness’, and ‘very important’. Google searches on XAI lead to methods like SHAP and LIME, and again: posts on its importance. What I find most interesting about the search results is the lack of hits on XAI in practice. Given the hype around XAI and its huge potential in the real world, it feels paradoxical that it is so rarely put to use. With this blog I want to start filling that gap on XAI IRL by sharing a project I have done: developing an explainer for a client (yes, it is live! yes, it is being used!). If you want to know how that happened, keep on reading!

A little context: the project

Before we take a deep dive into the development of the explainer, I want to introduce the client and the project so far. Knowing the context will make the next sections easier to understand. For privacy purposes the names of the company and employees are fictional. However, the description of the use case and our way of working are real.

Our client is Magazine Solutions, a company that buys all sorts of magazines from publishers and sells them to stores in The Netherlands. As a middleman, they provide all sorts of services: from adding wrappers to advising stores on which magazines to sell. In the purchasing department, a team of 3 employees goes through 300–400 magazines each week and estimates how many copies of each will be sold to stores. This is not a sustainable way of working: it is very labor-intensive, and it is hard to find people willing to take over once the current employees retire.

We built a model that predicts the sales of each magazine for the purchasing department. It is a classical ML model that uses all sorts of metadata: from the topics within an issue to the publisher’s past sales. It runs every night and makes fresh predictions as new data comes in. This model will not replace the 3 employees overnight. Instead, the predictions will support them: they will see each prediction and can overrule it if they disagree. This is where the explainer comes in! Understanding the model is very important when you interact with its output this frequently. So alongside each prediction, the employees also see a local explanation. These explanations improve the employees’ understanding of the model, and therefore their trust in it, and enable a useful feedback loop.
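To make “a prediction plus a local explanation” concrete, here is a minimal, invented sketch. The feature names, numbers, and linear model are stand-ins; the client’s actual model and explainer are different. For a linear model, the contribution of a feature to one prediction is simply its weight times the feature’s deviation from the average magazine, which is also what SHAP computes in that case (assuming independent features):

```python
import numpy as np

# Toy stand-in for the sales model: a linear fit on invented metadata features.
# (The real model and features at Magazine Solutions differ; this only
# illustrates what a "local explanation" next to a prediction looks like.)
rng = np.random.default_rng(0)
feature_names = ["publisher_avg_sales", "n_pages", "has_wrapper"]
X = rng.normal(size=(200, 3))
y = 50 + 30 * X[:, 0] + 5 * X[:, 1] + 2 * X[:, 2] + rng.normal(scale=1.0, size=200)

# Fit a linear model with an intercept term via least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
intercept, weights = coef[0], coef[1:]

def local_explanation(x):
    """Per-feature contribution to one prediction, relative to the average
    magazine: w_i * (x_i - mean_i). For a linear model with independent
    features these are exactly the SHAP values."""
    contributions = weights * (x - X.mean(axis=0))
    baseline = intercept + weights @ X.mean(axis=0)
    return baseline, dict(zip(feature_names, contributions))

# Baseline plus the per-feature contributions adds up to the prediction,
# which is what makes this useful to show next to the model's output.
baseline, contribs = local_explanation(X[0])
prediction = baseline + sum(contribs.values())
```

In practice a library like SHAP handles non-linear models too, but the additive structure shown to the user (a baseline plus per-feature contributions summing to the prediction) stays the same.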

Making the explainer

Understanding the use case around the explainer is important. Just like an ML model, it serves a specific purpose and should be made to fit it. The explainer is the bridge between the technicalities of the model and the humans using it. Hence, you need to understand both worlds in order to make that bridge sturdy. In the next sections, I want to talk about how I tackled the human side and the technical side of this project.

Designing the explainer

Explanations are very personal (I highly recommend this YouTube series, where this is illustrated perfectly). Explaining the model’s predictions to a colleague who is not on the project is different from what I would say to a colleague who is. When I imagined explaining it to the employees of the purchasing department, I concluded that I knew too little about them to design an explanation that would be useful. I had never had to give local explanations to a trio of non-technical 50+-year-olds. As this was crucial information (giving a bad explanation can be worse than giving none at all), I planned a 1.5-hour meeting with them. The goal of this meeting was to design the appearance of the explanation together. Simply asking, “Hey, what would you like the explainer to look like?” was not going to work. Instead, I structured the meeting so that I could gather all the bits and pieces of information I wanted.

I started by explaining the purpose of the meeting and that it should be lighthearted, fun, and open: every thought and comment was welcome. To get out of the magazine-focused mindset, we started with a completely unrelated case. I gave one of the employees a sheet of paper with factors that explained the temperature in the room and asked her to explain to her colleagues, in only one sentence, why it was 20 °C. I could sense that this was a bit of a weird request for them, but once she started explaining and the others went along, the atmosphere became positive and fruitful for the next cases. All the cases that followed were interactive and put me in a position to observe. They looked more and more like our real case, and along the way I figured out what type of information feels natural to them and how it should be presented. Near the end of the workshop, I summarized my observations. They agreed with my findings, and together we sketched the explainer they wanted to see with the output.

Writing it in code

Coding the explainer was the most straightforward part of this project. Once you know exactly what it should look like, it is only a matter of picking the proper XAI technique and adapting its output to fit the design. Obviously, there were technicalities to think about. For example, the explainer treats all the dummified features individually, while that is not how they should be presented in the final output. And because the model was trained on a log scale, the raw output of the underlying explainer is a bit less intuitive to interpret. But those were all matters that could be resolved.
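As an illustration of those two technicalities (the feature names and numbers below are invented, and the real explainer’s output differs), dummy-column contributions can be summed back into one value per original categorical feature, and a log-scale contribution can be translated into a multiplicative effect on the predicted sales:

```python
import math

# Hypothetical per-feature contributions from an explainer (e.g. SHAP values).
# The model was trained on log(sales), and categorical features were dummified,
# so each dummy column gets its own contribution. Both details are smoothed
# over before anything is shown to the users.
raw_contributions = {
    "topic=sports": 0.12,
    "topic=fashion": -0.03,
    "topic=news": 0.01,
    "publisher_avg_sales": 0.40,
    "n_pages": -0.05,
}

def group_dummies(contribs):
    """Sum the contributions of dummy columns back into one value per
    original categorical feature ("topic=sports" -> "topic")."""
    grouped = {}
    for name, value in contribs.items():
        base = name.split("=")[0]
        grouped[base] = grouped.get(base, 0.0) + value
    return grouped

def to_sales_scale(log_contribution):
    """A contribution on the log scale is additive in log space, so it acts
    as a multiplicative factor on the predicted sales: exp(c)."""
    return math.exp(log_contribution)

grouped = group_dummies(raw_contributions)
# e.g. grouped["topic"] sums to 0.10, and exp(0.10) means roughly a
# 10.5% uplift on the predicted sales for this magazine's topic.
```

Summing dummy contributions is valid because the explanation is additive; the log-to-multiplicative translation is what lets users read a contribution as “this feature raises the expected sales by about X%”.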

Why this worked

Once the explainer was live, it functioned the way it should: it fulfilled the users’ need for understanding and served as a solid base for giving useful feedback on the predictions. I compared this project to others I had heard about and asked myself: “Why did this one work out, and others not so much?” I believe this project was a success because we took a holistic approach. The explainer was seen as much more than just a technical feature: we understood that it served a crucial role in the new way of working, and designed it accordingly. This was a human-centered project first, and an innovative AI project second.

Want to implement Explainable AI yourself?

Has this article sparked your interest, or even inspired you to implement XAI yourself? Great! But where do you go from here? One of Deeploy’s hands-on workshops might be the ideal introduction. Our workshops cover the techniques used for making explainers and give you hands-on experience with user-centered thinking for XAI. All of our workshops are based on actual XAI cases within specific fields of work, making them highly relevant and relatable.

Our next workshop will be on January 20th, 2022, and focuses on a real-life FinTech case. The event is free, and there are still some tickets available. Reserve your seat now!

Want to stay updated?

Please fill in your e-mail address and we will update you when we publish new content!