The ML Toolkit for Designers — Part 2

Introduction

In this second article, we will look at how services can be improved with certain machine learning algorithms. The article covers two major concepts: the importance of an open discourse when working with machine learning results, and the current trend of explainable AI. Both ideas will eventually shape not only the future of artificial intelligence but also the future of design.

What is explainable AI (XAI)?

As artificial intelligence becomes widely deployed in consumer products, users are increasingly worried about the inner workings of these highly intricate algorithms. An often-used term to describe their opacity is the “black box” metaphor.

The models used in artificial intelligence are becoming ever more complex. While many of them still use simple linear functions to predict an outcome, faster GPUs and more data have propelled the rise of artificial neural networks. Simply put, neural networks stack functions in multiple layers, an architecture loosely inspired by how the human brain is built. Deep neural networks — neural networks with many layers — can, when provided with enough data, learn and identify solutions on their own with little to no human intervention.
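As a rough, illustrative sketch (not part of the original case study), the "functions stacked in layers" idea can be written in a few lines of Python. The weights here are random and the numbers meaningless; the point is only to show that each layer is a function applied to the output of the previous one:

```python
import numpy as np

def layer(x, weights, bias):
    # One "layer": a linear function followed by a simple non-linearity (ReLU).
    return np.maximum(0, x @ weights + bias)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))          # a single input with 4 features

# Three layers stacked on top of each other: the output of one layer
# becomes the input of the next. Deep networks simply stack many more.
h1 = layer(x, rng.normal(size=(4, 8)), np.zeros(8))
h2 = layer(h1, rng.normal(size=(8, 8)), np.zeros(8))
out = h2 @ rng.normal(size=(8, 1))   # final linear output

print(out)
```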

The complexity of these models and the huge number of operations required to train a deep neural network make them difficult to interpret. Calling them “black boxes” might be an overstatement; nevertheless, as more “layers” are added to a model, it naturally becomes harder to fully understand the “reasoning” behind its results. Hence the recent surge in popularity of explainable AI.

Explainable AI is a branch of artificial intelligence that tries to develop models that not only perform well but can also be interpreted by humans. XAI is a very reasonable goal worth pursuing; indeed, governmental organizations such as DARPA are heavily involved in financing research in this field.

XAI still has a long way to go. Many of the deep neural networks deployed in the platforms we use daily remain largely opaque, but the field is moving in the right direction. We can be optimistic that within a few years we will have more transparent tools that allow us to use machine learning more responsibly.

The Tools

As a very simple example of explainable AI, I will introduce decision trees. This class of machine learning models classifies observations by using tree-like graphs; the idea is to infer conclusions from a series of if-then-else rules. It might sound abstract, but the core intuition is very similar to how humans reason.
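To make the if-then-else intuition concrete, a learned tree can be read as nested rules, roughly like the sketch below. The feature names, thresholds, and outcomes here are invented purely for illustration, not taken from the case study:

```python
def will_churn(customer):
    # A decision tree is essentially a set of nested if-then-else rules
    # learned from data. The rules below are hypothetical examples.
    if customer["online_security"] == "No":
        if customer["senior_citizen"] == 0:
            return True    # this "leaf" groups customers likely to churn
        return False
    return False

print(will_churn({"online_security": "No", "senior_citizen": 0}))
```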

The neat aspect of this type of algorithm is that it can be visualized quite easily. As the case study below shows, even without being experts in artificial intelligence we can deduce how the algorithm works and how it infers conclusions. Decision trees can therefore be seen as a very simple type of explainable AI model.

Case Study

To better understand how decision trees, and other types of XAI models, can be applied in the real world, we will (once again) consider a fictional consulting project. Imagine you have to advise a telecommunications company that recently experienced high churn — customers leaving its services. Executives at the company are clueless about the cause. You, as a service designer, have to detect the issue and suggest improvements to the service to avoid churn in the future. The task is hard, and it is often difficult to get started — decision trees can help.

In most cases, the client will have a substantial amount of data about each customer. To simulate this, we will use a popular dataset from Kaggle called “Telco Customer Churn”. As the name suggests, the dataset describes customers of a telecom company: it contains 20 variables plus the target variable, which tells us whether the customer churned or not. To simplify the example, I only included a subset of the variables in the analysis (Exhibit 2.1.1).
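A minimal sketch of how such a subset could be prepared with pandas. The article does not list the exact columns used in Exhibit 2.1.1, so the selection below is an assumption based on the variables discussed later, and the CSV file name reflects how the Kaggle dataset is commonly distributed (it may differ on your machine):

```python
import pandas as pd

# Kaggle's "Telco Customer Churn" dataset (file name may differ).
df = pd.read_csv("WA_Fn-UseC_-Telco-Customer-Churn.csv")

# Assumed subset of features; the article only says a few variables were kept.
features = ["SeniorCitizen", "OnlineSecurity", "InternetService", "Contract", "tenure"]
target = "Churn"

# Decision trees in scikit-learn need numeric inputs, so the categorical
# columns are one-hot encoded.
X = pd.get_dummies(df[features])
y = (df[target] == "Yes").astype(int)

print(X.shape, y.mean())  # number of rows/columns and the overall churn rate
```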

Creating a decision tree with scikit-learn — an open-source machine learning library — is quite easy. We just have to decide which impurity function to use (in this case Gini) and the maximum depth of the model — essentially the complexity of our decision tree. After setting these hyperparameters and fitting the model to our data, we can check its accuracy. In our example, the model, due to the reduced number of features, has an accuracy of 70%, which is definitely not optimal but still substantially higher than the baseline. Apart from checking how accurate the model is, we can plot the tree and get visual feedback on how it works (Exhibit 2.2.1).
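A minimal sketch of these steps, assuming the X and y prepared in the previous snippet. The article does not state the exact depth used, so max_depth=3 is an assumption, and the resulting accuracy will vary accordingly:

```python
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, plot_tree
import matplotlib.pyplot as plt

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# The two hyperparameters discussed in the text: the impurity measure (Gini)
# and the maximum depth of the tree (its complexity).
tree = DecisionTreeClassifier(criterion="gini", max_depth=3, random_state=42)
tree.fit(X_train, y_train)

print("accuracy:", tree.score(X_test, y_test))

# Visual feedback on how the model works (in the spirit of Exhibit 2.2.1).
plt.figure(figsize=(16, 8))
plot_tree(tree, feature_names=list(X.columns), class_names=["No churn", "Churn"], filled=True)
plt.show()
```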

You will notice that the output is quite self-explanatory. Nevertheless, I also provided a simplified, more comprehensible version (Exhibit 2.2.2). Two things that pop out right away are the “root” — the first node or “box” at the top of the tree — and the leaf with the highest churn at the bottom. We can see that, on average, customers who pay for “online security” do not tend to churn easily; by contrast, non-senior customers who don’t pay for “online security” and “internet services” tend to churn — in fact, most of them will eventually churn. All of these insights are extremely valuable for our consulting project, but by themselves they don’t explain why the probability of churn changes at different stages or “leaves”. To understand that, we have to use our innate human intuition and put the probabilities in context.
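If a simplified, text-only reading of the same tree is useful (in the spirit of Exhibit 2.2.2), scikit-learn can also print the fitted tree as plain rules. This is a swapped-in convenience, not something the original analysis necessarily used:

```python
from sklearn.tree import export_text

# Prints the fitted tree as indented if/else rules; often easier to bring
# into a co-creation session than the graphical plot.
print(export_text(tree, feature_names=list(X.columns)))
```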

In service design, we use co-creation sessions to analyze research, develop new ideas, and validate prototypes. Bringing a diverse range of people together can unlock creativity; the hypothesis is that together we can come up with better solutions. As noted before, the first step of a creative process is often the hardest. Decision trees can alleviate this: they provide a basis for discussing ideas and solutions on how to prevent churn. Humans are very good at generalizing concepts and linking different intuitive phenomena together. In our example, we can imagine that by facilitating a co-creation session and discussing the decision tree collectively, we can easily come up with assumptions about why, for example, younger customers who don’t pay for “internet services” are likely to churn. Those assumptions could then crystallize into valuable solutions.

This seamless collaboration between the machine’s output (the decision tree) and human intuition can be labeled collective intelligence: a trendy new term that will most likely shape the way we work in the future.

The Framework

The framework for this exercise is quite straightforward. We start by performing a decision tree analysis on the quantitative data relevant to customer churn. We check the accuracy and print out a visual representation of the decision tree. After that, we organize a co-creation session and try to invite a wide range of participants: ideally executives, engineers, salespeople, and, if possible, customers.

Together we can use the decision tree as a foundation for brainstorming, form assumptions about why customers churn at different stages, and finally come up with potential solutions.

As simple as it sounds, this way of working can be extremely valuable for detecting issues in complex services and quickly improving the service proposition.

Conclusion

Decision trees are only one very simple example of explainable AI models. As described at the beginning of this article, the field of XAI is vibrant, and we can expect to see new models, techniques, and tools in the near future that will give us new opportunities to interact with algorithms. It could be argued that explainable AI is the most interesting field in artificial intelligence today for service design practitioners. XAI enables us to better understand how algorithms work and how they infer predictions. This new transparency into the inner workings of algorithms opens up possibilities for collective decision-making, facilitated by design thinking methodologies.

As a final note, I want to highlight that decision trees are not always the best solution; they tend to overfit quite frequently. Nevertheless, they can be used in simple scenarios where there aren’t too many features at once and the model doesn’t have to be extremely accurate. In this case, the decision tree is just a conversation starter rather than a model meant to be deployed in a customer-facing application — it’s an analysis tool. Decision trees therefore fit the purposes of the exercise outlined in this article quite well.


Samuel Rueesch

London-based service/business designer interested in applying design-thinking to unconventional industries. https://www.samuelrueesch.com/