A clear understanding of AI thought pathways
Rule Discovery based Inference

Featured Technologies

February 25, 2022

In recent years, AI-driven data utilization has attracted a great deal of attention, but in many cases the grounds for the conclusions an AI reaches remain unclear. It is said that for AI to penetrate further into society, interpretability must be guaranteed. In February 2022, NEC announced a new technology to address this issue: ‘Rule Discovery based Inference.’ We asked a researcher about the value of this technology and how it works.

Profile

Data Science Research Laboratories
Principal Researcher
Yuzuru Okajima

Towards AI that can be used with understanding and confidence

― What kind of technology is ‘Rule Discovery based Inference’?

It’s an AI technology that presents the rationale for a prediction in the form of easy-to-understand rules. Recent AI can produce accurate predictions and values through very complex calculations, but we humans cannot understand the rationale behind those predictions. This is what we call the ‘black box AI’ problem.
In contrast, ‘Rule Discovery based Inference’ combines high accuracy with interpretability, and can show us, in the form of easy-to-understand rules, ‘what happens when which factors are in what state.’ The correctness of each rule can be verified by actually tracing past examples.
Let me give you a concrete illustration. Consider the case, in the manufacturing industry, of trying to derive from past data on room temperature, voltage, and so on a link to machine failure. In the ‘weighting’ models currently in wide circulation as ‘explainable AI,’ the AI analyzes the data and produces a computation model along the lines of ‘probability of failure = 2.0 x room temperature + 0.2 x voltage – 20.’ The constants ‘2.0’ and ‘0.2’ attached to room temperature and voltage indicate the importance (weight) of each factor, but because we humans cannot tell what units are in play or what the numbers mean, we cannot judge whether the model is appropriate.
In Rule Discovery based Inference, by contrast, we obtain a model in which the units and the meaning of the numbers are clear, such as ‘IF room temperature > 30.0℃ AND voltage > 200V THEN probability of failure = 80%.’ Seeing the rule, it is easy to interpret it as a piece of knowledge: if the voltage is raised above 200V while the room temperature exceeds 30℃, the machine is liable to fail. Moreover, if we track past data where room temperature exceeded 30℃ and voltage exceeded 200V, we can verify how many failures actually occurred. I hope that by creating AI that provides clear insight into the rationale for its predictions, there will be more opportunities for AI to actually be used.
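To make the contrast concrete, here is a minimal sketch in Python of the two model forms from the example above. The thresholds and the 80% figure come from the example itself; the baseline probability in the rule model is an assumed value for illustration, not output from the actual technology.

```python
# Illustrative sketch only: the coefficients and thresholds below are the
# hypothetical values from the article's example, not output from NEC's tool.

def weighted_model(room_temp_c: float, voltage_v: float) -> float:
    """'Black box'-style weighted model: what 2.0 and 0.2 mean is opaque."""
    return 2.0 * room_temp_c + 0.2 * voltage_v - 20.0

def rule_based_model(room_temp_c: float, voltage_v: float) -> float:
    """Rule form: units and thresholds are explicit, so humans can verify it."""
    if room_temp_c > 30.0 and voltage_v > 200.0:
        return 0.80  # probability of failure under this condition
    return 0.05      # baseline probability (assumed here for illustration)

print(weighted_model(31.0, 210.0))    # a number whose units are unclear
print(rule_based_model(31.0, 210.0))  # 0.8, checkable against past data
```

The rule's condition can be checked directly against historical records, which is exactly the verification step described above.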

Narrowing a huge number of randomly generated rules down to only the optimal ones

― What is the mechanism behind this technology?

Among AI techniques that use rules, there is one that has long been known: the decision tree. A decision tree is a simple form of AI that produces a prediction just by answering a series of Yes/No questions. While this is easy for humans to understand, its drawback is low prediction accuracy. To resolve this, one widely used technique is the ‘random forest,’ which raises prediction accuracy by combining large numbers of randomly generated decision trees, but this involves a huge number of rules, making it harder for humans to understand.
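For a rough sense of scale, here is a minimal sketch, assuming scikit-learn, that enumerates every root-to-leaf path of a small random forest as a candidate rule. It only counts the rules a forest implicitly contains; it is not NEC's selection method itself.

```python
# Sketch: even a modest random forest decomposes into hundreds of
# root-to-leaf rules, far too many for a human to read through.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
forest = RandomForestClassifier(n_estimators=50, max_depth=3, random_state=0)
forest.fit(X, y)

def leaf_rules(tree):
    """Yield each root-to-leaf path as a list of (feature, op, threshold)."""
    t = tree.tree_
    def walk(node, conds):
        if t.children_left[node] == -1:  # leaf: the path so far is one rule
            yield conds
            return
        f, thr = t.feature[node], t.threshold[node]
        yield from walk(t.children_left[node],  conds + [(f, "<=", thr)])
        yield from walk(t.children_right[node], conds + [(f, ">",  thr)])
    yield from walk(0, [])

n_rules = sum(len(list(leaf_rules(est))) for est in forest.estimators_)
print(f"{n_rules} candidate rules in a 50-tree forest")
```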
Therefore, we took the approach of creating a list of rules called a ‘decision list,’ by narrowing the rules contained in the random forest’s decision trees down to only the best-suited ones. Because a random forest randomly generates many similar decision trees, it in fact contains a mixture of rules that are almost identical alongside many that are invalid. From this huge mixed bag of rules, we screen for rules with a high accuracy rate and look for the combinations of rules worth retaining. Naively, we would have to consider combinations on the order of tens of thousands times tens of thousands times tens of thousands, and so on; the kind of process that causes a combinatorial explosion. Instead, we converted this into a tractable problem: by training on correctly labeled data we derive the validity (weight) of each rule, and by applying the parallel computing technology used in deep learning we designed the method to sort out a necessary and sufficient combination of rules. Decisions are then made on the basis of the resulting ‘decision list,’ that is, a list of rules narrowed down to a realistic number.
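The details of NEC's optimization are not reproduced here, so the following is only a simplified stand-in for the idea of learning a validity weight per rule from labeled data: each candidate rule becomes a binary ‘does it fire?’ feature, and an L1-penalized logistic regression drives the weights of unnecessary rules to zero. The data, thresholds, and regularization strength are all assumptions made for illustration.

```python
# Simplified stand-in for rule selection: learn a weight per candidate rule
# from labeled data and keep only rules with nonzero weight. NEC's published
# method uses its own optimization and GPU parallelism; this is illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.uniform([20, 180], [40, 220], size=(1000, 2))  # room temp, voltage
y = ((X[:, 0] > 30) & (X[:, 1] > 200)).astype(int)     # assumed true rule

# Candidate rules, e.g. harvested from a forest (here: threshold pairs).
candidates = [(t, v)
              for t in rng.uniform(20, 40, 40)
              for v in rng.uniform(180, 220, 40)]
# Binary feature matrix: one column per rule, 1 where the rule fires.
R = np.column_stack([(X[:, 0] > t) & (X[:, 1] > v) for t, v in candidates])

# L1 penalty pushes redundant or invalid rules' weights to exactly zero.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(R, y)

kept = [c for c, w in zip(candidates, clf.coef_[0]) if abs(w) > 1e-6]
print(f"kept {len(kept)} of {len(candidates)} candidate rules")
for t, v in kept[:5]:
    print(f"IF temp > {t:.1f} AND voltage > {v:.1f} THEN high failure risk")
```

With strong regularization, only a handful of the overlapping candidates survive, mirroring the narrowing-down to a necessary and sufficient set described above.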
As a result, it became possible to achieve a high level of accuracy with far fewer rules than conventional decision-tree methods. In experiments using open data, we have confirmed that our technology can achieve, with fewer than twenty rules, a level of accuracy that conventional methods only reached with close to fifty rules. Achieving the same accuracy with fewer rules means we have created a method that is easy for humans to interpret.

Identifying in advance factors leading to defective products, contributing to the development of regular customers

― In what kind of situation is it expected to be utilized?

First of all, we are considering its use in the manufacturing industry, the example I gave at the beginning. By having the AI learn large amounts of data on things like the component composition of raw materials or the settings of processing equipment, this technology can analyze the conditions under which defective products occur, and it can present the rationale in a form we can understand. That should further enhance quality control. In fact, verification tests carried out with the cooperation of customers in the manufacturing industry produced results that were favorably evaluated.
Technical verification of applying the technology to marketing is also under way. In verification tests carried out in collaboration with customers in the retail business, the technology is being used to visualize the routes by which new customers become regular customers. By inputting a customer’s purchasing history, it can analyze, in rule form, the conditions under which the customer spends more or visits the store more frequently, and those rules make the path to becoming a regular customer visible. Our clients have commented, ‘Some of the rules were quite unexpected; it was very helpful.’ One merit of this technology is that it can yield fresh knowledge, because it derives data-driven rules from outside the scope of what we humans unconsciously regard as common sense.
The AI we are aiming to create is AI that works together with humans. Black box AI, which pursues only high predictive accuracy, is useful in areas that can be left to AI and fully automated. However, there are limits to what can be fully automated; in many cases, the predictions of an AI need to be understood by a human and reflected in human actions or policies. That is why we developed AI capable of outputting rules that we humans can understand. Beyond that, the ideal we are now considering is a future in which interaction arises between humans and AI. If the AI can respond to what a human is thinking and make decisions together with that human, understanding and collaboration between humans and AI should advance further. We would like to continue this research in pursuit of that kind of AI.