STTT

International Journal on Software Tools for Technology Transfer


Explanation Paradigms Leveraging Algebraic Intuition (ExPLAIn)

Today’s decision making is increasingly aided by computers, be it in medicine, in the stock market, when playing Go or Chess, or when we follow recommendations provided by various platforms. Few people question this often machine learning-based automated support or ask for explanations. In the majority of cases this may be harmless, in particular when the damage caused by wrong results remains moderate. The situation changes, however, when critical decisions with a potential for disaster are delegated to machine learning-based systems. State-of-the-art approaches for extracting explanations are far from reassuring: rather than guaranteeing a certain result, such explanations can be seen as mere indications that nothing has gone radically wrong (e.g., that a trained classifier uses relevant parts of a picture rather than incidental features of the training set), a weakness that Neural Network (NN)-based systems in particular are known for.

Formal Methods-based automated approaches to software/system verification aim at provable correctness, which is particularly important in the context of industrial critical systems. Classical approaches support explanation in the case of detected errors, through error traces or game-based strategies that steer the system into an unexpected situation. Such explanations can readily be validated and provide a good basis for a corresponding correction. Approaches that provide explanations in the case of positive verification results are less common: there are proof checkers, but these are typically tailored to interactive verification methodologies. Another approach is visual verification via dedicated abstract views, which can be quite reassuring but typically does not scale.

In both scenarios (Machine Learning and Formal Methods), system understanding aims at explaining properties of an opaque (black-box) system in a comprehensible (white-box) form: highlighting the pixels that were important for an NN-based classification transforms an incomprehensible computation into a simple picture, error traces are easy to follow even in highly complex distributed systems, and abstract views, e.g., in the form of automata, are conceptually simple, even though they typically suffer from size explosion.

Despite the similar intent, namely increasing confidence in automatically deduced results, there is a clear conceptual difference: NN-based research is characterized by experimental approaches and performance heuristics that frequently show amazing results (and failures), whereas the conceptual approaches established by the formal methods community are based on sound theories of (de)composition and abstraction but tend to encounter scalability problems.

The STTT theme area “Explanation Paradigms Leveraging Algebraic Intuition (ExPLAIn)” provides a forum for research that aims at (1) opening up the black-box character of today’s systems by leveraging the wealth of formal methods developed over recent decades by the formal methods community, (2) dually, exploiting experimental approaches such as those developed in the machine learning community to validate the practical impact of the conceptual approaches, and (3) revealing and characterizing conceptual hurdles and limitations, pinpointing special challenges, and proposing alternative ways of circumventing them. We envisage that this kind of explanation-oriented research will not only support the development of correct systems but may also lay a conceptual basis for their certification.

Envisioned articles are tool-related and leverage the synergy between algebraic and experimental approaches with the aim of increasing the understanding of systems at any stage of the production lifecycle and for all involved stakeholders. In particular, we aim at articles that provide new insights concerning:

Changing Paradigm for Conceptual Simplification
An example here is the transformation of Random Forests into Algebraic Decision Diagrams, which can themselves be regarded as model explanations; the idea is sketched below.
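
To convey the flavor of such a transformation, here is a minimal Python sketch. It is purely illustrative: the toy forest, the predicate order, and the brute-force construction are our own simplifications, not the actual transformation technique. It compiles the majority vote of a small ensemble of decision trees over shared Boolean predicates into a single reduced, ordered decision diagram with shared subgraphs.

# Illustrative only: a toy 'forest' of three decision trees over shared
# Boolean predicates, compiled into one reduced, ordered decision diagram.
PREDICATES = ["x > 2", "y > 5", "z > 1"]

# A tree is either ("leaf", label) or (predicate_index, low, high).
FOREST = [
    (0, ("leaf", "A"), (1, ("leaf", "A"), ("leaf", "B"))),
    (1, ("leaf", "A"), (2, ("leaf", "B"), ("leaf", "B"))),
    (0, ("leaf", "A"), ("leaf", "B")),
]

def evaluate(tree, assignment):
    # Follow one tree under a full truth assignment over PREDICATES.
    while tree[0] != "leaf":
        index, low, high = tree
        tree = high if assignment[index] else low
    return tree[1]

def majority(assignment):
    # The forest's prediction: majority vote of its trees.
    votes = [evaluate(tree, assignment) for tree in FOREST]
    return max(set(votes), key=votes.count)

unique = {}  # table for sharing structurally identical inner nodes

def node(level, assignment):
    # Build the diagram over the fixed predicate order, level by level.
    if level == len(PREDICATES):
        return ("leaf", majority(assignment))
    low = node(level + 1, assignment + (False,))
    high = node(level + 1, assignment + (True,))
    if low == high:              # redundant test: drop this predicate
        return low
    key = (level, low, high)
    return unique.setdefault(key, key)

diagram = node(0, ())
print(diagram)  # one explainable decision structure equivalent to the ensemble

The brute-force enumeration of all predicate assignments only works at toy scale; the point of the envisioned transformation is precisely to obtain such a single, shared decision structure without this blow-up.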

Modularization and Abstraction
Convolutional layers are a prominent example of ‘syntax-oriented’ heuristics to control the NN size for image recognition. Learning-based extraction of ‘semantic’ features could help to decompose the recognition process into meaningful parts, e.g., by using techniques like predicate abstraction or other forms of abstract interpretation.

Scalability through Decomposition
Random Forests use decomposition to reduce variance. Semantics-based decomposition of decision structures, e.g., into so-called Class Characterization Models, may not only be helpful for a better understanding of the classification problem, but also for improving scalability.

Predicate Reduction
A limiting factor of predicate-based decision structures is the often dramatic growth of the set of predicates. Partitioning-based techniques may help to control this growth without impairing precision.

Case Studies, Experimentation, and Grand Challenges
Success stories and other forms of ‘enlightening’ observations are envisioned to drive new research directions or even to define Grand Challenges.

Challenges and Limitations
There are seemingly simple problems that conceptually escape NN approaches: learning the modulo value of integers is a good example of a conceptually simple task that eludes most standard neural network architectures. Understanding such limitations is a major driver towards a better understanding of the field, a precondition for a systematic approach to explainability, and it also helps to avoid unjustified enthusiasm.
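
To make this concrete, the following small probe (a sketch under our own assumptions: scikit-learn’s MLPClassifier as a stand-in for a ‘standard architecture’, an arbitrarily chosen modulus of 7, and arbitrary hyperparameters) trains a feed-forward network on residues modulo 7 and evaluates it both on integers from the training range and on integers beyond it. It illustrates the kind of experiment that can expose extrapolation failures; no results are claimed here.

# Illustrative probe, not a reported result: can a standard feed-forward
# network learn n mod 7 from the integer n and extrapolate beyond the
# range it was trained on? (Assumes numpy and scikit-learn are installed.)
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def sample(lo, hi, size, k=7):
    n = rng.integers(lo, hi, size=size)
    return n.reshape(-1, 1).astype(float), n % k

X_train, y_train = sample(0, 1_000, 5_000)      # training range
X_in, y_in = sample(0, 1_000, 1_000)            # same range, fresh draw
X_out, y_out = sample(1_000, 2_000, 1_000)      # unseen range

net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1_000, random_state=0)
net.fit(X_train, y_train)

print("accuracy inside the training range: ", net.score(X_in, y_in))
print("accuracy outside the training range:", net.score(X_out, y_out))

Variations of the input encoding (e.g., binary digits instead of a single scalar) and of the architecture are natural follow-ups for such a probe.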

Of course, the technology transfer is not meant to be unidirectional: ExPLAIn also aims at promoting success stories of experimental research within the formal methods community, such as statistical model checking or Active Automata Learning, and we hope that many more will follow.

New insights without an explicit technology-transfer character are also welcome as long as they explicitly target explainability. This may, in particular, concern the development of composite explanation structures for modular explanation frameworks, or ‘on-the-fly’ techniques that seamlessly provide and improve explanations at runtime.

Since ExPLAIn is a theme area of STTT, tool-based automation should always be the ultimate goal of the presented approaches. However, this does not exclude foundational contributions that may still require considerable elaboration before they become operational.

Bernhard Steffen (TU Dortmund, Germany)
N.N.
Theme Editors-in-Chief of ExPLAIn
