STTT
International Journal on Software Tools for Technology Transfer
Explanation Paradigms Leveraging Analytic Intuition (ExPLAIn)
Today’s decision-making is increasingly aided by computers, be it in medicine, in the stock market, when playing Go or Chess, or when we follow recommendations provided by various platforms. Few people question the often machine learning-based automated support and ask for explanations. In the majority of cases this may be harmless, in particular when the damage caused by wrong results remains moderate. The situation changes, however, when critical decisions with a potential for disaster are delegated to machine learning-based systems. State-of-the-art approaches for extracting explanations are far from reassuring: rather than guaranteeing a certain result, such explanations can be seen as indications that nothing has gone radically wrong (e.g., that a trained classifier uses relevant parts of a picture rather than incidental features of the training set), a problem that Deep Neural Network (DNN)-based systems in particular are known for.
Formal Methods-based automated approaches to software/system verification aim at provable correctness, which is particularly important in the context of industrial critical systems. Classical approaches support explanation in the case of found errors, through error traces or game-based strategies that steer a system into an unexpected situation. These types of explanations can readily be validated and provide a good basis for a corresponding correction. Approaches that provide explanations in the case of positive verification results are less common: there are proof checkers, but these are typically tailored to interactive verification methodologies. Another approach is visual verification via dedicated abstract views, which can be quite reassuring but typically does not scale.
In both scenarios (Machine Learning and Formal Methods), system understanding aims at explaining properties of an unfeasible (black box) system in a feasible (white box) form: highlighting pixels that were important for a DNN-based classification transforms an incomprehensible computation into a simple picture, error traces are easy to follow even in highly complex distributed systems, and abstract views, e.g., in the form of automata, are conceptually simple, even though they typically suffer from size explosion.
Despite the similar intent, namely increasing the confidence of automatically deduced results, there is a clear conceptual difference: DNN-based research is characterized by experimental approaches and performance heuristics that frequently show amazing results (and failures), whereas the conceptual approaches established by the formal methods community are based on sound theories of (de)composition and abstraction but have a tendency to encounter scalability problems.
The STTT theme “Explanation Paradigms Leveraging Analytic Intuition (ExPLAIn)” provides a forum for research that aims at
- opening up the black-box character of today’s systems by leveraging the wealth of methods developed over recent decades by the formal methods community,
- embedding experimental approaches, such as those developed in the machine learning community, in scenarios that guarantee their responsible use, and
- revealing and characterizing conceptual hurdles and limitations, pinpointing special challenges, and proposing alternative ways of circumventing them.
The ultimate goal of this kind of explanation-oriented research is to improve the understanding of, in particular, heterogeneous systems that comprise both conceptual and experimental components, in order to decide where and under which circumstances they can safely be used. Ideally, this will also lead to improved certification policies that can adequately deal with this new kind of system.
Envisioned articles are tool-related and leverage the synergy between analytic and experimental approaches with the aim of increasing the understanding of systems at any level of the production lifecycle and for all involved stakeholders. In particular, we aim at articles that provide new insights concerning:
Changing Paradigm for Conceptual Simplification
An example here is the transformation of Random Forests into Algebraic Decision Diagrams, which can be regarded as model explanations.
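A minimal sketch of this idea, in Python (illustrative only, not the tooling behind such results; the toy forest, predicate names, and helper functions are all hypothetical): three predicate-based trees are aggregated into one reduced decision diagram by enumerating predicate valuations and sharing equal substructures. Real approaches compose the trees symbolically rather than by brute-force enumeration.

```python
# Hypothetical, minimal sketch: aggregate a tiny predicate-based "forest"
# into one reduced decision diagram that explains the whole model at once.

PREDICATES = ["x>0", "y>0", "z>0"]            # fixed global predicate order

# Each "tree" maps a complete predicate valuation to a class label.
def tree1(v): return "A" if v["x>0"] else "B"
def tree2(v): return "A" if v["x>0"] and v["y>0"] else "B"
def tree3(v): return "A" if v["z>0"] else "B"
FOREST = [tree1, tree2, tree3]

def forest_vote(v):
    """Majority vote of the forest for a complete predicate valuation."""
    labels = [t(v) for t in FOREST]
    return max(set(labels), key=labels.count)

_unique = {}                                   # hash-consing table for sharing

def node(pred, lo, hi):
    """Create (or reuse) a diagram node; drop tests whose branches agree."""
    if lo == hi:
        return lo
    return _unique.setdefault((pred, lo, hi), (pred, lo, hi))

def build(i, assigned):
    """Build the diagram over predicates i..end; `assigned` fixes 0..i-1."""
    if i == len(PREDICATES):
        return forest_vote(dict(zip(PREDICATES, assigned)))   # leaf label
    lo = build(i + 1, assigned + (False,))
    hi = build(i + 1, assigned + (True,))
    return node(PREDICATES[i], lo, hi)

diagram = build(0, ())
print(diagram)   # one small shared structure equivalent to the whole forest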
Modularization and Abstraction
Convolutional layers are a prominent example of ‘syntax-oriented’ heuristics to control the NN size for image recognition. Learning-based extraction of ‘semantic’ features could help to decompose the recognition process into meaningful parts, e.g., by using techniques like predicate abstraction or other forms of abstract interpretation.
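As a hedged illustration of what predicate abstraction over learned features could look like (the feature scores, predicate names, and thresholds below are invented for the example and stand in for whatever an upstream feature extractor provides):

```python
# Minimal, hypothetical sketch of predicate abstraction over learned features.

from collections import defaultdict

# Stand-in feature vectors, e.g. as produced by some upstream network layer.
samples = {
    "img_1": {"eye_score": 0.9,  "whisker_score": 0.8, "wheel_score": 0.10},
    "img_2": {"eye_score": 0.2,  "whisker_score": 0.1, "wheel_score": 0.95},
    "img_3": {"eye_score": 0.85, "whisker_score": 0.7, "wheel_score": 0.05},
}

# 'Semantic' predicates over the feature space (thresholds are assumptions).
predicates = {
    "has_eyes":     lambda f: f["eye_score"] > 0.5,
    "has_whiskers": lambda f: f["whisker_score"] > 0.5,
    "has_wheels":   lambda f: f["wheel_score"] > 0.5,
}

def abstract(features):
    """Map a concrete feature vector to its abstract state: the tuple of
    predicate truth values."""
    return tuple(sorted((name, p(features)) for name, p in predicates.items()))

# Group concrete samples by abstract state; each group is one 'meaningful part'
# of the recognition process that can be inspected separately.
groups = defaultdict(list)
for name, feats in samples.items():
    groups[abstract(feats)].append(name)

for state, members in groups.items():
    print(state, "->", members)
```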
Scalability through Decomposition
Random Forests use decomposition to reduce variance. Semantics-based decomposition of decision structures, e.g., into so-called Class Characterization Models, may not only be helpful for a better understanding of the classification problem, but also for improving scalability.
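The following sketch only hints at the flavor of per-class decomposition via a simple one-vs-rest split of a toy decision table; actual Class Characterization Models are more involved, and all names and data here are hypothetical.

```python
# Illustrative sketch only: decompose a multi-class decision structure into
# one binary characterization per class (one-vs-rest).

# A toy multi-class classifier given as a lookup over predicate valuations.
decision_table = {
    (True,  True):  "cat",
    (True,  False): "dog",
    (False, True):  "car",
    (False, False): "car",
}

def characterize(cls):
    """Return the predicate valuations for which the classifier predicts
    `cls` -- a binary 'is it this class?' model."""
    return {v for v, label in decision_table.items() if label == cls}

per_class = {cls: characterize(cls) for cls in set(decision_table.values())}

for cls, region in sorted(per_class.items()):
    print(cls, "is characterized by", sorted(region))

# Each per-class region is smaller and simpler than the full decision table,
# which is where gains in understanding and scalability are hoped for.
```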
Predicate and Dimensionality Reduction
A limiting factor of predicate-based decision structures is the often dramatic growth of the set of predicates. Partitioning-based techniques that aim at smoothing to reduce noise may help to control this growth without impairing precision. There are also various techniques for reducing dimensionality, such as PCA and autoencoders.
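For instance, a minimal PCA-based reduction with scikit-learn looks as follows; the data is random and purely for illustration.

```python
# Minimal illustration of dimensionality reduction with PCA (scikit-learn).

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))     # 200 samples, 50 raw features/predicates

pca = PCA(n_components=5)          # keep only 5 principal directions
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                          # (200, 5)
print(pca.explained_variance_ratio_.round(3))   # variance kept per component
```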
Safe Scenarios
Using DNNs simply as heuristics is harmless, and there are other scenarios where DNNs are embedded even into safety-critical solutions without impairing their reliability or precision. Designing such solutions is of high practical impact, as it increases the range where DNNs can be responsibly applied.
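One such embedding is the propose-and-check pattern sketched below; the function names are placeholders for whatever a concrete application provides, and the ‘DNN’ is replaced by a trivial stand-in.

```python
# Hypothetical sketch of a 'safe scenario': the DNN only proposes, a
# conventional, verifiable checker decides.

def dnn_propose(problem):
    """Stand-in for a learned heuristic producing a candidate solution."""
    return problem[::-1]            # deliberately naive placeholder

def is_valid(problem, candidate):
    """Deterministic acceptance check (the safety anchor of the design)."""
    return sorted(candidate) == sorted(problem)

def safe_fallback(problem):
    """Trusted but possibly slower or weaker baseline."""
    return sorted(problem)

def solve(problem):
    candidate = dnn_propose(problem)
    # Every accepted candidate passes the explicit check; otherwise we fall
    # back to the trusted baseline, so reliability does not depend on the DNN.
    return candidate if is_valid(problem, candidate) else safe_fallback(problem)

print(solve([3, 1, 2]))
```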
PAC and other Statistical Guarantees
In many application areas, statistical guarantees are the best one can expect, and it has to be investigated under which circumstances this is sufficient. All this must happen with the understanding that even provably correct systems may fail for various reasons that were not subject to verification.
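As a concrete, if simplistic, instance of such a guarantee, the Hoeffding bound yields the number of i.i.d. test samples needed to estimate an error rate to a given accuracy and confidence; real PAC analyses depend on the learning setting.

```python
# Worked example of a simple statistical guarantee (Hoeffding bound).

import math

def hoeffding_sample_size(epsilon, delta):
    """Number of i.i.d. test samples so that the observed error rate deviates
    from the true error rate by more than `epsilon` with probability at most
    `delta`: n >= ln(2/delta) / (2 * epsilon**2)."""
    return math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))

# E.g., to estimate the error within +/- 1% with 99% confidence:
print(hoeffding_sample_size(epsilon=0.01, delta=0.01))   # 26492
```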
Case Studies, Benchmarks, Experimentation, and Grand Challenges
Success stories and other forms of ‘enlightening’ observations are envisioned to drive new research directions or even to define Grand Challenges.
Challenges and Limitations
There are seemingly simple problems that conceptually escape NN approaches. Learning the modulo value of integers is a good example of a conceptually simple problem that is beyond the reach of most standard neural network architectures. Understanding such limitations is a major driver towards a better understanding of the field, and a precondition for a systematic approach to explainability. It is also important to avoid unjustified enthusiasm.
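A minimal probe of the modulo example could look as follows; this is an experiment setup rather than evidence, the architecture and ranges are arbitrary choices, and the outcome depends on training.

```python
# Hypothetical probe: fit x mod K on one range, test on an unseen range.

import numpy as np
from sklearn.neural_network import MLPRegressor

K = 7
train_x = np.arange(0, 1000).reshape(-1, 1)
test_x  = np.arange(1000, 1100).reshape(-1, 1)    # extrapolation region
train_y = (train_x.ravel() % K).astype(float)
test_y  = (test_x.ravel() % K).astype(float)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(train_x, train_y)

print("train MAE:        ", np.abs(model.predict(train_x) - train_y).mean())
print("extrapolation MAE:", np.abs(model.predict(test_x) - test_y).mean())
# The gap between the two errors is what the challenges discussion is about.
```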
New insights without an explicit technology-transfer character are also welcome, as long as they explicitly target explainability. This may, in particular, concern the development of composite explanation structures for the modular construction of explanation frameworks, or ‘on-the-fly’ techniques that seamlessly provide and improve explanations at runtime.
As a theme of STTT, ExPLAIn regards tool-based automation as the ultimate goal of the presented approaches. However, this does not exclude foundational contributions that may still require considerable elaboration before they can become operational.
Bernhard Steffen (TU Dortmund, Germany)
Nils Jansen (Radboud University Nijmegen, NL)
Theme Editors-in-Chief of ExPLAIn