
Proceedings of the Fifth Italian Conference on Computational Linguistics CLiC-it 2018

Elena Cabrio, Alessandro Mazzei, Fabio Tamburini

Contributed Papers

On the Readability of Deep Learning Models: the role of Kernel-based Deep Architectures

Danilo Croce, Daniele Rossini and Roberto Basili

Abstract

Deep Neural Networks achieve state-of-the-art performance in several semantic NLP tasks but lack explanation capabilities, as the underlying acquired models offer limited interpretability. In other words, it is not possible to trace back the causal connections between the linguistic properties of an input instance and the produced classification. In this paper, we propose to apply Layerwise Relevance Propagation over linguistically motivated neural architectures, namely Kernel-based Deep Architectures (KDA), to guide argumentations and explanation inferences. In this way, decisions provided by a KDA can be linked to the semantics of input examples and used to linguistically motivate the network output.


1 Introduction

Deep Neural Networks are usually criticized because they are not epistemologically transparent devices, i.e. their models cannot be used to provide explanations of the resulting inferences. An example is neural question classification (QC) (Croce et al., 2017). In QC, the correct category of a question is detected to optimize the later stages of a question answering system (Li and Roth, 2006). An epistemologically transparent learning system should trace back the causal connections between the proposed question category and the linguistic properties of the input question. For example, the system could motivate the decision: "What is the capital of Zimbabwe?" refers to a Location, with a sentence such as: Since it is similar to "What is the capital of California?", which also refers to a Location. Unfortunately, neural models, such as Multilayer Perceptrons (MLP), Long Short-Term Memory networks (LSTM) (Hochreiter and Schmidhuber, 1997), or even Attention-based Networks (Larochelle and Hinton, 2010), correspond to parameters that have no clear conceptual counterpart: it is thus difficult to trace back the network components (e.g. neurons or layers in the resulting topology) responsible for the answer.

In image classification, Layerwise Relevance Propagation (LRP) (Bach et al., 2015) has been used to decompose, backward across the MLP layers, the evidence about the contribution of individual input fragments (i.e. pixels of the input images) to the final decision. Evaluation against the MNIST and ILSVRC benchmarks suggests that LRP activates associations between input and output fragments, thus tracing back meaningful causal connections.

In this paper, we propose the use of a similar mechanism over a linguistically motivated network architecture, the Kernel-based Deep Architecture (KDA) (Croce et al., 2017). Tree Kernels (Collins and Duffy, 2001) are here used to integrate syntactic/semantic information within an MLP network. We will show how KDA input nodes correspond to linguistic instances and how, by applying the LRP method, we are able to trace back causal associations between the semantic classification and such instances. The evaluation of the LRP algorithm is based on the idea that explanations improve the user's expectations about the correctness of an answer, and it shows the applicability of the approach in human-computer interfaces.

In the rest of the paper, Section 2 describes the KDA neural approach, Section 3 illustrates how LRP connects to KDAs, Section 4 introduces the explanatory models and Section 5 reports early evaluation results.

2 Training Neural Networks in Kernel Spaces

Given a training set $D$, a kernel $K(o_i, o_j)$ is a similarity function over $D^2$ that corresponds to a dot product in the implicit kernel space, i.e., $K(o_i, o_j) = \Phi(o_i) \cdot \Phi(o_j)$. Kernel functions are used by learning algorithms, such as Support Vector Machines (Shawe-Taylor and Cristianini, 2004), to efficiently operate on instances in the kernel space: their advantage is that the projection function $\Phi(o)$ is never explicitly computed. The Nyström method is a factorization method applied to derive a new low-dimensional embedding $\tilde{x}$ in an $l$-dimensional space, with $l \ll |D|$, so that $G \approx \tilde{G} = \tilde{X}\tilde{X}^\top$, where $G$ is the Gram matrix such that $G_{ij} = \Phi(o_i) \cdot \Phi(o_j) = K(o_i, o_j)$. The approximation $\tilde{G}$ is obtained using a subset of $l$ columns of the matrix, i.e., a selection of a subset $L \subset D$ of the available examples, called landmarks. Given $l$ randomly sampled columns of $G$, let $C \in \mathbb{R}^{|D| \times l}$ be the matrix of these sampled columns. Then, we can rearrange the columns and rows of $G$ and define $X = [X_1 \; X_2]$ such that:

$$G = X^\top X = \begin{bmatrix} X_1^\top X_1 & X_1^\top X_2 \\ X_2^\top X_1 & X_2^\top X_2 \end{bmatrix} \qquad C = \begin{bmatrix} X_1^\top X_1 \\ X_2^\top X_1 \end{bmatrix}$$

where $W = X_1^\top X_1$, i.e., the subset of $G$ that contains only landmarks. The Nyström approximation can be defined as:

$$G \approx \tilde{G} = C\, W^{\dagger} C^\top \qquad (1)$$

where $W^{\dagger}$ denotes the Moore-Penrose inverse of $W$. If we apply the Singular Value Decomposition (SVD) to $W$, which is symmetric positive definite, we get $W = U S V^\top = U S U^\top$. It is then straightforward to see that $W^{\dagger} = U S^{-1} U^\top = (U S^{-\frac{1}{2}})(U S^{-\frac{1}{2}})^\top$ and, by substitution, $G \approx \tilde{G} = (C\, U S^{-\frac{1}{2}})(C\, U S^{-\frac{1}{2}})^\top = \tilde{X}\tilde{X}^\top$. Given an example $o \in D$, its new low-dimensional representation $\tilde{x}$ is determined by considering the corresponding row $c$ of $C$ as

$$\tilde{x} = c\, U S^{-\frac{1}{2}} \qquad (2)$$

where $c$ is the vector whose dimensions contain the evaluations of the kernel function between $o$ and each landmark $l_j \in L$. Therefore, the method produces $l$-dimensional vectors.
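As a minimal sketch of this projection step (in Python with NumPy; function names and the landmark selection are illustrative assumptions, not the authors' implementation), the Nyström matrix $U S^{-\frac{1}{2}}$ and the embedding of Eq. 2 could be computed as follows:

```python
import numpy as np

def nystrom_projector(kernel, landmarks, eps=1e-12):
    """Build the l x l projection matrix U S^(-1/2) from the landmark
    Gram matrix W (see Eq. 1-2). `kernel` is any symmetric similarity
    function; `landmarks` is the list of l sampled training examples."""
    W = np.array([[kernel(a, b) for b in landmarks] for a in landmarks])
    U, S, _ = np.linalg.svd(W)          # W is symmetric PSD, so W = U S U^T
    inv_sqrt = np.array([1.0 / np.sqrt(s) if s > eps else 0.0 for s in S])
    return U * inv_sqrt                 # equivalent to U @ diag(S^(-1/2))

def nystrom_embed(o, landmarks, kernel, projector):
    """Eq. 2: x_tilde = c U S^(-1/2), where c_j = K(o, l_j)."""
    c = np.array([kernel(o, l_j) for l_j in landmarks])
    return c @ projector                # l-dimensional embedding of o
```

The projector is computed once from the landmark Gram matrix; every training and test example is then mapped through `nystrom_embed` before being fed to the network.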

Given a labeled dataset, a Multi-Layer Perceptron (MLP) architecture can be defined with a specific Nyström layer based on the Nyström embeddings of Eq. 2 (Croce et al., 2017).

Such a Kernel-based Deep Architecture (KDA) has an input layer, a Nyström layer, a possibly empty sequence of non-linear hidden layers and a final classification layer, which produces the output. In particular, the input layer corresponds to the input vector $c$, i.e., the row of the $C$ matrix associated to an example $o$. It is then mapped to the Nyström layer through the projection in Equation 2. Notice that the embedding also provides the proper weights, defined by $U S^{-\frac{1}{2}}$, so that the mapping can be expressed through the Nyström matrix $H_{Ny} = U S^{-\frac{1}{2}}$: it corresponds to a pre-training stage based on the SVD. Formally, the low-dimensional embedding of an input example $o$, i.e. $\tilde{x} = c\, H_{Ny} = c\, U S^{-\frac{1}{2}}$, encodes the kernel space. Any neural network can then be adopted: in the rest of this paper, we assume that a traditional Multi-Layer Perceptron (MLP) architecture is stacked in order to solve the targeted classification problems. The final layer of the KDA is the classification layer, whose dimensionality depends on the classification task: it computes a linear classification function with a softmax operator.

A KDA is stimulated by an input vector $c$ which corresponds to the kernel evaluations $K(o, l_i)$ between an example $o$ and the landmarks $l_i$. Linguistic kernels (such as Semantic Tree Kernels (Croce et al., 2011)) depend on the syntactic/semantic similarity between $o$ and the subset of landmarks $l_i$ used for the space reconstruction. We will see hereafter how tracing back relevance propagation through a KDA architecture corresponds to determining which semantic landmarks contributed most to the final output decision.
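A compact sketch of the corresponding forward pass is given below (illustrative names; a plain NumPy rendering of the architecture described above under the conventions of the previous snippet, not the original code):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kda_forward(c, projector, hidden, output):
    """c: kernel evaluations K(o, l_i) against the landmarks (input layer).
    projector: the fixed Nystrom matrix U S^(-1/2) acting as pre-trained weights.
    hidden: list of (W, b) pairs for the non-linear hidden layers.
    output: (W, b) of the linear classification layer followed by softmax."""
    x = c @ projector                     # Nystrom layer (Eq. 2)
    inputs = [x]                          # keep layer inputs for LRP (Section 3)
    for W, b in hidden:
        x = np.tanh(x @ W + b)            # non-linear hidden layer
        inputs.append(x)
    W_out, b_out = output
    return softmax(x @ W_out + b_out), inputs
```

Keeping the list of layer inputs is convenient because the relevance propagation described next needs them.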

3 Layer-wise Relevance Propagation in Kernel-based Deep Architectures

Layer-wise Relevance Propagation (LRP, presented in (Bach et al., 2015)) is a framework that allows the prediction of a deep neural network computed over a sample, e.g. an image, to be decomposed into relevance scores for the single input dimensions, such as a subset of pixels.

Formally, let $f: \mathbb{R}^d \to \mathbb{R}^+$ be a positive real-valued function taking a vector $x \in \mathbb{R}^d$ as input: $f$ quantifies, for example, the probability of $x$ characterizing a certain class. Layer-wise Relevance Propagation assigns to each dimension, or feature, $x_d$ a relevance score $R_d^{(1)}$ such that:

$$f(x) \approx \sum_d R_d^{(1)} \qquad (3)$$

Features whose score is $R_d^{(1)} > 0$ (or $R_d^{(1)} < 0$) correspond to evidence in favor of (or against) the output classification. In other words, LRP allows to identify fragments of the input playing key roles in the decision, by propagating relevance backwards. Let us suppose that the relevance score $R_j^{(l+1)}$ of a neuron $j$ at network layer $l+1$ is known; it can be decomposed into messages $R_{i \leftarrow j}^{(l,\,l+1)}$ sent to the neurons $i$ in layer $l$:

$$R_j^{(l+1)} = \sum_{i \in (l)} R_{i \leftarrow j}^{(l,\,l+1)} \qquad (4)$$

Hence the relevance of a neuron $i$ at layer $l$ can be defined as:

$$R_i^{(l)} = \sum_{j \in (l+1)} R_{i \leftarrow j}^{(l,\,l+1)} \qquad (5)$$

Note that Eq. 4 and Eq. 5 are such that Eq. 3 holds. In this work, we adopted the $\epsilon$-rule defined in (Bach et al., 2015) to compute the messages $R_{i \leftarrow j}^{(l,\,l+1)}$, i.e.

$$R_{i \leftarrow j}^{(l,\,l+1)} = \frac{z_{ij}}{z_j + \epsilon \cdot \mathrm{sign}(z_j)}\, R_j^{(l+1)}$$

where $z_{ij} = x_i w_{ij}$, $z_j = \sum_i z_{ij}$, and $\epsilon$ is a numerical stabilizing term that must be small. Notice that the terms $z_{ij}$ correspond to the weighted activations of the input neurons. If we apply LRP to a KDA, it implicitly traces the relevance back to the input layer, i.e. to the landmarks. It thus tracks back syntactic, semantic and lexical relations between a question and the landmarks, and it grants high relevance to the relations the network selected as highly discriminating for the class representations it learned; note that this is different from similarity in terms of kernel-function evaluation, as the latter is task independent whereas LRP scores are not. Notice also that each landmark is uniquely associated to an entry of the input vector $c$, as shown in Section 2, and, as a member of the training dataset, it also corresponds to a known class.
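The $\epsilon$-rule can be sketched for the dense layers of such a network as follows (a simplified, illustrative NumPy version under the conventions of the earlier snippet: `inputs[k]` is the input of layer $k$ and `layers` lists its (W, b) parameters, with the Nyström projection optionally included as a first linear layer with zero bias so that relevance reaches the kernel evaluations, i.e. the landmarks):

```python
import numpy as np

def lrp_epsilon(inputs, layers, relevance_out, eps=0.001):
    """Back-propagate relevance with the epsilon-rule:
    R_i = sum_j x_i*w_ij / (z_j + eps*sign(z_j)) * R_j  (combining Eq. 4-5)."""
    R = relevance_out                      # relevance assigned to the output layer
    for x, (W, b) in zip(reversed(inputs), reversed(layers)):
        z_ij = x[:, None] * W              # weighted activations z_ij = x_i * w_ij
        z_j = z_ij.sum(axis=0) + b         # pre-activations of the upper layer
        denom = z_j + eps * np.sign(z_j)
        denom = np.where(denom == 0.0, eps, denom)   # guard against z_j = 0
        R = (z_ij / denom) @ R             # sum the messages over the upper layer
    return R                               # relevance of the input dimensions
```

In this sketch, `relevance_out` would be a vector that is zero everywhere except at the predicted class, where it holds the network output; the returned vector then scores the contribution of each input dimension, i.e. of each landmark.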

4 Explanatory Models

LRP allows the automatic compilation of justifications for the KDA classifications: explanations are possible using landmarks $l \in L$ as examples. The landmarks that the LRP method identifies as the most active elements in layer 0 are semantic analogues of the input annotated examples. An Explanatory Model is the function in charge of compiling a linguistically fluent explanation of the individual analogies (or differences) with the input case. The meaningfulness of such analogies makes the resulting explanation clear and should increase the user's confidence in the system's reliability. When an input sentence $s$ is classified, LRP assigns an activation score $r_l^{(s)}$ to each individual landmark $l$: let $L^{(+)}$ (or $L^{(-)}$) denote the set of landmarks with positive (or negative) activation scores.


Formally, an explanation is characterized by a triple $e = \langle s, C, \tau \rangle$, where $s$ is the input sentence, $C$ is the predicted label and $\tau$ is the modality of the explanation: $\tau = +1$ for positive (i.e. acceptance) statements, while $\tau = -1$ corresponds to rejections of the decision $C$. A landmark $l$ is positively activated for a given sentence $s$ if there are not more than $k-1$ other active landmarks1 $l'$ whose activation value is higher than the one for $l$, i.e.

$$\left|\left\{ l' \in L : l' \neq l \,\wedge\, r_{l'}^{(s)} \geq r_l^{(s)} > 0 \right\}\right| < k$$

A landmark is negatively activated when $\left|\left\{ l' \in L : l' \neq l \,\wedge\, r_{l'}^{(s)} \leq r_l^{(s)} < 0 \right\}\right| < k$. Positively (or negatively) activated landmarks in $L_k$ are assigned an activation value $a(l, s) = 1$ (or $a(l, s) = -1$, respectively); for all other, non-activated, landmarks $a(l, s) = 0$.

Given the explanation $e = \langle s, C, \tau \rangle$, a landmark $l$ whose (known) class is $C_l$ is consistent (or inconsistent) with $e$ according to whether the following function

$$f(l, e) = a(l, s) \cdot \delta(C, C_l) \cdot \tau$$

is positive (or negative, respectively), where $\delta(C, C_l) = 2\,\delta_{kron}(C, C_l) - 1$ and $\delta_{kron}$ is the Kronecker delta.

The explanatory model is then a function $M(e, L_k)$ which maps an explanation $e$ and a subset $L_k$ of the active and consistent landmarks for $e$ into a sentence in natural language. Of course, several definitions of $M$ and $L_k$ are possible. A general explanatory model would be:

$$M(e, L_k) = \begin{cases} \text{``}s\text{ is }C\text{ since it recalls me of the landmarks in }L_k^{+}\text{, which are also }C\text{''} & \text{if } \tau = +1 \\ \text{``}s\text{ is not }C\text{ since it is different from the landmarks in }L_k^{-}\text{, which are }C\text{''} & \text{if } \tau = -1 \end{cases}$$

where $L_k^{+}$ and $L_k^{-}$ are the partitions of landmarks with positive (and negative) relevance scores in $L_k$, respectively. Here we provide examples of two explanatory models, used during the experimental evaluation. A first possible model returns the analogy only with the (unique) consistent landmark that has the highest positive score if $\tau = 1$ and the lowest negative score when $\tau = -1$. The explanation of a rejected decision in the Argument Classification step of a Semantic Role Labeling task (Vanzo et al., 2016), described by the triple $\langle$"in camera da letto", Source, $-1\rangle$, is:

I think "in camera da letto" IS NOT [Source] of [Bringing] in "Vai in camera da letto" (LU:[vai]) since it’s different from "sul tavolino" which is [Source] of [Bringing] in “Portami il mio catalogo sul tavolino” (LU:[porta])

The second model uses two active landmarks: one consistent and one contradictory with respect to the decision. For the triple $\langle$"in camera da letto", Goal, $+1\rangle$, the second model produces:

I think "in camera da letto" IS [Goal] of [Motion] in "Vai in camera da letto" (LU:[vai]) since it recalls "al telefono" which is [Goal] of [Motion] in "Vai al telefono e controlla se ci sono messaggi" (LU:[vai]) and it IS NOT [Source] of [Bringing] since different from "sul tavolino" which is the [Source] of [Bringing] in "Portami il mio catalogo sul tavolino" (LU:[portami])

4.1 Evaluation methodology

In order to evaluate the impact of the produced explanations, we defined the following task: given a classification decision, i.e. the input $o$ is classified as $C$, measure the impact of the explanation $e$ on the belief that a user has in the statement "$o$ is of class $C$". This information can be modeled through the estimates of the following probabilities: $P(C(o))$, which characterizes the amount of confidence the user has in accepting the statement, and its corresponding form $P(C(o) \mid e)$, i.e. the same quantity in the case the user is provided with the explanation $e$. The core idea is that semantically coherent and exhaustive explanations must indicate correct classifications, whereas incoherent or non-existent explanations must hint at wrong classifications. A quantitative measure of such an increase (or decrease) in confidence is the Information Gain (IG, (Kononenko and Bratko, 1991)) of the decision $C(o)$. Notice that IG measures the increase of probability corresponding to correct decisions, and the reduction of the probability in case the decision is wrong. This amount suitably addresses the shift in uncertainty (in bits) between the two (subjective) estimates $P(C(o))$ and $P(C(o) \mid e)$.

Different explanatory models $M_i$ can also be compared. The relative Information Gain $I_r$ is measured against a collection of explanations $E_i$ generated by $M_i$ and then normalized by the collection's entropy $\varepsilon$ as follows:

$$I_r = \frac{1}{\varepsilon} \cdot \frac{1}{|E_i|} \sum_{e \in E_i} I(e)$$


where I(e) is the IG of each explanation2.
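The following sketch shows how $I(e)$ and the relative gain can be computed from the annotators' posteriors (names are illustrative; the scoring function is our simplified reading of the Kononenko-Bratko information score under the balanced prior used in Section 5, not code from the paper):

```python
import math

def information_score(posterior, correct, prior=0.5):
    """I(e) in bits for one explanation. `posterior` is the user's belief
    P(C(o)|e) in the system decision; q is the belief assigned to the
    outcome that is actually true (simplified Kononenko-Bratko score)."""
    q = posterior if correct else 1.0 - posterior
    if q >= prior:                         # belief moved towards the truth
        return math.log2(q) - math.log2(prior)
    return -(math.log2(1.0 - q) - math.log2(1.0 - prior))   # moved away

def relative_information_gain(judgments, prior=0.5):
    """judgments: (P(C(o)|e), decision_was_correct) pairs, one per explanation;
    the average score is normalized by the collection entropy epsilon."""
    entropy = -(prior * math.log2(prior) + (1.0 - prior) * math.log2(1.0 - prior))
    scores = [information_score(p, c, prior) for p, c in judgments]
    return sum(scores) / (len(scores) * entropy)
```

With a prior of 0.5 the entropy is 1 bit and the score lies in the [-1, 1] range mentioned below: a "V.Good" rating of a correct decision contributes about +0.93 bits, while the same rating of a wrong decision contributes about -0.93 bits.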

5 Experimental Evaluation

The effectiveness of the proposed approach has been measured against two different semantic processing tasks, i.e. Question Classification (QC) over the UIUC dataset (Li and Roth, 2006) and Argument Classification in Semantic Role Labeling (SRL-AC) over the HuRIC dataset (Bastianelli et al., 2014; Vanzo et al., 2016). The adopted architecture consisted of an LRP-integrated KDA with 1 hidden layer and 500 landmarks for QC, 2 hidden layers and 100 landmarks for SRL-AC, and a small stabilization term $\epsilon$ (Section 3).

We defined five quality categories and associated each with a value of $P(C(o) \mid e)$, as shown in Table 1. Three annotators then independently rated explanations generated from a collection composed of an equal number of correct and wrong classifications (for a total of 300 and 64 explanations for QC and SRL-AC, respectively). This perfect balancing makes the prior probability $P(C(o))$ equal to 0.5, i.e. maximal entropy, with a baseline IG of 0 in the $[-1, 1]$ range. Notice that annotators had no information on the system's classification performance, but only knowledge of the explanation dataset entropy.

Table 1: Posterior probabilities w.r.t. quality categories

Category    P(C(o)|e)   P(¬C(o)|e)
V.Good      0.95        0.05
Good        0.8         0.2
Weak        0.5         0.5
Bad         0.2         0.8
Incoher.    0.05        0.95

Table 2: Information gains for the two explanatory models applied to the QC and SRL-AC datasets

Model            QC      SRL-AC
One landmark     0.548   0.669
Two landmarks    0.580   0.784

5.1 Question Classification


Experimental evaluations3 showed that both models were able to gain more than half the bit required to ascertain whether the network statement is true or not (Table 2). Consider:

I think "What year did Oklahoma become a state ?" refers to a NUMBER since recalls me "The film Jaws was made in what year ?"

Here the model returned coherent supporting evidence, a somewhat easy case given the available discriminative pair, i.e. "What year". The system is able to capture semantic similarities even in poorer conditions, e.g.:

I think "Where is the Mall of the America ?" refers to a LOCATION since recalls me "What town was the setting for The Music Man ?" which refers to a LOCATION.

This high-quality explanation is achieved even with such poor lexical overlap. It seems that richer representations are involved here, with grammatical and semantic similarity acting as the main information used in the decision at hand. Let us consider:

I think "Mexican pesos are worth what in U.S. dollars ?" refers to a DESCRIPTION since it recalls me "What is the Bernoulli Principle ?"

Here the provided explanation is incoherent, as expected, since the classification is wrong. Now consider:

I think "What is the sales tax in Minnesota ?" refers to a NUMBER since it recalls me "What is the population of Mozambique ?" and does not refer to a ENTITY since different from "What is a fear of slime ?".

Although the explanation seems fairly coherent, it is actually misleading, as ENTITY is the annotated class. This shows how the system may lack contextual information, as humans do, against inherently ambiguous questions.

5.2 Argument Classification

Evaluation also targeted a second task, that is, Argument Classification in Semantic Role Labeling (SRL-AC): the KDA is here fed with vectors from tree-kernel evaluations, as discussed in (Croce et al., 2011). The evaluation is carried out over the HuRIC dataset (Vanzo et al., 2016), which includes about 240 domotic commands in Italian comprising about 450 roles. The system has an accuracy of 91.2% on about 90 test examples, while the training and development sets have a size of, respectively, 270 and 90 examples. We considered 64 explanations for measuring the IG of the two explanation models. Table 2 confirms that both explanatory models performed even better than in QC. This is due to the narrower linguistic domain (14 frames are involved) and the clearer boundaries between classes: annotators seem more sensitive to the explanatory information when assessing the network decision. An example of a generated sentence is:

I think "con me" is NOT the MANNER of Cotheme in "Robot vieni con me nel soggiorno? (LU:[vieni])" since it does NOT recall me "lentamente" which is MANNER in "Per favore segui quella persona lentamente (LU:[segui])". It is rather COTHEME of Cotheme since it recalls me "mi" which is Cotheme in "Seguimi nel bagno (LU:[segui])".

6 Conclusion and Future Works

This paper described an application of LRP to KDAs that makes use of analogies as explanations of a neural network's decisions. A methodology to measure the explanation quality has also been proposed, and the experimental evidence confirms the effectiveness of the method in increasing the trust of users in automatic classifications. Future work will focus on the selection of subtrees as meaningful evidence for the explanation, on the modeling of negative information for disambiguation, as well as on a more in-depth investigation of landmark selection policies. Moreover, improved experimental scenarios involving users and dialogues will also be designed, e.g. involving further investigation within Semantic Role Labeling, using the method proposed in (Croce et al., 2012).

Bibliography

Sebastian Bach, Alexander Binder, Gregoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. 2015. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLOS ONE, 10(7).

Emanuele Bastianelli, Giuseppe Castellucci, Danilo Croce, Luca Iocchi, Roberto Basili, and Daniele Nardi. 2014. Huric: a human robot interaction corpus. In LREC, pages 4519–4526. European Language Resources Association (ELRA).

Michael Collins and Nigel Duffy. 2001. New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics (ACL ’02), July 7-12, 2002, Philadelphia, PA, USA, pages 263–270. Association for Computational Linguistics, Morristown, NJ, USA.

Danilo Croce, Alessandro Moschitti, and Roberto Basili. 2011. Structured lexical similarity via convolution kernels on dependency trees. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1034–1046. Association for Computational Linguistics.

Danilo Croce, Alessandro Moschitti, Roberto Basili, and Martha Palmer. 2012. Verb classification using distributional similarity in syntactic and semantic structures. In ACL (1), pages 263–272. The Association for Computer Linguistics.

Danilo Croce, Simone Filice, Giuseppe Castellucci, and Roberto Basili. 2017. Deep learning in semantic kernel spaces. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 345–354, Vancouver, Canada, July. Association for Computational Linguistics.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735–1780, November.

Igor Kononenko and Ivan Bratko. 1991. Information-based evaluation criterion for classifier’s performance. Machine Learning, 6(1):67–80, Jan.

Hugo Larochelle and Geoffrey E. Hinton. 2010. Learning to combine foveal glimpses with a third-order boltzmann machine. In Proceedings of Neural Information Processing Systems (NIPS), pages 1243–1251.

Xin Li and Dan Roth. 2006. Learning question classifiers: the role of semantic information. Natural Language Engineering, 12(3):229–249.

John Shawe-Taylor and Nello Cristianini. 2004. Kernel Methods for Pattern Analysis. Cambridge University Press, Cambridge, UK.

Andrea Vanzo, Danilo Croce, Roberto Basili, and Daniele Nardi. 2016. Context-aware spoken language understanding for human robot interaction. In Proceedings of Third Italian Conference on Computational Linguistics (CLiC-it 2016) & Fifth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2016), Napoli, Italy, December 5-7, 2016.

Notes

1 k is a parameter used to make the explanation depend on not more than k landmarks, denoted by Lk.

2 More details are in (Kononenko and Bratko, 1991)

3 For details on KDA performance against the task, see (Croce et al., 2017)

The text and other elements (illustrations, imported files) may be used under OpenEdition Books License, unless otherwise stated.
