
EVALITA. Evaluation of NLP and Speech Tools for Italian

Pierpaolo Basile, Franco Cutugno, Malvina Nissim, et al.

Part II: EVALITA 2016: Task overviews and participants reports

ChiLab4It System in the QA4FAQ Competition

Arianna Pipitone, Giuseppe Tirone and Roberto Pirrone

Abstract

ChiLab4It is the Question Answering (QA) system for Frequently Asked Questions (FAQ) developed by the Computer-Human Interaction Laboratory (ChiLab) at the University of Palermo to participate in the QA4FAQ task of the EVALITA 2016 competition. The system is a customized version of the QuASIt framework, developed by the same authors and adapted to this particular task. This technical report describes the strategies imported from QuASIt to implement ChiLab4It, the actual system implementation, and the comparative evaluation against the other participating tools, as provided by the task organizers. ChiLab4It was the only system whose score was above the experimental baseline fixed for the task. A discussion of future extensions of the system is also provided.

Full text

1 Introduction

This technical report presents ChiLab4It (Pipitone et al., 2016a), the QA system for FAQ developed by the ChiLab at the University of Palermo to take part in the QA4FAQ task (Caputo et al., 2016) of the EVALITA 2016 competition (Basile et al., 2016). The main objective of the task is to answer a natural language question posed by the user by retrieving the most relevant FAQs from the set provided by the Acquedotto Pugliese company (AQP), which developed a semantic retrieval engine for FAQs called AQP Risponde. Such an engine is based on a QA system; it opens new challenges concerning both Italian language usage and the variability of the users' language expressions. The background strategy of the proposed tool is based on the cognitive model described in (Pipitone et al., 2016b); in that work the authors present QuASIt, an open-domain QA system for the Italian language that can be used for both multiple-choice and essay questions. When a support text is provided for finding the correct answer (as in the case of text comprehension), QuASIt is able to use this text to find the required information.

ChiLab4It is the version of QuASIt customized to the FAQ domain; this customization results from a set of restrictions applied to the full functionality of QuASIt. The intuition was to treat a FAQ as a support text; the most relevant FAQ is the one whose text best fits the user's question, according to a set of matching strategies that take into account some linguistic properties, such as sentence typology and syntactic correspondences. The good performance obtained in the evaluations supports this idea, although the current linguistic resources for Italian are not exhaustive. This report is organized as follows: section 2 presents the QuASIt system, and section 3 describes ChiLab4It as a restriction of QuASIt. Section 4 shows the results of ChiLab4It on the evaluation test bed provided by the competition organizers. Finally, future work is discussed in section 5.

2 The QuASIt System

The main characteristic of QuASIt is its underlying cognitive architecture, according to which the interpretation and/or production of a natural language sentence requires the execution of some cognitive processes over both a perceptually grounded model of the world (that is, an ontology) and previously acquired linguistic knowledge. In particular, two kinds of processes have been devised: the conceptualization of meaning and the conceptualization of form.

The conceptualization of meaning associates a sense to perceived forms, that is, to the words of the user query. A sense is the set of ontology concepts that explains a form; this process is implemented by considering the ontology nodes whose labels best match the forms from a syntactic point of view. The set of such nodes is the candidate sub-ontology expected to contain the answer to be produced. The match is based on a syntactic (string) similarity measure.
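As an illustration, the following is a minimal sketch of this form-to-concept matching, assuming the ontology is available as a flat list of (node id, label) pairs and using the Jaro-Winkler similarity from the third-party jellyfish package; the paper does not specify the exact measure or threshold QuASIt uses at this stage, so both are assumptions here.

```python
import jellyfish  # assumed third-party dependency for string similarity

def candidate_subontology(forms, nodes, threshold=0.9):
    """Select the ontology nodes whose labels syntactically match the query forms.

    forms: the tokens of the user query.
    nodes: an iterable of (node_id, label) pairs (hypothetical flat view of the ontology).
    threshold: hypothetical tuning parameter for accepting a match.
    """
    candidates = set()
    for form in forms:
        for node_id, label in nodes:
            score = jellyfish.jaro_winkler_similarity(form.lower(), label.lower())
            if score >= threshold:
                candidates.add(node_id)
    return candidates
```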

The second process associates a syntactic expression to a meaning; it implements the strategies for producing the correct form of an answer once it has been inferred. The form depends on the way QuASIt is used, that is, on whether it answers multiple-choice or essay questions. In the case of multiple-choice questions, the form must be one of the proposed answers. The system infers the correct answer among the proposed ones using the values of the property ranges in the sub-ontology; the answer that best matches such ranges syntactically is considered the correct one. If no answer can be inferred in this way, a support text can be used if available. The support text can either be derived automatically by the system, using the plain text associated with the nodes of the sub-ontology (such as an abstract node in the DBpedia ontology), or be provided directly with the questions, as in a text comprehension task. Figure 1 shows the architecture of QuASIt. The ontology and the linguistic knowledge are located in the Domain Ontology Base and the Linguistic Base respectively. The Mapping to Meanings (MtM) and the Mapping to Forms (MtF) modules are the components that model the cognitive processes related to the conceptualization of meaning and of form respectively. The Unification Merging module is essentially the FCG engine (Steels, 2011), which is used to perform query parsing.

Figure 1: The QuASIt Cognitive Architecture


The strategy we implemented in the ChiLab4It system is based on the QuASIt function that selects the correct answer to multiple-choice questions using a support text; the intuition was that a FAQ can be considered a support text usable for retrieving the FAQ most relevant to a user's query. For this reason, the next subsection describes this strategy in detail, and we then show how it was applied in the proposed tool.

2.1 Searching in the support text

Searching in a support text is a possible strategy to deal with unstructured information when an artificial agent is trying to answer a particular question. In this case the agent learns a possible answer by comprehending the text dealing with the question topic. This process is implemented in QuASIt by the MtF module. Formally, let Q = {q1, q2, ..., qn} be the query of the user, and P = {p1, p2, ..., pm} a sentence in the support text; each element in these sets is a token. P is considered most similar to Q when it maximizes the following similarity measure m:

m(P, Q) = |P̂| − α · l − β · u

where P̂ = {pj ∈ P | ∃ qi ∈ Q : J(pj, qi) ≥ τ} and J(pj, qi) is the Jaro-Winkler distance between a pair of tokens (Winkler, 1990). As a consequence, P̂ ⊆ P, and |P̂| is the number of matching tokens shared by Q and P.

l = (|P| − |P̂|)/|P| is the normalized number of “lacking tokens”, that is tokens that remain unmatched between Q and P, while u = 1 − o(Q, P̂)/|P̂| is the normalized number of “unordered tokens”, that is the number of tokens in Q that do not occur in the same order in P̂; here o(a, b) is the function returning the maximum number of tokens of a that are ordered with respect to b.

Both l and u are normalized in the range [0, 1]; they are penalty values representing syntactic differences between the sentences. The higher u and l are, the lower the similarity between the sentences.

The α and β parameters weight the penalties; they were tuned empirically through experimentation, along with the matching threshold τ.
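A minimal runnable sketch of the measure follows, again assuming jellyfish for the Jaro-Winkler similarity; the threshold τ = 0.9 and the token lowercasing are our assumptions, since the paper only states that τ was tuned empirically. The function o(a, b) is implemented here as a longest-common-subsequence length over the matched tokens.

```python
import jellyfish  # assumed third-party dependency for Jaro-Winkler

def match_set(p_tokens, q_tokens, tau=0.9):
    """P-hat: tokens of P whose best Jaro-Winkler score against Q reaches tau."""
    return [p for p in p_tokens
            if max(jellyfish.jaro_winkler_similarity(p.lower(), q.lower())
                   for q in q_tokens) >= tau]

def lcs_len(a, b):
    """Longest-common-subsequence length, used as o(a, b)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def m(p_tokens, q_tokens, alpha=0.1, beta=0.2, tau=0.9):
    """Similarity m(P, Q) = |P-hat| - alpha * l - beta * u."""
    p_hat = match_set(p_tokens, q_tokens, tau)
    if not p_hat:
        return 0.0
    l = (len(p_tokens) - len(p_hat)) / len(p_tokens)          # lacking-token penalty
    u = 1 - lcs_len([q.lower() for q in q_tokens],
                    [p.lower() for p in p_hat]) / len(p_hat)  # order penalty
    return len(p_hat) - alpha * l - beta * u
```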

We reused this strategy in ChiLab4It with different values of the α and β parameters, depending on the kind of support text considered during the search, as explained next.

3 ChiLab4It

The basic idea of the proposed tool is to consider a FAQ as a support text. According to the provided dataset, a FAQ is composed of three textual fields: the question text, the answer text and the tag set. We applied the search strategy defined above to each of these fields; in particular, we set different α and β parameters for each field in the m measure, based on linguistic considerations. For this reason, we defined three differently parameterized m measures, named m1, m2 and m3. Moreover, further improvements were achieved by searching the answer text for synonyms of the query words; such synonyms were not considered in the QuASIt implementation.

Given the previously defined variables |P̂|, l and u, the α and β parameters were set according to the following considerations:

  • question text; the α and β parameters are the same as in QuASIt, that is α = 0.1 and β = 0.2. This choice is based solely on linguistic motivations: since the support text is a question, like the user query, both sentences to be matched have interrogative form. As a consequence, both l and u influence the final match. The final measure is:

m1 = |P̂| − 0.1 · l − 0.2 · u

  • answer text; the search is iterated over each sentence in the text. In this case, the α and β parameters are zero (α = 0 and β = 0). This is because the answer text has a direct (declarative) form, so the order of tokens must not be considered; moreover, a sentence in the answer text contains more tokens than the query, so this information is not discriminative for the final match.

In this case, the search is extended to the synonyms of the query words, except for the stop-words; this extension improved the performance of the system significantly. Empirical evaluations showed no comparable improvement when synonyms were considered for the other parts of a FAQ (the question text and the tag set), because in those cases the synonyms uselessly increase the number of irrelevant FAQs retrieved by the system.

Formally, let Σ be the σ-expansion set (Pipitone et al., 2014) that contains the words of the Q ∖ Sw set together with their synonyms, where Q is the user query as previously defined and Sw is the set of stop-words:

Σ = ∪w ∈ Q ∖ Sw ({w} ∪ syn(w)), where syn(w) is the Wiktionary synset of w

Let us define S = {S1, S2, ..., SN} as the set of sentences in the answer text. We define the set M, which contains the mSi measures computed with α = 0 and β = 0 in m, for each sentence Si ∈ S matched against the σ-expanded query:

M = {mSi : Si ∈ S}

where

mSi = m(Si, Σ) computed with α = β = 0, that is mSi = |Ŝi| = |Σ ∩ Si|, the number of tokens of Si that match the σ-expanded query.

The final similarity measure m2 is the maximum value in M:

m2 = max M = maxSi ∈ S mSi

  • tag set; the α and β parameters are zero (α = 0 and β = 0) in this case too. This is because the tags in the set do not have a particular linguistic typology, so the information related to both the order of tokens and the lacking ones must not be considered. As already explained, synonyms are not included in this search. As a consequence:

m3 = |P̂|

where P̂ is the previously defined intersection between the user query and the set of tags.

A FAQ is considered most similar to a query when it maximizes the sum of the measures defined above, so the final similarity value is:

mfaq = m1 + m2 + m3

These values are sorted, and the first 25 FAQs are returned for each query, as required by the task.
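Putting the three measures together, the following is a sketch of the final ranking step, reusing the m function from the sketch in section 2.1; the tokenizer, the sentence splitter and the FAQ field names are hypothetical stand-ins for the real implementation.

```python
import re

def tokenize(text):
    return re.findall(r"\w+", text.lower())

def split_sentences(text):
    return [s for s in re.split(r"[.;?!]", text) if s.strip()]

def m_faq(faq, q_tokens, sigma_tokens):
    """Final FAQ similarity: m1 (question) + m2 (answer) + m3 (tags)."""
    m1 = m(tokenize(faq["question"]), q_tokens, alpha=0.1, beta=0.2)
    m2 = max((m(tokenize(s), sigma_tokens, alpha=0.0, beta=0.0)
              for s in split_sentences(faq["answer"])), default=0.0)
    m3 = m(tokenize(faq["tag"]), q_tokens, alpha=0.0, beta=0.0)
    return m1 + m2 + m3

def rank_faqs(faqs, q_tokens, sigma_tokens, k=25):
    """Return the k FAQs with the highest m_faq score, best first."""
    return sorted(faqs, key=lambda f: m_faq(f, q_tokens, sigma_tokens),
                  reverse=True)[:k]
```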

3.1 The architecture

Figure 2 shows the architecture of ChiLab4It; the input is the query of the user, while the output is the list of the 25 most relevant FAQs. The sources are the FAQ base and Wiktionary, from which the provided FAQ dataset and the synonyms are respectively retrieved.

The white module in this architecture is the MtF module as implemented in QuASIt. The dark modules are the integrations applied to the MtF module to customize it to the FAQ domain; in particular, these integrations concern both the σ-expansion of the query and the setting of the analytic form (including parameters) of the m measure depending on the FAQ field.

The first integration is implemented by the σ module, which returns the Σ set for the user query by retrieving the synsets from Wiktionary.
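A minimal sketch of the σ module, under the assumption that the Wiktionary synsets have been pre-fetched into a plain dictionary; the live system queries Wiktionary directly, so the `synonyms` mapping here is a hypothetical stand-in.

```python
def sigma_expand(query_tokens, stop_words, synonyms):
    """Sigma-expansion: non-stop-word query tokens plus their synonyms."""
    sigma = set()
    for token in query_tokens:
        word = token.lower()
        if word in stop_words:
            continue  # stop-words are excluded from the expansion
        sigma.add(word)
        sigma.update(s.lower() for s in synonyms.get(word, []))
    return sigma

# Example with the toy query of section 3.2 (synonym list abridged):
sigma = sigma_expand("a quali orari posso chiamare il numero verde".split(),
                     {"a", "il"},
                     {"chiamare": ["soprannominare", "chiedere", "richiedere"]})
```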

Parameter and measure settings are performed by the FAQ Ctrl module, which is encapsulated in the main MtF module; it retrieves each FAQ from the FAQ base and customizes the m measure according to the analyzed field (m1 for the question text, m2 for the answer text, m3 for the tag set). The MtF module computes these measures against the σ-expanded query, and finally the mfaq value is computed and stored by the FAQ Ctrl module, which keeps track of the id of the FAQ with the highest value.

Figure 2: The ChiLab4It Architecture


3.2 A toy example

In this section we show a toy example with the aim of better explaining the search process in the support text and how the similarity measure works. The example is a real question retrieved from the dataset provided by the organizers. Let us consider the query with id = 4, that is: “a quali orari posso chiamare il numero verde”.

In this case, the Q and Sw sets are:

Q = {A, quali, orari, posso, chiamare, il, numero, verde}

and

Sw = {A, il}

since “a” and “il” are the stop-words in the question. ChiLab4It computes the highest measure for the FAQ with id = 339, which is shown in table 1. Considering this FAQ, let us compute the three measures for the question text, the answer text and the tag set.

In the first case the support text is the question text of the FAQ, and the P set is:

P = {Quali, sono, gli, orari, del, numero, verde} with |P| = 7. The m1 value is computed considering that the intersection between the question text and the user query is:

P̂ = {quali, orari, numero, verde}

The Jaro-Winkler distance is 1 for each matching word, and |P̂| = 4. Also, l = (7 − 4)/7 = 0.428.

Table 1: The XML description of FAQ 339 as provided in the data set

<faq>

<id>339</id>

<question>Quali sono gli orari del numero verde?</question>

<answer>Il servizio del numero verde assistenza clienti AQP 800.085.853 è attivo dal lunedì al venerdì dalle ore 08.30 alle 17.30, il sabato dalle 08.30 alle 13.00; il servizio del numero verde segnalazioni guasto 800.735.735 è attivo 24 ore su 24.</answer>

<tag>informazioni, orari, numero verde</tag>

</faq>

For the calculation of u, we notice that o(Q, P̂) returns 4, because the matching tokens in Q are all ordered with respect to P̂, that is, they follow the same sequence as in P̂. As a consequence, u = 1 − o(Q, P̂)/|P̂| = 1 − 4/4 = 0. Substituting all values, m1 is:

m1 = 4 − 0.1 · 0.428 − 0.2 · 0 = 3.95

In the next step we consider the answer text; in this FAQ, the text is composed of only one sentence, which becomes the new support text P, so the procedure is applied once. In particular,

S = {S1} and P = S1 = {Il, servizio, del, numero, verde, assistenza, clienti, ..., attivo, 24, ore, su, 24}, as shown in table 1. In this case, the m2 measure depends only on the intersection between the σ-expanded query and S1. In particular, the Σ set is computed by unifying the difference set Q ∖ Sw = {quali, orari, posso, chiamare, numero, verde} with the Wiktionary synset of each such token, so: Σ = {[quali], [orari], [posso], [chiamare, soprannominare, chiedere, richiedere], [numero, cifra, contrassegno numerico, matricola, buffone, pagliaccio, elenco, gruppo, serie, classe, gamma, schiera, novero, taglia, misura, attrazione, scenetta, sketch, esibizione, gag, sagoma, macchietta, fascicolo, puntata, dispensa, copia, tagliando, contrassegno, talloncino, titoli, dote, requisito], [verde, pallido, smorto, esangue, acerbo, giovanile, vivace, vigoroso, florido, verdeggiante, lussureggiante, rigoglioso, agricolo, agrario, vegetazione, vigore, rigoglio, freschezza, floridezza, via, avanti, ecologista, ambientalista, livido]}, where the synsets are shown in square brackets for clarity. The intersection Ŝ1 = Σ ∩ S1 is simply Ŝ1 = {numero, verde, orari}, because these tokens have the highest Jaro-Winkler distance to the tokens in S1. As a consequence, mS1 = |Ŝ1| = 3 and m2 = 3.

In the third case, the support text is the tag set, so P = {informazioni, orari, numero, verde} and P̂ = {orari, numero, verde}. The m3 value is simply m3 = |P̂| = 3.

Finally, the overall measure is computed by adding the three calculated values, so mfaq = 3.95 + 3 + 3 = 9.95, which is the highest value among those computed for all FAQs in the dataset.
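As a sanity check, the m sketch from section 2.1 reproduces the m1 value of this example (the value 3.95 in the text comes from truncating l = 3/7 to 0.428):

```python
q = "a quali orari posso chiamare il numero verde".split()
p = "quali sono gli orari del numero verde".split()
print(m(p, q, alpha=0.1, beta=0.2))  # 3.9571..., reported as 3.95 in the text
```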

4 Evaluations

The dataset used for the evaluation was the one provided by the QA4FAQ task organizers; they released it as a collection of questions and feedback that real customers provided to the AQP Risponde engine.

In particular, the dataset includes:

  • a knowledge base of about 470 FAQs, each composed of the text fields we referred to;

  • a set of queries posed by customers;

  • a set of question-answer pairs that allows the organizers to evaluate the contestants. The organizers analyzed the feedback provided by real customers of the AQP Risponde engine and checked it to remove noise.

Training data were not provided: in fact, AQP is interested in the development of unsupervised systems, such as ChiLab4It.

According to the guidelines, we provided results in a purposely formatted text file, and for each query in the dataset we returned the first 25 answers. However, only the first FAQ is considered relevant for the scope of the task. ChiLab4It is ranked according to accuracy@1 (c@1), whose formulation is:

c@1 = (nR + nU · nR/n) / n

where nR is the number of correct answers, nU is the number of unanswered questions, and n is the total number of questions.
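For concreteness, a one-line implementation of the measure; note that when a system answers every question, nU = 0 and c@1 reduces to plain accuracy nR/n.

```python
def c_at_1(n_correct, n_unanswered, n_total):
    """c@1 = (nR + nU * (nR / n)) / n: unanswered questions earn partial credit."""
    return (n_correct + n_unanswered * n_correct / n_total) / n_total
```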

Each participant could provide two different runs, but in our case we submitted only the best configuration of the system. Table 2 shows the final results with the ranks of all participants, as provided by the organizers; our tool performed better than the other participants, and it was the only one ranked above the experimental baseline.

Table 2: The final results for the QA4FAQ task

TEAM            c@1
ChiLab4It       0.4439
baseline        0.4076
Team 1 run 1    0.3746
Team 1 run 2    0.3587
Team 2 run 1    0.2125
Team 2 run 2    0.0168

5 Discussion and Future Works

This work presented ChiLab4It, a tool designed to participate in the QA4FAQ task of the EVALITA 2016 competition. ChiLab4It relies on QuASIt, a cognitive model for an artificial agent performing question answering in Italian, previously presented by the authors. QuASIt is able to answer both multiple-choice and essay questions using an ontology-based approach, where the agent manages both domain and linguistic knowledge.

ChiLab4It uses the QuASIt functions aimed at answering multiple-choice questions by means of a support text, because a FAQ can be regarded exactly as a support text that can be used to understand the query sentence and to provide the answer. Moreover, our tool enhances the sentence similarity measure introduced in our reference cognitive model in two ways. First, three separate measures are computed for the three parts of a FAQ, that is the question text, the answer text and the tag set, and they are summed to provide the final similarity. Second, the synonyms of the query words are used when matching the query against each sentence of the answer text of the FAQ, to achieve linguistic flexibility when searching for the query topic inside each text.

ChiLab4It was tested on the competition data, and it turned out to be the winner, with a c@1 score well above the fixed experimental baseline.

Future work is aimed at refining the development of the entire QuASIt system. Particular attention will be devoted to studying more refined versions of the similarity measure that take into account complex phrasal structures.

Bibliography

Pierpaolo Basile, Franco Cutugno, Malvina Nissim, Viviana Patti, and Rachele Sprugnoli. 2016. EVALITA 2016: Overview of the 5th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Accademia University Press.

Annalina Caputo, Marco de Gemmis, Pasquale Lops, Franco Lovecchio, and Vito Manzari. 2016. Overview of the EVALITA 2016 Question Answering for Frequently Asked Questions (QA4FAQ) Task. In Pierpaolo Basile, Franco Cutugno, Malvina Nissim, Viviana Patti, and Rachele Sprugnoli, editors, Proceedings of the 5th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian (EVALITA 2016). Accademia University Press.

Arianna Pipitone, Vincenzo Cannella, and Roberto Pirrone. 2014. I-ChatbIT: an Intelligent Chatbot for the Italian Language. In Roberto Basili, Alessandro Lenci, and Bernardo Magnini, editors, Proceedings of the First Italian Conference on Computational Linguistics CLiC-it 2014. Pisa University Press.

Arianna Pipitone, Giuseppe Tirone, and Roberto Pirrone. 2016a. ChiLab4It: ChiLab4It System in the QA4FAQ Competition. In Pierpaolo Basile, Franco Cutugno, Malvina Nissim, Viviana Patti, and Rachele Sprugnoli, editors, Proceedings of the 5th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian (EVALITA 2016). Accademia University Press.

Arianna Pipitone, Giuseppe Tirone, and Roberto Pirrone. 2016b. QuASIt: a Cognitive Inspired Approach to Question Answering System for the Italian Language. In Proceedings of the 15th International Conference of the Italian Association for Artificial Intelligence 2016. In press.

Luc Steels. 2011. Introducing Fluid Construction Grammar. John Benjamins.

William E. Winkler. 1990. String comparator metrics and enhanced decision rules in the Fellegi-Sunter model of record linkage. In Proceedings of the Section on Survey Research Methods, pages 354–359.

Authors

Arianna Pipitone, DIID - Dipartimento dell’Innovazione Industriale e Digitale - Ingegneria Chimica, Gestionale, Informatica, Meccanica, Università degli Studi di Palermo - arianna.pipitone@unipa.it

Giuseppe Tirone, DIID - Dipartimento dell’Innovazione Industriale e Digitale - Ingegneria Chimica, Gestionale, Informatica, Meccanica, Università degli Studi di Palermo - giuseppe.tirone@unipa.it

Roberto Pirrone, DIID - Dipartimento dell’Innovazione Industriale e Digitale - Ingegneria Chimica, Gestionale, Informatica, Meccanica, Università degli Studi di Palermo - roberto.pirrone@unipa.it

CC-BY-NC-ND-4.0

Only the text may be used under the CC BY-NC-ND 4.0 license. Unless otherwise stated, all other elements (illustrations, imported additional files) are "All rights reserved".
