    Cross-Language Transformer Adaptation for Frequently Asked Questions

    Luca Di Liello, Daniele Bonadiman, Cristina Giannone, Andrea Favalli, Raniero Romagnoli et Alessandro Moschitti

    p. 157-163

    Abstract

    Transfer learning has been proven to be effective, especially when data for the target domain/task is scarce. Sometimes data for a similar task is only available in another language because it may be very specific. In this paper, we explore the use of machine-translated data to transfer models to a related domain. Specifically, we transfer models from the question duplication task (QDT) to similar FAQ selection tasks. The source domain is the well-known English Quora dataset, while the target domain is a collection of small Italian datasets for real-world scenarios, consisting of FAQ groups retrieved by pivoting on common answers. Our results show great improvements in the zero-shot learning setting and modest improvements using the standard transfer approach for direct in-domain adaptation.[1]

    Editor's note

    Work done prior to joining Amazon

    1. Introduction

    Frequently Asked Question (FAQ) websites are an essential service for users' self-assistance. They typically present a list of questions, each associated with an answer. When searching for information, users have to go through the FAQs to determine whether a similar question provides a solution to their problem. However, this process does not scale well as the number of FAQs increases: too many questions may be presented to the user, and a simple keyword search may not retrieve the desired results. Additionally, in the last decade, users started looking for information using smartphones and voice assistants, such as Alexa, Google Assistant, or Siri. By design, voice assistants provide a different information access paradigm: the navigation of FAQ websites is replaced by natural language dialogues, which satisfy the user's information need in a few interactions. To achieve this goal, FAQ retrieval systems need to understand the question and present the user only with a set of strong candidates. One possible solution offered by personal assistants consists of (i) a FAQ retrieval system (Caputo et al. 2016) for efficiently finding relevant questions, and (ii) accurate neural models to select the most probable FAQ.

    One of the major obstacles to building such a system is the availability of training data for the selection model. FAQ systems are domain-specific by nature, since they aim to provide users with information about specific websites or services. Moreover, the industrial setting does not always allow creating a large corpus of questions for any specific domain, as the customers (the FAQ owners) typically cannot provide such data. There are many reasons: (i) they are not familiar with the process of training data creation, as it is not part of their business; (ii) the topic of the FAQ system does not require more than tens of question/solution pairs; (iii) it is not easy to generate a dataset for question-question similarity from a question-answer system.

    Table 1: Some examples of QDT and FAQ pairs. Notice that in the first block the questions are paraphrases of each other, while the second block contains questions that only share a common answer.

    | Task | Question 1 | Question 2 | Label |
    |------|------------|------------|-------|
    | QDT | How many months does it take to gain knowledge in developing Android apps from scratch? | How much time does it take to learn Android app development from scratch? | True |
    | QDT | How do I prepare for software interviews? | What are the best ways to prepare for software interviews? | True |
    | QDT | Why did harry become a horcrux? | What is a Horcrux? | False |
    | QDT | What is journalism for you? | What is journalism? | False |
    | FAQ | Can medicines be sold on Amazon? | What items can’t I sell on Amazon? | True |
    | FAQ | I forgot my username | Why won’t the page load? | True |
    | FAQ | Is it possible to change my personal information after I have registered? | Is it possible to change the password? | False |
    | FAQ | Can I have food brought from home during the flight? | What is included in the price I pay? | False |

    A traditional approach to alleviating such a problem is transfer learning (TL): data from other domains/tasks is used to train a model on the target task. TL research has been boosted by the availability of pre-trained transformer-based models (Vaswani et al. 2017; Devlin et al. 2018), which capture general-purpose language models. In this paper, we approach the problem of FAQ selection by fine-tuning pre-trained language models on the Question Duplication Task (QDT) from Quora.[2] This task aims to identify whether two questions are duplicates, i.e., semantically equivalent (Androutsopoulos and Malakasiotis 2010).

    Although the FAQ selection task shares some commonalities with QDT, the two are different. A FAQ task can indeed be solved by ranking all the FAQs in the collection with a system that computes a semantic similarity score between two questions, i.e., a Paraphrase Identification model. However, there are crucial differences. While QDT requires inferring whether two questions are semantically equivalent, FAQ selection seeks questions that share the same intent and, at the same time, the same answer. Moreover, FAQ selection strongly depends on the domain in which the retrieval system is applied: for example, if a website responds to every technical complaint with “contact us”, there will be many positive pairs that do not share any real answer. Every portal that needs a FAQ similarity system, e.g., online services and e-commerce, requires a different level of detail depending on the service type and its complexity. Table 1 provides some examples taken from the QDT and FAQ datasets to better illustrate the difference.

    One of the largest corpora for fine-tuning on QDT is the well-known Quora dataset, sourced from the homonymous community question answering website. The dataset consists of question pairs labeled as being duplicates or not. However, it is only available in English, preventing its use for building Italian systems.

    In this paper, we propose to adapt Transformer architectures to the task of FAQ selection using machine translation. We first translated the Quora dataset into Italian, and then we trained a state-of-the-art QDT model for Italian. Finally, we tested the adapted QDT model on two FAQ datasets, showing significant improvements over the zero-shot learning baselines (i.e., using no target-domain training data). Moreover, we show that fine-tuning the adapted model on small target data provides a consistent improvement over models that do not exploit our transfer learning approach. Our technique can be seen as an extension of Transfer and Adapt (TandA) (Garg, Vu, and Moschitti 2019), with the difference that the transfer is carried out on a similar, approximate task using translated data, i.e., Approximated machine-Translated TandA (ATTandA).

    The rest of the paper is organized as follows: Section 2 describes similar approaches to cross-lingual transfer learning, Section 3 provides an overview of the available datasets, and Section 4 describes the methodology we developed. Finally, Section 5 summarizes the main results and Section 6 draws the conclusions of this work.

    2. Related Works

    The current state of the art for QDT makes use of pre-trained transformer-based models, e.g., BERT (Devlin et al. 2018), RoBERTa (Liu et al. 2019) or XLNet (Yang et al. 2019). These models have millions of parameters that are trained in a two-step approach: first, they are trained as language models with various losses (e.g., masked language modeling or sentence order prediction) on a large corpus in an unsupervised way, and then they are fine-tuned on the target labeled dataset.

    In transfer learning, a model is transferred (i.e., trained) on data coming from a high-resource task and is then adapted to another, usually more specific, one. All transformer-based models can be seen as transfer learning models: they are first trained on large corpora of unlabeled data and then specialized on a downstream domain. Nonetheless, there are scenarios where data about similar tasks can further improve already strong models.

    Cross-lingual transfer learning (CLTL) is an extension in which data from a high-resource language is used to solve a task in a low-resource language. This technique is sometimes combined with cross-lingual word embedding (CLWE) alignment: the current trend is to align word embeddings so that models focus only on shared, language-independent features, and then apply transfer learning techniques (Lange et al. 2020; Keung, Lu, and Bhardwaj 2020). However, solving a task using data coming from a similar one has different requirements.

    A similar approach to ours was explored by Schuster et al. (2019), who used multilingual data to improve performance on low-resource languages. However, even though they used translated data, they did not explore applying the transferred model to an affine task. Another approach (Do and Gaspers 2019) filters high-quality samples from a high-resource language dataset to train the model in reduced time. The authors claim a significant improvement in the target language and task, even using only a small amount of compute.

    In (Joty et al. 2017), the authors improve question-question similarity performance through an adversarial approach: adversarial training lets them extract language-independent features from a model trained with supervision on a high-resource language, which is then adapted to a low-resource one for testing. Results show important improvements in the target language, even in the zero-shot setting.

    Also, (Wang et al. 2020) provides a complete overview of the common approaches to CLTL. The authors compare (i) joint training, in which a model is trained on multilingual data using both a monolingual and a cross-lingual loss, and (ii) CLWE alignment before training, in which language embeddings are mapped to a shared space before fine-tuning. They find that both methods perform well and that there is no overall winner. Finally, they show that training with both approaches combined outperforms previous state-of-the-art methods.

    3. Datasets

    3.1 Quora Question Pairs

    The Quora dataset is a collection of question pairs for QDT. It contains many semantically equivalent questions that people asked more than once, for example, "What is the most populous state in the USA?" and "Which state in the United States has the most people?". Labels were assigned by human experts; therefore, the dataset is not free from subjective decisions and questionable labels. The dataset contains about 404K question pairs, 37% with a positive label and 63% with a negative one. However, this dataset is not error-free: many ids are associated with more than one question (14K), and many questions are referred to by more than a single id (76K).
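    A minimal sketch of this sanity check, assuming the column layout of the public Quora release (qid1, qid2, question1, question2); the file name is a placeholder.

    ```python
    # Count Quora ids mapped to several question texts, and question texts
    # mapped to several ids. Assumes the standard TSV release layout.
    import pandas as pd

    pairs = pd.read_csv("quora_duplicate_questions.tsv", sep="\t")
    pairs = pairs.dropna(subset=["question1", "question2"])

    id_to_texts, text_to_ids = {}, {}
    for qid_col, q_col in (("qid1", "question1"), ("qid2", "question2")):
        for qid, text in zip(pairs[qid_col], pairs[q_col]):
            id_to_texts.setdefault(qid, set()).add(text)
            text_to_ids.setdefault(text, set()).add(qid)

    ambiguous_ids = sum(len(t) > 1 for t in id_to_texts.values())
    multi_id_questions = sum(len(i) > 1 for i in text_to_ids.values())
    print(f"{ambiguous_ids} ids with several texts, "
          f"{multi_id_questions} texts with several ids")
    ```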

    3.2 FAQ: RDC and LCN

    RDC and LCN are two real-world datasets for FAQ retrieval. They were designed to build the QA component of conversational agent systems in Italian, targeting specific domains. Neither dataset is ready for FAQ retrieval out of the box, so we needed to group questions differently. Given that many questions in RDC share a common answer, we created several examples for the FAQ selection task by clustering questions with respect to the answers. For LCN, since the answers were simply the name of the category in which an answer could be found, we pivoted on the categories to create the clusters.

    To build the examples, we first built clusters of equivalent questions using their gold-standard similarity labels, i.e., the answers or the categories. LCN consists of 388 questions, which we grouped into 24 clusters of different sizes: the smallest contains only two elements, while the largest contains 50. RDC contains 369 entries, which we grouped into 30 clusters with a minimum and maximum size of 1 and 37, respectively.[3]

    Tests will show that LCN is the hardest dataset. The reason is that its clustering pivoted not on the answers, which were not available, but on the categories. As a result, each cluster contains questions that do not share a precise answer but only the same category.

    The transformation of a set of clusters into a training or test set was done with the following algorithm: for N rounds, an element, called the champion, was chosen from each cluster and temporarily removed from it. Each champion was then paired with a random element from every cluster, assigning a positive label when the two belonged to the same cluster. We found that N=5 was a reasonable number of rounds, since more would have led to information repetition.

    Moreover, we needed to create both small training and test sets to measure models’ performance when fine-tuned on the FAQ domain. We could not simply split the pairs described above, since the training and test sets would have shared many sentences. To achieve a perfect separation, 70% of the clusters were used to create the training set, while the remaining 30% were used for the test set.
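    A minimal sketch of the pair generation and the cluster-level split described in the last two paragraphs; the function names, variable names, and toy data are ours, while N=5 and the 70/30 ratio come from the text.

    ```python
    # Champion-based pair generation and cluster-level train/test split.
    import random

    def clusters_to_pairs(clusters, n_rounds=5, seed=0):
        """For each round: pick a champion from every cluster, temporarily
        remove it, then pair it with one random element of each cluster.
        The label is True iff both members come from the same cluster."""
        rng = random.Random(seed)
        pairs = []
        for _ in range(n_rounds):
            for ci, cluster in enumerate(clusters):
                idx = rng.randrange(len(cluster))
                champion = cluster[idx]
                rest = cluster[:idx] + cluster[idx + 1:]  # temporary removal
                for cj, other in enumerate(clusters):
                    pool = rest if ci == cj else other
                    if pool:  # singleton clusters yield no positive pair
                        pairs.append((champion, rng.choice(pool), ci == cj))
        return pairs

    def split_clusters(clusters, train_fraction=0.7, seed=0):
        """Split at the cluster level so train and test share no questions."""
        rng = random.Random(seed)
        shuffled = clusters[:]
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * train_fraction)
        return shuffled[:cut], shuffled[cut:]

    clusters = [["q1a", "q1b", "q1c"], ["q2a", "q2b"], ["q3"]]  # toy stand-in
    train_clusters, test_clusters = split_clusters(clusters)
    train_pairs = clusters_to_pairs(train_clusters)
    test_pairs = clusters_to_pairs(test_clusters)
    ```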

    3.3 FAQ: ItaFAQ

    We built a small FAQ dataset in Italian by scraping popular websites. Then, we asked 10 people with different backgrounds and levels of education to create additional questions similar to those automatically collected; the specific request was to create questions that would have the same or a similar answer. The dataset is released as open source and is available for download.[4] This dataset can be useful for testing an information retrieval system, but it is easier to solve than the previously described RDC and LCN. The main reasons are that (i) humans tend to create new questions that are partially related to the existing ones, and (ii) general FAQ datasets about well-known companies and topics are easier to process than strongly domain-specific data.

    4. ATTandA Approach

    4.1 Machine Translation of Quora

    There are no medium or large-size Italian datasets for QDT or FAQ retrieval; thus, we applied machine translation. We used Microsoft Azure Cognitive Services to translate Quora Question Pairs into Italian. Since the original Quora dataset had some questions repeated across different entries, we followed the approach in (Haponchyk et al. 2018; Bonadiman, Kumar, and Mittal 2019) and grouped all the questions into clusters by means of the transitive property: if a and b are the two questions of a pair with a positive label and Ci is a cluster with a ∈ Ci, then b ∈ Ci. Moreover, if there is a tuple (a, b) with a positive label, a ∈ Ci and b ∈ Cj, then Ci and Cj are merged into Ci ∪ Cj.
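    This transitive grouping is the classic union-find construction; below is a minimal sketch, assuming positive pairs are given as tuples of question strings (all names are ours).

    ```python
    # Group questions into clusters by the transitive closure of positive pairs.
    def cluster_by_transitivity(positive_pairs):
        parent = {}

        def find(x):
            if x not in parent:
                parent[x] = x
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        for a, b in positive_pairs:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb  # merge C_i and C_j into one cluster

        clusters = {}
        for q in parent:
            clusters.setdefault(find(q), []).append(q)
        # Only clusters with at least two members need to be translated.
        return [c for c in clusters.values() if len(c) >= 2]
    ```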

    After that, we translated all the questions of the clusters with at least two members. This allowed us to effectively reduce machine translation costs, because we avoided translating questions that would have appeared only in negative pairs (millions of negative pairs can easily be generated by randomly picking questions from different clusters). We built the transfer dataset by labeling (i) all pairs of questions in the same cluster as positive examples, and (ii) a random number of pairs with members from different clusters as negative examples. We limited the number of the latter to be equal to the number of positive examples.
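    For the translation itself, a hedged sketch of batched calls to the Azure Translator v3 REST API (the service family mentioned above) could look as follows; the key, region, and batch size are placeholders, and the exact setup the authors used is not specified.

    ```python
    # Batch-translate English questions to Italian with Azure Translator v3.
    import requests

    ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"
    HEADERS = {
        "Ocp-Apim-Subscription-Key": "<your-key>",       # placeholder
        "Ocp-Apim-Subscription-Region": "<your-region>",  # placeholder
        "Content-Type": "application/json",
    }

    def translate_to_italian(questions, batch_size=100):
        """Send questions in batches; v3 accepts a JSON array of texts."""
        translated = []
        for i in range(0, len(questions), batch_size):
            body = [{"Text": q} for q in questions[i:i + batch_size]]
            resp = requests.post(
                ENDPOINT,
                params={"api-version": "3.0", "from": "en", "to": "it"},
                headers=HEADERS, json=body, timeout=30)
            resp.raise_for_status()
            translated += [item["translations"][0]["text"]
                           for item in resp.json()]
        return translated
    ```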

    4.2 Transformer architectures

    To reach the highest performance, we developed our models on the current state of the art for QA. We took into consideration the following models (a loading sketch follows the list):

    • Multilingual BERT (mBERT), a BERT model trained on the 104 largest Wikipedias in terms of number of articles. The model contains 177M[5] parameters and has 12 transformer layers (Devlin et al. 2018);

    • Italian BERT[6], a BERT model trained only on Italian text. The version we used was trained on the concatenation of the OSCAR corpus and the Italian OPUS corpus, for a total of 81GB of text. This model features a total of 110M parameters on 12 layers;

    • GilBERTo[7], a RoBERTa model trained on 71GB of lowercase Italian text extracted from the OSCAR corpus. The authors state that this model applies whole-word masking (WWM), as in (Martin et al. 2020), instead of masking at the sub-word level, as in the original BERT. This model has a total of 111M parameters.
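    A minimal loading sketch with the Hugging Face Transformers library; the hub ids below are our best guesses for the checkpoints the paper describes, not ids given by the authors.

    ```python
    # Load a pre-trained checkpoint as a cross-encoder for pair classification.
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    CHECKPOINTS = {  # assumed hub ids, to be verified
        "mBERT": "bert-base-multilingual-cased",
        "Italian BERT": "dbmdz/bert-base-italian-xxl-cased",
        "GilBERTo": "idb-ita/gilberto-uncased-from-camembert",
    }

    name = CHECKPOINTS["Italian BERT"]
    tokenizer = AutoTokenizer.from_pretrained(name)
    # Two labels: duplicated / not duplicated (or FAQ match / no match).
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

    # A question pair is fed as a single cross-encoded sequence.
    inputs = tokenizer("Posso cambiare la password?",
                       "È possibile modificare le credenziali?",
                       return_tensors="pt")
    ```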

    4.3 Cross-Domain training

    We aim at exploiting data similar to the target task, possibly coming from a different language, to train models for our FAQ target task. Our approach can be seen as an extension of TandA by (Garg, Vu, and Moschitti 2019), which consists of two-step fine-tuning: first, the model is transferred to a general QA task with a huge dataset, and then it is adapted to a smaller, more specific QA benchmark such as WikiQA. They showed that a transfer step can improve final performance when the source and target tasks are similar. We extend this idea by creating our transfer dataset through machine translation, as described above. We call our approach ATTandA (Approximated machine-Translated TandA).
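    A compact sketch of the two steps, assuming `model` is a pair-classification transformer (e.g., from the loading sketch above) and that `translated_quora` and `faq_train_split` are tokenized PyTorch datasets whose items include the pair label; the hyper-parameters come from the captions in Section 5, everything else is ours.

    ```python
    # Two-step fine-tuning: transfer on translated Quora, then adapt in-domain.
    import torch
    from torch.utils.data import DataLoader

    def fine_tune(model, dataset, batch_size, lr=1e-5, epochs=1, device="cuda"):
        model.to(device).train()
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
        for _ in range(epochs):
            for batch in loader:
                batch = {k: v.to(device) for k, v in batch.items()}
                loss = model(**batch).loss  # cross-entropy on the pair label
                loss.backward()
                optimizer.step()
                optimizer.zero_grad()
        return model

    # Step 1 (transfer): one epoch on machine-translated Quora, batch size 64.
    model = fine_tune(model, translated_quora, batch_size=64)
    # Step 2 (adapt): fine-tune on the small in-domain split, batch size 32.
    model = fine_tune(model, faq_train_split, batch_size=32)
    ```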

    5. Results

    This section shows the results of testing different models on the FAQ retrieval task. We use Precision at 1 (P@1), as we mainly need to measure whether the top returned FAQ is correct. LCNtrain, LCNtest, RDCtrain and RDCtest are the splits of LCN and RDC obtained by dividing the set of clusters.
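    A minimal sketch of P@1 as we read it here: the fraction of test queries for which the top-scored candidate FAQ is a correct one. The data layout (a score and a boolean relevance label per candidate, per query) is an assumption of ours.

    ```python
    # Precision at 1 over a list of queries.
    def precision_at_1(queries):
        """`queries` is a list of (scores, labels) pairs with parallel lists:
        one model score and one relevance label per candidate FAQ."""
        hits = 0
        for scores, labels in queries:
            best = max(range(len(scores)), key=scores.__getitem__)
            hits += labels[best]
        return 100.0 * hits / len(queries)

    # Example: the top candidate is correct for the first query only -> 50.0.
    print(precision_at_1([([0.9, 0.2], [True, False]),
                          ([0.3, 0.7], [True, False])]))
    ```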

    We start by comparing the available transformer-based models. Table 2 shows that Italian BERT outperforms the other models in most tests. This is not a surprise, since it is specialized in the Italian language, it preserves the case of the input text, and it was trained on the most extensive corpus. GilBERTo also performs well, but RoBERTa's improvements are not enough to overcome its smaller training set and case-insensitive tokenizer.

    Table 2: Comparison of different transformer-based models. Each model in the bottom half of the table was trained on Quora with the same hyper-parameters (batch size of 64 and Adam optimizer with a learning rate of 1e-05) for a single epoch. Reported metrics are the average over 8 runs with different seeds and splits.

    | Models | Train | Test | MRR | P@1 |
    |--------|-------|------|-----|-----|
    | mBERT | - | LCNtest | 45.4 | 25.5 |
    | IT BERT | - | LCNtest | 56.6 | 39.9 |
    | GilBERTo | - | LCNtest | 47.0 | 29.1 |
    | mBERT | - | RDCtest | 59.4 | 41.0 |
    | IT BERT | - | RDCtest | 65.1 | 53.2 |
    | GilBERTo | - | RDCtest | 67.9 | 56.1 |
    | mBERT | Quora | LCNtest | 64.3 | 49.2 |
    | IT BERT | Quora | LCNtest | 75.1 | 61.4 |
    | GilBERTo | Quora | LCNtest | 72.4 | 58.2 |
    | mBERT | Quora | RDCtest | 88.4 | 81.1 |
    | IT BERT | Quora | RDCtest | 91.1 | 84.8 |
    | GilBERTo | Quora | RDCtest | 89.7 | 83.3 |

    Once we established that the best pre-trained model is Italian BERT, as it shows the highest scores in 3 comparisons out of 4, we tested different transfer methods on the LCN and RDC splits. We compare the performance of Italian BERT in two scenarios: (i) the model is directly fine-tuned on the target domain, and (ii) the model is first transferred on Quora and then fine-tuned on the target domain (ATTandA). We also report the results of the model without in-domain fine-tuning (zero-shot tests), taken from Table 2.

    Figure 1 reports the P@1 on the test sets during the first five training epochs. This does not affect the consistency of the results, since the comparison covers the whole fine-tuning phase. All the plots show that first transferring the model on Quora increases P@1, especially in the early steps. Also, in this setting, training for more than two epochs did not provide further improvement and could lead to over-fitting; this is intuitive, as the training and test splits are small and contain repeated information. There is no clear reason not to perform a transfer step, since the resulting performance is at least equal and the computational effort of training for a single epoch on Quora is negligible.

    Figure 1: Above: trend of the P@1 when transferring on Quora and validating on ItaFAQ. Below: comparison of zero-shot learning (no training on Quora, RDCtrain, or LCNtrain), few-shot learning (in-domain adaptation on RDCtrain or LCNtrain only), transfer + zero-shot learning (transfer on Quora, then direct testing), and transfer + few-shot learning (transfer on Quora, then in-domain adaptation on RDCtrain or LCNtrain). The results are the average of 8 runs with different seeds and dataset splits. All runs used the same hyper-parameters: batch size of 32 and Adam optimizer with a learning rate of 1e-05. Notice the different scales on the y-axes.

    6. Conclusion

    We explored transfer learning in a typical industrial scenario where little (or no) data is available in the target language. We showed that it is possible to use machine-translated data to improve performance on a closely related task. We suspect that if the tasks had been more similar, for example Question Answering and FAQ selection, the performance gain would have been even larger. However, this was a real-world scenario where the target datasets were used in production on real websites, and their size and quality were limited. In this setting, applying a transfer phase improves the retrieval of similar questions, and the transfer step is a low-cost operation compared to pre-training.

    Bibliography

    Ion Androutsopoulos and Prodromos Malakasiotis. 2010. “A Survey of Paraphrasing and Textual Entailment Methods.” Journal of Artificial Intelligence Research 38: 135–87. https://doi.org/10.1613/jair.2985.

    Daniele Bonadiman, Anjishnu Kumar, and Arpit Mittal. 2019. “Large Scale Question Paraphrase Retrieval with Smoothed Deep Metric Learning.” In Proceedings of the 5th Workshop on Noisy User-Generated Text (W-NUT 2019), 68–75.

    Annalina Caputo, Marco de Gemmis, Pasquale Lops, Francesco Lovecchio, Vito Manzari, and Acquedotto Pugliese AQP SpA. 2016. “Overview of the EVALITA 2016 Question Answering for Frequently Asked Questions (QA4FAQ) Task.” In Proceedings of the Final Workshop 7 December 2016, Naples, 124.

    Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. “BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding.” arXiv preprint arXiv:1810.04805.

    Quynh Do and Judith Gaspers. 2019. “Cross-Lingual Transfer Learning with Data Selection for Large-Scale Spoken Language Understanding.” In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 1455–60. https://doi.org/10.18653/v1/D19-1153.

    Siddhant Garg, Thuy Vu, and Alessandro Moschitti. 2019. “TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection.”

    Iryna Haponchyk, Antonio Uva, Seunghak Yu, Olga Uryupina, and Alessandro Moschitti. 2018. “Supervised Clustering of Questions into Intents for Dialog System Applications.” In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2310–21. Brussels, Belgium: Association for Computational Linguistics. https://doi.org/10.18653/v1/D18-1254.

    Shafiq Joty, Preslav Nakov, Lluís Màrquez, and Israa Jaradat. 2017. “Cross-Language Learning with Adversarial Neural Networks.” In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), 226–37. Vancouver, Canada: Association for Computational Linguistics. https://doi.org/10.18653/v1/K17-1024.

    Phillip Keung, Yichao Lu, and Vikas Bhardwaj. 2020. “Adversarial Learning with Contextual Embeddings for Zero-Resource Cross-Lingual Classification and NER.” http://arxiv.org/abs/1909.00153.

    Lukas Lange, Anastasiia Iurshina, Heike Adel, and Jannik Strötgen. 2020. “Adversarial Alignment of Multilingual Models for Extracting Temporal Expressions from Text.” http://arxiv.org/abs/2005.09392.

    Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. “RoBERTa: A Robustly Optimized BERT Pretraining Approach.” arXiv preprint arXiv:1907.11692.

    Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric de la Clergerie, Djamé Seddah, and Benoît Sagot. 2020. “CamemBERT: A Tasty French Language Model.” In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.acl-main.645.

    Tal Schuster, Ori Ram, Regina Barzilay, and Amir Globerson. 2019. “Cross-Lingual Alignment of Contextual Word Embeddings, with Applications to Zero-Shot Dependency Parsing.” http://arxiv.org/abs/1902.09492.

    Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. “Attention Is All You Need.” http://arxiv.org/abs/1706.03762.

    Zirui Wang, Jiateng Xie, Ruochen Xu, Yiming Yang, Graham Neubig, and Jaime Carbonell. 2020. “Cross-Lingual Alignment vs. Joint Training: A Comparative Study and a Simple Unified Framework.” http://arxiv.org/abs/1910.04708.

    Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R. Salakhutdinov, and Quoc V. Le. 2019. “XLNet: Generalized Autoregressive Pretraining for Language Understanding.” In Advances in Neural Information Processing Systems, 5753–63.

    Footnotes

    [1] Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

    [2] https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs

    [3] There is an Italian FAQ dataset called QA4FAQ, but it is not suitable for question similarity since its annotations are not available. http://qa4faq.github.io

    [4] The dataset can be downloaded at https://github.com/lucadiliello/italian-faq-dataset

    [5] mBERT is bigger because its vocabulary is considerably larger than that of monolingual models.

    [6] Italian BERT models and code are available at https://github.com/dbmdz/berts

    [7] GilBERTo models and code are available at https://github.com/idb-ita/GilBERTo

    Authors

    • Luca Di Liello

      DISI, University of Trento, Italy – luca.diliello@unitn.it

    • Daniele Bonadiman

      DISI, University of Trento, Italy – d.bonadiman@unitn.it

    • Cristina Giannone

      Almawave Srl., Italy – c.giannone@almawave.it

    • Andrea Favalli

      Almawave Srl., Italy – a.favalli@almawave.it

    • Raniero Romagnoli

      Almawave Srl., Italy – r.romagnoli@almawave.it

    • Alessandro Moschitti

      DISI, University of Trento, Italy – alessandro.moschitti@unitn.it
