Investigating Continued Pretraining for Zero-Shot Cross-Lingual Spoken Language Understanding
p. 205-213
Abstract
Spoken Language Understanding (SLU) in task-oriented dialogue systems involves both intent classification (IC) and slot filling (SF). The de facto method for zero-shot cross-lingual SLU consists of fine-tuning a pretrained multilingual model on English labeled data before evaluating it on unseen languages. However, recent studies show that adding a second pretraining stage (continued pretraining) can improve performance in certain settings. This paper investigates the effectiveness of continued pretraining on unlabeled spoken language data for zero-shot cross-lingual SLU. We demonstrate that this relatively simple approach benefits both the SF and IC tasks across 8 target languages, especially those written in Latin script. We also find that a discrepancy between the languages used during pretraining and fine-tuning may introduce training instability, which can be alleviated through code-switching.1
1. Introduction
In task-oriented dialogue systems, a Spoken Language Understanding (SLU) component typically involves intent classification (IC) and slot filling (SF) tasks (Tur and De Mori 2011). For example, in “Show me the fares for Delta flights from Dallas to San Francisco”, the intent is to ask for an airfare, and the corresponding slots are Delta (airline-name), Dallas (city-origin), and San Francisco (city-destination). Scaling SLU models to other languages is still challenging, especially when there is limited or no labeled data available in the target language (Louvan and Magnini 2020).
To approach this problem, previous work has studied IC and SF in a zero-shot cross-lingual setting (Schuster et al. 2019; Upadhyay et al. 2018; Xu, Haider, and Mansour 2020), where it is assumed that a labeled dataset is only available for a high-resource language (e.g., English). With the rise of pretrained multilingual language models (LMs) (Devlin et al. 2019; Lample and Conneau 2019), the most common approach is to fine-tune the pretrained multilingual model on the English labeled data and then evaluate it directly on target-language data that is not seen during fine-tuning.
While direct fine-tuning serves as a strong baseline, pretrained LMs are not necessarily universal and may need domain-specific adaptation. Recent work has shown that adding a second pretraining stage (or continued pretraining) before fine-tuning can have a positive impact on model performance (Beltagy, Lo, and Cohan 2019; Lee et al. 2020; Gururangan et al. 2020). During continued pretraining, we continue training the pretrained language model on a domain-specific or task-specific unlabeled dataset, with the same masked language model objective. This stage helps alleviate the domain mismatch between the original pretraining data and the target task data. By continuing pretraining on domain-specific unlabeled data, the model acquires prior knowledge that is expected to be helpful in the fine-tuning stage. This approach has shown promising results on text classification, typically in English. However, it remains unclear whether it is applicable in the context of zero-shot cross-lingual SLU.
In contrast to previous work, which has mostly focused on English classification tasks, we investigate the effectiveness of continued pretraining for zero-shot cross-lingual SLU on eight target languages. Our study reveals that the existing continued pretraining method (Gururangan et al. 2020), which is successful on English text classification tasks, does not always generalize to the context of zero-shot cross-lingual SLU. We focus on the following research questions:
(Q1) Is continued pretraining effective for zero-shot cross-lingual SLU tasks?
Our experiments on the MultiATIS++ dataset (Xu, Haider, and Mansour 2020) reveal that incorporating continued pretraining on intermediate English data can improve zero-shot SLU performance over direct fine-tuning for all languages. The performance gain is especially evident for languages with a Latin-script writing system. The benefit of continued pretraining diminishes as we inject cross-lingual supervision in the fine-tuning stage, even with simple data augmentation through code-switching.
(Q2) What are the factors that influence the effectiveness of the continued pretraining stage?
Using the target language for continued pretraining before fine-tuning on English can be detrimental to the overall performance. However, this can be largely alleviated by code-switching the fine-tuning data. We also observe that performance improvements are not obtained by merely adding more continued pretraining data; higher domain similarity between the continued pretraining data and the fine-tuning data is more important.
2. Continued Pretraining in Zero-Shot SLU
Figure 1 compares the standard direct fine-tuning approach with the continued pretraining approach. The main difference is the additional intermediate pretraining stage (second block in Figure 1), in which we continue training the model on intermediate unlabeled data using the same masked language modeling objective. As the original pretraining data is relatively far from the task-oriented dialogues used in SLU, we hypothesize that continued pretraining can alleviate the domain mismatch and instill better prior knowledge that will be useful during fine-tuning.
Intermediate Data for Continued Pretraining
We define several criteria for the intermediate data used in the continued pretraining stage. First, its domain should be relatively close to the target dataset. We interpret the term domain as a multidimensional variety space (Ramponi and Plank 2020; Plank 2016): a domain comprises multiple aspects (style, topic, and genre (van der Wees et al. 2015)) that contribute to text variation. Using this perspective and considering the target domain of a task-oriented dialogue system, we require that the intermediate data consist of text that exhibits a spoken, dialogue-like style and covers a broad range of topics. Second, the dataset should be several orders of magnitude larger in size than the target task dataset. Finally, it must be available in many languages to support our study of continued pretraining in the target language.
3. Experimental Setup
In this section, we describe the experimental settings related to models, evaluation metrics, and datasets.
3.1 Models
For all of our experiments, we use a transformer-based model (Vaswani et al. 2017), namely multilingual BERT (mBERT) (Devlin et al. 2019), as the pretrained model. This model was pretrained on Wikipedia articles covering 104 languages, and we use the bert-base-multilingual-cased version.
Continued Pretraining
For the continued pretraining stage, we further train mBERT on unlabeled intermediate data using only the Masked Language Modeling (MLM) objective for 12.5K steps, mostly adopting the hyperparameters of Gururangan et al. (2020). We compare the following configurations: (i) DAPT-Tgt: continued domain-adaptive pretraining (DAPT) of mBERT on intermediate unlabeled data in the target language; (ii) DAPT-En: continued DAPT of mBERT on intermediate unlabeled data in English.
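The continued pretraining stage can be reproduced with standard tooling. The following is a minimal sketch, assuming the HuggingFace Transformers Trainer API and a plain-text file with one intermediate sentence per line; the file names, maximum sequence length, and learning rate are illustrative assumptions rather than the exact scripts used in our experiments.

```python
# Minimal sketch of the DAPT stage: continue MLM training of mBERT on
# intermediate unlabeled text. Step count and batch size follow Section 3;
# paths, max_length, and learning rate are illustrative.
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")

# Intermediate data: e.g., 100K OpenSubtitles sentences in English (DAPT-En)
# or in the target language (DAPT-Tgt), one sentence per line.
with open("opensub_en_100k.txt", encoding="utf-8") as f:
    sentences = [line.strip() for line in f if line.strip()]
encodings = tokenizer(sentences, truncation=True, max_length=128)
train_dataset = [{"input_ids": ids} for ids in encodings["input_ids"]]

# Dynamic masking with the standard 15% MLM probability.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                           mlm=True, mlm_probability=0.15)

args = TrainingArguments(output_dir="mbert-dapt-en",
                         max_steps=12_500,              # continued pretraining budget
                         per_device_train_batch_size=16,
                         learning_rate=5e-5,            # assumed default MLM rate
                         save_steps=12_500)
Trainer(model=model, args=args, train_dataset=train_dataset,
        data_collator=collator).train()

# The adapted checkpoint becomes the starting point of the fine-tuning stage.
model.save_pretrained("mbert-dapt-en")
tokenizer.save_pretrained("mbert-dapt-en")
```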
Fine-Tuning
As the baseline model without any adaptation (No DAPT), we use the joint IC and SF model architecture of Chen, Zhuo, and Wang (2019). This model achieves state-of-the-art results for IC and SF (Louvan and Magnini 2020) and is often used as a baseline in recent zero-shot cross-lingual SLU studies (Xu, Haider, and Mansour 2020; Li et al. 2021). The model is trained on the English dataset; since the setup is zero-shot cross-lingual, we use the model from the last training epoch for zero-shot evaluation, following Keung et al. (2020). We evaluate the effectiveness of each DAPT configuration when applied to the following fine-tuning scenarios:
Fine-tuning on English (FineTune-En). This is the standard fine-tuning scenario: we take mBERT, either with DAPT or without (No DAPT), fine-tune it on the English IC and SF data, and then perform zero-shot prediction on all target-language data.
Fine-tuning on English code-switched data (FineTune-CS). In this scenario, we perform data augmentation on the English fine-tuning dataset via code-switching. We follow the approach of Qin et al. (2020) and replace English words with their translations in the target language using the PanLex bilingual dictionary (Kamholz et al. 2014). Given a training batch, we randomly sample the sentences and tokens to replace. We use the same hyperparameters as Qin et al. (2020), namely a sentence ratio and a token ratio that control the word replacement. We include FineTune-CS because we want to study the benefits of DAPT when stronger cross-lingual supervision is added in the fine-tuning stage.
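As an illustration, the augmentation can be sketched as follows. This is a minimal sketch, assuming the bilingual dictionary has already been loaded into a plain {English word: [translations]} mapping; the function and variable names are ours, not those of the CoSDA-ML codebase.

```python
import random

def code_switch_batch(batch, bi_dict, sent_ratio=1.0, token_ratio=0.9):
    """Replace English tokens with dictionary translations, CoSDA-ML style.

    `batch` is a list of token lists; `bi_dict` is assumed to be a plain
    {english_word: [translations]} mapping built from PanLex. Replacement is
    one-to-one per token, so slot labels stay aligned with the tokens."""
    augmented = []
    for tokens in batch:
        if random.random() > sent_ratio:      # leave some sentences untouched
            augmented.append(tokens)
            continue
        switched = []
        for tok in tokens:
            translations = bi_dict.get(tok.lower())
            if translations and random.random() < token_ratio:
                switched.append(random.choice(translations))
            else:
                switched.append(tok)          # no dictionary entry or not sampled
        augmented.append(switched)
    return augmented

# Toy example switching toward German with the ratios used in our experiments.
toy_dict = {"show": ["zeige"], "flights": ["Flüge"], "from": ["von"], "to": ["nach"]}
print(code_switch_batch([["show", "me", "flights", "from", "Dallas"]],
                        toy_dict, sent_ratio=1.0, token_ratio=0.9))
```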
We did not experiment with more complex models, as our main goal is to investigate the effect of the continued pretraining stage rather than to achieve state-of-the-art performance per se.
Implementation and Evaluation Metrics
For the IC and SF models, we adapt the publicly available implementation of Qin et al. (2020) (https://github.com/kodenii/CoSDA-ML). The sentence and token replacement ratios for code-switching are set to 1.0 and 0.9, respectively. For training, the learning rate is set to 10−5, the batch size to 32, and the number of epochs to 20. We did not perform extensive hyperparameter tuning: since this is a zero-shot cross-lingual setting where the target dataset is not available, we use the same hyperparameters as Xu et al. (2020). For continued pretraining, we use the language modeling script from HuggingFace (Wolf et al. 2019). We use bert-base-multilingual-cased with a hidden size of 768 and a dropout probability of 0.1. The number of training steps is 12,500, following Gururangan et al. (2020), and the batch size is set to 16.
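For reference, the joint architecture can be summarized with the following minimal sketch in the spirit of Chen, Zhuo, and Wang (2019): the pooled [CLS] representation feeds an intent classifier, the token representations feed a slot tagger, and the two cross-entropy losses are summed. The class names and counts are illustrative assumptions, not the exact implementation we adapt.

```python
import torch.nn as nn
from transformers import BertModel

class JointIcSf(nn.Module):
    """Joint IC+SF head on top of (possibly DAPT-adapted) mBERT: the pooled
    [CLS] vector feeds the intent classifier, token vectors feed the slot
    tagger, and the two cross-entropy losses are summed."""

    def __init__(self, model_name="bert-base-multilingual-cased",
                 num_intents=24, num_slots=167, dropout=0.1):
        # num_slots=167 is illustrative (e.g., BIO tags over 83 slot types plus O).
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)   # or a DAPT checkpoint
        hidden = self.encoder.config.hidden_size               # 768 for mBERT base
        self.dropout = nn.Dropout(dropout)
        self.intent_head = nn.Linear(hidden, num_intents)
        self.slot_head = nn.Linear(hidden, num_slots)

    def forward(self, input_ids, attention_mask,
                intent_labels=None, slot_labels=None):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        intent_logits = self.intent_head(self.dropout(out.pooler_output))
        slot_logits = self.slot_head(self.dropout(out.last_hidden_state))
        loss = None
        if intent_labels is not None and slot_labels is not None:
            ce = nn.CrossEntropyLoss(ignore_index=-100)  # -100 masks padding/subword positions
            loss = (ce(intent_logits, intent_labels)
                    + ce(slot_logits.view(-1, slot_logits.size(-1)),
                         slot_labels.view(-1)))
        return loss, intent_logits, slot_logits
```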
3.2 Dataset
SF and IC Dataset
We use the MultiATIS++ dataset (Xu, Haider, and Mansour 2020), which covers nine languages (Table 1). The dataset is derived from the original English ATIS dataset (Hemphill, Godfrey, and Doddington 1990), widely used as a benchmark for IC and SF in task-oriented dialogue systems. Utterances come from conversations in which a user asks a system for flight information.
Table 1: MultiATIS++ (Xu, Haider, and Mansour 2020) statistics
Language | #train / #dev / #test | #slot | #intent |
English (EN) | 4.4K / 490 / 893 | 83 | 24 |
German (DE) | 4.4K / 490 / 892 | 83 | 24 |
Spanish (ES) | 4.4K / 490 / 893 | 83 | 24 |
French (FR) | 4.4K / 490 / 893 | 83 | 24 |
Portuguese (PT) | 4.4K / 489 / 892 | 83 | 24 |
Hindi (HI) | 1.4K / 160 / 888 | 74 | 22 |
Japanese (JA) | 4.4K / 490 / 886 | 83 | 24 |
Chinese (ZH) | 4.4K / 490 / 893 | 83 | 24 |
Turkish (TR) | 0.6K / 60 / 715 | 70 | 21 |
Continued Pretraining Dataset
We use the OpenSubtitles (OpenSub) dataset (Lison and Tiedemann 2016) (Table 2) for the continued pretraining stage, for several reasons. First, the dataset is constructed from movie and TV subtitles, containing spoken language in dialogue settings and covering a broad range of topics. Second, OpenSub covers all the languages we use in the downstream tasks, which enables us to evaluate not only DAPT-En but also DAPT-Tgt. Third, the dataset is large, which makes it well suited for continued pretraining: typically, the dataset used for continued pretraining is larger than that used for fine-tuning. For our experiments, we randomly sampled 100K sentences for each language in OpenSub, resulting in a dataset around 20 times larger than the downstream task dataset.
Table 2: OpenSub (Lison and Tiedemann 2016) dataset statistics. Each language has 100K utterances.
Language | Total Tokens |
English (EN) | 734,302 |
German (DE) | 691,039 |
Spanish (ES) | 711,264 |
French (FR) | 739,551 |
Portuguese (PT) | 676,789 |
Hindi (HI) | 688,675 |
Japanese (JA) | 747,780 |
Chinese (ZH) | 611,700 |
Turkish (TR) | 554,709 |
4. Results
The main goal of our experiments is to answer research question (Q1). Table 3 compares the zero-shot performance for SF and IC across languages. In terms of language (by column in Table 3), we observe that every language improves over No DAPT in at least one DAPT setting, suggesting that DAPT is effective across languages. Looking at the results per task, SF benefits from either DAPT-En or DAPT-Tgt for German, Spanish, French, Portuguese, and Turkish, which are all languages with a Latin-script writing system. For these languages, the margin obtained from DAPT when fine-tuning on English (FineTune-En) is higher than when we apply DAPT to code-switched data (FineTune-CS). The margin of DAPT when applied to FineTune-CS diminishes because FineTune-CS uses a stronger supervision signal in the fine-tuning stage, thus providing a higher baseline. For languages with a non-Latin-script writing system, continued pretraining is less useful; we only observe a marginal improvement on Japanese when applying DAPT-En with FineTune-En. Similar to Lauscher et al. (2020), we believe that performance is also affected by typological proximity between languages (such as subject-verb-object ordering or phonological features) or by other aspects related to the original size of each language's mBERT pretraining data. We leave this for future work.
DAPT is less effective for IC than for SF. The only language that consistently benefits from continued pretraining in both fine-tuning scenarios is Turkish. We find that it is harder to improve IC performance for languages with Latin script through DAPT because the baseline is already relatively high; a stronger supervision signal would thus be needed. The performance gain is small even for those languages that do benefit from DAPT. We also observe that using different languages in the continued pretraining and fine-tuning stages (DAPT-Tgt followed by FineTune-En) may hamper performance.
5. Analysis and Discussion
To answer research question (Q2), we analyze our results focusing on the performance variation observed when using different languages for DAPT and fine-tuning (Section 5.1), and on the effect of the domain distribution of different sources for DAPT-En (Section 5.2).
5.1 Performance Variation when Applying DAPT
As noted in Section 4, there are cases where performance drops when we combine DAPT-Tgt with FineTune-En, especially for IC. This behaviour holds even for languages relatively close to English, such as German and French. One possible reason for the drop in accuracy is that the language mismatch introduces instability during fine-tuning. Our post-hoc analysis shows that the target-language performance on the dev set has a large deviation during training and keeps fluctuating even after the English dev performance has stabilized. This observation resonates with a previous study by Keung et al. (2020), which shows that, for zero-shot text classification, English dev performance often does not correlate with that of the target language. Combining DAPT-Tgt with FineTune-En accentuates this disagreement between the English and target-language dev sets. Figure 2 compares the IC performance on French during training across continued pretraining strategies when fine-tuning on English. For the SF task, however, we do not observe a large performance variation even with a language mismatch; this might indicate that text classification is more susceptible to instability than sequence tagging. The variability caused by DAPT-Tgt is largely alleviated when we use DAPT-En. For the FineTune-CS scenario, the system is relatively stable even when combined with DAPT-Tgt or DAPT-En.
5.2 Domain Relevance for DAPT-En
We aim to investigate whether the improvement from continued pretraining comes from the domain relevance of the intermediate data. For this purpose, we selected a few written-text datasets, each focused on a specific topic, instead of spoken-language data. Specifically, we use the European Medicines Agency (EMEA) and European Central Bank (ECB) corpora from Tiedemann (2012). EMEA contains articles about human, veterinary, or herbal medicines extracted from the EMEA website. ECB contains financial documents extracted from the website and documentation of the European Central Bank. To check that EMEA and ECB are more distant in domain from MultiATIS than OpenSub, we compute a Jensen-Shannon divergence (JSD) measure over term distributions (Dai et al. 2020; Ruder and Plank 2017). We compute the JSD between the MultiATIS English dataset used for fine-tuning and each English intermediate dataset. Based on this measure, EMEA and ECB are indeed more distant from MultiATIS than OpenSub (Table 4).
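A minimal sketch of this kind of term-distribution comparison is shown below, assuming whitespace-tokenized corpora and add-one smoothing; the exact preprocessing and the similarity transformation reported in Table 4 (where higher values indicate closer domains) may differ from this illustration.

```python
from collections import Counter
import numpy as np
from scipy.spatial.distance import jensenshannon

def term_distribution(sentences, vocab):
    """Unigram probabilities over a shared vocabulary (add-one smoothing is
    an assumption about the exact preprocessing)."""
    counts = Counter(tok for sent in sentences for tok in sent.lower().split())
    freqs = np.array([counts[w] + 1 for w in vocab], dtype=float)
    return freqs / freqs.sum()

def jsd_similarity(corpus_a, corpus_b):
    """JSD-based domain similarity between two corpora: one minus the
    Jensen-Shannon divergence of their term distributions (illustrative
    variant; higher means the domains are closer)."""
    vocab = sorted({tok for sent in corpus_a + corpus_b for tok in sent.lower().split()})
    p = term_distribution(corpus_a, vocab)
    q = term_distribution(corpus_b, vocab)
    # scipy returns the Jensen-Shannon *distance* (the square root of the
    # divergence), so square it to obtain the divergence itself.
    return 1.0 - jensenshannon(p, q, base=2) ** 2

# e.g., compare jsd_similarity(multiatis_en_train, opensub_en)
#            to jsd_similarity(multiatis_en_train, emea_en)
```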
Table 4: Domain similarity between MultiATIS and each of the intermediate datasets
 | OpenSub | EMEA | ECB |
JSD | 0.419 | 0.391 | 0.397 |
Table 5: Comparison of SF performance with different intermediate data
Lang. | No DAPT | ΔDAPT-En (OpenSub) | ΔDAPT-En (EMEA) | ΔDAPT-En (ECB) |
DE | 65.3 | +2.1 | −2.5 | −9.5 |
ES | 71.3 | +0.9 | +0.9 | +1.3 |
FR | 64.0 | +5.9 | +2.0 | +0.7 |
PT | 61.9 | +1.4 | −0.3 | −9.1 |
Avg | | +2.5 | +0.005 | −4.1 |
For each intermediate dataset, we randomly sample 100K sentences and use them for continued pretraining. Table 5 compares the SF performance of DAPT-En with FineTune-En using OpenSub, EMEA, and ECB. We focus on the Indo-European languages that mostly benefit from DAPT on SF (Table 3). Overall, we see that DAPT using OpenSub obtains improvements over No DAPT in all cases. DAPT using EMEA and ECB performs worse than OpenSub in most cases; for German and Portuguese, DAPT with ECB even obtains substantially lower performance than No DAPT. However, there are cases where EMEA or ECB match or even outperform OpenSub, e.g., for Spanish. These cases indicate that performing data selection before continued pretraining could help construct a more effective DAPT dataset. It would also be interesting to observe how continued pretraining performs with smaller but more task-relevant unlabeled pretraining data. We leave this possibility for future work.
6. Related Work
Zero-Shot Cross-Lingual SLU
Before the advent of pretrained multilingual transformer models, most approaches relied on pretrained cross-lingual embeddings to perform zero-shot SLU. Upadhyay et al. (2018) use cross-lingual embeddings (Bojanowski et al. 2017), while Schuster et al. (2019) use multilingual CoVe embeddings from a pretrained multilingual bi-LSTM encoder originally trained for Neural Machine Translation (NMT). Liu et al. (2019) leverage transferable latent variables to improve sentence representations across languages. More recently, as pretrained multilingual transformer models have shown potential in zero-shot settings, most approaches focus on improving their multilingual representations through augmentation and alignment methods. Qin et al. (2020) propose multilingual code-switching based on a bilingual dictionary to improve mBERT's multilingual representation. Xu et al. (2020) introduce a soft alignment of slots between English and the target language produced by a machine translation system, which eliminates the need for an annotation projection pipeline. Kulshreshtha et al. (2020) study the effect of various cross-lingual alignment methods on mBERT representations.
Continued Pre-training
Domain adaptation is a long-studied problem in the NLP community (Daumé III 2007; Blitzer, Dredze, and Pereira 2007), in which data in the target domain is assumed to be hard to obtain while being abundant in source domains. Continued pretraining, where the model is trained on relevant data using the same pretraining objective, is used to mitigate the distribution mismatch between the pretraining and the fine-tuning data in terms of domain (Logeswaran et al. 2019; Han and Eisenstein 2019; Gururangan et al. 2020; Beltagy et al. 2019), task (Gururangan et al. 2020), and language (Pfeiffer et al. 2020). A complementary approach performs a first fine-tuning on related auxiliary tasks (for which training data are easy to obtain) before the final fine-tuning on the downstream task (Arase and Tsujii 2019; Garg et al. 2020; Khashabi et al. 2020). Our work is in line with Gururangan et al. (2020), as we further investigate the effectiveness of continued pretraining in the context of zero-shot cross-lingual SLU.
7. Conclusion
We systematically studied the effectiveness of continued pretraining of a multilingual model on intermediate English unlabeled spoken language data for zero-shot cross-lingual tasks, namely intent classification and slot filling, on 8 languages. Our results show that the domain knowledge learned in English is transferable to other languages. The gain from continued pretraining diminishes as we inject cross-lingual supervision in the fine-tuning stage. Several factors influence the effectiveness of continued pretraining: (i) using different languages for pretraining and fine-tuning can hamper performance and introduce instability in model training, which can be alleviated with code-switching; (ii) domain similarity is important: intermediate data that is more similar, in terms of data distribution, to the target dataset yields better performance.
Bibliography
Yuki Arase and Jun’ichi Tsujii. 2019. Transfer fine-tuning: A BERT case study. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5393–5404, Hong Kong, China, November. Association for Computational Linguistics.
Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. “SciBERT: A Pretrained Language Model for Scientific Text.” In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 3615–20. Hong Kong, China: Association for Computational Linguistics. https://doi.org/10.18653/v1/D19-1371.
John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, Bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 440–447, Prague, Czech Republic, June. Association for Computational Linguistics.
P. Bojanowski, E. Grave, Armand Joulin, and Tomas Mikolov. 2017. “Enriching Word Vectors with Subword Information.” Transactions of the Association for Computational Linguistics 5: 135–46.
Qian Chen, Zhu Zhuo, and Wen Wang. 2019. “BERT for Joint Intent Classification and Slot Filling.” ArXiv abs/1902.10909.
Hal Daumé III. 2007. “Frustratingly Easy Domain Adaptation.” In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, 256–63. Prague, Czech Republic: Association for Computational Linguistics. https://www.aclweb.org/anthology/P07-1033.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. “BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding.” In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), edited by Jill Burstein, Christy Doran, and Thamar Solorio, 4171–86. Association for Computational Linguistics. https://doi.org/10.18653/v1/n19-1423.
Siddhant Garg, Thuy Vu, and Alessandro Moschitti. 2020. TANDA: Transfer and adapt pre-trained transformer models for answer sentence selection. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):7780–7788, Apr.
Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. “Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks.” In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 8342–60. Online: Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.acl-main.740.
Xiaochuang Han and Jacob Eisenstein. 2019. Unsupervised domain adaptation of contextualized embeddings for sequence labeling. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4238–4248, Hong Kong, China, November. Association for Computational Linguistics.
Charles T. Hemphill, John J. Godfrey, and George R. Doddington. 1990. “The ATIS Spoken Language Systems Pilot Corpus.” In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, USA, June 24-27, 1990. Morgan Kaufmann. https://www.aclweb.org/anthology/H90-1021/.
David Kamholz, Jonathan Pool, and Susan Colowick. 2014. PanLex: Building a resource for panlingual lexical translation. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), pages 3145–3150, Reykjavik, Iceland, May. European Language Resources Association (ELRA).
Phillip Keung, Y. Lu, Julian Salazar, and Vikas Bhardwaj. 2020. Don’t Use English Dev: On the Zero-Shot Cross-Lingual Evaluation of Contextual Embeddings. In EMNLP.
Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. UNIFIEDQA: Crossing format boundaries with a single QA system. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1896–1907, Online, November. Association for Computational Linguistics.
Saurabh Kulshreshtha, José Luis Redondo García, and Ching-Yun Chang. 2020. Cross-lingual alignment methods for multilingual BERT: A comparative study. In EMNLP.
Guillaume Lample and Alexis Conneau. 2019. Cross-lingual language model pretraining. Advances in Neural Information Processing Systems (NeurIPS).
Anne Lauscher, Vinit Ravishankar, Ivan Vulić, and Goran Glavaš. 2020. From zero to hero: On the limitations of zero-shot language transfer with multilingual Transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4483–4499, Online, November. Association for Computational Linguistics.
Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, D. Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. “BioBERT: A Pre-Trained Biomedical Language Representation Model for Biomedical Text Mining.” Bioinformatics 36: 1234–40.
Haoran Li, Abhinav Arora, Shuohui Chen, Anchit Gupta, Sonal Gupta, and Yashar Mehdad. 2021. “MTOP: A Comprehensive Multilingual Task-Oriented Semantic Parsing Benchmark.” In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, 2950–62. Online: Association for Computational Linguistics. https://www.aclweb.org/anthology/2021.eacl-main.257.
Pierre Lison and Jörg Tiedemann. 2016. “OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles.” In LREC.
Zihan Liu, Jamin Shin, Yan Xu, Genta Indra Winata, Peng Xu, Andrea Madotto, and Pascale Fung. 2019. Zero-shot cross-lingual dialogue systems with transferable latent variables. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1297–1303, Hong Kong, China, November. Association for Computational Linguistics.
Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, and Honglak Lee. 2019. Zero-shot entity linking by reading entity descriptions. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3449–3460, Florence, Italy, July. Association for Computational Linguistics.
Samuel Louvan and Bernardo Magnini. 2020. “Recent Neural Methods on Slot Filling and Intent Classification for Task-Oriented Dialogue Systems: A Survey.” In Proceedings of the 28th International Conference on Computational Linguistics, 480–96. Barcelona, Spain (Online): International Committee on Computational Linguistics. https://doi.org/10.18653/v1/2020.coling-main.42.
Jonas Pfeiffer, Ivan Vulić, Iryna Gurevych, and Sebastian Ruder. 2020. MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7654–7673, Online, November. Association for Computational Linguistics.
Barbara Plank. 2016. “What to Do About Non-Standard (or Non-Canonical) Language in NLP.” arXiv Preprint arXiv:1608.07836.
L. Qin, Minheng Ni, Y. Zhang, and W. Che. 2020. CoSDA-ML: Multi-lingual code-switching data augmentation for zero-shot cross-lingual NLP. In IJCAI.
Alan Ramponi, and Barbara Plank. 2020. “Neural Unsupervised Domain Adaptation in NLP—A Survey.” In Proceedings of the 28th International Conference on Computational Linguistics, 6838–55. Barcelona, Spain (Online): International Committee on Computational Linguistics. https://doi.org/10.18653/v1/2020.coling-main.603.
Sebastian Ruder and Barbara Plank. 2017. “Learning to Select Data for Transfer Learning with Bayesian Optimization.” In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 372–82. Copenhagen, Denmark: Association for Computational Linguistics. https://doi.org/10.18653/v1/D17-1038.
Sebastian Schuster, Sonal Gupta, Rushin Shah, and Mike Lewis. 2019. “Cross-Lingual Transfer Learning for Multilingual Task Oriented Dialog.” In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), edited by Jill Burstein, Christy Doran, and Thamar Solorio, 3795–3805. Association for Computational Linguistics. https://doi.org/10.18653/v1/n19-1380.
Jörg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Thierry Declerck, Mehmet Ugur Dogan, Bente Maegaard, Joseph Mariani, Jan Odijk, and Stelios Piperidis, editors, Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC’12), Istanbul, Turkey, May. European Language Resources Association (ELRA).
Gokhan Tur and Renato De Mori. 2011. Spoken Language Understanding: Systems for Extracting Semantic Information from Speech. John Wiley & Sons.
Shyam Upadhyay, Manaal Faruqui, Gökhan Tür, Dilek Z. Hakkani-Tür, and Larry Heck. 2018. “(Almost) Zero-Shot Cross-Lingual Spoken Language Understanding.” 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 6034–8.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. “Attention Is All You Need.” In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, edited by Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, 5998–6008. http://papers.nips.cc/paper/7181-attention-is-all-you-need.
Marlies van der Wees, Arianna Bisazza, Wouter Weerkamp, and Christof Monz. 2015. “What’s in a Domain? Analyzing Genre and Topic Differences in Statistical Machine Translation.” In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), 560–66. Beijing, China: Association for Computational Linguistics. https://doi.org/10.3115/v1/P15-2092.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, et al. 2019. “HuggingFace’s Transformers: State-of-the-Art Natural Language Processing.” ArXiv abs/1910.03771.
Weijia Xu, Batool Haider, and Saab Mansour. 2020. “End-to-End Slot Alignment and Recognition for Cross-Lingual NLU.” In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 5052–63. Online: Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.emnlp-main.410.
Footnotes
1. Copyright © 2021 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
Authors
Fondazione Bruno Kessler, Italy – University of Trento, Italy – slouvan@fbk.eu
Fondazione Bruno Kessler, Italy – scasola@fbk.eu
Fondazione Bruno Kessler, Italy – magnini@fbk.eu