
EVALITA Evaluation of NLP and Speech Tools for Italian - December 17th, 2020

Valerio Basile, Danilo Croce, Maria Di Maro, et al.

AcCompl-it: Acceptability & Complexity evaluation

UmBERTo-MTSA @ AcCompl-It: Improving Complexity and Acceptability Prediction with Multi-task Learning on Self-Supervised Annotations

Gabriele Sarti

Abstract

This work describes a self-supervised data augmentation approach used to improve learning models’ performances when only a moderate amount of labeled data is available. Multiple copies of the original model are initially trained on the downstream task. Their predictions are then used to annotate a large set of unlabeled examples. Finally, multi-task training is performed on the parallel annotations of the resulting training set, and final scores are obtained by averaging annotator-specific head predictions. Neural language models are fine-tuned using this procedure in the context of the AcCompl-it shared task at EVALITA 2020, obtaining considerable improvements in prediction quality.

Full Text

The author was supported by a scholarship for Data Science and Scientific Computing students from the International School of Advanced Studies (SISSA).

1. Introduction

In recent times, pre-trained neural language models (NLMs) have become the preferred approach for language representation learning, pushing the state of the art in multiple NLP tasks (Devlin et al. 2019; Radford et al. 2019; Yang et al. 2019; Raffel et al. 2019, inter alia). These approaches rely on a two-step training process: first, self-supervised pre-training is performed on large-scale corpora; then, the model undergoes supervised fine-tuning on downstream task labels using task-specific prediction heads. While this method was found to be effective in scenarios where a relatively large amount of labeled data is available, researchers highlighted that this is not the case in low-resource settings (Yogatama et al. 2019).

Recently, pattern-exploiting training (PET; Schick and Schütze 2020a, 2020b) tackled the dependence of NLMs on labeled data by first reformulating tasks as cloze questions using task-related patterns and keywords, and then using language models trained on those questions to annotate large sets of unlabeled examples with soft labels. PET can be thought of as an offline version of knowledge distillation (Hinton, Vinyals, and Dean 2015), a well-established approach to transfer knowledge across models of different sizes, or even between different versions of the same model as in self-training (Scudder 1965; Yarowsky 1995). While effective on classification tasks that can be easily reformulated as cloze questions, PET cannot be easily extended to regression settings, since continuous targets cannot be adequately verbalized. Contemporary work by Du et al. (2020) showed how self-training and pre-training provide complementary information for natural language understanding tasks.

In this paper, I propose a simple self-supervised data augmentation approach that can be used to improve the generalization capabilities of NLMs on regression and classification tasks with modest-sized labeled corpora. In short, an ensemble of fine-tuned models is used to annotate a large corpus of unlabeled text, and the new annotations are leveraged in a multi-task setting to obtain final predictions over the original test set. The method was tested on the AcCompl-it shared task of the EVALITA 2020 campaign (Brunato, Chesi, et al. 2020; Basile et al. 2020), where the objective was to predict acceptability and complexity scores on a 1-7 Likert scale for each test sentence, alongside an estimate of their standard error. Results show considerable improvements over regular fine-tuning performances on the ACCEPT and COMPL subtasks using the UmBERTo pre-trained model (“UmBERTo: An Italian Language Model Trained with Whole Word Masking,” n.d.), suggesting the validity of this approach for complexity/acceptability prediction and possibly other language processing tasks.

2. Description of the Approach

Let:


  • $D = \{(x_i, y_i)\}$ be the initial labeled corpus containing sentence-annotation pairs $(x_i, y_i)$1

  • $U = \{x_j\}$ be a large unlabeled corpus such that $|U| \gg |D|$

  • $M$ be a pre-trained neural language model with a single task-specific head, taking a sentence $x_i$ as input and predicting a label $y_i$ at inference time.

For some $k \in \mathbb{N}$, we begin by splitting $D$ into k equal-sized segments $D_1, \dots, D_k$ and fine-tune k identical versions of $M$ using k-fold cross-validation. We call the resulting models $M_1, \dots, M_k$ “NLMs with standard fine-tuning on the y target task”, with $M_i$ being trained on the subset $D \setminus D_i$ and evaluated on $D_i$. Then, each sentence of $U$ is passed to each model, obtaining the corpus

$U' = \{(x,\, M_1(x), \dots, M_k(x)) \mid x \in U\}$

labeled with expert annotations from the fine-tuned models. Predicted values are used instead of the post-softmax probability distributions typically employed in the knowledge distillation literature, both to keep the approach simple and to make it viable for regression tasks.
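A minimal sketch of this annotation step, assuming the k fine-tuned models are available as simple callables mapping lists of sentences to predicted scores (the fine-tuning itself is omitted; the helper names are illustrative, not from the paper):

```python
from sklearn.model_selection import KFold

def kfold_training_subsets(D_texts, k=5, seed=42):
    """Yield the k training subsets D \\ D_i used to fine-tune M_1..M_k."""
    kfold = KFold(n_splits=k, shuffle=True, random_state=seed)
    for train_idx, _ in kfold.split(D_texts):
        yield [D_texts[i] for i in train_idx]

def build_U_prime(fold_models, U_texts):
    """Pair each unlabeled sentence with the predictions of all k fold models.

    `fold_models` are hypothetical callables wrapping inference with the
    fine-tuned NLMs: each maps a list of sentences to a list of scores.
    """
    fold_annotations = [model(U_texts) for model in fold_models]  # k lists over U
    return [
        (sentence, [annotations[j] for annotations in fold_annotations])
        for j, sentence in enumerate(U_texts)
    ]
```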

Now that the large corpus is annotated, a multi-task NLM $MTM$ with k task-specific heads is fine-tuned on $U'$ by treating each annotation in the set $\{M_1(x), \dots, M_k(x)\}$ as a separate task, using 1-layer feed-forward neural networks as task-specific heads while performing hard parameter sharing (Caruana 1997) on the underlying model parameters. Intuitively, the k models used to produce annotations were trained on different folds of the original corpus, and as such, they provide complementary viewpoints on the modeled phenomenon when k is small.

As a final step, $MTM$ is fine-tuned on a training portion of $D$, using as prediction scores $f(MTM_1(x), \dots, MTM_k(x))$, where f is a task- and context-dependent aggregation function. For example, in a classification task one can select the majority vote across the ensemble of model heads as the final prediction, while in a regression setting scores can be averaged across heads. Once fine-tuned, the model can be tested on the test portion of $D$ using the same f as the aggregator. I refer to this approach as Multi-Task Self-Annotation (MTSA) in the following sections.
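A possible PyTorch sketch of MTM in a regression setting, assuming one linear head per annotator on top of a single shared encoder (hard parameter sharing) and a mean over head outputs as the aggregation function f; this is an illustration, not the exact implementation used in the paper:

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class MTSAModel(nn.Module):
    """Shared encoder with k annotator-specific regression heads."""

    def __init__(self, model_name: str, k: int = 5):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)  # shared parameters
        hidden_size = self.encoder.config.hidden_size
        self.heads = nn.ModuleList([nn.Linear(hidden_size, 1) for _ in range(k)])

    def forward(self, input_ids, attention_mask):
        output = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = output.last_hidden_state[:, 0]  # first-token pooling, for brevity
        # one prediction per annotator-specific head: shape (batch, k)
        return torch.cat([head(pooled) for head in self.heads], dim=-1)

    @staticmethod
    def aggregate(head_predictions):
        # f: average the k head outputs into the final regression score
        return head_predictions.mean(dim=-1)
```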

3. Experimental Evaluation

For the experimental evaluation:

  • The ACCEPT and COMPL training corpora, containing respectively 1339 and 2012 sentences labeled with average scores and standard errors across annotators, were used as labeled datasets $D$. The two tasks were learned separately, following the same approach described in the previous section.

  • A set of multiple Italian treebanks including the train, dev, and test sets of the Italian Stanford Dependency Treebank (Bosco, Montemagni, and Simi 2013), the Turin University Parallel Treebank (Sanguinetti and Bosco 2015), PoSTWITA-UD (Sanguinetti et al. 2018) and the Venice Italian Treebank (Delmonte, Bristot, and Tonelli 2007) was used as the unlabeled corpus $U$. The final corpus contains 37,344 unlabeled sentences and spans multiple textual genres.

  • The UmBERTo model (“UmBERTo: An Italian Language Model Trained with Whole Word Masking,” n.d.), available through the HuggingFace Transformers framework (Wolf et al. 2019), was used both for fine-tuning $M_1, \dots, M_k$ during the annotation step and for fine-tuning $MTM$. The model is based on the RoBERTa architecture (Liu et al. 2019) and was pre-trained on the Italian portion of the OSCAR CommonCrawl corpus (Ortiz Suárez, Romary, and Sagot 2020), containing roughly 210M sentences and over 11B tokens.
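For reference, the checkpoint can be loaded through the Transformers library roughly as follows; the `Musixmatch/umberto-commoncrawl-cased-v1` identifier is an assumption and may not be the exact checkpoint used here:

```python
from transformers import AutoModel, AutoTokenizer

# Assumed Hugging Face Hub identifier for the CommonCrawl-trained UmBERTo;
# the exact checkpoint used in the paper may differ.
model_name = "Musixmatch/umberto-commoncrawl-cased-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
encoder = AutoModel.from_pretrained(model_name)

inputs = tokenizer("Una frase di esempio.", return_tensors="pt")
outputs = encoder(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768) for the base variant
```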

Since both tasks involve predicting both averaged scores and the original standard error across participants, the approach presented in the previous section was adapted to account for multi-task learning of scores and errors from the beginning, with each model $M_i$ producing both a predicted score $\hat{y}_i$ and a predicted error $\hat{e}_i$ for the annotation step. The k parameter was set to 5 to prevent excessive overlapping of training data across models, with the final multi-task model $MTM$ returning score and error predictions for all five sets of fine-tuned model annotations.

Models $M_1, \dots, M_k$ were trained for a maximum of 15 epochs on the labeled training sets using early stopping (patience of 5 evaluations, evaluation every 20 steps, with a 10% slice as dev set), a fixed learning rate, batch size b = 32 and embedding dropout. The model's base variant was used, with hidden size |h| = 768 and a maximum sequence length of 128. Notably, the last-layer representations of the UmBERTo model were averaged to obtain a sentence-level representation, instead of using the [CLS] token. During training on the whole unlabeled corpus, the evaluation interval was increased to 100 steps to balance evaluation time with the corpus's larger size.
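A sketch of the mask-aware averaging of last-layer representations used in place of the [CLS] token, under the assumption that padding tokens should be excluded from the mean:

```python
import torch

def mean_pooled_sentence_embedding(last_hidden_state, attention_mask):
    """Average the last-layer token representations, ignoring padding,
    to obtain a single sentence-level vector per example."""
    mask = attention_mask.unsqueeze(-1).type_as(last_hidden_state)  # (batch, seq, 1)
    summed = (last_hidden_state * mask).sum(dim=1)                  # (batch, hidden)
    counts = mask.sum(dim=1).clamp(min=1e-9)                        # avoid division by zero
    return summed / counts
```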

4. Results

Table 1 presents the methods whose predictions were correlated with acceptability and complexity scores on the training portions of the ACCEPT and COMPL tasks using 5-fold cross-validation, leading to the selection of MTSA as the top-performing approach:

Table 1: Spearman’s correlation scores on the ACCEPT (top) and COMPL (bottom) subtasks’ training portions. Models are evaluated using 5-fold cross-validation. All scores have p < 0.001

Model                    Score (ρ)   Error (ρ)
ACCEPT
UmBERTo surprisal          -0.36        0.17
Length (# of tokens)       -0.39        0.17
Length (characters)        -0.39        0.21
UmBERTo fine-tuned          0.90        0.50
UmBERTo-STSA                0.91        0.53
UmBERTo-MTSA                0.91        0.54
COMPL
UmBERTo surprisal           0.49        0.28
Length (# of tokens)        0.55        0.36
Length (characters)         0.60        0.39
UmBERTo fine-tuned          0.84        0.54
UmBERTo-STSA                0.87        0.62
UmBERTo-MTSA                0.88        0.63

  • UmBERTo surprisal: Sentence-level surprisal estimates are produced using the pre-trained model without fine-tuning (a minimal sketch is given after this list) as:

$\mathrm{surprisal}(s) = -\sum_{i=1}^{|s|} \log P(w_i \mid s_{\setminus w_i})$

  • Length (# of tokens): Length of the sentence in number of tokens

  • Length (characters): Length of the sentence in number of characters (including whitespaces)

  • UmBERTo fine-tuned: Predictions produced by UmBERTo with standard fine-tuning on the task's corpus annotations.

  • UmBERTo-STSA: A variant of the MTSA approach where, instead of performing multi-task learning over the model annotations on $U$, annotations are averaged into a single score and the model is trained on it with single-task fine-tuning.

  • UmBERTo-MTSA: The approach presented in this work.
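Since UmBERTo is a masked language model, one plausible reading of the surprisal estimate referenced in the first bullet above is a pseudo-log-likelihood: each token is masked in turn and the negative log-probability of the original token is accumulated. A minimal sketch under this assumption (the checkpoint identifier is likewise assumed):

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

def sentence_surprisal(sentence, model, tokenizer):
    """Pseudo-log-likelihood surprisal for a masked LM: mask each token in
    turn and accumulate the negative log-probability of the original token."""
    enc = tokenizer(sentence, return_tensors="pt")
    input_ids = enc["input_ids"][0]
    surprisal = 0.0
    # skip the special tokens at the sentence boundaries
    for pos in range(1, input_ids.size(0) - 1):
        masked = input_ids.clone().unsqueeze(0)
        masked[0, pos] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(input_ids=masked).logits
        log_probs = torch.log_softmax(logits[0, pos], dim=-1)
        surprisal -= log_probs[input_ids[pos]].item()
    return surprisal

# Usage (assumed checkpoint identifier):
# tok = AutoTokenizer.from_pretrained("Musixmatch/umberto-commoncrawl-cased-v1")
# mlm = AutoModelForMaskedLM.from_pretrained("Musixmatch/umberto-commoncrawl-cased-v1")
# print(sentence_surprisal("Una frase di esempio.", mlm, tok))
```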

From Table 1, it can be observed that, although length alone already correlates with acceptability and complexity scores, UmBERTo can leverage additional information from its representations to produce much stronger predictions. Interestingly, both the STSA and MTSA self-annotation approaches consistently outperform regular fine-tuning, especially for standard error scores. This suggests that self-annotation leads to better generalization capabilities over downstream tasks when relatively few annotations are available. While the contribution of multi-task learning is modest, the MTSA approach may prove especially beneficial when training models $M_1, \dots, M_k$ on scores produced by different annotators, rather than on different folds of the same corpus as in this case. In both tasks, predicted surprisal scores act as poor predictors. It should also be noted that length is negatively correlated with acceptability scores (i.e. longer sentences are generally less acceptable), while the relation is positive in the case of complexity (i.e. longer sentences are generally more complex).

Table 2 reports the scores obtained by MTSA on the test sets of the ACCEPT and COMPL shared tasks. The organizers' baseline scores correspond to the correlation between gold labels and the predictions of an SVM trained on sentence unigrams and bigrams (acceptability) and an SVM trained on sentence length (complexity), respectively. The MTSA approach achieved the first rank in both tasks, with considerable improvements over the baseline scores.

Table 2: Correlation scores with gold labels on the ACCEPT (top) and COMPL (bottom) subtasks’ test portions. All scores have p < 0.001

Model                    Score (ρ)   Error (ρ)
ACCEPT
SVM 2-gram baseline         0.30        0.35
UmBERTo-MTSA                0.88        0.52
COMPL
SVM length baseline         0.50        0.33
UmBERTo-MTSA                0.83        0.51

5. Error Analysis


Finally, an error analysis is performed to gain additional insight into which factors influence the predictability of complexity and acceptability judgments. The Profiling-UD tool by Brunato, Cimino, et al. (2020) is used to produce linguistic annotations on the test sentences of both tasks. Given an input sentence, Profiling-UD produces a large set of numeric scores representing different phenomena and properties at different levels of linguistic description.2 The value of each feature is then correlated with ε_s and ε_e, the absolute errors between true and predicted values for scores and standard errors, respectively. Table 3 presents the results of the error analysis.
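A sketch of this correlation analysis, assuming the Profiling-UD feature values and the model predictions are available as per-sentence arrays; scipy.stats.pearsonr returns both the coefficient and its p-value:

```python
import numpy as np
from scipy.stats import pearsonr

def feature_error_correlations(features, y_true, y_pred):
    """Correlate each linguistic feature with the per-sentence absolute
    prediction error (for either scores or standard errors).

    features: dict mapping feature name -> array of per-sentence values
    y_true, y_pred: arrays of gold and predicted values
    """
    abs_err = np.abs(np.asarray(y_true) - np.asarray(y_pred))
    results = {}
    for name, values in features.items():
        rho, p_value = pearsonr(np.asarray(values), abs_err)
        results[name] = (rho, p_value)
    return results
```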

Table 3: Pearson's correlation scores between prediction errors and various linguistic features. Orange and cyan cells contain respectively positive and negative scores for which p < 0.001.

Feature               Acceptability          Complexity
                      ρ(ε_s)    ρ(ε_e)       ρ(ε_s)    ρ(ε_e)
avg. score (y)          -25%      10%          41%       -2%
std. error (e)           12%       2%          23%       27%
upos_dist_PROPN          19%      -3%           4%        6%
dep_dist_nmod            19%      -8%           4%        1%
avg_max_depth            16%      -3%           7%       -7%
n_prep_chains            16%      -8%           4%       -2%
prep_chain_len           16%      -6%           9%       -4%
upos_dist_PRON            1%      20%           8%        9%
dep_dist_root            -9%      18%          -4%       23%
dep_dist_punct           -9%      17%           1%       -3%
aux_mood_dist_Imp         7%       6%          17%        7%
n_tokens                  9%     -13%           5%      -18%
avg_links_len            -3%       1%          -6%      -17%
max_links_len            -1%      -9%          -1%      -16%

Strongly correlated values in Table 3 correspond to features that highly influence, either positively or negatively, the prediction capabilities of the MTSA model. Extreme task scores (avg. score), denoting either barely acceptable or highly complex sentences, are less predictable by MTSA than their average counterparts. Sentences for which the standard deviation of scores across participants is high appear to be less predictable in the context of complexity scores, while this does not affect acceptability predictions.

Concerning acceptability, I found a significant correlation between acceptability prediction errors and the presence of multilevel syntactic structures (avg_max_depth), multiple long prepositional chains (n_prep_chains, prep_chain_len) and nominal modifiers (dep_dist_nmod). From the complexity viewpoint, instead, the presence of imperfect-tense inflectional morphology in auxiliaries (aux_mood_dist_Imp) was the only property related to higher prediction errors. However, high token counts (n_tokens) and long dependency links (avg_links_len, max_links_len) were shown to make the variability in complexity scores more predictable.

Overall, results suggest that incorporating syntactic information during the model's training process may further improve complexity and acceptability models.

6. Discussion and Conclusion

This work introduced a simple and effective data augmentation approach that improves the fine-tuning performances of NLMs when only a modest amount of labeled data is available. The approach was first formalized and then empirically tested on the ACCEPT and COMPL shared tasks of the EVALITA 2020 campaign. Strong performances were reported for both acceptability and complexity prediction using a multi-task self-training approach, obtaining the top position in both subtasks. Finally, an error analysis highlighted the unpredictability of extreme scores and of sentences with complex syntactic structures.

The suggested approach, although well-performing, is lacking in terms of complexity-driven biases that may prove useful in the context of complexity and acceptability prediction. A possible extension of this work may include a complementary syntactic task (e.g., biaffine parsing, as in Glavas and Vulic (2020)) during multi-task learning, to see whether forcing syntactically-competent representations in the top layers proves beneficial for syntax-heavy tasks like complexity and acceptability prediction. Moreover, it would be interesting to evaluate multi-task learning performances with parallel complexity and acceptability annotations, given the conceptual similarity between the two tasks, and to estimate the effectiveness of a feed-forward network as the final aggregator f in the MTSA paradigm, instead of merely averaging predictions. Finally, the findings of Du et al. (2020) suggest that an unsupervised in-domain filtering approach may further improve the self-training procedure when large unlabeled corpora are available.

Bibliography

Valerio Basile, Danilo Croce, Maria Di Maro, and Lucia C. Passaro. 2020. “EVALITA 2020: Overview of the 7th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian.” In Proceedings of Seventh Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (Evalita 2020), edited by Valerio Basile, Danilo Croce, Maria Di Maro, and Lucia C. Passaro. Online: CEUR.org.

Cristina Bosco, Simonetta Montemagni, and Maria Simi. 2013. “Converting Italian Treebanks: Towards an Italian Stanford Dependency Treebank.” In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, 61–69. Sofia, Bulgaria: Association for Computational Linguistics. https://www.aclweb.org/anthology/W13-2308.

Dominique Brunato, Andrea Cimino, Felice Dell’Orletta, Giulia Venturi, and Simonetta Montemagni. 2020. “Profiling-UD: A Tool for Linguistic Profiling of Texts.” In Proceedings of the 12th Language Resources and Evaluation Conference, 7147–53. Marseille, France: European Language Resources Association. https://www.aclweb.org/anthology/2020.lrec-1.883.

Dominique Brunato, Cristiano Chesi, Felice Dell’Orletta, Simonetta Montemagni, Giulia Venturi, and Roberto Zamparelli. 2020. “AcCompl-it @ EVALITA2020: Overview of the Acceptability & Complexity Evaluation Task for Italian.” In Proceedings of Seventh Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (Evalita 2020), edited by Valerio Basile, Danilo Croce, Maria Di Maro, and Lucia C. Passaro. Online: CEUR.org.

Rich Caruana. 1997. “Multitask Learning.” Machine Learning 28: 41–75. https://www.cs.utexas.edu/~kuipers/readings/Caruana-mlj-97.pdf.

Rodolfo Delmonte, Antonella Bristot, and Sara Tonelli. 2007. “VIT–Venice Italian Treebank: Syntactic and Quantitative Features.”

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. “BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding.” In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 4171–86. Minneapolis, Minnesota: Association for Computational Linguistics. https://doi.org/10.18653/v1/N19-1423.

Jingfei Du, E. Grave, Beliz Gunel, Vishrav Chaudhary, Onur Çelebi, M. Auli, Ves Stoyanov, and Alexis Conneau. 2020. “Self-Training Improves Pre-Training for Natural Language Understanding.” ArXiv abs/2010.02194.

Goran Glavas, and Ivan Vulic. 2020. “Is Supervised Syntactic Parsing Beneficial for Language Understanding? An Empirical Investigation.” ArXiv abs/2008.06788.

Geoffrey E. Hinton, Oriol Vinyals, and J. Dean. 2015. “Distilling the Knowledge in a Neural Network.” ArXiv abs/1503.02531.

Y. Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov. 2019. “RoBERTa: A Robustly Optimized Bert Pretraining Approach.” ArXiv abs/1907.11692.

Pedro Javier Ortiz Suárez, Laurent Romary, and Benoît Sagot. 2020. “A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages.” In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 1703–14. Online: Association for Computational Linguistics. https://www.aclweb.org/anthology/2020.acl-main.156.

A. Radford, Jeffrey Wu, R. Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. “Language Models Are Unsupervised Multitask Learners.” OpenAI.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, W. Li, and P. Liu. 2019. “Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer.” ArXiv abs/1910.10683.

Manuela Sanguinetti, and Cristina Bosco. 2015. “PartTUT: The Turin University Parallel Treebank.” In Harmonization and Development of Resources and Tools for Italian Natural Language Processing Within the Parli Project, edited by Roberto Basili, Cristina Bosco, Rodolfo Delmonte, Alessandro Moschitti, and Maria Simi, 51–69. Cham: Springer International Publishing. https://link.springer.com/book/10.1007/978-3-319-14206-7.

Manuela Sanguinetti, Cristina Bosco, Alberto Lavelli, Alessandro Mazzei, Oronzo Antonelli, and Fabio Tamburini. 2018. “PoSTWITA-UD: An Italian Twitter Treebank in Universal Dependencies.” In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). Miyazaki, Japan: European Language Resources Association (ELRA). https://www.aclweb.org/anthology/L18-1279.

Timo Schick, and Hinrich Schütze. 2020a. “Exploiting Cloze Questions for Few-Shot Text Classification and Natural Language Inference.” ArXiv abs/2001.07676.

Timo Schick, and Hinrich Schütze. 2020b. “It’s Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners.” ArXiv abs/2009.07118.

H. Scudder. 1965. “Probability of Error of Some Adaptive Pattern-Recognition Machines.” IEEE Transactions on Information Theory 11 (3): 363–71.

“UmBERTo: An Italian Language Model Trained with Whole Word Masking.” n.d.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, et al. 2019. “HuggingFace’s Transformers: State-of-the-Art Natural Language Processing.” ArXiv abs/1910.03771.

Z. Yang, Zihang Dai, Y. Yang, J. Carbonell, R. Salakhutdinov, and Quoc V. Le. 2019. “XLNet: Generalized Autoregressive Pretraining for Language Understanding.” In NeurIPS.

David Yarowsky. 1995. “Unsupervised Word Sense Disambiguation Rivaling Supervised Methods.” In 33rd Annual Meeting of the Association for Computational Linguistics, 189–96. Cambridge, Massachusetts, USA: Association for Computational Linguistics. https://doi.org/10.3115/981658.981684.

Dani Yogatama, Cyprien de Masson d’Autume, J. Connor, Tomás Kociský, M. Chrzanowski, Lingpeng Kong, A. Lazaridou, et al. 2019. “Learning and Evaluating General Linguistic Intelligence.” ArXiv abs/1901.11373.

Notes

1 yi can be either discrete or continuous in this context.

2 A description of produced annotations is omitted for brevity. Refer to Brunato, Cimino, et al. (2020) for additional details.

CC-BY-NC-ND-4.0

The text alone may be used under the CC BY-NC-ND 4.0 license. Other elements (illustrations, imported annex files) are “All rights reserved”, unless otherwise stated.
