
Proceedings of the Seventh Italian Conference on Computational Linguistics CLiC-it 2020
Edited by Felice Dell'Orletta, Johanna Monti and Fabio Tamburini

Contributed Papers

Is Neural Language Model Perplexity Related to Readability?

Alessio Miaschi, Chiara Alzetta, Dominique Brunato, Felice Dell’Orletta and Giulia Venturi

Abstract

This paper explores the relationship between Neural Language Model (NLM) perplexity and sentence readability. Starting from the evidence that NLMs implicitly acquire sophisticated linguistic knowledge from a huge amount of training data, our goal is to investigate whether perplexity is affected by linguistic features used to automatically assess sentence readability and if there is a correlation between the two metrics. Our findings suggest that this correlation is actually quite weak and the two metrics are affected by different linguistic phenomena.


1. Introduction and Motivation

Standard Neural Language Models (NLMs) are trained to predict the next token given a context of previous tokens. The metric commonly used for assessing the performance of a language model is perplexity, which corresponds to the inverse geometric mean of the joint probability of the words w1, …, wn in a held-out test corpus C. While primarily an intrinsic metric of NLM quality, perplexity has been used in a variety of scenarios, such as classifying formal versus colloquial tweets (González 2015), detecting the boundaries between varieties belonging to the same language family (Gamallo, Pichel, and Alegria 2017), or identifying speech samples produced by subjects with cognitive and/or language disorders, e.g. dementia (Cohen and Pakhomov 2020) or Specific Language Impairment (Gabani et al. 2009). From the perspective of computational studies aimed at modeling human language processing, perplexity scores have also been shown to effectively match various human behavioural measures, such as gaze duration during reading (Demberg and Keller 2008; Goodkind and Bicknell 2018).

In this paper we focus on a less investigated perspective addressing the connection between perplexity and readability. Since by definition perplexity approximates how well a model recognises an unseen piece of text as a plausible one, our intuition is that easy-to-read sentences should be assigned lower model perplexity, while difficult-to-read ones should obtain higher perplexity. On the other hand, state-of-the-art NLMs trained on huge amounts of data have been shown to implicitly learn sophisticated knowledge of language phenomena, including complex syntactic properties of sentences (Tenney, Das, and Pavlick 2019; Jawahar et al. 2019; Miaschi et al. 2020). This suggests that variations in linguistic complexity, especially those related to subtle morpho-syntactic and syntactic features of a sentence rather than lexical ones, might not greatly affect model perplexity. This assumption seems to be confirmed by still unpublished results from what is, to our knowledge, the only study explicitly leveraging unsupervised neural language model predictions in the context of readability assessment. According to this study, a NLM is even less perplexed by articles addressed to adults than by documents conceived for a younger readership. From a relatively different perspective, focused on the ability of automatic comprehension systems to solve cloze tests, another study showed that NLM performance is not affected by the level of text complexity.

In order to test the validity of all these hypotheses, we rely on the perplexity scores assigned by a state-of-the-art NLM for the Italian language to several datasets representative of different textual genres and containing both easy- and difficult-to-read sentences: ideally, such datasets should highlight the correlation between perplexity and readability (if present), since they are explicitly designed to contain both simple and difficult examples.

Contributions

We inspect whether and to what extent it is possible to find a relationship between a readability score and the perplexity of a NLM. To this aim we investigate (i) whether the perplexity of a NLM and the readability score of a set of sentences show a significant correlation and (ii) whether the two metrics are equally affected by the same set of linguistic phenomena occurring in the sentence.

2. Experimental Design

In line with our research questions, we devised a set of experiments to study whether NLM perplexity reflects the readability level of a sentence and which linguistic phenomena are most involved in each metric. For this purpose, we first investigated whether sentence-level perplexity scores computed with one of the most prominent NLMs correlate with the scores assigned to the same sentences by a supervised readability assessment tool. Secondly, we investigated which linguistic features of the considered sentences correlate in a statistically significant way with the perplexity and readability scores respectively. In order to verify whether these correlations hold across different typologies of texts, we tested our approach on five Italian datasets.

2.1 Models

Automatic readability assessment (henceforth ARA) was carried out with READ-IT (Dell’Orletta, Montemagni, and Venturi 2011), the first readability assessment tool for Italian, which combines traditional raw-text features with lexical, morpho-syntactic and syntactic information extracted from automatically parsed documents. In READ-IT, readability analysis is modelled as a binary classification task based on Support Vector Machines, using LIBSVM (Chang and Lin 2001). The training corpora are representative of two classes of texts, i.e. difficult- vs. easy-to-read ones, both containing newspaper articles. The set of features exploited for predicting readability has been shown to capture different aspects of sentence complexity. The assigned readability score ranges between 0 (easy-to-read) and 1 (difficult-to-read) and corresponds to the probability that an unseen document or sentence belongs to the class of difficult-to-read documents. For the purposes of our work, we carried out readability assessment at sentence level, making the analysis directly comparable with the sentence-based perplexity of a NLM.
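For illustration, this setup can be approximated with a minimal sketch (not READ-IT itself, and on purely synthetic data): a probability-calibrated SVM trained on per-sentence linguistic feature vectors, whose positive-class probability plays the role of the ARA score. All names and values below are hypothetical; scikit-learn's SVC internally wraps LIBSVM.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic stand-ins: one row per sentence, one column per linguistic
    # feature; labels mark easy-to-read (0) vs. difficult-to-read (1).
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 176))
    y_train = rng.integers(0, 2, size=200)

    # probability=True enables Platt scaling, so the classifier outputs
    # class probabilities instead of bare decisions.
    clf = make_pipeline(StandardScaler(), SVC(probability=True))
    clf.fit(X_train, y_train)

    # ARA-style score: probability of the difficult-to-read class,
    # ranging from 0 (easy) to 1 (difficult).
    ara_scores = clf.predict_proba(rng.normal(size=(10, 176)))[:, 1]
    print(ara_scores)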

Sentence-level perplexity scores were computed with GePpeTto (De Mattei et al. 2020), a GPT-2-based NLM for the Italian language. For a sentence w_{1:n}, perplexity corresponds to the inverse geometric mean of its joint probability under the model:

\[ \mathrm{PPL}(w_{1:n}) \;=\; P(w_1, \ldots, w_n)^{-1/n} \;=\; \sqrt[n]{\prod_{i=1}^{n} \frac{1}{P(w_i \mid w_{1:i-1})}} \]
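Concretely, for a GPT-2-style causal LM this quantity equals the exponential of the mean token-level negative log-likelihood. A minimal sketch using the HuggingFace transformers API follows; the checkpoint id is an assumption, and any Italian causal LM checkpoint would serve the same purpose.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Assumed checkpoint id for GePpeTto; any causal LM works the same way.
    MODEL_ID = "LorenzoDeMattei/GePpeTto"
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    model.eval()

    def sentence_perplexity(sentence: str) -> float:
        ids = tokenizer(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            # With labels=ids, the returned loss is the mean per-token
            # cross-entropy, i.e. the mean negative log-likelihood.
            loss = model(ids, labels=ids).loss
        return torch.exp(loss).item()  # PPL = exp(mean NLL)

    print(sentence_perplexity("Il furto è avvenuto giovedì notte."))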

2.2 Corpora

In order to test the reliability of our initial hypothesis, we chose four corpora containing different typologies of texts, i.e. web pages, educational materials, narrative texts, newspaper and scientific articles. Each corpus includes a balanced amount of difficult- and easy-to-read sentences. In addition, we also considered the Italian Universal Dependencies treebank, in order to verify whether the connection between sentence-level readability and perplexity also holds in a well-acknowledged benchmark corpus. For each dataset, we excluded from our analysis short sentences, i.e. those with fewer than 5 tokens.

PACCSS-IT (Brunato et al. 2016): we took into account 125,977 sentences belonging to PACCSS-IT, a corpus of complex-simple aligned sentences extracted from the ItWaC corpus. The resource was built using an automatic approach for acquiring large corpora of paired sentences able to intercept structural transformations (such as deletion, reordering, etc.). For example, the two following sentences represent a pair in the corpus, where a reordering operation occurs at phrase level (i.e. the subordinate clause precedes vs. follows the main clause):

  • Complex: Ringraziandola per la sua cortese attenzione, resto in attesa di risposta. [Lit: Thanking you for your kind attention, I look forward to your answer.]

  • Simple: Resto in attesa di una risposta e ringrazio vivamente per l’attenzione. [Lit: I look forward to your answer and I thank you greatly for your attention.]

Terence and Teacher (Brunato et al. 2015): two corpora of original and manually simplified texts aligned at sentence level. Terence contains short Italian novels for children together with their manually simplified versions, produced by linguists and psycholinguists targeting children with text comprehension difficulties. Teacher is a corpus of pairs of documents belonging to different genres (e.g. literature, handbooks) used in educational settings and manually simplified by teachers. We exploited 1,644 sentences belonging to these corpora.

Multi-Genre Multi-Type Italian corpus: a collection of Italian texts representative of three traditional textual genres: journalism, scientific prose and narrative. Each genre is internally subdivided into two sub-corpora representative of an easy- vs. difficult-to-read variety, defined according to the intended target audience for the given genre. The journalistic prose corpus includes articles automatically downloaded from the online versions of two general-purpose newspapers, while the “easy” sub-corpus contains articles from two easy-to-read newspapers addressed to adults with low literacy skills or mild intellectual disabilities. The scientific prose collection consists of scholarly publications on linguistics and computational linguistics and of Wikipedia pages downloaded from the “Linguistics” portal, representative of the complex and easy variety respectively. For the narrative genre, the complex variety includes long novels written by novelists of the last century and by contemporary writers, while for the easy variety we collected short novels for children. The complete corpus contains 56,685 sentences.

Italian Universal Dependency Treebank: it includes different sections of the Italian Universal Dependency Treebank (IUDT), version 2.5 (Zeman et al. 2019). In particular, we considered two groups: a first one containing the whole Italian Stanford Dependency Treebank (ISDT) (Bosco, Montemagni, and Simi 2013), the Italian version of the multilingual Turin University Parallel Treebank (Sanguinetti and Bosco 2015) and the Venice Italian Treebank (Delmonte, Bristot, and Tonelli 2007) (24,998 sentences), all containing a mix of textual genres; and a second one including two collections of texts representative of social media language, i.e. generic tweets and tweets labelled for irony (PoSTWITA and TWITTIRÒ) (Sanguinetti et al. 2018; Cignarella, Bosco, and Rosso 2019) (3,660 sentences in total).

3. Sentence Perplexity and Readability

Table 1

Dataset                   PPL mean (std)          ARA mean (std)
PACCSS-IT                 3,905.83 (21,306.07)    (0.24)
Terence-Teacher           (5,002.62)              (0.27)
Multi-Genre Multi-Type    (4,820.12)              (0.31)
Italian-UD                436.75 (3,633.64)       (0.30)
Twitter-UD                (2,479.64)              (0.30)

Perplexity (PPL) and Readability (ARA) mean and standard deviation values for the 5 datasets (standard deviations in parentheses).

Table 2

Dataset                   PPL-ARA     Feats
PACCSS-IT                 -0.031*      0.169*
Terence-Teacher            0.014       0.149
Multi-Genre Multi-Type     0.026*      0.184*
Italian-UD                -0.054*      0.332*
Twitter-UD                -0.038*     -0.037

Spearman’s correlation coefficients between sentence-level perplexity and readability scores (PPL-ARA) and between rankings of linguistic features (Feats). Statistically significant correlations (p < 0.05) are marked with *.

Our analysis starts from a comparison between the average perplexity and readability scores obtained for each sentence of the five considered datasets. As shown in Table 1, readability values (column ARA) are quite homogeneous across the datasets, with low standard deviation values. On the contrary, the range of perplexity scores is wider (column PPL), going from an average score of 3,905.83 for PACCSS-IT down to 436.75 for the miscellaneous portion of the IUDT (Italian-UD). These differences provide a first hint that perplexity and readability are not correlated with each other.

We verified this intuition by computing Spearman’s rank correlation coefficient between the perplexity and readability scores of each dataset. Results are reported in Table 2, column PPL-ARA. As can be seen, all correlation coefficients are statistically significant, except the one obtained on the Terence and Teacher corpus, possibly because the corpus is too small to allow a significant comparison. Contrary to our expectations, however, the coefficients are negligible in magnitude for all corpora, suggesting that perplexity and readability are largely independent of each other.
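For reference, the per-dataset computation behind the PPL-ARA column reduces to a single rank-correlation call; a minimal sketch with synthetic scores:

    import numpy as np
    from scipy.stats import spearmanr

    # Parallel per-sentence scores for one dataset (synthetic here):
    # perplexity is heavy-tailed, readability lives in [0, 1].
    rng = np.random.default_rng(0)
    ppl = rng.lognormal(mean=6.0, sigma=2.0, size=1000)
    ara = rng.uniform(0.0, 1.0, size=1000)

    # Spearman's rho compares ranks, so it is insensitive to the very
    # different scales of the two metrics.
    rho, pvalue = spearmanr(ppl, ara)
    print(f"Spearman rho = {rho:.3f} (p = {pvalue:.3g})")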

To further investigate the reasons behind these scores and to deepen the analysis of the relationship between the two metrics, we investigated whether they capture the same (or similar) linguistic properties of the sentences. To this aim, we tested the presence and strength of the correlation between each of the two metrics and a set of 176 linguistic features that have been shown to capture properties of sentence complexity (Brunato et al. 2018). These features are acquired from raw, morpho-syntactic and syntactic levels of annotation. They range from basic information on average sentence and word length, to lexical information about the internal composition of the vocabulary of the text (e.g. the distribution of lemmas belonging to the Basic Italian Vocabulary (De Mauro 2000)). They also include morpho-syntactic information (e.g. POS distribution and the inflectional properties of verbs) and more complex aspects of sentence structure derived from syntactic annotation, modeling global and local properties of the parse tree, e.g. the relative order of subjects and objects with respect to the verb, or the use of subordination. In order to extract these features, the considered corpora were morpho-syntactically annotated and dependency parsed with the UDPipe pipeline (Straka, Hajic, and Strakova 2016), with the exception of the IUDT corpus, which already comes with gold annotation.
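As a rough sketch of the kind of extraction pipeline involved (not the paper's exact feature set), the ufal.udpipe bindings can annotate a sentence in CoNLL-U format, from which simple features such as sentence length and POS distribution can be read off; the model path below is an assumption.

    from collections import Counter
    from ufal.udpipe import Model, Pipeline  # pip install ufal.udpipe

    # Assumed local path to a pre-trained Italian UDPipe model.
    model = Model.load("italian-isdt-ud-2.5-191206.udpipe")
    pipe = Pipeline(model, "tokenize", Pipeline.DEFAULT, Pipeline.DEFAULT, "conllu")

    def simple_features(sentence: str) -> dict:
        # Keep regular token rows of the CoNLL-U output: comments start
        # with '#', multiword tokens have range ids such as '1-2'.
        rows = [line.split("\t") for line in pipe.process(sentence).splitlines()
                if line and not line.startswith("#")]
        upos = [r[3] for r in rows if r[0].isdigit()]
        n = len(upos)
        feats = {"n_tokens": n}
        # upos_dist_*: relative frequency of each universal POS tag.
        feats.update({f"upos_dist_{t}": c / n for t, c in Counter(upos).items()})
        return feats

    print(simple_features("Il furto è avvenuto giovedì notte."))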

Column Feats of Table 2 reports the results of this analysis: the Spearman’s correlation coefficients between the two rankings of linguistic features, each ordered by the strength of the correlation between feature values and the perplexity and readability scores respectively. Once again we observe rather weak correlations, with the exception of Italian-UD, the only dataset showing a medium correlation (.332). Overall, these results corroborate our previous finding that the two metrics are not particularly related to each other, and they further suggest that the linguistic phenomena affecting the perplexity of a NLM and the readability level of a sentence are quite different. Consider for example the two following sentences:

  1. Il furto è avvenuto giovedì notte.
    The theft has taken place Thursday night.

  2. Il comitato di bioetica: no all’eutanasia.
    The bioethics committee: no to euthanasia.

Sentence (1) is very easy to read, with a readability score of 0.25, but it has a rather high perplexity score, i.e. 40,737.81; conversely, (2) is quite difficult to read (ARA = 1) but has a very low perplexity score (PPL = 11.24).
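The Feats column of Table 2 can be sketched as follows: correlate every feature with each metric across sentences, then compute the rank correlation between the two resulting correlation profiles. Whether "strength" means signed or absolute correlation is a detail left open here; this sketch, on synthetic data, uses signed values.

    import numpy as np
    from scipy.stats import spearmanr

    # Synthetic stand-ins: one row per sentence, one column per feature.
    rng = np.random.default_rng(1)
    features = rng.normal(size=(500, 176))
    ppl = rng.lognormal(6.0, 2.0, size=500)
    ara = rng.uniform(0.0, 1.0, size=500)

    # Correlate every feature with each metric across sentences.
    corr_ppl = [spearmanr(features[:, j], ppl)[0] for j in range(176)]
    corr_ara = [spearmanr(features[:, j], ara)[0] for j in range(176)]

    # "Feats" column: rank correlation between the two feature rankings,
    # each ordered by its correlation with the respective metric.
    rho, pvalue = spearmanr(corr_ppl, corr_ara)
    print(f"Feats correlation = {rho:.3f} (p = {pvalue:.3g})")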

4. In-Depth Linguistic Investigation

To better explore the motivation behind these results, we performed an in-depth investigation aimed at understanding the relationship between our set of linguistic features and the two metrics under consideration. Since we noticed that, for all datasets, a higher number of features correlates with ARA than with PPL, we selected those features that are significantly correlated with both metrics. The number of shared features varies for each dataset, depending on its size. For example, for the two smallest ones, i.e. Terence and Teacher and the UD Twitter Treebank, we could only consider 34.65% (61) and 44.88% (79) of the whole set of features respectively, while for the larger corpora the sub-set is wider: 81.81% (144) for PACCSS-IT, 78.97% (139) for Multi-Genre Multi-Type and 84.65% (149) for the IUD Treebank.
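The selection step described above can be sketched as keeping only the features whose correlation with both metrics reaches significance (p < 0.05), again on synthetic data:

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(2)
    features = rng.normal(size=(500, 176))
    ppl = rng.lognormal(6.0, 2.0, size=500)
    ara = rng.uniform(0.0, 1.0, size=500)

    # A feature enters the comparison only if its correlation with both
    # perplexity and readability is statistically significant (p < 0.05).
    shared = [j for j in range(features.shape[1])
              if spearmanr(features[:, j], ppl)[1] < 0.05
              and spearmanr(features[:, j], ara)[1] < 0.05]
    print(f"{len(shared)}/{features.shape[1]} shared features "
          f"({100 * len(shared) / features.shape[1]:.2f}%)")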

Table 3 shows the top ten features for each dataset, i.e. those that obtained the strongest correlation with PPL and with ARA. As expected, correlations are generally stronger between linguistic features and readability scores, although they are lower than expected. This could be due to the fact that, even if the READ-IT classifier is trained on a similar set of features, its non-linear feature space makes it difficult to identify clear correlations with individual features. Similarly, our set of features seems to play only a marginal role in perplexity. This is not the case, however, for the PACCSS-IT corpus, for which the considered linguistic features show a higher correlation with PPL. This can possibly be related to a partial overlap between the GePpeTto training data and the PACCSS-IT sentences, since the latter are drawn from the ItWaC corpus, which is included in GePpeTto’s training data.

Inspecting these results, we can also observe that correlations between features and PPL seem to be more affected by genre-specific characteristics. This is particularly clear for the Italian UD Twitter treebank, where among the top ten most correlated features we find some characterising social media language, e.g. the distribution of symbols (upos-xpos_dist_SYM) or of the vocative relation, which marks a dialogue participant addressed in the text and, in its vocative:mention specification, is specifically used for Twitter @-mentions (dep_dist_vocative:mention).

Table 3

PACCSS-IT
PPL Feats                       Corr      ARA Feats                Corr
aux num pers dist Sing+3        0.53      xpos dist FF             0.34
dep dist cop                    0.51      dep dist punct           0.32
avg max depth                   0.50      upos dist PUNCT          0.32
upos dist ADP                   0.50      ttr form                 0.29
xpos dist E                     0.50      aux mood dist Cnd        0.25
dep dist case                   0.49      upos dist DET            0.25
n tokens                        0.48      dep dist det             0.25
dep dist root                   0.48      ttr lemma                0.22
xpos dist FS                    0.48      upos dist NOUN           0.21

Terence and Teacher
PPL Feats                       Corr      ARA Feats                Corr
xpos dist B                     0.25      dep dist det            -0.39
verbs num pers dist Sing+3      0.23      upos dist DET           -0.38
lexical density                 0.22      upos dist NOUN          -0.37
dep dist advmod                 0.21      xpos dist S             -0.37
upos dist ADV                   0.21      xpos dist RD            -0.29
verbs num pers dist Plur+3     -0.16      upos dist ADV            0.27
xpos dist V                     0.16      dep dist advmod          0.25
avg token per clause           -0.16      xpos dist FF             0.25
upos dist VERB                  0.14      avg sub chain len        0.24

Multi-Genre Multi-Type
PPL Feats                       Corr      ARA Feats                Corr
n tokens                       -0.19      principal prop dist     -0.42
dep dist root                   0.19      ttr form                -0.34
dep dist advmod                 0.19      xpos dist FF             0.34
upos dist ADV                   0.18      dep dist det            -0.33
n prepositional chains         -0.18      upos dist DET           -0.33
xpos dist B                     0.18      upos dist PUNCT          0.33
upos dist ADP                  -0.17      dep dist punct           0.33
xpos dist E                    -0.17      xpos dist FB             0.31
ttr lemma                       0.16      sub prop dist            0.27

Italian UD Treebank
PPL Feats                       Corr      ARA Feats                Corr
n tokens                       -0.27      principal prop dist     -0.53
dep dist root                   0.27      sub proposition dist     0.40
n prepositional chains         -0.26      n tokens                 0.39
avg max depth                  -0.24      dep dist root           -0.39
upos dist ADP                  -0.24      ttr form                -0.37
ttr lemma                       0.23      avg max depth            0.36
max links len                  -0.23      avg links len            0.35
avg max links len              -0.23      max links len            0.34
xpos dist E                    -0.22      avg max links len        0.34

Italian UD Twitter Treebank
PPL Feats                       Corr      ARA Feats                Corr
upos dist SYM                   0.38      upos dist PUNCT          0.30
avg max depth                  -0.28      dep dist punct           0.30
xpos dist SYM                   0.28      dep dist det            -0.29
in dict                        -0.24      upos dist DET           -0.29
dep dist vocative:mention       0.23      verbal root perc        -0.27
in dict types                  -0.22      xpos dist RD            -0.27
ttr lemma                       0.21      avg token per clause    -0.27
in FO                          -0.21      subj pre                -0.27
verbal head per sent           -0.19      obj post                -0.24

Top 10 features along with their correlation scores with perplexity (PPL) and readability (ARA) for each dataset.

5. Conclusion

This paper presented a study investigating the relationship between two metrics computed at sentence level, i.e. the perplexity of a state-of-the-art NLM for the Italian language and the readability score automatically assigned to a sentence by a supervised classifier. We carried out our analysis on several datasets differing in textual genre and language variety. We observed that, comparing the rankings obtained with the two metrics, no meaningful correlation can be found, either between the scores of the two metrics or with respect to the sets of linguistic features that most affect their values. Further investigation within this line of research will explore whether the same observations hold when a different NLM is used to compute sentence perplexity.

Bibliography

Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. “The Wacky Wide Web: A Collection of Very Large Linguistically Processed Web-Crawled Corpora.” Language Resources and Evaluation 43 (3): 209–26.

C. Bosco, S. Montemagni, and M. Simi. 2013. “Converting Italian Treebanks: Towards an Italian Stanford Dependency Treebank.” In Proceedings of the ACL Linguistic Annotation Workshop & Interoperability with Discourse. Sofia, Bulgaria.

Dominique Brunato, Andrea Cimino, Felice Dell’Orletta, and Giulia Venturi. 2016. “PaCCSS-IT: A Parallel Corpus of Complex-Simple Sentences for Automatic Text Simplification.” In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 351–61. Austin, Texas: Association for Computational Linguistics. https://doi.org/10.18653/v1/D16-1034.

Dominique Brunato, Felice Dell’Orletta, Giulia Venturi, and Simonetta Montemagni. 2015. “Design and Annotation of the First Italian Corpus for Text Simplification.” In Proceedings of the 9th Linguistic Annotation Workshop, 31–41.

Dominique Brunato, Lorenzo De Mattei, Felice Dell’Orletta, Benedetta Iavarone, and Giulia Venturi. 2018. “Is This Sentence Difficult? Do You Agree?” In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2690–9. Brussels, Belgium: Association for Computational Linguistics. https://doi.org/10.18653/v1/D18-1289.

Chih-Chung Chang and Chih-Jen Lin. 2001. “LIBSVM: A Library for Support Vector Machines.”

Alessandra Teresa Cignarella, Cristina Bosco, and Paolo Rosso. 2019. “Presenting TWITTIRÒ-UD: An Italian Twitter Treebank in Universal Dependencies.” In Proceedings of the Fifth International Conference on Dependency Linguistics (Depling, SyntaxFest 2019). https://www.aclweb.org/anthology/W19-7723.pdf.

Trevor Cohen and Serguei Pakhomov. 2020. “A Tale of Two Perplexities: Sensitivity of Neural Language Models to Lexical Retrieval Deficits in Dementia of the Alzheimer’s Type.” In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 1946–57. Online: Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.acl-main.176.

Felice Dell’Orletta, Simonetta Montemagni, and Giulia Venturi. 2011. “READ–IT: Assessing Readability of Italian Texts with a View to Text Simplification.” In Proceedings of the Second Workshop on Speech and Language Processing for Assistive Technologies, 73–83. Edinburgh, Scotland, UK: Association for Computational Linguistics. https://www.aclweb.org/anthology/W11-2308.

Rodolfo Delmonte, Antonella Bristot, and Sara Tonelli. 2007. “VIT - Venice Italian Treebank: Syntactic and Quantitative Features.” In Proceedings of the Sixth International Workshop on Treebanks and Linguistic Theories.

Lorenzo De Mattei, Michele Cafagna, Felice Dell’Orletta, Malvina Nissim, and Marco Guerini. 2020. “GePpeTto Carves Italian into a Language Model.” arXiv Preprint arXiv:2004.14253.

Tullio De Mauro. 2000. Il Dizionario Della Lingua Italiana. Vol. 1. Paravia.

V. Demberg and Frank Keller. 2008. “Data from Eye-Tracking Corpora as Evidence for Theories of Syntactic Processing Complexity.” Cognition 109: 193–210.

Keyur Gabani, Melissa Sherman, Thamar Solorio, Yang Liu, Lisa Bedore, and Elizabeth Peña. 2009. “A Corpus-Based Approach for the Prediction of Language Impairment in Monolingual English and Spanish-English Bilingual Children.” In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, 46–55. Boulder, Colorado: Association for Computational Linguistics. https://www.aclweb.org/anthology/N09-1006.

Pablo Gamallo, Jose Ramom Pichel, and Iñaki Alegria. 2017. “A Perplexity-Based Method for Similar Languages Discrimination.” In Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial), 109–114. Valencia, Spain: Association for Computational Linguistics.

M. González. 2015. “An Analysis of Twitter Corpora and the Differences Between Formal and Colloquial Tweets.” In TweetMT@SEPLN.

Adam Goodkind and Klinton Bicknell. 2018. “Predictive Power of Word Surprisal for Reading Times Is a Linear Function of Language Model Quality.” In Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics, CMCL 2018, Salt Lake City, Utah, Usa, January 7, 2018, edited by Asad B. Sayeed, Cassandra Jacobs, Tal Linzen, and Marten Van Schijndel, 10–18. Association for Computational Linguistics. https://doi.org/10.18653/v1/w18-0102.

Ganesh Jawahar, Benoît Sagot, and Djamé Seddah. 2019. “What Does BERT Learn About the Structure of Language?” In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL). Florence, Italy.

Alessio Miaschi, Dominique Brunato, Felice Dell’Orletta, and Giulia Venturi. 2020. “Linguistic Profiling of a Neural Language Model.” arXiv Preprint arXiv:2010.01869.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. “Language Models Are Unsupervised Multitask Learners.”

Manuela Sanguinetti and Cristina Bosco. 2015. “PartTUT: The Turin University Parallel Treebank.” In Harmonization and Development of Resources and Tools for Italian Natural Language Processing Within the PARLI Project, edited by Roberto Basili et al., 51–69. Springer. https://link.springer.com/book/10.1007/978-3-319-14206-7.

Manuela Sanguinetti, Cristina Bosco, Alberto Lavelli, Alessandro Mazzei, and Fabio Tamburini. 2018. “PoSTWITA-UD: An Italian Twitter Treebank in Universal Dependencies.” In Proceedings of the Eleventh Language Resources and Evaluation Conference (LREC 2018). https://www.aclweb.org/anthology/L18-1279.pdf.

M. Straka, J. Hajic, and J. Strakova. 2016. “UDPipe: Trainable Pipeline for Processing CoNLL-U Files Performing Tokenization, Morphological Analysis, POS Tagging and Parsing.” In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016).

Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. “BERT Rediscovers the Classical NLP Pipeline.” In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 4593–4601. Florence, Italy: Association for Computational Linguistics. https://doi.org/10.18653/v1/P19-1452.

Daniel Zeman, Joakim Nivre, Mitchell Abrams, et al. 2019. “Universal Dependencies 2.5.” LINDAT/CLARIAH-CZ Digital Library at the Institute of Formal and Applied Linguistics (ÚFAL). https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105.

