The authors would like to thank Maria Simi and Roberta Montefusco for providing the EVALITA14 gold standard set, and the two anonymous reviewers for their valuable feedback. MF would also like to thank the EPSRC for its support in the form of a doctoral training grant.
1. Introduction
Collecting and manually annotating linguistic data (typically referred to as a gold standard) is a very expensive activity, both in terms of time and effort (Tomanek et al., 2007). For this reason, in recent years the question of whether good Natural Language Processing (NLP) models can be trained using only automatically annotated data (called a silver standard) has been attracting interest (Hahn et al., 2010; Chowdhury and Lavelli, 2011).
In this case, human annotations are replaced by those generated by pre-existing state-of-the-art systems. The annotations are then merged using a committee approach specifically tailored to the data (Rebholz-Schuhmann et al., 2010a). The key advantage of such an approach is the possibility to drastically reduce both time and effort, thus generating considerably larger data sets in a fraction of the time. This is particularly true for text data in fields such as temporal information extraction (Filannino et al., 2013), text chunking (Kang et al., 2012) and named entity recognition (Rebholz-Schuhmann et al., 2010b; Nothman et al., 2013), to cite just a few, and for non-textual data, as in medical image recognition (Langs et al., 2013).
In this paper we focus on the case of dependency parsing for the Italian language. Dependency parsers are systems that automatically generate the linguistic dependency structure of a given sentence (Nivre, 2005). An example is given in Figure 1 for the sentence "Essenziale per l'innesco delle reazioni è la presenza di radiazione solare." (The presence of solar radiation is essential for triggering the reactions). We investigate whether very large silver standard corpora can be used to train good dependency parsers, in order to address the following question: which characteristic of a training set is more important, quantity or quality?
The paper is organised as follows: Section 2 presents background work on dependency parsers for Italian; Section 3 presents the silver standard corpus used for the experiments and its linguistic features; Section 4 describes the experimental settings and Section 5 the results obtained by the trained parsers (for different data sizes) on the two test sets: gold and silver. Finally, the paper's contributions are summed up in Section 6.
Figure 1: An example of a dependency tree for an Italian sentence.
2. Background
Since dependency parsing systems play a pivotal role in NLP, their quality is crucial in fostering the development of novel applications. Nowadays dependency parsers are mostly data-driven and mainly designed around machine learning classifiers. Such systems "train classifiers that predict the next action of a deterministic parser constructing unlabelled dependency structures" (Nivre, 2005).
As for other languages, ad-hoc cross-lingual and mono-lingual shared tasks are organised every year for Italian to push the boundaries of such technologies (Buchholz and Marsi, 2006; Bosco et al., 2009; Bosco and Mazzei, 2011; Bosco et al., 2014). The most important shared task on dependency parsing for Italian is hosted by the EVALITA series, in which participants are provided with manually annotated training data and their systems are evaluated on an undisclosed portion of the data. Since the systems presented so far have reached an overall performance close to 90% (Lavelli, 2014), we believe that the question of whether we can start using silver standards is a relevant one.
3. The corpus
The silver standard data comes from a freely available corpus created as part of the project PAISÀ (Piattaforma per l'Apprendimento dell'Italiano Su corpora Annotati) (Lyding et al., 2014). The project was aimed at "overcoming the technological barriers currently preventing web users from having interactive access to and use of large quantities of data of contemporary Italian to improve their language skills".
The PAISÀ corpus is a set of about 380,000 Italian texts collected by systematically harvesting the web for frequent Italian collocations. It consists of about 13M sentences and 265M tokens, fully annotated in CoNLL format. The average sentence length is about 20 tokens.
The Part-of-Speech tags have been automatically annotated using the ILC-POSTAGGER (Dell'Orletta, 2009) and the dependency structure using the DeSR Dependency Parser (Attardi et al., 2007), the top-performing system at the EVALITA shared task. The PoS tags are annotated according to the TANL tagset, whereas the dependency relations follow the ISST-TANL tagset. These automatic annotations have subsequently been revised and manually corrected in different stages: text cleaning, annotation correction and tool alignment.
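For illustration, the following sketch (not part of the original PAISÀ tools; the field layout is assumed to follow the standard 10-column CoNLL-X convention) shows how sentences in such a CoNLL-formatted file can be read, one list of token rows per sentence:

```python
# Minimal reader sketch for a CoNLL-formatted corpus file.
# Assumption: one token per line, tab-separated fields, a blank line
# between sentences, and optional comment lines starting with '#'.
def read_conll_sentences(path):
    """Yield each sentence as a list of token rows (lists of field strings)."""
    sentence = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                if sentence:          # blank line closes the current sentence
                    yield sentence
                    sentence = []
            elif not line.startswith("#"):
                sentence.append(line.split("\t"))
    if sentence:                      # file may end without a trailing blank line
        yield sentence
```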
Unfortunately we found that the PAISÀ corpus includes some sentences which cannot be used for training purposes due to invalid CoNLL representations (i.e. duplicated or missing IDs, and invalid dependency relations). These sentences represent 6.04% of the corpus, yet only 0.10% of the tokens. This difference shows that the invalid sentences are mostly very short.
We therefore created a filtered corpus containing only the valid sentences, which we will refer to from now on as silver, as opposed to the EVALITA corpus, referred to as gold. In the latter, we merged the training and development sets for training purposes, whereas we did not modify the official test set.
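As an indication of the kind of filter involved, the snippet below checks the two error types mentioned above on a sentence read with the reader sketched earlier; the exact validity rules used to build the filtered corpus are not detailed in the paper, so these checks are assumptions:

```python
# Hedged sketch of a CoNLL validity check: token IDs must form the
# sequence 1..n without duplicates or gaps, and every head must point
# to an existing token (0 denotes the artificial root) and not to itself.
def is_valid_sentence(rows):
    try:
        ids = [int(r[0]) for r in rows]      # column 1: token ID
        heads = [int(r[6]) for r in rows]    # column 7: head ID
    except (ValueError, IndexError):
        return False
    if ids != list(range(1, len(rows) + 1)):
        return False
    n = len(rows)
    return all(0 <= h <= n and h != i for i, h in zip(ids, heads))
```

A filtered (silver) corpus can then be obtained by keeping only the sentences for which `is_valid_sentence` returns `True`.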
4. Experiments
4.1 Test corpora
We quantitatively measured the performance of the proposed parsers with respect to two test sets: gold and silver.
Table 1: PAISÀ corpus statistics. The figures show the presence of many short invalid sentences.

                 | original | filtered | ∆%
Sentences        | 13.1M    | 12.3M    | 93.96%
Tokens           | 264.9M   | 264.6M   | 99.90%
Sentence length  | 20.3     | 21.5     | -
The gold test set corresponds to the official benchmark test set for the EVALITA 2014 dependency parsing task. It contains 344 manually annotated sentences with 9,066 tokens (∼26 tokens per sentence). The silver test set, instead, is composed of 1,000 sentences randomly selected from the silver data, which have not been used for training in the experiments.
4.2 Experimental setting
The experiments have been carried out using nine different sizes of training set drawn from the silver data: 500, 1K, 5K, 10K, 25K, 75K, 125K, 250K and 500K sentences. A limitation of the learning algorithm prevented us from considering even larger training sets: the instance × feature matrix would exceed the maximum size allowed by the liblinear implementation used.
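The sampling scheme is not specified in the paper; a plausible sketch, assuming that each subset is drawn at random from the filtered silver corpus, is:

```python
# Illustrative subset construction (the actual sampling procedure and
# the random seed are assumptions, not taken from the paper).
import random

SIZES = (500, 1_000, 5_000, 10_000, 25_000, 75_000, 125_000, 250_000, 500_000)

def make_training_subsets(sentences, sizes=SIZES, seed=42):
    rng = random.Random(seed)
    shuffled = list(sentences)
    rng.shuffle(shuffled)
    return {size: shuffled[:size] for size in sizes}
```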
We used the Unlabelled Attachment Score (UAS), which considers only the structure of the dependency tree and assesses whether each token has been assigned the correct head. The choice of UAS is justified by the fact that the gold and silver label sets are not compatible.
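A minimal sketch of the UAS computation on CoNLL-style data follows (punctuation handling and other evaluation details are omitted; column indices follow the CoNLL-X layout assumed above):

```python
# UAS: proportion of tokens whose predicted head matches the gold head.
def unlabelled_attachment_score(gold_sentences, predicted_sentences):
    correct = total = 0
    for gold, pred in zip(gold_sentences, predicted_sentences):
        for g_row, p_row in zip(gold, pred):
            total += 1
            if g_row[6] == p_row[6]:    # compare HEAD fields only
                correct += 1
    return correct / total if total else 0.0
```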
We trained the models with MaltParser v.1.8.1 using the default parameters.
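For reference, training and parsing with MaltParser can be driven from the command line (here wrapped in Python; the jar file name and paths are illustrative, while -c, -i, -o and -m learn/parse are standard MaltParser options):

```python
# Sketch of invoking MaltParser with default parameters.
import subprocess

def train_malt(train_conll, model_name="silver_model"):
    # 'learn' mode trains a model configuration named <model_name>
    subprocess.run(["java", "-jar", "maltparser-1.8.1.jar",
                    "-c", model_name, "-i", train_conll, "-m", "learn"],
                   check=True)

def parse_malt(input_conll, output_conll, model_name="silver_model"):
    # 'parse' mode applies the trained model to new CoNLL input
    subprocess.run(["java", "-jar", "maltparser-1.8.1.jar",
                    "-c", model_name, "-i", input_conll,
                    "-o", output_conll, "-m", "parse"],
                   check=True)
```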
The overall set of experiments took about a month with 16 CPU cores and 128 GB of RAM.
5. Results
The complete results are presented in Table 2. The nine parsers trained on silver data perform poorly when tested against the gold test set (∼32%). The same happens in the opposite setting: the parser trained on the gold data and tested on the silver test set (last row of Table 2). By training on one set and testing on the other (gold vs. silver), performance immediately drops by about 35%.
When the parser is trained on and tested against the gold data, the performance is 85.85%. This configuration corresponds to the EVALITA14 setting and provides results comparable with those obtained by the afore-mentioned challenge's participants.
Table 2: Parsers' performance against the silver and gold test sets. Silver data refers to the PAISÀ corpus, whereas gold refers to the EVALITA14 training and development set. Silver data have been used for training in different sizes. Sizes are expressed in number of sentences.

Training set       | UAS against
corpus | size      | gold test | silver test
silver | 500       | 30.14     | 66.11
silver | 1,000     | 30.95     | 67.00
silver | 5,000     | 32.21     | 69.11
silver | 10,000    | 32.44     | 69.56
silver | 25,000    | 32.83     | 69.92
silver | 75,000    | 33.22     | 69.79
silver | 125,000   | 33.47     | 70.27
silver | 250,000   | 33.58     | 70.23
silver | 500,000   | 33.20     | 71.17
gold   | 7,978     | 85.85     | 48.30
The interesting result lies in the fact that providing a data set 1,000 times bigger does not significantly enhance the performance. This is true regardless of the test set used: gold (3.06% variance) and silver (4.89% variance). Moreover, training a parser on a data set smaller than its test set does not negatively affect the final performance.
Figure 2 depicts the performance curves for the models trained on silver data only.
In order to allow for the reproducibility of this research and the possibility of using these new resources, we make the dependency parsing models and the data sets used publicly available at http://www.cs.man.ac.uk/˜filannim/projects/dp_italian/.
6. Conclusions
We presented a set of experiments to investigate the contribution of silver standards when used as a substitute for gold standard data. Similar investigations are attracting interest across NLP subcommunities due to the high cost of generating gold data.
24The results presented in this paper highlight two important facts:
Figure 2: Parsers' performance against the silver and gold test sets. In both cases, the models exhibit an asymptotic behaviour. The figures are reported in Table 2. Silver data sizes express the number of sentences; 'K' stands for 1,000.
- Increasing the size of the training corpus does not provide any sensible difference in terms of performance. In both test sets, a number of sentences between 5,000 and 10,000 seems to be enough to obtain reliable training. We note that the size of the EVALITA training set lies within this range.
- The annotations of the gold and silver corpora may differ. This is suggested by the fact that none of the parsers achieved a satisfactory performance when trained and tested on different sources.
We also note that the gold and silver test data sets have different characteristics (average sentence length, lexicon and type of annotation), which may partially explain the gap. On the other hand, the fact that a parser re-trained on annotations produced by a state-of-the-art system in the EVALITA task (DeSR) performs poorly on the very same gold set suggests that the official benchmark test set may not be representative enough.
The main limitation of this study lies in the fact that the experiments have not been repeated multiple times, so we have no information about the variance of the figures (UAS columns in Table 2). On the other hand, the large size of the data sets involved and the absence of any outlier figure suggest that the overall trends should not change. With the computational facilities available to us for this research, a full analysis of that sort would have required years to complete.
The results presented in the paper shed light on a recent research question about the employability of automatically annotated data. In the context of dependency parsing for Italian, we provided evidence that the quality of the annotation matters far more than its quantity.
A similar study on languages other than Italian would constitute an interesting future extension of the research presented here.