
EVALITA Evaluation of NLP and Speech Tools for Italian - December 17th, 2020

Valerio Basile, Danilo Croce, Maria Di Maro, et al.

DIACR-Ita: Diachronic Lexical Semantics

OP-IMS @ DIACR-Ita: Back to the Roots: SGNS+OP+CD still Rocks Semantic Change Detection

Jens Kaiser, Dominik Schlechtweg and Sabine Schulte im Walde

Abstract

We present the results of our participation in the DIACR-Ita shared task on lexical semantic change detection for Italian. We exploit one of the earliest and most influential semantic change detection models based on Skip-Gram with Negative Sampling, Orthogonal Procrustes alignment and Cosine Distance, and obtain the winning submission of the shared task with near-perfect accuracy (.94). Our results once more indicate that, within the present task setup in lexical semantic change detection, the traditional type-based approaches yield excellent performance.

Editor's note

Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

Full text

Dominik Schlechtweg was supported by the Konrad Adenauer Foundation and the CRETA center funded by the German Ministry for Education and Research (BMBF) during the conduct of this study. We thank the task organizers and reviewers for their efforts.

1. Introduction

Lexical Semantic Change (LSC) Detection has drawn increasing attention in recent years (Kutuzov et al. 2018; Tahmasebi, Borin, and Jatowt 2018). Recently, SemEval-2020 Task 1 provided a multi-lingual evaluation framework to compare the variety of proposed model architectures (Schlechtweg et al. 2020). The DIACR-Ita shared task extends parts of this framework to Italian by providing an Italian data set for SemEval's binary subtask (P. Basile et al. 2020; V. Basile et al. 2020).

We present the results of our participation in the DIACR-Ita shared task exploiting one of the earliest and most established semantic change detection models based on Skip-Gram with Negative Sampling, Orthogonal Procrustes alignment and Cosine Distance (Hamilton, Leskovec, and Jurafsky 2016a). Based on our previous research (Schlechtweg et al. 2019; Kaiser et al. 2020), we optimize the dimensionality parameter, assuming that high dimensionalities reduce alignment error. With this setting we win the shared task with near-perfect accuracy (.94). Our results once more demonstrate that, within the present task setup in lexical semantic change detection, the traditional type-based approaches yield excellent performance.

2. Related Work

The field of LSC detection (LSCD) is currently dominated by Vector Space Models (VSMs), which can be divided into type-based (Turney and Pantel 2010) and token-based (Schütze 1998) models. Prominent type-based models include low-dimensional embeddings such as Global Vectors (GloVe; Pennington, Socher, and Manning 2014), the Continuous Bag-of-Words (CBOW) and Continuous Skip-gram models, as well as a slight modification of the latter, the Skip-gram with Negative Sampling model (SGNS; Mikolov, Chen, et al. 2013; Mikolov, Sutskever, et al. 2013). However, as these models come with the deficiency that they aggregate all senses of a word into a single representation, token-based embeddings have been proposed (Peters et al. 2018; Devlin et al. 2019). In principle, these models can capture complex characteristics of word use and how they vary across linguistic contexts. The results of SemEval-2020 Task 1 (Schlechtweg et al. 2020), however, show that, contrary to this expectation, the token-based embedding models (Beck 2020; Kutuzov and Giulianelli 2020) are heavily outperformed by the type-based ones (Pražák et al. 2020; Asgari, Ringlstetter, and Schütze 2020). The SGNS model was not only widely used, but also performed best among the participants in the task. Its fast implementation and its combination possibilities with different alignment types further solidify SGNS as the standard in LSCD. A common and surprisingly robust (Schlechtweg et al. 2019; Kaiser et al. 2020) practice is to align the time-specific SGNS embeddings with Orthogonal Procrustes (OP) and measure change with Cosine Distance (CD) (Kulkarni et al. 2015; Hamilton, Leskovec, and Jurafsky 2016b). This robustness has been shown in several small but independent experiments (Hamilton, Leskovec, and Jurafsky 2016b; Schlechtweg et al. 2019; Kaiser et al. 2020; Shoemark et al. 2019), and SGNS+OP+CD has produced two of the three top-performing submissions in Subtask 2 of SemEval-2020 Task 1, including the winning submission (Pömsl and Lyapin 2020; Arefyev and Zhikov 2020).

3. System overview

Most VSMs in LSC detection combine three sub-systems: (i) creating semantic word representations, (ii) aligning them across corpora, and (iii) measuring differences between the aligned representations (Schlechtweg et al. 2019). Alignment is needed because columns from different vector spaces may not correspond to the same coordinate axes, due to the stochastic nature of many low-dimensional word representations (Hamilton, Leskovec, and Jurafsky 2016b). Following the above-described success, we use SGNS to create word representations, in combination with Orthogonal Procrustes (OP) for vector space alignment and Cosine Distance (CD) (Salton and McGill 1983) to measure differences between word vectors. From the resulting graded change predictions we infer binary change values by comparing the target word distribution to the full distribution of change predictions between the target corpora. For our experiments we use publicly available code.1
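To make the interplay of the three sub-systems concrete, the following Python sketch composes them end to end. The helper names train_sgns, align_matrices and binary_labels are hypothetical placeholders for the steps detailed in Sections 3.1-3.3, not the interface of the code referenced in footnote 1.

```python
# Illustrative composition of the three sub-systems: representation (3.1),
# alignment (3.2) and measurement plus thresholding (3.3).
import numpy as np

def cosine_distance(u, v):
    # CD ranges from 0.0 (same direction) to 2.0 (opposite direction)
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def detect_change(corpus1, corpus2, targets):
    m1 = train_sgns(corpus1)            # time-specific SGNS embeddings (Section 3.1)
    m2 = train_sgns(corpus2)
    A, B, row_of = align_matrices(m1, m2)  # OP-aligned matrices over the shared vocabulary (3.2)
    dists = {w: cosine_distance(A[row_of[w]], B[row_of[w]]) for w in row_of}
    return binary_labels(dists, targets)   # graded change thresholded into binary labels (3.3)
```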

3.1 Semantic Representation

SGNS is a shallow neural network trained on pairs of word co-occurrences extracted from a corpus with a symmetric window. It represents each word w and each context c as a d-dimensional vector by solving

$$\arg\max_\theta \sum_{(w,c)\in D} \log \sigma(v_c \cdot v_w) \;+\; \sum_{(w,c)\in D'} \log \sigma(-v_c \cdot v_w),$$

where $\sigma(x) = \frac{1}{1+e^{-x}}$, D is the set of all observed word-context pairs and D' is the set of randomly generated negative samples (Mikolov, Chen, et al. 2013; Mikolov, Sutskever, et al. 2013; Goldberg and Levy 2014). The optimized parameters $\theta$ are $v_w$ and $v_c$ for $w, c \in V$. D' is obtained by drawing k contexts from the empirical unigram distribution $P(c) = \frac{\#(c)}{|D|}$ for each observation of (w,c). After training, each word w is represented by its word vector $v_w$.

Previous research on the influence of parameter settings on SGNS+OP+CD lays the foundation for our parameter choices (Schlechtweg et al. 2019; Kaiser et al. 2020). Although this sub-system combination is extremely stable regardless of parameter settings, subtle improvements can be achieved by modifying the window size and the dimensionality. A common hurdle in LSC detection is the small corpus size; increasing the standard window size from 5 to 10 leads to the creation of more word-context pairs used for training the model. In addition, we experiment with dimensionalities of 300 and 500, as higher dimensionalities alleviate the introduction of noise during the alignment process (Kaiser et al. 2020). We keep the rest of the parameter settings at their default values (learning rate α = 0.025, number of negative samples k = 5 and sub-sampling threshold t = 0.001).
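As an illustration of these settings, a minimal training call with gensim (4.x) could look as follows. This is a sketch under the assumption of a gensim-based setup, not the original implementation (see footnote 1 for the code actually used).

```python
# Minimal sketch: training one time-specific SGNS model with the hyperparameters
# reported above. The corpus is assumed to be an iterable of tokenized sentences.
from gensim.models import Word2Vec

def train_sgns(sentences, dim=300):
    return Word2Vec(
        sentences=sentences,
        vector_size=dim,   # 300 or 500; higher dimensionality reduces alignment noise
        window=10,         # enlarged from the default 5 to yield more word-context pairs
        sg=1,              # Skip-gram with Negative Sampling
        negative=5,        # k = 5 negative samples
        alpha=0.025,       # initial learning rate
        sample=0.001,      # sub-sampling threshold t
        min_count=1,
        workers=4,
    )
```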

3.2 Alignment

SGNS is trained on each corpus separately, resulting in matrices A and B. To align them, we follow previous work and calculate an orthogonally-constrained matrix $W^*$:

$$W^* = \arg\min_{W \,:\, W^\top W = I} \lVert AW - B \rVert_F,$$

where the i-th rows of A and B correspond to the same word. Using $W^*$ we obtain the aligned matrices $A^{OP} = AW^*$ and $B^{OP} = B$. Prior to this alignment step we length-normalize and mean-center both matrices (Artetxe, Labaka, and Agirre 2017; Schlechtweg et al. 2019).
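A minimal numpy sketch of this step, assuming A and B are row-aligned word-vector matrices over the shared vocabulary (the i-th row of each represents the same word):

```python
# Sketch of length-normalization, mean-centering and Orthogonal Procrustes alignment.
import numpy as np

def preprocess(X):
    X = X / np.linalg.norm(X, axis=1, keepdims=True)   # length-normalize rows
    return X - X.mean(axis=0, keepdims=True)           # mean-center each column

def align_op(A, B):
    A, B = preprocess(A), preprocess(B)
    # Solve W* = argmin ||AW - B||_F over orthogonal W via the SVD of A^T B
    U, _, Vt = np.linalg.svd(A.T @ B)
    W = U @ Vt
    return A @ W, B    # A rotated into B's coordinate system; B left unchanged
```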

3.3 Threshold

The DIACR-Ita shared task requires a binary label for each of the target words. However, CD produces graded values between 0.0 and 2.0 when measuring differences between word vectors from the two time periods. We tackle this problem by defining a threshold parameter, similar to many approaches applied in SemEval-2020 Task 1 (Schlechtweg et al. 2020). All words with a CD greater than or equal to the threshold are labeled '1', indicating change; words with a CD below the threshold are assigned '0', indicating no change.

A simplified approach is to set the threshold such that the number of words is equal in both groups. This has several disadvantages: mainly, it relies on the assumption that the two groups are of equal size, which is rarely given in real-world applications, especially if the focus is on one word at a time. Thus a more sophisticated approach is needed. In SemEval-2020's Subtask 1 many participants faced the same problem and developed various methods to solve it. One approach, similar to the simplified one, looks only at target words and, after fitting the histogram of CDs to a gamma distribution, sets the threshold at the 75% density quantile. This approach resulted in good performance but is not always applicable due to its dependence on underlying properties of the test set. Another approach avoids the dependence on target words by randomly selecting 200 words and setting the threshold such that 90% of these 200 words have a lower distance than the threshold. A more careful selection of words looks at the CDs of semantically stable stop words, accumulates them in different bins and sets the threshold to the upper limit of the bin containing fewer than $\frac{\#\text{stopwords}}{\#\text{bins}}$ words. Yet another proposal is to set the threshold at the mean of the distances of all words in the corpus vocabulary. Our method for determining a threshold is very similar to the latter, but instead of taking the mean, we use the mean plus one standard deviation (μ+σ) of the distances of all words in the corpus vocabulary.
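As a minimal sketch of this μ+σ criterion, assuming a dictionary of cosine distances for the full shared vocabulary has already been computed:

```python
# Sketch of the mu + sigma thresholding: the threshold is the mean plus one standard
# deviation of the CDs of all vocabulary words; targets at or above it are labeled '1'.
import numpy as np

def binary_labels(distances, targets):
    # distances: dict mapping every word in the corpus vocabulary to its CD
    values = np.fromiter(distances.values(), dtype=float)
    threshold = values.mean() + values.std()
    return {w: int(distances[w] >= threshold) for w in targets}
```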

4. Experimental setup


The DIACR-Ita task definition is taken from SemEval-2020 Task 1 Subtask 1 (binary change detection): given a list of target words and a diachronic corpus pair C1 and C2, the task is to identify the respective target words which have changed their meaning between the time periods t1 and t2 (P. Basile et al. 2020; Schlechtweg et al. 2020).2 C1 and C2 have been extracted from Italian newspapers and books. Target words which have changed their meaning are labeled with the value '1', the remaining target words are labeled with '0'. Gold data for the 18 target words was semi-automatically generated from Italian online dictionaries. According to the gold data, 6 of the 18 target words are subject to semantic change between t1 and t2. This gold data was only made public after the evaluation phase. During the evaluation phase each team was allowed to submit 4 predictions for the full list of target words, which were scored using classification accuracy between the predicted labels and the gold data. The final competition ranking compares only the highest of the 4 scores achieved by each team.
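A toy illustration of this scoring scheme (the label lists below are invented examples, not the actual task data): each submission is scored by classification accuracy, and only the best score per team counts.

```python
# Toy illustration of the official scoring: accuracy per submission, best score counts.
def accuracy(pred, gold):
    return sum(p == g for p, g in zip(pred, gold)) / len(gold)

gold = [1, 0, 0, 1, 0, 0]              # hypothetical gold labels
submissions = [[1, 0, 0, 0, 0, 0],     # hypothetical predictions, run 1
               [1, 0, 0, 1, 0, 1]]     # hypothetical predictions, run 2
best_score = max(accuracy(p, gold) for p in submissions)
```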

5. Results

Table 1

entry            | dim | threshold | thresh. value | ACC  | AP
#2               | 300 | (μ+σ)     | .76           | .944 | .915
#4               | 500 | (μ+σ)     | .78           | .889 | .915
#1               | 300 | (50:50)   | .57           | .833 | .915
#3               | 500 | (50:50)   | .64           | .833 | .915
major. baseline  | -   | -         | -             | .667 | .333
freq. baseline   | -   | -         | unk.          | .611 | .418
colloc. baseline | -   | -         | unk.          | .500 | unk.

Accuracy (ACC) and Average Precision (AP) for various parameter settings, thresholds and baselines; freq. baseline: absolute frequency difference between the words in C1 and C2 and an unknown threshold; colloc. baseline: Bag of Words + CD and an unknown threshold; major. baseline: every word labeled with '0'.

Figure 1: (a) d=300, (b) d=500. The background shows the histogram (in gray) of CDs for all words in the corpus vocabulary. The colored bars show the CDs of target words: green indicates that the target word was correctly labeled, red indicates incorrect labeling. The vertical line marks the threshold value (mean + standard deviation).

We created target word rankings using SGNS+OP+CD with dimensionalities of 300 and 500 as described above. From these rankings our predictions are calculated using two different thresholding methods: (i) splitting the targets into two equally-sized groups (50:50) and (ii) using the mean plus one standard deviation (μ+σ) as threshold, cf. Section 3.3. The accuracy scores achieved in this way are listed in Table 1, alongside the official baselines freq. and colloc. and an additional major. baseline. Submission #2 is our highest-scoring submission and won the DIACR-Ita task together with one other undisclosed submission. For both of our rankings the 50:50 threshold yielded lower accuracy than the μ+σ threshold. This is due to the imbalance of changed to unchanged target words in the test set. Using μ+σ as threshold resulted in an optimal split for the ranking created with d=300. For d=500 this threshold was slightly too high with a value of 0.78: the target word palmare, which according to the gold data has undergone semantic change (label '1'), has a CD of 0.76 and was thus incorrectly labeled by our system. Figure 1 shows the histogram of CD values for all words of the corpus vocabulary in gray. The green and red bars correspond to target words: if a target word was correctly labeled its bar is green, incorrectly labeled target words have red bars. From this visualisation we can see that there is a pronounced gap between the CDs of target words which have changed and those which have not. Our proposed threshold method of μ+σ tends to slightly overshoot this gap. This has led to the lower accuracy of submission #4, even though its ranking would have allowed for a higher accuracy. In order to measure the quality of the rankings independently of the threshold we also report AP (Shwartz, Santus, and Schlechtweg 2017) in Table 1, confirming the potentially equal performance of the two rankings.

The method of using the mean plus one standard deviation of the CDs of all words in the corpus vocabulary resulted in good accuracy, but leaves room for improvement. It tends to slightly overshoot the gap between unchanged and changed words, whereas only using the mean shifts the tendency towards undershooting the gap. The optimal threshold seems to lie somewhere in between, though this needs to be confirmed on other, larger data sets. Furthermore, not all binary classification tasks are suitable for the approach of first creating a ranked list of graded change predictions and then choosing a threshold. The data set of SemEval-2020 Task 1 comprises two tasks, a binary and a ranked task for the same target words. There, it is not possible to achieve an accuracy of 1 on the binary task even if all ranks are predicted correctly for the graded task, i.e., binary change is not just high graded change (Schlechtweg et al. 2020).

The one target word which our model labels incorrectly, across a variety of parameter settings, is piovra. According to the gold data this word has not undergone semantic change between t1 and t2, while our system labels it as changed. A possible explanation for the error may be differences in frequency: in C1 piovra appears 35 times, while in C2 it appears 643 times, and SGNS often struggles to create reliable embeddings for low-frequency words (Kaiser et al. 2020). Alternatively, the error could be caused by discrepancies between gold labels and corpora. The task organizers state that the gold data is initially based on Italian online dictionaries such as 'Sabatini Coletti'. In a manual annotation process the gold data is further refined by providing human judges with up to 100 occurrences of each target word, for which they have to identify the used meaning according to the meanings listed in the dictionaries. A target word is labeled as changed if a meaning is observed in C2 which has not been observed in C1. Although not very likely, it is possible that this annotation method fails to detect novel senses in C2. Sabatini Coletti reports that, in addition to the sense "squid", piovra acquired a new sense "a secret criminal organisation deeply rooted in society" in 1983. This might explain why we detect piovra as a word which has undergone semantic change, given that C1 comprises texts from 1948 to 1970 and C2 comprises texts from 1990 to 2014 (P. Basile et al. 2020).

The DIACR-Ita task dataset is a very valuable contribution to the research field of LSC detection and extends the variety of available data sets to the Italian language. Nonetheless, two points are important when interpreting results on this data set: (i) it contains a small number of target words in combination with binary classification, which makes the data set vulnerable to randomness. (ii) Regarding the nature of the gold labels, in addition to possibly not being directly related to the corpora, it is unclear whether they reflect semantic change as sense gain and sense loss as in SemEval's Subtask 1. The online dictionaries which form the basis for the gold data only state sense gains. Thus, it might be possible for a word to completely lose a sense but still be labeled as unchanged.

6. Conclusion

We participated in the DIACR-Ita shared task using well-established type-based methods for diachronic semantic representations in combination with a carefully calculated threshold. We were able to reach first place with a near-perfect accuracy of .94, confirming once more the reliability of the type-based embeddings created by SGNS, OP as an alignment method and CD as a measure of differences between word vectors. The presented approach is very suitable for similar tasks, as no fine-tuning of parameters is needed. Yet, the system relies on the assumption that graded change is indicative of binary classes.

Bibliography

Renfen Hu, Shen Li, and Shichen Liang. 2019. "Diachronic Sense Modeling with Deep Contextualized Word Embeddings: An Ecological View." In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3899–3908. Florence, Italy: Association for Computational Linguistics.

Jinan Zhou, and Jiaxin Li. 2020. "TemporalTeller at SemEval-2020 Task 1: Unsupervised Lexical Semantic Change Detection with Temporal Referencing." In Proceedings of the 14th International Workshop on Semantic Evaluation. Barcelona, Spain: Association for Computational Linguistics.

Nikolay Arefyev, and Vasily Zhikov. 2020. “BOS at SemEval-2020 Task 1: Word Sense Induction via Lexical Substitution for Lexical Semantic Change Detection.” In Proceedings of the 14th International Workshop on Semantic Evaluation. Barcelona, Spain: Association for Computational Linguistics.

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. “Learning Bilingual Word Embeddings with (Almost) No Bilingual Data.” In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, 451–62. Association for Computational Linguistics.

Ehsaneddin Asgari, Christoph Ringlstetter, and Hinrich Schütze. 2020. “EmbLexChange at SemEval-2020 Task 1: Unsupervised Embedding-based Detection of Lexical Semantic Changes.” In Proceedings of the 14th International Workshop on Semantic Evaluation. Barcelona, Spain: Association for Computational Linguistics.

Pierpaolo Basile, Annalina Caputo, Tommaso Caselli, Pierluigi Cassotti, and Rossella Varvara. 2020. “DIACR-Ita @ EVALITA2020: Overview of the EVALITA2020 Diachronic Lexical Semantics (DIACR-Ita) Task.” In Proceedings of the 7th evaluation campaign of Natural Language Processing and Speech tools for Italian (EVALITA 2020), edited by Valerio Basile, Danilo Croce, Maria Di Maro, and Lucia C. Passaro. Online: CEUR.org.

Valerio Basile, Danilo Croce, Maria Di Maro, and Lucia C. Passaro. 2020. "EVALITA 2020: Overview of the 7th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian." In Proceedings of Seventh Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2020), edited by Valerio Basile, Danilo Croce, Maria Di Maro, and Lucia C. Passaro. Online: CEUR.org.

Christin Beck. 2020. “DiaSense at SemEval-2020 Task 1: Modeling sense change via pre-trained BERT embeddings.” In Proceedings of the 14th International Workshop on Semantic Evaluation. Barcelona, Spain: Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. “BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding.” In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 4171–86. Minneapolis, Minnesota: Association for Computational Linguistics. https://doi.org/10.18653/v1/N19-1423.

Yoav Goldberg, and Omer Levy. 2014. "Word2vec Explained: Deriving Mikolov et al.'s Negative-Sampling Word-Embedding Method." arXiv:1402.3722.

William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2016a. “Cultural Shift or Linguistic Drift? Comparing Two Computational Measures of Semantic Change.” In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 2116–21. Austin, Texas: Association for Computational Linguistics.

William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2016b. “Diachronic Word Embeddings Reveal Statistical Laws of Semantic Change.” In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 1489–1501. Berlin, Germany: Association for Computational Linguistics.

Jens Kaiser, Dominik Schlechtweg, Sean Papay, and Sabine Schulte im Walde. 2020. “IMS at SemEval-2020 Task 1: How low can you go? Dimensionality in Lexical Semantic Change Detection.” In Proceedings of the 14th International Workshop on Semantic Evaluation. Barcelona, Spain: Association for Computational Linguistics.

Vivek Kulkarni, Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2015. “Statistically Significant Detection of Linguistic Change.” In Proceedings of the 24th International Conference on World Wide Web, WWW, 625–35. Florence, Italy.

Andrey Kutuzov, and Mario Giulianelli. 2020. “UiO-UvA at SemEval-2020 Task 1: Contextualised Embeddings for Lexical Semantic Change Detection.” In Proceedings of the 14th International Workshop on Semantic Evaluation. Barcelona, Spain: Association for Computational Linguistics.

Andrey Kutuzov, Lilja Øvrelid, Terrence Szymanski, and Erik Velldal. 2018. “Diachronic Word Embeddings and Semantic Shifts: A Survey.” In Proceedings of the 27th International Conference on Computational Linguistics, 1384–97. Santa Fe, New Mexico, USA: Association for Computational Linguistics.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. “Efficient Estimation of Word Representations in Vector Space.” In 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, Usa, May 2-4, 2013, Workshop Track Proceedings, edited by Yoshua Bengio and Yann LeCun.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. “Distributed Representations of Words and Phrases and Their Compositionality.” In Advances in Neural Information Processing Systems 26, 3111–9. Lake Tahoe, Nevada, USA.

Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. “Glove: Global Vectors for Word Representation.” In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, 1532–43. Doha, Qatar.

Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. “Deep Contextualized Word Representations.” In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2227–37. New Orleans, LA, USA.

Martin Pömsl, and Roman Lyapin. 2020. “CIRCE at SemEval-2020 Task 1: Ensembling Context-Free and Context-Dependent Word Representations.” In Proceedings of the 14th International Workshop on Semantic Evaluation. Barcelona, Spain: Association for Computational Linguistics.

Ondřej Pražák, Pavel Přibáň, Stephen Taylor, and Jakub Sido. 2020. "UWB at SemEval-2020 Task 1: Lexical Semantic Change Detection." In Proceedings of the 14th International Workshop on Semantic Evaluation. Barcelona, Spain: Association for Computational Linguistics.

Gerard Salton, and Michael J McGill. 1983. Introduction to Modern Information Retrieval. New York: McGraw-Hill Book Company.

Dominik Schlechtweg, Anna Hätty, Marco del Tredici, and Sabine Schulte im Walde. 2019. “A Wind of Change: Detecting and Evaluating Lexical Semantic Change Across Times and Domains.” In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 732–46. Florence, Italy: Association for Computational Linguistics.

Dominik Schlechtweg, Barbara McGillivray, Simon Hengchen, Haim Dubossarsky, and Nina Tahmasebi. 2020. “SemEval-2020 Task 1: Unsupervised Lexical Semantic Change Detection.” In Proceedings of the 14th International Workshop on Semantic Evaluation. Barcelona, Spain: Association for Computational Linguistics.

Hinrich Schütze. 1998. “Automatic Word Sense Discrimination.” Computational Linguistics 24 (1): 97–123.

Philippa Shoemark, Farhana Ferdousi Liza, Dong Nguyen, Scott Hale, and Barbara McGillivray. 2019. “Room to Glo: A Systematic Comparison of Semantic Change Detection Approaches with Word Embeddings.” In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 66–76. Hong Kong, China: Association for Computational Linguistics.

Vered Shwartz, Enrico Santus, and Dominik Schlechtweg. 2017. “Hypernyms Under Siege: Linguistically-Motivated Artillery for Hypernymy Detection.” In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, Valencia, Spain, 65–75.

Nina Tahmasebi, Lars Borin, and Adam Jatowt. 2018. “Survey of Computational Approaches to Diachronic Conceptual Change.” CoRR abs/1811.06278. http://arxiv.org/abs/1811.06278.

Peter D. Turney, and Patrick Pantel. 2010. “From Frequency to Meaning: Vector Space Models of Semantics.” J. Artif. Int. Res. 37 (1): 141–88.

Notes

1 https://github.com/Garrafao/LSCDetection

2 The time periods t1 and t2 were not disclosed to participants.

Authors

Institute for Natural Language Processing, University of Stuttgart – jens.kaiser@ims.uni-stuttgart.de
