
EVALITA Evaluation of NLP and Speech Tools for Italian – December 17th, 2020
Edited by Valerio Basile, Danilo Croce, Maria Di Maro, et al.

DIACR-Ita: Diachronic Lexical Semantics

UWB @ DIACR-Ita: Lexical Semantic Change Detection with CCA and Orthogonal Transformation

Ondřej Pražák, Pavel Přibáň and Stephen Taylor

Abstract

In this paper, we describe our method for detecting lexical semantic change (i.e., word sense changes over time) for the DIACR-Ita shared task, in which we ranked 1st. We examine semantic differences between specific words in two Italian corpora drawn from different time periods. Our method is fully unsupervised and language independent. It consists of preparing a semantic vector space for each corpus, earlier and later; computing a linear transformation between the earlier and later spaces, using CCA and Orthogonal Transformation; and measuring the cosines between the transformed vectors.

Editor's note

Equal contribution. Copyright 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0)

Full text

This work has been partly supported by ERDF “Research and Development of Intelligent Components of Advanced Technologies for the Pilsen Metropolitan Area (InteCom)” (no. CZ.02.1.01/0.0/0.0/17 048/0007267); by the project LO1506 of the Czech Ministry of Education, Youth and Sports; and by Grant No. SGS-2019-018 “Processing of heterogeneous data and its specialized applications”. Access to the computing and storage facilities owned by parties and projects contributing to the National Grid Infrastructure MetaCentrum, provided under the programme “Projects of Large Research, Development, and Innovations Infrastructures” (CESNET LM2015042), is greatly appreciated.

1. Introduction

Language evolves with time. New words appear, old words fall out of use, and the meanings of some words shift. There are changes in topics, syntax, and presentation structure. Reading the natural-philosophy musings of aristocratic amateurs from the eighteenth century, and comparing them with a monograph from the nineteenth century or a medical study from the twentieth century, we can observe differences in many dimensions, some of which need a deep historical background to study. Changes in word senses are both a visible and a tractable part of language evolution.

Computational methods for researching the stories of words have the potential to help us understand this small corner of linguistic evolution. The tools for measuring these diachronic semantic shifts might also be useful for measuring whether the same word is used in different ways in synchronic documents. The task of finding word sense changes over time is called diachronic Lexical Semantic Change (LSC) detection, and it has been getting more attention in recent years (Hamilton, Leskovec, and Jurafsky 2016a; Schlechtweg et al. 2017, 2020). There is also a synchronic LSC task, which aims to identify domain-specific changes of word senses compared to general-language usage (Schlechtweg et al. 2019).

1.1 Related Work

Kutuzov et al. (2018) provide a comprehensive survey of techniques for the LSC task. Schlechtweg et al. (2019) evaluate the available approaches for LSC detection using the DURel dataset (Schlechtweg, Schulte im Walde, and Eckmann 2018). Schlechtweg et al. (2020) present the results of the first shared task addressing the LSC problem and provide an evaluation dataset that was manually annotated for four languages.

According to Schlechtweg et al. (2020), there are three main types of approaches. (1) Semantic vector space approaches (Gulordava and Baroni 2011; Eger and Mehler 2016; Hamilton, Leskovec, and Jurafsky 2016b, 2016a; Rosenfeld and Erk 2018; Pražák et al. 2020) represent each word with two vectors for the two different time periods; the change of meaning is then measured by some distance (usually the cosine distance) between the two vectors. (2) Topic modeling approaches (Bamman and Crane 2011; Mihalcea and Nastase 2012; Cook et al. 2014; Frermann and Lapata 2016; Schlechtweg and Schulte im Walde 2020) estimate a probability distribution of words over their different senses, i.e., topics. (3) Clustering models (Mitra et al. 2015; Tahmasebi and Risse 2017) group individual word usages into clusters that represent senses and track how these clusters change over time.

1.2 The DIACR-Ita task

The goal of the DIACR-Ita task (P. Basile et al. 2020; V. Basile et al. 2020) is to determine whether a set of Italian words (target words) changed their meaning from time period t1 to time period t2 (i.e., a binary classification task). The organizers provide the corresponding corpora C1 and C2 and a list of target words. Only these inputs may be used to train the systems, which must judge, for each target word, whether its meaning changed or not. The task is the same as the binary sub-task of the SemEval-2020 Task 1 competition (Schlechtweg et al. 2020).

2. Data

The DIACR-Ita data consists of many randomly ordered text samples that have no relationship to each other. Most of the text samples are complete sentences, but some are sentence fragments.

The ‘early’ corpus, C1, has about 2.4 million text samples and 52 million tokens; the ‘later’ corpus, C2, has about 7.8 million text samples and 738 million tokens. Each token is given in the corpora with its part-of-speech tag and lemma. The target word list consists of 18 lemmas. The POS tags and lemmas of the corpora were generated with the UDPipe (Straka 2018) model ISDT-UD v2.5, which has an error rate of about 2%.

3. System Description

3.1 Overview

Because language evolves, two corpora from different time periods, even when they cover the same topics, are effectively written in two languages that are quite similar but slightly different: they share the majority of their words, grammar, and syntax. We can observe a similar situation with languages from the same family, such as Italian and Spanish among the Romance languages or Czech and Slovak among the Slavic languages. These pairs of languages share many common words, expressions, and syntactic constructions; for some pairs, native speakers can understand each other and sometimes even actively communicate across the (low) language barrier.

Our system follows the approach of Pražák et al. (2020)[1]. The main idea behind our solution is that we treat the two corpora C1 and C2 as different languages L1 and L2, even though the text in both corpora is written in Italian. We expect these two languages L1 and L2 to be extremely similar in all aspects, including semantics. We train a separate semantic space for each corpus and subsequently map these two spaces into one common cross-lingual space. We use methods for cross-lingual mapping (Brychcín, Taylor, and Svoboda 2019; Artetxe, Labaka, and Agirre 2016, 2017, 2018a, 2018b), and thanks to the large similarity between L1 and L2, the quality of the transformation should be high. We then compute the cosine similarity of the transformed word vectors to classify whether the target words changed their sense.

3.2 Semantic Space Transformation


First, we train two semantic spaces from corpora C1 and C2. We represent the semantic spaces by a matrix $\mathbf{X}^s$ (i.e., a source space $s$) and a matrix $\mathbf{X}^t$ (i.e., a target space $t$)[2], using word2vec Skip-gram with negative sampling (Mikolov et al. 2013). We then perform a cross-lingual mapping of the two vector spaces, obtaining two matrices $\hat{\mathbf{X}}^s$ and $\hat{\mathbf{X}}^t$ projected into a shared space. We select two methods for the cross-lingual mapping: Canonical Correlation Analysis (CCA), using the implementation from Brychcín, Taylor, and Svoboda (2019), and a modification of the Orthogonal Transformation from VecMap (Artetxe, Labaka, and Agirre 2018a). Both methods are linear transformations, which can be written as follows:

$$\hat{\mathbf{X}}^s = \mathbf{W}^{s \to t} \mathbf{X}^s \qquad (1)$$

where $\mathbf{W}^{s \to t}$ is a matrix that performs a linear transformation from the source space $s$ (matrix $\mathbf{X}^s$) into the target space $t$, and $\hat{\mathbf{X}}^s$ is the source space transformed into the target space $t$ (the matrix $\mathbf{X}^t$ does not have to be transformed, because it is already in the target space $t$, so $\mathbf{X}^t = \hat{\mathbf{X}}^t$).

Finally, in all transformation methods, for each word $w_i$ from the set of target words $T$, we select its corresponding vectors $\hat{\mathbf{x}}_i^s$ and $\hat{\mathbf{x}}_i^t$ from the matrices $\hat{\mathbf{X}}^s$ and $\hat{\mathbf{X}}^t$, respectively, and we compute the cosine similarity between these two vectors. The cosine similarity is then used to generate the final classification output using different strategies, see Sections 3.5 and 3.6.
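
As an illustration, the scoring step amounts to a few lines of Python (a minimal sketch, not our actual code; wv_s, wv_t, W_st, and target_words are hypothetical names for the two trained spaces, the learned mapping, and the target list):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# wv_s[w] / wv_t[w]: vectors of word w in the earlier / later space;
# W_st maps source-space row vectors into the target space.
# sim = {w: cosine(wv_s[w] @ W_st, wv_t[w]) for w in target_words}
```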

3.3 Canonical Correlation Analysis

Generally, the CCA transformation transforms both spaces $\mathbf{X}^s$ and $\mathbf{X}^t$ into a third shared space $o$ (where $\hat{\mathbf{X}}^{s \to o}$ and $\hat{\mathbf{X}}^{t \to o}$ denote the two projected spaces). Thus, CCA computes two transformation matrices: $\mathbf{W}^{s \to o}$ for the source space and $\mathbf{W}^{t \to o}$ for the target space. The transformation matrices are computed by minimizing the negative correlation between the projections of the vectors $\mathbf{x}_i^s$ and $\mathbf{x}_i^t$ into the shared space $o$. The negative correlation is defined as follows:

$$-\rho\!\left(\mathbf{W}^{s \to o}\mathbf{x}^s,\, \mathbf{W}^{t \to o}\mathbf{x}^t\right) = -\,\frac{\mathrm{cov}\!\left(\mathbf{W}^{s \to o}\mathbf{x}^s,\, \mathbf{W}^{t \to o}\mathbf{x}^t\right)}{\sqrt{\mathrm{var}\!\left(\mathbf{W}^{s \to o}\mathbf{x}^s\right)\mathrm{var}\!\left(\mathbf{W}^{t \to o}\mathbf{x}^t\right)}} \qquad (2)$$

where cov is the covariance and var is the variance, both computed over the $n$ vector pairs used for the transformation. In our implementation of CCA, the matrix $\hat{\mathbf{X}}^t$ is equal to the matrix $\mathbf{X}^t$, because we transform only the source space $s$ (matrix $\mathbf{X}^s$) into the target space $t$ via the common shared space, using a pseudo-inverse; the target space does not change. The matrix $\mathbf{W}^{s \to t}$ for this transformation is then given by:

$$\mathbf{W}^{s \to t} = \left(\mathbf{W}^{t \to o}\right)^{+} \mathbf{W}^{s \to o} \qquad (3)$$

where $(\cdot)^{+}$ denotes the Moore–Penrose pseudo-inverse.

The submissions that use CCA are referred to as cca-bin and cca-ranking in Table 1. The -bin and -ranking parts refer to the strategy used for the final classification decision, see Sections 3.5 and 3.6.
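
For illustration, the whole CCA mapping can be assembled with an off-the-shelf implementation (a hedged sketch using scikit-learn rather than the implementation of Brychcín, Taylor, and Svoboda (2019) that we actually used, so details such as internal centering and scaling may differ):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_mapping(Xs, Xt, n_components):
    """Xs, Xt: (n_words, dim) seed matrices; the i-th rows hold the
    vectors of the same dictionary word in the two spaces."""
    cca = CCA(n_components=n_components).fit(Xs, Xt)
    W_so = cca.x_rotations_  # projects the source space into the shared space o
    W_to = cca.y_rotations_  # projects the target space into the shared space o
    # Eq. (3): source -> shared -> target via a pseudo-inverse; the
    # target space itself stays unchanged (row-vector convention).
    return W_so @ np.linalg.pinv(W_to)
```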

3.4 Orthogonal Transformation

In the case of the Orthogonal Transformation, the submission is referred to as ort-bin. We use the Orthogonal Transformation with a supervised seed dictionary consisting of all words common to both semantic spaces. The transformation matrix is given by:

$$\mathbf{W}^{s \to t} = \underset{\mathbf{W}}{\arg\min} \sum_{i=1}^{|V|} \left\lVert \mathbf{W}\mathbf{x}_i^s - \mathbf{x}_i^t \right\rVert^2 \qquad (4)$$

under the hard condition that $\mathbf{W}^{s \to t}$ needs to be orthogonal, where $V$ is the vocabulary of correct word translations from the source space $\mathbf{X}^s$ to the target space $\mathbf{X}^t$, with $\mathbf{x}_i^s \in \mathbf{X}^s$ and $\mathbf{x}_i^t \in \mathbf{X}^t$. The reason for the orthogonality constraint is that a linear transformation with an orthogonal matrix does not squeeze or re-scale the transformed space; it only rotates it, and thus preserves most of the relationships between its elements. In our case, it is important that the orthogonal transformation preserves angles between words, and therefore the cosine similarity.
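
With the orthogonality constraint, Eq. (4) is the classical orthogonal Procrustes problem and has a closed-form solution via SVD. A minimal sketch follows (row-major seed matrices; this illustrates the underlying computation rather than reproducing the VecMap code):

```python
import numpy as np

def orthogonal_mapping(Xs, Xt):
    """Orthogonal W minimizing ||Xs @ W - Xt||_F, where the rows of Xs
    and Xt are the vectors of the same seed-dictionary word."""
    U, _, Vt = np.linalg.svd(Xs.T @ Xt)
    return U @ Vt  # orthogonal: rotates the space, never re-scales it
```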

3.5 Binary Strategy

We use different strategies for producing the binary classification output, but they all have in common that they start from continuous scores. The continuous score for each target word is computed as the cosine similarity between its two vectors from the earlier and the later corpus.

In the case of the binary strategy, we assume a threshold t: target words with a continuous score lower than t changed their meaning, and words with a score greater than t did not. We know that this assumption is generally wrong (by using a single threshold, we introduce some error into the classification), but we believe it holds for most cases and is the best available choice.


To estimate the threshold t, we use an approach called binary-threshold (cca-bin and ort-bin in Table 1). For each target word $w_i$ we compute the cosine similarity of its vectors $\hat{\mathbf{x}}_i^s$ and $\hat{\mathbf{x}}_i^t$, and we then average these similarities over all target words. The resulting averaged[3] value is used as the threshold.
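
A compact sketch of this decision rule (illustrative only; sim is a dictionary of per-target cosine similarities such as the one built in Section 3.2):

```python
import numpy as np

def binary_threshold(sim):
    """sim: target lemma -> cosine similarity between the two periods.
    Returns 1 (changed) for words below the average similarity."""
    t = float(np.mean(list(sim.values())))
    return {w: int(s < t) for w, s in sim.items()}
```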

3.6 Ranking Strategy

The ranking strategy is our second approach for generating the classification output (the submission cca-ranking in Table 1). It uses the mean rank over repeated runs of each embedding pair. In each run, the target words are scored with the cosine distance. The distances for each embedding pair are then sorted, and a rank-order is assigned to each target word. The rank-orders are averaged to obtain a mean rank (and a standard deviation) for each target word for each pair. Finally, the ranks for all embedding pairs are averaged. The composite rank is used, along with an estimate of the associated cosine distance and its corresponding angle, to divide the target list into changed and unchanged sets. This does not work well: there are competing gaps in the rank and distance estimates.

We use the number of embedding pairs, not the total number of runs, to compute the standard error of the mean (i.e., the standard deviation divided by the square root of the number of samples).
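
The rank bookkeeping can be sketched as follows (hypothetical shapes and names; the real pipeline first aggregates within each embedding pair):

```python
import numpy as np

def mean_rank(distances, n_embedding_pairs):
    """distances: (n_runs, n_targets) cosine distances of the target
    words, pooled over all runs of all embedding pairs."""
    # rank 0 = smallest distance (most stable word) within each run
    ranks = np.argsort(np.argsort(distances, axis=1), axis=1)
    # per the text above, the standard error of the mean divides by the
    # number of embedding pairs rather than by the total number of runs
    sem = ranks.std(axis=0) / np.sqrt(n_embedding_pairs)
    return ranks.mean(axis=0), sem
```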

4. Experimental Setup

To obtain the semantic spaces, we employ Skip-gram with negative sampling (Mikolov et al. 2013). For the final submissions, we trained the semantic spaces with 100 dimensions (the ort-bin submission) and 150 dimensions (the cca-bin submission) for five iterations, with five negative samples and a window size of five. Each word has to appear at least five times in the corpus to be included in the training. We trained the semantic spaces on the lemmatized corpora. The dimensionalities 100 and 150 were selected based on our previous experience with these methods (Pražák et al. 2020). Since we could make four different submissions, we did not use the same dimensionality for both methods.
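
This setup maps directly onto gensim's word2vec implementation (a minimal sketch assuming the gensim 4.x API; our actual training scripts may differ in detail):

```python
from gensim.models import Word2Vec

def train_space(lemmatized_sentences, dim):
    """Skip-gram with negative sampling, hyper-parameters as described
    in the text (dim is 100 for ort-bin and 150 for cca-bin)."""
    model = Word2Vec(
        sentences=lemmatized_sentences,
        vector_size=dim,
        sg=1,          # Skip-gram
        negative=5,    # five negative samples
        window=5,
        min_count=5,   # a word must occur at least five times
        epochs=5,      # five iterations
    )
    return model.wv
```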

The cca-ranking submission uses the same settings with dimensions 100–105, 110–115, and so on up to 210–215, resulting in 72 different dimension sizes. It combines 40 runs for each of the 72 embedding pairs, a total of 2,880 runs.

For the cca-bin submission, we build the translation dictionary for the transformation of the two spaces by taking the intersection of their vocabularies and removing the target words (see the sketch below). In the case of the cca-ranking submission, the dictionary in each run consists of up to 5,000 randomly chosen words common to both semantic spaces.
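
The cca-bin dictionary construction is essentially a set operation (illustrative sketch; vocab_s and vocab_t stand for the two vocabularies):

```python
def seed_dictionary(vocab_s, vocab_t, target_words):
    """All words shared by both vocabularies, minus the evaluated targets."""
    return sorted((set(vocab_s) & set(vocab_t)) - set(target_words))
```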

The random submission represents an output that was generated completely at random.

4.1 Corpus variants

The organizers provided the corpora already tokenized, in four different versions: original tokens; lemmatized tokens; original tokens with POS tags; and lemmatized tokens with POS tags. We experimented with each of these variants, although in the end we used only results based on lemmas.

Figure 1 shows the mean standard deviation of the rank of the target words over forty runs for each of the 72 different embedding sizes. The most consistent variant is the lemma-only one.

Figure 1: Standard deviation (of rank) versus embedding size for four versions of the corpora


5. Results


We made four different submissions. The accuracy results for each submission are shown in Table 1. The ort-bin system achieved the best accuracy of 0.944 and ranked first[4] among eight other teams in the shared task, classifying 17 out of 18 target words correctly. The cca-bin system achieved an accuracy of 0.889 (16 correct classifications out of 18). After the release of the gold labels, we performed an additional experiment with the cca-bin system, which also achieved an accuracy of 0.944 when the same word embeddings (with embedding dimension 100 instead of 150) were used as for the ort-bin system. We also found an optimal threshold for both systems that makes them classify all the words correctly[5].

We believe that the key factor in the success of our system is the sufficient size of the provided corpora. Thanks to it, we were able to train semantic spaces of good quality and thus achieve good results.

Table 1: Results for our final submissions

System        Accuracy
cca-bin       .889
ort-bin       .944
cca-ranking   .778
random        .500

6. Conclusion

Our systems, based on Canonical Correlation Analysis and Orthogonal Transformation, achieved the best accuracy of 0.944 in the shared task and ranked first among eight other teams. We showed that our approach is a suitable solution for the Lexical Semantic Change detection task: applying a threshold to a semantic distance is a sensible architecture for detecting binary semantic change in target words between two corpora, and our binary-threshold strategy succeeded quite well.

This task provided plenty of text for building good word embeddings. Corpora with much smaller amounts of data might have increased the random variation between the earlier and later embeddings, which would have caused problems for our method. A flaw in our technique is that the semantic vectors are based on all senses of a word in the corpus; we do not yet have tools to tease out what kinds of changes are implied by a particular semantic distance between vectors. We considered using the part-of-speech data in the corpora, since different parts of speech for the same lemma likely correspond to different senses. However, placing the POS tag in the token, like using inflected forms instead of lemmas, results in many more, less well-trained semantic vectors, as suggested by Figure 1.

Bibliography

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. “Learning Principled Bilingual Mappings of Word Embeddings While Preserving Monolingual Invariance.” In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 2289–94. Austin, Texas: Association for Computational Linguistics. https://aclweb.org/anthology/D16-1250.

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. “Learning Bilingual Word Embeddings with (Almost) No Bilingual Data.” In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 451–62. Vancouver, Canada: Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1042.

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. “A Robust Self-Learning Method for Fully Unsupervised Cross-Lingual Mappings of Word Embeddings.” In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 789–98. Melbourne, Australia: Association for Computational Linguistics. https://doi.org/10.18653/v1/P18-1073.

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. “Generalizing and Improving Bilingual Word Embedding Mappings with a Multi-Step Framework of Linear Transformations.” In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), 5012–19. https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16935/16781.

David Bamman, and Gregory Crane. 2011. “Measuring Historical Word Sense Variation.” In Proceedings of the 11th Annual International ACM/IEEE Joint Conference on Digital Libraries, 1–10. JCDL ’11. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/1998076.1998078.

Pierpaolo Basile, Annalina Caputo, Tommaso Caselli, Pierluigi Cassotti, and Rossella Varvara. 2020. “DIACR-Ita @ EVALITA2020: Overview of the EVALITA2020 Diachronic Lexical Semantics (DIACR-Ita) Task.” In Proceedings of the 7th evaluation campaign of Natural Language Processing and Speech tools for Italian (EVALITA 2020), edited by Valerio Basile, Danilo Croce, Maria Di Maro, and Lucia C. Passaro. Online: CEUR.org.

Valerio Basile, Danilo Croce, Maria Di Maro, and Lucia C. Passaro. 2020. “EVALITA 2020: Overview of the 7th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian.” In Proceedings of Seventh Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (Evalita 2020), edited by Valerio Basile, Danilo Croce, Maria Di Maro, and Lucia C. Passaro. Online: CEUR.org.

Tomáš Brychcín, Stephen Taylor, and Lukáš Svoboda. 2019. “Cross-Lingual Word Analogies Using Linear Transformations Between Semantic Spaces.” Expert Systems with Applications 135: 287–95.

Paul Cook, Jey Han Lau, Diana McCarthy, and Timothy Baldwin. 2014. “Novel Word-Sense Identification.” In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, 1624–35. Dublin, Ireland: Dublin City University; Association for Computational Linguistics. https://www.aclweb.org/anthology/C14-1154.

Steffen Eger, and Alexander Mehler. 2016. “On the Linearity of Semantic Change: Investigating Meaning Variation via Dynamic Graph Models.” In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 52–58. Berlin, Germany: Association for Computational Linguistics. https://doi.org/10.18653/v1/P16-2009.

Lea Frermann, and Mirella Lapata. 2016. “A Bayesian Model of Diachronic Meaning Change.” Transactions of the Association for Computational Linguistics 4: 31–45. https://doi.org/10.1162/tacl_a_00081.

Kristina Gulordava, and Marco Baroni. 2011. “A Distributional Similarity Approach to the Detection of Semantic Change in the Google Books Ngram Corpus.” In Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics, 67–71. Edinburgh, UK: Association for Computational Linguistics. https://www.aclweb.org/anthology/W11-2508.

William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2016a. “Diachronic Word Embeddings Reveal Statistical Laws of Semantic Change.” In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 1489–1501. Berlin, Germany: Association for Computational Linguistics. https://doi.org/10.18653/v1/P16-1141.

William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2016b. “Cultural Shift or Linguistic Drift? Comparing Two Computational Measures of Semantic Change.” In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 2116–21. Austin, Texas: Association for Computational Linguistics. https://doi.org/10.18653/v1/D16-1229.

Andrey Kutuzov, Lilja Øvrelid, Terrence Szymanski, and Erik Velldal. 2018. “Diachronic Word Embeddings and Semantic Shifts: A Survey.” In Proceedings of the 27th International Conference on Computational Linguistics, 1384–97. Santa Fe, New Mexico, USA: Association for Computational Linguistics.

Rada Mihalcea, and Vivi Nastase. 2012. “Word Epoch Disambiguation: Finding How Words Change over Time.” In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 259–63. Jeju Island, Korea: Association for Computational Linguistics. https://www.aclweb.org/anthology/P12-2051.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. “Efficient Estimation of Word Representations in Vector Space.” In Proceedings of the ICLR Workshop. arXiv:1301.3781. https://arxiv.org/pdf/1301.3781.pdf.

Sunny Mitra, Ritwik Mitra, Suman Kalyan Maity, Martin Riedl, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. 2015. “An Automatic Approach to Identify Word Sense Changes in Text Media Across Timescales.” Natural Language Engineering 21 (5): 773–98.

Ondřej Pražák, Pavel Přibáň, Stephen Taylor, and Jakub Sido. 2020. “UWB at Semeval-2020 Task 1: Lexical Semantic Change Detection.” In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020). Barcelona, Spain: Association for Computational Linguistics.

Alex Rosenfeld, and Katrin Erk. 2018. “Deep Neural Models of Semantic Shift.” In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), 474–84. New Orleans, Louisiana: Association for Computational Linguistics. https://doi.org/10.18653/v1/N18-1044.

Dominik Schlechtweg, Stefanie Eckmann, Enrico Santus, Sabine Schulte im Walde, and Daniel Hole. 2017. “German in Flux: Detecting Metaphoric Change via Word Entropy.” In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), 354–67. Vancouver, Canada: Association for Computational Linguistics. https://doi.org/10.18653/v1/K17-1036.

Dominik Schlechtweg, Anna Hätty, Marco Del Tredici, and Sabine Schulte im Walde. 2019. “A Wind of Change: Detecting and Evaluating Lexical Semantic Change Across Times and Domains.” In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 732–46. Florence, Italy: Association for Computational Linguistics. https://doi.org/10.18653/v1/P19-1072.

Dominik Schlechtweg, Barbara McGillivray, Simon Hengchen, Haim Dubossarsky, and Nina Tahmasebi. 2020. “SemEval 2020 Task 1: Unsupervised Lexical Semantic Change Detection.” In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020). Barcelona, Spain: Association for Computational Linguistics.

Dominik Schlechtweg, and Sabine Schulte im Walde. 2020. “Simulating Lexical Semantic Change from Sense-Annotated Data.” In The Evolution of Language: Proceedings of the 13th International Conference (Evolang13), edited by A. Ravignani, C. Barbieri, M. Martins, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, K. Mudd, and T. Verhoef. https://doi.org/10.17617/2.3190925.

Dominik Schlechtweg, Sabine Schulte im Walde, and Stefanie Eckmann. 2018. “Diachronic Usage Relatedness (DURel): A Framework for the Annotation of Lexical Semantic Change.” In Proceedings of NAACL-HLT 2018, 169–74.

Milan Straka. 2018. “UDPipe 2.0 Prototype at CoNLL 2018 UD Shared Task.” In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, 197–207. Brussels, Belgium: Association for Computational Linguistics. https://doi.org/10.18653/v1/K18-2020.

Nina Tahmasebi, and Thomas Risse. 2017. “Finding Individual Word Sense Changes and Their Delay in Appearance.” In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, 741–49. Varna, Bulgaria: INCOMA Ltd. https://doi.org/10.26615/978-954-452-049-6_095.

Notes

[1] The source code is available at https://github.com/pauli31/SemEval2020-task1

[2] The source space $\mathbf{X}^s$ is created from the corpus C1 and the target space $\mathbf{X}^t$ is created from the corpus C2.

[3] The ort-bin submission instead sets the threshold to lie in the largest gap between the similarity values.

[4] We share the first place with another team that achieved the same accuracy.

[5] That is, 100% accuracy was possible with the continuous scores of both methods if we only had an oracle to set the threshold.

Authors

NTIS – New Technologies for the Information Society - Department of Computer Science and Engineering, Faculty of Applied Sciences, University of West Bohemia, Czech Republic http://nlp.kiv.zcu.cz – ondfa@kiv.zcu.cz

NTIS – New Technologies for the Information Society - Department of Computer Science and Engineering, Faculty of Applied Sciences, University of West Bohemia, Czech Republic http://nlp.kiv.zcu.cz – pribanp@kiv.zcu.cz
