
EVALITA Evaluation of NLP and Speech Tools for Italian

Tommaso Caselli, Nicole Novielli, Viviana Patti, et al. (eds.)

Part II. Participant reports

ItVENSES - A Symbolic System for Aspect-Based Sentiment Analysis

Rodolfo Delmonte

Abstract

ItVENSES is a system for syntactic-semantic analysis that relies on the Italian parser ItGetaruns to analyze each sentence. ItVenses receives the output of ItGetaruns and decides which terms can be used as features, or seeds, to identify the aspect. This step is carried out at first through a simple lookup operation in a list previously created on the basis of a quantitative analysis of the training corpus. The result is then sifted by activating a set of filters that act on syntactic constituency, on the lemmatized and classified word list, and on the predicate-argument structures of the sentence. After this step, the aspect associated with each sentence is enriched with the polarity and sentiment components computed on the output of ItGetaruns. Finally, negation, factuality and subjectivity are considered in relation to each aspect. The results were at first rather low, around 61% F1; but after a series of ablation experiments in which two components of the algorithm were scaled back, the evaluation suddenly jumped to 83% F1, a value similar to the one obtained on the training corpus.

Full text

1 Introduction

In this paper we present work carried out to analyze aspect and polarity in a corpus of Italian tweets collected and annotated at the University of Turin (Basile et al. (2018)). The final system is fully symbolic, is made up of different modules, and takes advantage of previous work for similar challenges presented at CLiC-it 2014 (Delmonte (2014b)). In particular, the underlying parser for the semantic analysis of each text provides a full processing pipeline including tokenization, multiword creation, morpho-syntactic analysis, POS tagging, Named Entity Detection, chunking and, finally, extraction of dependency relations such as subject, object and modifiers. It also provides pronominal binding and coreference resolution, as well as propositional-level semantics related to negation, factuality and subjectivity. In the sections below we present in detail the method used in the main modules, the general features of the dataset, the problems with some of its inconsistencies, and the results.

2 The System and the Modules

One important step in the creation of the ItVenses system has been adaptation and contextualization, which acted on ItGetaruns, the semantic parser, at almost all levels of analysis. ItGetaruns receives as input a string, the sentence to be analyzed, which is then tokenized into a list. The list is fully tagged, disambiguated and chunked. Chunks are then put together into a full sentence structure which is passed to the island-based predicate-argument structure (hence PAS) analyzer.
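Since the internal formats of ItGetaruns are not published here, the following minimal Python sketch only illustrates the order of operations just described; every name in it (Token, tokenize, tag, chunk, build_pas) and the toy lexicon are our own hypothetical stand-ins, not the system's real API.

```python
# Hypothetical sketch of the processing order described above; none of these
# names belong to ItGetaruns' real API, whose internals are not published here.
from dataclasses import dataclass

@dataclass
class Token:
    form: str
    lemma: str = ""
    tag: str = ""

def tokenize(sentence: str) -> list[Token]:
    # The real tokenizer also merges multiwords at this stage (see Section 2).
    return [Token(w.strip(".,;")) for w in sentence.split()]

def tag(tokens: list[Token]) -> list[Token]:
    # Stand-in for morphological analysis and disambiguation (toy lexicon).
    lexicon = {"la": "det", "pulizia": "noun", "lascia": "verb",
               "a": "prep", "desiderare": "verb"}
    for t in tokens:
        t.lemma = t.form.lower()
        t.tag = lexicon.get(t.lemma, "noun")
    return tokens

def chunk(tokens: list[Token]) -> list[tuple[str, list[Token]]]:
    # Toy chunker: contiguous verb tokens form a VP, anything else an NP.
    chunks: list[tuple[str, list[Token]]] = []
    for t in tokens:
        kind = "VP" if t.tag == "verb" else "NP"
        if chunks and chunks[-1][0] == kind:
            chunks[-1][1].append(t)
        else:
            chunks.append((kind, [t]))
    return chunks

def build_pas(chunks: list[tuple[str, list[Token]]]) -> dict:
    # Island-based PAS, radically simplified: the first NP is taken as the
    # Subject, the first VP as the predicate. The real analyzer recovers up
    # to 4 arguments/adjuncts (see Section 2 below).
    subj = next((c for k, c in chunks if k == "NP"), [])
    pred = next((c for k, c in chunks if k == "VP"), [])
    return {"predicate": [t.lemma for t in pred],
            "subject": [t.lemma for t in subj]}

print(build_pas(chunk(tag(tokenize("La pulizia lascia a desiderare.")))))
# {'predicate': ['lascia'], 'subject': ['la', 'pulizia']}
```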

One important part of the adaptation of ItGetaruns consisted in meeting the requirements imposed by the domain at the lexical, tagging, syntactic and semantic levels. Reviews of holiday resorts, hotels and tourist places have a specialized vocabulary which requires certain choices to be imposed on the components of the parser right from the start. In particular, we imposed a specific tag, here Noun, on a set of otherwise lexically ambiguous words, as in the following set of examples:

(1) torta, tavolo, fermata, pianta, insegna


where each word could be tagged both as a Noun and as a Past Participle, or simply as a Verb.[1] A certain number of multiwords have been created, again in order to reduce the ambiguity of a set of words. In ItGetaruns, the creation of multiwords is carried out during tokenization, thus before tagging takes place. Here are some examples:

(2) deposito bagagli, camera da letto, presa di corrente, ricevuta fiscale, sala colazione, centro storico
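To make the effect of multiword creation concrete, here is a minimal sketch assuming a longest-match-first policy over the list in (2); the merging function and the underscore convention are illustrative assumptions, not the system's actual code.

```python
# Sketch of multiword merging at tokenization time, i.e. before tagging;
# the entries come from example (2), the merging policy is our assumption.
MULTIWORDS = {("deposito", "bagagli"), ("camera", "da", "letto"),
              ("presa", "di", "corrente"), ("ricevuta", "fiscale"),
              ("sala", "colazione"), ("centro", "storico")}

def merge_multiwords(tokens: list[str]) -> list[str]:
    out, i = [], 0
    while i < len(tokens):
        for n in (3, 2):  # longest match first
            cand = tuple(t.lower() for t in tokens[i:i + n])
            if len(cand) == n and cand in MULTIWORDS:
                out.append("_".join(cand))  # one token: no noun/verb ambiguity left
                i += n
                break
        else:
            out.append(tokens[i])
            i += 1
    return out

print(merge_multiwords("vicino al deposito bagagli e al centro storico".split()))
# ['vicino', 'al', 'deposito_bagagli', 'e', 'al', 'centro_storico']
```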

In each of the examples in (2), the first component could again be analyzed as a noun but also as a verb or a past participle. Finally, since a great number of texts are simple fragments, made up of a list of nouns and adjectives with no verb, we introduced a dummy verb ESSERE and marked the first noun phrase as Subject, in order to be able to compute propositional-level semantics (see the sketch after example (3)). At the semantic and pragmatic level, specific words acquire a meaning determined by the context: consider the adjective "piccolo", which is only used to mark negative polarity when predicative, and as a plain modifier when attributive, together with a number of downtoners like "poco", as in the example below:

(3) 1240348699;1;1;0;1;0;1;0;0;0;0;0;0;0;0;0; 0;0;0;0;0;0;0;0;0;"Stanza piccola ma pulita."
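Example (3) is itself a verbless fragment, so it also illustrates the dummy-verb strategy described above. The following sketch is a toy rendering of that repair; the flat (form, tag) representation and the insertion point after the first noun are our simplifications, since the real system operates on NP spans.

```python
# Sketch of the dummy-verb strategy for verbless fragments such as (3):
# insert ESSERE after the first nominal head and take that NP as Subject,
# so that propositional-level semantics can still be computed.
def ensure_predicate(tagged: list[tuple[str, str]]) -> list[tuple[str, str]]:
    if any(tag == "verb" for _, tag in tagged):
        return tagged
    # Insert after the first noun, which is then treated as the Subject.
    for i, (_, tag) in enumerate(tagged):
        if tag == "noun":
            return tagged[:i + 1] + [("ESSERE", "verb")] + tagged[i + 1:]
    return tagged

frag = [("Stanza", "noun"), ("piccola", "adj"), ("ma", "conj"), ("pulita", "adj")]
print(ensure_predicate(frag))
# [('Stanza', 'noun'), ('ESSERE', 'verb'), ('piccola', 'adj'), ...]
```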

The problem in example (3) is represented by the implicit presence of "stanza" in the elliptical portion of the text preceded by the adversative marker "ma", which allows exclusive reference. Consider also example (4):

(4) 1240350017;0;0;0;0;0;0;1;0;1;1;1;0;0;0;0; 0;0;0;0;0;0;0;0;0;"Qualche difficoltà col parcheggio nonostante la disponibilità del personale"

where the aspect feature terms are not the most relevant items, syntactically and semantically speaking, but are included as modifiers in a noun phrase ("del personale") or are treated as adjunct prepositional phrases ("col parcheggio"). An inclusive semantic interpretation is associated with examples like (5) below:

(5) 1240347398;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0; 0;0;0;1;1;1;0;0;0;"La posizione della struttura è un po’ fuori dal centro, ma in compenso è vicina al capolinea degli autobus"


There are also idiomatic expressions which are taken into consideration and computed from PAS, as for instance "lasciare a desiderare", meaning "being insufficient" rather than its literal "leave to desire".[2]

(6) 1240347831;1;0;1;0;0;0;0;0;0;0;0;0;0;0;0; 0;0;0;0;0;0;0;0;0;"La pulizia lascia a desiderare per un hotel da 4 stelle."

ItVenses, the algorithm for aspect classification and sentiment analysis (we took part in SENTIPOLC; see Basile et al. (2014)), takes as input the previous analysis and input files, including a list of all entities fully semantically classified and a list of all PAS with up to 4 arguments/adjuncts. Every PAS is preceded by one of three possible labels: NEG for negation (including negative sentiment of the lexical predicate), OPNSG for subjective propositions, and UNREAL for non-factual ones; in addition, it carries a speech-act label: STATEMENT, EXCLAMATIVE or QUESTION.
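A possible container for this information, with field names of our own invention, might look as follows; only the label inventory (NEG, OPNSG, UNREAL, and the three speech acts) is taken from the description above.

```python
# Hypothetical PAS record; only the label inventory comes from the text.
from dataclasses import dataclass, field

@dataclass
class PAS:
    predicate: str
    arguments: list[str] = field(default_factory=list)  # up to 4 args/adjuncts
    marker: str | None = None        # "NEG" | "OPNSG" | "UNREAL" | None (factual)
    speech_act: str = "STATEMENT"    # "STATEMENT" | "EXCLAMATIVE" | "QUESTION"

# Example (6): "La pulizia lascia a desiderare per un hotel da 4 stelle."
pas = PAS(predicate="lasciare_a_desiderare",
          arguments=["pulizia", "hotel da 4 stelle"],
          marker="NEG")  # negative sentiment of the lexical predicate
```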

3 Sifting Aspect and Polarity with Semantic Sieves


Seeds, or features, present in each text are searched for at both word and lemma level. This is done at first by a simple look-up operation which matches each token, and the corresponding lemma, in the text against the list of possible seeds created by a quantitative analysis of the training corpus. This list only includes the most frequent terms found and collected, amounting to 300. To reinforce the match, synonyms are added, when present, by matching against the synonym lexicon available in ItGetaruns. This was done in view of the need to enlarge the set of features available for the test corpus, as well as to generalize the procedure. No such list exists for polarity items, which are searched for and matched against the lexica available in ItGetaruns.[3]
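A minimal sketch of this look-up-plus-synonyms step follows. The aspect numbers mirror those used in the paper (pulizia = 1, camera/stanza = 2, colazione = 3, prezzo = 5, posizione = 7), while the tiny dictionaries are invented samples, not the 300-entry seed list or ItGetaruns' actual synonym lexicon.

```python
# Sketch of the look-up step: match lemmas against the seed list derived
# from the training corpus, widening the match through a synonym lexicon.
SEED2ASPECT = {"pulizia": 1, "camera": 2, "stanza": 2, "colazione": 3,
               "prezzo": 5, "posizione": 7}
SYNONYMS = {"pulito": "pulizia", "ubicazione": "posizione"}  # toy samples

def find_seeds(lemmas: list[str]) -> dict[str, int]:
    hits = {}
    for lemma in lemmas:
        lemma = SYNONYMS.get(lemma, lemma)  # synonym-based widening
        if lemma in SEED2ASPECT:
            hits[lemma] = SEED2ASPECT[lemma]
    return hits

print(find_seeds(["stanza", "piccolo", "ma", "pulito"]))
# {'stanza': 2, 'pulizia': 1}  -- both aspects fire; the sieves decide later
```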

All these operations are subject to local filtering actions. If we consider example (6) above, we see that there are three possible seeds, "pulizia", "hotel" and "4 stelle", but the focus of attention is on "pulizia", which is also the Subject of the main predicate. So one filter is determined by grammatical functions (for a similar approach see Brun et al. (2016)), extracted from the constituency and dependency analysis and made available one by one with the corresponding head. "Pulizia" and related lexemes are then regarded as more relevant than the simple seeds "camera" or "stanza", and the choice is to delete aspect 2 in favour of aspect 1 when this is verified. Also consider positive words like MIGLIORARE, which are usually associated with negative evaluation. Aspect and sentiment polarities (negative and positive) are then checked together, one by one, in order to verify whether polarity has to be attenuated, shifted or inverted (see Polanyi and Zaenen (2006)) as a result of the presence of intensifiers, maximizers, minimizers, diminishers, or simply negation at a higher-than-constituent level (see Ohana et al. (2016)). Consider now the presence of focalizers like "solo, soltanto", which are mostly used to focus on the insufficient presence of a given aspect-related feature, as for instance in this case:

(7) 1240344222;0;0;0;0;0;0;1;0;1;0;0;0;0;0;0; 0;0;0;0;0;0;0;0;0;"In bagno 1 solo schampo e 1 solo bagno doccia per 2 persone."
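The attenuation/inversion step just described could be sketched as follows, under the strong simplification of multiplicative weights over a flat word window; the shifter classes are those named in the text, but the word-to-weight table is a toy assumption.

```python
# Toy contextual valence shifting (cf. Polanyi and Zaenen, 2006): intensifiers
# strengthen, diminishers and focalizers weaken, negation inverts. The weights
# are invented; the real system works over constituents, not a flat window.
SHIFTERS = {"molto": 1.5,      # intensifier
            "poco": 0.5,       # downtoner/diminisher
            "solo": 0.5,       # focalizer signalling insufficiency, as in (7)
            "soltanto": 0.5,
            "non": -1.0}       # negation at higher-than-constituent level

def shift_polarity(base: float, context: list[str]) -> float:
    for w in context:
        base *= SHIFTERS.get(w, 1.0)
    return base

print(shift_polarity(1.0, ["solo"]))   # 0.5  (attenuated)
print(shift_polarity(-1.0, ["non"]))   # 1.0  (inverted)
```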

Other important components are privative markers like "senza, eccetto", but also specific words indicating lack: "mancanza, assenza, privo di, in cerca di". Specific markers are related to Aspect 7, location, and signal negative evaluation: "lontano da", "fuori da", usually referring to the city center. Finally, all aspects plus polarities are collapsed into a single list for each sentence and passed to a final SIEVE that acts on more than one aspect at a time, in order to establish preferences between pairs of aspects and erase redundantly assigned ones (see the sketch after example (9)). Additional work on preferences is discussed in the section below. Here it is important to note that in some cases the feature identifying the corresponding aspect may be implicit, i.e. not linguistically expressed. Consider for instance the following example:

(8) 1240347807;1;1;0;1;1;0;0;0;0;0;0;0;0;0;0; 0;0;0;0;0;0;0;0;0;"Pulita, spaziosa e soprattutto funzionale."


where the word "camera" is missing, or the following example, where "gradini" implies "scala":[4]

(9) 1240345322;0;0;0;0;0;0;1;0;1;0;0;0;0;0;0; 0;0;0;0;0;0;1;0;1;"Molti gradini 93 Ascensore piccolo."
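As anticipated above, the final SIEVE can be pictured as a small set of pairwise preference rules. The two rules below merely illustrate the mechanism (e.g. preferring cleanliness, aspect 1, over the generic room seed, aspect 2, as in the discussion of example (6)); they are not the system's actual rule set.

```python
# Sketch of the final SIEVE: given all aspects proposed for one sentence,
# pairwise preference rules keep the dominant aspect and drop the redundant
# one. The rule table is a toy instance of the idea.
PREFER = {(1, 2): 1, (7, 8): 7}  # (aspect_a, aspect_b) -> kept aspect

def sieve(aspects: set[int]) -> set[int]:
    kept = set(aspects)
    for (a, b), winner in PREFER.items():
        if a in kept and b in kept:
            kept.discard(a if winner == b else b)
    return kept

print(sieve({1, 2}))  # {1}: "pulizia" outranks the generic room seed
```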

Another interesting set of examples is constituted by those reviews declaring their total appraisal, or sometimes their total lack of appraisal, of the place:

(10) 1240346791;1;1;0;1;1;0;1;1;0;1;1;0;1;1;0; 1;1;0;1;1;0;1;1;0;"niente da reclamare; tutto perfetto"

(11) 1240344015;0;0;0;1;1;0;1;1;0;1;1;0;1;1;0; 0;0;0;0;0;0;0;0;0;"tutto molto bello e professionale"

These examples are redundantly marked with positive evaluation associated with all, or almost all, aspects. However, example (12) below has no such marking, and is contrasted with example (13), which has almost the same text:

(12) 1240345792;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0; 0;0;0;0;0;0;1;1;0;"Mi è piaciuto praticamente tutto."

(13) 1240344497;1;1;0;1;1;0;1;1;0;1;1;0;1;1;0; 1;1;0;1;1;0;1;1;0;"Mi è piaciuto tutto"

Finally, ItVenses takes into account negation and non-factuality, the latter usually marked by unreal mood. This information, available at the propositional level, is used to modify previously assigned polarity from negative to positive, on the basis of the PAS and their semantics. Consider the example below, where double and triple negations are used to produce a positive effect:

(14) 1240345153;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0; 0;0;0;0;0;0;1;1;0;"Non c’è niente che non mi è piaciuto."
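A sketch of the parity rule this implies, under our assumption that Italian negative concord (e.g. "non ... niente" within one proposition) collapses into a single NEG label on the corresponding PAS:

```python
# Parity-based negation resolution over PAS-level NEG labels. In (14) both
# propositions ("non c'e' niente", "non mi e' piaciuto") carry one NEG each;
# two flips over the positive predicate "piacere" yield a positive polarity.
# Collapsing negative concord to one NEG per proposition is our assumption.
def resolve_negation(neg_labels: list[bool], base_polarity: int) -> int:
    flips = sum(neg_labels)
    return base_polarity if flips % 2 == 0 else -base_polarity

print(resolve_negation([True, True], base_polarity=+1))  # +1: positive effect
```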

Non-factuality and subjectivity are used to mark negative polarity, for the simple reason that unreal mood is usually associated with criticism about some missing service or comfort, as for instance in the following example:

(15) 1240344698;0;0;0;0;0;0;1;0;1;0;0;0;0;0;0; 0;0;0;0;0;0;0;0;0;"Tuttavia sarebbe comodo un servizio navetta per il centro città soprattutto la sera."

4 Defining Preferences from Persistence

Telling seeds or features apart from ordinary terms is the difficult part of aspect identification, and it is done here on the basis of preferences. Preferences have been partly determined by the quantitative analysis of the training corpus. After tagging and lemmatizing each word or term contained in each review text, we associated the numerical value(s) of the aspect(s) present in the annotation of the current text with each nominal expression. The idea was to come up with a list of frequency values for each term that allowed us to choose a "majority class" aspect to associate with each seed. We did not expect a completely uniform distribution, i.e. that every seed would have one and only one majority class with the rest of the frequency distribution flat. There are three possible cases to consider:

a. the text contains only one term, which is annotated accordingly;
b. the text contains more than one term, but only one is annotated;
c. the text contains more than one term and all of them are annotated.

In case c. we associate each value with each seed, thus producing a redundant annotation, since we do not know which term was associated with which aspect value. Taking each of the most frequent possible seeds, or majority-class lemmata, associated with each aspect class, we computed indices of their persistence in that particular class, thus measuring their level of dispersion, which corresponds to ambiguity or uncertainty when choosing them. To produce these indices we counted the number of times a seed was associated with a given aspect class in texts with a unique aspect identifier (case a. above) rather than as part of a cluster of seeds for the same text.

Consider first the general results of the quantitative analysis in Table 1. The seeds collected amount to a total of 25,468 occurrences of nominal expressions, which collapse down to 2,678 unique types distributed over the 8 aspect classes with a fairly unequal share: aspects 2 and 3 cover almost half of all occurrences, followed by aspects 4 and 7 and, at much lower frequencies, by aspects 1, 5 and 8, with aspect 6 the lowest. The majority of all nominal expressions, 16,594, are contained in review texts with a unique aspect identifier, and this should make automatic assignment easier; but as can be seen from Table 2 below, this is not the case.

Table 1: Most frequent aspects and aspect clusters in the training corpus

| Aspect    | 2     | 3    | 4    | 7    |
|-----------|-------|------|------|------|
| Frequency | 10623 | 8386 | 5488 | 5927 |


Multiple aspect annotations for the same text amount to 8,974; they mainly consist of Doubles, then Triples, and finally Quadruples and Quintuples. There are a few cases of empty evaluation,[5] and also a few cases in which all slots are filled (see below). Going back now to the question of uniformity of annotation and consistency of aspect/feature association, the organizers computed the usual inter-annotator agreement per class and reported a lower bound of 85% of agreed sentences. However, we note the following: if all unique aspect identifiers in texts were consistently assigned to the same term or seed, there would be no ambiguity and the algorithm would work easily. But even unique identifiers do not show such persistence.[6] The most persistent seeds are, in graded order, in list (1): WIFI, POSIZIONE, PERSONALE, STAZIONE, STAFF, CENTRO, PULIZIA; and, to a lesser degree, in list (2): METRO, PARCHEGGIO, DOCCIA, PREZZO. These seeds have a persistence ratio over 90% (the first list) and over 80% (the second list). In Table 2 we report some of the remaining high-frequency seeds; the percentages in column three register the ratio of the majority class to the total occurrences of the seed, and those in column four the ratio of the majority class to the occurrences in texts with a unique aspect identifier. These seeds have a much lower level of persistence, well below 80%, with "struttura" the lowest. A low level of persistence indicates that, for instance, the seed "struttura" has been associated with a great variety of aspect class markers, of which two are nevertheless paramount and are marked in column two: aspect 8 (Other), the most frequent, and aspect 2 (comfort), the second most frequent. In two cases we find high values in column four, for "rapporto" and "materasso", indicating that these two seeds, when found in unique-aspect texts, have always or almost always persisted in the same annotated aspect marker.
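A worked sketch of how these two ratios can be computed from raw (seed, aspect-set) annotations, under our reading of the column definitions just given; the four toy rows stand in for the real 25,468 occurrences.

```python
# Persistence indices: majority class over all occurrences (% MajCl/Tot.Occs)
# and over single-aspect texts only (% MajCl/Tot.Uniqs). Toy data below.
from collections import Counter

def persistence(rows: list[tuple[str, list[int]]], seed: str):
    all_aspects = Counter()     # case c: credit every aspect in the cluster
    unique_aspects = Counter()  # case a: texts with a single aspect label
    for s, aspects in rows:
        if s != seed:
            continue
        all_aspects.update(aspects)
        if len(aspects) == 1:
            unique_aspects.update(aspects)
    majority, n = all_aspects.most_common(1)[0]
    pct_occs = 100 * n / sum(all_aspects.values())
    pct_uniqs = 100 * unique_aspects[majority] / max(1, sum(unique_aspects.values()))
    return majority, round(pct_occs, 2), round(pct_uniqs, 2)

rows = [("zona", [7]), ("zona", [7, 2]), ("zona", [2]), ("zona", [7])]
print(persistence(rows, "zona"))  # (7, 60.0, 66.67)
```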

Table 2: 19 less persistent majority-class aspect seeds

| Seeds          | Asp1 - Asp2 | % MajCl / Tot.Occs | % MajCl / Tot.Uniqs |
|----------------|-------------|--------------------|---------------------|
| zona           | 7 - 2       | 72.44              | 66.29               |
| vista          | 7 - 3       | 62.09              | 49.46               |
| struttura      | 8 - 2       | 18.57              | 30.74               |
| servizi/o      | 3 - 2       | 56.97              | 38.46               |
| qualità        | 5 - 3       | 52.72              | 45.84               |
| notte          | 2 - 3       | 50.28              | 38.94               |
| hotel          | 7 - 3       | 32.99              | 27.27               |
| arredo/amento  | 2 - 2       | 58.46              | 45.57               |
| albergo        | 7 - 2       | 36.56              | 30.97               |
| 4stelle        | 3 - 2       | 32.77              | 28.57               |
| camera         | 2 - 1       | 69.32              | 59.84               |
| ascensore      | 3 - 2       | 69.00              | 64.18               |
| materasso      | 2 - 3       | 71.01              | 100.00              |
| rapporto       | 5 - 7       | 64.82              | 98.68               |

If we consider aspect 8, or "Other", in the majority of cases we take it to be a failure to annotate the text with the correct aspect class rather than a genuine absence of an aspect seed. As shown in Table 2 above, the number of such associations is fairly high, amounting to 1,586 cases. Seeds marked with 8, in order of frequency, are: "struttura", "camera", "hotel", "stanza", "arredo", "bagno", "4 stelle". All cases of "struttura" marked with class 8 are cases of a unique aspect identifier, followed by "hotel", "arredo" and "4 stelle". The remaining cases are scattered among all seeds.

5 Results and Discussion

The results obtained and delivered in due time are not particularly satisfactory, as shown in Table 4 below. Compared to the results obtained on the training set, we notice a great difference. In order to understand the reason for it, we ran a set of ablation experiments, removing at first syntactic filtering and then the lexical resources, one by one. This was also done to evaluate the contribution of each lexical resource we were using. After the ablation test removing the free synonym search for both aspect feature items and polarity items, we discovered that a completely different result was obtained: by far the best one according to the evaluation algorithm (see Table 5), in line with what we obtained on the training data, which we report below in Table 3.

Table 3: Results for the Training Dataset

| Tasks/Results   | ACD    | ACP    |
|-----------------|--------|--------|
| Macro-Precision | 0.8414 | 0.7705 |
| Macro-Recall    | 0.8493 | 0.7916 |
| Macro-F1        | 0.8453 | 0.7809 |
| Micro-Precision | 0.8074 | 0.7076 |
| Micro-Recall    | 0.8290 | 0.7766 |
| Micro-F1        | 0.8181 | 0.7405 |

We later discovered that this was due to redundancy at the semantic level, caused by polysemous synonyms present in our dictionary and used to enrich the list derived from the training data. In developing the system on training data, we extracted synonym lists in order to adapt and contextualize them to the domain. Consider an important and frequent seed like POSIZIONE: it has a set of five different meanings in ItalWordNet (IWN; see Roventini et al. (2000)), which are reflected in the extension of the synonym lists covering all of them. So what we did was create sublists adapted and limited to our domain. When we turned to the test data, however, we decided to tie the seeds we regarded as semantically unique to the synonym lists without any prior adjustment. The result was a dramatic drop in performance compared to the training data.
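The repair this experience points to can be sketched as filtering each synonym expansion through a domain vocabulary before it is used for matching; the word lists below are invented, and only the fact that POSIZIONE has five IWN senses comes from the text.

```python
# Sketch of domain-restricted synonym expansion: instead of the full
# ItalWordNet fan-out (which mixes in irrelevant senses of a polysemous
# seed), keep only synonyms attested in the hotel-review domain.
FULL_SYNSETS = {"posizione": ["ubicazione", "collocazione", "postura",
                              "opinione", "rango"]}   # 5 IWN senses mixed in
DOMAIN_VOCAB = {"ubicazione", "collocazione"}          # hotel-review senses only

def domain_synonyms(seed: str) -> list[str]:
    return [s for s in FULL_SYNSETS.get(seed, []) if s in DOMAIN_VOCAB]

print(domain_synonyms("posizione"))  # ['ubicazione', 'collocazione']
```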

Table 4: Published Results for the Test Dataset

| Tasks/Results | ACD Run1 | ACP Run1 | ACD Run2 | ACP Run2 |
|---------------|----------|----------|----------|----------|
| Macro-P       | 0.5887   | 0.5277   | 0.5856   | 0.5241   |
| Macro-R       | 0.6089   | 0.5661   | 0.6140   | 0.5699   |
| Macro-F1      | 0.5986   | 0.5463   | 0.5994   | 0.5461   |
| Micro-P       | 0.6232   | 0.5209   | 0.6164   | 0.5144   |
| Micro-R       | 0.6093   | 0.5659   | 0.6134   | 0.5692   |
| Micro-F1      | 0.6162   | 0.5425   | 0.6149   | 0.5404   |

Table 5: Results for the Test Dataset after Ablation Experiments

| Tasks/Results | ACD Run1 | ACP Run1 | ACD Run2 | ACP Run2 |
|---------------|----------|----------|----------|----------|
| Macro-P       | 0.8222   | 0.7590   | 0.8258   | 0.7603   |
| Macro-R       | 0.8458   | 0.7932   | 0.8564   | 0.8009   |
| Macro-F1      | 0.8339   | 0.7757   | 0.8408   | 0.7801   |
| Micro-P       | 0.7975   | 0.7033   | 0.7951   | 0.6986   |
| Micro-R       | 0.8348   | 0.7880   | 0.8430   | 0.7938   |
| Micro-F1      | 0.8157   | 0.7432   | 0.8183   | 0.7431   |

Bibliography

P. Basile, V. Basile, D. Croce, and M. Polignano. 2018. Overview of the EVALITA 2018 Aspect-based Sentiment Analysis (ABSITA) Task. In T. Caselli, N. Novielli, V. Patti, and P. Rosso (eds.), Proceedings of the 6th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian (EVALITA'18). CEUR.org, Turin.

V. Basile, A. Bolioli, M. Nissim, V. Patti, and P. Rosso. 2014. Overview of the Evalita 2014 SENTIment POLarity Classification Task. In Proceedings of EVALITA'14. Edizioni PLUS, Pisa University Press, Pisa.

C. Brun, J. Perez, and C. Raux. 2016. XRCE at SemEval-2016 Task 5: Feedbacked Ensemble Modeling on Syntactico-Semantic Knowledge for Aspect Based Sentiment Analysis. In Proceedings of SemEval-2016, 277–281.

R. Delmonte. 2014a. A Linguistic Rule-Based System for Pragmatic Text Processing. In Proceedings of the Fourth International Workshop EVALITA 2014, 64–69. Edizioni PLUS, Pisa University Press, Pisa.

R. Delmonte. 2014b. A Reevaluation of Dependency Structure Evaluation. In Proceedings of CLiC-it 2014, The First Italian Conference on Computational Linguistics, 151–157. Edizioni PLUS, Pisa University Press, Pisa.

A. Esuli and F. Sebastiani. 2006. SentiWordNet: A Publicly Available Lexical Resource for Opinion Mining. In Proceedings of the 5th Conference on Language Resources and Evaluation (LREC), 417–422.

B. Ohana, B. Tierney, and S.J. Delany. 2016. Sentiment Classification Using Negation as a Proxy for Negative Sentiment. In Proceedings of the 29th FLAIRS Conference, AAAI, 316–321.

L. Polanyi and A. Zaenen. 2006. Contextual Valence Shifters. In J. Wiebe (ed.), Computing Attitude and Affect in Text: Theory and Applications, 1–10. Springer, Dordrecht.

A. Roventini, A. Alonge, N. Calzolari, B. Magnini, and F. Bertagna. 2000. ItalWordNet: A Large Semantic Database for Italian. In Proceedings of LREC 2000, ELRA, 783–790.

Appendix

English Translation of all Italian examples in the paper

(1) cake/twisted, table, stop, plant, teaches/sign

(2) luggage storage, bed room, socket, fiscal receipt, breakfast room, historical center

(3) Small room but clean

(4) Some difficulties with parking notwithstanding staff helpfulness

(5) The position of the structure is a little out of the center, but in return it is close to the main bus stops

(6) Cleaning has a lot to be desired for a 4 star hotel

(7) In the bathroom only 1 schampoo and only 1 shower gel for 2 people

(8) Clean, spacious and what’s more functional

(9) Many steps 93 lift small

(10) nothing to complain; all perfect

(11) all very nice and professional

(12) I liked practically all

(13) I liked all

(14) There is nothing that I didn’t like

(15) However, a shuttle service for the city center would be convenient especially in the evening

(16) Great position, hotel in remaking this is why there’s care for modernizing, great breakfast, fabulous wifi internet I made a video call with no problems and I was in my room, room and bath very spacious, gentle staff

(17) The large room and the comfortable bath, the WIFI service and the pleasant breakfast

(i) Definitely better the deluxe one that I have already taken other times and at lower costs

(ii) At 9 45 breakfast was rather scarce for the price of 15 Euros

Notes

[1] Tagging is very important to tell apart homographs like "personale" in this example, which would be wrongly classified by a bag-of-words approach: 1240342728, "L unico difetto è che, a differenza di altri ostelli, l armadietto personale è molto piccolo."

[2] Or in the example below, where, however, the annotated aspect has been wrongly marked as "other" in slot 8: as will be shown in a later section, "palazzo", "struttura" and "hotel" have all usually been associated with location, slot 7. 1240351211;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0; 0;0;0;0;0;0;1;0;1;"il palazzo dove è posta la struttura e l’accesso dalla via lascia un po’ a desiderare; tutto sembra ma non un hotel."

[3] The list has been derived, by checking and re-elaborating the data, from a number of previous lexica, as for instance the one by Esuli and Sebastiani (2006).

[4] But in (9) the annotation is not consistent: both "scala" and "ascensore" are mainly annotated with aspect 2, rather than with the aspect 3 found here. Apart from a small number of cases, neither of them is annotated with 8, "Other".

[5] Here are two examples of empty evaluation: (i) 1240349964;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0; 0;0;0;"Decisamente meglio la deluxe che ho già occupato altre volte e con costi inferiori." (ii) 1240350466;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0; 0;0;0;"Alle 9 45 la colazione era piuttosto scarsa per il prezzo di 15 Euro." While example (i) may still be regarded as a case of implicit association with aspect 2, due to the presence of the attribute "deluxe", which usually refers to "camera", no such reading is available for example (ii), where "colazione" and "prezzo" are clear seeds for aspects 3 and 5.

[6] This may simply be due to the fact that a given seed may be less "relevant" than another present in the same text. Some examples below:

(16) 1240344314;0;0;0;1;1;0;1;1;0;1;1;0;0;0;0;1;1;0;1;1;0; 0;0;0;”Ottima posizione, hotel in rifacimento per cui c’è una cura verso l’ammodernamento, ottima colazione, favoloso wifi internet ho fatto una video call senza problemi ed ero in camera, camera e bagno molto ampi, personale gentile”

(17) 1240343916;0;0;0;0;0;0;1;1;0;0;0;0;0;0;0;0;0;0;0;0;0; 0;0;0;”La vasta camera ed il bagno confortevole, il servizio WIFI e la piacevole colazione”

In (16) all aspects have been correctly annotated; not so in (17). In this example there are several "relevant" aspects to be annotated: aspects 2, 3 and 6.
