
EVALITA Evaluation of NLP and Speech Tools for Italian - December 17th, 2020

Valerio Basile, Danilo Croce, Maria Di Maro, et al.

HaSpeeDe: Hate Speech Detection

UO @ HaSpeeDe2: Ensemble Model for Italian Hate Speech Detection

Mariano Jason Rodriguez Cisnero and Reynier Ortega Bueno

Abstract

This document describes our participation in the Hate Speech Detection task at Evalita 2020. Our system is based on deep learning techniques, specifically RNNs and an attention mechanism, combined with transformer representations and linguistic features. In the training process, multi-task learning was used to increase the system's effectiveness. The results show that some of the selected features did not combine well within the model. Nevertheless, the level of generalization achieved yields encouraging results.

Editor's note

Copyright 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

Full text

1. Introduction

1Modern societies have found easy and engaging ways to share information via social media, and users discover the freedom to express themselves through online communication. Even though the ability to express oneself freely is a human right, some users take this opportunity to spread hateful content, and this kind of information carries a dangerous and hurtful potential. Recognizing such content automatically is an interesting topic for researchers.

2Creative methods have been proposed to tackle the fascinating task of recognizing hate in texts (De la Pena Sarracén et al. 2018; Gambäck and Sikdar 2017). Some of these works address the problem using feature extraction (Schmidt and Wiegand 2017) and classification algorithms such as SVM (Santucci et al. 2018). In recent years, deep learning has become one of the most successful research areas in Natural Language Processing (NLP). There are exciting investigations on this topic, such as (Cimino, De Mattei, and Dell’Orletta 2018), involving LSTMs (Liu and Guo 2019) and transformers (Vaswani et al. 2017), which have gained attention in the NLP community due to their results.

3We propose a model based on multiple representations learned by means of deep learning techniques and linguistic knowledge: in particular, a Long Short-Term Memory architecture combined with linguistic features and language model representations given by a special kind of transformer model, BERT.

4The paper is organized as follows. Section 2 gives a brief description of the HaSpeeDe task. Our hate detection system is presented in Section 3. The experiments and results are discussed in Section 4. Finally, conclusions and future directions are given in Section 5. The code of this work is available on GitHub: https://github.com/mjason98/evalita20_hate

2. HaSpeeDe2 Task

5Hate speech and stereotype recognition on social media have become an attractive research area from the computational point of view. In the second edition of HaSpeeDe (Sanguinetti et al. 2020) at Evalita 2020 (Basile et al. 2020), the organizers proposed three subtasks. The main one is subtask A, which aims at determining the presence or absence of hateful content in a text; the dataset is composed of 6839 short texts, 2766 labeled as hate speech and 4076 as not hate speech. In this work we focused only on subtask A. Subtask B is a binary classification problem oriented to stereotype detection, and subtask C is a sequence labeling task aimed at recognizing Nominal Utterances in hateful tweets.

3. Our Proposal

6We dealt with the hate detection task as a text classification problem, assigning texts to the “hateful” or “not hateful” category. We train a deep learning model based on an attention mechanism and Recurrent Neural Networks, specifically a Bidirectional Long Short-Term Memory (Bi-LSTM) (Hochreiter and Schmidhuber 1997), combined with linguistic features and transformer representations by means of an interpretable multi-source fusion component (Karimi et al. 2018).

7In Section 3.1 and Section 3.2 we describe the linguistic features and the transformer representation used in this work. Section 3.3 presents the preprocessing phase. Finally, the neural network model and the feature ensemble are described in Section 3.4.

3.1 Linguistic Features

8To build the hate detection model, we start by extracting several sets of linguistic features:


WordNet Features: We count the number of verbs, adverbs, nouns and adjectives. Also, for every word, we calculate the average of its similarity with respect to the other words, using the similarity function provided by the WordNet1 corpus. Furthermore, we consider the degree of lexical ambiguity by counting the number of synsets of each word within the text.
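As a rough illustration, the sketch below computes similar counts with NLTK's WordNet interface; the Italian language code, the use of path_similarity, and the pairwise averaging are assumptions made for the example, not necessarily the authors' exact choices.

```python
# Hedged sketch of WordNet-based counts using NLTK's Open Multilingual WordNet.
# Assumptions: lang="ita" synsets, path_similarity between first synsets, and
# POS presence checked through synset lookups instead of a POS tagger.
from itertools import combinations
from nltk.corpus import wordnet as wn

def wordnet_features(tokens):
    pos_counts = {pos: 0 for pos in (wn.VERB, wn.ADV, wn.NOUN, wn.ADJ)}
    ambiguity = []
    for tok in tokens:
        synsets = wn.synsets(tok, lang="ita")
        ambiguity.append(len(synsets))              # lexical ambiguity: number of synsets
        for pos in pos_counts:
            if wn.synsets(tok, pos=pos, lang="ita"):
                pos_counts[pos] += 1
    sims = []                                       # average pairwise word similarity
    for a, b in combinations(tokens, 2):
        sa, sb = wn.synsets(a, lang="ita"), wn.synsets(b, lang="ita")
        if sa and sb:
            s = sa[0].path_similarity(sb[0])
            if s is not None:
                sims.append(s)
    avg_sim = sum(sims) / len(sims) if sims else 0.0
    return pos_counts, avg_sim, ambiguity
```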

9Hurt and Sentiment content: HurtLex (Bassignana, Basile, and Patti 2018) is a lexicon of offensive, aggressive, and hateful words in over 50 languages. The words belonging to each of the 17 categories offered by the lexicon are counted and added as linguistic features, jointly with polarity and semantic values obtained from the SenticNet (Cambria et al. 2018) corpus.
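A minimal sketch of this kind of lexicon counting is shown below; the HurtLex file path and column names are hypothetical placeholders for the actual release format, and the SenticNet polarity lookup is omitted.

```python
# Hedged sketch: count how many words of a text fall into each HurtLex category.
# The TSV path and the "lemma"/"category" column names are assumptions about the
# lexicon file, not a verified schema.
import csv
from collections import Counter

def load_hurtlex(path="hurtlex_IT.tsv"):
    lexicon = {}                                   # lemma -> HurtLex category
    with open(path, encoding="utf-8") as fh:
        for row in csv.DictReader(fh, delimiter="\t"):
            lexicon[row["lemma"].lower()] = row["category"]
    return lexicon

def hurtlex_counts(tokens, lexicon, categories):
    hits = Counter(lexicon[t] for t in tokens if t in lexicon)
    return [hits.get(c, 0) for c in categories]    # one count per category (17 in total)
```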

10Information Gain: Information gain (Lewis 1992) has been a good feature selection measure for text categorization. It takes into account the presence of a term in a category as well as its absence, and can be defined by:

$$IG(t_k, C_i) = \sum_{c \in \{C_i, \bar{C}_i\}} \sum_{t \in \{t_k, \bar{t}_k\}} P(t, c)\, \log \frac{P(t, c)}{P(t)\, P(c)}$$

where $c \in \{C_i, \bar{C}_i\}$ and $t \in \{t_k, \bar{t}_k\}$. In this formula, probabilities are interpreted over an event space of documents: for example, $P(\bar{t}_k, C_i)$ is the probability that, for a random document $d$, the term $t_k$ does not occur in $d$ and $d$ belongs to category $C_i$. In our case there are two categories, hateful and not hateful, and the term is the word's lemma.


11To create the information gain feature (IgF), we calculate the IG for every word and choose the words with the highest values2. Then, the occurrences of those selected words in the text are counted.
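A small sketch of this selection step, under the assumption of binary document-level term occurrence, could look as follows; it mirrors the formula above rather than the authors' exact implementation.

```python
# Hedged sketch: information gain per term over the two categories, then top-k selection.
import math

def information_gain(docs, labels, term):
    # docs: list of sets of lemmas; labels: parallel list of category labels
    n = len(docs)
    p_t = sum(term in d for d in docs) / n
    ig = 0.0
    for c in set(labels):
        p_c = labels.count(c) / n
        for present in (True, False):
            p_tc = sum((term in d) == present and l == c
                       for d, l in zip(docs, labels)) / n
            p_term = p_t if present else 1.0 - p_t
            if p_tc > 0:
                ig += p_tc * math.log(p_tc / (p_term * p_c))
    return ig

def top_ig_terms(docs, labels, k=50):
    vocab = {w for d in docs for w in d}
    scores = {w: information_gain(docs, labels, w) for w in vocab}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```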

3.2 Italian BERT

12Finally, we use a pre-trained Italian BERT3 to compute a deep representation of the text. BERT (Devlin et al. 2018) is one of the most widely used auto-encoding pre-trained Language Models (PLMs). It is trained with the masked language modeling task, which randomly masks some tokens in a text sequence and then independently recovers the masked tokens by conditioning on the encoding vectors obtained by a bidirectional Transformer.

13Inside BERT, the information is passed forward across the transformer layers. In this work, we used the output of one specific layer; this operation can be expressed by:

$$h_1 = H^{l}_1(text_{tok}), \qquad h_i = H^{l}_i(h_{i-1}), \quad i = 2, \dots, n$$


where $text_{tok}$ is the text after tokenization4, $h_i$ is the output of the $i$-th transformer layer $H^{l}_i$, and $n$ is the total number of transformer layers in BERT. Then, for a specific $i$, the order-2 tensor $h_i$ is reduced to a vector $f_{bert}$, which serves as a deep representation of the initial text and acts as the PLM feature.
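As an illustration, a layer-wise representation of this kind can be obtained with the Hugging Face transformers library and the Italian model given in footnote 3; the specific layer index and the mean pooling over tokens below are assumptions for the example, since the reduction used is not stated.

```python
# Minimal sketch (not the authors' exact code): extracting a layer-i representation
# from the pre-trained Italian BERT (dbmdz/bert-base-italian-cased).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-italian-cased",
                                  output_hidden_states=True)

def bert_feature(text: str, layer: int = 8) -> torch.Tensor:
    """Return a fixed-size vector f_bert taken from transformer layer `layer`."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        out = model(**enc)
    h_i = out.hidden_states[layer]        # (1, seq_len, 768): order-2 tensor per text
    return h_i.mean(dim=1).squeeze(0)     # reduce the token dimension (assumed pooling)

f_bert = bert_feature("esempio di tweet")  # 768-dimensional PLM feature
```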

3.3 Preprocessing


14In the preprocessing step, stopwords are first removed. Then, hashtags composed of several words are split (e.g. #NessunDorma becomes # nessun dorma). We use a regular expression5 algorithm to achieve this step.

15Secondly, using the FreeLing6 tool we obtain the lemma of each word, and non-alphanumeric characters are removed. Finally, the remaining words are represented as vectors using pre-trained word embeddings generated by the Word2Vec model (Mikolov et al. 2013).
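A toy sketch of the hashtag-splitting step is given below; the greedy regular-expression segmenter and the tiny vocabulary are illustrative stand-ins for the automaton built from an Italian corpus mentioned in footnote 5.

```python
# Hedged sketch: split multi-word hashtags with a regex built from a known vocabulary.
import re

def build_splitter(vocab):
    # alternation of known words, longest first, used as a crude segmenter
    words = sorted(vocab, key=len, reverse=True)
    return re.compile("|".join(map(re.escape, words)), re.IGNORECASE)

def split_hashtag(tag, splitter):
    """#NessunDorma -> '# nessun dorma' (falls back to the raw tag if no full match)."""
    body = tag.lstrip("#")
    parts = splitter.findall(body.lower())
    return "# " + " ".join(parts) if "".join(parts) == body.lower() else tag

splitter = build_splitter({"nessun", "dorma"})   # toy vocabulary
print(split_hashtag("#NessunDorma", splitter))   # -> "# nessun dorma"
```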

3.4 The Deep Ensemble Model

16The standard LSTM receives a vector xt sequentially at each time step and produces a hidden state ht. Each hidden state ht is calculated as follows:

1

$$\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}$$

where all $W_{(*)}$, $U_{(*)}$ and $b_{(*)}$ are parameters to be learned during training, the function $\sigma$ is the sigmoid function, and $\odot$ stands for element-wise multiplication.

A bidirectional LSTM performs the same operations as the standard LSTM but processes the incoming text in left-to-right and right-to-left order in parallel. Thus, its output becomes $h_t = [\overrightarrow{h_t}; \overleftarrow{h_t}]$, the concatenation of the hidden states for the two directions.
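In PyTorch, for instance, such a bidirectional layer can be instantiated directly (the sizes here are illustrative, not the paper's exact configuration):

```python
# Minimal example: a bidirectional LSTM whose output at each step is the
# concatenation of the forward and backward hidden states.
import torch
import torch.nn as nn

bilstm = nn.LSTM(input_size=300, hidden_size=128, batch_first=True, bidirectional=True)
x = torch.randn(2, 20, 300)     # (batch, sequence length, embedding size)
out, _ = bilstm(x)              # out: (2, 20, 256) = [forward ; backward] per time step
```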

By adding an attention mechanism, we allow the model to decide which part of the sequence to “attend to”. First, let us define the softmax function $\pi(v)$ for a vector $v = (v_1, \dots, v_L)$ as:

2

$$\pi(v)_i = \frac{e^{v_i}}{\sum_{j=1}^{L} e^{v_j}}$$

Then, let $X \in \mathbb{R}^{L \times N}$ be the matrix of input vectors, where $L$ is the size of each vector and $N$ is the length of the given sequence. We define the attention layer (AttLSTM) as a regular LSTM layer like (1), extended with a multi-head attention operation at each time step. In this operation, $M$ is the size of the hidden state vector $h_t$, and $W_a$, $W_k$, $b_a$ and $b_k$ are learnable parameters; $(*)^{T}$ denotes the transpose operation. The output of the layer is $A = [A_1, \dots, A_N]$, the concatenation of the hidden states produced by the AttLSTM at each time step.

17As mentioned before, we propose a feature ensemble using an interpretable multi-source fusion component (IMF). The IMF aims to combine features from different sources. A naive way of doing this is to concatenate the vector representations into a single vector; this scheme treats all sources equally, but one source may contribute more than the others. With the IMF, we instead weight the contribution of every feature source via an attention mechanism. The IMF can be expressed by:

$$r_i = W_i f_i + b_i$$

where $r_i$ is a projection of $f_i$, the $i$-th feature vector passed to the IMF, ensuring that every $r_i$ has the same size. In this step, all the $W_i$, $b_i$, $W_a$ and $b_a$ are parameters to be learned during training. Then:

3

$$\alpha = \pi(W_a [r_1; \dots; r_k] + b_a), \qquad z = \sum_i \alpha_i\, r_i$$

where $\alpha_i$ represents the importance of $r_i$ in the final calculation of $z$, the IMF outcome.
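A minimal PyTorch sketch of this fusion idea is shown below, assuming one linear projection per source and a softmax over sources; sizes and names are illustrative, not the authors' released implementation.

```python
# Hedged sketch of interpretable multi-source fusion (IMF): project each feature
# source to a common size, weight sources with softmax attention, and sum.
import torch
import torch.nn as nn

class IMF(nn.Module):
    def __init__(self, source_dims, proj_dim=64):
        super().__init__()
        # one projection (W_i, b_i) per feature source so every r_i has the same size
        self.proj = nn.ModuleList(nn.Linear(d, proj_dim) for d in source_dims)
        self.attn = nn.Linear(proj_dim, 1)   # plays the role of W_a, b_a

    def forward(self, features):
        r = torch.stack([p(f) for p, f in zip(self.proj, features)], dim=1)  # (B, k, proj)
        alpha = torch.softmax(self.attn(r).squeeze(-1), dim=1)               # (B, k) weights
        z = (alpha.unsqueeze(-1) * r).sum(dim=1)                             # fused vector
        return z, alpha

imf = IMF(source_dims=[768, 9, 20, 50])   # e.g. BERT, WordNet, Hurt-Sentiment, IG sizes
z, alpha = imf([torch.randn(2, d) for d in [768, 9, 20, 50]])
```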

18To increase the learning power of our system, we used multi-task learning (Caruana 1997), predicting the polarity of tweets in parallel with the classes of the hate speech detection subtask. This approach had already been applied (Cimino, De Mattei, and Dell’Orletta 2018) in HaSpeeDe at Evalita 2018 (Bosco et al. 2018). The tweets used for the auxiliary polarity task are taken from the Sentipolc-2016 (Barbieri et al. 2016) challenge.

19Finally, we present the composition of the previous layers and features that creates our deep ensemble model:

4

$$o_{b1} = \text{Bi-LSTM}(E)$$

where $E$ represents the vector representation of the text (see Section 3.3). Equation (4) is the first block of our model; the second block can be described as follows:

5

$$A = \text{AttLSTM}(o_{b1}), \qquad o_{b2} = \text{MaxPool}(A)$$

The vector $o_{b2}$ is the output of a MaxPool layer over the vector sequence $A$. Then:

6

$$z = \text{IMF}(o_{b2}, f_{bert}, f_{wn}, f_{hs}, f_{ig}), \qquad \hat{y} = \pi(W_h z + b_h), \qquad \hat{y}_s = \pi(W_f z + b_f)$$

The third block is described in (6), where $W_h$, $W_f$, $b_f$ and $b_h$ are learnable parameters. The vectors $f_{bert}$, $f_{wn}$, $f_{hs}$ and $f_{ig}$ correspond to the BERT, WordNet, Hurt-Sentiment and Information Gain features respectively. The polarity of the tweet is predicted through $\hat{y}_s$ and the hate value through $\hat{y}$.
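Under the reconstruction above, the three blocks could be wired together roughly as in the following sketch; the block boundaries, the plain LSTM standing in for the AttLSTM, and the head assignments are assumptions rather than the authors' released code, and IMF refers to the module sketched earlier in this section.

```python
# Rough sketch of the deep ensemble wiring (assumptions: a plain LSTM replaces the
# AttLSTM, and IMF is the fusion module sketched earlier).
import torch
import torch.nn as nn

class DeepEnsemble(nn.Module):
    def __init__(self, embeddings, feat_dims, hidden=128):
        super().__init__()
        self.emb = nn.Embedding.from_pretrained(embeddings)            # Section 3.3 vectors
        self.drop = nn.Dropout(0.3)
        self.bilstm = nn.LSTM(embeddings.size(1), hidden,
                              batch_first=True, bidirectional=True)    # block 1
        self.attlstm = nn.LSTM(2 * hidden, hidden, batch_first=True)   # block 2 (stand-in)
        self.imf = IMF([hidden] + feat_dims, proj_dim=64)              # block 3 fusion
        self.hate_head = nn.Linear(64, 2)                              # W_h, b_h
        self.pol_head = nn.Linear(64, 2)                               # W_f, b_f

    def forward(self, token_ids, f_bert, f_wn, f_hs, f_ig):
        e = self.drop(self.emb(token_ids))
        o_b1, _ = self.bilstm(e)
        a, _ = self.attlstm(o_b1)
        o_b2 = a.max(dim=1).values                                     # MaxPool over A
        z, _ = self.imf([o_b2, f_bert, f_wn, f_hs, f_ig])
        return self.hate_head(z), self.pol_head(z)                     # hate, polarity logits
```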

20The overall weighted loss of the model is based on cross-entropy, giving a higher importance to the hate speech predictions than to the polarity predictions. The overall loss is calculated according to the following formula:

$$L = \lambda L_1 + (1 - \lambda)\, L_2, \qquad L_1 = -\sum_i y_i \log \hat{y}_i, \qquad L_2 = -\sum_i y_{s,i} \log \hat{y}_{s,i}$$

Here $L_1$ and $L_2$ are the cross-entropy losses of the hate predictions and the sentiment polarity predictions respectively, and $\lambda$ is the importance weight of the main task. The values $y_i$ and $y_{s,i}$ represent the ground-truth hate and polarity labels respectively. The final loss is thus obtained as a convex combination of $L_1$ and $L_2$.
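A minimal sketch of this weighted multi-task loss, using the λ = 0.75 reported in Section 4, could be:

```python
# Hedged sketch: convex combination of the main (hate) and auxiliary (polarity)
# cross-entropy losses, with lambda weighting the main task.
import torch.nn.functional as F

def multitask_loss(hate_logits, hate_labels, pol_logits, pol_labels, lam=0.75):
    l1 = F.cross_entropy(hate_logits, hate_labels)   # main task: hate detection
    l2 = F.cross_entropy(pol_logits, pol_labels)     # auxiliary task: sentiment polarity
    return lam * l1 + (1.0 - lam) * l2
```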

4. Experiments and Results

21In this section we present and discuss the results of our proposed method on subtask A. The organizers allowed a maximum of two submissions for every subtask in the challenge. We named our team UO.

22Experiments were conducted in two main directions: first, to investigate the impact of the IMF fusion strategy, and second, to evaluate the impact of each single representation on our proposal. The results of our experiments are presented in Table 1 and Table 2.

23In those tables, the column heads gives the number of attention heads in the AttLSTM layer; if this cell is empty (marked with -), the layer was not used. The columns bert and ig indicate whether the BERT and IG representations were used, and the column wn-hs indicates the presence of the Hurt-Sentiment and WordNet based representations; a cross in a cell means the corresponding representation was not used in that run. We used 10% of the training dataset for validation and report the accuracy computed on this validation data.

24Both tables show that the presence of BERT increases the performance, and almost all runs obtain higher values with the IMF than without it. Increasing the number of attention heads improves the results without the IMF, but the opposite occurs in its presence.

Table 1. Experiment results without IMF

Name | heads | bert | ig | wn-hs | acc
run1 | 2 |  |  |  | 0.764386
run2 | - |  |  |  | 0.742690
run3 | 3 |  |  |  | 0.767544
run4 | 2 |  |  |  | 0.713450
run5 | 2 |  |  |  | 0.763158
run6 | - |  |  |  | 0.757310
run7 | - |  |  |  | 0.724152
run8 | - |  |  |  | 0.755848

Table 2. Experiment results with IMF

Name | heads | bert | ig | wn-hs | acc
run1 | 2 |  |  |  | 0.795848
run2 | - |  |  |  | 0.779101
run3 | 3 |  |  |  | 0.764620
run4 | 2 |  |  |  | 0.720760
run5 | 2 |  |  |  | 0.774854
run6 | - |  |  |  | 0.767544
run7 | - |  |  |  | 0.719298
run8 | - |  |  |  | 0.777778

25The pre-trained embeddings have a size of 300, and the number of neurons in the Bi-LSTM and in the AttLSTM was 128. The λ value was set to 0.75 and the dropout (Srivastava et al. 2014) after the embedding layer was 0.3. The whole model was trained with the Adam optimizer (Kingma and Ba 2015) with a learning rate of 0.01.
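For reference, the reported hyperparameters can be collected as follows; only the values themselves come from the paper, and `model` stands for any implementation of the architecture in Section 3.4.

```python
# Hyperparameters as reported above, gathered in one place for reproduction attempts.
import torch

CONFIG = {
    "embedding_dim": 300,    # pre-trained Word2Vec embeddings
    "lstm_hidden": 128,      # Bi-LSTM and AttLSTM units
    "lambda": 0.75,          # main-task weight in the multi-task loss
    "dropout": 0.3,          # dropout after the embedding layer
    "learning_rate": 0.01,   # Adam learning rate
}

def make_optimizer(model: torch.nn.Module) -> torch.optim.Adam:
    return torch.optim.Adam(model.parameters(), lr=CONFIG["learning_rate"])
```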

26The runs run1 and run2 of Table 2 were chosen as our final submissions for the subtask. run1 uses the attention layer described in Section 3.4 and considers all proposed representations; run2 uses neither the attention mechanism nor the handcrafted features, relying only on the BERT text representation and the rest of the architecture.

27Table 3 shows the official results of our system. The evaluation was performed on two distinct corpora: one composed of tweets and the other of news headlines.

Table 3. Official results

Runs | macro-F
UO:tweets_run1 | 0.6878
UO:tweets_run2 | 0.7214
BEST_RATED:tweets | 0.8088
UO:news_run1 | 0.6657
UO:news_run2 | 0.7314
BEST_RATED:news | 0.7744

28These results show that, of our two models, the simpler one obtains better results, although simplicity alone does not guarantee better performance in deep learning. They also indicate that some linguistic features decrease the effectiveness of the model, while the similarity between the results on the tweets and news evaluation sets suggests that the system is able to generalize with good performance.

5. Conclusions and Future Work

29In this paper we presented an ensemble model for sub-task A of the Hate Speech Detection task (HaSpeeDe2) at Evalita 2020. Our proposal combines linguistic features and RNNs with transformer representations through an IMF. In the training phase, we used a multi-task learning approach to recognize hate speech and polarity simultaneously.

30The achieved results show the ability of this ensemble to generalize the detection of hateful content across different text genres. Nevertheless, some handcrafted features decrease its results. Motivated by this, we plan to explore better feature selection, other attention mechanisms and multi-task learning techniques to improve the performance.

Bibliographie

Francesco Barbieri, Valerio Basile, Danilo Croce, Malvina Nissim, Nicole Novielli, and Viviana Patti. 2016. “Overview of the Evalita 2016 Sentiment Polarity Classification Task.” In.

Valerio Basile, Danilo Croce, Maria Di Maro, and Lucia C. Passaro. 2020. “EVALITA 2020: Overview of the 7th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian.” In Proceedings of the Seventh Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2020), edited by Valerio Basile, Danilo Croce, Maria Di Maro, and Lucia C. Passaro. Online: CEUR.org.

Elisa Bassignana, Valerio Basile, and Viviana Patti. 2018. “Hurtlex: A Multilingual Lexicon of Words to Hurt.” In 5th Italian Conference on Computational Linguistics, Clic-It 2018, 2253:1–6. CEUR-WS.

Cristina Bosco, Felice Dell’Orletta, Fabio Poletto, Manuela Sanguinetti, and Maurizio Tesconi. 2018. “Overview of the Evalita 2018 Hate Speech Detection Task.” In EVALITA 2018 - Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian, 2263:1–9. CEUR.

Erik Cambria, Soujanya Poria, Devamanyu Hazarika, and Kenneth Kwok. 2018. “SenticNet 5: Discovering Conceptual Primitives for Sentiment Analysis by Means of Context Embeddings.” In Thirty-Second Aaai Conference on Artificial Intelligence.

Rich Caruana. 1997. “Multitask Learning.” Machine Learning 28 (1): 41–75.

Andrea Cimino, Lorenzo De Mattei, and Felice Dell’Orletta. 2018. “Multi-Task Learning in Deep Neural Networks at Evalita 2018.” Proceedings of the 6th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian (EVALITA’18), 86–95.

Gretel Liz De la Pena Sarracén, Reynaldo Gil Pons, Carlos Enrique Muniz Cuza, and Paolo Rosso. 2018. “Hate Speech Detection Using Attention-Based Lstm.” EVALITA Evaluation of NLP and Speech Tools for Italian 12: 235.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. “Bert: Pre-Training of Deep Bidirectional Transformers for Language Understanding.” arXiv Preprint arXiv:1810.04805.

Björn Gambäck, and Utpal Kumar Sikdar. 2017. “Using Convolutional Neural Networks to Classify Hate-Speech.” In Proceedings of the First Workshop on Abusive Language Online, 85–90.

Sepp Hochreiter, and Jürgen Schmidhuber. 1997. “Long Short-Term Memory.” Neural Computation 9 (8): 1735–80. https://doi.org/10.1162/neco.1997.9.8.1735.

Hamid Karimi, Proteek Roy, Sari Saba-Sadiya, and Jiliang Tang. 2018. “Multi-Source Multi-Class Fake News Detection.” In Proceedings of the 27th International Conference on Computational Linguistics, 1546–57.

Diederik P. Kingma, and Jimmy Ba. 2015. “Adam: A Method for Stochastic Optimization.” In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, edited by Yoshua Bengio and Yann LeCun. http://arxiv.org/abs/1412.6980.

David D. Lewis. 1992. “An Evaluation of Phrasal and Clustered Representations on a Text Categorization Task.” In Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 37–50.

Gang Liu, and Jiabao Guo. 2019. “Bidirectional Lstm with Attention Mechanism and Convolutional Layer for Text Classification.” Neurocomputing 337: 325–38.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. “Distributed Representations of Words and Phrases and Their Compositionality.” In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a Meeting Held December 5-8, 2013, Lake Tahoe, Nevada, United States, edited by Christopher J. C. Burges, Léon Bottou, Zoubin Ghahramani, and Kilian Q. Weinberger, 3111–9. http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.

Manuela Sanguinetti, Gloria Comandini, Elisa Di Nuovo, Simona Frenda, Marco Stranisci, Cristina Bosco, Tommaso Caselli, Viviana Patti, and Irene Russo. 2020. “Overview of the Evalita 2020 Second Hate Speech Detection Task (HaSpeeDe 2).” In Proceedings of the 7th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian (EVALITA 2020), edited by Valerio Basile, Danilo Croce, Maria Di Maro, and Lucia C. Passaro. Online: CEUR.org.

Valentino Santucci, Stefania Spina, Alfredo Milani, Giulio Biondi, and Gabriele Di Bari. 2018. “Detecting Hate Speech for Italian Language in Social Media.” In EVALITA 2018, Co-Located with the Fifth Italian Conference on Computational Linguistics (Clic-It 2018). Vol. 2263.

Anna Schmidt, and Michael Wiegand. 2017. “A Survey on Hate Speech Detection Using Natural Language Processing.” In Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media, 1–10.

Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. “Dropout: A Simple Way to Prevent Neural Networks from Overfitting.” J. Mach. Learn. Res. 15 (1): 1929–58. http://dl.acm.org/citation.cfm?id=2670313.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. “Attention Is All You Need.” In Advances in Neural Information Processing Systems, 5998–6008.

Notes

1 The WordNet corpus comes from the Python library NLTK.

2 We selected the top 50 words with the highest IG values.

3 https://huggingface.co/dbmdz/bert-base-italian-cased

4 The text is represented as a vector of integers using the tokenizer function of the BERT model.

5 The automaton was created using the re library from Python and words from an Italian corpus.

6 http://nlp.lsi.upc.edu/freeling/index.php
