
EVALITA Evaluation of NLP and Speech Tools for Italian - December 17th, 2020

Edited by Valerio Basile, Danilo Croce, Maria Di Maro, et al.

DANKMEMES: Multimodal Artefacts Recognition

UPB @ DANKMEMES: Italian Memes Analysis - Employing Visual Models and Graph Convolutional Networks for Meme Identification and Hate Speech Detection

George-Alexandru Vlad, George-Eduard Zaharia, Dumitru-Clementin Cercel and Mihai Dascalu

Abstract

Certain events or political situations lead online users to express themselves through different modalities. One of these is the Internet meme, which combines text with a representative image to convey a wide range of emotions, from humor to sarcasm and even hate. In this paper, we describe our approach for the DANKMEMES competition from EVALITA 2020, consisting of a multimodal multi-task learning architecture based on two main components. The first is a Graph Convolutional Network combined with an Italian BERT for text encoding, while the second is varied among several image-based architectures (i.e., ResNet50, ResNet152, and VGG-16) for image representation. Our solution achieves good performance on the first two tasks of the competition, ranking 3rd for both Task 1 (.8437 macro-F1 score) and Task 2 (.8169 macro-F1 score), while exceeding the official baselines by wide margins.

Editor's note

George-Alexandru Vlad and George-Eduard Zaharia contributed equally

Full text

1. Introduction

During the past two decades, the Internet evolved massively and the social web became a hub where people share their opinions, cooperate to solve issues, or simply discuss various topics. There are many ways in which users can express themselves: plain text, videos, or images. The latter option became widely used due to its convenience; however, images are frequently accompanied by a short text description to better convey information. As the Internet and online social interactions evolved, certain image templates emerged and gained global popularity, contributing to a de facto standardization of joint text-image usage and thus leading to the creation of memes. Memes can be humorous, satirical, offensive, or hateful, therefore encapsulating a wide range of emotions and beliefs. Properly distinguishing memes from non-memes, and then analyzing them to detect the users' intentions, is becoming a pressing task, for instance in online marketing campaigns targeting the automated identification of opinions held by certain groups of users.

The DANKMEMES competition (Miliani et al. 2020) from EVALITA 2020 (Basile et al. 2020) challenged participants to approach the previously mentioned issues by creating systems that identify and analyze Internet memes in Italian. The competition consists of three tasks, of which we tackled two. Task 1 - Meme Detection considers the identification of memes in a collection of images, such that a clear distinction can be made between memes and ordinary images. Afterwards, Task 2 - Hate Speech Identification targets the classification of images in terms of their purpose, analyzing content to identify whether images are hateful or not.

2. Related Work

2.1 Multimodal Fake News Detection

Singhal et al. (2019) employed multimodal techniques for fake news detection. The authors introduced SpotFake, an architecture divided into three sub-parts: one for identifying textual features using Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al. 2018), a second for visual analysis based on VGG-19 (Simonyan and Zisserman 2014), while the third combines the previously mentioned elements into a single feature vector.

Similarly, Shah and Kobti (2020) performed multimodal fake news detection by using two separate channels, visual and textual, both aiming to extract relevant features. Moreover, they included a Cultural Algorithm that introduces another dimension by employing situational knowledge, i.e., information about the depicted event as seen by a specific individual. Another approach to fake news detection was introduced by Khattar et al. (2019), who created MVAE, a multimodal autoencoder including encoders (both visual and textual), decoders, and a detection module for classifying the inputs.

2.2 Multimodal Hate Speech Identification

Kiela et al. (2020) created a new dataset specifically designed for identifying hateful speech in memes. The authors also introduced a series of baselines for further comparison, including ResNet-152 (He et al. 2016) and ViLBERT (Lu et al. 2019) for the visual channel, and BERT for the textual counterpart.

Furthermore, Sabat, Ferrer, and Giro-i-Nieto (2019) tackled the problem of hate speech identification in memes, also employing a multimodal system. However, they used an Optical Character Recognition system to extract the textual component from the inputs, alongside visual features from a VGG-16 component and the text encoded with BERT.

3. Method

Our approach for both tasks relies on multi-task learning (Caruana 1997), and our architecture consists of two main neural network components, one for the text input and the other for the image input. We combined the outputs of these two components and used the learned features to predict the required class, either for Task 1 or Task 2.

3.1 Corpus

The dataset for the meme detection task is split into two parts, train and test. The training dataset contains 1,600 image entries, together with a CSV file containing other useful metadata: the engagement (i.e., number of comments and likes), the date, and the manipulation (i.e., a binary code denoting a low/high level of image modification), alongside a transcript of the text present in the image. We kept 85% of the entries for training, while 15% were used for validation; the same class distribution was kept in both partitions. The test dataset for the first task contains 400 entries with a corresponding CSV file of similar structure. The second task offers a dataset containing 800 entries, which was partitioned in a similar manner.
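
A minimal sketch of such a stratified split (scikit-learn is our choice of tool, and the `label` column name is an assumption; the paper does not state how the split was implemented):

```python
# Stratified 85/15 train/validation split preserving the class distribution,
# as described above; pandas and the CSV column names are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("dankmemes_train.csv")  # hypothetical file name
train_df, val_df = train_test_split(
    df, test_size=0.15, stratify=df["label"], random_state=42)
```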

3.2 Image Component

Several image-based neural networks were considered for the first component of our final architecture. First, we used VGG-16, which consists of five stacks of convolutional layers (Kim 2014) accompanied by max-pooling layers; weights pretrained on the ImageNet dataset (Deng et al. 2009) were afterwards fine-tuned. Second, we also experimented with ResNet in two variants, ResNet50 and ResNet152. ResNet introduced the concept of skip connections as a solution to the vanishing gradient problem; as such, the networks could be scaled further in depth, enabling more abstract high-level features to be extracted from the input images. Similar to the VGG-16 setup, weights pretrained on ImageNet were fine-tuned for ResNet152, whereas weights pretrained on VGGFace2 (Cao et al. 2018) were used for ResNet50.
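
The paper releases no code, but the partial-freezing scheme discussed later in Section 3.6 could look as follows in Keras; ImageNet weights ship with Keras, whereas VGGFace2 weights for ResNet50 would have to be loaded from an external checkpoint (an assumption here):

```python
# Sketch: pretrained ResNet50 backbone with only the last convolutional
# block (conv5) left trainable; input size follows Section 3.5 (448 x 448).
import tensorflow as tf

def build_image_branch(input_shape=(448, 448, 3)):
    backbone = tf.keras.applications.ResNet50(
        weights="imagenet", include_top=False, input_shape=input_shape)
    for layer in backbone.layers:
        # Unfreeze only the deepest block; Keras names them "conv5_block...".
        layer.trainable = layer.name.startswith("conv5")
    pooled = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
    return tf.keras.Model(backbone.input, pooled)
```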

3.3 Text Component

A Graph Convolutional Network (GCN) (Yao, Mao, and Luo 2019) was selected for representing long-term dependencies between tokens, alongside a pretrained version of BERT for Italian (ItalianBERT) to model the contextual information at sample level. The underlying implementation of the textual feature extractor follows the architectural design of the Vocabulary Graph Convolutional Network with BERT (VGCN-BERT) (Lu, Du, and Nie 2020).

The proposed architecture (VGCN-ItalianBERT) uses a tight coupling between the graph convolutional layers and the ItalianBERT embeddings, enabling the model to better adjust the GCN-extracted features through ItalianBERT's attention mechanism. The input to the VGCN layer is represented by a matrix $X_{d,v}$, where $d$ is the dimension of the ItalianBERT embedding and $v$ is the number of tokens in the dataset vocabulary. A symmetric adjacency matrix $A_{v,v}$ is built to preserve the prior global relationship between tokens, where $v$ is the vocabulary dimension. The edge weight between two nodes $i$ and $j$, denoted as $A_{i,j}$, is initialized with the normalized point-wise mutual information (NPMI) value (Bouma 2009) between the two vocabulary tokens $i$ and $j$. The mechanism of the VGCN layer is formally summarized by the following equations:

$$\mathrm{NPMI}(i,j) = -\frac{1}{\ln p(i,j)}\,\ln\frac{p(i,j)}{p(i)\,p(j)} \qquad (1)$$

$$H_{d,h} = X_{d,v}\,\big(\tilde{A}_{v,v}\,W_{v,h}\big) \qquad (2)$$

$$G_{d,g} = \mathrm{ReLU}(H_{d,h})\,W_{h,g} \qquad (3)$$

where the terms $W_{v,h}$ and $W_{h,g}$ represent the weights of the two GCN internal layers, with $v$ the vocabulary dimension, and $h$ and $g$ the output feature dimensions. In Equation 2, we add the global context by multiplying the normalized adjacency matrix $\tilde{A}$ with the weight matrix of the first GCN layer. We use the normalized adjacency matrix $\tilde{A} = D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$ to ensure numerical stability. A convolution between the input $X_{d,v}$ and the result of the previous operation (Equation 2) combines the global information with the ItalianBERT embeddings. Lastly, Equation 3 projects the features to the dimensions required to fill in the reserved VGCN-ItalianBERT embedding slots.
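
A compact sketch of this layer follows (our reading of Equations 2 and 3; the hidden dimension is an assumed hyperparameter, and the (batch, d, v) input layout mirrors the $X_{d,v}$ notation):

```python
# Hedged TensorFlow sketch of the VGCN layer: G = ReLU(X (Ã W_vh)) W_hg.
import tensorflow as tf

class VGCNLayer(tf.keras.layers.Layer):
    def __init__(self, a_norm, hidden_dim=128, out_dim=16):
        super().__init__()
        # a_norm: precomputed normalized NPMI adjacency matrix Ã, shape (v, v)
        self.a_norm = tf.constant(a_norm, dtype=tf.float32)
        v = int(a_norm.shape[0])
        self.w_vh = self.add_weight(
            name="w_vh", shape=(v, hidden_dim), initializer="glorot_uniform")
        self.w_hg = self.add_weight(
            name="w_hg", shape=(hidden_dim, out_dim), initializer="glorot_uniform")

    def call(self, x):
        # x: (batch, d, v) ItalianBERT embeddings arranged over the vocabulary
        h = tf.matmul(x, tf.matmul(self.a_norm, self.w_vh))  # Equation 2
        return tf.matmul(tf.nn.relu(h), self.w_hg)           # Equation 3
```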

Visual text features describing the actors of a meme are added as the pair sentence to ItalianBERT's input. We cap this second sentence at K tokens, with overflowing tokens dropped. Considering L the maximum number of input tokens, the remaining L - K tokens are split between the text tokens associated with a meme and G reserved VGCN slots. These slots are kept empty, to be internally filled with VGCN embeddings during training. Alongside the ordinary inputs required by ItalianBERT (i.e., input ids, input masks, and segment ids), we build a gcn ids vector, similar to input ids, by mapping each unique input token to the corresponding index in the task vocabulary Vtask; Vtask represents the set of tokens available both in the task text corpus and in ItalianBERT's vocabulary. The second additional input is a binary mask vector with the value 1 for the VGCN reserved tokens and 0 otherwise. During training, all ItalianBERT layers except the last 4 encoder blocks were frozen.
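
For concreteness, one possible packing of these inputs is sketched below; where exactly the G reserved slots sit in the sequence, and the padding details, are our assumptions rather than the authors' released code:

```python
# Illustrative construction of input_ids, gcn_ids, and gcn_mask
# (padding of the sequence up to L is omitted for brevity).
L, K, G = 100, 20, 16  # values reported in Section 3.5

def build_inputs(meme_tokens, visual_tokens, tokenizer, task_vocab):
    visual_tokens = visual_tokens[:K]              # cap the pair sentence at K
    text_budget = L - K - G - 3                    # room for [CLS] and two [SEP]
    tokens = (["[CLS]"] + meme_tokens[:text_budget] + ["[SEP]"]
              + ["[PAD]"] * G                      # reserved VGCN slots
              + visual_tokens + ["[SEP]"])
    input_ids = tokenizer.convert_tokens_to_ids(tokens)
    gcn_ids = [task_vocab.get(tok, 0) for tok in tokens]   # index into Vtask
    n_head = 2 + min(len(meme_tokens), text_budget)        # [CLS] + text + [SEP]
    gcn_mask = [0] * n_head + [1] * G + [0] * (len(visual_tokens) + 1)
    return input_ids, gcn_ids, gcn_mask
```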

3.4 Multimodal Architecture

The final solution consists of a multimodal architecture with two main components, each specialized in processing one informational channel, namely text or images. Dates are segmented and encoded using complementary sine and cosine functions to preserve the cyclic characteristics of days (within a month) and months. Equation 4 describes the cyclical time encoding, where n represents the day value minus 1, divided by the number of days in the corresponding month. The same operations are applied for the month encoding over the month index, but with a denominator of 12. Additional metadata (i.e., manipulation and engagement) was also encoded and used in the final prediction. Values representing the year and the engagement were normalized to ensure the model's stability during training.

$$\mathit{day}_{\sin} = \sin(2\pi n), \qquad \mathit{day}_{\cos} = \cos(2\pi n) \qquad (4)$$
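
The encoding of Equation 4 amounts to a few lines; the month analogue with its denominator of 12 follows the description above, while subtracting 1 from the month index is our assumption for symmetry:

```python
# Cyclical date encoding per Equation 4, using only the standard library.
import calendar
import math

def encode_date(year, month, day):
    days_in_month = calendar.monthrange(year, month)[1]
    n_day = (day - 1) / days_in_month      # n as defined in the text
    n_month = (month - 1) / 12             # month analogue (offset assumed)
    return (math.sin(2 * math.pi * n_day), math.cos(2 * math.pi * n_day),
            math.sin(2 * math.pi * n_month), math.cos(2 * math.pi * n_month))
```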

The two feature vectors from the image and text components were fused by concatenation into a single vector and passed through two fully connected layers, followed by a dropout layer with rate 0.5. The output of the dropout layer is then concatenated with the other extracted features (time, engagement, and manipulation) and fed to the output layer. A softmax activation function is used over the last fully connected layer to compute the probability distribution over the task classes. L2 kernel regularization is applied on the two hidden layers before fusion to account for large activations and to keep the output layer sensitive to the encoded metadata features.
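
A sketch of this fusion head follows; the hidden layer sizes and the L2 strength are assumptions, since the paper reports only the dropout rate and the overall layout:

```python
# Fusion head: concatenate branch outputs, two regularized dense layers,
# dropout, late concatenation of the metadata features, softmax output.
import tensorflow as tf

def fusion_head(img_feats, txt_feats, meta_feats, num_classes=2):
    reg = tf.keras.regularizers.l2(1e-4)   # L2 strength assumed, not reported
    x = tf.keras.layers.Concatenate()([img_feats, txt_feats])
    x = tf.keras.layers.Dense(512, activation="relu", kernel_regularizer=reg)(x)
    x = tf.keras.layers.Dense(256, activation="relu", kernel_regularizer=reg)(x)
    x = tf.keras.layers.Dropout(0.5)(x)                 # rate from the text
    x = tf.keras.layers.Concatenate()([x, meta_feats])  # time, engagement, manipulation
    return tf.keras.layers.Dense(num_classes, activation="softmax")(x)
```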

In addition, an ensemble-based architecture using our ResNet50 + VGCN-ItalianBERT model was also considered. First, the training dataset was split into 5 folds, preserving the class distribution in each fold. The aforementioned model was trained 5 times, using 4 folds for training and the remaining fold for validation. A weighted voting procedure is performed at prediction time, in which the weight of each class is the average confidence score of the voters assigning it the highest probability after softmax. Thus, we favor higher confidence scores over the sheer number of voters when choosing the predicted class.
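
The voting rule can be made precise with a small function; this is our reading of the description (average confidence per class, rather than a simple vote count):

```python
# Confidence-weighted voting across the five fold models for one sample.
import numpy as np

def weighted_vote(prob_matrix):
    # prob_matrix: (n_models, n_classes) softmax outputs
    votes = prob_matrix.argmax(axis=1)           # each model's chosen class
    avg_conf = np.zeros(prob_matrix.shape[1])
    for cls in range(prob_matrix.shape[1]):
        confidences = prob_matrix[votes == cls, cls]
        if confidences.size > 0:
            avg_conf[cls] = confidences.mean()   # average voter confidence
    return int(avg_conf.argmax())
```

Under this rule, three voters at 0.6 confidence lose to two voters at 0.95, which is exactly the "confidence over voter count" behavior described above.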

3.5 Experimental Setup

Preprocessing steps were performed to feed the datasets to our architecture. The texts were tokenized using the ItalianBERT tokenizer, and then the input ids, input masks, segment ids, gcn ids, and gcn masks were computed. Images were resized to a uniform dimension (i.e., 448 x 448) and serialized alongside the text components in a tfrecords file specific to TensorFlow (Abadi et al. 2016). An AdamW optimizer (Loshchilov and Hutter 2017) with a learning rate of 1e-5 and a weight decay rate of 0.01 was used in all conducted experiments. Furthermore, the warm-up proportion was set to 0.1.
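
The reported optimizer settings map directly onto an off-the-shelf AdamW implementation; the library choice below (TensorFlow Addons) is ours, and the 0.1 warm-up proportion would require a custom learning-rate schedule, omitted here:

```python
# AdamW with the hyperparameters stated above (lr=1e-5, weight decay 0.01).
import tensorflow_addons as tfa

optimizer = tfa.optimizers.AdamW(weight_decay=0.01, learning_rate=1e-5)
```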

The maximum input length was limited to L=100 tokens and the visual text features to K=20 tokens, as the textual channel of memes consists of short sentences. Following the experimental setup described in (Lu, Du, and Nie 2020), we reserve G=16 slots to be filled with the resulting VGCN-ItalianBERT embeddings. Moreover, only NPMI values larger than 0.3, corresponding to a higher semantic correlation between words, are kept in the adjacency matrix A; all values below this threshold are set to 0.
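
A sketch of how the thresholded NPMI adjacency matrix could be assembled; the sliding-window counting and the self-loop diagonal follow VGCN-BERT (Lu, Du, and Nie 2020) and are assumptions here:

```python
# Build the symmetric NPMI adjacency matrix, keeping only values > 0.3.
import numpy as np

def npmi_adjacency(cooc, counts, n_windows, threshold=0.3):
    # cooc[i, j]: window co-occurrence counts; counts[i]: token occurrences.
    v = len(counts)
    a = np.eye(v)                                   # self-connections
    for i in range(v):
        for j in range(i + 1, v):
            if cooc[i, j] == 0 or cooc[i, j] == n_windows:
                continue                            # avoid log(0) / div by 0
            p_ij = cooc[i, j] / n_windows
            p_i, p_j = counts[i] / n_windows, counts[j] / n_windows
            npmi = np.log(p_ij / (p_i * p_j)) / -np.log(p_ij)
            if npmi > threshold:                    # keep strong correlations
                a[i, j] = a[j, i] = npmi
    return a
```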

We empirically found 1e-5 to be a good learning rate, which is on par with the results of (Lu, Du, and Nie 2020). Lastly, we chose to train all models for 9 epochs with a batch size of 8 examples.

3.6 Results

Table 1 contains the results obtained by our models for the first two tasks of the DANKMEMES competition. The components frozen during the training process are varied across the three main experiments (i.e., combining VGCN-ItalianBERT with ResNet50, ResNet152, and VGG-16, respectively) to identify proper adjustments for the weights of the pretrained models. The best results among the four evaluated sets (i.e., validation and test for Task 1, and validation and test for Task 2) are obtained either by freezing only the VGCN-ItalianBERT component or by freezing both the textual and image components. The benefit of freezing the text branch of the architecture underlines that the pretrained weights of the ItalianBERT model already capture specific traits of Italian and prove to be a viable option, even when analyzing short texts such as memes. Conversely, the last convolutional block of the image component needs to be unfrozen, because training on potential meme images is a more specific task than analyzing Italian text.

Table 1: Macro-F1 scores on the validation and test datasets, for both Task 1 and Task 2. Submitted models are shown in italics; a check mark (✓) denotes a frozen component.

| Neural Architecture | Frozen Image | Frozen Text | Task 1 Dev | Task 1 Test | Task 2 Dev | Task 2 Test |
|---|---|---|---|---|---|---|
| ItalianBERT | - | - | 0.7618 | 0.7546 | 0.8083 | 0.7996 |
| ResNet50 | - | - | 0.8203 | 0.7899 | 0.5661 | 0.5598 |
| ResNet50 + ItalianBERT | - | ✓ | 0.8749 | 0.8499 | 0.8331 | 0.7949 |
| ResNet50 + VGCN-ItalianBERT | - | - | 0.8666 | 0.8348 | 0.8413 | 0.8150 |
| *ResNet50 + VGCN-ItalianBERT* | - | ✓ | 0.9041 | 0.8235 | 0.8666 | 0.8169 |
| ResNet50 + VGCN-ItalianBERT | ✓ | - | 0.8874 | 0.8375 | 0.8493 | 0.7584 |
| ResNet50 + VGCN-ItalianBERT | ✓ | ✓ | 0.8833 | 0.8499 | 0.8745 | 0.7992 |
| ResNet152 + VGCN-ItalianBERT | - | - | 0.8458 | 0.8424 | 0.8331 | 0.7998 |
| ResNet152 + VGCN-ItalianBERT | - | ✓ | 0.8791 | 0.8700 | 0.8666 | 0.7994 |
| ResNet152 + VGCN-ItalianBERT | ✓ | - | 0.8246 | 0.8474 | 0.8310 | 0.8093 |
| ResNet152 + VGCN-ItalianBERT | ✓ | ✓ | 0.8915 | 0.8273 | 0.8489 | 0.7490 |
| VGG-16 + VGCN-ItalianBERT | - | - | 0.8124 | 0.7923 | 0.6906 | 0.5478 |
| VGG-16 + VGCN-ItalianBERT | - | ✓ | 0.8083 | 0.7620 | 0.5566 | 0.5469 |
| VGG-16 + VGCN-ItalianBERT | ✓ | - | 0.7485 | 0.7447 | 0.6414 | 0.5263 |
| VGG-16 + VGCN-ItalianBERT | ✓ | ✓ | 0.7621 | 0.7248 | 0.6003 | 0.5388 |
| *Ensemble Architecture* | - | - | 0.8916 | 0.8437 | 0.7874 | 0.7692 |
| Competition Baselines | - | - | - | 0.5198 | - | 0.5621 |

The best results are obtained using variations of the ResNet50 + VGCN-ItalianBERT model, with a .9041 macro-F1 score on the custom validation dataset for Task 1, and .8745 and .8169 macro-F1 scores on the validation and test datasets for Task 2. However, the best result on the Task 1 test set is yielded by the ResNet152 + VGCN-ItalianBERT architecture, with a .8700 macro-F1 score.

ItalianBERT, ResNet50, and ResNet50 + ItalianBERT are used as baseline models to explore the improvements brought by adding the VGCN to the textual architecture while maintaining the same experimental setup. As expected, the model using only the textual channel (i.e., the ItalianBERT baseline) performs considerably worse than the joint architecture ResNet50 + ItalianBERT, thus arguing for the importance of images in disambiguating the textual input. The ResNet50 + VGCN-ItalianBERT model performs consistently better than its baseline counterpart (i.e., ResNet50 + ItalianBERT), obtaining improvements of 2.92% and 3.35% macro-F1 score on the validation sets for Task 1 and Task 2, respectively.

3.7 Error Analysis

Although the models performed arguably well on both tasks, the identified misclassifications represent a good starting point for further analysis and improvement. Figure 1 depicts a series of misclassified entries from both tasks.

Figure 1: Examples of misclassified samples for both tasks


The short texts encountered in memes often require prior knowledge of the sociopolitical context, making meme detection an exceedingly difficult task. In general, a few well-known and highly popular image templates are reused, with the text changed or partially adjusted to expressively convey an idea or a view on a certain subject. However, the templates used in the current competition are extensively customized and tailored specifically to the Italian political context. In addition, the subjectivity of the annotators also plays a decisive role, considering that the hateful speech tag for the second task is not well defined for all situations and can be interpreted differently.

4. Conclusion and Future Work

This paper introduces our multimodal architecture for the first two tasks of the DANKMEMES competition from EVALITA 2020. We experimented with several joint architectures combining a textual component (a Vocabulary Graph Convolutional Network alongside an Italian BERT model) with image-based components (ResNet50, ResNet152, and VGG-16). Considering meme meta-information, such as cyclic temporal characteristics and post engagement, further boosted our F1-scores when compared to the competition baselines.

In terms of future work, we intend to experiment with other visual architectures, including VGG-19 (Simonyan and Zisserman 2014) and EfficientNet (Tan and Le 2019), as well as with multilingual neural networks, such as mBERT (Pires, Schlinger, and Garrette 2019) and XLM-RoBERTa (Conneau et al. 2019), which would enable transfer learning across meme datasets in different languages.

Bibliography

Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, et al. 2016. “TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems.” arXiv preprint arXiv:1603.04467.

Valerio Basile, Danilo Croce, Maria Di Maro, and Lucia C. Passaro. 2020. “EVALITA 2020: Overview of the 7th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian.” In Proceedings of the Seventh Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2020), edited by Valerio Basile, Danilo Croce, Maria Di Maro, and Lucia C. Passaro. Online: CEUR.org.

Gerlof Bouma. 2009. “Normalized (Pointwise) Mutual Information in Collocation Extraction.” Proceedings of GSCL, 31–40.

Qiong Cao, Li Shen, Weidi Xie, Omkar M Parkhi, and Andrew Zisserman. 2018. “VGGFace2: A Dataset for Recognising Faces Across Pose and Age.” In 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), 67–74. IEEE.

Rich Caruana. 1997. “Multitask Learning.” Machine Learning 28 (1): 41–75.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. “Unsupervised Cross-Lingual Representation Learning at Scale.” arXiv preprint arXiv:1911.02116.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. “ImageNet: A Large-Scale Hierarchical Image Database.” In 2009 IEEE Conference on Computer Vision and Pattern Recognition, 248–55. IEEE.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. “BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding.” arXiv preprint arXiv:1810.04805.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. “Deep Residual Learning for Image Recognition.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770–78.

Dhruv Khattar, Jaipal Singh Goud, Manish Gupta, and Vasudeva Varma. 2019. “MVAE: Multimodal Variational Autoencoder for Fake News Detection.” In The World Wide Web Conference, 2915–21.

Douwe Kiela, Hamed Firooz, Aravind Mohan, Vedanuj Goswami, Amanpreet Singh, Pratik Ringshia, and Davide Testuggine. 2020. “The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes.” arXiv preprint arXiv:2005.04790.

Yoon Kim. 2014. “Convolutional Neural Networks for Sentence Classification.” arXiv preprint arXiv:1408.5882.

Ilya Loshchilov and Frank Hutter. 2017. “Decoupled Weight Decay Regularization.” arXiv preprint arXiv:1711.05101.

Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. “ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks.” In Advances in Neural Information Processing Systems, 13–23.

Zhibin Lu, Pan Du, and Jian-Yun Nie. 2020. “VGCN-BERT: Augmenting BERT with Graph Embedding for Text Classification.” In European Conference on Information Retrieval, 369–82. Springer.

Martina Miliani, Giulia Giorgi, Ilir Rama, Guido Anselmi, and Gianluca E. Lebani. 2020. “DANKMEMES @ EVALITA 2020: The Memeing of Life: Memes, Multimodality and Politics.” In Proceedings of the Seventh Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2020), edited by Valerio Basile, Danilo Croce, Maria Di Maro, and Lucia C. Passaro. Online: CEUR.org.

Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. “How Multilingual Is Multilingual BERT?” arXiv preprint arXiv:1906.01502.

Benet Oriol Sabat, Cristian Canton Ferrer, and Xavier Giro-i-Nieto. 2019. “Hate Speech in Pixels: Detection of Offensive Memes Towards Automatic Moderation.” arXiv preprint arXiv:1910.02334.

Priyanshi Shah and Ziad Kobti. 2020. “Multimodal Fake News Detection Using a Cultural Algorithm with Situational and Normative Knowledge.” In 2020 IEEE Congress on Evolutionary Computation (CEC), 1–7. IEEE.

Karen Simonyan and Andrew Zisserman. 2014. “Very Deep Convolutional Networks for Large-Scale Image Recognition.” arXiv preprint arXiv:1409.1556.

Shivangi Singhal, Rajiv Ratn Shah, Tanmoy Chakraborty, Ponnurangam Kumaraguru, and Shin’ichi Satoh. 2019. “SpotFake: A Multi-Modal Framework for Fake News Detection.” In 2019 IEEE Fifth International Conference on Multimedia Big Data (BigMM), 39–47. IEEE.

Mingxing Tan and Quoc V Le. 2019. “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks.” arXiv preprint arXiv:1905.11946.

Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. “Graph Convolutional Networks for Text Classification.” In Proceedings of the AAAI Conference on Artificial Intelligence, 33:7370–77.

Authors

University Politehnica of Bucharest, Faculty of Automatic Control and Computers – george.vlad0108@stud.acs.upb.ro

University Politehnica of Bucharest, Faculty of Automatic Control and Computers – george.zaharia0806@stud.acs.upb.ro

University Politehnica of Bucharest, Faculty of Automatic Control and Computers – dumitru.cercel@upb.ro

University Politehnica of Bucharest, Faculty of Automatic Control and Computers – mihai.dascalu@upb.ro

CC-BY-NC-ND-4.0

Only the text may be used under the CC BY-NC-ND 4.0 license. Unless otherwise stated, all other elements (illustrations, imported additional files) are "All rights reserved".
