
EVALITA Evaluation of NLP and Speech Tools for Italian - December 17th, 2020

Edited by Valerio Basile, Danilo Croce, Maria Di Maro, et al.

SardiStance: Stance Detection

SSNCSE-NLP @ EVALITA2020: Textual and Contextual Stance Detection from Tweets Using Machine Learning Approach

B. Bharathi, J. Bhuvana and Nitin Nikamanth Appiah Balaji

Abstract

Opinions expressed via online social media platforms can be used to analyse the stand taken by the public on any event or topic. Recognizing that stand is stance detection. In this paper, an automatic stance detection approach is proposed that uses both deep learning based feature extraction and hand-crafted features. BERT is used as a feature extraction scheme, along with stylistic, structural, contextual and community based features extracted from tweets, to build a machine learning model. This work uses a multilayer perceptron to classify tweets as favour, against or neutral. The dataset, provided by the SardiStance task, contains tweets in Italian about the Sardines movement. Several variants of models were built with different feature combinations and compared against the baseline model provided by the task organisers. The models using BERT features alone and BERT combined with other contextual features proved to be the best performing models, outperforming the baseline.

Full Text

We would like to thank the SSN management for supporting this work by sponsoring the GPU systems used for the research.

1. Introduction

In today’s era everything is in digital form, and people spend more time online to stay connected. We learn about events across the world via online social media platforms such as Facebook, Twitter and Instagram. Sharing one’s opinion, whether in favour of, against or neutral towards a particular topic or event, has become the norm of today’s digital world. Expressing one’s stand on any matter is referred to as a stance. Recognizing that stance, known as stance detection, is an interesting part of Natural Language Processing that has gained a lot of traction recently. Automatic stance detection is in demand in a variety of applications such as rumour detection, gauging the political standpoint of the public, predicting election results, advertising, opinion surveys and so on.

This paper proposes a method for textual and contextual stance detection for the task hosted by SardiStance@EVALITA2020. An overview of the SardiStance shared task is given in (Cignarella et al. 2020), and the proceedings of EVALITA can be found in (Basile et al. 2020). BERT is used to extract features for classifying the stance of the tweets. Two models have been constructed: the first classifies the stance of a tweet into three categories, favour, against and neutral; the second classifies tweets into the same classes while additionally considering contextual information, namely the number of retweets, the number of followers, replies and quote relations.

2. Survey of Existing Stance Detection Approaches

As per the authors in Küçük and Can (2020), stance detection is related to many NLP problems, namely emotion recognition, irony detection, sentiment analysis, rumour classification, etc. In particular, stance detection is closely related to sentiment analysis of text, which is concerned with feelings such as tenderness, sadness or nostalgia, whereas stance detection needs a specific target about which the text expresses an opinion. Stance detection is similar to perspective identification as well.

Stance detection can be done using learning based approaches with training and testing stages, along with the necessary pre-processing. These methods are categorized into machine learning based, deep learning based and ensemble based approaches. Conventional machine learning approaches require features to be extracted from the text after pre-processing operations like normalization and tokenization. Deep learning approaches use pre-trained models for text classification, with word embeddings like word2vec, GloVe, ELMo, CoVe, etc., as features (Sun et al. 2019). Bidirectional Encoder Representations from Transformers (BERT), a bidirectional transformer, is one of the recent pre-trained models designed by Google (Devlin et al. 2018).

In (Lai et al. 2020), stance detection was performed in multiple languages using stylistic, structural, affective and contextual features fed to Linear Regression and SVM classifiers. The authors reported that machine learning classifiers are more efficient at classifying stance on a multilingual dataset than their deep learning counterparts.

In Aldayel and Magdy (2019), stance detection was performed using features such as on-topic content, network interactions, the user’s preferences and online network connections (connection networks). The extracted features are given to a standard machine learning classifier, a Support Vector Machine (SVM) with a linear kernel, to classify the stance of tweets towards the targets Atheism, Climate change is a real concern, Hillary Clinton, Feminist movement and Legalization of abortion (LA). The authors observed that the textual features combined with the network features helped in detecting the stance more accurately.

A fine-tuned BERT model was used for same side stance classification in Ollinger et al. (2020). The authors used both the BERT-base and BERT-large models for binary classification and reported that the large model outperformed the base model. They also observed that longer input sequences are predicted better than shorter ones, with a precision of 0.85.

Bi-directional Recurrent Neural Networks (RNNs) (Borges, Martins, and Calado 2019), along with other features, were used for fake news identification. A sentence encoder for the headlines and a document encoder for the content of the news were used, along with common features extracted by combining the headline and the body of the news. The four stances detected are Agree, Disagree, Unrelated and Discusses. The authors reported that pre-training the sentence encoder enhanced the model performance.

After pre-processing steps like stemming, stop word removal, normalization and hashtag pre-processing, the data are fed to five different models: a 1-D CNN based sentence classifier, a Target-Specific Attention Neural Network (TAN), a Recurrent Neural Network with Long Short-Term Memory (LSTM), an SVM-based SEN model, and a two-step SVM for reproducibility. In addition, the authors (Ghosh et al. 2019) used a pre-trained BERT (Large, Uncased) model for stance detection. Experiments were conducted using the SemEval microblog dataset and a text dataset of health-related articles, and a voting scheme was applied for the final predictions. The authors observed that the pre-processing enhanced the performance and also reported that contextual features would help to improve stance detection further.

To detect the stance of tweets as one of favour, against or none, a new CNN named CCNN-ASA, a Condensed CNN by Attention over Self-Attention, was designed in (Zhou et al. 2019). A self-attention based convolution module to improve the representation of each word and an attention-based condensation module for text condensation are embedded in it. The authors experimented on the SemEval-2016 challenge for supervised stance detection in Twitter with the three usual stances. The works reported in (Mayfield and Black 2019), (Sen et al. 2018), (Wei and Mao 2019) and (Popat et al. 2019) are a few of the other stance detection articles.

3. Proposed System

3.1 Dataset Description

The dataset hosted by SardiStance consists of tweets in Italian about the Sardines movement. There are 3,242 instances in total, of which the training set has 2,132 and the test set has 1,110. The three stances towards the Sardines movement, Against, Favour and Neutral, have 1,028, 589 and 515 training instances respectively.

3.2 Model Construction

The models are built in Python, and a GPU system with an NVIDIA GTX 1080 was used to run the experiments. The features are extracted from the Italian tweets about the Sardines movement to construct the models, and the models are evaluated for performance using the tweets reserved for testing.

Feature engineering in our work is done both via explicit hand-crafted features and via a deep learning model. We use the pre-trained deep learning model BERT to collect features: it provides a sequence of vectors for inputs of up to 512 tokens, which represents the extracted features. Along with that, structural and stylistic features are also extracted from the training instances of the Italian tweets.
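
The paper does not specify which pre-trained BERT checkpoint or pooling strategy is used, so the following is only a minimal sketch of one plausible way to obtain a fixed-length feature vector per tweet. It assumes the Hugging Face transformers library and the dbmdz/bert-base-italian-uncased checkpoint, and takes the final-layer [CLS] embedding as the tweet representation.

```python
# Sketch of BERT-based feature extraction (assumptions: Hugging Face "transformers",
# the dbmdz Italian BERT checkpoint, and [CLS] pooling of the last hidden layer).
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "dbmdz/bert-base-italian-uncased"  # assumed checkpoint, not stated in the paper
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
bert = AutoModel.from_pretrained(MODEL_NAME)
bert.eval()

def bert_features(tweets, batch_size=32):
    """Return one fixed-length vector per tweet (the [CLS] embedding of the last layer)."""
    batches = []
    with torch.no_grad():
        for i in range(0, len(tweets), batch_size):
            enc = tokenizer(tweets[i:i + batch_size], padding=True, truncation=True,
                            max_length=512, return_tensors="pt")
            out = bert(**enc)
            batches.append(out.last_hidden_state[:, 0, :])  # [CLS] token per tweet
    return torch.cat(batches).numpy()
```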

The stylistic features considered in our proposed work are as follows: unigram is a binary representation of unigrams, and char-grams is a binary representation of character n-grams of length 2 to 5. The structural features extracted from the Italian tweets are: num-hashtag, which uses the count of the most frequently occurring hashtags in the tweet; punctuation marks, which considers six punctuation marks such as !?.,; and their frequencies as numerical values; and length, which extracts the number of characters, the number of words and the average word length of each tweet.
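
A sketch of how these stylistic and structural features could be computed with scikit-learn is given below. The exact tokenisation, the sixth punctuation mark and the hashtag counting scheme are assumptions rather than details taken from the paper.

```python
# Sketch of the hand-crafted stylistic and structural features (assumed implementation).
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import CountVectorizer

unigram_vec = CountVectorizer(binary=True)                     # binary unigram presence
chargram_vec = CountVectorizer(binary=True, analyzer="char",
                               ngram_range=(2, 5))             # binary 2-5 character n-grams
PUNCT = list("!?.,;:")                                         # six punctuation marks (":" assumed)

def structural_features(tweets):
    rows = []
    for t in tweets:
        words = t.split()
        rows.append(
            [t.count("#")]                                     # num-hashtag (simple hashtag count assumed)
            + [t.count(p) for p in PUNCT]                      # punctuation mark frequencies
            + [len(t), len(words),                             # length: characters, words,
               float(np.mean([len(w) for w in words])) if words else 0.0]  # average word length
        )
    return csr_matrix(np.array(rows, dtype=float))

def stylistic_structural_features(train_tweets):
    """Fit the vectorisers on the training tweets and return the stacked feature matrix."""
    X_uni = unigram_vec.fit_transform(train_tweets)
    X_char = chargram_vec.fit_transform(train_tweets)
    return hstack([X_uni, X_char, structural_features(train_tweets)])
```

For the test tweets, the already fitted vectorisers would be reused with transform rather than fit_transform.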

Community based features are also used as discriminating features in our work. They capture the relationships among tweets and comments, such as the network quote community, network reply community, network retweet community and network friend community. These features are vectors of numerical attributes that represent the number of retweets, retweets with comments, the number of friends, the number of followers, the count of lists, the ‘created at’ information and the number of emojis in the Twitter bio.
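
The small sketch below shows how such metadata could be assembled into numeric feature vectors. The field names are hypothetical stand-ins for the columns distributed with the task data, not names taken from the paper.

```python
# Sketch of community / metadata features as numeric vectors (field names are hypothetical).
import numpy as np

def community_features(records):
    """Each record is a dict of user and tweet metadata for one training instance."""
    rows = []
    for r in records:
        rows.append([
            r.get("retweet_count", 0),      # number of retweets
            r.get("quote_count", 0),        # retweets with comment (quotes)
            r.get("friends_count", 0),      # number of friends
            r.get("followers_count", 0),    # number of followers
            r.get("listed_count", 0),       # count of lists
            r.get("account_age_days", 0),   # derived from the "created at" information
            r.get("bio_emoji_count", 0),    # number of emojis in the Twitter bio
        ])
    return np.array(rows, dtype=float)
```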

For textual stance detection, the features BERT, unigram, unigram-hashtag, char-grams, num-hashtag, punctuation marks and length are extracted from the training instances. These features are given to a Multilayer Perceptron (MLP) with 128 hidden layers of 512 nodes each. Training uses K-fold cross-validation with K = 5 folds to fine-tune the model parameters.

For contextual stance detection, along with the features mentioned for textual stance detection, additional features of the tweet, namely network quote community, network reply community, network retweet community, network friend community, user info bio, tweet info retweet and tweet info create at, were also extracted from the training instances, and all of them are fed to an MLP classifier with 512 nodes in each of 128 hidden layers. The second model also undergoes 5-fold cross-validation to avoid over-fitting and selection bias.

14 different models with individual textual and contextual features were built for stance detection. In addition, to explore the combined feature space, the above mentioned features were combined in pairs and triples to build further models. In total, 89 models were built to investigate the performance when each feature is combined with another and used to train the MLP classifier, and 147 variants of classifiers were constructed by combining three features together.
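
The enumeration itself is not shown in the paper; the sketch below illustrates one straightforward way to iterate over pairs and triples of pre-computed feature blocks, where train_and_validate is a placeholder for the cross-validation routine sketched after the next paragraph.

```python
# Sketch of the pairwise and three-way feature combination search (assumed implementation).
from itertools import combinations
from scipy.sparse import hstack

def explore_combinations(feature_blocks, y, sizes=(2, 3)):
    """feature_blocks maps a feature name to its pre-computed matrix for the training set."""
    scores = {}
    for k in sizes:
        for names in combinations(sorted(feature_blocks), k):
            X = hstack([feature_blocks[n] for n in names])   # concatenate the chosen blocks
            scores[names] = train_and_validate(X, y)         # mean cross-validated F1
    return scores
```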

Both classifiers are trained for 1,000 iterations with ReLU as the activation function in their hidden layers and Adam, a variant of stochastic gradient descent, as the optimizer.
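
A minimal sketch of this classifier configuration with scikit-learn is given below; hyper-parameters other than those stated above (128 hidden layers of 512 nodes, ReLU, Adam, 1,000 iterations, 5 folds) are assumptions.

```python
# Sketch of the MLP classifier and 5-fold cross-validation (assumed scikit-learn implementation).
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

def train_and_validate(X, y):
    clf = MLPClassifier(hidden_layer_sizes=(512,) * 128,  # 128 hidden layers of 512 nodes each
                        activation="relu",
                        solver="adam",
                        max_iter=1000,
                        random_state=42)                   # assumed seed for reproducibility
    # 5-fold cross-validation scored on the macro-averaged F1 measure
    return cross_val_score(clf, X, y, cv=5, scoring="f1_macro").mean()
```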

4. Results and Discussion

Models are built with 5-fold cross-validation and with different combinations of the deep learning based BERT features and the hand-crafted structural and contextual features, to investigate the performance of the stance detection system. A few of the best cross-validation results are shown in Table 1. The validation results show that BERT works well whether it is used alone for feature extraction or combined with other features. In particular, analysing the validation results, we found that the community based features contribute the most towards stance detection, either independently or when combined with other textual features.

Table 1: Results after 5-fold cross validation

Models with listed features                                  F1 score
BERT                                                         0.5763
Unigram                                                      0.5509
chargrams                                                    0.5734
network quote community                                      0.5419
bert + unigram                                               0.5897
bert + unigramhashtag                                        0.5583
bert + chargrams                                             0.5721
bert + numhashtag                                            0.5773
bert + punctuation marks                                     0.5501
bert + length                                                0.5226
bert + network quote community                               0.6212
bert + network reply community                               0.5993
bert + network retweet community                             0.6086
bert + network friend community                              0.6482
bert + user info bio                                         0.5748
bert + tweet info retweet                                    0.6086
bert + tweet info create at                                  0.5431
unigram + chargrams                                          0.5834
unigram + network quote community                            0.5965
bert + unigram + length                                      0.5813
bert + unigram + network reply community                     0.6048
bert + chargrams + network quote community                   0.5853
bert + chargrams + user info bio                             0.5834
bert + network quote community + network friend community    0.6436

Table 2: Detection results of the SardiStance tasks on test data (*: Runs 1 and 2 of the proposed system)

Task A - Textual Stance Detection

Run       f-avg    prec_a   prec_f   prec_n   recall_a  recall_f  recall_n  f_a      f_f      f_n
Baseline  0.5784   0.7549   0.3975   0.2589   0.6806    0.4949    0.2965    0.7158   0.4409   0.2764
1*        0.6067   0.7506   0.4245   0.2679   0.7951    0.4592    0.1744    0.7723   0.4412   0.2113
2*        0.5749   0.7798   0.3664   0.3196   0.6873    0.4898    0.3605    0.7307   0.4192   0.3388

Task B - Contextual Stance Detection

Run       f-avg    prec_a   prec_f   prec_n   recall_a  recall_f  recall_n  f_a      f_f      f_n
Baseline  0.6284   0.7845   0.4506   0.3054   0.7507    0.5357    0.2965    0.7672   0.4895   0.3009
1*        0.6582   0.8321   0.4715   0.3508   0.7547    0.5918    0.3895    0.7915   0.5249   0.3691
2*        0.6556   0.8419   0.4574   0.3660   0.7466    0.6020    0.4128    0.7914   0.5198   0.3880

The models constructed for textual and contextual stance detection are tested on the instances of the test set. Two runs were submitted under the name SSNCSE-NLP for each of the two tasks, textual and contextual stance detection. The performance measures precision (P), recall (R) and F-score (F) are computed for the three stances: tweets in favour of the Sardines movement, against the movement and neutral.

A baseline model was built by the SardiStance task organizers using the conventional machine learning algorithm SVM with a unigram feature, and it is used to compare the performance of our models.

The best results obtained are reported in Table 2, with the macro average of the F1 measure along with the F1 scores for the against, favour and neutral classes. The baseline used by the task organisers, an SVM with a linear kernel, obtained an average F1 of 0.5784. Run 1, which was built on the model using features extracted by pre-trained BERT, achieved an average F1 of 0.6067, around 3% above the baseline, as shown in Table 2. Run 2, which used char n-grams as the extracted feature, obtained a performance close to the baseline.

Our model for Run 1 outperformed the baseline model in terms of precision for the favour and neutral classes, and also showed an 11% increase in recall for against tweets over the baseline model. This can be interpreted as most of the relevant test instances being identified as tweets against the Sardines movement.

For the second task, on contextual stance detection, our models for Runs 1 and 2 performed better than the corresponding baseline model, whose average F1 is 0.6284. Run 1 for this task used the BERT, numhashtag and network_friend_community features, whereas Run 2 was built on the BERT, network_quote_community and network_friend_community features.

It can be inferred that the additional information about the Sardines tweets, such as the community based contextual features, has contributed towards the classification of the tweets. Metadata about the tweets served to discriminate the stance better than the textual information of the tweets themselves.

5. Conclusion

In this paper, we presented suitable models for stance detection in Italian tweets about the Sardines movement. The three stances considered in this work are in favour of the movement, against it and neutral. A multilayer perceptron is the classifier used to classify the stance of the tweets. The pre-trained deep learning model BERT is used to extract features from the tweets, along with several stylistic, structural, contextual and community based features: unigram, char-grams, num-hashtag, length, network quote community, network reply community, network retweet community, network friend community, user info bio, tweet info retweet and tweet info create at are the attributes extracted to detect the stance. The models are trained using the dataset provided by the SardiStance task for textual and contextual stance detection. Three of the models outperformed the baseline model, which used an SVM for stance detection. A maximum increase of 5% over the baseline model is found in the precision of in-favour tweets. In order to explore the feature space, the structural, stylistic and contextual features were combined in different permutations and validated for their performance. The best performing models are found to use BERT and char n-grams for textual stance detection, and the combinations of BERT with numhashtag and network friend community, and of BERT with network quote community and network friend community, for contextual stance detection.

We have observed that, along with the textual features, the most contributing features are the community based features of the tweets; this metadata serves to discriminate the stance better. More analysis of these features and their combinations can help improve the performance of an automatic stance detection system. Since a tweet reveals the stance a person takes on an event or topic, it can also lead to a violation of that person’s privacy, which also needs to be looked at.

Bibliography

Abeer Aldayel, and Walid Magdy. 2019. “Your Stance Is Exposed! Analysing Possible Factors for Stance Detection on Social Media.” Proceedings of the ACM on Human-Computer Interaction 3 (CSCW): 1–20.

Valerio Basile, Danilo Croce, Maria Di Maro, and Lucia C. Passaro. 2020. “EVALITA 2020: Overview of the 7th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian.” In Proceedings of the Seventh Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2020), edited by Valerio Basile, Danilo Croce, Maria Di Maro, and Lucia C. Passaro. Online: CEUR.org.

Luís Borges, Bruno Martins, and Pável Calado. 2019. “Combining Similarity Features and Deep Representation Learning for Stance Detection in the Context of Checking Fake News.” Journal of Data and Information Quality (JDIQ) 11 (3): 1–26.

Alessandra Teresa Cignarella, Mirko Lai, Cristina Bosco, Viviana Patti, and Paolo Rosso. 2020. “SardiStance@EVALITA2020: Overview of the Task on Stance Detection in Italian Tweets.” In Proceedings of the 7th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian (EVALITA 2020), edited by Valerio Basile, Danilo Croce, Maria Di Maro, and Lucia C. Passaro. CEUR-WS.org.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. “BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding.” arXiv Preprint arXiv:1810.04805.

Shalmoli Ghosh, Prajwal Singhania, Siddharth Singh, Koustav Rudra, and Saptarshi Ghosh. 2019. “Stance Detection in Web and Social Media: A Comparative Study.” In International Conference of the Cross-Language Evaluation Forum for European Languages, 75–87. Springer.

Mirko Lai, Alessandra Teresa Cignarella, Delia Irazú Hernández Farías, Cristina Bosco, Viviana Patti, and Paolo Rosso. 2020. “Multilingual Stance Detection in Social Media Political Debates.” Computer Speech & Language, 101075.

Elijah Mayfield, and Alan W Black. 2019. “Stance Classification, Outcome Prediction, and Impact Assessment: NLP Tasks for Studying Group Decision-Making.” In Proceedings of the Third Workshop on Natural Language Processing and Computational Social Science, 65–77.

Stefan Ollinger, Lorik Dumani, Premtim Sahitaj, Ralph Bergmann, and Ralf Schenkel. 2020. “Same Side Stance Classification Task: Facilitating Argument Stance Classification by Fine-Tuning a BERT Model.” arXiv Preprint arXiv:2004.11163.

Kashyap Popat, Subhabrata Mukherjee, Andrew Yates, and Gerhard Weikum. 2019. “STANCY: Stance Classification Based on Consistency Cues.” arXiv Preprint arXiv:1910.06048.

Anirban Sen, Manjira Sinha, Sandya Mannarswamy, and Shourya Roy. 2018. “Stance Classification of Multi-Perspective Consumer Health Information.” In Proceedings of the ACM India Joint International Conference on Data Science and Management of Data, 273–81.

Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. “How to Fine-Tune BERT for Text Classification?” In China National Conference on Chinese Computational Linguistics, 194–206. Springer.

Penghui Wei, and Wenji Mao. 2019. “Modeling Transferable Topics for Cross-Target Stance Detection.” In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, 1173–6.

Shengping Zhou, Junjie Lin, Lianzhi Tan, and Xin Liu. 2019. “Condensed Convolution Neural Network by Attention over Self-Attention for Stance Detection in Twitter.” In 2019 International Joint Conference on Neural Networks (IJCNN), 1–8. IEEE.

Authors

Department of CSE, Sri Sivasubramaniya Nadar College of Engineering, Chennai, India – bharathib@ssn.edu.in

Department of CSE, Sri Sivasubramaniya Nadar College of Engineering, Chennai, India – bhuvanaj@ssn.edu.in

Department of CSE, Sri Sivasubramaniya Nadar College of Engineering, Chennai, India – nitinnikamanth17099@cse.ssn.edu.in

CC-BY-NC-ND-4.0

The text alone may be used under the CC BY-NC-ND 4.0 licence. Other elements (illustrations, imported supplementary files) are “All rights reserved” unless otherwise stated.
