
Proceedings of the Second Italian Conference on Computational Linguistics CLiC-it 2015

Cristina Bosco, Sara Tonelli, Fabio Massimo Zanzotto

Deep Learning for Social Sensing from Tweets

Giuseppe Attardi, Laura Gorrieri, Alessio Miaschi and Ruggero Petrolito

Abstract

Distributional Semantic Models (DSM) represent words as vectors of weights in a high-dimensional feature space and have proved very effective at representing semantic or syntactic similarity between words. For certain tasks, however, it is important to represent contrasting aspects such as polarity, opposite meanings, or words used with an idiomatic meaning. We present a method for computing discriminative word embeddings that can be used in sentiment classification or in any other task that requires discriminating between contrasting semantic aspects. We present an experiment on the identification of tweets about natural disasters using these embeddings.


1. Introduction

Distributional Semantic Models (DSM), which represent words as vectors of weights over a high-dimensional feature space (Hinton et al., 1986), have proved very effective in representing semantic and syntactic aspects of the lexicon. Incorporating such representations has improved performance on many natural language tasks. They also reduce the burden of feature selection, since these models can be learned from plain text through unsupervised techniques.

Deep learning algorithms for NLP tasks exploit distributional representations of words. In tagging applications such as POS tagging, NER tagging and Semantic Role Labeling (SRL), this has proved quite effective in reaching state-of-the-art accuracy and reducing reliance on manually engineered feature selection (Collobert & Weston, 2008).

Word embeddings have also been exploited in constituency parsing (Collobert, 2011) and dependency parsing (Chen & Manning, 2014). Blanco et al. (2015) exploit word embeddings for identifying entities in web search queries.

Traditional embeddings are created from large collections of unannotated documents through unsupervised learning, for example by building a neural language model (Collobert et al., 2011; Mikolov et al., 2013) or through Hellinger PCA (Lebret and Collobert, 2013). These embeddings are suitable for representing syntactic similarity, which can be measured through the Euclidean distance in the embeddings space. They are not appropriate, though, for representing semantic dissimilarity, since for example antonyms end up at close distance in the embeddings space.
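To make this issue concrete, here is a minimal sketch of how one might inspect antonym proximity in a generic embedding space with Gensim; the vectors file name and the word pair are illustrative assumptions, not artifacts from the paper.

```python
# Inspect how close an antonym pair sits in a generic embedding space.
from gensim.models import KeyedVectors

# Pretrained embeddings in word2vec text format (hypothetical file name).
vectors = KeyedVectors.load_word2vec_format("embeddings.txt", binary=False)

# Antonyms trained only on co-occurrence tend to receive high cosine
# similarity, because they appear in very similar contexts.
print(vectors.similarity("good", "bad"))
print(vectors.most_similar("good", topn=5))
```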

In this paper we explore a technique for building discriminative word embeddings, which incorporate semantic aspects that are not directly obtainable from textual collocations. In particular, such embeddings can be useful in sentiment classification in order to learn vector representations where words of opposite polarity are distant from each other.

2. Building Word Embeddings

Word embeddings provide a low-dimensional dense vector space representation for words, where values in each dimension may represent syntactic or semantic properties.

For creating the embeddings, we used DeepNL, a library for building NLP applications based on a deep learning architecture. DeepNL provides two methods for building embeddings: one based on a neural language model, as proposed by Collobert et al. (2011), and one based on a spectral method, as proposed by Lebret and Collobert (2013).

The neural language model method can be hard to train and the process is often quite time consuming, since several iterations over the whole training set are required. Some researchers provide precomputed embeddings for English.

Mikolov et al. (2013) developed an alternative solution for computing word embeddings that significantly reduces the computational cost and can also exploit concurrency through the Asynchronous Stochastic Gradient Descent algorithm. An optimistic approach to matrix updates is also exploited to avoid synchronization costs.

The authors published single-machine multithreaded C++ code for computing the word vectors. A reimplementation of the algorithm in Python, with core computations in C, is included in the Gensim library (Řehůřek and Sojka, 2010).
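For illustration, a minimal training sketch using the Gensim reimplementation; the corpus file name and the hyperparameters are our assumptions (a recent Gensim 4.x API is assumed), not the paper's setup.

```python
# Train skip-gram embeddings with Gensim's word2vec reimplementation.
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

sentences = LineSentence("corpus.txt")   # one tokenized sentence per line
model = Word2Vec(
    sentences,
    vector_size=64,   # embedding dimensionality
    window=5,         # context window size
    min_count=5,      # discard rare words
    sg=1,             # skip-gram (sg=0 would be CBOW)
    workers=4,        # threads: asynchronous SGD runs in parallel
)
model.wv.save_word2vec_format("embeddings.txt")
```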

Lebret and Collobert (2013) have shown that embeddings can be efficiently computed from word co-occurrence counts, applying Principal Component Analysis (PCA) to reduce dimensionality while optimizing the Hellinger similarity distance.
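As a rough illustration of the idea (not the authors' implementation), a NumPy sketch on a synthetic co-occurrence matrix: take square roots of the co-occurrence probability rows, then reduce dimensionality with PCA via SVD.

```python
# Hellinger PCA sketch: PCA on the square roots of co-occurrence probabilities.
import numpy as np

def hellinger_pca(cooc, dim=64):
    # cooc: |V| x |C| matrix of word/context co-occurrence counts
    probs = cooc / cooc.sum(axis=1, keepdims=True)   # P(c | w) per row
    roots = np.sqrt(probs)                           # Hellinger mapping
    roots -= roots.mean(axis=0, keepdims=True)       # center for PCA
    U, S, _ = np.linalg.svd(roots, full_matrices=False)
    return U[:, :dim] * S[:dim]                      # word embeddings

cooc = np.random.randint(0, 10, size=(1000, 2000)).astype(float) + 1.0
emb = hellinger_pca(cooc, dim=64)
print(emb.shape)   # (1000, 64)
```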

Levy and Goldberg (2014) have similarly shown that the skip-gram model by Mikolov et al. (2013) can be interpreted as implicitly factorizing a word-context matrix whose values are the pointwise mutual information (PMI) of the respective word and context pairs, shifted by a global constant.
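A companion NumPy sketch of this view, again on synthetic counts: build the PMI matrix shifted by log k, clip it at zero, and factorize it with a truncated SVD. The symmetric square-root weighting of the singular values is one common choice, not something prescribed by the paper.

```python
# Shifted positive PMI matrix factorization sketch.
import numpy as np

def shifted_ppmi_svd(cooc, dim=64, k=5):
    total = cooc.sum()
    pw = cooc.sum(axis=1, keepdims=True) / total   # P(w)
    pc = cooc.sum(axis=0, keepdims=True) / total   # P(c)
    pwc = cooc / total                             # P(w, c)
    pmi = np.log(pwc / (pw * pc))
    shifted = np.maximum(pmi - np.log(k), 0.0)     # shifted positive PMI
    U, S, _ = np.linalg.svd(shifted, full_matrices=False)
    return U[:, :dim] * np.sqrt(S[:dim])           # symmetric weighting

cooc = np.random.randint(1, 10, size=(1000, 2000)).astype(float)
emb = shifted_ppmi_svd(cooc)
print(emb.shape)   # (1000, 64)
```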

2.1 Discriminative Word Embeddings

For certain tasks, such as sentiment analysis, semantic similarity is not appropriate, since antonyms end up at close distance in the embeddings space. One needs to learn a vector representation where words of opposite polarity are distant.

Tang et al. (2014) propose an approach for learning sentiment-specific word embeddings by incorporating supervised knowledge of polarity in the loss function of the learning algorithm. The original hinge loss function in the algorithm by Collobert et al. (2011) is:

L_{CW}(x, x^c) = \max(0, 1 - f_\theta(x) + f_\theta(x^c))

where x is an ngram, x^c is the same ngram corrupted by replacing the target word with a randomly chosen one, and f_θ(·) is the feature function computed by the neural network with parameters θ. The sentiment-specific network outputs a vector of two dimensions: the first models the generic syntactic/semantic aspects of words and the second models polarity.

A second loss function is introduced as the objective for minimization:

L_{SS}(x, x^c) = \max(0, 1 - \delta_s(x) f_\theta(x)_1 + \delta_s(x) f_\theta(x^c)_1)

where the subscript in f_θ(x)_1 refers to the second element of the output vector, and δ_s(x) is an indicator function reflecting the sentiment polarity of the sentence: its value is 1 if the polarity of x is positive and −1 if it is negative.

The overall hinge loss is a linear combination of the two:

L(x, x^c) = \alpha L_{CW}(x, x^c) + (1 - \alpha) L_{SS}(x, x^c)

Generalizing the approach to discriminative word embeddings entails replacing the loss function L_SS with a one-vs-all hinge loss function:

h(x, t) = \max(0, 1 + \max_{i \neq t}(f(x)_i - f(x)_t))

where t is the index of the correct class and f(x)_i is the score the network assigns to class i.
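A minimal NumPy sketch of this one-vs-all hinge loss; the variable names and the example scores are ours.

```python
# One-vs-all hinge loss: penalize when any wrong class comes within a margin
# of 1 of the correct class score.
import numpy as np

def one_vs_all_hinge(f_x, t):
    wrong = np.delete(f_x, t)                  # scores of all classes except t
    return max(0.0, 1.0 + wrong.max() - f_x[t])

f_x = np.array([0.2, 1.3, -0.4])    # scores for three classes
print(one_vs_all_hinge(f_x, t=1))   # 0.0: correct class wins by a wide margin
print(one_vs_all_hinge(f_x, t=0))   # 2.1: correct class scored below class 1
```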

The DeepNL library provides a training algorithm for discriminative word embeddings that performs gradient descent using an adaptive learning rate according to the AdaGrad method. The algorithm requires a training set consisting of documents annotated with their discriminative value, for example a corpus of tweets with their sentiment polarity, or in general documents with multiple class tags. The algorithm builds embeddings for both unigrams and ngrams at the same time, by performing variations on a training sentence that replace not just a single word but a whole sequence of words with either another word or another ngram.
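For reference, a minimal sketch of an AdaGrad-style update of the kind described; the class name, learning rate and epsilon constant are illustrative choices, not the DeepNL implementation.

```python
# AdaGrad: per-parameter learning rates scaled by accumulated squared gradients.
import numpy as np

class AdaGrad:
    def __init__(self, shape, lr=0.1, eps=1e-8):
        self.lr, self.eps = lr, eps
        self.sq_grad = np.zeros(shape)     # running sum of squared gradients

    def update(self, params, grad):
        self.sq_grad += grad ** 2
        params -= self.lr * grad / (np.sqrt(self.sq_grad) + self.eps)
        return params

# One illustrative step on a small embedding matrix.
emb = np.random.uniform(-0.1, 0.1, size=(1_000, 64))
grad = np.random.randn(*emb.shape) * 0.01
emb = AdaGrad(emb.shape).update(emb, grad)
```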

3. Deep Learning Architecture

The Deep Learning architecture used for training discriminative word embeddings consists of the following layers (a minimal forward-pass sketch follows the list):

  1. Lookup layer: extracts the embedding vector associated with each token

  2. Linear layer

  3. Activation layer: using the hardtanh function

  4. Linear layer

  5. Hinge loss layer
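The following NumPy sketch traces a forward pass through these five layers; the layer sizes, context window and initialization are illustrative choices, not DeepNL's defaults.

```python
# Forward pass through: lookup -> linear -> hardtanh -> linear (-> hinge loss).
import numpy as np

# Illustrative sizes: vocabulary, embedding dim, hidden units, classes, window.
vocab_size, emb_dim, hidden, classes, window = 100_000, 64, 100, 2, 5

E  = np.random.uniform(-0.1, 0.1, (vocab_size, emb_dim))      # 1. lookup table
W1 = np.random.uniform(-0.1, 0.1, (window * emb_dim, hidden))  # 2. linear layer
b1 = np.zeros(hidden)
W2 = np.random.uniform(-0.1, 0.1, (hidden, classes))           # 4. linear layer
b2 = np.zeros(classes)

def forward(token_ids):
    x = E[token_ids].reshape(-1)     # 1. look up and concatenate the window
    h = x @ W1 + b1                  # 2. linear
    h = np.clip(h, -1.0, 1.0)        # 3. hardtanh activation
    return h @ W2 + b2               # 4. linear -> one score per class

scores = forward(np.array([12, 5, 873, 42, 7]))   # a window of 5 token ids
# 5. the hinge loss layer compares these scores for the original window and a
#    corrupted one, as in the loss functions of Section 2.1
print(scores)
```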

4. Experiments

We tested the use of discriminative word embeddings in the task of social sensing, i.e. detecting specific signals from social media. In particular, we explored the ability to monitor and raise alerts about emergencies caused by natural disasters. We used the Social Sensing corpus, which consists of 5,642 tweets about natural catastrophic events such as earthquakes or floods. To obtain a balanced training set, we combined this corpus with a set of 23,507 generic tweets. The combined corpus of 29,149 tweets was randomly split into a training, development and test set consisting of 23,850, 2,649 and 2,650 tweets respectively.
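A minimal sketch of such a split, using scikit-learn's train_test_split (our choice, not the paper's tooling) on synthetic stand-ins for the tweets; the sizes reproduce the proportions reported above.

```python
# Random train/dev/test split into 23,850 / 2,649 / 2,650 tweets.
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for the 29,149 combined tweets and their binary labels.
tweets = [f"tweet {i}" for i in range(29_149)]
labels = [1] * 5_642 + [0] * 23_507

train_x, rest_x, train_y, rest_y = train_test_split(
    tweets, labels, train_size=23_850, random_state=0, stratify=labels)
dev_x, test_x, dev_y, test_y = train_test_split(
    rest_x, rest_y, train_size=2_649, random_state=0, stratify=rest_y)
print(len(train_x), len(dev_x), len(test_x))   # 23850 2649 2650
```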

4.1 Lexicon

Most sentiment analysis systems exploit a specialized lexicon (Rosenthal et al., 2014; Rosenthal et al., 2015). We built a lexicon of words related to or indicative of disasters by using the Italian Word Embeddings interface. Starting from a small seed set of specialized words, we produced a lexicon of 292 words (including words with a hashtag).
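A minimal Gensim sketch of this kind of lexicon expansion; the seed words, similarity threshold and vectors file are illustrative, and the paper used the Italian Word Embeddings web interface rather than code like this.

```python
# Expand a small seed lexicon with nearest neighbours in an embedding space.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("it_embeddings.txt", binary=False)
seeds = ["terremoto", "alluvione", "scossa"]   # quake, flood, tremor

lexicon = set(seeds)
for word in seeds:
    for neighbour, score in vectors.most_similar(word, topn=20):
        if score > 0.5:          # keep only sufficiently close neighbours
            lexicon.add(neighbour)
print(sorted(lexicon))
```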

4.2 Classifier

For detecting tweets reporting about natural disasters, we exploit an SVM classifier, which uses as continuous features the word embeddings created from the text of the Italian Wikipedia. Additionally, a set of discrete features is used, similar to those used in the top-scoring system in task 10 of SemEval 2014 on Sentiment Analysis in Twitter (Mohammad et al., 2014). These features are summarized in the following table, and a minimal classifier sketch follows it:

Type | Description
allcaps | whether a word is all in uppercase
EmoPos | presence of a positive emoticon
EmoNeg | presence of a negative emoticon
Elongated | presence of an elongated word
Lexicon count | number of words present in the lexicon
Lexicon min | lowest score of a word in the lexicon
Lexicon last | score of the last word present in the lexicon
Lexicon sum | sum of the scores of the words present in the lexicon
Negation | count of negative words
Elongated punct | count of multiple punctuations (e.g. "!!!")
Ngrams | ngrams of length 2-4
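A minimal scikit-learn sketch of a classifier in this spirit: averaged word embeddings concatenated with a few of the discrete features above. The feature extraction, the toy embeddings and the data are illustrative stand-ins, not the paper's implementation.

```python
# SVM over continuous (averaged embedding) + discrete features.
import numpy as np
from sklearn.svm import LinearSVC

EMB_DIM = 64

def featurize(tokens, vectors, lexicon):
    # continuous part: mean of the embeddings of the tokens found in the vocabulary
    vecs = [vectors[t] for t in tokens if t in vectors]
    emb = np.mean(vecs, axis=0) if vecs else np.zeros(EMB_DIM)
    # discrete part: a few of the features from the table above
    discrete = np.array([
        sum(t.isupper() for t in tokens),                       # allcaps
        sum(t.lower() in lexicon for t in tokens),              # lexicon count
        sum(len(t) > 2 and len(set(t)) == 1 for t in tokens),   # repeated chars, e.g. "!!!"
    ], dtype=float)
    return np.concatenate([emb, discrete])

# Tiny illustrative stand-ins for the embeddings, lexicon and training data.
vectors = {w: np.random.randn(EMB_DIM) for w in ["terremoto", "forte", "oggi"]}
lexicon = {"terremoto", "alluvione"}
tweets = [["Forte", "terremoto", "oggi", "!!!"], ["bella", "giornata", "oggi"]]
labels = [1, 0]

X = np.vstack([featurize(t, vectors, lexicon) for t in tweets])
clf = LinearSVC().fit(X, labels)
print(clf.predict(X))
```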

4.3 Results

We created generic word embeddings on a corpus consisting of the plain text extracted from the Italian Wikipedia, for a total of 1,096,243,235 tokens, of which 4,456,972 are distinct.

We selected the 100,000 most frequent words and created word embeddings for them, with an embedding dimension of 64.

The table below shows the results obtained with the discriminative word embeddings (DE) compared to a baseline obtained with the same classifier using the generic embeddings.

Data | System | Precision | Recall | F1
Develop | baseline | 85.91 | 72.66 | 78.73
Develop | DE | 87.08 | 76.37 | 81.37
Test | baseline | 86.87 | 70.96 | 78.11
Test | DE | 85.94 | 75.05 | 80.12

The results show a significant improvement in recall with respect to the baseline, which leads to over a 2-point improvement in F1.

4.4 Related Work

Social sensing research is a rapidly growing field; however, it is difficult to compare our work with others since the data sets used are different.

The only experiment performed on the same data set is described in Cresci et al. (2015), which focuses on distinguishing whether damage was reported, rather than just reporting a disaster. Sixteen experiments were carried out, using four subsets of the corpus for training, corresponding to four disaster events, and testing on either different events (cross-event) or same/different disaster types (in-domain, out-domain). F1 scores in detecting non-relevant tweets ranged between 19% and 28% for the cross-event and out-domain settings, and reached 73% in one of the in-domain tests.

5. Conclusions

We have presented the notion of discriminative word embeddings, designed to cope with semantic dissimilarity in tasks like sentiment analysis or multiclass classification.

As an example of the effectiveness of this type of embeddings in other applications, we have explored their use in detecting tweets reporting alerts or notices about natural disasters.

Our approach consists of a classifier trained on a corpus of annotated tweets, using discriminative embeddings as features instead of the manually crafted features or dictionaries typically employed in tweet classification tasks such as sentiment analysis.

In the future, we plan to explore the use of a convolutional network classifier, also provided by DeepNL, without any additional features, as Severyn and Moschitti (2015) have done for the SemEval 2015 task on Sentiment Analysis in Twitter.

Bibliography

R. Al-Rfou, B. Perozzi, and S. Skiena. 2013. Polyglot: Distributed Word Representations for Multilingual NLP. arXiv preprint arXiv:1307.1662.

R. K. Ando, T. Zhang, and P. Bartlett. 2005. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6:1817–1853.

Roi Blanco, Giuseppe Ottaviano, Edgar Meij, 2015. Fast and Space-efficient Entity Linking in Queries, ACM WSDM 2015.

D. Chen and C. D. Manning. 2014. Fast and Accurate Dependency Parser using Neural Networks. In: Proc. of EMNLP 2014.

R. Collobert and J. Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In ICML, 2008.

R. Collobert. 2011. Deep Learning for Efficient Discriminative Parsing. In AISTATS, 2011.

R. Collobert et al. 2011. Natural Language Processing (Almost) from Scratch. Journal of Machine Learning Research, 12, 2461–2505.

S. Cresci, M. Tesconi, A. Cimino and F. Dell'Orletta. 2015. A Linguistically-driven Approach to Cross-Event Damage Assessment of Natural Disasters from Social Media Messages. Proceedings of the 24th International Conference Companion on World Wide Web (WWW'15).

M. Grbovic, N. Djuric, V. Radosavljevic, F. Silvestri, N. Bhamidipati. 2015. Context- and Content-aware Embeddings for Query Rewriting in Sponsored Search. Proceedings of SIGIR 2015, Santiago, Chile.

Huang et al. 2012. Improving Word Representations via Global Context and Multiple Word Prototypes, Proc. of the Association for Computational Linguistics 2012 Conference.

G.E. Hinton, J.L. McClelland, D.E. Rumelhart. 1986. Distributed representations. In Parallel distributed processing: Explorations in the microstructure of cognition. Volume 1: Foundations, MIT Press.

Quoc Le and Tomas Mikolov. 2014. Distributed Representations of Sentences and Documents. In Proceedings of the 31st International Conference on Machine Learning, Beijing, China, 2014. JMLR:W&CP volume 32.

Rémi Lebret and Ronan Collobert. 2013. Word Embeddings through Hellinger PCA. Proc. of EACL 2013.

Omer Levy and Yoav Goldberg. 2014. Neural Word Embeddings as Implicit Matrix Factorization. In Advances in Neural Information Processing Systems (NIPS), 2014.

Christopher D. Manning and Hinrich Schütze. 1999. Foundations of Statistical Natural Language Processing. The MIT Press. Cambridge, Massachusetts.

Saif M. Mohammad, Xiaodan Zhu, Svetlana Kiritchenko. 2013. NRC-Canada: Building the State-of-the-Art in Sentiment Analysis of Tweets. In Proceedings of the Seventh International Workshop on Semantic Evaluation Exercises (SemEval-2013), June 2013, Atlanta, USA.

Saif M. Mohammad, Xiaodan Zhu, Svetlana Kiritchenko. 2014. NRC-Canada-2014: Recent improvements in sentiment analysis of tweets, and the Voted Perceptron. In Eighth International Workshop on Semantic Evaluation Exercises (SemEval-2014).

T. Mikolov, M. Karafiat, L. Burget, J. Cernocky, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH 2010, 11th Annual Conference of the International Speech Communication Association, Makuhari, Chiba, Japan.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. In Proceedings of Workshop at ICLR, 2013.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed Representations of Words and Phrases and their Compositionality. In Proceedings of NIPS, 2013.

Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Computational Linguistics, 34(4):513-553.

Radim Řehůřek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, ELRA, Valletta, Malta, pp. 45–50.

Sara Rosenthal, Alan Ritter, Preslav Nakov, and Veselin Stoyanov. 2014. SemEval-2014 Task 9: Sentiment analysis in Twitter. In Proceedings of the 8th International Workshop on Semantic Evaluation, SemEval ’14, pages 73–80, Dublin, Ireland.

Sara Rosenthal, Preslav Nakov, Svetlana Kiritchenko, Saif Mohammad, Alan Ritter and Veselin Stoyanov. 2015. SemEval-2015 Task 10: Sentiment Analysis in Twitter. Proc. of the Ninth International Workshop on Semantic Evaluation (SemEval-2015), Denver, USA.

Aliaksei Severyn, Alessandro Moschitti. 2015. UNITN: Training Deep Convolutional Neural Network for Twitter Sentiment Classification. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval-2015), Denver, USA.

S. Srivastava, E. Hovy. 2014. Vector space semantics with frequency-driven motifs. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, 634–643, Baltimore, Maryland, USA.

Tang et al. 2014. Learning Sentiment-Specific Word Embedding for Twitter Sentiment Classification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pp. 1555–1565, Baltimore, Maryland, USA, June 23-25 2014.

Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pp. 384-394. Association for Computational Linguistics.

Authors

Dipartimento di Informatica Università di Pisa Largo B. Pontecorvo, 3 I-56127 Pisa, Italy
