
Proceedings of the Seventh Italian Conference on Computational Linguistics CLiC-it 2020

Edited by Felice Dell'Orletta, Johanna Monti, and Fabio Tamburini

Contributed Papers

You Don’t Say… Linguistic Features in Sarcasm Detection

Martina Ducret, Lauren Kruse, Carlos Martinez, Anna Feldman and Jing Peng

Abstract

We explore linguistic features that contribute to sarcasm detection. The linguistic features that we investigate are a combination of text and word complexity, stylistic and psychological features. We experiment with sarcastic tweets with and without context. The results of our experiments indicate that contextual information is crucial for sarcasm prediction. One important observation is that sarcastic tweets are typically incongruent with their context in terms of sentiment or emotional load.


This work is supported by the US National Science Foundation under Grant No. 1704113.

1. Introduction


Sarcasm, or verbal irony, is a figurative language device employed to convey the opposite of what is actually being said. In verbal communication, a pause, intonation, or look can provide the cues necessary to determine whether there is sarcastic intent behind a comment. In writing, these social cues are inaccessible. Thus, we must rely on our understanding of the world, the speaker, and the context beyond the statement to distinguish between sarcasm and sincerity. This task has proven so subjective that social media users moderate their own comments using symbols and hashtags such as /s and #sarcasm to mark the sentiment on Reddit and Twitter, respectively. In fact, the dataset used in this paper was collected using such hashtags (Ghosh, Vajpayee, and Muresan 2020).1

For machines, the lack of real-world knowledge is detrimental to the understanding of sarcasm, and this hinders many natural language processing applications. Beyond social media conversations, assessing product reviews as positive or negative requires an understanding of both rhetorical and literary devices. Back in 2012, BIC rolled out a "For Her" line of pens, which led its intended female audience to poke fun at the misogynistic message of the product. One reviewer commented, "Well at last pens for us ladies to use…now all we need is 'for her' paper and I can finally learn to write!". While this review seems positive and gave the product four stars, our understanding of the social climate leads us to conclude that the review is sarcastic and should be classified as such.

In social media communication, new slang words are introduced every day, and emojis are often used to negate the sentiment of the text. In addition, stylistic devices and stylometric features are often employed to convey a meaning opposite to the literal interpretation. While deep learning models can be very effective at detecting sarcasm, they provide a "black box" approach that gives linguists little to no insight into which features are characteristic of sarcasm. The purpose of the current work is to learn linguistic patterns associated with sarcastic tweets and their contexts, and to determine which are the strongest indicators of sarcasm. The next step is to combine these observations with transformer-based architectures to achieve better prediction accuracy.

2. Previous Work

The field of automatic sarcasm recognition has become quite active in recent years. The most recent event is the shared task (Ghosh, Vajpayee, and Muresan 2020) organized as part of the 2nd FigLang workshop at ACL 2020. The task is typically framed as binary classification (sarcastic vs. non-sarcastic), considering an utterance either in isolation or in combination with contextual information. Early approaches to automatic sarcasm detection rely on different types of features, including sarcasm markers, word embeddings, emoticons, and patterns between positive and negative sentiment (e.g., Davidov, Tsur, and Rappoport 2010; Tsur, Davidov, and Rappoport 2010; González-Ibáñez, Muresan, and Wacholder 2011; Riloff et al. 2013; Maynard and Greenwood 2014; Wallace, Choe, and Charniak 2015; Ghosh, Guo, and Muresan 2015; Joshi, Sharma, and Bhattacharyya 2015; Veale and Hao 2010; Liebrecht, Kunneman, and Bosch 2013). Buschmeier, Cimiano, and Klinger (2014) explore a range of features, mainly focused on sentiment, for the detection of verbal irony in product reviews. While that paper provides a good baseline for irony classification, our data differs in that it includes a multi-speaker thread of context prior to the sarcastic remark. More recent approaches apply deep learning methods (e.g., Ghosh and Veale 2016; Tay et al. 2018; Wallace, Choe, and Charniak 2015). There is a great amount of research exploring the role of contextual information for sarcasm detection (e.g., Joshi, Sharma, and Bhattacharyya 2015; Bamman and Smith 2015; Misra and Arora 2019; Khattri et al. 2015; Amir et al. 2016; Rajadesingan, Zafarani, and Liu 2015; Ghosh and Veale 2017; Schifanella et al. 2016; Cai, Cai, and Wan 2019; Castro et al. 2019). Ghosh, Vajpayee, and Muresan (2020) report that almost all systems submitted to the shared task used a transformer architecture, such as BERT (Turc et al. 2019) or RoBERTa (Liu et al. 2020), or other variants; these performed better than RNN architectures, even without any task-specific fine-tuning. Unfortunately, it is difficult to interpret what these models capture about sarcastic tweets and their context. Our approach uses classical supervised algorithms to better understand which elements characterize sarcasm in a social media setting. We categorize linguistic features, experiment with different combinations, and take context into account when performing our experiments.

3. Our Approach

Our approach uses a combination of complexity, stylometric, and psychological linguistic features to automatically detect the presence or absence of sarcasm in a given text. We intentionally experiment with classical machine learning classification algorithms to gain a better understanding of the linguistic features contributing to the sarcasm detection task. Our linguistic intuition is that there will be a discordance between the linguistic features of the responses and contexts labeled as sarcastic: sarcastic tweets are likely to be semantically or emotionally incongruent with their preceding tweets, while non-sarcastic tweets show greater harmony with their context. To measure the emotional load of a response and its context, we extract a number of sentiment- and emotion-related features. We also look at the distribution of these features across the two classes. Furthermore, we test the performance of our classifier and the importance of our features by considering just the response tweet versus the response with its accompanying context.

4. Data Set

We use the Twitter corpus from the CodaLab shared task on sarcasm detection (Ghosh, Vajpayee, and Muresan 2020). The training data consists of 2,500 tweets labeled 'SARCASM' and 2,500 tweets labeled 'NON SARCASM'; the balanced test data consists of an additional 1,800 labeled tweets. As described in Ghosh, Vajpayee, and Muresan (2020), this is a self-labeled data set in which tweets are annotated as sarcastic based on the hashtags used by their authors. The non-sarcastic tweets are those that do not contain the sarcasm hashtags but may be labeled with positive or negative sentiment hashtags, such as '#happy'. Retweets, duplicates, quotes, etc., are excluded (see Ghosh, Vajpayee, and Muresan (2020) for more details). Each sarcastic and non-sarcastic tweet is accompanied by a hierarchical conversation thread, e.g., context/1 is the immediate context, context/0 is the context that preceded context/1, and so on. The training and test data include up to 19 preceding tweets labeled as context/0, context/1, …, context/19 (if available).
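For concreteness, the sketch below shows one way to read the shared-task data. The JSON-lines field names ("label", "response", "context") reflect our reading of the CodaLab release and are assumptions rather than guaranteed names; adjust them if your copy of the data differs.

```python
import json

def load_tweets(path):
    """Read the shared-task file, one JSON object per line (assumed format)."""
    examples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            examples.append({
                "label": record["label"],              # e.g., 'SARCASM' / 'NON SARCASM'
                "response": record["response"],        # the tweet to classify
                "context": record.get("context", []),  # preceding thread, if any
            })
    return examples
```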

5. Feature Extraction

Our research focuses on the role linguistic features play in sarcasm detection. We classify our features into three categories: complexity, stylistic, and psychological. Abonizio et al. (2020) define complexity features as linguistic features that capture the overall objective of the text at the word and sentence level. Stylistic features use natural language processing techniques to obtain grammatical information and better understand the syntax and style of the document. Psychological features are the most closely related to emotions and the cognitive aspects of NLP. We expand on these psychological features by utilizing VAD (Valence, Arousal, Dominance) (Warriner, Kuperman, and Brysbaert 2013), emotional embeddings, and LIWC (Tausczik and Pennebaker 2010). Lastly, we use word-level count vectors, word-level tf-idf, n-gram word-level tf-idf, and n-gram character-level tf-idf. We stack these features and refer to them as count vectors for the remainder of this paper.
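A minimal sketch of the count-vector block follows, using scikit-learn; the exact vectorizer settings (n-gram ranges, analyzers) are illustrative assumptions, not the precise configuration used in our experiments.

```python
from scipy.sparse import hstack
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

def build_count_features(train_texts, test_texts):
    # One vectorizer per view of the text; settings here are illustrative.
    vectorizers = [
        CountVectorizer(),                                        # word-level counts
        TfidfVectorizer(),                                        # word-level tf-idf
        TfidfVectorizer(ngram_range=(2, 3)),                      # word n-gram tf-idf
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # char n-gram tf-idf
    ]
    train_parts = [v.fit_transform(train_texts) for v in vectorizers]
    test_parts = [v.transform(test_texts) for v in vectorizers]
    # Stack the four sparse matrices side by side into a single feature block.
    return hstack(train_parts), hstack(test_parts)
```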

5.1 LIWC

LIWC (Tausczik and Pennebaker 2010) is a text analysis program with a built-in dictionary that counts words in psychologically meaningful categories. After all the words have been reviewed, the program calculates the percentage of words that match each of the dictionary categories. We used LIWC to extract features that detect and categorize the meaning, emotional sentiment, and social relationships of the words in the data set.
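Since the LIWC dictionary itself is proprietary, the toy sketch below only illustrates the style of scoring (percentage of tokens per category); it is not the LIWC implementation and omits LIWC's wildcard stems.

```python
from collections import Counter

def category_percentages(tokens, category_lexicon):
    """LIWC-style scoring sketch: percentage of tokens per category.

    `category_lexicon` maps a category name (e.g., 'posemo') to a set of
    words; the real LIWC dictionary also matches wildcard stems.
    """
    counts = Counter()
    for tok in tokens:
        for cat, words in category_lexicon.items():
            if tok.lower() in words:
                counts[cat] += 1
    total = max(len(tokens), 1)
    return {cat: 100.0 * n / total for cat, n in counts.items()}
```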

5.2 Valence, Arousal, Dominance (VAD)

VAD (Valence, Arousal, Dominance) (Warriner, Kuperman, and Brysbaert 2013) includes almost 14,000 lemmas rated on a 1-9 scale according to the emotions evoked by the terms. Valence refers to the pleasantness of the word, arousal determines how dull or exciting the emotion is, and dominance ranges from submission to feeling in control. The VAD dimensions allow us to further explore the affective meanings of tweets and determine their viability as a predictor of sarcasm. We compute VAD scores for each response and use the three scores obtained as features in our classifiers. Furthermore, we explore using the scores as a measure of congruity between a response and its contexts. We calculate the VAD scores for each individual response and context and then subtract the context scores from their respective response scores. In other words, if a response receives a valence score of 8 and its context/0 receives a valence score of 2, the valence congruity score is 6. We hypothesize that sarcastic tweets might show very little affective congruity compared to their non-sarcastic counterparts.
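A sketch of the congruity computation, under the assumption that a tweet's VAD score is the average over its in-lexicon tokens (loading the lexicon is left to the caller):

```python
def vad_scores(tokens, vad_lexicon):
    """Average valence/arousal/dominance over tokens found in the lexicon.

    `vad_lexicon` maps a lemma to a (valence, arousal, dominance) triple
    on the 1-9 scale from Warriner, Kuperman, and Brysbaert (2013).
    """
    hits = [vad_lexicon[t] for t in tokens if t in vad_lexicon]
    if not hits:
        return (0.0, 0.0, 0.0)
    return tuple(sum(dim) / len(hits) for dim in zip(*hits))

def vad_congruity(response_tokens, context_tokens, vad_lexicon):
    # Response minus context, per dimension: valence 8 vs. 2 gives congruity 6.
    resp = vad_scores(response_tokens, vad_lexicon)
    ctx = vad_scores(context_tokens, vad_lexicon)
    return tuple(r - c for r, c in zip(resp, ctx))
```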

5.3 VADER

VADER (Valence Aware Dictionary and sEntiment Reasoner) (Hutto and Gilbert 2015) is a lexicon- and rule-based tool built especially for sentiment analysis of social media texts. VADER maps lexical features to emotions and provides insight into the intensity of those emotions through a series of polarity indices. VADER considers capitalization, punctuation, degree modifiers, emojis, and negations to compute its negative, positive, and neutral scores. Furthermore, VADER's compound score provides a normalized, weighted composite score for a given tweet.
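VADER is available as the `vaderSentiment` Python package; a minimal usage example:

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
scores = analyzer.polarity_scores("Yassss queen, you're so brave and bold.")
# 'neg', 'neu', and 'pos' describe the share of each polarity;
# 'compound' is the normalized composite score in [-1, 1].
print(scores)
```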

5.4 Emotional Embeddings

The emotions conveyed in our data set are captured through emotional embeddings. Calculating the emotions of the text goes a level deeper than looking at word embeddings alone. Using a pre-trained model from Hugging Face (Saravia et al. 2018), we categorize the tweets into six emotions: joy, anger, fear, surprise, sadness, and love. Figure 1 below shows the distribution of emotions between response and context/0 in the balanced training data set. The results support our intuition that sarcasm is typically associated with negative emotions. When the context is labeled as "anger", non-sarcastic tweets tend to respond with joy, while sarcastic tweets usually respond with anger. By contrast, when the context is labeled as "joy", non-sarcastic tweets overwhelmingly respond with joy, while sarcastic tweets still largely respond with anger. There are 1,216 instances of the same emotion expressed in both response and context for the non-sarcasm class and 863 such instances for the sarcasm class. Sarcastic tweets are generally emotionally incongruent with their context, unless a negative emotion, e.g., anger, is involved.

Figure 1: Distribution of Emotions for Response vs. Context/0 in the training data. [figure not reproduced]
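A sketch of the emotion labeling with the Hugging Face `transformers` pipeline; the checkpoint named below is a publicly available six-emotion model trained on the Saravia et al. (2018) data and stands in for the exact model we used.

```python
from transformers import pipeline

# Stand-in checkpoint: any six-way (joy/anger/fear/surprise/sadness/love)
# classifier trained on the Saravia et al. (2018) emotion data will do.
classifier = pipeline("text-classification",
                      model="bhadresh-savani/distilbert-base-uncased-emotion")

print(classifier("Yassss queen, you're so brave and bold."))
# e.g., [{'label': 'joy', 'score': 0.98}]
```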

5.5 Tweet-Context Similarity Scores

We use the standard document similarity estimation technique based on word embeddings (GloVe; Pennington, Socher, and Manning 2014) and emotional embeddings (Saravia et al. 2018), which consists of measuring the similarity between the vector representations of the two documents. Let $e_{1i}$ and $e_{2j}$ be the emotion (or word embedding) vectors of the words in two documents. The cosine similarity between the centroids $c_1 = \frac{1}{n_1}\sum_{i=1}^{n_1} e_{1i}$ and $c_2 = \frac{1}{n_2}\sum_{j=1}^{n_2} e_{2j}$ of the two documents (e.g., a tweet and its context) is calculated as follows:

$$\mathrm{sim}(c_1, c_2) = \frac{\langle c_1, c_2 \rangle}{\lVert c_1 \rVert \, \lVert c_2 \rVert} \qquad (1)$$

where $\langle x, y \rangle$ denotes the inner product of two vectors $x$ and $y$.

We compute two similarity scores: 1) semantic cosine similarity using word embeddings; 2) cosine similarity using emotional embeddings. Our linguistic intuition is that a sarcastic response will be semantically or emotionally incongruent with its context, and this is what creates the sarcasm effect.
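Equation (1) amounts to a few lines of NumPy; a minimal sketch, assuming each document arrives as a list of per-token embedding vectors:

```python
import numpy as np

def centroid(vectors):
    # Average the per-token embeddings (GloVe or emotion vectors) into one vector.
    return np.mean(np.stack(vectors), axis=0)

def cosine_similarity(c1, c2):
    # Equation (1): inner product of the centroids over the product of their norms.
    return float(np.dot(c1, c2) / (np.linalg.norm(c1) * np.linalg.norm(c2)))

# similarity = cosine_similarity(centroid(response_vectors), centroid(context_vectors))
```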

Table 1: Sarcastic tweet. Response=R; Context0=C/0; Context1=C/1

C/0: It's no secret that this president has routinely targeted religious and ethnic minorities. He has fanned the flames of hate against refugees, Muslims, Africans, immigrants, women and all racial and religious minorities.

C/1: He is routinely and openly hostile to any legitimate Congressional oversight. He has made clear his wanton corruption by soliciting a bribe from a foreign government for his personal political gain.

R: Yassss queen, you're so brave and bold.

Table 2: Non-sarcastic tweet. Response=R; Context0=C/0; Context1=C/1

C/0: A2 I revert back to Canvas. I am sure you can post assignments for parents in this, (haven't done this yet). Canvas = #thebomb #KidsDeserveIt

C/1: Can you telk me more about Canvas? I haven't heard of it.

R: It's Edmodo with #MorePower You can create assignments in it, post all work, the assignments can be auto graded and imported into your Skyward grade book.

Table 1 is an example of a sarcastic tweet whose context/0, context/1, and response received emotion labels of anger, anger, and joy, respectively. Table 2 shows a non-sarcastic thread in which each message was classified as joy. This illustrates that non-sarcastic tweets tend to be more emotionally similar to the preceding context, while sarcastic tweets tend to shift in emotion. As a result, when compared to its contexts, the sarcastic tweet received lower emotional similarity scores than the non-sarcastic tweet.

5.6 Feature Analysis

After extracting all of the features from the training data, we applied SHAP (SHapley Additive exPlanations) (Lundberg and Lee 2017) to determine which features are the most important for classification. SHAP is a game-theoretic technique that explains the predictions of a model by assigning each feature a Shapley value, which can then be used to rank the features by importance. The features selected by SHAP were used in our experiments and are referred to as our "select linguistic features". The top 20 features SHAP selects contain a combination of character features, such as character count, as well as a number of sentiment features, including VADER scores, emotion scores for both a response and its context, and VAD features.
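A sketch of this step with the `shap` package, assuming a fitted Random Forest `rf` and a feature matrix `X_train` with human-readable column names (both placeholder names, not identifiers from our codebase):

```python
import shap

# rf: the fitted RandomForestClassifier; X_train: the stacked feature matrix
# with named columns (e.g., a pandas DataFrame).
explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X_train)

# The summary plot ranks features by mean |SHAP value|; we keep the top 20
# as our "select linguistic features".
shap.summary_plot(shap_values, X_train, max_display=20)
```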

6. Experimental Evaluation

6.1 Data Preprocessing

Our preprocessing procedure consists of steps to remove noisy and unnecessary data. First, we tokenize and lemmatize the tweets using NLTK (Loper and Bird 2002). We also remove every instance of "@USER" because this token is repeated at the beginning of most tweets. Prior research demonstrated that classifiers do not tend to benefit from large quantities of additional context, and we noticed that a majority of the tweets only contained context/0 and context/1. While we plan to experiment further with additional context layers, in this work we only report on experiments that involve context/0 and context/1. We did not remove any stop words, due to the small amount of text in each tweet. We also kept punctuation and emojis, as they proved to be useful information during the extraction of certain features, such as VADER.
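A sketch of this pipeline with NLTK; the choice of TweetTokenizer is an assumption consistent with keeping emojis and punctuation intact, not a detail reported above.

```python
import re
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import TweetTokenizer

tokenizer = TweetTokenizer()      # keeps emojis, hashtags, and punctuation
lemmatizer = WordNetLemmatizer()  # requires nltk.download('wordnet') once

def preprocess(tweet):
    """Drop @USER mentions, tokenize, lemmatize; stop words are kept."""
    tweet = re.sub(r"@USER\s*", "", tweet)
    return [lemmatizer.lemmatize(tok) for tok in tokenizer.tokenize(tweet)]
```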

6.2 Results

We use a Random Forest classifier and run 21 different experiments, the most relevant of which are outlined in Table 3. The baseline scores represent an attention-based LSTM model described in Ghosh, Fabbri, and Muresan (2018) and used in the CodaLab shared task. We look at how each feature set performs on the response alone versus the response with context. We notice that for the response alone, a combination of all count features and all linguistic features achieves the best F1 score of 67%. This score is further increased to 70% when the context is considered.

Table 3: Random Forest with various feature combinations. Response=R; Context0=C/0; Context1=C/1; Count=Ct; Linguistic=Ling; Features=Ft; P=Precision; R=Recall; A=Accuracy

Experiments                           | P   | R     | A   | F1
--------------------------------------|-----|-------|-----|----
Baseline 1 (Shared Task)              | 70% | 66.9% | N/A | 68%
R (All Ct. Ft.)                       | 72% | 61%   | 63% | 66%
R (All Ling. Ft.)                     | 56% | 60%   | 59% | 52%
R (Sel. Ling. Ft.)                    | 54% | 60%   | 59% | 57%
R (All Ling. Ft. + All Ct.)           | 70% | 64%   | 65% | 67%
R (Sel. Ling. Ft. + All Ct.)          | 65% | 51%   | 51% | 57%
R+C/0+C/1 (All Ct. Ft.)               | 67% | 61%   | 62% | 64%
R+C/0+C/1 (All Ling. Ft.)             | 64% | 60%   | 60% | 53%
R+C/0+C/1 (Sel. Ling. Ft.)            | 71% | 60%   | 62% | 64%
R+C/0+C/1 (All Ling. Ft. + All Ct.)   | 80% | 62%   | 66% | 70%
R+C/0+C/1 (Sel. Ling. Ft. + All Ct.)  | 70% | 65%   | 66% | 67%
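A sketch of one experiment row from Table 3, assuming the stacked feature matrices from the previous sections; `X_train`, `X_test`, `y_train`, and `y_test` are placeholders, and the hyperparameters are scikit-learn defaults rather than tuned values.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# X_train/X_test: stacked count + linguistic features for one experiment row.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)
print(classification_report(y_test, rf.predict(X_test)))  # precision, recall, F1
```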

7. Conclusion

In this paper we explored the role various linguistic features play in computational sarcasm detection. We investigated a combination of text and word complexity features, stylistic features, and psychological features. The results of our experiments indicate that contextual information is crucial for sarcasm detection. We also observed that sarcastic tweets are often incongruent with their context in terms of sentiment or emotional load. Using a Random Forest classifier and the features we extracted, we obtain promising results. Our current work is concerned with combining these observations with transformer-based architectures to achieve better prediction accuracy.

Bibliography

Hugo Queiroz Abonizio, Janaina Ignacio de Morais, Gabriel Marques Tavares, and Sylvio Barbon Junior. 2020. “Language-Independent Fake News Detection: English, Portuguese, and Spanish Mutual Features.” Future Internet 12 (5): 87.

Silvio Amir, Byron C. Wallace, Hao Lyu, Paula Carvalho, and Mário J. Silva. 2016. “Modelling Context with User Embeddings for Sarcasm Detection in Social Media.” arXiv Preprint arXiv:1607.00976.

David Bamman, and Noah Smith. 2015. “Contextualized Sarcasm Detection on Twitter.” https://www.aaai.org/ocs/index.php/ICWSM/ICWSM15/paper/view/10538.

Konstantin Buschmeier, Philipp Cimiano, and Roman Klinger. 2014. “An Impact Analysis of Features in a Classification Approach to Irony Detection in Product Reviews.” In Proceedings of the 5th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, 42–49. Baltimore, Maryland: Association for Computational Linguistics. https://doi.org/10.3115/v1/W14-2608.

Yitao Cai, Huiyu Cai, and Xiaojun Wan. 2019. “Multi-Modal Sarcasm Detection in Twitter with Hierarchical Fusion Model.” In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2506–15.

Santiago Castro, Devamanyu Hazarika, Verónica Pérez-Rosas, Roger Zimmermann, Rada Mihalcea, and Soujanya Poria. 2019. “Towards Multimodal Sarcasm Detection (An Obviously Perfect Paper).” arXiv Preprint arXiv:1906.01815.

Dmitry Davidov, Oren Tsur, and Ari Rappoport. 2010. “Semi-Supervised Recognition of Sarcasm in Twitter and Amazon.” In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, 107–16. Uppsala, Sweden: Association for Computational Linguistics. https://www.aclweb.org/anthology/W10-2914.

Aniruddha Ghosh and Tony Veale. 2017. “Magnets for Sarcasm: Making Sarcasm Detection Timely, Contextual and Very Personal.” In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 482–91.

Aniruddha Ghosh and Tony Veale. 2016. “Fracking Sarcasm Using Neural Network.” In Proceedings of the 7th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, 161–69. San Diego, California: Association for Computational Linguistics. https://doi.org/10.18653/v1/W16-0425.

Debanjan Ghosh, Alexander R Fabbri, and Smaranda Muresan. 2018. “Sarcasm Analysis Using Conversation Context.” Computational Linguistics 44 (4): 755–92.

Debanjan Ghosh, Weiwei Guo, and Smaranda Muresan. 2015. “Sarcastic or Not: Word Embeddings to Predict the Literal or Sarcastic Meaning of Words.” In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 1003–12. Lisbon, Portugal: Association for Computational Linguistics. https://doi.org/10.18653/v1/D15-1116.

Debanjan Ghosh, Avijit Vajpayee, and Smaranda Muresan. 2020. “A Report on the 2020 Sarcasm Detection Shared Task.” In Proceedings of the Second Workshop on Figurative Language Processing, 1–11. Online: Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.figlang-1.1.

Roberto González-Ibáñez, Smaranda Muresan, and Nina Wacholder. 2011. “Identifying Sarcasm in Twitter: A Closer Look.” In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, 581–86. Portland, Oregon, USA: Association for Computational Linguistics. https://www.aclweb.org/anthology/P11-2102.

C. J. Hutto, and Eric Gilbert. 2015. “VADER: A Parsimonious Rule-Based Model for Sentiment Analysis of Social Media Text.” In Proceedings of the 8th International Conference on Weblogs and Social Media, ICWSM 2014.

Aditya Joshi, Vinita Sharma, and Pushpak Bhattacharyya. 2015. “Harnessing Context Incongruity for Sarcasm Detection.” In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), 757–62. Beijing, China: Association for Computational Linguistics. https://doi.org/10.3115/v1/P15-2124.

Anupam Khattri, Aditya Joshi, Pushpak Bhattacharyya, and Mark Carman. 2015. “Your Sentiment Precedes You: Using an Author’s Historical Tweets to Predict Sarcasm.” In Proceedings of the 6th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, 25–30.

Christine Liebrecht, Florian Kunneman, and Antal van den Bosch. 2013. “The Perfect Solution for Detecting Sarcasm in Tweets #Not.” In Proceedings of the 4th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, 29–37. Atlanta, Georgia: Association for Computational Linguistics. https://www.aclweb.org/anthology/W13-1605.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2020. “RoBERTa: A Robustly Optimized BERT Pretraining Approach.” https://openreview.net/forum?id=SyxS0T4tvS.

Edward Loper, and Steven Bird. 2002. “NLTK: The Natural Language Toolkit.” In Proceedings of the ACL Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics. Philadelphia: Association for Computational Linguistics.

Scott M. Lundberg, and Su-In Lee. 2017. “A Unified Approach to Interpreting Model Predictions.” In Advances in Neural Information Processing Systems 30, edited by I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, 4765–74. Curran Associates, Inc. http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf.

Diana Maynard and Mark Greenwood. 2014. “Who Cares About Sarcastic Tweets? Investigating the Impact of Sarcasm on Sentiment Analysis.” In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), 4238–43. Reykjavik, Iceland: European Language Resources Association (ELRA). http://www.lrec-conf.org/proceedings/lrec2014/pdf/67_Paper.pdf.

Rishabh Misra and Prahal Arora. 2019. “Sarcasm Detection Using Hybrid Neural Network.” ArXiv abs/1908.07414.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. “GloVe: Global Vectors for Word Representation.” In Empirical Methods in Natural Language Processing (EMNLP), 1532–43. http://www.aclweb.org/anthology/D14-1162.

Ashwin Rajadesingan, Reza Zafarani, and Huan Liu. 2015. “Sarcasm Detection on Twitter: A Behavioral Modeling Approach.” In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, 97–106.

Ellen Riloff, Ashequl Qadir, Prafulla Surve, Lalindra De Silva, Nathan Gilbert, and Ruihong Huang. 2013. “Sarcasm as Contrast Between a Positive Sentiment and Negative Situation.” In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, 704–14. Seattle, Washington, USA: Association for Computational Linguistics. https://www.aclweb.org/anthology/D13-1066.

Elvis Saravia, Hsien-Chi Toby Liu, Yen-Hao Huang, Junlin Wu, and Yi-Shin Chen. 2018. “CARER: Contextualized Affect Representations for Emotion Recognition.” In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 3687–97. Brussels, Belgium: Association for Computational Linguistics. https://doi.org/10.18653/v1/D18-1404.

Rossano Schifanella, Paloma de Juan, Joel Tetreault, and Liangliang Cao. 2016. “Detecting Sarcasm in Multimodal Social Platforms.” In Proceedings of the 24th ACM International Conference on Multimedia, 1136–45.

Yla R. Tausczik, and James W. Pennebaker. 2010. “The Psychological Meaning of Words: LIWC and Computerized Text Analysis Methods.” Journal of Language and Social Psychology 29 (1): 24–54. https://doi.org/10.1177/0261927X09351676.

Yi Tay, Anh Tuan Luu, Siu Cheung Hui, and Jian Su. 2018. “Reasoning with Sarcasm by Reading in-Between.” In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 1010–20. Melbourne, Australia: Association for Computational Linguistics. https://doi.org/10.18653/v1/P18-1093.

Oren Tsur, Dmitry Davidov, and Ari Rappoport. 2010. “ICWSM - a Great Catchy Name: Semi-Supervised Recognition of Sarcastic Sentences in Online Product Reviews.” In ICWSM, edited by William W. Cohen and Samuel Gosling. The AAAI Press. http://dblp.uni-trier.de/db/conf/icwsm/icwsm2010.html#TsurDR10.

Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. “Well-Read Students Learn Better: On the Importance of Pre-Training Compact Models.” arXiv Preprint arXiv:1908.08962v2.

Tony Veale, and Yanfen Hao. 2010. “Detecting Ironic Intent in Creative Comparisons.” In ECAI 2010 - 19th European Conference on Artificial Intelligence, Lisbon, Portugal, August 16-20, 2010, Proceedings, edited by Helder Coelho, Rudi Studer, and Michael J. Wooldridge, 215:765–70. Frontiers in Artificial Intelligence and Applications. IOS Press. https://doi.org/10.3233/978-1-60750-606-5-765.

Byron C. Wallace, Do Kook Choe, and Eugene Charniak. 2015. “Sparse, Contextually Informed Models for Irony Detection: Exploiting User Communities, Entities and Sentiment.” In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 1035–44. Beijing, China: Association for Computational Linguistics. https://doi.org/10.3115/v1/P15-1100.

Amy Warriner, Victor Kuperman, and Marc Brysbaert. 2013. “Norms of Valence, Arousal, and Dominance for 13,915 English Lemmas.” Behavior Research Methods 45 (February). https://doi.org/10.3758/s13428-012-0314-x.

Notes

1 Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0)

Authors

Montclair State University, Montclair, New Jersey, USA – ducretm@montclair.edu

Montclair State University, Montclair, New Jersey, USA – krusel@montclair.edu

Montclair State University, Montclair, New Jersey, USA – martinezcl@montclair.edu

Montclair State University, Montclair, New Jersey, USA – feldmana@montclair.edu

Montclair State University, Montclair, New Jersey, USA – pengj@montclair.edu

© Accademia University Press, 2020

Terms of use: http://www.openedition.org/6540
