    Proceedings of the Eighth Italian Conference on Computational Linguistics CliC-it 2021


    A Multi-Strategy Approach to Crossword Clue Answer Retrieval and Ranking

Andrea Zugarini and Marco Ernandes

    p. 359-365

Abstracts

Crossword clues represent an extremely challenging form of Question Answering, due to their intentional ambiguity. Databases of previously answered clues are a vital source for retrieving candidate answer lists in automatic Crossword Puzzle (CP) resolution systems. In this paper, we exploit neural language representations for the retrieval and ranking of crossword clues and answers. We assess the performance of several embedding models, both static and contextual, on Italian and English CPs. Results indicate that embeddings usually outperform the baseline. Moreover, the use of embeddings for retrieval allows different ranking strategies, which turned out to be complementary and led to better results when used in combination.

Le domande dei cruciverba rappresentano una forma di Question Answering particolarmente complessa a causa della loro intenzionale ambiguità. I risolutori automatici di cruciverba sfruttano ampiamente basi di dati di domande precedentemente risposte. In questo articolo proponiamo l’uso di embeddings per la ricerca semantica di domande-risposte da tali database. Le performance sono valutate su cruciverba di lingua sia italiana che inglese, confrontando diversi tipi di embeddings, sia contestuali che statici. I risultati suggeriscono che la ricerca semantica è migliore della baseline. Inoltre, l’utilizzo di embeddings permette di applicare differenti strategie di retrieval, che migliorano la qualità dei risultati quando usate congiuntamente.

Acknowledgements

    We thank Nicola Landolfi and Marco Maggini for the great support and fruitful discussions.


    1. Introduction

Crossword Puzzle (CP) resolution is a popular game. As with almost any other human game, the problem can be tackled automatically. CP solvers frame it as a constraint satisfaction task, where the goal is to maximize the probability of filling the grid with answers that are consistent with their clues and coherent with the puzzle scheme. These systems (Littman, Keim, and Shazeer 2002; Ernandes, Angelini, and Gori 2005; Ginsberg 2011) heavily rely on lists of candidate answers for each clue. Candidate quality is crucial to CP resolution: if the correct answer is not present in the candidate list, the crossword puzzle cannot be solved correctly, and even a poorly ranked correct answer can lead to a failure in filling the grid. Answer lists can come from multiple solvers, where each solver is typically specialized in different kinds of clues and/or exploits different sources of information. Such lists are mainly retrieved with two techniques: (1) querying the web with search engines using clue representations; (2) interrogating clue-answer databases that contain previously answered clues. In this work, we focus on the latter.

In the problem of retrieving candidate answers from clue-answer knowledge sources, answers are ranked according to the similarity between a query clue and the clues in the DB. The similarity is provided by the search engine, which assigns a score to each retrieved answer. Several approaches have been proposed to re-rank the candidate list by means of learning-to-rank strategies (Barlacchi, Nicosia, and Moschitti 2014b, 2014a; Nicosia, Barlacchi, and Moschitti 2015; Nicosia and Moschitti 2016; Severyn et al. 2015). These approaches require a training phase to learn how to rank, and differ mainly in the re-ranking model or strategy adopted. In particular, pre-trained distributed representations and neural networks are used for re-ranking clues in (Severyn et al. 2015).

Re-ranking answer candidates attempts to improve the quality of the candidate lists under the assumption that the correct answer already belongs to the list. Unlike previous work, we aim to directly retrieve richer lists of answer candidates from a clue-answer database. To do so, we exploit both static and contextual distributed representations to perform a semantic search on the DB. An embedding-based search extends the retrieval to semantically related clues that may be phrased differently. Moreover, it allows us to map questions and answers into the same space, which opens the way to ranking answers directly by their similarity to the query clue. Our approach requires no training on CP data and can be applied with any pre-trained embedding model.

In summary, the contributions of this work are: (1) a semantic search approach to candidate answer retrieval in automatic crossword resolution; (2) two complementary retrieval methodologies (namely QC and QA) for detecting candidate answers that, when combined (even naively), produce a better set of candidates; (3) a comparison between different pre-trained language representations (both static and contextual).

The paper is organized as follows. Section 2 describes distributed representations of language. Section 3 presents the two answer retrieval approaches proposed in this work. Section 4 outlines the experiments in detail and discusses the results. Finally, Section 5 draws our conclusions.

    2. Language Representations

Assigning meaningful representations to language is a long-standing problem. Since the inception of the first text mining solutions, the bag-of-words technique has been widely adopted as a standard approach to text representation. Inverted indices and statistical weighting schemes (such as TF-IDF or BM25) are to this day commonly paired with bag-of-words, providing a scalable and effective approach to document retrieval. On the other hand, in the last decade we have witnessed tremendous progress in the field of Natural Language Processing. Huge credit goes to the diffusion of distributed representations of words (Bengio et al. 2003; Tomas Mikolov, Chen, et al. 2013; Tomas Mikolov, Sutskever, et al. 2013; Collobert et al. 2011; Mikolov et al. 2018; Devlin et al. 2018) learned through language-modeling-related tasks on large corpora.

In general, the goal is to assign a fixed-length representation of size $d$, known as an embedding, to a textual passage $s$, such that similar text passages (syntactically and/or semantically) are represented closely in that space. An embedding model $f_e$ is a function mapping $s$ to a $d$-dimensional vector, i.e. $f_e(s) \in \mathbb{R}^d$. Since language is a composition of symbols (typically words), embedding models first tokenize the text and then process the tokens to compute the representation of the passage.

Nowadays, many embedding models exist, and for some of them pre-trained embeddings are available in a plethora of languages (Yamada et al. 2020; Grave et al. 2018; Yang et al. 2019). Early methods like (Tomas Mikolov, Chen, et al. 2013) produce dense representations for single tokens (mainly words); therefore, further processing is needed to obtain the actual representation of $s$ when $s$ is composed of multiple words. These kinds of embeddings are also referred to as static embeddings, since the representation of a token is always the same regardless of the context in which it appears. In (Mikolov et al. 2018), the authors extend (Tomas Mikolov, Chen, et al. 2013) by introducing n-gram and sub-word information, and in (Le and Mikolov 2014) distributed representations are learned directly for sentences and documents.

Most of the proposed methods for contextual embeddings were based on recurrent neural language models (Melamud, Goldberger, and Dagan 2016; Yang et al. 2019; Chidambaram et al. 2018; Mikolov et al. 2010; Marra et al. 2018; Peters et al. 2018), until the introduction of transformer architectures (Vaswani et al. 2017; Devlin et al. 2018; Liu et al. 2019), which are currently the state of the art.

In the next section, we discuss how such representations can be used to perform semantic search. In the experiments, we exploit some of these embedding models, both static and contextual.

Figure 1 (image not reproduced). Sketch of the two answer candidate retrieval approaches: QC (on the left) and QA (on the right). In QC, ranking is based on the similarity between the query embedding and all the clues in the DB, while in QA the similarity is computed between the query and the answers in the database.

    3. Semantic Search

Traditional CP solvers rely on similar clue retrieval mechanisms. The idea is to find possible answers from clues in the database that are similar to the given query. This is particularly effective for crosswords, since the same clues tend to be repeated over time, or may show only small lexical variations. Retrieval of similar clues relies on search engines built on classical IR algorithms such as TF-IDF or BM25, which treat the clues in the database as documents to retrieve and the target clue as the query. A minimal sketch of such a baseline follows.
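As an illustration, here is a small sketch of this TF-IDF baseline using scikit-learn; the toy clues and variable names are ours, not from the paper:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical toy DB: the stored clues act as the documents to retrieve.
db_clues = ["capital of italy", "the eternal city", "a large striped feline"]
db_answers = ["rome", "rome", "tiger"]

vectorizer = TfidfVectorizer()
doc_vecs = vectorizer.fit_transform(db_clues)          # index the clue "documents"
query_vec = vectorizer.transform(["italy s capital"])  # the target clue as query
scores = cosine_similarity(query_vec, doc_vecs)[0]

# Rank stored clues (and hence their answers) by similarity to the query.
for i in scores.argsort()[::-1]:
    print(db_answers[i], db_clues[i], round(float(scores[i]), 3))
```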

Here, instead, we retrieve and rank documents with semantic search. We propose two strategies, namely QC and QA. QC is analogous to classical similar clue retrieval systems, with the difference that text is represented with dense representations: it retrieves and ranks the DB clues similar to the query and returns the answers associated with those clues. QA, instead, ranks the answers directly by computing the cosine similarity between the query and the answers themselves. Intuitively, the latter approach ranks highly answers that are semantically correlated with the clue itself, which is particularly useful for clues asking for synonyms. As we will show in Section 4, due to their different natures, the candidate lists retrieved by the two approaches are strongly complementary. A sketch of the two approaches is given in Fig. 1. Let us describe them separately.

    3.1 Similar Clues Retrieval

We are given a query clue, which is a sequence of $n$ words $q = (w_1, \ldots, w_n)$, and a clue-answer DB $(C, A)$ made up of $M$ clue-answer pairs, where $C$ and $A$ denote the lists of all clues and answers, respectively; we denote a clue-answer pair as $(c, a)$.

We assign a fixed-length representation $e_q \in \mathbb{R}^d$ to the query clue $q$, computed with an embedding model:

$e_q = f_e(q) \qquad (1)$

For contextual embeddings, $f_e$ is the model itself, since such models work directly on the sequence, whereas for static embeddings we have to collapse the $n$ word representations into a single vector. For simplicity, we simply average them, as in the sketch below.
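A minimal sketch of this averaging step; the lookup table and names are ours, and in practice the vectors would come from a pre-trained Word2Vec or FastText model:

```python
import numpy as np

def embed_clue_static(clue: str, word_vectors: dict, d: int = 300) -> np.ndarray:
    """Collapse static word embeddings into one clue vector by averaging.

    word_vectors is a {token: d-dimensional np.ndarray} lookup. Simplifying
    assumptions: whitespace tokenization, out-of-vocabulary tokens skipped.
    """
    tokens = clue.lower().split()
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    if not vecs:                  # no known token: fall back to a zero vector
        return np.zeros(d)
    return np.mean(vecs, axis=0)  # e_q = average of the n word embeddings
```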

Analogously, each clue $c_i \in C$ is encoded as in Equation 1. Then, we measure the cosine similarity between the query and each clue:

$s_i = \cos(e_q, e_{c_i}), \quad i = 1, \ldots, M \qquad (2)$

where $\cos(\cdot, \cdot)$ denotes the cosine similarity. Thus, we obtain a similarity score for each clue-answer pair. To finally rank the answers, we average the scores of all clue-answer pairs having the same answer:

$\mathrm{score}(a) = \frac{1}{|P_a|} \sum_{(c_k, a_k) \in P_a} s_k \qquad (3)$

where $P_a$ denotes the set of clue-answer pairs whose answer $a_k$ is equal to $a$. All the answers in $A$ are then ranked. Since we know a priori the length of the query answer, candidates with incorrect lengths are filtered out. We refer to this approach as QC (Query-Clue).
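The following sketch puts Equations 1-3 together for QC; it assumes the clue embeddings of the $M$ database pairs have been pre-computed, and all names are ours:

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity; the small epsilon guards against zero vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def qc_rank(e_q: np.ndarray, clue_embs: list, answers: list, answer_len: int):
    """Score each (clue, answer) pair by cos(e_q, e_c) (Eq. 2), average the
    scores of pairs sharing the same answer (Eq. 3), filter by the known
    answer length, and return the ranked list."""
    totals, counts = {}, {}
    for e_c, a in zip(clue_embs, answers):
        s = cosine(e_q, e_c)
        totals[a] = totals.get(a, 0.0) + s
        counts[a] = counts.get(a, 0) + 1
    scored = [(totals[a] / counts[a], a) for a in totals if len(a) == answer_len]
    return sorted(scored, reverse=True)
```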

    3.2 Similar Answers Retrieval

Since we can map text into a fixed-length space, we can also rank answers by measuring the similarity between the query and the answer itself. The query is encoded exactly as in Equation 1. In this case, however, we only need the clue-answer DB to retrieve the set of unique answers, denoted as $A$. Similarly to Equation 2, we compute the cosine similarity between query and answer embeddings:

$s_j = \cos(e_q, e_{a_j}) \qquad (4)$

for each $a_j \in A$; then we rank as in QC. We call this approach QA (Query-Answer). It is important to remark that QA is only feasible with latent representations; traditional methods like TF-IDF are not suited because of the sparsity of their representations. Moreover, QA is a somewhat orthogonal strategy with respect to QC. We will see in Section 4 how even a trivial ensemble of QA and QC benefits performance.
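QA then reduces to a direct similarity ranking over the unique answers. A sketch reusing the cosine helper from the QC example above (again, names are ours):

```python
def qa_rank(e_q, answer_embs, unique_answers, answer_len):
    """Rank unique answers directly by cos(e_q, e_a) (Eq. 4), with the same
    a-priori length filter used in QC."""
    scored = [(cosine(e_q, e_a), a)
              for e_a, a in zip(answer_embs, unique_answers)
              if len(a) == answer_len]
    return sorted(scored, reverse=True)
```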

    4. Experiments

In the experiments, we aim to prove the effectiveness of semantic search for retrieving accurate lists of candidate answers, and to show that the QA approach provides complementary information that can increase the coverage of the retrieval.

    4.1 Experimental Setup

We considered three well-known embedding models for our experiments, two static (Word2Vec1,2 and FastText3) and one contextual (Universal Sentence Encoder4), denoted as W2V, FT and USE, respectively. We used pre-trained models for all of them. In the absence of an Italian USE model, we used the multilingual version of USE, trained on 16 languages (Italian included), for the Italian crossword database. Embedding models are compared against TF-IDF, a typical text representation in document retrieval problems.
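For reference, contextual embeddings of this kind can be obtained in a few lines via TensorFlow Hub; the exact module version below is our choice, not necessarily the one used in the paper:

```python
import tensorflow_hub as hub

# Load a pre-trained Universal Sentence Encoder (512-dimensional embeddings).
use = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
embeddings = use(["capital of italy", "rome"])  # one vector per input text
print(embeddings.shape)  # (2, 512)
```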

To measure performance, we used well-known retrieval metrics, namely Mean Hit at k (MH@k) and Mean Reciprocal Rank (MRR). Hit at k is 1 if the correct answer is within the first k elements of the list, and 0 otherwise. Hits at k are evaluated for k = {1, 5, 20, 100}. MRR is defined as follows:

$\mathrm{MRR} = \frac{1}{|Q|} \sum_{q \in Q} \frac{1}{\mathrm{rank}_q}$

where $Q$ is the set of query clues and $\mathrm{rank}_q$ is the rank of the correct answer for query $q$.
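Both metrics are straightforward to compute given, for each query, the rank of the correct answer in the candidate list. A minimal sketch (treating a missing answer as rank None is our convention):

```python
def mean_hit_at_k(ranks, k):
    """MH@k: fraction of queries whose correct answer appears in the top k.
    ranks holds the 1-based rank of the correct answer, or None if absent."""
    return sum(r is not None and r <= k for r in ranks) / len(ranks)

def mean_reciprocal_rank(ranks):
    """MRR: average of 1/rank over all queries, counting missing answers as 0."""
    return sum(1.0 / r for r in ranks if r is not None) / len(ranks)
```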

    4.2 Datasets

We consider two different clue-answer databases for our experiments, one per language: the CWDB dataset (Barlacchi, Nicosia, and Moschitti 2014b) for Italian and the New York Times crosswords for English. We applied the same pre-processing pipeline to both corpora. (1) We discarded clue-answer pairs whose answers have no more than three characters, because they are typically linguistic puzzles and are addressed differently in CP solvers. (2) Answers and clues containing special characters were removed. (3) Text was lower-cased and punctuation removed. (4) We kept only answers appearing in at least two clues.
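A sketch of this pipeline as we read it; the definition of "special characters" and the processing order are our assumptions:

```python
import re
from collections import Counter

PUNCT = re.compile(r"[^\w\s]")             # punctuation and symbols, step (3)
SPECIAL = re.compile(r"[^a-zà-ÿ0-9\s]")    # our reading of "special characters" (2)

def preprocess(pairs):
    """Apply the four pre-processing steps to a list of (clue, answer) pairs."""
    kept = []
    for clue, answer in pairs:
        if len(answer) <= 3:                          # (1) short, puzzle-like answers
            continue
        clue = PUNCT.sub(" ", clue.lower()).strip()   # (3) lower-case, strip punctuation
        answer = PUNCT.sub("", answer.lower())
        if SPECIAL.search(clue + answer):             # (2) drop pairs with special chars
            continue
        kept.append((clue, answer))
    counts = Counter(a for _, a in kept)              # (4) answers in at least two clues
    return [(c, a) for c, a in kept if counts[a] >= 2]
```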

Figure 2 (image not reproduced). Comparison between the cumulative density functions of ranks obtained with USE (blue) and TF-IDF (orange) on English crosswords.

    English Crosswords

The data consist of a collection of clue-answer pairs from crossword puzzles published in the New York Times5 in 1997 and 2005, previously collected by Ernandes, Angelini, and Gori (2008). Overall, there are about 61,000 clue-answer pairs. Clues, answers and clue-answer pairs may occur multiple times. A clue is generally a short sentence; answers are usually single words, but multi-word answers also occur, in which case the answer is a string of multiple words without any separator. After pre-processing, we obtain a corpus of 31,808 pairs, of which 27,527 clues and 8,324 answers are unique.

    Italian Crosswords

The clue-answer database for Italian was constructed from the CWDB v0.1 it corpus6 (Barlacchi, Nicosia, and Moschitti 2014b). We combined pairs from both the train and test splits, since we did not perform any training in our experiments, and we omitted each clue-answer pair itself during its own evaluation. Of the original 62,011 pairs, 25,545 remain after pre-processing, with 5,813 unique answers and 16,970 unique clues.

Table 1: Performance on the Italian CWDB data. For each of the QC and QA strategies, the best value in each column is marked in bold.

| Model | Strategy | MH@1 | MH@5 | MH@20 | MH@100 | MRR |
| --- | --- | --- | --- | --- | --- | --- |
| W2V | QA | **14.97** | **32.55** | **50.35** | **71.59** | **23.80** |
| FT | QA | 6.78 | 14.47 | 26.88 | 52.46 | 11.44 |
| USE | QA | 7.89 | 17.81 | 29.30 | 46.80 | 13.24 |
| TF-IDF | QC | **60.79** | 66.43 | 68.53 | 72.62 | 63.54 |
| W2V | QC | 52.34 | 64.75 | 72.58 | 82.66 | 58.26 |
| FT | QC | 23.50 | 34.13 | 45.94 | 64.09 | 29.05 |
| USE | QC | 60.69 | **70.93** | **76.81** | **84.70** | **65.57** |
| Ensemble USE-W2V | QC-QA | - | 73.59 | 82.39 | 91.22 | - |

Table 2: Performance on the English data. For each of the QC and QA strategies, the best value in each column is marked in bold.

| Model | Strategy | MH@1 | MH@5 | MH@20 | MH@100 | MRR |
| --- | --- | --- | --- | --- | --- | --- |
| W2V | QA | 7.58 | 17.27 | 27.78 | 42.62 | 12.66 |
| FT | QA | 7.72 | 17.35 | 27.29 | 43.42 | 12.75 |
| USE | QA | **8.63** | **19.69** | **30.01** | **45.17** | **14.25** |
| TF-IDF | QC | **26.15** | 37.62 | 44.09 | 49.54 | 31.46 |
| W2V | QC | 19.63 | 31.69 | 42.66 | 57.38 | 25.65 |
| FT | QC | 15.72 | 24.32 | 32.67 | 46.64 | 20.20 |
| USE | QC | 25.78 | **38.57** | **49.34** | **63.35** | **32.12** |
| Ensemble USE-USE | QC-QA | - | 41.40 | 54.34 | 69.00 | - |

    4.3 Results

All the results for Italian and English crosswords are reported in Tables 1 and 2, respectively. They yield several interesting insights. First of all, contextual representations from the Universal Sentence Encoder are generally the most effective, especially on similar clues retrieval (QC), where both the query and the elements to rank are textual sequences. Nonetheless, Word2Vec embeddings work surprisingly well, outperforming FastText almost every time. Furthermore, they are the best on QA search on the Italian database. We believe the reason why Word2Vec outperforms USE on Italian QA is twofold. First, the advantage of contextual embeddings is less evident in the QA setup; indeed, USE brings fewer benefits on English QA as well. Second, USE is a multilingual model, so its embeddings are less specialized than those of Word2Vec, which was trained on Italian only.

When comparing semantic search models against the baseline (TF-IDF), which is only possible in QC, we notice that static embeddings struggle to outperform it. Indeed, the sparse nature of TF-IDF induces crisp similarity scores: very high for clues sharing the same keywords, extremely low for all the rest. With dense embeddings, by contrast, similarity scores are more blurred. As a consequence, TF-IDF achieves high MH@1 and MH@5 scores (and high MRR too). However, TF-IDF yields poorer coverage as the candidate list grows (MH@20 and MH@100). This behavior is also evident in Fig. 2, where we compare the cumulative distributions of ranks obtained with USE and TF-IDF. After the initial bump, the growth of TF-IDF hits is almost linear (i.e., random), whereas the Universal Sentence Encoder keeps growing significantly.

    Ensembling QC and QA

Analyzing the results, we observed that the ranks from QA and QC had little overlap. The last rows of Tables 1 and 2 report the performance of a naive ensemble combining the QC and QA strategies. Given the limited overlap, we merged the two rankings by taking the first K/2 ranks from each strategy to compute MH@K, K = {5, 20, 100}7, as sketched below. We chose the best embedding model for each strategy. Despite its simplicity and the large room for improvement, the ensemble significantly improved performance in both languages. This suggests possible directions for further improving the retrieval of CP solvers.
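A sketch of this merge; duplicate handling is our assumption, and per footnote 7, QC contributes the extra candidate when K is odd:

```python
def ensemble_top_k(qc_answers, qa_answers, k):
    """Merge two ranked answer lists by taking the first k/2 from each
    (3 from QC and 2 from QA when k = 5), skipping duplicates."""
    from_qc = (k + 1) // 2                 # QC gets the extra slot for odd k
    merged, seen = [], set()
    for a in qc_answers[:from_qc] + qa_answers[:k - from_qc]:
        if a not in seen:
            seen.add(a)
            merged.append(a)
    return merged
```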

    5. Conclusions

In this paper, we proposed two different semantic search strategies (QC and QA) for retrieving and ranking answer candidates for CP clues. We exploited pre-trained state-of-the-art embeddings, both static and contextual, to rank clue-answer pairs from databases. Embedding-based retrieval overcomes some of the limitations of inverted-index models, leading to higher-coverage rankings and enabling similar answers retrieval (QA). Finally, we observed that even a simple ensemble combining QC and QA is effective and improves overall retrieval performance.

This opens further research directions, in which learning-to-rank methods could be exploited to better combine candidate answer lists from complementary approaches such as QC and QA.

Bibliography


Gianni Barlacchi, Massimo Nicosia, and Alessandro Moschitti. 2014a. “A Retrieval Model for Automatic Resolution of Crossword Puzzles in Italian Language.” In The First Italian Conference on Computational Linguistics CLiC-it 2014, 33. https://doi.org/10.12871/CLICIT201417

Gianni Barlacchi, Massimo Nicosia, and Alessandro Moschitti. 2014b. “Learning to Rank Answer Candidates for Automatic Resolution of Crossword Puzzles.” In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, 39–48. https://doi.org/10.3115/v1/W14-16

    Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. “A Neural Probabilistic Language Model.” Journal of Machine Learning Research 3 (Feb): 1137–55.

    Muthuraman Chidambaram, Yinfei Yang, Daniel Cer, Steve Yuan, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. “Learning Cross-Lingual Sentence Representations via a Multi-Task Dual-Encoder Model.” arXiv Preprint arXiv:1810.12836.

    Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. “Natural Language Processing (Almost) from Scratch.” Journal of Machine Learning Research 12 (Aug): 2493–2537.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. “BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding.” arXiv Preprint arXiv:1810.04805.

Marco Ernandes, Giovanni Angelini, and Marco Gori. 2005. “WebCrow: A Web-Based System for Crossword Solving.” In AAAI, 1412–7.

    Marco Ernandes, Giovanni Angelini, and Marco Gori. 2008. “A Web-Based Agent Challenges Human Experts on Crosswords.” AI Magazine 29 (1): 77–77.

Matthew L. Ginsberg. 2011. “Dr. Fill: Crosswords and an Implemented Solver for Singly Weighted CSPs.” Journal of Artificial Intelligence Research 42: 851–86.

Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. 2018. “Learning Word Vectors for 157 Languages.” In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018).

    Quoc Le, and Tomas Mikolov. 2014. “Distributed Representations of Sentences and Documents.” In International Conference on Machine Learning, 1188–96. PMLR.

Michael L. Littman, Greg A. Keim, and Noam Shazeer. 2002. “A Probabilistic Approach to Solving Crossword Puzzles.” Artificial Intelligence 134 (1-2): 23–55. https://doi.org/10.1016/S0004-3702(01)00114-X

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. “RoBERTa: A Robustly Optimized BERT Pretraining Approach.” arXiv Preprint arXiv:1907.11692.

    Giuseppe Marra, Andrea Zugarini, Stefano Melacci, and Marco Maggini. 2018. “An Unsupervised Character-Aware Neural Approach to Word and Context Representation Learning.” In International Conference on Artificial Neural Networks, 126–36. Springer.

Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. “context2vec: Learning Generic Context Embedding with Bidirectional LSTM.” In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, 51–61.

    Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. “Efficient Estimation of Word Representations in Vector Space.” arXiv Preprint arXiv:1301.3781.

Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. “Advances in Pre-Training Distributed Word Representations.” In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018).

    Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. “Distributed Representations of Words and Phrases and Their Compositionality.” In Advances in Neural Information Processing Systems, 3111–9.

    Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan Černockỳ, and Sanjeev Khudanpur. 2010. “Recurrent Neural Network Based Language Model.” In Eleventh Annual Conference of the International Speech Communication Association.

Massimo Nicosia, Gianni Barlacchi, and Alessandro Moschitti. 2015. “Learning to Rank Aggregated Answers for Crossword Puzzles.” In European Conference on Information Retrieval, 556–61. Springer. https://doi.org/10.1007/978-3-319-16354-3

    Massimo Nicosia, and Alessandro Moschitti. 2016. “Crossword Puzzle Resolution in Italian Using Distributional Models for Clue Similarity.” In IIR.

Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. “Deep Contextualized Word Representations.” In Proceedings of NAACL.

    Aliaksei Severyn, Massimo Nicosia, Gianni Barlacchi, and Alessandro Moschitti. 2015. “Distributional Neural Networks for Automatic Resolution of Crossword Puzzles.” In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), 199–204.

    Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. “Attention Is All You Need.” In Advances in Neural Information Processing Systems, 5998–6008.

    Ikuya Yamada, Akari Asai, Jin Sakuma, Hiroyuki Shindo, Hideaki Takeda, Yoshiyasu Takefuji, and Yuji Matsumoto. 2020. “Wikipedia2Vec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from Wikipedia.” In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 23–30. Association for Computational Linguistics.

    Yinfei Yang, Daniel Cer, Amin Ahmad, Mandy Guo, Jax Law, Noah Constant, Gustavo Hernandez Abrego, et al. 2019. “Multilingual Universal Sentence Encoder for Semantic Retrieval.” arXiv Preprint arXiv:1907.04307.

Footnotes

    1 English: https://code.google.com/archive/p/word2vec/

    2 Italian: https://wikipedia2vec.github.io/wikipedia2vec/

    3 https://fasttext.cc/

    4 https://tfhub.dev/google/collections/universal-sentence-encoder/1

    5 https://www.nytimes.com/

    6 https://ikernels-portal.disi.unitn.it/projects/webcrow

7 Since K=5 is not even, we took the first three ranks from QC and the first two from QA.

Authors

    • Andrea Zugarini

      Expert.ai, Italy – DIISM University of Siena, Italy – azugarini@expert.ai

    • Marco Ernandes

      Expert.ai, Italy – mernandes@expert.ai

The text only may be used under the Creative Commons Attribution - NonCommercial - NoDerivatives 4.0 International licence (CC BY-NC-ND 4.0). All other elements (illustrations, imported files) are “All rights reserved”, unless otherwise stated.
