16. Automatic Construction of a Thesaurus of Proper Names: A Local Grammar-based Approach
Acknowledgements
The authors wish to acknowledge the support of the EU-IST GIDA project (IST-2000-31123), the UK ESRC’s e-Science project FINGRID (RES-149-25-0028) and the EPSRC project REVEAL (GR/S98443/01).
Introduction
Proper names are universal to natural language: members of this class are found in all languages of the world and in many different registers of writing. Proper names appear to behave differently from other noun classes both grammatically and semantically. Unlike other nouns, proper names lack number contrast, are not usually preceded by the definite article in English, and some of them can be modified by non-restrictive premodifiers. The class of proper names may include names of persons, recurrent items in the calendar, geographical locations, and organizations (Quirk et al., 1985: 288-297). Proper names are usually prefixed by honorifics, for instance Prof., Dr., Prince, Prime Minister, Bishop, Ayatollah, Rabbi, and gender markers, for example Mr, Mrs, Monsieur, Madame, Al-Sayed, Maulana, Begum. In news reportage and official documents, proper names, especially person names, co-occur with status markers (Chief Executive, Chief Operating Officer), organisation names (IBM, Iraq Survey Group), place names (Berlin, Falluja), calendrical names (Monday, January), or with names of natural substances (Lavoisier, the discoverer of oxygen) or of artefacts (Stephenson’s steam engine, Rocket). Proper names belong to the category of so-called open-class words: in this sense there is a ready supply of new names of all sorts. Proper names are key to political, financial, sports and recreational events. The creation and maintenance of a database containing proper names is one of the key challenges in information extraction.
1. Approaches to the Identification of Named Entities
The computational treatment of proper names usually involves processing a text either with a statistical Part of Speech (POS) tagger or through a “rule-driven” program. It is expected, though, that the POS tagger has been trained on a large amount of text and that, in turn, the texts have a good coverage of proper names in their various manifestations.
The coverage of many POS taggers, however, leaves something to be desired. Consider, for example, ten sentences selected randomly using the Google™ search engine and processed by the University of Lancaster’s CLAWS on-line POS tagger, which was used to tag the British National Corpus (BNC). Out of the ten sentences, CLAWS correctly tagged the complex proper names in five but made ‘errors’ in the other five (see Table 1).
In the “incorrectly” tagged sentences in Table 1, all tags apart from those of the proper names appear to be correct. Ambiguities related to the tagging of proper names have been documented on the BNC website. In most of these sentences, common nouns or adjectives have been used as proper names or as parts of compound proper names: securities in Syrus Securities (sentence 7), group in NPD Group (sentence 8) and energy information administration (sentence 10). CLAWS surprisingly tags United Nations as an adjective plus a plural common noun, and words like federal, reserve and new as nouns when used in conjunction with a proper noun (sentence 8, where New York-based is rendered as an adjective-adjective compound, and sentence 9, where the widely used compound noun Federal Reserve is tagged the same way); the noun-adjective combination (New) York-based was treated as a single adjective, perhaps due to the use of the hyphen. Sentence 6 presents a more complex problem: Bronco, incorrectly tagged as a common noun, is a borrowing from the Spanish word for a “wild horse”; the word is now used metaphorically in English – all five occurrences of the word in lower case have a metaphorical meaning. A close reading of the incorrectly tagged sentences suggests that even the initial capitalization rule would have helped to resolve the ambiguous tags generated by CLAWS.
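To make the last observation concrete, a naive initial-capitalisation check can be sketched in a few lines; the sentence below is invented for illustration (Table 1 itself is not reproduced here), and the code is only a first filter, not a substitute for a tagger.

```python
import re

def capitalised_candidates(sentence):
    """Flag non-sentence-initial tokens that begin with a capital letter:
    the naive heuristic referred to above, useful only as a first filter
    for potential (parts of) proper names in English."""
    tokens = re.findall(r"[A-Za-z][\w&.-]*", sentence)
    return [tok for i, tok in enumerate(tokens) if i > 0 and tok[0].isupper()]

# Sentence invented for illustration.
print(capitalised_candidates(
    "Analysts at the New York-based NPD Group expect the Federal Reserve to act."))
# -> ['New', 'York-based', 'NPD', 'Group', 'Federal', 'Reserve']
```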
Rule-based proper-noun programs have prior orthographic/pragmatic information – for instance, programs that use the information that proper names in English are capitalized (Coates-Stephens, 1993; Krupka and Hausman, 1998) or that compound proper names are delimited by the use of commas and not by closed-class words (McDonald, 1993). The rules incorporated in such programs appear to be powerful yet intuitive, and in many ways are Anglo-centric. The Message Understanding Conferences (MUCs) have helped to look at other languages, including Spanish, Chinese, and Japanese, and have encouraged the so-called hybrid approach combining statistical and rule-based techniques (Cucchiarelli and Velardi, 1999; Tür et al., 2000). The performance of these systems, measured using recall and precision statistics, is claimed to be comparable with that of human experts. However, it appears that the hybrid systems require extensive manual tagging of words during the training stage to achieve a reasonable success rate (Bikel et al., 1997). It seems that in addition to POS taggers and rule-based programs, something more is needed in the way of knowledge of how proper names are used in text. This is the burden of the argument presented in this paper.
Information extraction systems generally rely on a list of proper names to help in their identification in texts. However, if the list is not exhaustive, then a rule-based approach is used to infer the existence of a proper noun by examining only words with certain orthographic or syntactic features (initial capitalization in English, for example). In most applications, it is not only necessary to identify proper names but also to find attributes of people, organisations, places, and other named entities. What is required here is a thesaurus rather than a glossary of proper names. A thesaurus usually comprises entry words and words related to the entry words through some conceptual relation. A glossary and/or thesaurus of proper names – gazetteers or yellow pages – is a handcrafted artefact. The automatic construction of a thesaurus of proper names requires the identification of templates used to indicate, for instance, the name of a person together with his or her affiliations, identity and so on. The keyword here is template: a repeatedly used set of words that includes a proper name and associated information (attributes), robust enough to cover as many proper names as possible, yet simple enough in structure and content that algorithms can be developed and templates created for identifying and classifying a proper name.
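As a minimal sketch of what a thesaurus entry for a proper name might hold, the following data structure records a person name together with attribute sets; the field names and the example person are our own illustration, not the authors’ data model.

```python
from dataclasses import dataclass, field

@dataclass
class ProperNameEntry:
    """One thesaurus entry: a proper name plus attributes found in text.
    The field names are illustrative; the paper does not prescribe a
    concrete data model."""
    name: str
    titles: set = field(default_factory=set)         # e.g. "Chief Executive"
    organisations: set = field(default_factory=set)  # e.g. "IBM"
    places: set = field(default_factory=set)         # e.g. "Berlin"

entry = ProperNameEntry("Jane Smith")   # hypothetical person
entry.titles.add("Chief Executive")
entry.organisations.add("IBM")
```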
2. Notes on “Local Grammar”
The fact that proper names behave differently from other nouns has somehow relegated this sub-category to the ‘periphery’ of theoretical linguistics (McDonald, 1993). Complex noun phrases comprising proper names, including chemical nomenclature, dates and times, appear not to follow the grammatical rules that usually govern phrases containing common nouns (Harris, 1991; Gross, 1993). For instance, the polypeptides were washed in hydrochloric acid and Monday the 19th of January 2005 at 3:00 pm are two examples of phrases whose behaviour is governed by a local grammar.
Consider, for example, the sentences comprising specific dates and/or times (Table 2, Column 1).
One cannot substitute any other determiner for “a” and “the” in the sentences above, or substitute any other (proper) nouns for the names of days and months. However, transformations, including ellipsis, are sometimes allowed. For example, the name of the day in a date phrase such as the incident took place on Tuesday May 2nd, 1969 is redundant and can be omitted, since the contiguous numerical date can be used to recover the name of the associated day. But one can easily construct a complex finite state automaton for recognizing and generating 100% acceptable/correct sentences for telling times and dates (Figure 1)3.
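Figure 1 is not reproduced in this extract. As a rough sketch of the idea, a small finite-state recogniser for date phrases of the form on (Day) Month NumDay, NumYear can be written as a regular expression (regular expressions and finite state automata are equivalent in expressive power); the restriction to this one phrase shape is our simplification.

```python
import re

DAYS = r"(Monday|Tuesday|Wednesday|Thursday|Friday|Saturday|Sunday)"
MONTHS = (r"(January|February|March|April|May|June|July|August|"
          r"September|October|November|December)")

# A linear automaton: on -> [Day] -> Month -> NumDay(+ordinal) -> ',' -> NumYear.
# The day name is optional, mirroring the ellipsis discussed above.
DATE_PHRASE = re.compile(
    rf"\bon\s+(?:{DAYS}\s+)?{MONTHS}\s+\d{{1,2}}(?:st|nd|rd|th)?,\s+\d{{4}}\b")

print(bool(DATE_PHRASE.search("the incident took place on Tuesday May 2nd, 1969")))  # True
print(bool(DATE_PHRASE.search("the incident took place on May 2nd, 1969")))          # True
```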
Local grammars have also been used to parse dictionary definitions (Barnbrook and Sinclair, 1995), to extract proper names from Korean texts (Choi and Nam, 1997), and to extract verb patterns from texts in English and Portuguese (Oliver, 2004; Ranchhod et al., 1999). Finite-State Automata/Transducers (FSA/T) typically facilitate the implementation of local grammars (Friburger and Maurel, 2004). For us, local grammar is more than a handy notation system: rather, it is a reflection of the exclusive use of some members of a grammatical category.
I. A Method for Automatically Identifying Patterns of Proper Names
Proper names are used in different ways in various genres of texts. For example, in narrative fiction, verbs of “saying” (said, asked, laughed, thought, added, knew, told, saw) are used together with orthographic markers to report the utterances of the various actors and patients involved. Table 3 shows a sample of 10 sentences, randomly chosen from a set of 1 724 sentences containing the saying verb said, extracted from the BNC.
The phrases preceding/prefixing the saying word contain pronominal references (he, she, I) and person names – occasionally prefixed with an honorific. These phrases are referred to as X in Table 4 below.
A detailed analysis of these sentences shows that there are 303 instances where “said” is preceded by a phrase containing a person name, which we refer to as an NE, and 506 instances where an NE comes after the word said. Other synonyms of said were used less frequently (see Table 4).
CLAWS will correctly tag these combinations with proper names as: (1) NE + VDD and (2) VDD + NE.
VDD and NE refer to a past-tense verb and a proper-noun phrase respectively. The NE has a more complex structure (see below) but always comprises words belonging to the category NP0. This, however, is not true of the category labelled VDD, which essentially covers the past tense of all known verbs. A sample of 100 verbs thesaurally related to said, extracted by Kilgarriff et al. (2004), can be substituted for said.
Indeed, Table 4 shows that there is a subset Vsaying (said, laughed, added, thought, asked, knew, told) which can only be used in templates (1) and (2), to the exclusion of all other members of the category of past-tense verbs. The past-tense verbs that cannot substitute for said include saw, looked, took, gave, went, made, got, came, found, left and sat, for example.
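A crude sketch of how templates of the shape NE + Vsaying and Vsaying + NE might be matched is given below; the regular expression standing in for the NE phrase (an optional honorific plus capitalised tokens) is far simpler than the NE structure discussed in the paper, and the example text is invented.

```python
import re

# The restricted saying verbs observed in Table 4 (our listing).
V_SAYING = r"(?:said|asked|laughed|thought|added|knew|told)"
# A crude stand-in for the NE phrase: optional honorific + capitalised tokens.
NE = r"(?:(?:Mr|Mrs|Dr|Prof)\.?\s+)?(?:[A-Z][a-z]+\s?)+"

TEMPLATE_NE_V = re.compile(rf"({NE})\s*{V_SAYING}")   # NE precedes the verb
TEMPLATE_V_NE = re.compile(rf"{V_SAYING}\s+({NE})")   # NE follows the verb

text = '"We are pleased," said Mr John Brown. Jane Smith added that costs fell.'
print([m.group(1).strip() for m in TEMPLATE_NE_V.finditer(text)])  # ['Jane Smith']
print([m.group(1).strip() for m in TEMPLATE_V_NE.finditer(text)])  # ['Mr John Brown']
```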
The case of special language texts is more complex. Consider the genre of economic, political or financial news reports. In this genre we find sentences corresponding to templates (1) and (2) very frequently, but there is an added restriction on Vsaying: in most cases the most frequently used member of this category is the verb said, followed by told and added.
Let us first look at some evidence taken from the corpus we built from Reuters Financial News (REUFIN). The frequency of said is much higher in REUFIN than in the BNC – and in both corpora said is the most frequent saying verb (Table 5). The key difference between sentences containing said in news stories and in narrative fiction is the complexity of the NE phrases; there are additional proper names (e.g. organisation, place or date names) in these phrases, which are occasionally used to name a person. We discuss these differences in turn.
The left-right asymmetry of the position of the saying verbs in the context of a named entity is even more pronounced in the specialist texts. More importantly, far fewer members of the category of past-tense verbs, and more specifically of saying verbs, are included (Table 5).
The complexity of the NE phrases in REUFIN manifests itself in the fact that names of persons are seldom used without their affiliations or job descriptions. Randomly chosen examples of these phrases appear in Table 6.
There is, for us, convincing visual evidence of the existence of a local grammar that governs the behaviour of proper names in general language: there is even more convincing evidence in specialist texts such as economic and financial news. This grammar will help in the unambiguous identification of named entities if it can (a) be extracted automatically, to avoid the bias of a specialism, and (b) be operationalised as a finite state automaton. This we discuss next.
Templates (1) and (2) are collocations, referred to in the literature as phrasal templates: collocations between two words that may be interspersed with one or several empty slots (Smadja, 1993). These templates will help us to build a thesaurus of named entities in that we would be able to identify not only a person’s name but also his or her job description, title and affiliation. A thesaurus helps to establish relationships between words, or in our case proper names; with an inferencing program, it becomes possible to relate people who work in the same organisation, or people who have the same job description within the same or different organisations.
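A toy illustration of the kind of inferencing mentioned above, relating people through a shared attribute in the thesaurus; the entries are hypothetical.

```python
from collections import defaultdict

# Hypothetical thesaurus entries: person name -> attributes.
thesaurus = {
    "John Brown": {"title": "Chief Executive", "organisation": "Acme Corp"},
    "Jane Smith": {"title": "Chief Operating Officer", "organisation": "Acme Corp"},
    "Lee Wong":   {"title": "Chief Executive", "organisation": "Globex"},
}

def group_by(attribute):
    """Group person names by a shared attribute value."""
    groups = defaultdict(list)
    for person, attrs in thesaurus.items():
        groups[attrs[attribute]].append(person)
    return dict(groups)

print(group_by("organisation"))  # people who work in the same organisation
print(group_by("title"))         # people with the same job description
```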
II. An Algorithm for Building Thesauri of Proper Names
Typically, thesauri are handcrafted; what we propose next is that such a task could be automated. This automated process requires a corpus of specialist texts, a frequency analysis program, a collocation finder, and some FSA utilities. The algorithm presented in Figure 2 should, in principle, perform this complex task of thesaurus building; it helps in automatically identifying the local grammar for proper names in the specialist domain. Once the local grammar is developed, the INTEX system (Silberztein, 2000) is used to implement it as FSA, which are then applied to the corpus to build a thesaurus ab initio. Note that a thesaurus can be updated using the same algorithm: during the ab initio phase the system assumes an “empty” thesaurus, and during the updating phase it uses a previously constructed thesaurus.
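Figure 2 is not reproduced in this extract. The following is a highly reduced sketch of the loop it describes, under our own assumptions: the “frequency analysis” merely picks a node verb, a single hand-written regular expression stands in for the induced local grammar and the INTEX FSA cascade, and the thesaurus argument distinguishes the ab initio phase from the updating phase.

```python
import re
from collections import Counter

def build_thesaurus(sentences, thesaurus=None):
    """Very reduced sketch of the thesaurus-building loop described above."""
    thesaurus = thesaurus if thesaurus is not None else {}   # ab initio vs. update

    # 1. Frequency analysis: a crude lexical signature, here just the most
    #    frequent of three candidate saying verbs.
    tokens = [t for s in sentences for t in re.findall(r"[A-Za-z']+", s.lower())]
    freq = Counter(tokens)
    node = max(("said", "told", "added"), key=lambda v: freq[v])

    # 2. One hand-written phrasal template standing in for the FSA cascade:
    #    <node> <Name>, <title> of <Organisation>
    template = re.compile(
        rf"{node}\s+([A-Z][a-z]+(?:\s[A-Z][a-z]+)*)\s*,\s*([a-z ]+) of ([A-Z][\w ]+)")

    # 3. Apply the template and fold the results into the thesaurus.
    for s in sentences:
        for name, title, org in template.findall(s):
            entry = thesaurus.setdefault(name, {"titles": set(), "organisations": set()})
            entry["titles"].add(title.strip())
            entry["organisations"].add(org.strip())
    return thesaurus

sents = ['"Rates may rise," said John Brown, chief economist of Acme Bank.']
print(build_thesaurus(sents))   # hypothetical sentence and names
```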
1. The Extraction of Local Grammars
The process of extracting a local grammar includes two major tasks: a frequency analysis and a collocation analysis. We have conducted such analyses on REUFIN and discuss the results below.
a. Frequency Analysis
A corpus of a specialist domain invariably contains a lexical signature of the domain: more frequent use of terms associated with key concepts in the domain when compared with the use of most other words (Ahmad and Rogers, 2001). This contrast is quite apparent when we compare the 50-100 most frequent words in a domain-specific corpus with the 50-100 most frequent words in a corpus of general language texts, for instance the BNC (see Table 7 for a comparison).
A list of rank-ordered single words in the two corpora shows some interesting differences between the two. First, said is the 7th most frequent word in the REUFIN corpus but only the 54th most frequent in the BNC. Note also that the 10 most frequent words in both corpora comprise over 20% of all words; the next 10 most frequent comprise over 5% of the corpus. In all, the 50 most frequently used words comprise over 38% of both corpora (see Table 8).
The highlighted words in Table 8 are essentially content words – nouns and verbs – and this category is not represented among the first 50 or even the 100 most frequent words in the BNC. These words comprise part of the lexical signature of the financial domain.
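A sketch of the kind of comparison behind Tables 7 and 8: words whose relative frequency in the domain corpus most exceeds their relative frequency in a general corpus. The ratio used here is one common way of operationalising a lexical signature, not necessarily the exact measure used by the authors, and the two “corpora” are toy strings.

```python
import re
from collections import Counter

def relative_freq(tokens):
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def lexical_signature(domain_tokens, general_tokens, top_n=10):
    """Words whose relative frequency in the domain corpus most exceeds their
    relative frequency in the general corpus (a small constant avoids division
    by zero for words absent from the general corpus)."""
    dom, gen = relative_freq(domain_tokens), relative_freq(general_tokens)
    ratio = {w: f / (gen.get(w, 0.0) + 1e-9) for w, f in dom.items()}
    return sorted(ratio, key=ratio.get, reverse=True)[:top_n]

tokenise = lambda text: re.findall(r"[a-z']+", text.lower())
domain = tokenise("shares rose and the company said the market expects percent gains")
general = tokenise("the cat sat on the mat and the dog said nothing")
print(lexical_signature(domain, general, top_n=5))
```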
b. Collocation Analysis
A study of the collocation patterns of the 10 lexical signature terms (Table 9) shows the existence of interesting patterns associated with each of these terms. In other studies we have found phrasal templates, governed by local grammars, involving terms like percent and shares/pounds/market/company (Ahmad et al., 2003).
We now attempt to find the collocates of the most frequent verb, said (48 014 occurrences), in the REUFIN corpus. It is our observation that, in the news domain, proper names often occur in language constructions (patterns) containing this word.
We have computed the downward collocations (Sinclair, 1991) of said and chosen five strong collocates of this word. These collocations are shown in Table 10.
A closer examination of the collocations of said revealed that this word also collocates with tokens that have a higher frequency, such as the definite and indefinite articles the (60 148) and a (79 085), the speech mark " (60 148), and the period (218 300). This upward collocation (see Table 11), as we show later, is quite important and should be considered.
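A sketch of window-based collocate counting of the kind described above, splitting the collocates of a node word into downward (no more frequent than the node) and upward (more frequent than the node) collocates in Sinclair’s sense; the window size, the tokenisation and the toy text are our own choices.

```python
from collections import Counter

def collocates(tokens, node, window=2):
    """Count tokens within `window` positions of `node`, then split them into
    downward (no more frequent than the node) and upward (more frequent than
    the node) collocates."""
    freq = Counter(tokens)
    near = Counter()
    for i, tok in enumerate(tokens):
        if tok == node:
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    near[tokens[j]] += 1
    downward = {w: c for w, c in near.items() if freq[w] <= freq[node]}
    upward = {w: c for w, c in near.items() if freq[w] > freq[node]}
    return downward, upward

text = '" We expect growth in the economy , " said the spokesman . ' \
       'The bank said the outlook for the year is good .'
down, up = collocates(text.split(), "said")
print("downward:", down)
print("upward:", up)     # e.g. the highly frequent article "the"
```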
This simple collocation analysis helps in identifying tokens often used in the proximity of the verb said; for example, the speech mark (Table 11) is found mostly to occur within one or two tokens to the left of the verb said. A visual inspection of sentence fragments containing said and its collocates (e.g. ") is then carried out to determine empty slots and thus isolate local grammars (phrasal templates) with similar structures. The structures of the most frequent local grammars found in REUFIN are shown in Table 12. Note that PN and ORG indicate person and organization names respectively, TITLE refers to names of titles and job descriptions, and AT represents the set of grammatical words found in operation in these sentences.
2. FSA/T for the Local Grammar (The Use of INTEX)
Finite-state transducers are finite-state machines that produce an output when an input is recognized in a given text (Friburger and Maurel, 2004). This feature makes FSTs extremely suitable for the implementation of local grammars. In our case, the input side encodes the set of local grammars to be recognized in texts, and the output side contains the phrases recognised by the local grammars.
INTEX is a corpus processing system that also supplies tools to represent descriptions of natural languages as FSTs and apply them to large corpora (Silberztein, 2000). We use the FSA toolbox of this system to implement the local grammars identified in REUFIN, and then apply them to REUFIN again in order to construct thesauri of proper names. These grammars are implemented as eighty-nine FSTs distributed over nine cascades. The cascade in Figure 3 shows frequent FSTs for the shaded pattern in Table 12 above.
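The INTEX graphs themselves cannot be reproduced here. Purely to illustrate the transducer idea (an input pattern is recognised and an output annotation is emitted), a single cascade step can be mimicked with a tagging substitution; the two patterns and the bracketed tags below are simplified placeholders, not the authors’ eighty-nine transducers.

```python
import re

# One "transducer" per pattern: when the input side matches, the output side
# rewrites the span with a bracketed annotation.  Applying the list in order
# mimics a (very small) cascade.
CASCADE = [
    (re.compile(r"\b((?:Mr|Mrs|Dr)\.?\s+[A-Z][a-z]+(?:\s[A-Z][a-z]+)*)"), r"<PN>\1</PN>"),
    (re.compile(r"\b([A-Z][a-z]+(?:\s[A-Z][a-z]+)*\s(?:Inc|Corp|Group|Bank))\b"), r"<ORG>\1</ORG>"),
]

def apply_cascade(text):
    for pattern, output in CASCADE:
        text = pattern.sub(output, text)
    return text

print(apply_cascade("Mr John Brown of Acme Bank said the deal was close."))
# -> <PN>Mr John Brown</PN> of <ORG>Acme Bank</ORG> said the deal was close.
```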
III. Evaluation
The local grammars discovered in REUFIN not only help in the extraction of person and organization names in news articles, but they also give access to a wealth of information that can be used to build a thesaurus of proper names.
This hypothesis was validated by analysing the monthly data in REUFIN and calculating, for each month, the percentage of unique names that had not been seen in any previous month. Note that all names extracted from the data for October (the month with the highest number of proper names) are regarded as new. The analysis of the other months also yielded new names (an average of 157 names per month), and therefore more names could be added to the thesaurus. Over the course of the year’s data, this thesaurus eventually covered information on 2 238 different people (see Figure 4).
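The month-by-month measure described above can be sketched as follows; the data here are invented, and only the shape of the computation (the share of names in each month not seen in any previous month) follows the text.

```python
def new_name_rates(names_by_month):
    """For each month, in order, the percentage of extracted names not seen
    in any previous month.  Input: list of (month, set_of_names) pairs."""
    seen, rates = set(), {}
    for month, names in names_by_month:
        new = names - seen
        rates[month] = 100.0 * len(new) / len(names) if names else 0.0
        seen |= names
    return rates

# Toy data only; the study itself used a year of Reuters financial news.
months = [
    ("October",  {"J. Brown", "A. Patel", "L. Wong"}),
    ("November", {"J. Brown", "M. Rossi"}),
    ("December", {"M. Rossi", "K. Tanaka", "A. Patel"}),
]
print(new_name_rates(months))   # the first month is 100% new by construction
```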
Conclusion
In this paper we have discussed how corpus linguistics methods and techniques can be used in conjunction with the so-called local grammar mechanism to identify phrasal templates of proper names. A small vocabulary comprising verbs and prepositions, together with punctuation marks such as speech marks and parentheses, was used to put together information about people and organizations. We showed how local grammars can be compiled, built and used to construct thesauri of proper names from news articles.
We have attempted to demonstrate that, instead of invoking the full paraphernalia and concomitant expectations of universal grammars and short contrived texts, it is perhaps better to focus on local grammars applied to large, so-called real-world corpora. Our work is currently entering its evaluation phase, and a small-scale evaluation study has been encouraging. We have instituted our own long-term evaluation, which involves the extraction of proper names in domains other than news articles.
Bibliography
AHMAD K. and ROGERS M. (2001), “Corpus Linguistics and Terminology Extraction”, in Sue-Ellen WRIGHT and Gerhard BUDIN (eds.), Handbook of Terminology Management, volume 2, Amsterdam & Philadelphia, John Benjamins Publishing Company, p. 725-760.
AHMAD K., CHENG D. and TRABOULSI H. (2003), “Special Language and Local Grammar: Analysing Financial News Streams”, in Proceedings of LSP 2003, the 14th European Symposium on Language for Special Purposes, Surrey, UK, p. 38-43.
BARNBROOK G. and SINCLAIR J. (1995), “Parsing CoBuild Entries”, in SINCLAIR J., HOELTER M., PETERS C. (eds.), The Languages of Definition: The Formalization of Dictionary Definitions for Natural Language Processing, Luxembourg, Office for Official Publications of the European Communities, p. 13-58.
BIKEL D. M., MILLER S., SCHWARTZ R. and WEISCHEDEL R. (1997), “Nymble: a High-Performance Learning Name-finder”, in Proceedings of the 5th Conference on Applied Natural Language Processing, (http://citeseer.ist.psu.edu/cache/papers/cs/15361/ http:zSzzSzwww.csi.uottawa.cazSztankazSzArtDBzSzanlp.corrected.pdf/bikel97nymble.pdf, site visited 14 Feb 2005).
CHOI Key-sun and NAM Jee-sun (1997), “A Local-Grammar-based Approach to Recognizing of Proper Names in Korean Texts”, in Joe ZHOU and Kenneth CHURCH (eds.), Proceedings of the Fifth Workshop on Very Large Corpora, China, p. 273-288.
COATES-STEPHENS S. (1993), “The Analysis and Acquisition of Proper Names for the Understanding of Free Text”, Computers and the Humanities, volume 26, p. 441-456.
CUCCHIARELLI A. and VELARDI P. (1999), “Adaptability of Linguistic Resources to New Domains: an Experiment with Proper Noun Dictionaries”, in Proceedings of VEXTAL’99, Venezia – San Servolo, p. 22-24.
FRIBURGER N. and MAUREL D. (2004), “Finite-state transducer cascades to extract named entities in texts”, Theoretical Computer Science, volume 313, p. 93-104.
GROSS M. (1993), “Local Grammars and their Representation by Finite Automata”, in HOEY M. (ed.), Data, Description, Discourse, London, HarperCollins, p. 26-38.
HARRIS Z. (1991), A Theory of Language and Information: A Mathematical Approach, Oxford, Clarendon Press.
KRUPKA G. and HAUSMAN K. (1998), “IsoQuest, Inc.: Description of the NetOwl™ Extractor System as Used for MUC-7”, in Proceedings of MUC-7, San Francisco, Morgan Kaufmann.
MCDONALD D.D. (1993), “Internal and External Evidence in the Identification and Semantic Categorisation of Proper Names”, in B. BOGURAEV and J. PUSTEJOVSKY (eds), Corpus Processing for Lexical Acquisition, Cambridge, The MIT Press, p. 61-76.
OLIVER M. (2004), “Automatic Processing of Local Grammar Patterns”, in Proceedings of the 7th Annual CLUK (the UK Special-interest Group for Computational Linguistics) Research Colloquium, London, (http://www.cs.bham.ac.uk/mgl/cluk/titles.html, site visited 14 Feb 2005).
KILGARRIFF A., RYCHLY P., MASARYK P. S., TUGWELL D. (2004), “The Sketch Engine”, in Proceedings of EURALEX 2004, Lorient, France, (http://www.sketchengine.co.uk/sketch-engineelx 04.pdf, site visited 14 February 2005).
QUIRK R., GREENBAUM S., LEECH G., SVARTVIK J. (1985), A Comprehensive Grammar of the English Language, London, Longman.
RANCHHOD E., MOTA C., BAPTISTA J. A. (1999), “Computational Lexicon of Portuguese for Automatic Text Parsing. Standardizing Lexical Resources”, in Proceedings of a workshop sponsored by SIGLEX’99, Maryland, p. 74-80.
SILBERZTEIN M. (2000), “INTEX: an FST toolbox”, Theoretical Computer Science, volume 231, p. 33-46.
SINCLAIR J. (1991), Corpus, Concordance, Collocation, London, Oxford University Press.
SMADJA F. (1993), “Retrieving Collocations from Text : Xtract”, Computational Linguistics, volume 19, p. 143-177.
TÜR G., HAKKANI-TÜR D. and OFLAZER K. (2000), “Name Tagging Using Lexical, Contextual and Morphological Information”, in Proceedings of the Workshop on Information Extraction Meets Corpus Linguistics at the Second International Conference on Language Resources and Evaluation (LREC 2000), Greece.
Notes
2 http://www.comp.lancs.ac.uk/computing/research/ucrel/claws/trial.html.
3 NumYear and NumDay refer to year and day numbers respectively.
4 The Sketch Engine is used to extract the frequency of the structure variants from the BNC online. Access at: http://www.sketchengine.co.uk.