Explorations in the distributional and semantic similarity of words

Title: Explorations in the distributional and semantic similarity of words
Publication Type: Thesis
Year of Publication: 2011
Authors: Piitulainen J
Academic Department: Department of Modern Languages
Degree: PhD
University: University of Helsinki
City: Helsinki
ISBN Number: 978-952-10-6760-0
Abstract

A straightforward computation of the list of words (the ‘tail words’ of the list) that are distributionally most similar to a given word (the ‘head word’ of the list) leads to the question: how semantically similar to the head word are the tail words, that is, how similar are their meanings to its meaning? And can we do better? The experiment was done on the nearly 18,000 most frequent nouns in a Finnish newsgroup corpus. These nouns are considered distributionally similar to the extent that they occur in the same direct dependency relations with the same nouns, adjectives and verbs. The similarity of their computational representations is quantified with the information radius. The semantic classification of head-tail pairs is intuitive: some tail words seem to be semantically similar to the head word, some do not. Each such pair is also associated with a number of further distributional variables. Individually, these variables overlap considerably across the semantic classes, but the trained classification-tree models have some success in using combinations of them to predict the semantic class. The training data consists of a random sample of 400 head-tail pairs with the tail word ranked among the 20 distributionally most similar to the head word, excluding names. The models are then tested on a random sample of another 100 such pairs. The best success rates range from 70% to 92% of the test pairs, where a success means that the model predicted my intuitive semantic class of the pair. This seems somewhat promising for the use of distributional similarity to capture semantically similar words. The analysis also includes a general discussion of several different similarity formulas, arranged in three groups: those that apply to sets with graded membership, those that apply to the members of a vector space, and those that apply to probability mass functions.
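As an illustration of the divergence named in the abstract, the following is a minimal Python sketch (not the thesis code) of the information radius between two probability mass functions. It assumes each word is represented as a dictionary mapping dependency features to probabilities, uses the sum of the two Kullback-Leibler terms (conventions in the literature vary between the sum and the average), and the toy feature names and word vectors are purely hypothetical.

from math import log2

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in bits.

    Terms with p[x] == 0 contribute nothing; q is assumed to be
    non-zero wherever p is (true for the mixture used below)."""
    return sum(px * log2(px / q[x]) for x, px in p.items() if px > 0.0)

def information_radius(p, q):
    """Information radius of two probability mass functions:
    D(p || m) + D(q || m), where m is the even mixture of p and q.

    Ranges from 0 (identical distributions) to 2 bits (disjoint support)."""
    support = set(p) | set(q)
    m = {x: 0.5 * (p.get(x, 0.0) + q.get(x, 0.0)) for x in support}
    return kl_divergence(p, m) + kl_divergence(q, m)

# Toy example: two nouns represented by distributions over dependency features.
cat = {"subj:purr": 0.5, "obj:feed": 0.3, "mod:black": 0.2}
dog = {"subj:bark": 0.5, "obj:feed": 0.3, "mod:black": 0.2}
print(information_radius(cat, dog))  # smaller value = more distributionally similar

In a setting like the one described above, the tail words of a head word would be the words with the smallest information radius to it, ranked in increasing order of the divergence.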

URL: http://urn.fi/URN:ISBN:978-952-10-6760-0