|Title||Statistically Generated Summary Sentences: A Preliminary Evaluation using a Dependency Relation Precision Metric|
|Publication Type||Conference Paper|
|Year of Publication||2005|
|Authors||Wan, S, Dras, M, Dale, R, Paris, C|
|Conference Name||Corpus Linguistics 2005 Workshop on Using Corpora for Natural Language Generation|
Often in summarisation, we are required to generate a summary sentence that incorporates the important elements of a related set of sentences. In this paper, we do this by using a statistical approach that combines models of n-grams and dependency structure. The approach is one in which words are recycled and re-combined to form a new sentence, one that is grammatical and that reflects the content of the source material. We use an extension to the Viterbi algorithm that generates a sequence that is not only the best n-gram word sequence, but also best replicates component dependency structures taken from the source text. In this paper, we describe the extension and outline a preliminary evaluation that measures dependency structure recall and precision in the generated string. We find that our approach achieves higher precision when compared to a bigram generator.
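To make the baseline concrete, the following is a minimal sketch of the kind of bigram Viterbi generation the paper compares against: a dynamic-programming search over a small "recycled" vocabulary for the highest-scoring word sequence under a bigram model alone. The vocabulary, log-probabilities, and function name are invented for illustration and are not the authors' implementation, which additionally scores replicated dependency structures.

```python
import math

# Hypothetical toy bigram log-probabilities over a small recycled vocabulary;
# a real system would estimate these from the source sentences.
BIGRAM = {
    ("<s>", "the"): -0.5, ("the", "cat"): -0.7, ("the", "dog"): -1.2,
    ("cat", "sat"): -0.6, ("dog", "sat"): -0.9, ("sat", "</s>"): -0.4,
    ("cat", "</s>"): -2.5, ("dog", "</s>"): -2.5,
}
VOCAB = ["the", "cat", "dog", "sat"]

def bigram_viterbi(max_len):
    """Find the best word sequence (up to max_len words) under the
    bigram model alone -- the baseline generator in the paper."""
    # best[w] = (score, path) for the best sequence of the current
    # length that ends in word w
    best = {"<s>": (0.0, [])}
    finished = (-math.inf, [])
    for _ in range(max_len):
        new_best = {}
        for prev, (score, path) in best.items():
            for w in VOCAB:
                s = score + BIGRAM.get((prev, w), -math.inf)
                if s > new_best.get(w, (-math.inf, []))[0]:
                    new_best[w] = (s, path + [w])
        best = new_best
        # allow the sentence to terminate at this length
        for prev, (score, path) in best.items():
            s = score + BIGRAM.get((prev, "</s>"), -math.inf)
            if s > finished[0]:
                finished = (s, path)
    return finished[1]

print(bigram_viterbi(3))  # ['the', 'cat', 'sat'] under these toy scores
```

The authors' extension would augment the per-transition score with a term rewarding word pairs that stand in a dependency relation in the source text, so the search prefers sequences that replicate source dependency structures as well as fluent bigrams.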