| Title | Benchmarking ARS: anaphora resolution system |
| Publication Type | Conference Paper |
| Year of Publication | 2011 |
| Authors | Xian, BCM, Zahari, F, Lukose, D |
| Conference Name | 11th International Conference on Knowledge Management and Knowledge Technologies |
| Conference Location | Graz, Austria |
| Keywords | anaphora, natural language processing and application, pronominal anaphora resolution |
Benchmarking is an established way of evaluating automatic systems that tackle the same task. This paper presents the results of benchmarking the Anaphora Resolution System (ARS) developed at MIMOS against several similar systems, together with the lessons learnt from the exercise. The dataset used in this benchmarking effort consists of texts containing pronominal anaphora, definite noun phrase anaphora, pleonastic anaphora and reader/writer anaphora. The authors used Recall, Precision and F-measure (F1 score) to evaluate the results.
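The evaluation metrics named in the abstract have standard definitions, which can be sketched as follows. This is a minimal illustration only, not code from the paper; the counts `tp`, `fp`, and `fn` (true positives, false positives, false negatives over resolved anaphors) are assumed names introduced here.

```python
def precision_recall_f1(tp, fp, fn):
    """Compute Precision, Recall and F1 from raw counts.

    tp: anaphors resolved to the correct antecedent
    fp: anaphors resolved to a wrong antecedent
    fn: anaphors the system failed to resolve
    (These count names are illustrative, not from the paper.)
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # F1 is the harmonic mean of precision and recall
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1
```

For example, a system that resolves 8 anaphors correctly, 2 incorrectly, and misses 2 would score a precision and recall of 0.8 each, and hence an F1 of 0.8.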