Ranking retrieval systems with partial relevance judgements
- Wu, Shengli
  School of Computing and Mathematics, University of Ulster, United Kingdom
- Fabio Crestani
  Facoltà di scienze informatiche, Università della Svizzera italiana, Switzerland
Published in:
- Journal of Universal Computer Science, 2008, vol. 14, no. 7, p. 1020-1030

Abstract (English)
Measures such as mean average precision and recall level precision are regarded as good system-oriented measures because they take into account both precision and recall, two key aspects in evaluating the effectiveness of information retrieval systems. However, these measures suffer from shortcomings when only partial relevance judgments are available. In this paper, we discuss how to rank retrieval systems under partial relevance judgments, a condition that is common in major retrieval evaluation events such as the TREC conferences and the NTCIR workshops. Four system-oriented measures are considered: mean average precision, recall level precision, normalized discounted cumulative gain, and normalized average precision over all documents. Our investigation shows that averaging values over a set of queries may not be the most reliable approach to ranking a group of retrieval systems. Alternatives such as Borda count, Condorcet voting, and the zero-one normalization method are investigated. Experimental results evaluating these methods are also presented.
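The abstract names Borda count as one alternative to straight score averaging. The following is a minimal sketch, in Python, of how per-query effectiveness scores (for example, average precision values) could be aggregated that way; the data layout and the name borda_rank are illustrative assumptions, not taken from the paper.

    # Illustrative sketch (not the paper's implementation): rank systems by
    # Borda count over per-query effectiveness scores instead of averaging.

    def borda_rank(per_query_scores):
        """per_query_scores maps query_id -> {system_name: score}.
        For each query, systems are ordered by score and awarded Borda
        points (best gets n-1, worst gets 0); points are summed over all
        queries and systems are returned best-first."""
        totals = {}
        for scores in per_query_scores.values():
            ordered = sorted(scores, key=scores.get, reverse=True)
            n = len(ordered)
            for position, system in enumerate(ordered):
                totals[system] = totals.get(system, 0) + (n - 1 - position)
        return sorted(totals, key=totals.get, reverse=True)

    if __name__ == "__main__":
        # Hypothetical per-query scores for three systems over two queries.
        scores = {
            "q1": {"sysA": 0.42, "sysB": 0.38, "sysC": 0.15},
            "q2": {"sysA": 0.10, "sysB": 0.55, "sysC": 0.30},
        }
        print(borda_rank(scores))  # -> ['sysB', 'sysA', 'sysC']

Because only the relative order of systems within each query is used, this kind of aggregation is insensitive to differences in score scale across queries, which is the general motivation for the voting and normalization alternatives named in the abstract.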

Language
English

Classification
Computer science and technology

License
License undefined

Persistent URL
https://n2t.net/ark:/12658/srd1318361