Demonstration of an open source framework for qualitative evaluation of CBIR systems
Document type: Conference lecture
Publisher: Association for Computing Machinery (ACM)
Rights access: Open Access
Evaluating image retrieval systems quantitatively, for example by computing measures such as mean average precision, allows objective comparison against a ground truth. When no ground truth is available, however, the only alternative is to collect feedback from users, and qualitative assessment becomes important for understanding how the system behaves. In some scenarios, visualizing the results is the only way to evaluate them, and the only opportunity to discover that a system is failing. This calls for a User Interface (UI) for a Content Based Image Retrieval (CBIR) system that allows results to be visualized and improved by capturing user relevance feedback. A well-designed UI makes the system's performance easier to understand, both in cases where it works well and, perhaps more importantly, in those that highlight the need for improvement. Our open-source system implements three components that help researchers quickly add these capabilities to their retrieval engine. We present: a web-based user interface to visualize retrieval results and collect user annotations; a server that simplifies connection with any underlying CBIR system; and a server that manages the search engine data.
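The quantitative baseline mentioned above, mean average precision (mAP), can be computed from each query's ranked result list and its ground-truth relevant set. The sketch below is illustrative only (it is not part of the framework described in the paper); function names and data layout are assumptions.

```python
def average_precision(ranked, relevant):
    """Average precision for one query: the mean of precision@k taken
    at each rank k where a relevant item appears.
    ranked: ordered list of retrieved item IDs; relevant: set of relevant IDs."""
    hits, score = 0, 0.0
    for k, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            score += hits / k  # precision at this rank
    return score / len(relevant) if relevant else 0.0

def mean_average_precision(per_query):
    """per_query: list of (ranked_list, relevant_set) pairs, one per query."""
    return sum(average_precision(r, rel) for r, rel in per_query) / len(per_query)
```

For a query whose ranked list is `["a", "b", "c"]` with relevant set `{"a", "c"}`, precision is 1/1 at rank 1 and 2/3 at rank 3, giving an average precision of 5/6; mAP averages this value over all queries.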
Citation: Gomez, P., Mohedano, E., McGuinness, K., Giro, X., O'Connor, N. Demonstration of an open source framework for qualitative evaluation of CBIR systems. In: ACM Multimedia Conference. "Proceedings of 2018 ACM Multimedia Conference, Seoul, Republic of Korea, October 22-26, 2018 (MM'18)". New York: Association for Computing Machinery (ACM), 2018, p. 1256-1257.