WANDS: Dataset for Product Search Relevance Assessment
EasyChair Preprint 7347, version 1
14 pages • Date: January 18, 2022

Abstract
Search relevance is an important performance indicator used to evaluate search engines. It measures how well the products returned in search results match users' queries. E-commerce sites rely on search engines to help customers find relevant products among millions of options. The scale of the data makes it difficult to create relevance-focused evaluation datasets manually. As an alternative, user click logs are often mined to create datasets. However, such logs capture only a slice of user behavior in the production environment and do not provide a complete set of candidates for annotation. To overcome these challenges, we propose a systematic and effective way to build a discriminative, reusable, and fair human-labeled dataset, the Wayfair Annotation DataSet (WANDS), for e-commerce scenarios. Our proposal introduces an important cross-referencing step to the annotation process, which significantly increases dataset completeness. Experimental results show that this process is effective in improving the scalability of human annotation efforts. We also show that the dataset is effective in evaluating and discriminating between different search models. As part of this contribution, we will also release the dataset, which is, to our knowledge, the largest publicly available search relevance dataset in the e-commerce domain.

Keyphrases: Annotation Process, Information Retrieval, Product Search Relevance, annotation guideline, dataset, dataset completeness, evaluation, evaluation dataset, search relevance assessment, search relevance dataset
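To make the evaluation claim concrete, the following is a minimal sketch (not the paper's actual pipeline) of how a human-labeled relevance dataset like WANDS can be used to compare search models with NDCG@k; the label map, grade scale, and model run structures shown here are hypothetical placeholders.

```python
import math

def dcg(relevances, k=10):
    """Discounted cumulative gain over the top-k graded relevance labels."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances[:k]))

def ndcg(ranked_labels, k=10):
    """Normalized DCG: DCG of the ranking divided by DCG of the ideal ordering."""
    ideal = sorted(ranked_labels, reverse=True)
    denom = dcg(ideal, k)
    return dcg(ranked_labels, k) / denom if denom > 0 else 0.0

# Hypothetical label map: (query_id, product_id) -> graded relevance
# (e.g. 2 = exact match, 1 = partial match, 0 = irrelevant).
labels = {("q1", "p3"): 2, ("q1", "p7"): 1, ("q1", "p9"): 0}

def evaluate(run, labels, k=10):
    """Mean NDCG@k for a model run mapping query_id -> ranked product_ids.
    Unjudged products default to relevance 0, which is why annotation
    completeness (e.g. via cross-referencing) matters for fair comparison."""
    scores = []
    for qid, ranking in run.items():
        rels = [labels.get((qid, pid), 0) for pid in ranking]
        scores.append(ndcg(rels, k))
    return sum(scores) / len(scores) if scores else 0.0

# Compare two hypothetical search models on the same labeled queries.
model_a = {"q1": ["p3", "p7", "p9"]}
model_b = {"q1": ["p9", "p7", "p3"]}
print(evaluate(model_a, labels), evaluate(model_b, labels))
```

Because unjudged candidates are scored as irrelevant in this kind of evaluation, incomplete annotation can unfairly penalize models that surface valid but unlabeled products, which is the gap the cross-referencing step in the abstract is meant to address.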