mdekstrand committed · Commit 1e1dfe1 · Parent: cb74be0

README update

Files changed (1): eval/README.md (+53, -40)

eval/README.md CHANGED
@@ -38,44 +38,45 @@ third, longer list for each query for pooling.
 
  ## Training and Preparatory Data
 
- We are providing the following data to track participants (coming soon):
-
- * Product metadata and user purchase session data from the [Amazon M2][M2] data
-   set.
- * Annotated search results from the [Amazon ESCI][ESCI] data set.
- * Annotated training and validation data synthesized from the annotations in the
-   [Amazon ESCI][ESCI] data set, along with the synthesis code for reference and
-   synthesis of additional training data. ESCI is a search data set; the
-   recommendation data is generated from its annotations by selecting the Exact
-   product as the reference item, and using the Substitute and Complementary
-   annotations to assess relationships to the Exact item instead of to the query.
-   One of our hoped-for meta-outcomes for this task is a better understanding of
-   how that data compares to annotations generated specifically for the
-   related-product recommendation task.
- * Documentation for linking the provided data with the [Amazon reviews and
-   product data](https://amazon-reviews-2023.github.io/) provided by Julian
-   McAuley's research group at UCSD (for reference and supplementary training
-   data if desired, not a formal part of the task).
-
- The search corpus is formed by combining the M2 and ESCI product training data
- sets and filtering as follows:
-
- * All items must also appear in the UCSD review data set (for more detailed
-   descriptions for the assessors).
- * All items must be in the US locale.
- * All items must have descriptions of at least 50 characters.
- * Only items in the *Electronics*, *Home and Garden*, and *Sports and Outdoors*
-   categories are included.
-
- Amazon product identifiers are consistent across both data sets.
 
  You are **not** limited to the product data in the corpus — feel free to enrich
  with other sources, such as other data available in the original ESCI or M2 data
  sets, or the UCSD Ratings & Reviews.
 
  [ESCI]: https://amazonkddcup.github.io/
  [M2]: https://kddcup23.github.io/
 
  ## Task Definition and Query Data
 
@@ -84,15 +85,20 @@ recommendations. Each request contains a single Amazon product ID (the
  *reference item*). For each reference item, the system should produce (and teams
  submit) **three** output lists:
 
- 1. A ranked list of 100 related items, with an annotation as to whether they are complementary or substitute. This will be used to generate deeper pools for evaluation.
  2. A list of 10 **Top Complementary** items.
  3. A list of 10 **Top Substitute** items.
 
- Participant solutions are not restricted to the training data we provide — it is acceptable to enrich the track data with additional data sources such as the Amazon Review datasets for training or model operation.
 
  ### Query Format
 
- The query data will be in a CSV file with 3 columns: query ID, product ID (ASIN), and the product title.
 
  ### Run Format
 
@@ -118,20 +124,27 @@ Recommended items from submitted runs will be pooled and assessed by NIST assess
 
  ## Evaluation Metrics
 
- The primary evaluation metric will be **NDCG** computed separately for each top-substitute and top-complement recommendation list. This will be aggregated in the following ways to produce submission-level metrics:
 
- * Separate **Complement NDCG** and **Substitute NDCG**, using the relevance grades above (1, 2, and 3) as the gain.
- * **Average NDCG**, averaging the NDCG across all runs. This is the top-line metric for ordering systems in the final report.
 
  We will compute supplementary metrics including:
 
- * **Pool NDCG** of the longer related-product run, where the gain for an incorrectly-classified item is 50% of the gain it would have if it were correctly classified.
  * Agreement of annotations in the long (pooling) run.
- * **Diversity** of the substitute and complementary product lists, computed over fine-grained product category data from the 2023 Amazon Reviews data set.
 
- ## **Timeline**
 
- * Task Data Release: May 15, 2025
  * Development Period: Summer 2025
- * Test Query Release: Late August 2025
- * Submission Deadline: Early Sept. 2025
 
  ## Training and Preparatory Data
 
+ [repo]: https://huggingface.co/datasets/trec-product-search/product-recommendation-2025/
+ [README]: https://huggingface.co/datasets/trec-product-search/product-recommendation-2025/blob/main/eval/README.md
+
+ We have provided the following data to track participants, available [on
+ HuggingFace][repo]:
+
+ * A product corpus curated from [Amazon M2][M2] and [Amazon ESCI][ESCI],
+   filtered to only include items also available in the McAuley Lab's Amazon
+   reviews data.
+ * Training / validation queries and qrels for the Substitute and Complementary
+   subtasks, synthesized from Amazon ESCI (see the [README][] for details).
+
+ For your final submissions, use the **eval** directory.
 
+ All data is recorded with ASINs, so your model can be trained by cross-linking
+ it with other public datasets (a join sketch follows the list below):
+
+ * [Amazon M2][M2] (user purchase sessions)
+ * [Amazon ESCI][ESCI] (annotated search results)
+ * [Amazon reviews and product data][UCSD]
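
As a rough illustration of that ASIN cross-linking, the sketch below joins a corpus table with the UCSD 2023 reviews metadata. The file names and columns used here (`corpus.parquet`, `asin`, `parent_asin`, `average_rating`, `features`) are assumptions for illustration; check the repository README for the actual schema.

```python
import pandas as pd

# Hypothetical file and column names -- verify against the repository
# README before running.
corpus = pd.read_parquet("corpus.parquet")  # track corpus, keyed by ASIN
reviews_meta = pd.read_json("meta_Electronics.jsonl", lines=True)

# The 2023 UCSD metadata keys items by parent ASIN; join on the shared
# Amazon product identifier to pull in richer item information.
enriched = corpus.merge(
    reviews_meta[["parent_asin", "average_rating", "features"]],
    left_on="asin",
    right_on="parent_asin",
    how="left",
)
```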
 
  You are **not** limited to the product data in the corpus — feel free to enrich
  with other sources, such as other data available in the original ESCI or M2 data
  sets, or the UCSD Ratings & Reviews.
 
+ Our repository also contains copies of the relevant pieces of the original M2
+ and ESCI data sets, pursuant to their Apache licenses. The search corpus is
+ formed by combining the M2 and ESCI product training data sets and filtering
+ as follows (a sketch of these filters follows below):
+
+ * All items must also appear in the UCSD review data set (for more detailed
+   descriptions for the assessors).
+ * All items must be in the US locale.
+ * All items must have descriptions of at least 50 characters.
+ * Only items in the *Electronics*, *Home and Garden*, and *Sports and Outdoors*
+   categories are included.
+
  [ESCI]: https://amazonkddcup.github.io/
  [M2]: https://kddcup23.github.io/
+ [UCSD]: https://amazon-reviews-2023.github.io/
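
A minimal sketch of those four filters, assuming a pandas DataFrame with hypothetical `asin`, `locale`, `description`, and `category` columns (the real corpus files may use different names):

```python
import pandas as pd

# Hypothetical column names; the actual corpus schema may differ.
KEEP_CATEGORIES = {"Electronics", "Home and Garden", "Sports and Outdoors"}

def filter_corpus(products: pd.DataFrame, ucsd_asins: set[str]) -> pd.DataFrame:
    """Apply the four corpus filters described above."""
    mask = (
        products["asin"].isin(ucsd_asins)              # also in UCSD review data
        & (products["locale"] == "US")                 # US locale only
        & (products["description"].str.len() >= 50)    # >= 50-char description
        & products["category"].isin(KEEP_CATEGORIES)   # allowed categories only
    )
    return products[mask]
```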
 
  ## Task Definition and Query Data
 
  *reference item*). For each reference item, the system should produce (and teams
  submit) **three** output lists:
 
+ 1. A ranked list of 100 related items, with an annotation as to whether each is
+    a complement or a substitute. This will be used to generate deeper pools for
+    evaluation.
  2. A list of 10 **Top Complementary** items.
  3. A list of 10 **Top Substitute** items.
 
+ Participant solutions are not restricted to the training data we provide — it is
+ acceptable to enrich the track data with additional data sources, such as the
+ Amazon Review datasets, for training or model operation.
 
  ### Query Format
 
+ The query data will be in a CSV file with 3 columns: query ID, product ID
+ (ASIN), and the product title.
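
For illustration, a minimal loader for that query file. The file name and header names (`queries.csv`, `query_id`, `asin`, `title`) are assumptions; the text above fixes only the column order.

```python
import pandas as pd

# Assumed names -- the format guarantees only the column order:
# query ID, product ID (ASIN), product title.
queries = pd.read_csv(
    "queries.csv",
    names=["query_id", "asin", "title"],
    header=0,   # drop this if the file ships without a header row
    dtype=str,
)
for row in queries.itertuples(index=False):
    print(row.query_id, row.asin, row.title)
```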
 
  ### Run Format
 
  ## Evaluation Metrics
 
+ The primary evaluation metric will be **NDCG**, computed separately for each
+ top-substitute and top-complement recommendation list. This will be aggregated
+ in the following ways to produce submission-level metrics (see the sketch after
+ this list):
+
+ * Separate **Complement NDCG** and **Substitute NDCG**, using the relevance
+   grades above (1, 2, and 3) as the gain.
+ * **Average NDCG**, averaging the NDCG across all runs. This is the top-line
+   metric for ordering systems in the final report.
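
A minimal sketch of graded-gain NDCG as described above, assuming a log2 rank discount and the relevance grade used directly as the gain; the track's official scorer may differ in truncation and tie-handling details.

```python
import math

def dcg(grades: list[float]) -> float:
    """Discounted cumulative gain, with the relevance grade as the gain."""
    return sum(g / math.log2(rank + 2) for rank, g in enumerate(grades))

def ndcg(run_grades: list[float], qrel_grades: list[float]) -> float:
    """NDCG for one recommendation list: DCG normalized by the ideal DCG."""
    ideal = dcg(sorted(qrel_grades, reverse=True)[: len(run_grades)])
    return dcg(run_grades) / ideal if ideal > 0 else 0.0

# e.g., a 10-item substitute list judged with grades 1-3 (0 = not relevant):
print(ndcg([3, 0, 2, 1, 0, 0, 3, 0, 0, 1], [3, 3, 2, 2, 1, 1, 1]))
```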
 
  We will compute supplementary metrics including:
 
+ * **Pool NDCG** of the longer related-product run, where the gain for an
+   incorrectly-classified item is 50% of the gain it would have if it were
+   correctly classified (see the gain sketch after this list).
  * Agreement of annotations in the long (pooling) run.
+ * **Diversity** of the substitute and complementary product lists, computed over
+   fine-grained product category data from the 2023 Amazon Reviews data set.
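
To make the Pool NDCG gain rule concrete, a one-function sketch of the per-item gain; the label strings are hypothetical.

```python
def pool_gain(grade: float, predicted: str, judged: str) -> float:
    """Gain for one item in the 100-item pooling run: a misclassified item
    (e.g., submitted as 'complement' but judged 'substitute') keeps only
    50% of its graded gain. Label names here are illustrative."""
    return grade if predicted == judged else 0.5 * grade
```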
 
+ ## Timeline
 
+ * Task Data Release: **Now available**
  * Development Period: Summer 2025
+ * Test Query Release: **Aug. 25, 2025**
+ * Submission Deadline: **Sep. 4, 2025**