Muennighoff committed
Commit e6d0539 · verified · 1 Parent(s): d3f4550

Scheduled Commit

data/retrieval_side_by_side-5b78c89f-244c-4bc4-9693-2c3c7acb6977.jsonl CHANGED
@@ -33,3 +33,4 @@
  {"tstamp": 1723172565.8947, "task_type": "retrieval", "type": "rightvote", "models": ["intfloat/multilingual-e5-large-instruct", "Salesforce/SFR-Embedding-2_R"], "ip": "", "0_conv_id": "33adb863dd96420ba2ca7baa322db0c0", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "when did sweet caroline start at red sox games", "0_output": [["when did sweet caroline start at red sox games", "Boston Red Sox\n\nAnother song associated with the team and its fan base is Neil Diamond's 1969 single \"Sweet Caroline\". The song was first introduced to Fenway Park in 1997. By 2002, its play had been established as a nightly occurrence. It continues to be played at every home game during the 8th inning, sung along to by those in attendance. In 2007, Diamond revealed that the song was written for Caroline Kennedy, American diplomat and daughter of Boston icon President John F. Kennedy. Caroline Kennedy's great-grandfather, John F. Fitzgerald, threw Fenway Park's first-ever ceremonial opening pitch on April 20, 1912. When Diamond was named a Kennedy Center Honors recipient in 2011, Red Sox executive assistant Claire Durant arranged for 80 Red Sox fans to travel to Washington for the ceremony, which culminated in them singing the song behind Smokey Robinson onstage.\nHonors and achievements\nAwards\nFor major MLB awards, voted by the Baseball Writers' Association of America (BBWAA), Red Sox players have won the MVP Award 12 times, most recently by Mookie Betts in 2018; the Cy Young Award seven times, most recently by Rick Porcello in 2016; Rookie of the Year six times, most recently by Dustin Pedroia in 2007; and Manager of the Year twice, most recently by Jimy Williams in 1999.\nRetired numbers\nPreviously, the Red Sox published three official requirements for a player to have his number retired on their website and in their annual media guides. The requirements were as follows:\nElection to the National Baseball Hall of Fame\nAt least 10 years played with the Red Sox\nFinished his career with the club.\nThese requirements were reconsidered after the election of Carlton Fisk to the Hall of Fame in 2000; who met the first two requirements but played the second half of his career with the Chicago White Sox. As a means of meeting the criteria, then-GM Dan Duquette hired Fisk for one day as a special assistant, which allowed Fisk to technically finish his career with the Red Sox."]], "0_corpus": "wikipedia", "1_conv_id": "153a744877134ccc803621f7f8bca125", "1_model_name": "Salesforce/SFR-Embedding-2_R", "1_prompt": "when did sweet caroline start at red sox games", "1_output": [["when did sweet caroline start at red sox games", "Sweet Caroline\n\nThe song has been played at Fenway Park, home of Major League Baseball's Boston Red Sox, since at least 1997 and in the middle of the eighth inning at every game since 2002. On opening night of the 2010 season at Fenway Park, the song was performed live by Diamond himself. Several days after the Boston Marathon bombing in April 2013, Neil Diamond led the crowd at Fenway Park in a rendition of the song. Sales of the song surged nearly 600 percent in the week after the bombings, to 19,000 copies, up from 2,800 the week before. Diamond said that he would donate his royalties from those sales to the One Fund Boston charity to help the people affected by the bombings.\nThe Iowa State Cyclones have used \"Sweet Caroline\" as a football victory song since 2006.\nSince 2008, the University of Pittsburgh has used \"Sweet Caroline\" as an unofficial school sing-along song by inserting \"Let's Go Pitt!\" over the instrumental three-beat \"Ba Ba Ba\" interval after the title refrain and replacing the repeated phrase \"So good\" with \"Go Pitt!\" The song started as a rallying anthem played between the third and fourth quarters of Pittsburgh Panthers football games, but has been adopted for use during other university sports contests, alumni events, and student ceremonies, including graduation commencement ceremonies, and references to the song have appeared on various school merchandise. However, Pitt's arch-rival, West Virginia University, \"adapts\" the song for their own purposes to troll Pitt, even when Pitt is not the opponent.\n\"Sweet Caroline\" is sometimes part of the regular rotation of songs during sports events at other universities."]], "1_corpus": "wikipedia"}
  {"tstamp": 1723172679.9525, "task_type": "retrieval", "type": "rightvote", "models": ["sentence-transformers/all-MiniLM-L6-v2", "Salesforce/SFR-Embedding-2_R"], "ip": "", "0_conv_id": "eb1819d8b7e54b34a72186205e00631c", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "Summary and research directions on Multimodal Large Language Models", "0_output": [["Summary and research directions on Multimodal Large Language Models", "Title: Quantum and classical solutions for free particle in wedge billiards\n\nAbstract: We have studied the quantum and classical solutions of a particle constrained to move inside a sector circular billiard with angle $\\theta_w$ and its pacman complement with angle $2\\pi-\\theta_w$. In these billiards rotational invariance is broken and angular momentum is no longer a conserved quantum number. The \"fractional\" angular momentum quantum solutions are given in terms of Bessel functions of fractional order, with indices $\\lambda_p={p\\pi \\over {\\theta_w}}$, $p=1,2,...$ for the sector and $\\mu_q={q\\pi \\over {2\\pi - \\theta_w}}$, $q=1,2...$ for the pacman. We derive a ``duality'' relation between both fractional indices given by $\\lambda_p={{p\\mu_q} \\over {2\\mu_q - q}}$ and $\\mu_q = {{q\\lambda_p} \\over {2\\lambda_p - p}}$. We find that the average of the angular momentum $\\hat L_z$ is zero but the average of $\\hat L^2_z$ has as eigenvalues $\\lambda_p^2$ and $\\mu_q^2$. We also make a connection of some classical solutions to their quantum wave eigenfunction counterparts."]], "0_corpus": "arxiv", "1_conv_id": "37efe1eba978414a986fc75ab15911c1", "1_model_name": "Salesforce/SFR-Embedding-2_R", "1_prompt": "Summary and research directions on Multimodal Large Language Models", "1_output": [["Summary and research directions on Multimodal Large Language Models", "Title: Large Multimodal Agents: A Survey\n\nAbstract: Large language models (LLMs) have achieved superior performance in powering text-based AI agents, endowing them with decision-making and reasoning abilities akin to humans. Concurrently, there is an emerging research trend focused on extending these LLM-powered AI agents into the multimodal domain. This extension enables AI agents to interpret and respond to diverse multimodal user queries, thereby handling more intricate and nuanced tasks. In this paper, we conduct a systematic review of LLM-driven multimodal agents, which we refer to as large multimodal agents ( LMAs for short). First, we introduce the essential components involved in developing LMAs and categorize the current body of research into four distinct types. Subsequently, we review the collaborative frameworks integrating multiple LMAs , enhancing collective efficacy. One of the critical challenges in this field is the diverse evaluation methods used across existing studies, hindering effective comparison among different LMAs . Therefore, we compile these evaluation methodologies and establish a comprehensive framework to bridge the gaps. This framework aims to standardize evaluations, facilitating more meaningful comparisons. Concluding our review, we highlight the extensive applications of LMAs and propose possible future research directions. Our discussion aims to provide valuable insights and guidelines for future research in this rapidly evolving field. An up-to-date resource list is available at https://github.com/jun0wanan/awesome-large-multimodal-agents."]], "1_corpus": "arxiv"}
  {"tstamp": 1723172815.3851, "task_type": "retrieval", "type": "rightvote", "models": ["voyage-multilingual-2", "Salesforce/SFR-Embedding-2_R"], "ip": "", "0_conv_id": "1d6ec0b62c7e4bbd864ae281b7641cfb", "0_model_name": "voyage-multilingual-2", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: The Scandinavian Embedding Benchmarks: Comprehensive Assessment of Multilingual and Monolingual Text Embedding\n\nAbstract: The evaluation of English text embeddings has transitioned from evaluating a handful of datasets to broad coverage across many tasks through benchmarks such as MTEB. However, this is not the case for multilingual text embeddings due to a lack of available benchmarks. To address this problem, we introduce the Scandinavian Embedding Benchmark (SEB). SEB is a comprehensive framework that enables text embedding evaluation for Scandinavian languages across 24 tasks, 10 subtasks, and 4 task categories. Building on SEB, we evaluate more than 26 models, uncovering significant performance disparities between public and commercial solutions not previously captured by MTEB. We open-source SEB and integrate it with MTEB, thus bridging the text embedding evaluation gap for Scandinavian languages."]], "0_corpus": "arxiv", "1_conv_id": "e2b61d5c469c4dcc97e3e2df6f58b25a", "1_model_name": "Salesforce/SFR-Embedding-2_R", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "1_corpus": "arxiv"}
+ {"tstamp": 1723184535.6745, "task_type": "retrieval", "type": "bothbadvote", "models": ["Alibaba-NLP/gte-Qwen2-7B-instruct", "BAAI/bge-large-en-v1.5"], "ip": "", "0_conv_id": "b4da1da8874c43fdbda8dc10b9cb2018", "0_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "0_prompt": "倧于5000ε…ƒηš„ε•†ε“", "0_output": [["倧于5000ε…ƒηš„ε•†ε“", "Pricing\n\nTwo-part pricing Two-part pricing is a variant of captive-market pricing used in service industries. Two-part pricing breaks the actual price into two parts; a fixed service fee plus a variable consumption rate. Two-part pricing tactics are widely used by utility companies such as electricity, gas and water and services where there is a quasi- membership type relationship, credit cards where an annual fee is charged and theme parks where an entrance fee is charged for admission while the customer pays for rides and extras. One part of the price represents a membership fee or joining fee, while the second part represents the usage component.\nPsychological pricing\nPsychological pricing is a range of tactics designed to have a positive psychological impact. Price tags using the terminal digit \"9\", ($9.99, $19.99 or $199.99) can be used to signal price points and bring an item in at just under the consumer's reservation price. Psychological pricing is widely used in a variety of retail settings.\nPremium pricingPremium pricing (also called prestige pricing) is the strategy of consistently pricing at, or near, the high end of the possible price range to help attract status-conscious consumers. The high pricing of a premium product is used to enhance and reinforce a product's luxury image. Examples of companies that partake in premium pricing in the marketplace include Rolex and Bentley. As well as brand, product attributes such as eco-labelling and provenance (e.g. 'certified organic' and 'product of Australia') may add value for consumers and attract premium pricing. A component of such premiums may reflect the increased cost of production. People will buy a premium priced product because:\nThey believe the high price is an indication of good quality"]], "0_corpus": "wikipedia", "1_conv_id": "d871b51334c64013a78b3cacae8de909", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "倧于5000ε…ƒηš„ε•†ε“", "1_output": [["倧于5000ε…ƒηš„ε•†ε“", "500 (number)\n\n500 (five hundred) is the natural number following 499 and preceding 501.\nMathematical properties\n500 = 22 Γ— 53. It is an Achilles number and an Harshad number, meaning it is divisible by the sum of its digits. It is the number of planar partitions of 10.\nOther fields\nFive hundred is also\nthe number that many NASCAR races often use at the end of their race names (e.g., Daytona 500), to denote the length of the race (in miles, kilometers or laps).\nthe longest advertised distance (in miles) of the IndyCar Series and its premier race, the Indianapolis 500."]], "1_corpus": "wikipedia"}
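A minimal sketch (not part of this commit or the dataset's tooling) of one way the vote records in this file could be loaded and tallied. It assumes the file stays valid JSONL with the fields visible in the records above; the default path matches the file touched in this diff, while the helper name and any vote labels beyond the "rightvote" and "bothbadvote" seen here are illustrative assumptions.

import json
from collections import Counter

def tally_votes(path="data/retrieval_side_by_side-5b78c89f-244c-4bc4-9693-2c3c7acb6977.jsonl"):
    # Count vote outcomes per model pair. Labels seen in this diff are
    # "rightvote" and "bothbadvote"; any other labels in the file are counted as-is.
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)          # one vote record per JSONL line
            pair = tuple(record["models"])     # (model_0, model_1) as listed in the record
            counts[(pair, record["type"])] += 1
    return counts

if __name__ == "__main__":
    for (pair, vote), n in tally_votes().most_common():
        print(f"{vote:12s} {n:4d}  {pair[0]} vs {pair[1]}")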