Muennighoff committed (verified)
Commit d8e5955 · 1 Parent(s): 99e4864

Scheduled Commit

data/retrieval_battle-ffd1dcbf-aab7-4cd1-bfa8-5b0d8404144b.jsonl CHANGED
@@ -71,3 +71,6 @@
71
  {"tstamp": 1732945699.9093, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "18edd85bb15f4a59b053491fb64adb22", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "A paper that improves conversation generation using commonsense knowledge graphs to model dialog flows.", "0_output": [["A paper that improves conversation generation using commonsense knowledge graphs to model dialog flows.", "Title: Grounded Conversation Generation as Guided Traverses in Commonsense Knowledge Graphs\n\nAbstract: Human conversations naturally evolve around related concepts and scatter to multi-hop concepts. This paper presents a new conversation generation model, ConceptFlow, which leverages commonsense knowledge graphs to explicitly model conversation flows. By grounding conversations to the concept space, ConceptFlow represents the potential conversation flow as traverses in the concept space along commonsense relations. The traverse is guided by graph attentions in the concept graph, moving towards more meaningful directions in the concept space, in order to generate more semantic and informative responses. Experiments on Reddit conversations demonstrate ConceptFlow's effectiveness over previous knowledge-aware conversation models and GPT-2 based models while using 70% fewer parameters, confirming the advantage of explicit modeling conversation structures. All source codes of this work are available at https://github.com/thunlp/ConceptFlow."]], "0_corpus": "arxiv", "1_conv_id": "615634575f1446868b2b07c86961500f", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "A paper that improves conversation generation using commonsense knowledge graphs to model dialog flows.", "1_output": [["A paper that improves conversation generation using commonsense knowledge graphs to model dialog flows.", "Title: Grounded Conversation Generation as Guided Traverses in Commonsense Knowledge Graphs\n\nAbstract: Human conversations naturally evolve around related concepts and scatter to multi-hop concepts. This paper presents a new conversation generation model, ConceptFlow, which leverages commonsense knowledge graphs to explicitly model conversation flows. By grounding conversations to the concept space, ConceptFlow represents the potential conversation flow as traverses in the concept space along commonsense relations. The traverse is guided by graph attentions in the concept graph, moving towards more meaningful directions in the concept space, in order to generate more semantic and informative responses. Experiments on Reddit conversations demonstrate ConceptFlow's effectiveness over previous knowledge-aware conversation models and GPT-2 based models while using 70% fewer parameters, confirming the advantage of explicit modeling conversation structures. All source codes of this work are available at https://github.com/thunlp/ConceptFlow."]], "1_corpus": "arxiv"}
72
  {"tstamp": 1732945811.1992, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "f701a684c7ea4dfd8a7c55614b7e579d", "0_model_name": "Salesforce/SFR-Embedding-2_R", "0_prompt": "is 000 a valid cvv code?", "0_output": [["is 000 a valid cvv code?", "Stackoverflow Stackexchange\n\nQ: Can a credit card cvv2 start with a 0? This is a very simple question that isn't easily found on the internets.\nCan a credit card cvv2 number start with a 0?\nExample: 082\nCommon sense says no (since it's an integer and leading zeros don't change the value) but I have a report of an end-user with a 0 as the first digit.\nLink to reputable documentation such as Visa or a payment processor please.\n\nA: Yes, a credit card cvv2 number can include a zero. I do not have any documentation other than the fact that my Visa check card has the number \"087\". I hope this helps.\n"]], "0_corpus": "stackexchange", "1_conv_id": "51808b01cf1d4c1ba39c0346ab9e3798", "1_model_name": "voyage-multilingual-2", "1_prompt": "is 000 a valid cvv code?", "1_output": [["is 000 a valid cvv code?", "Stackoverflow Stackexchange\n\nQ: Can a credit card cvv2 start with a 0? This is a very simple question that isn't easily found on the internets.\nCan a credit card cvv2 number start with a 0?\nExample: 082\nCommon sense says no (since it's an integer and leading zeros don't change the value) but I have a report of an end-user with a 0 as the first digit.\nLink to reputable documentation such as Visa or a payment processor please.\n\nA: Yes, a credit card cvv2 number can include a zero. I do not have any documentation other than the fact that my Visa check card has the number \"087\". I hope this helps.\n"]], "1_corpus": "stackexchange"}
73
  {"tstamp": 1732945962.7329, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "b24592833ad54fc582b0e24e1c3cc5ed", "0_model_name": "text-embedding-004", "0_prompt": "Which paper released the BEIR benchmark?", "0_output": [["Which paper released the BEIR benchmark?", "Title: BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models\n\nAbstract: Existing neural information retrieval (IR) models have often been studied in homogeneous and narrow settings, which has considerably limited insights into their out-of-distribution (OOD) generalization capabilities. To address this, and to facilitate researchers to broadly evaluate the effectiveness of their models, we introduce Benchmarking-IR (BEIR), a robust and heterogeneous evaluation benchmark for information retrieval. We leverage a careful selection of 18 publicly available datasets from diverse text retrieval tasks and domains and evaluate 10 state-of-the-art retrieval systems including lexical, sparse, dense, late-interaction and re-ranking architectures on the BEIR benchmark. Our results show BM25 is a robust baseline and re-ranking and late-interaction-based models on average achieve the best zero-shot performances, however, at high computational costs. In contrast, dense and sparse-retrieval models are computationally more efficient but often underperform other approaches, highlighting the considerable room for improvement in their generalization capabilities. We hope this framework allows us to better evaluate and understand existing retrieval systems, and contributes to accelerating progress towards better robust and generalizable systems in the future. BEIR is publicly available at https://github.com/UKPLab/beir."]], "0_corpus": "arxiv", "1_conv_id": "211a5da6398c49f89873d4212e595c62", "1_model_name": "voyage-multilingual-2", "1_prompt": "Which paper released the BEIR benchmark?", "1_output": [["Which paper released the BEIR benchmark?", "Title: Resources for Brewing BEIR: Reproducible Reference Models and an Official Leaderboard\n\nAbstract: BEIR is a benchmark dataset for zero-shot evaluation of information retrieval models across 18 different domain/task combinations. In recent years, we have witnessed the growing popularity of a representation learning approach to building retrieval models, typically using pretrained transformers in a supervised setting. This naturally begs the question: How effective are these models when presented with queries and documents that differ from the training data? Examples include searching in different domains (e.g., medical or legal text) and with different types of queries (e.g., keywords vs. well-formed questions). While BEIR was designed to answer these questions, our work addresses two shortcomings that prevent the benchmark from achieving its full potential: First, the sophistication of modern neural methods and the complexity of current software infrastructure create barriers to entry for newcomers. To this end, we provide reproducible reference implementations that cover the two main classes of approaches: learned dense and sparse models. Second, there does not exist a single authoritative nexus for reporting the effectiveness of different models on BEIR, which has led to difficulty in comparing different methods. To remedy this, we present an official self-service BEIR leaderboard that provides fair and consistent comparisons of retrieval models. By addressing both shortcomings, our work facilitates future explorations in a range of interesting research questions that BEIR enables."]], "1_corpus": "arxiv"}
74
+ {"tstamp": 1732946112.2794, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "e5eafd35bf72420ea69e8a36c42219cb", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "Information leakage of personally identifiable data in GPT-2 models", "0_output": [["Information leakage of personally identifiable data in GPT-2 models", "Title: Analyzing Leakage of Personally Identifiable Information in Language Models\n\nAbstract: Language Models (LMs) have been shown to leak information about training data through sentence-level membership inference and reconstruction attacks. Understanding the risk of LMs leaking Personally Identifiable Information (PII) has received less attention, which can be attributed to the false assumption that dataset curation techniques such as scrubbing are sufficient to prevent PII leakage. Scrubbing techniques reduce but do not prevent the risk of PII leakage: in practice scrubbing is imperfect and must balance the trade-off between minimizing disclosure and preserving the utility of the dataset. On the other hand, it is unclear to which extent algorithmic defenses such as differential privacy, designed to guarantee sentence- or user-level privacy, prevent PII disclosure. In this work, we introduce rigorous game-based definitions for three types of PII leakage via black-box extraction, inference, and reconstruction attacks with only API access to an LM. We empirically evaluate the attacks against GPT-2 models fine-tuned with and without defenses in three domains: case law, health care, and e-mails. Our main contributions are (i) novel attacks that can extract up to 10$\\times$ more PII sequences than existing attacks, (ii) showing that sentence-level differential privacy reduces the risk of PII disclosure but still leaks about 3% of PII sequences, and (iii) a subtle connection between record-level membership inference and PII reconstruction. Code to reproduce all experiments in the paper is available at https://github.com/microsoft/analysing_pii_leakage."]], "0_corpus": "arxiv", "1_conv_id": "07070e23dbb24f7683eca14f9f3beecf", "1_model_name": "Salesforce/SFR-Embedding-2_R", "1_prompt": "Information leakage of personally identifiable data in GPT-2 models", "1_output": [["Information leakage of personally identifiable data in GPT-2 models", "Title: Analyzing Leakage of Personally Identifiable Information in Language Models\n\nAbstract: Language Models (LMs) have been shown to leak information about training data through sentence-level membership inference and reconstruction attacks. Understanding the risk of LMs leaking Personally Identifiable Information (PII) has received less attention, which can be attributed to the false assumption that dataset curation techniques such as scrubbing are sufficient to prevent PII leakage. Scrubbing techniques reduce but do not prevent the risk of PII leakage: in practice scrubbing is imperfect and must balance the trade-off between minimizing disclosure and preserving the utility of the dataset. On the other hand, it is unclear to which extent algorithmic defenses such as differential privacy, designed to guarantee sentence- or user-level privacy, prevent PII disclosure. In this work, we introduce rigorous game-based definitions for three types of PII leakage via black-box extraction, inference, and reconstruction attacks with only API access to an LM. We empirically evaluate the attacks against GPT-2 models fine-tuned with and without defenses in three domains: case law, health care, and e-mails. Our main contributions are (i) novel attacks that can extract up to 10$\\times$ more PII sequences than existing attacks, (ii) showing that sentence-level differential privacy reduces the risk of PII disclosure but still leaks about 3% of PII sequences, and (iii) a subtle connection between record-level membership inference and PII reconstruction. Code to reproduce all experiments in the paper is available at https://github.com/microsoft/analysing_pii_leakage."]], "1_corpus": "arxiv"}
75
+ {"tstamp": 1732946161.3639, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "3d31cbc67cbc4094b8a2e01c3638b4fb", "0_model_name": "text-embedding-004", "0_prompt": "what awards did the curious case of benjamin button win", "0_output": [["what awards did the curious case of benjamin button win", "The Curious Case of Benjamin Button (film)\n\nProducer Ray Stark bought the film rights to do the short story in the mid-1980s with Universal Pictures backing the film, but struggled to get the project off the ground until he sold the rights to producers Kathleen Kennedy and Frank Marshall in the 1990s. Although it was moved to Paramount Pictures in the 1990s, the film did not enter production until after Fincher and Pitt signed on along with the rest of the cast in 2005. Principal photography began in November 2006 and wrapped up in September 2007. Digital Domain worked on the visual effects of the film, particularly in the process of the metamorphosis of Pitt's character.\nThe Curious Case of Benjamin Button was released in North America on December 25, 2008 to positive reviews, with major praise for Fincher's directing, Pitt's performance, production values, and visual effects. The film was a box office success, grossing $335.8 million worldwide against its $167 million budget. The film received a leading 13 Academy Award nominations at the 81st Academy Awards, including Best Picture, Best Director for Fincher, Best Actor for Pitt, and Best Supporting Actress for Taraji P. Henson, and won three, for Best Art Direction, Best Makeup, and Best Visual Effects."]], "0_corpus": "wikipedia", "1_conv_id": "5de2e9f72e7e4ba484d5c331d15b4ed9", "1_model_name": "embed-english-v3.0", "1_prompt": "what awards did the curious case of benjamin button win", "1_output": [["what awards did the curious case of benjamin button win", "The Curious Case of Benjamin Button (film)\n\nPeter Bradshaw in The Guardian called it \"166 minutes of twee tedium\", giving it one star out of five. Cosmo Landesman of the Sunday Times gave the film two out of five stars, writing: \"The film's premise serves no purpose. It's a gimmick that goes on for nearly three hours ... The Curious Case of Benjamin Button is an anodyne Hollywood film that offers a safe and sanitised view of life and death.\" James Christopher in The Times called it \"a tedious marathon of smoke and mirrors. In terms of the basic requirements of three-reel drama the film lacks substance, credibility, a decent script and characters you might actually care for.\" Derek Malcolm of London's Evening Standard felt that \"never at any point do you feel that there's anything more to it than a very strange story traversed by a film-maker who knows what he is doing but not always why he is doing it.\"\nAccolades\nAt the 81st Academy Awards, The Curious Case of Benjamin Button received a leading 13 Academy Award nominations, including Best Picture, Best Director for Fincher, Best Actor for Pitt, and Best Supporting Actress for Taraji P. Henson, and won three, for Best Art Direction, Best Makeup, and Best Visual Effects.\nTaraji P. Henson won Best Actress at the BET Awards for her role in the film combined with two other performances in Not Easily Broken, and The Family That Preys.\nThe film won all four awards it was nominated for at the 7th Visual Effects Society Awards, the categories of \"Outstanding Visual Effects in a Visual Effects-Driven Feature Motion Picture,\" \"Best Single Visual Effect of the Year,\" \"Outstanding Animated Character in a Live Action Feature Motion Picture,\" and \"Outstanding Compositing in a Feature Motion Picture.\""]], "1_corpus": "wikipedia"}
76
+ {"tstamp": 1732946186.1109, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "c4d06987e156450c9fb708f6dc6f68db", "0_model_name": "text-embedding-004", "0_prompt": "How to formulate and teach algorithms as skills in LLMs for improving algorithmic reasoning", "0_output": [["How to formulate and teach algorithms as skills in LLMs for improving algorithmic reasoning", "Title: Teaching Algorithmic Reasoning via In-context Learning\n\nAbstract: Large language models (LLMs) have shown increasing in-context learning capabilities through scaling up model and data size. Despite this progress, LLMs are still unable to solve algorithmic reasoning problems. While providing a rationale with the final answer has led to further improvements in multi-step reasoning problems, Anil et al. 2022 showed that even simple algorithmic reasoning tasks such as parity are far from solved. In this work, we identify and study four key stages for successfully teaching algorithmic reasoning to LLMs: (1) formulating algorithms as skills, (2) teaching multiple skills simultaneously (skill accumulation), (3) teaching how to combine skills (skill composition) and (4) teaching how to use skills as tools. We show that it is possible to teach algorithmic reasoning to LLMs via in-context learning, which we refer to as algorithmic prompting. We evaluate our approach on a variety of arithmetic and quantitative reasoning tasks, and demonstrate significant boosts in performance over existing prompting techniques. In particular, for long parity, addition, multiplication and subtraction, we achieve an error reduction of approximately 10x, 9x, 5x and 2x respectively compared to the best available baselines."]], "0_corpus": "arxiv", "1_conv_id": "7b2c3ef296d542f2bcfb79baa5a58ac2", "1_model_name": "embed-english-v3.0", "1_prompt": "How to formulate and teach algorithms as skills in LLMs for improving algorithmic reasoning", "1_output": [["How to formulate and teach algorithms as skills in LLMs for improving algorithmic reasoning", "Title: Teaching Algorithmic Reasoning via In-context Learning\n\nAbstract: Large language models (LLMs) have shown increasing in-context learning capabilities through scaling up model and data size. Despite this progress, LLMs are still unable to solve algorithmic reasoning problems. While providing a rationale with the final answer has led to further improvements in multi-step reasoning problems, Anil et al. 2022 showed that even simple algorithmic reasoning tasks such as parity are far from solved. In this work, we identify and study four key stages for successfully teaching algorithmic reasoning to LLMs: (1) formulating algorithms as skills, (2) teaching multiple skills simultaneously (skill accumulation), (3) teaching how to combine skills (skill composition) and (4) teaching how to use skills as tools. We show that it is possible to teach algorithmic reasoning to LLMs via in-context learning, which we refer to as algorithmic prompting. We evaluate our approach on a variety of arithmetic and quantitative reasoning tasks, and demonstrate significant boosts in performance over existing prompting techniques. In particular, for long parity, addition, multiplication and subtraction, we achieve an error reduction of approximately 10x, 9x, 5x and 2x respectively compared to the best available baselines."]], "1_corpus": "arxiv"}
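For reference, each battle record above is one JSON object per line: a timestamp, a vote type (leftvote, rightvote, or tievote), and the two anonymized sides under 0_/1_ prefixed keys (conv_id, model_name, prompt, output, corpus). Below is a minimal sketch of how such a file could be tallied locally; the path comes from the diff header, and the half-credit rule for ties is an illustrative assumption, not part of this dataset.

```python
# Sketch: tally votes per model from the battle log shown in this diff.
# Assumes the JSONL file is available locally; tie scoring is an assumption.
import json
from collections import Counter

wins = Counter()
path = "data/retrieval_battle-ffd1dcbf-aab7-4cd1-bfa8-5b0d8404144b.jsonl"

with open(path) as f:
    for line in f:
        record = json.loads(line)
        vote = record.get("type")
        if vote == "leftvote":
            wins[record["0_model_name"]] += 1
        elif vote == "rightvote":
            wins[record["1_model_name"]] += 1
        elif vote == "tievote":
            # credit both sides half a win for a tie (illustrative choice)
            wins[record["0_model_name"]] += 0.5
            wins[record["1_model_name"]] += 0.5

for model, score in wins.most_common():
    print(f"{model}: {score}")
```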
data/retrieval_individual-ffd1dcbf-aab7-4cd1-bfa8-5b0d8404144b.jsonl CHANGED
@@ -258,3 +258,9 @@
258
  {"tstamp": 1732945804.3975, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1732945802.1863, "finish": 1732945804.3975, "ip": "", "conv_id": "51808b01cf1d4c1ba39c0346ab9e3798", "model_name": "voyage-multilingual-2", "prompt": "is 000 a valid cvv code?", "output": [["is 000 a valid cvv code?", "Stackoverflow Stackexchange\n\nQ: Can a credit card cvv2 start with a 0? This is a very simple question that isn't easily found on the internets.\nCan a credit card cvv2 number start with a 0?\nExample: 082\nCommon sense says no (since it's an integer and leading zeros don't change the value) but I have a report of an end-user with a 0 as the first digit.\nLink to reputable documentation such as Visa or a payment processor please.\n\nA: Yes, a credit card cvv2 number can include a zero. I do not have any documentation other than the fact that my Visa check card has the number \"087\". I hope this helps.\n"]], "corpus": "stackexchange"}
259
  {"tstamp": 1732945936.9756, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1732945934.6794, "finish": 1732945936.9756, "ip": "", "conv_id": "b24592833ad54fc582b0e24e1c3cc5ed", "model_name": "text-embedding-004", "prompt": "Which paper released the BEIR benchmark?", "output": [["Which paper released the BEIR benchmark?", "Title: BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models\n\nAbstract: Existing neural information retrieval (IR) models have often been studied in homogeneous and narrow settings, which has considerably limited insights into their out-of-distribution (OOD) generalization capabilities. To address this, and to facilitate researchers to broadly evaluate the effectiveness of their models, we introduce Benchmarking-IR (BEIR), a robust and heterogeneous evaluation benchmark for information retrieval. We leverage a careful selection of 18 publicly available datasets from diverse text retrieval tasks and domains and evaluate 10 state-of-the-art retrieval systems including lexical, sparse, dense, late-interaction and re-ranking architectures on the BEIR benchmark. Our results show BM25 is a robust baseline and re-ranking and late-interaction-based models on average achieve the best zero-shot performances, however, at high computational costs. In contrast, dense and sparse-retrieval models are computationally more efficient but often underperform other approaches, highlighting the considerable room for improvement in their generalization capabilities. We hope this framework allows us to better evaluate and understand existing retrieval systems, and contributes to accelerating progress towards better robust and generalizable systems in the future. BEIR is publicly available at https://github.com/UKPLab/beir."]], "corpus": "arxiv"}
260
  {"tstamp": 1732945936.9756, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1732945934.6794, "finish": 1732945936.9756, "ip": "", "conv_id": "211a5da6398c49f89873d4212e595c62", "model_name": "voyage-multilingual-2", "prompt": "Which paper released the BEIR benchmark?", "output": [["Which paper released the BEIR benchmark?", "Title: Resources for Brewing BEIR: Reproducible Reference Models and an Official Leaderboard\n\nAbstract: BEIR is a benchmark dataset for zero-shot evaluation of information retrieval models across 18 different domain/task combinations. In recent years, we have witnessed the growing popularity of a representation learning approach to building retrieval models, typically using pretrained transformers in a supervised setting. This naturally begs the question: How effective are these models when presented with queries and documents that differ from the training data? Examples include searching in different domains (e.g., medical or legal text) and with different types of queries (e.g., keywords vs. well-formed questions). While BEIR was designed to answer these questions, our work addresses two shortcomings that prevent the benchmark from achieving its full potential: First, the sophistication of modern neural methods and the complexity of current software infrastructure create barriers to entry for newcomers. To this end, we provide reproducible reference implementations that cover the two main classes of approaches: learned dense and sparse models. Second, there does not exist a single authoritative nexus for reporting the effectiveness of different models on BEIR, which has led to difficulty in comparing different methods. To remedy this, we present an official self-service BEIR leaderboard that provides fair and consistent comparisons of retrieval models. By addressing both shortcomings, our work facilitates future explorations in a range of interesting research questions that BEIR enables."]], "corpus": "arxiv"}
261
+ {"tstamp": 1732946103.9682, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1732946103.6071, "finish": 1732946103.9682, "ip": "", "conv_id": "e5eafd35bf72420ea69e8a36c42219cb", "model_name": "GritLM/GritLM-7B", "prompt": "Information leakage of personally identifiable data in GPT-2 models", "output": [["Information leakage of personally identifiable data in GPT-2 models", "Title: Analyzing Leakage of Personally Identifiable Information in Language Models\n\nAbstract: Language Models (LMs) have been shown to leak information about training data through sentence-level membership inference and reconstruction attacks. Understanding the risk of LMs leaking Personally Identifiable Information (PII) has received less attention, which can be attributed to the false assumption that dataset curation techniques such as scrubbing are sufficient to prevent PII leakage. Scrubbing techniques reduce but do not prevent the risk of PII leakage: in practice scrubbing is imperfect and must balance the trade-off between minimizing disclosure and preserving the utility of the dataset. On the other hand, it is unclear to which extent algorithmic defenses such as differential privacy, designed to guarantee sentence- or user-level privacy, prevent PII disclosure. In this work, we introduce rigorous game-based definitions for three types of PII leakage via black-box extraction, inference, and reconstruction attacks with only API access to an LM. We empirically evaluate the attacks against GPT-2 models fine-tuned with and without defenses in three domains: case law, health care, and e-mails. Our main contributions are (i) novel attacks that can extract up to 10$\\times$ more PII sequences than existing attacks, (ii) showing that sentence-level differential privacy reduces the risk of PII disclosure but still leaks about 3% of PII sequences, and (iii) a subtle connection between record-level membership inference and PII reconstruction. Code to reproduce all experiments in the paper is available at https://github.com/microsoft/analysing_pii_leakage."]], "corpus": "arxiv"}
262
+ {"tstamp": 1732946103.9682, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1732946103.6071, "finish": 1732946103.9682, "ip": "", "conv_id": "07070e23dbb24f7683eca14f9f3beecf", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "Information leakage of personally identifiable data in GPT-2 models", "output": [["Information leakage of personally identifiable data in GPT-2 models", "Title: Analyzing Leakage of Personally Identifiable Information in Language Models\n\nAbstract: Language Models (LMs) have been shown to leak information about training data through sentence-level membership inference and reconstruction attacks. Understanding the risk of LMs leaking Personally Identifiable Information (PII) has received less attention, which can be attributed to the false assumption that dataset curation techniques such as scrubbing are sufficient to prevent PII leakage. Scrubbing techniques reduce but do not prevent the risk of PII leakage: in practice scrubbing is imperfect and must balance the trade-off between minimizing disclosure and preserving the utility of the dataset. On the other hand, it is unclear to which extent algorithmic defenses such as differential privacy, designed to guarantee sentence- or user-level privacy, prevent PII disclosure. In this work, we introduce rigorous game-based definitions for three types of PII leakage via black-box extraction, inference, and reconstruction attacks with only API access to an LM. We empirically evaluate the attacks against GPT-2 models fine-tuned with and without defenses in three domains: case law, health care, and e-mails. Our main contributions are (i) novel attacks that can extract up to 10$\\times$ more PII sequences than existing attacks, (ii) showing that sentence-level differential privacy reduces the risk of PII disclosure but still leaks about 3% of PII sequences, and (iii) a subtle connection between record-level membership inference and PII reconstruction. Code to reproduce all experiments in the paper is available at https://github.com/microsoft/analysing_pii_leakage."]], "corpus": "arxiv"}
263
+ {"tstamp": 1732946125.0041, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1732946122.6063, "finish": 1732946125.0041, "ip": "", "conv_id": "3d31cbc67cbc4094b8a2e01c3638b4fb", "model_name": "text-embedding-004", "prompt": "what awards did the curious case of benjamin button win", "output": [["what awards did the curious case of benjamin button win", "The Curious Case of Benjamin Button (film)\n\nProducer Ray Stark bought the film rights to do the short story in the mid-1980s with Universal Pictures backing the film, but struggled to get the project off the ground until he sold the rights to producers Kathleen Kennedy and Frank Marshall in the 1990s. Although it was moved to Paramount Pictures in the 1990s, the film did not enter production until after Fincher and Pitt signed on along with the rest of the cast in 2005. Principal photography began in November 2006 and wrapped up in September 2007. Digital Domain worked on the visual effects of the film, particularly in the process of the metamorphosis of Pitt's character.\nThe Curious Case of Benjamin Button was released in North America on December 25, 2008 to positive reviews, with major praise for Fincher's directing, Pitt's performance, production values, and visual effects. The film was a box office success, grossing $335.8 million worldwide against its $167 million budget. The film received a leading 13 Academy Award nominations at the 81st Academy Awards, including Best Picture, Best Director for Fincher, Best Actor for Pitt, and Best Supporting Actress for Taraji P. Henson, and won three, for Best Art Direction, Best Makeup, and Best Visual Effects."]], "corpus": "wikipedia"}
264
+ {"tstamp": 1732946125.0041, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1732946122.6063, "finish": 1732946125.0041, "ip": "", "conv_id": "5de2e9f72e7e4ba484d5c331d15b4ed9", "model_name": "embed-english-v3.0", "prompt": "what awards did the curious case of benjamin button win", "output": [["what awards did the curious case of benjamin button win", "The Curious Case of Benjamin Button (film)\n\nPeter Bradshaw in The Guardian called it \"166 minutes of twee tedium\", giving it one star out of five. Cosmo Landesman of the Sunday Times gave the film two out of five stars, writing: \"The film's premise serves no purpose. It's a gimmick that goes on for nearly three hours ... The Curious Case of Benjamin Button is an anodyne Hollywood film that offers a safe and sanitised view of life and death.\" James Christopher in The Times called it \"a tedious marathon of smoke and mirrors. In terms of the basic requirements of three-reel drama the film lacks substance, credibility, a decent script and characters you might actually care for.\" Derek Malcolm of London's Evening Standard felt that \"never at any point do you feel that there's anything more to it than a very strange story traversed by a film-maker who knows what he is doing but not always why he is doing it.\"\nAccolades\nAt the 81st Academy Awards, The Curious Case of Benjamin Button received a leading 13 Academy Award nominations, including Best Picture, Best Director for Fincher, Best Actor for Pitt, and Best Supporting Actress for Taraji P. Henson, and won three, for Best Art Direction, Best Makeup, and Best Visual Effects.\nTaraji P. Henson won Best Actress at the BET Awards for her role in the film combined with two other performances in Not Easily Broken, and The Family That Preys.\nThe film won all four awards it was nominated for at the 7th Visual Effects Society Awards, the categories of \"Outstanding Visual Effects in a Visual Effects-Driven Feature Motion Picture,\" \"Best Single Visual Effect of the Year,\" \"Outstanding Animated Character in a Live Action Feature Motion Picture,\" and \"Outstanding Compositing in a Feature Motion Picture.\""]], "corpus": "wikipedia"}
265
+ {"tstamp": 1732946177.2588, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1732946177.0006, "finish": 1732946177.2588, "ip": "", "conv_id": "c4d06987e156450c9fb708f6dc6f68db", "model_name": "text-embedding-004", "prompt": "How to formulate and teach algorithms as skills in LLMs for improving algorithmic reasoning", "output": [["How to formulate and teach algorithms as skills in LLMs for improving algorithmic reasoning", "Title: Teaching Algorithmic Reasoning via In-context Learning\n\nAbstract: Large language models (LLMs) have shown increasing in-context learning capabilities through scaling up model and data size. Despite this progress, LLMs are still unable to solve algorithmic reasoning problems. While providing a rationale with the final answer has led to further improvements in multi-step reasoning problems, Anil et al. 2022 showed that even simple algorithmic reasoning tasks such as parity are far from solved. In this work, we identify and study four key stages for successfully teaching algorithmic reasoning to LLMs: (1) formulating algorithms as skills, (2) teaching multiple skills simultaneously (skill accumulation), (3) teaching how to combine skills (skill composition) and (4) teaching how to use skills as tools. We show that it is possible to teach algorithmic reasoning to LLMs via in-context learning, which we refer to as algorithmic prompting. We evaluate our approach on a variety of arithmetic and quantitative reasoning tasks, and demonstrate significant boosts in performance over existing prompting techniques. In particular, for long parity, addition, multiplication and subtraction, we achieve an error reduction of approximately 10x, 9x, 5x and 2x respectively compared to the best available baselines."]], "corpus": "arxiv"}
266
+ {"tstamp": 1732946177.2588, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1732946177.0006, "finish": 1732946177.2588, "ip": "", "conv_id": "7b2c3ef296d542f2bcfb79baa5a58ac2", "model_name": "embed-english-v3.0", "prompt": "How to formulate and teach algorithms as skills in LLMs for improving algorithmic reasoning", "output": [["How to formulate and teach algorithms as skills in LLMs for improving algorithmic reasoning", "Title: Teaching Algorithmic Reasoning via In-context Learning\n\nAbstract: Large language models (LLMs) have shown increasing in-context learning capabilities through scaling up model and data size. Despite this progress, LLMs are still unable to solve algorithmic reasoning problems. While providing a rationale with the final answer has led to further improvements in multi-step reasoning problems, Anil et al. 2022 showed that even simple algorithmic reasoning tasks such as parity are far from solved. In this work, we identify and study four key stages for successfully teaching algorithmic reasoning to LLMs: (1) formulating algorithms as skills, (2) teaching multiple skills simultaneously (skill accumulation), (3) teaching how to combine skills (skill composition) and (4) teaching how to use skills as tools. We show that it is possible to teach algorithmic reasoning to LLMs via in-context learning, which we refer to as algorithmic prompting. We evaluate our approach on a variety of arithmetic and quantitative reasoning tasks, and demonstrate significant boosts in performance over existing prompting techniques. In particular, for long parity, addition, multiplication and subtraction, we achieve an error reduction of approximately 10x, 9x, 5x and 2x respectively compared to the best available baselines."]], "corpus": "arxiv"}
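The individual retrieval log mirrors the battle file but stores one side per record, including start and finish Unix timestamps for each request. A minimal sketch for inspecting per-request latency, assuming the file is read locally at the path from the diff header:

```python
# Sketch: print per-request retrieval latency from the individual log above,
# using the "start" and "finish" timestamps present in each record.
import json

path = "data/retrieval_individual-ffd1dcbf-aab7-4cd1-bfa8-5b0d8404144b.jsonl"

with open(path) as f:
    for line in f:
        record = json.loads(line)
        latency = record["finish"] - record["start"]
        print(f"{record['model_name']}: {latency:.2f}s  prompt={record['prompt'][:50]!r}")
```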