Scheduled Commit
data/retrieval_battle-ffd1dcbf-aab7-4cd1-bfa8-5b0d8404144b.jsonl
CHANGED
@@ -74,3 +74,4 @@
{"tstamp": 1732946112.2794, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "e5eafd35bf72420ea69e8a36c42219cb", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "Information leakage of personally identifiable data in GPT-2 models", "0_output": [["Information leakage of personally identifiable data in GPT-2 models", "Title: Analyzing Leakage of Personally Identifiable Information in Language Models\n\nAbstract: Language Models (LMs) have been shown to leak information about training data through sentence-level membership inference and reconstruction attacks. Understanding the risk of LMs leaking Personally Identifiable Information (PII) has received less attention, which can be attributed to the false assumption that dataset curation techniques such as scrubbing are sufficient to prevent PII leakage. Scrubbing techniques reduce but do not prevent the risk of PII leakage: in practice scrubbing is imperfect and must balance the trade-off between minimizing disclosure and preserving the utility of the dataset. On the other hand, it is unclear to which extent algorithmic defenses such as differential privacy, designed to guarantee sentence- or user-level privacy, prevent PII disclosure. In this work, we introduce rigorous game-based definitions for three types of PII leakage via black-box extraction, inference, and reconstruction attacks with only API access to an LM. We empirically evaluate the attacks against GPT-2 models fine-tuned with and without defenses in three domains: case law, health care, and e-mails. Our main contributions are (i) novel attacks that can extract up to 10$\\times$ more PII sequences than existing attacks, (ii) showing that sentence-level differential privacy reduces the risk of PII disclosure but still leaks about 3% of PII sequences, and (iii) a subtle connection between record-level membership inference and PII reconstruction. Code to reproduce all experiments in the paper is available at https://github.com/microsoft/analysing_pii_leakage."]], "0_corpus": "arxiv", "1_conv_id": "07070e23dbb24f7683eca14f9f3beecf", "1_model_name": "Salesforce/SFR-Embedding-2_R", "1_prompt": "Information leakage of personally identifiable data in GPT-2 models", "1_output": [["Information leakage of personally identifiable data in GPT-2 models", "Title: Analyzing Leakage of Personally Identifiable Information in Language Models\n\nAbstract: Language Models (LMs) have been shown to leak information about training data through sentence-level membership inference and reconstruction attacks. Understanding the risk of LMs leaking Personally Identifiable Information (PII) has received less attention, which can be attributed to the false assumption that dataset curation techniques such as scrubbing are sufficient to prevent PII leakage. Scrubbing techniques reduce but do not prevent the risk of PII leakage: in practice scrubbing is imperfect and must balance the trade-off between minimizing disclosure and preserving the utility of the dataset. On the other hand, it is unclear to which extent algorithmic defenses such as differential privacy, designed to guarantee sentence- or user-level privacy, prevent PII disclosure. In this work, we introduce rigorous game-based definitions for three types of PII leakage via black-box extraction, inference, and reconstruction attacks with only API access to an LM. We empirically evaluate the attacks against GPT-2 models fine-tuned with and without defenses in three domains: case law, health care, and e-mails. Our main contributions are (i) novel attacks that can extract up to 10$\\times$ more PII sequences than existing attacks, (ii) showing that sentence-level differential privacy reduces the risk of PII disclosure but still leaks about 3% of PII sequences, and (iii) a subtle connection between record-level membership inference and PII reconstruction. Code to reproduce all experiments in the paper is available at https://github.com/microsoft/analysing_pii_leakage."]], "1_corpus": "arxiv"}
{"tstamp": 1732946161.3639, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "3d31cbc67cbc4094b8a2e01c3638b4fb", "0_model_name": "text-embedding-004", "0_prompt": "what awards did the curious case of benjamin button win", "0_output": [["what awards did the curious case of benjamin button win", "The Curious Case of Benjamin Button (film)\n\nProducer Ray Stark bought the film rights to do the short story in the mid-1980s with Universal Pictures backing the film, but struggled to get the project off the ground until he sold the rights to producers Kathleen Kennedy and Frank Marshall in the 1990s. Although it was moved to Paramount Pictures in the 1990s, the film did not enter production until after Fincher and Pitt signed on along with the rest of the cast in 2005. Principal photography began in November 2006 and wrapped up in September 2007. Digital Domain worked on the visual effects of the film, particularly in the process of the metamorphosis of Pitt's character.\nThe Curious Case of Benjamin Button was released in North America on December 25, 2008 to positive reviews, with major praise for Fincher's directing, Pitt's performance, production values, and visual effects. The film was a box office success, grossing $335.8 million worldwide against its $167 million budget. The film received a leading 13 Academy Award nominations at the 81st Academy Awards, including Best Picture, Best Director for Fincher, Best Actor for Pitt, and Best Supporting Actress for Taraji P. Henson, and won three, for Best Art Direction, Best Makeup, and Best Visual Effects."]], "0_corpus": "wikipedia", "1_conv_id": "5de2e9f72e7e4ba484d5c331d15b4ed9", "1_model_name": "embed-english-v3.0", "1_prompt": "what awards did the curious case of benjamin button win", "1_output": [["what awards did the curious case of benjamin button win", "The Curious Case of Benjamin Button (film)\n\nPeter Bradshaw in The Guardian called it \"166 minutes of twee tedium\", giving it one star out of five. Cosmo Landesman of the Sunday Times gave the film two out of five stars, writing: \"The film's premise serves no purpose. It's a gimmick that goes on for nearly three hours ... The Curious Case of Benjamin Button is an anodyne Hollywood film that offers a safe and sanitised view of life and death.\" James Christopher in The Times called it \"a tedious marathon of smoke and mirrors. In terms of the basic requirements of three-reel drama the film lacks substance, credibility, a decent script and characters you might actually care for.\" Derek Malcolm of London's Evening Standard felt that \"never at any point do you feel that there's anything more to it than a very strange story traversed by a film-maker who knows what he is doing but not always why he is doing it.\"\nAccolades\nAt the 81st Academy Awards, The Curious Case of Benjamin Button received a leading 13 Academy Award nominations, including Best Picture, Best Director for Fincher, Best Actor for Pitt, and Best Supporting Actress for Taraji P. Henson, and won three, for Best Art Direction, Best Makeup, and Best Visual Effects.\nTaraji P. Henson won Best Actress at the BET Awards for her role in the film combined with two other performances in Not Easily Broken, and The Family That Preys.\nThe film won all four awards it was nominated for at the 7th Visual Effects Society Awards, the categories of \"Outstanding Visual Effects in a Visual Effects-Driven Feature Motion Picture,\" \"Best Single Visual Effect of the Year,\" \"Outstanding Animated Character in a Live Action Feature Motion Picture,\" and \"Outstanding Compositing in a Feature Motion Picture.\""]], "1_corpus": "wikipedia"}
{"tstamp": 1732946186.1109, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "c4d06987e156450c9fb708f6dc6f68db", "0_model_name": "text-embedding-004", "0_prompt": "How to formulate and teach algorithms as skills in LLMs for improving algorithmic reasoning", "0_output": [["How to formulate and teach algorithms as skills in LLMs for improving algorithmic reasoning", "Title: Teaching Algorithmic Reasoning via In-context Learning\n\nAbstract: Large language models (LLMs) have shown increasing in-context learning capabilities through scaling up model and data size. Despite this progress, LLMs are still unable to solve algorithmic reasoning problems. While providing a rationale with the final answer has led to further improvements in multi-step reasoning problems, Anil et al. 2022 showed that even simple algorithmic reasoning tasks such as parity are far from solved. In this work, we identify and study four key stages for successfully teaching algorithmic reasoning to LLMs: (1) formulating algorithms as skills, (2) teaching multiple skills simultaneously (skill accumulation), (3) teaching how to combine skills (skill composition) and (4) teaching how to use skills as tools. We show that it is possible to teach algorithmic reasoning to LLMs via in-context learning, which we refer to as algorithmic prompting. We evaluate our approach on a variety of arithmetic and quantitative reasoning tasks, and demonstrate significant boosts in performance over existing prompting techniques. In particular, for long parity, addition, multiplication and subtraction, we achieve an error reduction of approximately 10x, 9x, 5x and 2x respectively compared to the best available baselines."]], "0_corpus": "arxiv", "1_conv_id": "7b2c3ef296d542f2bcfb79baa5a58ac2", "1_model_name": "embed-english-v3.0", "1_prompt": "How to formulate and teach algorithms as skills in LLMs for improving algorithmic reasoning", "1_output": [["How to formulate and teach algorithms as skills in LLMs for improving algorithmic reasoning", "Title: Teaching Algorithmic Reasoning via In-context Learning\n\nAbstract: Large language models (LLMs) have shown increasing in-context learning capabilities through scaling up model and data size. Despite this progress, LLMs are still unable to solve algorithmic reasoning problems. While providing a rationale with the final answer has led to further improvements in multi-step reasoning problems, Anil et al. 2022 showed that even simple algorithmic reasoning tasks such as parity are far from solved. In this work, we identify and study four key stages for successfully teaching algorithmic reasoning to LLMs: (1) formulating algorithms as skills, (2) teaching multiple skills simultaneously (skill accumulation), (3) teaching how to combine skills (skill composition) and (4) teaching how to use skills as tools. We show that it is possible to teach algorithmic reasoning to LLMs via in-context learning, which we refer to as algorithmic prompting. We evaluate our approach on a variety of arithmetic and quantitative reasoning tasks, and demonstrate significant boosts in performance over existing prompting techniques. In particular, for long parity, addition, multiplication and subtraction, we achieve an error reduction of approximately 10x, 9x, 5x and 2x respectively compared to the best available baselines."]], "1_corpus": "arxiv"}
+{"tstamp": 1732976135.1975, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "5d8c1698a1df43f5962169c3833398aa", "0_model_name": "BM25", "0_prompt": "embedding generation transformer model for ecommerce products sts task", "0_output": [["embedding generation transformer model for ecommerce products sts task", "Title: BERT Goes Shopping: Comparing Distributional Models for Product Representations\n\nAbstract: Word embeddings (e.g., word2vec) have been applied successfully to eCommerce products through~\\textit{prod2vec}. Inspired by the recent performance improvements on several NLP tasks brought by contextualized embeddings, we propose to transfer BERT-like architectures to eCommerce: our model -- ~\\textit{Prod2BERT} -- is trained to generate representations of products through masked session modeling. Through extensive experiments over multiple shops, different tasks, and a range of design choices, we systematically compare the accuracy of~\\textit{Prod2BERT} and~\\textit{prod2vec} embeddings: while~\\textit{Prod2BERT} is found to be superior in several scenarios, we highlight the importance of resources and hyperparameters in the best performing models. Finally, we provide guidelines to practitioners for training embeddings under a variety of computational and data constraints."]], "0_corpus": "arxiv", "1_conv_id": "7ac5b1ca06ab4980a2e904bc50b55a16", "1_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "1_prompt": "embedding generation transformer model for ecommerce products sts task", "1_output": [["embedding generation transformer model for ecommerce products sts task", "Title: Learning-To-Embed: Adopting Transformer based models for E-commerce Products Representation Learning\n\nAbstract: Learning low-dimensional representation for large number of products present in an e-commerce catalogue plays a vital role as they are helpful in tasks like product ranking, product recommendation, finding similar products, modelling user-behaviour etc. Recently, a lot of tasks in the NLP field are getting tackled using the Transformer based models and these deep models are widely applicable in the industries setting to solve various problems. With this motivation, we apply transformer based model for learning contextual representation of products in an e-commerce setting. In this work, we propose a novel approach of pre-training transformer based model on a users generated sessions dataset obtained from a large fashion e-commerce platform to obtain latent product representation. Once pre-trained, we show that the low-dimension representation of the products can be obtained given the product attributes information as a textual sentence. We mainly pre-train BERT, RoBERTa, ALBERT and XLNET variants of transformer model and show a quantitative analysis of the products representation obtained from these models with respect to Next Product Recommendation(NPR) and Content Ranking(CR) tasks. For both the tasks, we collect an evaluation data from the fashion e-commerce platform and observe that XLNET model outperform other variants with a MRR of 0.5 for NPR and NDCG of 0.634 for CR. XLNET model also outperforms the Word2Vec based non-transformer baseline on both the downstream tasks. To the best of our knowledge, this is the first and novel work for pre-training transformer based models using users generated sessions data containing products that are represented with rich attributes information for adoption in e-commerce setting. These models can be further fine-tuned in order to solve various downstream tasks in e-commerce, thereby eliminating the need to train a model from scratch."]], "1_corpus": "arxiv"}
data/sts_battle-ffd1dcbf-aab7-4cd1-bfa8-5b0d8404144b.jsonl
CHANGED
@@ -5,3 +5,4 @@
{"tstamp": 1732775735.0801, "task_type": "sts", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "d72a8718438c4e47804a387c3396af7e", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_txt0": "you are the GOAT", "0_txt1": "you are an absolute goat", "0_txt2": "you are simply the best", "0_output": "", "1_conv_id": "b79c9c4e385a45cf9af8cb716651400a", "1_model_name": "voyage-multilingual-2", "1_txt0": "you are the GOAT", "1_txt1": "you are an absolute goat", "1_txt2": "you are simply the best", "1_output": ""}
{"tstamp": 1732775984.1561, "task_type": "sts", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "10d75e146d57430abe35c94bc69f091d", "0_model_name": "voyage-multilingual-2", "0_txt0": "this world is on fire", "0_txt1": "the end is nigh", "0_txt2": "global warming is a concern", "0_output": "", "1_conv_id": "62a8299ab32e49f8b9802c88d9231a57", "1_model_name": "BAAI/bge-large-en-v1.5", "1_txt0": "this world is on fire", "1_txt1": "the end is nigh", "1_txt2": "global warming is a concern", "1_output": ""}
{"tstamp": 1732776233.9052, "task_type": "sts", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "615a38d8968e48a48e56ba94895deb2d", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_txt0": "Dr Doom", "0_txt1": "Victor", "0_txt2": "Queen Victoria", "0_output": "", "1_conv_id": "0367c9249fc04750991acda9f322d00f", "1_model_name": "Salesforce/SFR-Embedding-2_R", "1_txt0": "Dr Doom", "1_txt1": "Victor", "1_txt2": "Queen Victoria", "1_output": ""}
+{"tstamp": 1732976387.0718, "task_type": "sts", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "faf2b18bdd6544029c54f9511838c1bd", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_txt0": "David Yurman Madison® 85mm Link Bracelet STERLING SILVER Medium | Female | B35046MSSM", "0_txt1": "David Yurman DY Madison Chain Small Bracelet, 8.5mm | M | Silver\t", "0_txt2": "Rachel Riley Girl's Smocked Cotton Chambray Dress, Size 2-10 | Navy | 2", "0_output": "", "1_conv_id": "f31fdd0e33004fe9a011eda632c44101", "1_model_name": "intfloat/multilingual-e5-large-instruct", "1_txt0": "David Yurman Madison® 85mm Link Bracelet STERLING SILVER Medium | Female | B35046MSSM", "1_txt1": "David Yurman DY Madison Chain Small Bracelet, 8.5mm | M | Silver\t", "1_txt2": "Rachel Riley Girl's Smocked Cotton Chambray Dress, Size 2-10 | Navy | 2", "1_output": ""}
data/sts_individual-ffd1dcbf-aab7-4cd1-bfa8-5b0d8404144b.jsonl
CHANGED
@@ -14,3 +14,5 @@
{"tstamp": 1732776205.4916, "task_type": "sts", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1732776205.4452, "finish": 1732776205.4916, "ip": "", "conv_id": "0367c9249fc04750991acda9f322d00f", "model_name": "Salesforce/SFR-Embedding-2_R", "txt0": "Dr Doom", "txt1": "Victor", "txt2": "Queen Victoria", "output": ""}
{"tstamp": 1732776275.7418, "task_type": "sts", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1732776274.9717, "finish": 1732776275.7418, "ip": "", "conv_id": "3e55128b80a84afe91737cb8fea3f8ae", "model_name": "Salesforce/SFR-Embedding-2_R", "txt0": "bear", "txt1": "depression", "txt2": "sloth", "output": ""}
{"tstamp": 1732776275.7418, "task_type": "sts", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1732776274.9717, "finish": 1732776275.7418, "ip": "", "conv_id": "6fc8fa769b02401c95ce4f2cd52d652c", "model_name": "text-embedding-3-large", "txt0": "bear", "txt1": "depression", "txt2": "sloth", "output": ""}
+{"tstamp": 1732976315.8037, "task_type": "sts", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1732976315.7663, "finish": 1732976315.8037, "ip": "", "conv_id": "faf2b18bdd6544029c54f9511838c1bd", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "txt0": "David Yurman Madison® 85mm Link Bracelet STERLING SILVER Medium | Female | B35046MSSM", "txt1": "David Yurman DY Madison Chain Small Bracelet, 8.5mm | M | Silver\t", "txt2": "Rachel Riley Girl's Smocked Cotton Chambray Dress, Size 2-10 | Navy | 2", "output": ""}
+{"tstamp": 1732976315.8037, "task_type": "sts", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1732976315.7663, "finish": 1732976315.8037, "ip": "", "conv_id": "f31fdd0e33004fe9a011eda632c44101", "model_name": "intfloat/multilingual-e5-large-instruct", "txt0": "David Yurman Madison® 85mm Link Bracelet STERLING SILVER Medium | Female | B35046MSSM", "txt1": "David Yurman DY Madison Chain Small Bracelet, 8.5mm | M | Silver\t", "txt2": "Rachel Riley Girl's Smocked Cotton Chambray Dress, Size 2-10 | Navy | 2", "output": ""}
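Each battle record above is one JSON object per line, with a `tstamp`, a `task_type` (`retrieval` or `sts`), a `type` field recording the vote outcome (`leftvote`, `rightvote`, `tievote`, or `bothbadvote`), and the two anonymized model sides. A minimal sketch for tallying vote outcomes from such a file, assuming valid one-record-per-line JSONL as committed here (`tally_votes` is an illustrative helper, not part of the dataset tooling):

```python
import json
from collections import Counter

def tally_votes(lines):
    """Count vote outcomes (leftvote/rightvote/tievote/bothbadvote)
    from an iterable of JSONL battle records, skipping blank lines."""
    counts = Counter()
    for line in lines:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        counts[record["type"]] += 1
    return counts

# Works on an open file handle or any iterable of JSON lines, e.g.:
# with open("retrieval_battle.jsonl", encoding="utf-8") as f:  # hypothetical filename
#     print(tally_votes(f))
```

Taking an iterable of lines rather than a path keeps the helper testable and lets it consume a file handle, an HTTP response body, or an in-memory list equally well.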