Scheduled Commit
data/retrieval_battle-d2470a18-3fc5-4a21-86ba-77a6dda751e0.jsonl
CHANGED
@@ -27,3 +27,6 @@
{"tstamp": 1725975058.1392, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "594e49d199c840db970e395664b1160a", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "who was the first king of poland?", "0_output": [["who was the first king of poland?", "Piast dynasty\n\nThe House of Piast was the first historical ruling dynasty of Poland. The first documented Polish monarch was Duke Mieszko I (–992). The Piasts' royal rule in Poland ended in 1370 with the death of King Casimir III the Great."]], "0_corpus": "wikipedia", "1_conv_id": "edb97aa23413419e8b65cbdc90305239", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "who was the first king of poland?", "1_output": [["who was the first king of poland?", "Kingdom of Poland\n\nThe Kingdom of Poland (; Latin: Regnum Poloniae) was a monarchy in Eastern Europe during the medieval period from 1025 until 1385.\nBackground\nThe West Slavic tribe of Polans who lived in what is today the historic region of Greater Poland, gave rise to a state in the early 10th century, which would become the nascent predecessor of the Kingdom of Poland. Following the Christianization of Poland in 966, and the emergence of the Duchy of Poland during the rule of Mieszko I, his eldest son Bolesław I the Brave inherited his father's dukedom and subsequently was crowned as king.\nHistory"]], "1_corpus": "wikipedia"}
{"tstamp": 1725980120.5972, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "bb04a4df53e94e6796caf345ffea3b6e", "0_model_name": "text-embedding-3-large", "0_prompt": "Which test was devised to determine whether robots can think?", "0_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe test was introduced by Turing in his 1950 paper \"Computing Machinery and Intelligence\" while working at the University of Manchester. It opens with the words: \"I propose to consider the question, 'Can machines think? Because \"thinking\" is difficult to define, Turing chooses to \"replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.\" Turing describes the new form of the problem in terms of a three-person game called the \"imitation game\", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: \"Are there imaginable digital computers which would do well in the imitation game?\" This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that \"machines can think\".\nSince Turing introduced his test, it has been both highly influential and widely criticized, and has become an important concept in the philosophy of artificial intelligence. Philosopher John Searle would comment on the Turing test in his Chinese room argument, a thought experiment that stipulates that a machine cannot have a \"mind\", \"understanding\", or \"consciousness\", regardless of how intelligently or human-like the program may make the computer behave. Searle criticizes Turing's test and claims it is insufficient to detect the presence of consciousness.\nHistory"]], "0_corpus": "wikipedia", "1_conv_id": "4104a75b0fcd46caacf76b17552feef6", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "Which test was devised to determine whether robots can think?", "1_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "1_corpus": "wikipedia"}
{"tstamp": 1725982756.3539, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "a824724a9a5242deb63f1e8e6c0b57c7", "0_model_name": "text-embedding-3-large", "0_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "0_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "0_corpus": "arxiv", "1_conv_id": "c6028cf0e24d46a0914f2ce4afd639d5", "1_model_name": "embed-english-v3.0", "1_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "1_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "1_corpus": "arxiv"}
+{"tstamp": 1725982780.3998, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "f5cc13153a5547469e14d1680b0ec530", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "0_output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: OctoPack: Instruction Tuning Code Large Language Models\n\nAbstract: Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack."]], "0_corpus": "arxiv", "1_conv_id": "70b31ca9e2a340e4a099d135f53eb4e7", "1_model_name": "voyage-multilingual-2", "1_prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "1_output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: OctoPack: Instruction Tuning Code Large Language Models\n\nAbstract: Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack."]], "1_corpus": "arxiv"}
+{"tstamp": 1725982809.2419, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "6882760c59f740caaf4940756c126185", "0_model_name": "BM25", "0_prompt": "What's the purpose of `if __name__ == '__main__'`?", "0_output": [["What's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: What is the purpose of __name__? What does __name__ do? I have only seen it paired with __main__ and nothing else.\nI know that the classic if __name__ == __main__: defines the behavior when acting as a package vs running as stand-alone.\nHowever what other usages are there for __name__?\n\nA: __name__ is \"__main__\" if you're executing the script directly. If you're importing a module, __name__ is the name of the module.\nfoo.py:\nprint(__name__)\n\nbar.py \nimport foo\n\nRun the scripts:\n$ python foo.py\n__main__\n$ python bar.py \nfoo\n\n"]], "0_corpus": "stackexchange", "1_conv_id": "dfca038067864241b4fb46591b7b6446", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "What's the purpose of `if __name__ == '__main__'`?", "1_output": [["What's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: html dangerous tags to avoid while developing a chat application I am developing a chat application using PHP and jQuery... all messages sent by the chat pair are appended to a <div>. While doing this, I found that tags such as <script> can be entered by users. What tags should I avoid while appending the user's message?\n\nA: Avoid tags that will inject content into the page:\n\n\n*\n\n*applet\n\n*script\n\n*style\n\n*link\n\n*iframe\n\n\nOther tags may be necessary to remove, as well. Maybe you don't want to embed images. These are things that you will need to determine. At the bare minimum, allow <p> and <br />.\n"]], "1_corpus": "stackexchange"}
+{"tstamp": 1725982915.424, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "e4bc5f9ca59a42908642fd62eeb2a49b", "0_model_name": "text-embedding-004", "0_prompt": "Seeking research on using two-sentence BERT structure for emotion detection in dialogue contexts.", "0_output": [["Seeking research on using two-sentence BERT structure for emotion detection in dialogue contexts.", "Title: Emotion Dynamics Modeling via BERT\n\nAbstract: Emotion dynamics modeling is a significant task in emotion recognition in conversation. It aims to predict conversational emotions when building empathetic dialogue systems. Existing studies mainly develop models based on Recurrent Neural Networks (RNNs). They cannot benefit from the power of the recently-developed pre-training strategies for better token representation learning in conversations. More seriously, it is hard to distinguish the dependency of interlocutors and the emotional influence among interlocutors by simply assembling the features on top of RNNs. In this paper, we develop a series of BERT-based models to specifically capture the inter-interlocutor and intra-interlocutor dependencies of the conversational emotion dynamics. Concretely, we first substitute BERT for RNNs to enrich the token representations. Then, a Flat-structured BERT (F-BERT) is applied to link up utterances in a conversation directly, and a Hierarchically-structured BERT (H-BERT) is employed to distinguish the interlocutors when linking up utterances. More importantly, a Spatial-Temporal-structured BERT, namely ST-BERT, is proposed to further determine the emotional influence among interlocutors. Finally, we conduct extensive experiments on two popular emotion recognition in conversation benchmark datasets and demonstrate that our proposed models can attain around 5\\% and 10\\% improvement over the state-of-the-art baselines, respectively."]], "0_corpus": "arxiv", "1_conv_id": "a6e3babb0a8448278de74917fcbecc6c", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "Seeking research on using two-sentence BERT structure for emotion detection in dialogue contexts.", "1_output": [["Seeking research on using two-sentence BERT structure for emotion detection in dialogue contexts.", "Title: EmotionX-IDEA: Emotion BERT -- an Affectional Model for Conversation\n\nAbstract: In this paper, we investigate the emotion recognition ability of the pre-training language model, namely BERT. By the nature of the framework of BERT, a two-sentence structure, we adapt BERT to continues dialogue emotion prediction tasks, which rely heavily on the sentence-level context-aware understanding. The experiments show that by mapping the continues dialogue into a causal utterance pair, which is constructed by the utterance and the reply utterance, models can better capture the emotions of the reply utterance. The present method has achieved 0.815 and 0.885 micro F1 score in the testing dataset of Friends and EmotionPush, respectively."]], "1_corpus": "arxiv"}
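Each record in this battle log is one JSON object per line: the two anonymized models sit in "0_model_name"/"1_model_name" and the human judgement in "type" (leftvote, rightvote, tievote, or bothbadvote), as the rows above show. The snippet below is only a minimal sketch of how such a file could be tallied locally; the path is a placeholder for a downloaded copy and none of this is part of the dataset's own tooling.

    import json
    from collections import Counter

    # Placeholder path to a local copy of the battle log shown above.
    PATH = "data/retrieval_battle-d2470a18-3fc5-4a21-86ba-77a6dda751e0.jsonl"

    wins = Counter()
    with open(PATH, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            vote = record["type"]            # leftvote / rightvote / tievote / bothbadvote
            left = record["0_model_name"]    # model behind the left-hand result
            right = record["1_model_name"]   # model behind the right-hand result
            if vote == "leftvote":
                wins[left] += 1
            elif vote == "rightvote":
                wins[right] += 1
            # ties and both-bad votes award no win to either side

    print(wins.most_common())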
data/retrieval_individual-d2470a18-3fc5-4a21-86ba-77a6dda751e0.jsonl
CHANGED
@@ -110,3 +110,9 @@
{"tstamp": 1725982748.8308, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1725982748.1356, "finish": 1725982748.8308, "ip": "", "conv_id": "c6028cf0e24d46a0914f2ce4afd639d5", "model_name": "embed-english-v3.0", "prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "corpus": "arxiv"}
{"tstamp": 1725982775.7001, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1725982775.2864, "finish": 1725982775.7001, "ip": "", "conv_id": "f5cc13153a5547469e14d1680b0ec530", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: OctoPack: Instruction Tuning Code Large Language Models\n\nAbstract: Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack."]], "corpus": "arxiv"}
{"tstamp": 1725982775.7001, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1725982775.2864, "finish": 1725982775.7001, "ip": "", "conv_id": "70b31ca9e2a340e4a099d135f53eb4e7", "model_name": "voyage-multilingual-2", "prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: OctoPack: Instruction Tuning Code Large Language Models\n\nAbstract: Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack."]], "corpus": "arxiv"}
+{"tstamp": 1725982789.8633, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1725982789.0694, "finish": 1725982789.8633, "ip": "", "conv_id": "6882760c59f740caaf4940756c126185", "model_name": "BM25", "prompt": "What's the purpose of `if __name__ == '__main__'`?", "output": [["What's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: What is the purpose of __name__? What does __name__ do? I have only seen it paired with __main__ and nothing else.\nI know that the classic if __name__ == __main__: defines the behavior when acting as a package vs running as stand-alone.\nHowever what other usages are there for __name__?\n\nA: __name__ is \"__main__\" if you're executing the script directly. If you're importing a module, __name__ is the name of the module.\nfoo.py:\nprint(__name__)\n\nbar.py \nimport foo\n\nRun the scripts:\n$ python foo.py\n__main__\n$ python bar.py \nfoo\n\n"]], "corpus": "stackexchange"}
+{"tstamp": 1725982789.8633, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1725982789.0694, "finish": 1725982789.8633, "ip": "", "conv_id": "dfca038067864241b4fb46591b7b6446", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "What's the purpose of `if __name__ == '__main__'`?", "output": [["What's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: html dangerous tags to avoid while developing a chat application I am developing a chat application using PHP and jQuery... all messages sent by the chat pair are appended to a <div>. While doing this, I found that tags such as <script> can be entered by users. What tags should I avoid while appending the user's message?\n\nA: Avoid tags that will inject content into the page:\n\n\n*\n\n*applet\n\n*script\n\n*style\n\n*link\n\n*iframe\n\n\nOther tags may be necessary to remove, as well. Maybe you don't want to embed images. These are things that you will need to determine. At the bare minimum, allow <p> and <br />.\n"]], "corpus": "stackexchange"}
+{"tstamp": 1725982903.2096, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1725982900.8498, "finish": 1725982903.2096, "ip": "", "conv_id": "e4bc5f9ca59a42908642fd62eeb2a49b", "model_name": "text-embedding-004", "prompt": "Seeking research on using two-sentence BERT structure for emotion detection in dialogue contexts.", "output": [["Seeking research on using two-sentence BERT structure for emotion detection in dialogue contexts.", "Title: Emotion Dynamics Modeling via BERT\n\nAbstract: Emotion dynamics modeling is a significant task in emotion recognition in conversation. It aims to predict conversational emotions when building empathetic dialogue systems. Existing studies mainly develop models based on Recurrent Neural Networks (RNNs). They cannot benefit from the power of the recently-developed pre-training strategies for better token representation learning in conversations. More seriously, it is hard to distinguish the dependency of interlocutors and the emotional influence among interlocutors by simply assembling the features on top of RNNs. In this paper, we develop a series of BERT-based models to specifically capture the inter-interlocutor and intra-interlocutor dependencies of the conversational emotion dynamics. Concretely, we first substitute BERT for RNNs to enrich the token representations. Then, a Flat-structured BERT (F-BERT) is applied to link up utterances in a conversation directly, and a Hierarchically-structured BERT (H-BERT) is employed to distinguish the interlocutors when linking up utterances. More importantly, a Spatial-Temporal-structured BERT, namely ST-BERT, is proposed to further determine the emotional influence among interlocutors. Finally, we conduct extensive experiments on two popular emotion recognition in conversation benchmark datasets and demonstrate that our proposed models can attain around 5\\% and 10\\% improvement over the state-of-the-art baselines, respectively."]], "corpus": "arxiv"}
+{"tstamp": 1725982903.2096, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1725982900.8498, "finish": 1725982903.2096, "ip": "", "conv_id": "a6e3babb0a8448278de74917fcbecc6c", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "Seeking research on using two-sentence BERT structure for emotion detection in dialogue contexts.", "output": [["Seeking research on using two-sentence BERT structure for emotion detection in dialogue contexts.", "Title: EmotionX-IDEA: Emotion BERT -- an Affectional Model for Conversation\n\nAbstract: In this paper, we investigate the emotion recognition ability of the pre-training language model, namely BERT. By the nature of the framework of BERT, a two-sentence structure, we adapt BERT to continues dialogue emotion prediction tasks, which rely heavily on the sentence-level context-aware understanding. The experiments show that by mapping the continues dialogue into a causal utterance pair, which is constructed by the utterance and the reply utterance, models can better capture the emotions of the reply utterance. The present method has achieved 0.815 and 0.885 micro F1 score in the testing dataset of Friends and EmotionPush, respectively."]], "corpus": "arxiv"}
+{"tstamp": 1725982934.6319, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1725982932.5044, "finish": 1725982934.6319, "ip": "", "conv_id": "32e73c43e5cc4a2781e1ac2beb77a15d", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: CommitBERT: Commit Message Generation Using Pre-Trained Programming Language Model\n\nAbstract: Commit message is a document that summarizes source code changes in natural language. A good commit message clearly shows the source code changes, so this enhances collaboration between developers. Therefore, our work is to develop a model that automatically writes the commit message. To this end, we release 345K datasets consisting of code modification and commit messages in six programming languages (Python, PHP, Go, Java, JavaScript, and Ruby). Similar to the neural machine translation (NMT) model, using our dataset, we feed the code modification to the encoder input and the commit message to the decoder input and measure the result of the generated commit message with BLEU-4. Also, we propose the following two training methods to improve the result of generating the commit message: (1) A method of preprocessing the input to feed the code modification to the encoder input. (2) A method that uses an initial weight suitable for the code domain to reduce the gap in contextual representation between programming language (PL) and natural language (NL). Training code, dataset, and pre-trained weights are available at https://github.com/graykode/commit-autosuggestions"]], "corpus": "arxiv"}
+{"tstamp": 1725982934.6319, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1725982932.5044, "finish": 1725982934.6319, "ip": "", "conv_id": "af488a05f5af4ff8a252c50fd2e0742b", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: CommitBERT: Commit Message Generation Using Pre-Trained Programming Language Model\n\nAbstract: Commit message is a document that summarizes source code changes in natural language. A good commit message clearly shows the source code changes, so this enhances collaboration between developers. Therefore, our work is to develop a model that automatically writes the commit message. To this end, we release 345K datasets consisting of code modification and commit messages in six programming languages (Python, PHP, Go, Java, JavaScript, and Ruby). Similar to the neural machine translation (NMT) model, using our dataset, we feed the code modification to the encoder input and the commit message to the decoder input and measure the result of the generated commit message with BLEU-4. Also, we propose the following two training methods to improve the result of generating the commit message: (1) A method of preprocessing the input to feed the code modification to the encoder input. (2) A method that uses an initial weight suitable for the code domain to reduce the gap in contextual representation between programming language (PL) and natural language (NL). Training code, dataset, and pre-trained weights are available at https://github.com/graykode/commit-autosuggestions"]], "corpus": "arxiv"}
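The individual log stores one row per model response rather than per comparison: each row carries the serving "model_name", the "prompt", the retrieved "output", and "start"/"finish" Unix timestamps, with "conv_id" linking it back to the matching battle row. Purely as an illustrative sketch under the same assumptions (placeholder path, fields as shown in the rows above), a per-model retrieval-latency summary could be computed like this:

    import json
    from collections import defaultdict

    # Placeholder path to a local copy of the individual log shown above.
    PATH = "data/retrieval_individual-d2470a18-3fc5-4a21-86ba-77a6dda751e0.jsonl"

    latencies = defaultdict(list)
    with open(PATH, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            # start/finish are Unix timestamps taken around the retrieval call.
            latencies[record["model_name"]].append(record["finish"] - record["start"])

    for model, times in sorted(latencies.items()):
        print(f"{model}: mean {sum(times) / len(times):.2f}s over {len(times)} queries")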