Muennighoff committed
Commit 199dfd9 · verified · 1 Parent(s): 1de4eee

Scheduled Commit

data/retrieval_battle-d2470a18-3fc5-4a21-86ba-77a6dda751e0.jsonl CHANGED
@@ -30,3 +30,4 @@
30
  {"tstamp": 1725982780.3998, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "f5cc13153a5547469e14d1680b0ec530", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "0_output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: OctoPack: Instruction Tuning Code Large Language Models\n\nAbstract: Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack."]], "0_corpus": "arxiv", "1_conv_id": "70b31ca9e2a340e4a099d135f53eb4e7", "1_model_name": "voyage-multilingual-2", "1_prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "1_output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: OctoPack: Instruction Tuning Code Large Language Models\n\nAbstract: Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack."]], "1_corpus": "arxiv"}
31
  {"tstamp": 1725982809.2419, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "6882760c59f740caaf4940756c126185", "0_model_name": "BM25", "0_prompt": "What's the purpose of `if __name__ == '__main__'`?", "0_output": [["What's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: What is the purpose of __name__? What does __name__ do? I have only seen it paired with __main__ and nothing else.\nI know that the classic if __name__ == __main__: defines the behavior when acting as a package vs running as stand-alone.\nHowever what other usages are there for __name__?\n\nA: __name__ is \"__main__\" if you're executing the script directly. If you're importing a module, __name__ is the name of the module.\nfoo.py:\nprint(__name__)\n\nbar.py \nimport foo\n\nRun the scripts:\n$ python foo.py\n__main__\n$ python bar.py \nfoo\n\n"]], "0_corpus": "stackexchange", "1_conv_id": "dfca038067864241b4fb46591b7b6446", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "What's the purpose of `if __name__ == '__main__'`?", "1_output": [["What's the purpose of `if __name__ == '__main__'`?", "Stackoverflow Stackexchange\n\nQ: html dangerous tags to avoid while developing a chat application I am developing a chat application using PHP and jQuery... all messages sent by the chat pair are appended to a <div>. While doing this, I found that tags such as <script> can be entered by users. What tags should I avoid while appending the user's message?\n\nA: Avoid tags that will inject content into the page:\n\n\n*\n\n*applet\n\n*script\n\n*style\n\n*link\n\n*iframe\n\n\nOther tags may be necessary to remove, as well. Maybe you don't want to embed images. These are things that you will need to determine. At the bare minimum, allow <p> and <br />.\n"]], "1_corpus": "stackexchange"}
32
  {"tstamp": 1725982915.424, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "e4bc5f9ca59a42908642fd62eeb2a49b", "0_model_name": "text-embedding-004", "0_prompt": "Seeking research on using two-sentence BERT structure for emotion detection in dialogue contexts.", "0_output": [["Seeking research on using two-sentence BERT structure for emotion detection in dialogue contexts.", "Title: Emotion Dynamics Modeling via BERT\n\nAbstract: Emotion dynamics modeling is a significant task in emotion recognition in conversation. It aims to predict conversational emotions when building empathetic dialogue systems. Existing studies mainly develop models based on Recurrent Neural Networks (RNNs). They cannot benefit from the power of the recently-developed pre-training strategies for better token representation learning in conversations. More seriously, it is hard to distinguish the dependency of interlocutors and the emotional influence among interlocutors by simply assembling the features on top of RNNs. In this paper, we develop a series of BERT-based models to specifically capture the inter-interlocutor and intra-interlocutor dependencies of the conversational emotion dynamics. Concretely, we first substitute BERT for RNNs to enrich the token representations. Then, a Flat-structured BERT (F-BERT) is applied to link up utterances in a conversation directly, and a Hierarchically-structured BERT (H-BERT) is employed to distinguish the interlocutors when linking up utterances. More importantly, a Spatial-Temporal-structured BERT, namely ST-BERT, is proposed to further determine the emotional influence among interlocutors. Finally, we conduct extensive experiments on two popular emotion recognition in conversation benchmark datasets and demonstrate that our proposed models can attain around 5\\% and 10\\% improvement over the state-of-the-art baselines, respectively."]], "0_corpus": "arxiv", "1_conv_id": "a6e3babb0a8448278de74917fcbecc6c", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "Seeking research on using two-sentence BERT structure for emotion detection in dialogue contexts.", "1_output": [["Seeking research on using two-sentence BERT structure for emotion detection in dialogue contexts.", "Title: EmotionX-IDEA: Emotion BERT -- an Affectional Model for Conversation\n\nAbstract: In this paper, we investigate the emotion recognition ability of the pre-training language model, namely BERT. By the nature of the framework of BERT, a two-sentence structure, we adapt BERT to continues dialogue emotion prediction tasks, which rely heavily on the sentence-level context-aware understanding. The experiments show that by mapping the continues dialogue into a causal utterance pair, which is constructed by the utterance and the reply utterance, models can better capture the emotions of the reply utterance. The present method has achieved 0.815 and 0.885 micro F1 score in the testing dataset of Friends and EmotionPush, respectively."]], "1_corpus": "arxiv"}
33
+ {"tstamp": 1725996783.0814, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "b2acbbc350094924bdedf4ecb9a552ce", "0_model_name": "voyage-multilingual-2", "0_prompt": "what is your husband name", "0_output": [["what is your husband name", "Husband\n\nA husband is a man involved in a marital relationship, commonly referred to as a spouse. The specific rights, responsibilities, and societal status attributed to a husband can vary significantly across different cultures and historical periods, reflecting a global perspective on this role.\nIn many parts of the world, monogamous marriage is the prevailing norm, where a husband and wife form the basic unit of a family. Legal systems in numerous countries enforce monogamy and prohibit bigamy and polygamy. Traditionally, husbands often held the position of being the head of the household and the primary provider, a role that was often considered paternalistic. However, the evolving dynamics of modern society have led to a shift in these roles. Today, a husband is not automatically designated as the sole breadwinner, especially when his spouse pursues a more financially rewarding career. This change reflects a global trend in the changing dynamics of gender roles and family structures."]], "0_corpus": "wikipedia", "1_conv_id": "d1807b6899644a87863014377319a67b", "1_model_name": "BM25", "1_prompt": "what is your husband name", "1_output": [["what is your husband name", "Bulgarian language\n\nThe head-shaking gesture used to signify \"no\" in Western Europe may also be used interrogatively, with the meaning of \"what is it?\" or \"what's wrong?\".\nBulgarian has an extensive vocabulary covering family relationships. The biggest range of words is for uncles and aunts, e.g. chicho (your father's brother), vuicho (your mother's brother), svako (your aunt's husband); an even larger number of synonyms for these three exists in the various dialects of Bulgarian, including kaleko, lelincho, tetin, etc. The words do not only refer to the closest members of the family (such as brat – brother, but batko/bate – older brother, sestra – sister, but kaka – older sister), but extend to its furthest reaches, e.g. badzhanak from Turkish bacanak (the relationship of the husbands of two sisters to each other) and etarva (the relationships of two brothers' wives to each other). For all in-laws, there are specific names, e.g. a woman's husband's brother is her devеr and her husband's sister is her zalva. In the traditional rural extended family before 1900, there existed separate subcategories for different brothers-in-law/sisters-in-law of a woman with regard to their age relative to hers, e.g. instead of simply a dever there could be a braino (older), a draginko (younger), or an ubavenkyo (who is still a child)."]], "1_corpus": "wikipedia"}
data/retrieval_individual-d2470a18-3fc5-4a21-86ba-77a6dda751e0.jsonl CHANGED
@@ -116,3 +116,5 @@
116
  {"tstamp": 1725982903.2096, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1725982900.8498, "finish": 1725982903.2096, "ip": "", "conv_id": "a6e3babb0a8448278de74917fcbecc6c", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "Seeking research on using two-sentence BERT structure for emotion detection in dialogue contexts.", "output": [["Seeking research on using two-sentence BERT structure for emotion detection in dialogue contexts.", "Title: EmotionX-IDEA: Emotion BERT -- an Affectional Model for Conversation\n\nAbstract: In this paper, we investigate the emotion recognition ability of the pre-training language model, namely BERT. By the nature of the framework of BERT, a two-sentence structure, we adapt BERT to continues dialogue emotion prediction tasks, which rely heavily on the sentence-level context-aware understanding. The experiments show that by mapping the continues dialogue into a causal utterance pair, which is constructed by the utterance and the reply utterance, models can better capture the emotions of the reply utterance. The present method has achieved 0.815 and 0.885 micro F1 score in the testing dataset of Friends and EmotionPush, respectively."]], "corpus": "arxiv"}
117
  {"tstamp": 1725982934.6319, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1725982932.5044, "finish": 1725982934.6319, "ip": "", "conv_id": "32e73c43e5cc4a2781e1ac2beb77a15d", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: CommitBERT: Commit Message Generation Using Pre-Trained Programming Language Model\n\nAbstract: Commit message is a document that summarizes source code changes in natural language. A good commit message clearly shows the source code changes, so this enhances collaboration between developers. Therefore, our work is to develop a model that automatically writes the commit message. To this end, we release 345K datasets consisting of code modification and commit messages in six programming languages (Python, PHP, Go, Java, JavaScript, and Ruby). Similar to the neural machine translation (NMT) model, using our dataset, we feed the code modification to the encoder input and the commit message to the decoder input and measure the result of the generated commit message with BLEU-4. Also, we propose the following two training methods to improve the result of generating the commit message: (1) A method of preprocessing the input to feed the code modification to the encoder input. (2) A method that uses an initial weight suitable for the code domain to reduce the gap in contextual representation between programming language (PL) and natural language (NL). Training code, dataset, and pre-trained weights are available at https://github.com/graykode/commit-autosuggestions"]], "corpus": "arxiv"}
118
  {"tstamp": 1725982934.6319, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1725982932.5044, "finish": 1725982934.6319, "ip": "", "conv_id": "af488a05f5af4ff8a252c50fd2e0742b", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: CommitBERT: Commit Message Generation Using Pre-Trained Programming Language Model\n\nAbstract: Commit message is a document that summarizes source code changes in natural language. A good commit message clearly shows the source code changes, so this enhances collaboration between developers. Therefore, our work is to develop a model that automatically writes the commit message. To this end, we release 345K datasets consisting of code modification and commit messages in six programming languages (Python, PHP, Go, Java, JavaScript, and Ruby). Similar to the neural machine translation (NMT) model, using our dataset, we feed the code modification to the encoder input and the commit message to the decoder input and measure the result of the generated commit message with BLEU-4. Also, we propose the following two training methods to improve the result of generating the commit message: (1) A method of preprocessing the input to feed the code modification to the encoder input. (2) A method that uses an initial weight suitable for the code domain to reduce the gap in contextual representation between programming language (PL) and natural language (NL). Training code, dataset, and pre-trained weights are available at https://github.com/graykode/commit-autosuggestions"]], "corpus": "arxiv"}
119
+ {"tstamp": 1725996767.4767, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1725996764.9452, "finish": 1725996767.4767, "ip": "", "conv_id": "b2acbbc350094924bdedf4ecb9a552ce", "model_name": "voyage-multilingual-2", "prompt": "what is your husband name", "output": [["what is your husband name", "Husband\n\nA husband is a man involved in a marital relationship, commonly referred to as a spouse. The specific rights, responsibilities, and societal status attributed to a husband can vary significantly across different cultures and historical periods, reflecting a global perspective on this role.\nIn many parts of the world, monogamous marriage is the prevailing norm, where a husband and wife form the basic unit of a family. Legal systems in numerous countries enforce monogamy and prohibit bigamy and polygamy. Traditionally, husbands often held the position of being the head of the household and the primary provider, a role that was often considered paternalistic. However, the evolving dynamics of modern society have led to a shift in these roles. Today, a husband is not automatically designated as the sole breadwinner, especially when his spouse pursues a more financially rewarding career. This change reflects a global trend in the changing dynamics of gender roles and family structures."]], "corpus": "wikipedia"}
120
+ {"tstamp": 1725996767.4767, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1725996764.9452, "finish": 1725996767.4767, "ip": "", "conv_id": "d1807b6899644a87863014377319a67b", "model_name": "BM25", "prompt": "what is your husband name", "output": [["what is your husband name", "Bulgarian language\n\nThe head-shaking gesture used to signify \"no\" in Western Europe may also be used interrogatively, with the meaning of \"what is it?\" or \"what's wrong?\".\nBulgarian has an extensive vocabulary covering family relationships. The biggest range of words is for uncles and aunts, e.g. chicho (your father's brother), vuicho (your mother's brother), svako (your aunt's husband); an even larger number of synonyms for these three exists in the various dialects of Bulgarian, including kaleko, lelincho, tetin, etc. The words do not only refer to the closest members of the family (such as brat – brother, but batko/bate – older brother, sestra – sister, but kaka – older sister), but extend to its furthest reaches, e.g. badzhanak from Turkish bacanak (the relationship of the husbands of two sisters to each other) and etarva (the relationships of two brothers' wives to each other). For all in-laws, there are specific names, e.g. a woman's husband's brother is her devеr and her husband's sister is her zalva. In the traditional rural extended family before 1900, there existed separate subcategories for different brothers-in-law/sisters-in-law of a woman with regard to their age relative to hers, e.g. instead of simply a dever there could be a braino (older), a draginko (younger), or an ubavenkyo (who is still a child)."]], "corpus": "wikipedia"}
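For readers who want to work with these logs, here is a minimal sketch of how the two JSONL files in this commit could be parsed with the Python standard library. The field names ("type", "model_name") are taken from the records above; the file paths assume the repository's data/ directory as shown in the diff headers, and the tallies are purely illustrative, not part of this commit.

import json
from collections import Counter

# Paths as they appear in this commit; adjust if the data lives elsewhere.
BATTLE = "data/retrieval_battle-d2470a18-3fc5-4a21-86ba-77a6dda751e0.jsonl"
INDIVIDUAL = "data/retrieval_individual-d2470a18-3fc5-4a21-86ba-77a6dda751e0.jsonl"

def read_jsonl(path):
    """Yield one parsed record per non-empty line of a JSONL file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Battle records pair two anonymized models and store the human verdict
# in "type" (e.g. leftvote, rightvote, tievote).
votes = Counter(rec["type"] for rec in read_jsonl(BATTLE))
print(votes)

# Individual records log one retrieval call per model under "model_name".
calls = Counter(rec["model_name"] for rec in read_jsonl(INDIVIDUAL))
print(calls.most_common())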