Muennighoff committed on
Commit a17a35f · verified · 1 Parent(s): 65870e5

Scheduled Commit

data/retrieval_individual-76065722-116b-41cc-961b-aebfed552f79.jsonl CHANGED
@@ -753,3 +753,5 @@
  {"tstamp": 1728082557.036, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1728082554.6941, "finish": 1728082557.036, "ip": "", "conv_id": "0b2ee1eccea94f15967f7aca15b5b10f", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Cognitive test\n\nCognitive tests are assessments of the cognitive capabilities of humans and other animals. Tests administered to humans include various forms of IQ tests; those administered to animals include the mirror test (a test of visual self-awareness) and the T maze test (which tests learning ability). Such testing is used in psychology and psychometrics, as well as other fields studying human and animal intelligence."]], "corpus": "wikipedia"}
  {"tstamp": 1728118556.313, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1728118554.0868, "finish": 1728118556.313, "ip": "", "conv_id": "ffce9627d8a24ee6a1a5fa75bd2ae601", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: CommitBERT: Commit Message Generation Using Pre-Trained Programming Language Model\n\nAbstract: Commit message is a document that summarizes source code changes in natural language. A good commit message clearly shows the source code changes, so this enhances collaboration between developers. Therefore, our work is to develop a model that automatically writes the commit message. To this end, we release 345K datasets consisting of code modification and commit messages in six programming languages (Python, PHP, Go, Java, JavaScript, and Ruby). Similar to the neural machine translation (NMT) model, using our dataset, we feed the code modification to the encoder input and the commit message to the decoder input and measure the result of the generated commit message with BLEU-4. Also, we propose the following two training methods to improve the result of generating the commit message: (1) A method of preprocessing the input to feed the code modification to the encoder input. (2) A method that uses an initial weight suitable for the code domain to reduce the gap in contextual representation between programming language (PL) and natural language (NL). Training code, dataset, and pre-trained weights are available at https://github.com/graykode/commit-autosuggestions"]], "corpus": "arxiv"}
  {"tstamp": 1728118556.313, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1728118554.0868, "finish": 1728118556.313, "ip": "", "conv_id": "5661e16a0ee146c1acef688099f5efd8", "model_name": "GritLM/GritLM-7B", "prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: OctoPack: Instruction Tuning Code Large Language Models\n\nAbstract: Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack."]], "corpus": "arxiv"}
+ {"tstamp": 1728160038.1597, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1728160035.5587, "finish": 1728160038.1597, "ip": "", "conv_id": "fcf2dec5ae494c03b8eeef04d870bb15", "model_name": "text-embedding-3-large", "prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "corpus": "arxiv"}
+ {"tstamp": 1728160038.1597, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1728160035.5587, "finish": 1728160038.1597, "ip": "", "conv_id": "eda7ec20c18f4e45aeec048bbb30d465", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "corpus": "arxiv"}