Muennighoff committed (verified)
Commit 2f88faf
1 Parent(s): cffda5b

Scheduled Commit

data/retrieval_individual-bf21cabf-e911-4688-b82f-7a6511b7478a.jsonl CHANGED
@@ -279,3 +279,5 @@
  {"tstamp": 1736515223.5788, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1736515220.5379, "finish": 1736515223.5788, "ip": "", "conv_id": "ac64789399e84800a82cdd0c4da2c433", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "Who wrote about a guide more popular than the Encyclopedia Galactica?", "output": [["Who wrote about a guide more popular than the Encyclopedia Galactica?", "The Hitchhiker's Guide to the Galaxy (fictional)\n\nThe Hitchhiker's Guide to the Galaxy is a fictional electronic guide book in the multimedia scifi/comedy series of the same name by Douglas Adams. The Guide serves as \"the standard repository for all knowledge and wisdom\" for many members of the series' galaxy-spanning civilization. Entries from the guidebook are used as comic narration to bridge events and provide background information in every version of the story. The guide is published by \"Megadodo Publications\", a publishing company on Ursa Minor Beta, and it is written and edited by many characters throughout the series."]], "corpus": "wikipedia"}
  {"tstamp": 1736515545.4809, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1736515543.3581, "finish": 1736515545.4809, "ip": "", "conv_id": "13d9ee620ee14bf7a264c181658b42d9", "model_name": "text-embedding-3-large", "prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: OctoPack: Instruction Tuning Code Large Language Models\n\nAbstract: Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack."]], "corpus": "arxiv"}
  {"tstamp": 1736515545.4809, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1736515543.3581, "finish": 1736515545.4809, "ip": "", "conv_id": "cb5c5d2b3c2f44ac9f6d685877e8ddb4", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: CommitBERT: Commit Message Generation Using Pre-Trained Programming Language Model\n\nAbstract: Commit message is a document that summarizes source code changes in natural language. A good commit message clearly shows the source code changes, so this enhances collaboration between developers. Therefore, our work is to develop a model that automatically writes the commit message. To this end, we release 345K datasets consisting of code modification and commit messages in six programming languages (Python, PHP, Go, Java, JavaScript, and Ruby). Similar to the neural machine translation (NMT) model, using our dataset, we feed the code modification to the encoder input and the commit message to the decoder input and measure the result of the generated commit message with BLEU-4. Also, we propose the following two training methods to improve the result of generating the commit message: (1) A method of preprocessing the input to feed the code modification to the encoder input. (2) A method that uses an initial weight suitable for the code domain to reduce the gap in contextual representation between programming language (PL) and natural language (NL). Training code, dataset, and pre-trained weights are available at https://github.com/graykode/commit-autosuggestions"]], "corpus": "arxiv"}
+ {"tstamp": 1736532022.3545, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1736532020.0644, "finish": 1736532022.3545, "ip": "", "conv_id": "af83a1396c60470eb371f95092a95e6e", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "why llm embeddings are so bad", "output": [["why llm embeddings are so bad", "Nonlinear dimensionality reduction\n\nLike LLE, Hessian LLE is also based on sparse matrix techniques. It tends to yield results of a much higher quality than LLE. Unfortunately, it has a very costly computational complexity, so it is not well-suited for heavily sampled manifolds. It has no internal model.\nModified Locally-Linear Embedding (MLLE)\nModified LLE (MLLE) is another LLE variant which uses multiple weights in each neighborhood to address the local weight matrix conditioning problem which leads to distortions in LLE maps. Loosely speaking the multiple weights are the local orthogonal projection of the original weights produced by LLE. The creators of this regularised variant are also the authors of Local Tangent Space Alignment (LTSA), which is implicit in the MLLE formulation when realising that the global optimisation of the orthogonal projections of each weight vector, in-essence, aligns the local tangent spaces of every data point. The theoretical and empirical implications from the correct application of this algorithm are far-reaching.\nLocal tangent space alignment\nLTSA is based on the intuition that when a manifold is correctly unfolded, all of the tangent hyperplanes to the manifold will become aligned. It begins by computing the k-nearest neighbors of every point. It computes the tangent space at every point by computing the d-first principal components in each local neighborhood. It then optimizes to find an embedding that aligns the tangent spaces."]], "corpus": "wikipedia"}
+ {"tstamp": 1736532022.3545, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1736532020.0644, "finish": 1736532022.3545, "ip": "", "conv_id": "1e90eb3cf9ff4f71994318b693b1a01a", "model_name": "GritLM/GritLM-7B", "prompt": "why llm embeddings are so bad", "output": [["why llm embeddings are so bad", "Neural machine translation\n\nSince they are not trained on translation, they also do not feature an encoder-decoder architecture. Instead, they just consist of a transformer's decoder.\nIn order to be competitive on the machine translation task, LLMs need to be much larger than other NMT systems. E.g., GPT-3 has 175 billion parameters, while mBART has 680 million and the original transformer-big has “only” 213 million. This means that they are computationally more expensive to train and use.\nA generative LLM can be prompted in a zero-shot fashion by just asking it to translate a text into another language without giving any further examples in the prompt. Or one can include one or several example translations in the prompt before asking to translate the text in question. This is then called one-shot or few-shot learning, respectively. For example, the following prompts were used by Hendy et al. (2023) for zero-shot and one-shot translation:\n### Translate this sentence from [source language] to [target language], Source:\n[source sentence]\n### Target:\nTranslate this into 1. [target language]:\n[shot 1 source]\n1. [shot 1 reference]\nTranslate this into 1. [target language]:\n[input]\n1.\nLiterature\nKoehn, Philipp (2020). Neural Machine Translation. Cambridge University Press.\nStahlberg, Felix (2020). Neural Machine Translation: A Review and Survey."]], "corpus": "wikipedia"}