Muennighoff committed · Commit d8e1068 · verified · 1 Parent(s): 122f969

Scheduled Commit

data/retrieval_individual-89f7d27e-df7d-414d-a213-dcd1c0b2f2a2.jsonl CHANGED
@@ -237,3 +237,5 @@
  {"tstamp": 1730303633.0776, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1730303630.8121, "finish": 1730303633.0776, "ip": "", "conv_id": "7c96c2e3fb3c4a178c622a8825d56455", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: OctoPack: Instruction Tuning Code Large Language Models\n\nAbstract: Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack."]], "corpus": "arxiv"}
  {"tstamp": 1730303668.1166, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1730303667.8171, "finish": 1730303668.1166, "ip": "", "conv_id": "c1d522542fb04e50b23cb905a9b2501e", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "corpus": "wikipedia"}
  {"tstamp": 1730303668.1166, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1730303667.8171, "finish": 1730303668.1166, "ip": "", "conv_id": "a141a0cb2a884c289b6a0247cd964329", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "corpus": "wikipedia"}
+ {"tstamp": 1730310275.1796, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1730310274.7367, "finish": 1730310275.1796, "ip": "", "conv_id": "98ae99889cf9466cac31e589a8456b0b", "model_name": "BM25", "prompt": "what is the difference between data and storage on an iphone?", "output": [["what is the difference between data and storage on an iphone?", "Stackoverflow Stackexchange\n\nQ: Firebase : Differences between realtime database and file storage I learnt about real-time data storage and hosting storage from this post Difference between Data Storage and Hosting Storage?\nBut i am still not clear about real time Database and and newly introduced file storage.\nDoes anybody have some brief explanation about it ?\nThanks in advance .\n(As per the concern about duplicated with Difference between Data Storage and Hosting Storage? what the problem solves and what i am asking are two different things and hosting storage and file storage are different in case of google firebase )\n\nA: *Realtime database store data only json format and it is specially used in app where data is synchronized concurrently like ola app(user location),sensex(Nifty) app where data not persist .\n*Firebase Storage just only store data like memory card.It is specially used for store backend data of app. \n"]], "corpus": "stackexchange"}
+ {"tstamp": 1730310275.1796, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1730310274.7367, "finish": 1730310275.1796, "ip": "", "conv_id": "ec166fefa3c44f4f9ed2fafe45038f7a", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "what is the difference between data and storage on an iphone?", "output": [["what is the difference between data and storage on an iphone?", "Apple Stackexchange\n\nQ: My storage was not updated I just changed my storage plan for 50 GB for $0.99 per month. When i went on settings>General>Storage and icloud usage, where it said storage above it i didnt have more storage. But the second box, where it said icloud abov it had upgraded to 50 GB. But i dont have any more storage for me to be able to get more apps etc. \n\nA: Your iCloud storage is how much data you can keep online, in the cloud.\nThis is variable, depending on how much you pay, for what you need. Your first 5GB is free.\nThe iPhone itself has a fixed hardware memory, or space, which can only be chosen when you buy it, in 8, 16, 32, 64, 128, or 256 GB configurations.\nThe total you see in Settings > General > Storage will never be quite this figure, as some of that is needed for the formatting of the memory & the data structure itself.\nThis memory is where it stores all apps, music, documents etc that your phone can use.\n\n"]], "corpus": "stackexchange"}