Muennighoff committed on
Commit 1332b3c · verified · 1 Parent(s): f8409df

Scheduled Commit

data/retrieval_individual-bf21cabf-e911-4688-b82f-7a6511b7478a.jsonl CHANGED
@@ -277,3 +277,5 @@
277
  {"tstamp": 1736515141.1687, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1736515138.9319, "finish": 1736515141.1687, "ip": "", "conv_id": "47cd99f1c73a42d3a0fdf56dd1ed3ed5", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "Find me a planet near α 23h 06m 29.368s and δ −05° 02′ 29.04″ that could be habitable.", "output": [["Find me a planet near α 23h 06m 29.368s and δ −05° 02′ 29.04″ that could be habitable.", "Habitable zone\n\nK2-18b is an exoplanet 124 light-years away, orbiting in the habitable zone of the K2-18, a red dwarf. This planet is significant for water vapor found in its atmosphere; this was announced on September 17, 2019.\nIn September 2020, astronomers identified 24 superhabitable planet (planets better than Earth) contenders, from among more than 4000 confirmed exoplanets at present, based on astrophysical parameters, as well as the natural history of known life forms on the Earth.\nHabitability outside the HZ\nLiquid-water environments have been found to exist in the absence of atmospheric pressure and at temperatures outside the HZ temperature range. For example, Saturn's moons Titan and Enceladus and Jupiter's moons Europa and Ganymede, all of which are outside the habitable zone, may hold large volumes of liquid water in subsurface oceans.\nOutside the HZ, tidal heating and radioactive decay are two possible heat sources that could contribute to the existence of liquid water. Abbot and Switzer (2011) put forward the possibility that subsurface water could exist on rogue planets as a result of radioactive decay-based heating and insulation by a thick surface layer of ice.\nWith some theorising that life on Earth may have actually originated in stable, subsurface habitats, it has been suggested that it may be common for wet subsurface extraterrestrial habitats such as these to 'teem with life'. On Earth itself, living organisms may be found more than below the surface."]], "corpus": "wikipedia"}
278
  {"tstamp": 1736515223.5788, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1736515220.5379, "finish": 1736515223.5788, "ip": "", "conv_id": "cdfd4fd9292c47ad9eafd5e65905866d", "model_name": "text-embedding-3-large", "prompt": "Who wrote about a guide more popular than the Encyclopedia Galactica?", "output": [["Who wrote about a guide more popular than the Encyclopedia Galactica?", "The Hitchhiker's Guide to the Galaxy (fictional)\n\nIn the original radio scripts, the Guide's voice was called the \"Narrator\" and in the 2004–2005 series, \"The Voice\". For all of the radio series and the 1981 TV series, the role was credited as \"The Book\", though this was changed to \"Narrator/The Guide\" for the 2005 movie.\nIn the first two phases of the radio series, the LP album adaptations of the first radio series and in the television series, the Guide was voiced by British actor Peter Jones. During the 2004–2005 radio series, The Guide was voiced by William Franklyn. In the film version, it was voiced by Stephen Fry. In the Hexagonal Phase of the radio series, based on the novel, And Another Thing..., the Guide was voiced by the series co-creator John Lloyd.\nOverview\nThe \"wholly remarkable\" Guide is described as being Megadodo Publications' most successful book, being more popular than The Celestial Homecare Omnibus, better-selling than 53 More Things To Do in Zero Gravity and more controversial than philosophical author Oolon Colluphid's blockbuster \"God Trilogy\" (Where God Went Wrong, Some More of God's Greatest Mistakes, and Who is this God Person Anyway?).\nIt is said to have supplanted the rival Encyclopedia Galactica as a \"standard repository of all knowledge and wisdom\" in some parts of the galaxy for two reasons, one of them being that has a slightly cheaper price, and the other that it has the words \"DON'T PANIC\" printed on its cover."]], "corpus": "wikipedia"}
279
  {"tstamp": 1736515223.5788, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1736515220.5379, "finish": 1736515223.5788, "ip": "", "conv_id": "ac64789399e84800a82cdd0c4da2c433", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "Who wrote about a guide more popular than the Encyclopedia Galactica?", "output": [["Who wrote about a guide more popular than the Encyclopedia Galactica?", "The Hitchhiker's Guide to the Galaxy (fictional)\n\nThe Hitchhiker's Guide to the Galaxy is a fictional electronic guide book in the multimedia scifi/comedy series of the same name by Douglas Adams. The Guide serves as \"the standard repository for all knowledge and wisdom\" for many members of the series' galaxy-spanning civilization. Entries from the guidebook are used as comic narration to bridge events and provide background information in every version of the story. The guide is published by \"Megadodo Publications\", a publishing company on Ursa Minor Beta, and it is written and edited by many characters throughout the series."]], "corpus": "wikipedia"}
280
+ {"tstamp": 1736515545.4809, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1736515543.3581, "finish": 1736515545.4809, "ip": "", "conv_id": "13d9ee620ee14bf7a264c181658b42d9", "model_name": "text-embedding-3-large", "prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: OctoPack: Instruction Tuning Code Large Language Models\n\nAbstract: Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack."]], "corpus": "arxiv"}
281
+ {"tstamp": 1736515545.4809, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1736515543.3581, "finish": 1736515545.4809, "ip": "", "conv_id": "cb5c5d2b3c2f44ac9f6d685877e8ddb4", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: CommitBERT: Commit Message Generation Using Pre-Trained Programming Language Model\n\nAbstract: Commit message is a document that summarizes source code changes in natural language. A good commit message clearly shows the source code changes, so this enhances collaboration between developers. Therefore, our work is to develop a model that automatically writes the commit message. To this end, we release 345K datasets consisting of code modification and commit messages in six programming languages (Python, PHP, Go, Java, JavaScript, and Ruby). Similar to the neural machine translation (NMT) model, using our dataset, we feed the code modification to the encoder input and the commit message to the decoder input and measure the result of the generated commit message with BLEU-4. Also, we propose the following two training methods to improve the result of generating the commit message: (1) A method of preprocessing the input to feed the code modification to the encoder input. (2) A method that uses an initial weight suitable for the code domain to reduce the gap in contextual representation between programming language (PL) and natural language (NL). Training code, dataset, and pre-trained weights are available at https://github.com/graykode/commit-autosuggestions"]], "corpus": "arxiv"}
data/retrieval_side_by_side-bf21cabf-e911-4688-b82f-7a6511b7478a.jsonl CHANGED
@@ -17,3 +17,4 @@
17
  {"tstamp": 1736417635.6321, "task_type": "retrieval", "type": "leftvote", "models": ["Alibaba-NLP/gte-Qwen2-7B-instruct", "sentence-transformers/all-MiniLM-L6-v2"], "ip": "", "0_conv_id": "aff9f6dbf03f47b5a7f77f62219f13c0", "0_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "0_prompt": "Open source framework for prompt-learning in NLP tasks", "0_output": [["Open source framework for prompt-learning in NLP tasks", "Title: OpenPrompt: An Open-source Framework for Prompt-learning\n\nAbstract: Prompt-learning has become a new paradigm in modern natural language processing, which directly adapts pre-trained language models (PLMs) to $cloze$-style prediction, autoregressive modeling, or sequence to sequence generation, resulting in promising performances on various tasks. However, no standard implementation framework of prompt-learning is proposed yet, and most existing prompt-learning codebases, often unregulated, only provide limited implementations for specific scenarios. Since there are many details such as templating strategy, initializing strategy, and verbalizing strategy, etc. need to be considered in prompt-learning, practitioners face impediments to quickly adapting the desired prompt learning methods to their applications. In this paper, we present {OpenPrompt}, a unified easy-to-use toolkit to conduct prompt-learning over PLMs. OpenPrompt is a research-friendly framework that is equipped with efficiency, modularity, and extendibility, and its combinability allows the freedom to combine different PLMs, task formats, and prompting modules in a unified paradigm. Users could expediently deploy prompt-learning frameworks and evaluate the generalization of them on different NLP tasks without constraints. OpenPrompt is publicly released at {\\url{ https://github.com/thunlp/OpenPrompt}}."]], "0_corpus": "arxiv", "1_conv_id": "9a479d3b7d3c4d1887492407970f176e", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "Open source framework for prompt-learning in NLP tasks", "1_output": [["Open source framework for prompt-learning in NLP tasks", "Title: Pressure and intermittency in passive vector turbulence\n\nAbstract: We investigate the scaling properties a model of passive vector turbulence with pressure and in the presence of a large-scale anisotropy. The leading scaling exponents of the structure functions are proven to be anomalous. The anisotropic exponents are organized in hierarchical families growing without bound with the degree of anisotropy. Nonlocality produces poles in the inertial-range dynamics corresponding to the dimensional scaling solution. The increase with the P\\'{e}clet number of hyperskewness and higher odd-dimensional ratios signals the persistence of anisotropy effects also in the inertial range."]], "1_corpus": "arxiv"}
18
  {"tstamp": 1736487164.7787, "task_type": "retrieval", "type": "rightvote", "models": ["mixedbread-ai/mxbai-embed-large-v1", "voyage-multilingual-2"], "ip": "", "0_conv_id": "702224ece89d4e6bbd00623d2d41b798", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "Which test was devised to determine whether robots can think?", "0_output": [["Which test was devised to determine whether robots can think?", "Progress in artificial intelligence\n\nSub-human Optical character recognition for printed text (nearing par-human for Latin-script typewritten text)\nObject recognition\nVarious robotics tasks that may require advances in robot hardware as well as AI, including:\nStable bipedal locomotion: Bipedal robots can walk, but are less stable than human walkers (as of 2017)\nHumanoid soccer\nSpeech recognition: \"nearly equal to human performance\" (2017)\nExplainability. Current medical systems can diagnose certain medical conditions well, but cannot explain to users why they made the diagnosis.\nMany tests of fluid intelligence (2020)\nBongard visual cognition problems, such as the Bongard-LOGO benchmark (2020)\nVisual Commonsense Reasoning (VCR) benchmark (as of 2020)\nStock market prediction: Financial data collection and processing using Machine Learning algorithms\nAngry Birds video game, as of 2020\nVarious tasks that are difficult to solve without contextual knowledge, including:\nTranslation\nWord-sense disambiguation\nProposed tests of artificial intelligence\nIn his famous Turing test, Alan Turing picked language, the defining feature of human beings, for its basis. The Turing test is now considered too exploitable to be a meaningful benchmark.\nThe Feigenbaum test, proposed by the inventor of expert systems, tests a machine's knowledge and expertise about a specific subject. A paper by Jim Gray of Microsoft in 2003 suggested extending the Turing test to speech understanding, speaking and recognizing objects and behavior."]], "0_corpus": "wikipedia", "1_conv_id": "c71b5ec0100b4e1f849a3dc4a8e8b5c5", "1_model_name": "voyage-multilingual-2", "1_prompt": "Which test was devised to determine whether robots can think?", "1_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe test was introduced by Turing in his 1950 paper \"Computing Machinery and Intelligence\" while working at the University of Manchester. It opens with the words: \"I propose to consider the question, 'Can machines think? Because \"thinking\" is difficult to define, Turing chooses to \"replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.\" Turing describes the new form of the problem in terms of a three-person game called the \"imitation game\", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: \"Are there imaginable digital computers which would do well in the imitation game?\" This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that \"machines can think\".\nSince Turing introduced his test, it has been both highly influential and widely criticized, and has become an important concept in the philosophy of artificial intelligence. 
Philosopher John Searle would comment on the Turing test in his Chinese room argument, a thought experiment that stipulates that a machine cannot have a \"mind\", \"understanding\", or \"consciousness\", regardless of how intelligently or human-like the program may make the computer behave. Searle criticizes Turing's test and claims it is insufficient to detect the presence of consciousness.\nHistory"]], "1_corpus": "wikipedia"}
19
  {"tstamp": 1736515389.11, "task_type": "retrieval", "type": "leftvote", "models": ["text-embedding-3-large", "BAAI/bge-large-en-v1.5"], "ip": "", "0_conv_id": "cdfd4fd9292c47ad9eafd5e65905866d", "0_model_name": "text-embedding-3-large", "0_prompt": "Who wrote about a guide more popular than the Encyclopedia Galactica?", "0_output": [["Who wrote about a guide more popular than the Encyclopedia Galactica?", "The Hitchhiker's Guide to the Galaxy (fictional)\n\nIn the original radio scripts, the Guide's voice was called the \"Narrator\" and in the 2004–2005 series, \"The Voice\". For all of the radio series and the 1981 TV series, the role was credited as \"The Book\", though this was changed to \"Narrator/The Guide\" for the 2005 movie.\nIn the first two phases of the radio series, the LP album adaptations of the first radio series and in the television series, the Guide was voiced by British actor Peter Jones. During the 2004–2005 radio series, The Guide was voiced by William Franklyn. In the film version, it was voiced by Stephen Fry. In the Hexagonal Phase of the radio series, based on the novel, And Another Thing..., the Guide was voiced by the series co-creator John Lloyd.\nOverview\nThe \"wholly remarkable\" Guide is described as being Megadodo Publications' most successful book, being more popular than The Celestial Homecare Omnibus, better-selling than 53 More Things To Do in Zero Gravity and more controversial than philosophical author Oolon Colluphid's blockbuster \"God Trilogy\" (Where God Went Wrong, Some More of God's Greatest Mistakes, and Who is this God Person Anyway?).\nIt is said to have supplanted the rival Encyclopedia Galactica as a \"standard repository of all knowledge and wisdom\" in some parts of the galaxy for two reasons, one of them being that has a slightly cheaper price, and the other that it has the words \"DON'T PANIC\" printed on its cover."]], "0_corpus": "wikipedia", "1_conv_id": "ac64789399e84800a82cdd0c4da2c433", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "Who wrote about a guide more popular than the Encyclopedia Galactica?", "1_output": [["Who wrote about a guide more popular than the Encyclopedia Galactica?", "The Hitchhiker's Guide to the Galaxy (fictional)\n\nThe Hitchhiker's Guide to the Galaxy is a fictional electronic guide book in the multimedia scifi/comedy series of the same name by Douglas Adams. The Guide serves as \"the standard repository for all knowledge and wisdom\" for many members of the series' galaxy-spanning civilization. Entries from the guidebook are used as comic narration to bridge events and provide background information in every version of the story. The guide is published by \"Megadodo Publications\", a publishing company on Ursa Minor Beta, and it is written and edited by many characters throughout the series."]], "1_corpus": "wikipedia"}
20
+ {"tstamp": 1736515551.7303, "task_type": "retrieval", "type": "tievote", "models": ["text-embedding-3-large", "BAAI/bge-large-en-v1.5"], "ip": "", "0_conv_id": "13d9ee620ee14bf7a264c181658b42d9", "0_model_name": "text-embedding-3-large", "0_prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "0_output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: OctoPack: Instruction Tuning Code Large Language Models\n\nAbstract: Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack."]], "0_corpus": "arxiv", "1_conv_id": "cb5c5d2b3c2f44ac9f6d685877e8ddb4", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "1_output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: CommitBERT: Commit Message Generation Using Pre-Trained Programming Language Model\n\nAbstract: Commit message is a document that summarizes source code changes in natural language. A good commit message clearly shows the source code changes, so this enhances collaboration between developers. Therefore, our work is to develop a model that automatically writes the commit message. To this end, we release 345K datasets consisting of code modification and commit messages in six programming languages (Python, PHP, Go, Java, JavaScript, and Ruby). Similar to the neural machine translation (NMT) model, using our dataset, we feed the code modification to the encoder input and the commit message to the decoder input and measure the result of the generated commit message with BLEU-4. Also, we propose the following two training methods to improve the result of generating the commit message: (1) A method of preprocessing the input to feed the code modification to the encoder input. (2) A method that uses an initial weight suitable for the code domain to reduce the gap in contextual representation between programming language (PL) and natural language (NL). Training code, dataset, and pre-trained weights are available at https://github.com/graykode/commit-autosuggestions"]], "1_corpus": "arxiv"}