Scheduled Commit
data/retrieval_individual-76065722-116b-41cc-961b-aebfed552f79.jsonl
@@ -751,3 +751,5 @@
751
{"tstamp": 1728072032.4517, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1728072031.6174, "finish": 1728072032.4517, "ip": "", "conv_id": "bb130adbce034d3f9132d2463ffe825f", "model_name": "text-embedding-3-large", "prompt": "low rank that is good", "output": [["low rank that is good", "Rank\n\nA rank is a position in a hierarchy. It can be formally recognized—for example, cardinal, chief executive officer, general, professor—or unofficial.\nPeople\nFormal ranks\nAcademic rank\nCorporate title\nDiplomatic rank\nHierarchy of the Catholic Church\nImperial, royal and noble ranks\nMilitary rank\nPolice rank\nUnofficial ranks\nSocial class\nSocial position\nSocial status\nEither\nSeniority\nMathematics\nRank (differential topology)\nRank (graph theory)\nRank (linear algebra), the dimension of the vector space generated (or spanned) by a matrix's columns\nRank (set theory)\nRank (type theory)\nRank of an abelian group, the cardinality of a maximal linearly independent subset\nRank of a free module\nRank of a greedoid, the maximal size of a feasible set\nRank of a group, the smallest cardinality of a generating set for the group\nRank of a Lie group – see Cartan subgroup\nRank of a matroid, the maximal size of an independent set\nRank of a partition, at least two definitions in number theory\nRank of a tensor\nRank of a vector bundle\nRank statistics\nOther"]], "corpus": "wikipedia"}
752
{"tstamp": 1728082557.036, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1728082554.6941, "finish": 1728082557.036, "ip": "", "conv_id": "15d122194f7c4430baa95e53df80722a", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Progress in artificial intelligence\n\nSub-human Optical character recognition for printed text (nearing par-human for Latin-script typewritten text)\nObject recognition\nVarious robotics tasks that may require advances in robot hardware as well as AI, including:\nStable bipedal locomotion: Bipedal robots can walk, but are less stable than human walkers (as of 2017)\nHumanoid soccer\nSpeech recognition: \"nearly equal to human performance\" (2017)\nExplainability. Current medical systems can diagnose certain medical conditions well, but cannot explain to users why they made the diagnosis.\nMany tests of fluid intelligence (2020)\nBongard visual cognition problems, such as the Bongard-LOGO benchmark (2020)\nVisual Commonsense Reasoning (VCR) benchmark (as of 2020)\nStock market prediction: Financial data collection and processing using Machine Learning algorithms\nAngry Birds video game, as of 2020\nVarious tasks that are difficult to solve without contextual knowledge, including:\nTranslation\nWord-sense disambiguation\nProposed tests of artificial intelligence\nIn his famous Turing test, Alan Turing picked language, the defining feature of human beings, for its basis. The Turing test is now considered too exploitable to be a meaningful benchmark.\nThe Feigenbaum test, proposed by the inventor of expert systems, tests a machine's knowledge and expertise about a specific subject. A paper by Jim Gray of Microsoft in 2003 suggested extending the Turing test to speech understanding, speaking and recognizing objects and behavior."]], "corpus": "wikipedia"}
753
{"tstamp": 1728082557.036, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1728082554.6941, "finish": 1728082557.036, "ip": "", "conv_id": "0b2ee1eccea94f15967f7aca15b5b10f", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Cognitive test\n\nCognitive tests are assessments of the cognitive capabilities of humans and other animals. Tests administered to humans include various forms of IQ tests; those administered to animals include the mirror test (a test of visual self-awareness) and the T maze test (which tests learning ability). Such testing is used in psychology and psychometrics, as well as other fields studying human and animal intelligence."]], "corpus": "wikipedia"}
+754
{"tstamp": 1728118556.313, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1728118554.0868, "finish": 1728118556.313, "ip": "", "conv_id": "ffce9627d8a24ee6a1a5fa75bd2ae601", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: CommitBERT: Commit Message Generation Using Pre-Trained Programming Language Model\n\nAbstract: Commit message is a document that summarizes source code changes in natural language. A good commit message clearly shows the source code changes, so this enhances collaboration between developers. Therefore, our work is to develop a model that automatically writes the commit message. To this end, we release 345K datasets consisting of code modification and commit messages in six programming languages (Python, PHP, Go, Java, JavaScript, and Ruby). Similar to the neural machine translation (NMT) model, using our dataset, we feed the code modification to the encoder input and the commit message to the decoder input and measure the result of the generated commit message with BLEU-4. Also, we propose the following two training methods to improve the result of generating the commit message: (1) A method of preprocessing the input to feed the code modification to the encoder input. (2) A method that uses an initial weight suitable for the code domain to reduce the gap in contextual representation between programming language (PL) and natural language (NL). Training code, dataset, and pre-trained weights are available at https://github.com/graykode/commit-autosuggestions"]], "corpus": "arxiv"}
+755
{"tstamp": 1728118556.313, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1728118554.0868, "finish": 1728118556.313, "ip": "", "conv_id": "5661e16a0ee146c1acef688099f5efd8", "model_name": "GritLM/GritLM-7B", "prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: OctoPack: Instruction Tuning Code Large Language Models\n\nAbstract: Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack."]], "corpus": "arxiv"}
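For reference, a minimal Python sketch for consuming this file. It assumes only what the rows above show: the file is newline-delimited JSON, and each record carries "model_name", "prompt", and an "output" list of [query, passage] pairs.

import json

# Path exactly as it appears in this commit.
PATH = "data/retrieval_individual-76065722-116b-41cc-961b-aebfed552f79.jsonl"

with open(PATH, encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if not line:
            continue  # tolerate blank lines between records
        record = json.loads(line)
        # "output" holds [query, retrieved_passage] pairs, as in the rows above.
        query, passage = record["output"][0]
        print(f'{record["model_name"]}: {record["prompt"]!r} -> {passage[:60]}...')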