Scheduled Commit
data/retrieval_individual-2d705d76-7cd3-4951-9d95-1d5b5bc6d548.jsonl
CHANGED
@@ -263,3 +263,7 @@
263   {"tstamp": 1731853168.1512, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1731853167.7416, "finish": 1731853168.1512, "ip": "", "conv_id": "297aa5fb4ca64a34a45ee2e01f2e7c79", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "software developer java", "output": [["software developer java", "Stackoverflow Stackexchange\n\nQ: svgwrite - How to enable write/add other objects as layers of an existing .svg file I have an svg file and I want to insert other svg object inside this existent.\nimport svgwrite\n\ndwg = svgwrite.Drawing('model.svg')\nsquare = dwg.add(dwg.rect(20,20),(80,80), fill='blue'))\ndwg.save()\n\nit returns a new file with this shape ignoring my previous file.\nHow could I write this?\nThank's\n\nA: The svgwrite library doesn't support this -- its purpose is to create new SVG files, not work with existing ones. Looking at the source for the Drawing class, you can see that when you save your drawing, it opens the file for writing and truncates; anything that was previously in that file is lost as a result:\ndef save(self):\n \"\"\" Write the XML string to **filename**. \"\"\"\n fileobj = io.open(self.filename, mode='w', encoding='utf-8')\n self.write(fileobj)\n fileobj.close()\n\n"]], "corpus": "stackexchange"}
264   {"tstamp": 1731857767.7957, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1731857765.4854, "finish": 1731857767.7957, "ip": "", "conv_id": "abc7cbed21ab4b1a9b5224247b5e1dbc", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "corpus": "wikipedia"}
265   {"tstamp": 1731857767.7957, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1731857765.4854, "finish": 1731857767.7957, "ip": "", "conv_id": "3d42482f8e484d26b808906f48dbffdf", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "corpus": "wikipedia"}
266 + {"tstamp": 1731898529.0948, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1731898526.931, "finish": 1731898529.0948, "ip": "", "conv_id": "e22ef6bc42344826a2cf3a85cb4f74a7", "model_name": "text-embedding-004", "prompt": "I am investigating the impact of integrating random and generic data on the memorization capabilities of large language models, specifically focusing on the REMIX subtopic. How does this integration affect model parameters and gradients, and what scenarios illustrate its potential to enhance the retention of specialized knowledge without compromising original capabilities?", "output": [["I am investigating the impact of integrating random and generic data on the memorization capabilities of large language models, specifically focusing on the REMIX subtopic. How does this integration affect model parameters and gradients, and what scenarios illustrate its potential to enhance the retention of specialized knowledge without compromising original capabilities?", "Title: Evaluating the External and Parametric Knowledge Fusion of Large Language Models\n\nAbstract: Integrating external knowledge into large language models (LLMs) presents a promising solution to overcome the limitations imposed by their antiquated and static parametric memory. Prior studies, however, have tended to over-reliance on external knowledge, underestimating the valuable contributions of an LLMs' intrinsic parametric knowledge. The efficacy of LLMs in blending external and parametric knowledge remains largely unexplored, especially in cases where external knowledge is incomplete and necessitates supplementation by their parametric knowledge. We propose to deconstruct knowledge fusion into four distinct scenarios, offering the first thorough investigation of LLM behavior across each. We develop a systematic pipeline for data construction and knowledge infusion to simulate these fusion scenarios, facilitating a series of controlled experiments. Our investigation reveals that enhancing parametric knowledge within LLMs can significantly bolster their capability for knowledge integration. Nonetheless, we identify persistent challenges in memorizing and eliciting parametric knowledge, and determining parametric knowledge boundaries. Our findings aim to steer future explorations on harmonizing external and parametric knowledge within LLMs."]], "corpus": "arxiv"}
267 + {"tstamp": 1731898529.0948, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1731898526.931, "finish": 1731898529.0948, "ip": "", "conv_id": "9c12823134014bbc8571b7624d0efb4f", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "I am investigating the impact of integrating random and generic data on the memorization capabilities of large language models, specifically focusing on the REMIX subtopic. How does this integration affect model parameters and gradients, and what scenarios illustrate its potential to enhance the retention of specialized knowledge without compromising original capabilities?", "output": [["I am investigating the impact of integrating random and generic data on the memorization capabilities of large language models, specifically focusing on the REMIX subtopic. How does this integration affect model parameters and gradients, and what scenarios illustrate its potential to enhance the retention of specialized knowledge without compromising original capabilities?", "Title: When Not to Trust Language Models: Investigating Effectiveness of Parametric and Non-Parametric Memories\n\nAbstract: Despite their impressive performance on diverse tasks, large language models (LMs) still struggle with tasks requiring rich world knowledge, implying the limitations of relying solely on their parameters to encode a wealth of world knowledge. This paper aims to understand LMs' strengths and limitations in memorizing factual knowledge, by conducting large-scale knowledge probing experiments of 10 models and 4 augmentation methods on PopQA, our new open-domain QA dataset with 14k questions. We find that LMs struggle with less popular factual knowledge, and that scaling fails to appreciably improve memorization of factual knowledge in the long tail. We then show that retrieval-augmented LMs largely outperform orders of magnitude larger LMs, while unassisted LMs remain competitive in questions about high-popularity entities. Based on those findings, we devise a simple, yet effective, method for powerful and efficient retrieval-augmented LMs, which retrieves non-parametric memories only when necessary. Experimental results show that this significantly improves models' performance while reducing the inference costs."]], "corpus": "arxiv"}
268 + {"tstamp": 1731898573.7798, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1731898573.5183, "finish": 1731898573.7798, "ip": "", "conv_id": "ce6995b436d74fb6997586cf004aecfb", "model_name": "GritLM/GritLM-7B", "prompt": "I am investigating the impact of integrating random and generic data on the memorization capabilities of large language models, specifically focusing on the REMIX subtopic. How does this integration affect model parameters and gradients, and what scenarios illustrate its potential to enhance the retention of specialized knowledge without compromising original capabilities?", "output": [["I am investigating the impact of integrating random and generic data on the memorization capabilities of large language models, specifically focusing on the REMIX subtopic. How does this integration affect model parameters and gradients, and what scenarios illustrate its potential to enhance the retention of specialized knowledge without compromising original capabilities?", "Title: Beyond Memorization: The Challenge of Random Memory Access in Language Models\n\nAbstract: Recent developments in Language Models (LMs) have shown their effectiveness in NLP tasks, particularly in knowledge-intensive tasks. However, the mechanisms underlying knowledge storage and memory access within their parameters remain elusive. In this paper, we investigate whether a generative LM (e.g., GPT-2) is able to access its memory sequentially or randomly. Through carefully-designed synthetic tasks, covering the scenarios of full recitation, selective recitation and grounded question answering, we reveal that LMs manage to sequentially access their memory while encountering challenges in randomly accessing memorized content. We find that techniques including recitation and permutation improve the random memory access capability of LMs. Furthermore, by applying this intervention to realistic scenarios of open-domain question answering, we validate that enhancing random access by recitation leads to notable improvements in question answering. The code to reproduce our experiments can be found at https://github.com/sail-sg/lm-random-memory-access."]], "corpus": "arxiv"}
269 + {"tstamp": 1731898573.7798, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1731898573.5183, "finish": 1731898573.7798, "ip": "", "conv_id": "4869a57a566741e69f3b38a53b492de3", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "I am investigating the impact of integrating random and generic data on the memorization capabilities of large language models, specifically focusing on the REMIX subtopic. How does this integration affect model parameters and gradients, and what scenarios illustrate its potential to enhance the retention of specialized knowledge without compromising original capabilities?", "output": [["I am investigating the impact of integrating random and generic data on the memorization capabilities of large language models, specifically focusing on the REMIX subtopic. How does this integration affect model parameters and gradients, and what scenarios illustrate its potential to enhance the retention of specialized knowledge without compromising original capabilities?", "Title: Beyond Memorization: The Challenge of Random Memory Access in Language Models\n\nAbstract: Recent developments in Language Models (LMs) have shown their effectiveness in NLP tasks, particularly in knowledge-intensive tasks. However, the mechanisms underlying knowledge storage and memory access within their parameters remain elusive. In this paper, we investigate whether a generative LM (e.g., GPT-2) is able to access its memory sequentially or randomly. Through carefully-designed synthetic tasks, covering the scenarios of full recitation, selective recitation and grounded question answering, we reveal that LMs manage to sequentially access their memory while encountering challenges in randomly accessing memorized content. We find that techniques including recitation and permutation improve the random memory access capability of LMs. Furthermore, by applying this intervention to realistic scenarios of open-domain question answering, we validate that enhancing random access by recitation leads to notable improvements in question answering. The code to reproduce our experiments can be found at https://github.com/sail-sg/lm-random-memory-access."]], "corpus": "arxiv"}
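Each appended row is a self-contained JSON object with the same schema as the existing records: Unix timestamps (tstamp, start, finish), the embedding model used (model, model_name), the user prompt, an output list of [query, retrieved_passage] pairs, and the corpus that was searched. A minimal sketch of reading the file back, assuming only the field names visible in the rows above (the script is illustrative and not part of this commit):

import json

# Path as it appears in the header of this commit; adjust for a local checkout.
path = "data/retrieval_individual-2d705d76-7cd3-4951-9d95-1d5b5bc6d548.jsonl"

with open(path, encoding="utf-8") as f:
    for raw in f:
        if not raw.strip():
            continue  # tolerate blank lines
        record = json.loads(raw)
        # "output" holds [query, retrieved_passage] pairs; the passage text
        # begins with the title of its source document.
        query, passage = record["output"][0]
        latency = record["finish"] - record["start"]
        print(f'{record["model_name"]} ({record["corpus"]}, {latency:.2f}s)')
        print(f'  top hit: {passage.splitlines()[0]}')

For the four rows added here, the last line would print the arXiv title of the top retrieved abstract, e.g. "Title: Beyond Memorization: The Challenge of Random Memory Access in Language Models".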