Scheduled Commit
data/retrieval_battle-8abaf4d0-63ed-4c7b-8187-1a073a1031a0.jsonl
CHANGED
@@ -1,2 +1,3 @@
{"tstamp": 1725573861.126, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "bd34d35404de46a091a5e843e7b18bbe", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "can opaque objects refract light?", "0_output": [["can opaque objects refract light?", "Physics Stackexchange\n\nQ: How do opaque materials have an index of refraction? The index of refraction defines how much light is bent as it passes through an object, right? So how can opaque objects, which don't transmit light, have a value for the index of refraction? (e.g., Blender Docs lists the IoR of steel as 2.5)\n\nA: Index of refraction refers to the speed of light in a material, which comes up when determining how much light is reflected vs. refracted. In an opaque material the refracted light is absorbed, but the intensity of reflection still depends on the illumination angle.\nFor example, light reflected at Brewster's angle is completely polarized in the plane of the surface, because the angle between the reflected and refracted light is 90 degrees. The effect is easy to see in the polished floor of a long hallway if you have polarizing sunglasses. If you can find some physical polished steel and it has refractive index 2.5, you should see completely polarized light reflected at a 22 degree angle from the horizontal.\n"]], "0_corpus": "stackexchange", "1_conv_id": "3a0628641b7b4139aab29d82bdc80f09", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "can opaque objects refract light?", "1_output": [["can opaque objects refract light?", "Physics Stackexchange\n\nQ: How do opaque materials have an index of refraction? The index of refraction defines how much light is bent as it passes through an object, right? So how can opaque objects, which don't transmit light, have a value for the index of refraction? (e.g., Blender Docs lists the IoR of steel as 2.5)\n\nA: Index of refraction refers to the speed of light in a material, which comes up when determining how much light is reflected vs. refracted. In an opaque material the refracted light is absorbed, but the intensity of reflection still depends on the illumination angle.\nFor example, light reflected at Brewster's angle is completely polarized in the plane of the surface, because the angle between the reflected and refracted light is 90 degrees. The effect is easy to see in the polished floor of a long hallway if you have polarizing sunglasses. If you can find some physical polished steel and it has refractive index 2.5, you should see completely polarized light reflected at a 22 degree angle from the horizontal.\n"]], "1_corpus": "stackexchange"}
{"tstamp": 1725574039.6484, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "84e89a4c74b84851939eba6bec6ef05f", "0_model_name": "voyage-multilingual-2", "0_prompt": "Looking for a paper on how influence functions can analyze the generalization patterns of large language models.", "0_output": [["Looking for a paper on how influence functions can analyze the generalization patterns of large language models.", "Title: Studying Large Language Model Generalization with Influence Functions\n\nAbstract: When trying to gain better visibility into a machine learning model in order to understand and mitigate the associated risks, a potentially valuable source of evidence is: which training examples most contribute to a given behavior? Influence functions aim to answer a counterfactual: how would the model's parameters (and hence its outputs) change if a given sequence were added to the training set? While influence functions have produced insights for small models, they are difficult to scale to large language models (LLMs) due to the difficulty of computing an inverse-Hessian-vector product (IHVP). We use the Eigenvalue-corrected Kronecker-Factored Approximate Curvature (EK-FAC) approximation to scale influence functions up to LLMs with up to 52 billion parameters. In our experiments, EK-FAC achieves similar accuracy to traditional influence function estimators despite the IHVP computation being orders of magnitude faster. We investigate two algorithmic techniques to reduce the cost of computing gradients of candidate training sequences: TF-IDF filtering and query batching. We use influence functions to investigate the generalization patterns of LLMs, including the sparsity of the influence patterns, increasing abstraction with scale, math and programming abilities, cross-lingual generalization, and role-playing behavior. Despite many apparently sophisticated forms of generalization, we identify a surprising limitation: influences decay to near-zero when the order of key phrases is flipped. Overall, influence functions give us a powerful new tool for studying the generalization properties of LLMs."]], "0_corpus": "arxiv", "1_conv_id": "d07546bc228d40d8bab7af1410f664a8", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "Looking for a paper on how influence functions can analyze the generalization patterns of large language models.", "1_output": [["Looking for a paper on how influence functions can analyze the generalization patterns of large language models.", "Title: Studying Large Language Model Generalization with Influence Functions\n\nAbstract: When trying to gain better visibility into a machine learning model in order to understand and mitigate the associated risks, a potentially valuable source of evidence is: which training examples most contribute to a given behavior? Influence functions aim to answer a counterfactual: how would the model's parameters (and hence its outputs) change if a given sequence were added to the training set? While influence functions have produced insights for small models, they are difficult to scale to large language models (LLMs) due to the difficulty of computing an inverse-Hessian-vector product (IHVP). We use the Eigenvalue-corrected Kronecker-Factored Approximate Curvature (EK-FAC) approximation to scale influence functions up to LLMs with up to 52 billion parameters. In our experiments, EK-FAC achieves similar accuracy to traditional influence function estimators despite the IHVP computation being orders of magnitude faster. We investigate two algorithmic techniques to reduce the cost of computing gradients of candidate training sequences: TF-IDF filtering and query batching. We use influence functions to investigate the generalization patterns of LLMs, including the sparsity of the influence patterns, increasing abstraction with scale, math and programming abilities, cross-lingual generalization, and role-playing behavior. Despite many apparently sophisticated forms of generalization, we identify a surprising limitation: influences decay to near-zero when the order of key phrases is flipped. Overall, influence functions give us a powerful new tool for studying the generalization properties of LLMs."]], "1_corpus": "arxiv"}
+{"tstamp": 1725589646.7829, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "11d4ad19e5c448a69cf0bb410f31a20c", "0_model_name": "Salesforce/SFR-Embedding-2_R", "0_prompt": "how many tablespoons is 3 cloves of garlic?", "0_output": [["how many tablespoons is 3 cloves of garlic?", "Ell Stackexchange\n\nQ: How many pieces of garlic? How much garlic? uncountable noun I learned that we should say\n\nHow much gravel do you have now?\n\nsince \"gravel\" is an uncountable noun, but it's also okay to say\n\nHow many pieces of gravel do you have now?\n\nThen, how about garlic, which is another uncountable noun?\nI know I can say two cloves of garlic,\nbut what should I say if it's a whole piece of garlic?\n\nHow much garlic do you want?\nHow many pieces of garlic do you want?\n\nAre both okay?\n\nA: Garlic is a bulb, and referred to as such. When shopping you can ask for \"three bulbs of garlic\". You can also say \"garlic head\" or \"head of garlic\". For example, \"Wrap each garlic head in foil and roast for 30 minutes\".\n"]], "0_corpus": "stackexchange", "1_conv_id": "47de5ffc5c2a4abfb5f1387677c0f57b", "1_model_name": "BM25", "1_prompt": "how many tablespoons is 3 cloves of garlic?", "1_output": [["how many tablespoons is 3 cloves of garlic?", "Ell Stackexchange\n\nQ: How many pieces of garlic? How much garlic? uncountable noun I learned that we should say\n\nHow much gravel do you have now?\n\nsince \"gravel\" is an uncountable noun, but it's also okay to say\n\nHow many pieces of gravel do you have now?\n\nThen, how about garlic, which is another uncountable noun?\nI know I can say two cloves of garlic,\nbut what should I say if it's a whole piece of garlic?\n\nHow much garlic do you want?\nHow many pieces of garlic do you want?\n\nAre both okay?\n\nA: Garlic is a bulb, and referred to as such. When shopping you can ask for \"three bulbs of garlic\". You can also say \"garlic head\" or \"head of garlic\". For example, \"Wrap each garlic head in foil and roast for 30 minutes\".\n"]], "1_corpus": "stackexchange"}