Scheduled Commit
data/retrieval_battle-0405e71f-a8d1-4211-85fa-8ad0c343a2d1.jsonl
CHANGED
@@ -1 +1,3 @@
{"tstamp": 1728526515.9646, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "9290ab22d9c7475ea51e180c362b4e28", "0_model_name": "text-embedding-3-large", "0_prompt": "Paper evaluating reasoning abilities of language models in multiple languages and introducing MGSM benchmark ", "0_output": [["Paper evaluating reasoning abilities of language models in multiple languages and introducing MGSM benchmark ", "Title: Language Models are Multilingual Chain-of-Thought Reasoners\n\nAbstract: We evaluate the reasoning abilities of large language models in multilingual settings. We introduce the Multilingual Grade School Math (MGSM) benchmark, by manually translating 250 grade-school math problems from the GSM8K dataset (Cobbe et al., 2021) into ten typologically diverse languages. We find that the ability to solve MGSM problems via chain-of-thought prompting emerges with increasing model scale, and that models have strikingly strong multilingual reasoning abilities, even in underrepresented languages such as Bengali and Swahili. Finally, we show that the multilingual reasoning abilities of language models extend to other tasks such as commonsense reasoning and word-in-context semantic judgment. The MGSM benchmark is publicly available at https://github.com/google-research/url-nlp."]], "0_corpus": "arxiv", "1_conv_id": "54128236418142079f9d517904464def", "1_model_name": "text-embedding-004", "1_prompt": "Paper evaluating reasoning abilities of language models in multiple languages and introducing MGSM benchmark ", "1_output": [["Paper evaluating reasoning abilities of language models in multiple languages and introducing MGSM benchmark ", "Title: Language Models are Multilingual Chain-of-Thought Reasoners\n\nAbstract: We evaluate the reasoning abilities of large language models in multilingual settings. We introduce the Multilingual Grade School Math (MGSM) benchmark, by manually translating 250 grade-school math problems from the GSM8K dataset (Cobbe et al., 2021) into ten typologically diverse languages. We find that the ability to solve MGSM problems via chain-of-thought prompting emerges with increasing model scale, and that models have strikingly strong multilingual reasoning abilities, even in underrepresented languages such as Bengali and Swahili. Finally, we show that the multilingual reasoning abilities of language models extend to other tasks such as commonsense reasoning and word-in-context semantic judgment. The MGSM benchmark is publicly available at https://github.com/google-research/url-nlp."]], "1_corpus": "arxiv"}
+{"tstamp": 1728527384.4474, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "a951c573f73540528a5f52a10b81e7cc", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "do i need a visa for a connecting flight in japan?", "0_output": [["do i need a visa for a connecting flight in japan?", "Meta Stackexchange\n\nQ: Hit by the spam detector on Travel.SE I want to edit this answer of mine on Travel.SE to add a paragraph about how to purchase a certain kind of train ticket in Japan, and I am hit by the spam detector. A screenshot of the added section follows. My guess for why it is rejected is that it contains a fair amount of Chinese characters (which are also used in Japan) which I include for reference to assist communication between a non-Japanese-speaking traveler and a ticket office clerk. If this is the case, I believe this is a problem with the spam detector and not with my answer, as this is not an unreasonable thing to post on a travel-related website.\n\n\n\n\n\nA: These are enabled and disabled frequently to deal with large waves of incoming spam.\nI've disabled it for now to allow you to post your answer; it may be re-enabled if needed.\n"]], "0_corpus": "stackexchange", "1_conv_id": "0d88c6c6275246bdb436079e83116613", "1_model_name": "BM25", "1_prompt": "do i need a visa for a connecting flight in japan?", "1_output": [["do i need a visa for a connecting flight in japan?", "Stackoverflow Stackexchange\n\nQ: How to return a wrong message if there is no device connected when I use VISA? For exmaple, as follows, I can simply initialize my device by using following code if my device is connected properly.\nfrom visa import *\nmy_instrument = instrument(\"GPIB::14\")\n\nBut what if the device is not connected to the computer? What I want to do is that before I initialize the device, firstly I want to check whether the device is connected properly? How to achieve that?\n\nA: You could do it two ways:\n1) Check if it is in the get_instruments_list()\nfrom visa import *\nmy_instrument_name = \"GPIB::14\"\nif my_instrument_name in visa.get_instruments_list():\n print('Instrument exists connecting to it')\n my_instrument = instrument(my_instrument_name)\nelse:\n print('Instrument not found, not connecting')\n\n2) Try to connect and catch the exception, you will need to wait for the timeout to occur \nfrom visa import *\nmy_instrument_name = \"GPIB::14\"\ntry:\n my_instrument = instrument(my_instrument_name)\n print('Instrument connected')\nexcept(visa.VisaIOError):\n print('Instrument not connected (timeout error)')\n\n"]], "1_corpus": "stackexchange"}
+{"tstamp": 1728527397.0709, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "cc6a9c93730c43f599cbb130dbfb9457", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "access is denied windows 7 administrator?", "0_output": [["access is denied windows 7 administrator?", "Superuser Stackexchange\n\nQ: Access denined to Administrator on Windows7 Home edition I have Windows7 Home edition on my home pc. I am the administrator of the machine. I recently changed my own password from control panel and changed permissions for another user. Since then I am getting access denied on some of the files I had created myself in a folder named C:\\data. The file (dbmaintain.dll created by me using Visual Stidio 2008 C++) shows that my userid has Full Control and also that I have ownership of the object. Yet I cannot make a copy of this file from command prompt or windows explorer running in administrative mode. There are serveral other files in the same folder which have the same problem.\n"]], "0_corpus": "stackexchange", "1_conv_id": "58a3b76a57b94d80b8140e5109b147ea", "1_model_name": "intfloat/multilingual-e5-large-instruct", "1_prompt": "access is denied windows 7 administrator?", "1_output": [["access is denied windows 7 administrator?", "Superuser Stackexchange\n\nQ: Access denined to Administrator on Windows7 Home edition I have Windows7 Home edition on my home pc. I am the administrator of the machine. I recently changed my own password from control panel and changed permissions for another user. Since then I am getting access denied on some of the files I had created myself in a folder named C:\\data. The file (dbmaintain.dll created by me using Visual Stidio 2008 C++) shows that my userid has Full Control and also that I have ownership of the object. Yet I cannot make a copy of this file from command prompt or windows explorer running in administrative mode. There are serveral other files in the same folder which have the same problem.\n"]], "1_corpus": "stackexchange"}
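The battle records above share one schema per line: a vote `type` (`tievote`, `bothbadvote`) plus per-side fields such as `0_model_name` and `1_model_name`. A minimal sketch for tallying votes by model pair from this file, assuming it is UTF-8 JSONL with one record per line as shown (the helper name `tally_battles` is illustrative, not part of any repo tooling):

```python
import json
from collections import Counter

def tally_battles(path):
    # Field names ("type", "0_model_name", "1_model_name") are taken
    # directly from the records in the diff above.
    votes = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:  # tolerate blank lines between records
                continue
            rec = json.loads(line)
            pair = (rec["0_model_name"], rec["1_model_name"])
            votes[(pair, rec["type"])] += 1
    return votes

for (pair, vote), n in tally_battles(
    "data/retrieval_battle-0405e71f-a8d1-4211-85fa-8ad0c343a2d1.jsonl"
).items():
    print(f"{pair[0]} vs {pair[1]}: {vote} x{n}")
```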
data/retrieval_individual-0405e71f-a8d1-4211-85fa-8ad0c343a2d1.jsonl
CHANGED
@@ -2,3 +2,7 @@
{"tstamp": 1728526461.5118, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1728526460.0837, "finish": 1728526461.5118, "ip": "", "conv_id": "54128236418142079f9d517904464def", "model_name": "text-embedding-004", "prompt": "Paper evaluating reasoning abilities of language models in multiple languages and introducing MGSM benchmark ", "output": [["Paper evaluating reasoning abilities of language models in multiple languages and introducing MGSM benchmark ", "Title: Language Models are Multilingual Chain-of-Thought Reasoners\n\nAbstract: We evaluate the reasoning abilities of large language models in multilingual settings. We introduce the Multilingual Grade School Math (MGSM) benchmark, by manually translating 250 grade-school math problems from the GSM8K dataset (Cobbe et al., 2021) into ten typologically diverse languages. We find that the ability to solve MGSM problems via chain-of-thought prompting emerges with increasing model scale, and that models have strikingly strong multilingual reasoning abilities, even in underrepresented languages such as Bengali and Swahili. Finally, we show that the multilingual reasoning abilities of language models extend to other tasks such as commonsense reasoning and word-in-context semantic judgment. The MGSM benchmark is publicly available at https://github.com/google-research/url-nlp."]], "corpus": "arxiv"}
{"tstamp": 1728526571.3535, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1728526570.1103, "finish": 1728526571.3535, "ip": "", "conv_id": "3c3a59d2689e4c9b87adf1e3d200f9f6", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "corpus": "arxiv"}
{"tstamp": 1728526571.3535, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1728526570.1103, "finish": 1728526571.3535, "ip": "", "conv_id": "db6206a09d1d4c168cad5d7b421a63ba", "model_name": "text-embedding-004", "prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "corpus": "arxiv"}
+{"tstamp": 1728527369.3239, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1728527367.7508, "finish": 1728527369.3239, "ip": "", "conv_id": "a951c573f73540528a5f52a10b81e7cc", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "do i need a visa for a connecting flight in japan?", "output": [["do i need a visa for a connecting flight in japan?", "Meta Stackexchange\n\nQ: Hit by the spam detector on Travel.SE I want to edit this answer of mine on Travel.SE to add a paragraph about how to purchase a certain kind of train ticket in Japan, and I am hit by the spam detector. A screenshot of the added section follows. My guess for why it is rejected is that it contains a fair amount of Chinese characters (which are also used in Japan) which I include for reference to assist communication between a non-Japanese-speaking traveler and a ticket office clerk. If this is the case, I believe this is a problem with the spam detector and not with my answer, as this is not an unreasonable thing to post on a travel-related website.\n\n\n\n\n\nA: These are enabled and disabled frequently to deal with large waves of incoming spam.\nI've disabled it for now to allow you to post your answer; it may be re-enabled if needed.\n"]], "corpus": "stackexchange"}
+{"tstamp": 1728527369.3239, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1728527367.7508, "finish": 1728527369.3239, "ip": "", "conv_id": "0d88c6c6275246bdb436079e83116613", "model_name": "BM25", "prompt": "do i need a visa for a connecting flight in japan?", "output": [["do i need a visa for a connecting flight in japan?", "Stackoverflow Stackexchange\n\nQ: How to return a wrong message if there is no device connected when I use VISA? For exmaple, as follows, I can simply initialize my device by using following code if my device is connected properly.\nfrom visa import *\nmy_instrument = instrument(\"GPIB::14\")\n\nBut what if the device is not connected to the computer? What I want to do is that before I initialize the device, firstly I want to check whether the device is connected properly? How to achieve that?\n\nA: You could do it two ways:\n1) Check if it is in the get_instruments_list()\nfrom visa import *\nmy_instrument_name = \"GPIB::14\"\nif my_instrument_name in visa.get_instruments_list():\n print('Instrument exists connecting to it')\n my_instrument = instrument(my_instrument_name)\nelse:\n print('Instrument not found, not connecting')\n\n2) Try to connect and catch the exception, you will need to wait for the timeout to occur \nfrom visa import *\nmy_instrument_name = \"GPIB::14\"\ntry:\n my_instrument = instrument(my_instrument_name)\n print('Instrument connected')\nexcept(visa.VisaIOError):\n print('Instrument not connected (timeout error)')\n\n"]], "corpus": "stackexchange"}
+{"tstamp": 1728527393.8243, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1728527393.0031, "finish": 1728527393.8243, "ip": "", "conv_id": "cc6a9c93730c43f599cbb130dbfb9457", "model_name": "GritLM/GritLM-7B", "prompt": "access is denied windows 7 administrator?", "output": [["access is denied windows 7 administrator?", "Superuser Stackexchange\n\nQ: Access denined to Administrator on Windows7 Home edition I have Windows7 Home edition on my home pc. I am the administrator of the machine. I recently changed my own password from control panel and changed permissions for another user. Since then I am getting access denied on some of the files I had created myself in a folder named C:\\data. The file (dbmaintain.dll created by me using Visual Stidio 2008 C++) shows that my userid has Full Control and also that I have ownership of the object. Yet I cannot make a copy of this file from command prompt or windows explorer running in administrative mode. There are serveral other files in the same folder which have the same problem.\n"]], "corpus": "stackexchange"}
+{"tstamp": 1728527393.8243, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1728527393.0031, "finish": 1728527393.8243, "ip": "", "conv_id": "58a3b76a57b94d80b8140e5109b147ea", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "access is denied windows 7 administrator?", "output": [["access is denied windows 7 administrator?", "Superuser Stackexchange\n\nQ: Access denined to Administrator on Windows7 Home edition I have Windows7 Home edition on my home pc. I am the administrator of the machine. I recently changed my own password from control panel and changed permissions for another user. Since then I am getting access denied on some of the files I had created myself in a folder named C:\\data. The file (dbmaintain.dll created by me using Visual Stidio 2008 C++) shows that my userid has Full Control and also that I have ownership of the object. Yet I cannot make a copy of this file from command prompt or windows explorer running in administrative mode. There are serveral other files in the same folder which have the same problem.\n"]], "corpus": "stackexchange"}
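Each record in this second file logs a single retrieval call, with `model_name`, the `prompt`, and `start`/`finish` timestamps. A minimal sketch for computing mean latency per model from those timestamps, under the same one-record-per-line assumption (`mean_latency` is an illustrative name, not existing tooling):

```python
import json
from collections import defaultdict

def mean_latency(path):
    # "start" and "finish" are Unix timestamps present in every record above;
    # their difference is the wall-clock duration of one retrieval call.
    times = defaultdict(list)
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            rec = json.loads(line)
            times[rec["model_name"]].append(rec["finish"] - rec["start"])
    return {model: sum(v) / len(v) for model, v in times.items()}

for model, secs in mean_latency(
    "data/retrieval_individual-0405e71f-a8d1-4211-85fa-8ad0c343a2d1.jsonl"
).items():
    print(f"{model}: {secs:.2f}s")
```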