Scheduled Commit
data/retrieval_battle-d2470a18-3fc5-4a21-86ba-77a6dda751e0.jsonl
CHANGED
@@ -26,3 +26,4 @@
{"tstamp": 1725975008.6538, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "29295340ee424b259caf39a700b51a30", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "kto był pierwszym królem polski ?", "0_output": [["kto był pierwszym królem polski ?", "Kłomino\n\nKłomino () is a ghost town in Poland, in Szczecinek County, West Pomeranian Voivodeship. In the past it was a large garrison and training ground of the German Wehrmacht and after World War II it was a base of the Soviet Army and for many years it existed only on Soviet military maps. Since 1993, when the Red Army withdrew its forces from Poland, it has lain empty. Until 1992, there were 5,000 residents in Kłomino, today it is a ghost town. As of 2012, Kłomino had 12 residents. The town is located about 12 kilometres (7 miles) away from Borne Sulinowo."]], "0_corpus": "wikipedia", "1_conv_id": "d3e5d45d66e14d0ebebb878c3a51c7aa", "1_model_name": "text-embedding-3-large", "1_prompt": "kto był pierwszym królem polski ?", "1_output": [["kto był pierwszym królem polski ?", "List of national founders\n\nBolesław I was a remarkable politician, strategist, and statesman.He not only turned Poland into a country comparable to older western monarchies, but he raised it to the front rank of European states. Bolesław conducted successful military campaigns in the west, south and east. He consolidated Polish lands and conquered territories outside the borders of modern-day Poland, including Slovakia, Moravia, Red Ruthenia, Meissen, Lusatia, and Bohemia. He was a powerful mediator in Central European affairs. Finally, as the culmination of his reign, in 1025 he had himself crowned King of Poland. He was the first Polish ruler to receive the title of rex (Latin: \"king\"). Władysław II Jagiełło (c. 1352/1362 – 1434) was the Grand Duke of Lithuania (1377–1434) and then the King of Poland (1386–1434), first alongside his wife Jadwiga until 1399, and then sole King of Poland. He ruled in Lithuania from 1377. Born a pagan, in 1386 he converted to Catholicism and was baptized as Władysław in Kraków, married the young Queen Jadwiga, and was crowned King of Poland as Władysław II Jagiełło. In 1387 he converted Lithuania to Christianity. "]], "1_corpus": "wikipedia"}
{"tstamp": 1725975058.1392, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "594e49d199c840db970e395664b1160a", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "who was the first king of poland?", "0_output": [["who was the first king of poland?", "Piast dynasty\n\nThe House of Piast was the first historical ruling dynasty of Poland. The first documented Polish monarch was Duke Mieszko I (–992). The Piasts' royal rule in Poland ended in 1370 with the death of King Casimir III the Great."]], "0_corpus": "wikipedia", "1_conv_id": "edb97aa23413419e8b65cbdc90305239", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "who was the first king of poland?", "1_output": [["who was the first king of poland?", "Kingdom of Poland\n\nThe Kingdom of Poland (; Latin: Regnum Poloniae) was a monarchy in Eastern Europe during the medieval period from 1025 until 1385.\nBackground\nThe West Slavic tribe of Polans who lived in what is today the historic region of Greater Poland, gave rise to a state in the early 10th century, which would become the nascent predecessor of the Kingdom of Poland. Following the Christianization of Poland in 966, and the emergence of the Duchy of Poland during the rule of Mieszko I, his eldest son Bolesław I the Brave inherited his father's dukedom and subsequently was crowned as king.\nHistory"]], "1_corpus": "wikipedia"}
{"tstamp": 1725980120.5972, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "bb04a4df53e94e6796caf345ffea3b6e", "0_model_name": "text-embedding-3-large", "0_prompt": "Which test was devised to determine whether robots can think?", "0_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe test was introduced by Turing in his 1950 paper \"Computing Machinery and Intelligence\" while working at the University of Manchester. It opens with the words: \"I propose to consider the question, 'Can machines think? Because \"thinking\" is difficult to define, Turing chooses to \"replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.\" Turing describes the new form of the problem in terms of a three-person game called the \"imitation game\", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: \"Are there imaginable digital computers which would do well in the imitation game?\" This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that \"machines can think\".\nSince Turing introduced his test, it has been both highly influential and widely criticized, and has become an important concept in the philosophy of artificial intelligence. Philosopher John Searle would comment on the Turing test in his Chinese room argument, a thought experiment that stipulates that a machine cannot have a \"mind\", \"understanding\", or \"consciousness\", regardless of how intelligently or human-like the program may make the computer behave. Searle criticizes Turing's test and claims it is insufficient to detect the presence of consciousness.\nHistory"]], "0_corpus": "wikipedia", "1_conv_id": "4104a75b0fcd46caacf76b17552feef6", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "Which test was devised to determine whether robots can think?", "1_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "1_corpus": "wikipedia"}
+
{"tstamp": 1725982756.3539, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "a824724a9a5242deb63f1e8e6c0b57c7", "0_model_name": "text-embedding-3-large", "0_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "0_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "0_corpus": "arxiv", "1_conv_id": "c6028cf0e24d46a0914f2ce4afd639d5", "1_model_name": "embed-english-v3.0", "1_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "1_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "1_corpus": "arxiv"}
data/retrieval_individual-d2470a18-3fc5-4a21-86ba-77a6dda751e0.jsonl
CHANGED
@@ -104,3 +104,9 @@
{"tstamp": 1725980082.4862, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1725980079.764, "finish": 1725980082.4862, "ip": "", "conv_id": "4104a75b0fcd46caacf76b17552feef6", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "corpus": "wikipedia"}
{"tstamp": 1725980687.1773, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1725980684.5141, "finish": 1725980687.1773, "ip": "", "conv_id": "ef037f832027409eb0d19b870ad2e348", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "corpus": "wikipedia"}
{"tstamp": 1725980687.1773, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1725980684.5141, "finish": 1725980687.1773, "ip": "", "conv_id": "cb32ad5bff5747eaa6370765848200cc", "model_name": "BM25", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Artificial intelligence\n\nIn the late teens and early 2020s, AGI companies began to deliver programs that created enormous interest. In 2015, AlphaGo, developed by DeepMind, beat the world champion Go player. The program was taught only the rules of the game and developed strategy by itself. GPT-3 is a large language model that was released in 2020 by OpenAI and is capable of generating high-quality human-like text. These programs, and others, inspired an aggressive AI boom, where large companies began investing billions in AI research. According to AI Impacts, about $50 billion annually was invested in \"AI\" around 2022 in the U.S. alone and about 20% of the new U.S. Computer Science PhD graduates have specialized in \"AI\".\nAbout 800,000 \"AI\"-related U.S. job openings existed in 2022.\nPhilosophy\nDefining artificial intelligence\nAlan Turing wrote in 1950 \"I propose to consider the question 'can machines think'?\" He advised changing the question from whether a machine \"thinks\", to \"whether or not it is possible for machinery to show intelligent behaviour\". He devised the Turing test, which measures the ability of a machine to simulate human conversation. Since we can only observe the behavior of the machine, it does not matter if it is \"actually\" thinking or literally has a \"mind\". Turing notes that we can not determine these things about other people but \"it is usual to have a polite convention that everyone thinks.\""]], "corpus": "wikipedia"}
+
{"tstamp": 1725982665.689, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1725982663.4371, "finish": 1725982665.689, "ip": "", "conv_id": "6c8a2904d29c4201933a500685b5bba7", "model_name": "GritLM/GritLM-7B", "prompt": "hello", "output": [["hello", "Hello (disambiguation)\n\n\"Hello\", by OMFG, 2014 \"Hello\", by Purple Kiss, from the EP Into Violet, 2021\n\"Hello\", by Prince, included as a B-side of \"Pop Life\", 1985\n\"Hello\", by Prism from the album See Forever Eyes, 1978\n\"Hello\", by Rock Goddess from the album Young and Free, 1987\n\"Hello\", by Shinee from the album Lucifer, 2010\n\"Hello\", by T.I. from the album King, 2006\n\"Hello\", by T.I. from the album Trouble Man: Heavy Is the Head, 2012\n\"Hello\", by Twice from the album Formula of Love: O+T=<3\n\"Hello\", by Ty Dolla Sign from the album Campaign, 2016\n\"Hello\", by will.i.am from the album #willpower, 2013\n\"Hello\", by Zooey Deschanel from the album Trolls, 2016\n\"Hello!\", from the musical The Book of Mormon, 2011\nOther media\nHello! (magazine), a British celebrity magazine\nHello (web series), a 2017 Indian web series\nComputing and technology\nHello, a discontinued software program for Picasa\nHello, a discontinued WebRTC program for Firefox\nHello.jpg, a notorious image featured on shock site Goatse.cx\nHello (company), a sleep-tracking company\nHello (social network), a social networking service founded by Orkut Büyükkökten\nHello world program, a software programming demonstration\nWindows Hello, a feature of Microsoft Windows 10.\nOther uses\nErnest Hello (1828–1885), French critic\nHello (airline), a Swiss charter airline\nHello convention, bidding convention in bridge"]], "corpus": "wikipedia"}
+
{"tstamp": 1725982665.689, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1725982663.4371, "finish": 1725982665.689, "ip": "", "conv_id": "71ed16f9f37a41fcb443fb4080c1c6d3", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "hello", "output": [["hello", "Hello\n\nHello is a salutation or greeting in the English language. It is first attested in writing from 1826.\nEarly uses\nHello, with that spelling, was used in publications in the U.S. as early as the 18 October 1826 edition of the Norwich Courier of Norwich, Connecticut. Another early use was an 1833 American book called The Sketches and Eccentricities of Col. David Crockett, of West Tennessee, which was reprinted that same year in The London Literary Gazette. The word was extensively used in literature by the 1860s.\nEtymology\nAccording to the Oxford English Dictionary, hello is an alteration of hallo, hollo, which came from Old High German \"halâ, holâ, emphatic imperative of halôn, holôn to fetch, used especially in hailing a ferryman\". It also connects the development of hello to the influence of an earlier form, holla, whose origin is in the French holà (roughly, 'whoa there!', from French là 'there'). As in addition to hello, halloo, hallo, hollo, hullo and (rarely) hillo also exist as variants or related words, the word can be spelt using any of all five vowels.\nBill Bryson asserts in his book Mother Tongue that \"hello\" is a contraction of the Old English phrase hál béo þu (\"Hale be thou\", or \"whole be thou\", meaning a wish for good health; cf. \"goodbye\" which is a contraction of \"God be with ye\")."]], "corpus": "wikipedia"}
+
{"tstamp": 1725982748.8308, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1725982748.1356, "finish": 1725982748.8308, "ip": "", "conv_id": "a824724a9a5242deb63f1e8e6c0b57c7", "model_name": "text-embedding-3-large", "prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "corpus": "arxiv"}
+
{"tstamp": 1725982748.8308, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1725982748.1356, "finish": 1725982748.8308, "ip": "", "conv_id": "c6028cf0e24d46a0914f2ce4afd639d5", "model_name": "embed-english-v3.0", "prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "corpus": "arxiv"}
+
{"tstamp": 1725982775.7001, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1725982775.2864, "finish": 1725982775.7001, "ip": "", "conv_id": "f5cc13153a5547469e14d1680b0ec530", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: OctoPack: Instruction Tuning Code Large Language Models\n\nAbstract: Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack."]], "corpus": "arxiv"}
+
{"tstamp": 1725982775.7001, "task_type": "retrieval", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1725982775.2864, "finish": 1725982775.7001, "ip": "", "conv_id": "70b31ca9e2a340e4a099d135f53eb4e7", "model_name": "voyage-multilingual-2", "prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: OctoPack: Instruction Tuning Code Large Language Models\n\nAbstract: Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack."]], "corpus": "arxiv"}
data/retrieval_side_by_side-d2470a18-3fc5-4a21-86ba-77a6dda751e0.jsonl
CHANGED
@@ -1,2 +1,4 @@
{"tstamp": 1725872022.74, "task_type": "retrieval", "type": "leftvote", "models": ["GritLM/GritLM-7B", "BAAI/bge-large-en-v1.5"], "ip": "", "0_conv_id": "11d0ec5bc2064028a0c11c8ce0319bc7", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "Find me a planet near α 23h 06m 29.368s and δ −05° 02′ 29.04″ that could be habitable.", "0_output": [["Find me a planet near α 23h 06m 29.368s and δ −05° 02′ 29.04″ that could be habitable.", "Kepler-62f\n\nKepler-62f (also known by its Kepler Object of Interest designation KOI-701.04) is a super-Earth exoplanet orbiting within the habitable zone of the star Kepler-62, the outermost of five such planets discovered around the star by NASA's Kepler spacecraft. It is located about from Earth in the constellation of Lyra."]], "0_corpus": "wikipedia", "1_conv_id": "313d24c69fa4482e9dcfdb2b6a512ad8", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "Find me a planet near α 23h 06m 29.368s and δ −05° 02′ 29.04″ that could be habitable.", "1_output": [["Find me a planet near α 23h 06m 29.368s and δ −05° 02′ 29.04″ that could be habitable.", "Habitable zone\n\nK2-18b is an exoplanet 124 light-years away, orbiting in the habitable zone of the K2-18, a red dwarf. This planet is significant for water vapor found in its atmosphere; this was announced on September 17, 2019.\nIn September 2020, astronomers identified 24 superhabitable planet (planets better than Earth) contenders, from among more than 4000 confirmed exoplanets at present, based on astrophysical parameters, as well as the natural history of known life forms on the Earth.\nHabitability outside the HZ\nLiquid-water environments have been found to exist in the absence of atmospheric pressure and at temperatures outside the HZ temperature range. For example, Saturn's moons Titan and Enceladus and Jupiter's moons Europa and Ganymede, all of which are outside the habitable zone, may hold large volumes of liquid water in subsurface oceans.\nOutside the HZ, tidal heating and radioactive decay are two possible heat sources that could contribute to the existence of liquid water. Abbot and Switzer (2011) put forward the possibility that subsurface water could exist on rogue planets as a result of radioactive decay-based heating and insulation by a thick surface layer of ice.\nWith some theorising that life on Earth may have actually originated in stable, subsurface habitats, it has been suggested that it may be common for wet subsurface extraterrestrial habitats such as these to 'teem with life'. On Earth itself, living organisms may be found more than below the surface."]], "1_corpus": "wikipedia"}
{"tstamp": 1725979464.5931, "task_type": "retrieval", "type": "tievote", "models": ["Salesforce/SFR-Embedding-2_R", "BAAI/bge-large-en-v1.5"], "ip": "", "0_conv_id": "9e394e85b60141a49be63ca84c9b77aa", "0_model_name": "Salesforce/SFR-Embedding-2_R", "0_prompt": "Which test was devised to determine whether robots can think?", "0_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "0_corpus": "wikipedia", "1_conv_id": "a9d0e1c15bb34e3289ecd12a216fb9bb", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "Which test was devised to determine whether robots can think?", "1_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic)."]], "1_corpus": "wikipedia"}
+
{"tstamp": 1725982685.8762, "task_type": "retrieval", "type": "rightvote", "models": ["GritLM/GritLM-7B", "BAAI/bge-large-en-v1.5"], "ip": "", "0_conv_id": "6c8a2904d29c4201933a500685b5bba7", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "hello", "0_output": [["hello", "Hello (disambiguation)\n\n\"Hello\", by OMFG, 2014 \"Hello\", by Purple Kiss, from the EP Into Violet, 2021\n\"Hello\", by Prince, included as a B-side of \"Pop Life\", 1985\n\"Hello\", by Prism from the album See Forever Eyes, 1978\n\"Hello\", by Rock Goddess from the album Young and Free, 1987\n\"Hello\", by Shinee from the album Lucifer, 2010\n\"Hello\", by T.I. from the album King, 2006\n\"Hello\", by T.I. from the album Trouble Man: Heavy Is the Head, 2012\n\"Hello\", by Twice from the album Formula of Love: O+T=<3\n\"Hello\", by Ty Dolla Sign from the album Campaign, 2016\n\"Hello\", by will.i.am from the album #willpower, 2013\n\"Hello\", by Zooey Deschanel from the album Trolls, 2016\n\"Hello!\", from the musical The Book of Mormon, 2011\nOther media\nHello! (magazine), a British celebrity magazine\nHello (web series), a 2017 Indian web series\nComputing and technology\nHello, a discontinued software program for Picasa\nHello, a discontinued WebRTC program for Firefox\nHello.jpg, a notorious image featured on shock site Goatse.cx\nHello (company), a sleep-tracking company\nHello (social network), a social networking service founded by Orkut Büyükkökten\nHello world program, a software programming demonstration\nWindows Hello, a feature of Microsoft Windows 10.\nOther uses\nErnest Hello (1828–1885), French critic\nHello (airline), a Swiss charter airline\nHello convention, bidding convention in bridge"]], "0_corpus": "wikipedia", "1_conv_id": "71ed16f9f37a41fcb443fb4080c1c6d3", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "hello", "1_output": [["hello", "Hello\n\nHello is a salutation or greeting in the English language. It is first attested in writing from 1826.\nEarly uses\nHello, with that spelling, was used in publications in the U.S. as early as the 18 October 1826 edition of the Norwich Courier of Norwich, Connecticut. Another early use was an 1833 American book called The Sketches and Eccentricities of Col. David Crockett, of West Tennessee, which was reprinted that same year in The London Literary Gazette. The word was extensively used in literature by the 1860s.\nEtymology\nAccording to the Oxford English Dictionary, hello is an alteration of hallo, hollo, which came from Old High German \"halâ, holâ, emphatic imperative of halôn, holôn to fetch, used especially in hailing a ferryman\". It also connects the development of hello to the influence of an earlier form, holla, whose origin is in the French holà (roughly, 'whoa there!', from French là 'there'). As in addition to hello, halloo, hallo, hollo, hullo and (rarely) hillo also exist as variants or related words, the word can be spelt using any of all five vowels.\nBill Bryson asserts in his book Mother Tongue that \"hello\" is a contraction of the Old English phrase hál béo þu (\"Hale be thou\", or \"whole be thou\", meaning a wish for good health; cf. \"goodbye\" which is a contraction of \"God be with ye\")."]], "1_corpus": "wikipedia"}
+
{"tstamp": 1725982712.2223, "task_type": "retrieval", "type": "share", "models": ["GritLM/GritLM-7B", "BAAI/bge-large-en-v1.5"], "ip": "", "0_conv_id": "6c8a2904d29c4201933a500685b5bba7", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "hello", "0_output": [["hello", "Hello (disambiguation)\n\n\"Hello\", by OMFG, 2014 \"Hello\", by Purple Kiss, from the EP Into Violet, 2021\n\"Hello\", by Prince, included as a B-side of \"Pop Life\", 1985\n\"Hello\", by Prism from the album See Forever Eyes, 1978\n\"Hello\", by Rock Goddess from the album Young and Free, 1987\n\"Hello\", by Shinee from the album Lucifer, 2010\n\"Hello\", by T.I. from the album King, 2006\n\"Hello\", by T.I. from the album Trouble Man: Heavy Is the Head, 2012\n\"Hello\", by Twice from the album Formula of Love: O+T=<3\n\"Hello\", by Ty Dolla Sign from the album Campaign, 2016\n\"Hello\", by will.i.am from the album #willpower, 2013\n\"Hello\", by Zooey Deschanel from the album Trolls, 2016\n\"Hello!\", from the musical The Book of Mormon, 2011\nOther media\nHello! (magazine), a British celebrity magazine\nHello (web series), a 2017 Indian web series\nComputing and technology\nHello, a discontinued software program for Picasa\nHello, a discontinued WebRTC program for Firefox\nHello.jpg, a notorious image featured on shock site Goatse.cx\nHello (company), a sleep-tracking company\nHello (social network), a social networking service founded by Orkut Büyükkökten\nHello world program, a software programming demonstration\nWindows Hello, a feature of Microsoft Windows 10.\nOther uses\nErnest Hello (1828–1885), French critic\nHello (airline), a Swiss charter airline\nHello convention, bidding convention in bridge"]], "0_corpus": "wikipedia", "1_conv_id": "71ed16f9f37a41fcb443fb4080c1c6d3", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "hello", "1_output": [["hello", "Hello\n\nHello is a salutation or greeting in the English language. It is first attested in writing from 1826.\nEarly uses\nHello, with that spelling, was used in publications in the U.S. as early as the 18 October 1826 edition of the Norwich Courier of Norwich, Connecticut. Another early use was an 1833 American book called The Sketches and Eccentricities of Col. David Crockett, of West Tennessee, which was reprinted that same year in The London Literary Gazette. The word was extensively used in literature by the 1860s.\nEtymology\nAccording to the Oxford English Dictionary, hello is an alteration of hallo, hollo, which came from Old High German \"halâ, holâ, emphatic imperative of halôn, holôn to fetch, used especially in hailing a ferryman\". It also connects the development of hello to the influence of an earlier form, holla, whose origin is in the French holà (roughly, 'whoa there!', from French là 'there'). As in addition to hello, halloo, hallo, hollo, hullo and (rarely) hillo also exist as variants or related words, the word can be spelt using any of all five vowels.\nBill Bryson asserts in his book Mother Tongue that \"hello\" is a contraction of the Old English phrase hál béo þu (\"Hale be thou\", or \"whole be thou\", meaning a wish for good health; cf. \"goodbye\" which is a contraction of \"God be with ye\")."]], "1_corpus": "wikipedia"}