Scheduled Commit
data/retrieval_individual-031dca12-9a97-4178-b56e-afd2ee86cecb.jsonl
CHANGED
@@ -70,3 +70,5 @@
{"tstamp": 1735867822.0001, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1735867821.7048, "finish": 1735867822.0001, "ip": "", "conv_id": "a2388458dae64dc2a436d25afca4a5b0", "model_name": "BM25", "prompt": "docker-compose", "output": [["docker-compose", "Stackoverflow Stackexchange\n\nQ: docker compose inside docker in a docker I am pretty new to docker and was following the documentation found here, trying deploy several containers inside dind using docker-compose 1.14.0 I get the following \ndocker run -v /home/dudarev/compose/:/compose/ --privileged docker:dind /compose/docker-compose\n/usr/local/bin/dockerd-entrypoint.sh: exec: line 21: /compose/docker-compose: not found\n\nDid I miss something?\n\nA: Add docker-compose installation to your Dockerfile before executing docker run. \nFor example, if you have an Ubuntu docker, add to your Dockerfile:\nRUN aptitude -y install docker-compose\nRUN ln -s /usr/local/bin/docker-compose /compose/docker-compose\n\nBecause it looks like if your entry-point looks up docker compose in /compose folder, while docker-compose is installed in /usr/local/bin by default.\n\nIf you want a concrete docker-compose version (for example 1.20.0-rc2):\n\nRUN curl -L https://github.com/docker/compose/releases/download/1.20.0-rc2/docker-compose-`uname -s`-`uname -m` -o /compose/docker-compose\nRUN chmod +x /compose/docker-compose\n\n"]], "corpus": "stackexchange"}
{"tstamp": 1735880951.8511, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1735880949.6514, "finish": 1735880951.8511, "ip": "", "conv_id": "97b2d252ce924e6eb7e236b115fa5967", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "Artificial general intelligence\n\nMany interdisciplinary approaches (e.g. cognitive science, computational intelligence, and decision making) consider additional traits such as imagination (the ability to form novel mental images and concepts) and autonomy.\nComputer-based systems that exhibit many of these capabilities exist (e.g. see computational creativity, automated reasoning, decision support system, robot, evolutionary computation, intelligent agent). However, no consensus holds that modern AI systems possess them to an adequate degree.\nPhysical traits\nOther capabilities are considered desirable in intelligent systems, as they may affect intelligence or aid in its expression. These include:\nthe ability to sense (e.g. see, hear, etc.), and\nthe ability to act (e.g. move and manipulate objects, change location to explore, etc.)\nThis includes the ability to detect and respond to hazard.\nTests for human-level AGI\nSeveral tests meant to confirm human-level AGI have been considered, including:\nThe Turing Test (Turing)\nA machine and a human both converse unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it. The AI Eugene Goostman, imitating a 13-year-old boy, achieved Turing's estimate of convincing 33% of judges that it was human in 2014.\nThe Robot College Student Test (Goertzel)"]], "corpus": "wikipedia"}
{"tstamp": 1735880951.8511, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1735880949.6514, "finish": 1735880951.8511, "ip": "", "conv_id": "2b52de57f0b04e9b8ed6df95818c67cd", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "Which test was devised to determine whether robots can think?", "output": [["Which test was devised to determine whether robots can think?", "History of robots\n\nA similar tale can be found in the near contemporary Indian Buddhist Jataka tales, but here the intricacy of the automation does not match that of Master Yan.Prior to the introduction of Buddhism in the Common Era, Chinese philosophers did not seriously consider the distinction between appearance and reality. The Liezi rebuts Buddhist philosophies and likens human creative powers to that of the Creator. The Indian Lokapannatti, a collection of cycles and lores produced in the 11th or 12th century AD, tells the story of how an army of automated soldiers (bhuta vahana yanta or \"Spirit movement machines\") were crafted to protect the relics of Buddha in a secret stupa. The plans for making such humanoid automatons were stolen from the kingdom of Rome, a generic term for the Greco-Roman-Byzantine culture. According to the Lokapannatti, the Yavanas (\"Greek-speakers\") used the automatons to carry out trade and farming, but also captured and executed criminals. Roman automation makers who left the kingdom were pursued and killed by the automatons. According to the Lokapannatti, the emperor Asoka hears the story of the secret stupa and sets out to find it. "]], "corpus": "wikipedia"}
+{"tstamp": 1735886760.7115, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1735886759.8982, "finish": 1735886760.7115, "ip": "", "conv_id": "d8edfd788d4b4e868a1f332d0a137784", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "મહાભારત અનુસાર પાંડવો કોની સાથે લડ્યા હતા?", "output": [["મહાભારત અનુસાર પાંડવો કોની સાથે લડ્યા હતા?", "Karna Parva\n\nThen Bhima headed to kill Dussasana. Bhima engaged in duel with Dushasana. First they fought by means of archery. Bhima broke Dushasana's bow. Dushasana took sword, Bhima cut off sword. Bhima cut all weapons in similar manner till Dushasana took mace. When Dushasana took up mace, Bhima also took his mace. Soon both started mace duel with each other. Bhima beat Dushasana many times with his mace. Bhima broke Dushyasana's mace. Both of them began wrestling. Finally Bhima defeated and brutally killed Dussasana in front of Duryodhana. Bhima killed Dussashana by separating and detaching his hands from the body. Bhima beat Dussashana in heart region. Bhima squeezed blood from Dussasana's heart and dressed Draupadi's hair. Bhima also drank off remaining blood and fulfilled his oath.\nMeanwhile Karna wreaking the havoc among Pandava Army. Many of the warriors running away from the wrath of Karna. When all five Pandavas attacking alone Karna, Karna defeated all the Pandavas single handedly and make them retreat. Karna then started again destroying the Pandava's troops. Nakula, Sahadeva, Satyaki and Upapandavas attacked Karna but defeated them. Nakula engaged the battle with Karna however Karna defeated him and destroyed his chariot but spared him.\nDuryodhana and Yudhisthira engaged in the battle. Although Duryodhana showed his might, Yudhisthira defeated him. Kripa, Karna, Ashwatthama come to rescue Duryodhana. Ashwatthama defeated Yudhisthira and Satyaki come to attack Ashwatthama. When Satyaki broken the Ashwatthama's bow, Karna attacked Satyaki, although Satyaki gave a tough fight, Karna defeats him. To save Satyaki, Arjuna attacked Karna. However Karna defeated Arjuna and injured both Arjuna and Krishna. Then to rescue Arjuna, other four Pandavas, Satyaki and fourteen Pandava warriors along with their armies attacked an alone Karna like how a bunch of hyenas attacked an alone lion. However Karna checked them off. Both Karna and Arjuna fought vigorously but Karna withdraws the battle due to sunset."]], "corpus": "wikipedia"}
+{"tstamp": 1735886760.7115, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1735886759.8982, "finish": 1735886760.7115, "ip": "", "conv_id": "48ab50b9b0754ec9bb1ce10ced81bad1", "model_name": "text-embedding-3-large", "prompt": "મહાભારત અનુસાર પાંડવો કોની સાથે લડ્યા હતા?", "output": [["મહાભારત અનુસાર પાંડવો કોની સાથે લડ્યા હતા?", "Kunti\n\nAfter the death of Pandu and Madri, Kunti took care of all five Pandava children taking them back to Hastinapur. Dhritrashtra's sons never liked them. During their childhood, Duryodhana poisoned and tried to kill Bhima but he was saved. Kunti was hurt by this but was consoled by Vidura. Later the Kuru princes were sent to train under Drona.\nHiding\nAfter the princes finished their training, they returned to Hastinapura. After some time Duryodhana and his maternal uncle Shakuni tried to burn Pandavas alive along with Kunti for which they built the palace out of lac (Lakshagriha) in a village named Varanāvata. The Pandavas, though, managed to escape the house of lac with the help of Vidura through a secret tunnel.\nAfter surviving from the Lakshagriha Kunti and five Pandavas lived in Ekachakra village. During their stay, Kunti and the Pandavas become aware of a demon, Bakasura, who ate people. Villagers had to send one member of their family and food to Bakasura, who devour both. When Kunti heard the cries of a Brahmin – who had provided her and her son's shelter in Ekachakra, Kunti consoled him and suggested that instead of a Brahmin's family, her son Bhima would face the demon. Kunti engineered a plot where Bhima would be able to face and kill the demon. The powerful Bhima brought his might to the fore and defeated Bakasura."]], "corpus": "wikipedia"}
data/retrieval_side_by_side-031dca12-9a97-4178-b56e-afd2ee86cecb.jsonl
CHANGED
@@ -7,3 +7,4 @@
{"tstamp": 1735784180.2397, "task_type": "retrieval", "type": "leftvote", "models": ["nomic-ai/nomic-embed-text-v1.5", "BM25"], "ip": "", "0_conv_id": "049b65ef66f84fcca0c9db82d37afcae", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": " large-scale benchmark performance of text embeddings.", "0_output": [[" large-scale benchmark performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "0_corpus": "arxiv", "1_conv_id": "06ad5de35228497797340e5c60d68623", "1_model_name": "BM25", "1_prompt": " large-scale benchmark performance of text embeddings.", "1_output": [[" large-scale benchmark performance of text embeddings.", "Title: Text Embeddings by Weakly-Supervised Contrastive Pre-training\n\nAbstract: This paper presents E5, a family of state-of-the-art text embeddings that transfer well to a wide range of tasks. The model is trained in a contrastive manner with weak supervision signals from our curated large-scale text pair dataset (called CCPairs). E5 can be readily used as a general-purpose embedding model for any tasks requiring a single-vector representation of texts such as retrieval, clustering, and classification, achieving strong performance in both zero-shot and fine-tuned settings. We conduct extensive evaluations on 56 datasets from the BEIR and MTEB benchmarks. For zero-shot settings, E5 is the first model that outperforms the strong BM25 baseline on the BEIR retrieval benchmark without using any labeled data. When fine-tuned, E5 obtains the best results on the MTEB benchmark, beating existing embedding models with 40x more parameters."]], "1_corpus": "arxiv"}
{"tstamp": 1735784220.645, "task_type": "retrieval", "type": "rightvote", "models": ["nomic-ai/nomic-embed-text-v1.5", "BM25"], "ip": "", "0_conv_id": "647544fa18aa4abdbd4947d16e570bea", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "graphrag", "0_output": [["graphrag", "Title: GraphStorm: all-in-one graph machine learning framework for industry applications\n\nAbstract: Graph machine learning (GML) is effective in many business applications. However, making GML easy to use and applicable to industry applications with massive datasets remain challenging. We developed GraphStorm, which provides an end-to-end solution for scalable graph construction, graph model training and inference. GraphStorm has the following desirable properties: (a) Easy to use: it can perform graph construction and model training and inference with just a single command; (b) Expert-friendly: GraphStorm contains many advanced GML modeling techniques to handle complex graph data and improve model performance; (c) Scalable: every component in GraphStorm can operate on graphs with billions of nodes and can scale model training and inference to different hardware without changing any code. GraphStorm has been used and deployed for over a dozen billion-scale industry applications after its release in May 2023. It is open-sourced in Github: https://github.com/awslabs/graphstorm."]], "0_corpus": "arxiv", "1_conv_id": "5d9860a987f3427d9fb481d535d7623a", "1_model_name": "BM25", "1_prompt": "graphrag", "1_output": [["graphrag", "Title: From Local to Global: A Graph RAG Approach to Query-Focused Summarization\n\nAbstract: The use of retrieval-augmented generation (RAG) to retrieve relevant information from an external knowledge source enables large language models (LLMs) to answer questions over private and/or previously unseen document collections. However, RAG fails on global questions directed at an entire text corpus, such as \"What are the main themes in the dataset?\", since this is inherently a query-focused summarization (QFS) task, rather than an explicit retrieval task. Prior QFS methods, meanwhile, fail to scale to the quantities of text indexed by typical RAG systems. To combine the strengths of these contrasting methods, we propose a Graph RAG approach to question answering over private text corpora that scales with both the generality of user questions and the quantity of source text to be indexed. Our approach uses an LLM to build a graph-based text index in two stages: first to derive an entity knowledge graph from the source documents, then to pregenerate community summaries for all groups of closely-related entities. Given a question, each community summary is used to generate a partial response, before all partial responses are again summarized in a final response to the user. For a class of global sensemaking questions over datasets in the 1 million token range, we show that Graph RAG leads to substantial improvements over a na\\\"ive RAG baseline for both the comprehensiveness and diversity of generated answers. An open-source, Python-based implementation of both global and local Graph RAG approaches is forthcoming at https://aka.ms/graphrag."]], "1_corpus": "arxiv"}
{"tstamp": 1735813821.3167, "task_type": "retrieval", "type": "tievote", "models": ["GritLM/GritLM-7B", "BAAI/bge-large-en-v1.5"], "ip": "", "0_conv_id": "4d3c77760fbc45e29c805b8328732316", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "Which test was devised to determine whether robots can think?", "0_output": [["Which test was devised to determine whether robots can think?", "Turing test\n\nThe test was introduced by Turing in his 1950 paper \"Computing Machinery and Intelligence\" while working at the University of Manchester. It opens with the words: \"I propose to consider the question, 'Can machines think? Because \"thinking\" is difficult to define, Turing chooses to \"replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.\" Turing describes the new form of the problem in terms of a three-person game called the \"imitation game\", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: \"Are there imaginable digital computers which would do well in the imitation game?\" This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that \"machines can think\".\nSince Turing introduced his test, it has been both highly influential and widely criticized, and has become an important concept in the philosophy of artificial intelligence. Philosopher John Searle would comment on the Turing test in his Chinese room argument, a thought experiment that stipulates that a machine cannot have a \"mind\", \"understanding\", or \"consciousness\", regardless of how intelligently or human-like the program may make the computer behave. Searle criticizes Turing's test and claims it is insufficient to detect the presence of consciousness.\nHistory"]], "0_corpus": "wikipedia", "1_conv_id": "da87365cab72421987770a1bcd747b11", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "Which test was devised to determine whether robots can think?", "1_output": [["Which test was devised to determine whether robots can think?", "Artificial general intelligence\n\nMany interdisciplinary approaches (e.g. cognitive science, computational intelligence, and decision making) consider additional traits such as imagination (the ability to form novel mental images and concepts) and autonomy.\nComputer-based systems that exhibit many of these capabilities exist (e.g. see computational creativity, automated reasoning, decision support system, robot, evolutionary computation, intelligent agent). However, no consensus holds that modern AI systems possess them to an adequate degree.\nPhysical traits\nOther capabilities are considered desirable in intelligent systems, as they may affect intelligence or aid in its expression. These include:\nthe ability to sense (e.g. see, hear, etc.), and\nthe ability to act (e.g. move and manipulate objects, change location to explore, etc.)\nThis includes the ability to detect and respond to hazard.\nTests for human-level AGI\nSeveral tests meant to confirm human-level AGI have been considered, including:\nThe Turing Test (Turing)\nA machine and a human both converse unseen with a second human, who must evaluate which of the two is the machine, which passes the test if it can fool the evaluator a significant fraction of the time. 
Note: Turing does not prescribe what should qualify as intelligence, only that knowing that it is a machine should disqualify it. The AI Eugene Goostman, imitating a 13-year-old boy, achieved Turing's estimate of convincing 33% of judges that it was human in 2014.\nThe Robot College Student Test (Goertzel)"]], "1_corpus": "wikipedia"}
+{"tstamp": 1735886851.2016, "task_type": "retrieval", "type": "bothbadvote", "models": ["intfloat/multilingual-e5-large-instruct", "text-embedding-3-large"], "ip": "", "0_conv_id": "d8edfd788d4b4e868a1f332d0a137784", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "મહાભારત અનુસાર પાંડવો કોની સાથે લડ્યા હતા?", "0_output": [["મહાભારત અનુસાર પાંડવો કોની સાથે લડ્યા હતા?", "Karna Parva\n\nThen Bhima headed to kill Dussasana. Bhima engaged in duel with Dushasana. First they fought by means of archery. Bhima broke Dushasana's bow. Dushasana took sword, Bhima cut off sword. Bhima cut all weapons in similar manner till Dushasana took mace. When Dushasana took up mace, Bhima also took his mace. Soon both started mace duel with each other. Bhima beat Dushasana many times with his mace. Bhima broke Dushyasana's mace. Both of them began wrestling. Finally Bhima defeated and brutally killed Dussasana in front of Duryodhana. Bhima killed Dussashana by separating and detaching his hands from the body. Bhima beat Dussashana in heart region. Bhima squeezed blood from Dussasana's heart and dressed Draupadi's hair. Bhima also drank off remaining blood and fulfilled his oath.\nMeanwhile Karna wreaking the havoc among Pandava Army. Many of the warriors running away from the wrath of Karna. When all five Pandavas attacking alone Karna, Karna defeated all the Pandavas single handedly and make them retreat. Karna then started again destroying the Pandava's troops. Nakula, Sahadeva, Satyaki and Upapandavas attacked Karna but defeated them. Nakula engaged the battle with Karna however Karna defeated him and destroyed his chariot but spared him.\nDuryodhana and Yudhisthira engaged in the battle. Although Duryodhana showed his might, Yudhisthira defeated him. Kripa, Karna, Ashwatthama come to rescue Duryodhana. Ashwatthama defeated Yudhisthira and Satyaki come to attack Ashwatthama. When Satyaki broken the Ashwatthama's bow, Karna attacked Satyaki, although Satyaki gave a tough fight, Karna defeats him. To save Satyaki, Arjuna attacked Karna. However Karna defeated Arjuna and injured both Arjuna and Krishna. Then to rescue Arjuna, other four Pandavas, Satyaki and fourteen Pandava warriors along with their armies attacked an alone Karna like how a bunch of hyenas attacked an alone lion. However Karna checked them off. Both Karna and Arjuna fought vigorously but Karna withdraws the battle due to sunset."]], "0_corpus": "wikipedia", "1_conv_id": "48ab50b9b0754ec9bb1ce10ced81bad1", "1_model_name": "text-embedding-3-large", "1_prompt": "મહાભારત અનુસાર પાંડવો કોની સાથે લડ્યા હતા?", "1_output": [["મહાભારત અનુસાર પાંડવો કોની સાથે લડ્યા હતા?", "Kunti\n\nAfter the death of Pandu and Madri, Kunti took care of all five Pandava children taking them back to Hastinapur. Dhritrashtra's sons never liked them. During their childhood, Duryodhana poisoned and tried to kill Bhima but he was saved. Kunti was hurt by this but was consoled by Vidura. Later the Kuru princes were sent to train under Drona.\nHiding\nAfter the princes finished their training, they returned to Hastinapura. After some time Duryodhana and his maternal uncle Shakuni tried to burn Pandavas alive along with Kunti for which they built the palace out of lac (Lakshagriha) in a village named Varanāvata. The Pandavas, though, managed to escape the house of lac with the help of Vidura through a secret tunnel.\nAfter surviving from the Lakshagriha Kunti and five Pandavas lived in Ekachakra village. During their stay, Kunti and the Pandavas become aware of a demon, Bakasura, who ate people. Villagers had to send one member of their family and food to Bakasura, who devour both. When Kunti heard the cries of a Brahmin – who had provided her and her son's shelter in Ekachakra, Kunti consoled him and suggested that instead of a Brahmin's family, her son Bhima would face the demon. Kunti engineered a plot where Bhima would be able to face and kill the demon. The powerful Bhima brought his might to the fore and defeated Bakasura."]], "1_corpus": "wikipedia"}