Scheduled Commit
data/retrieval_individual-d2470a18-3fc5-4a21-86ba-77a6dda751e0.jsonl
CHANGED
@@ -16,3 +16,5 @@
{"tstamp": 1725871967.0727, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1725871964.5289, "finish": 1725871967.0727, "ip": "", "conv_id": "313d24c69fa4482e9dcfdb2b6a512ad8", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "Find me a planet near α 23h 06m 29.368s and δ −05° 02′ 29.04″ that could be habitable.", "output": [["Find me a planet near α 23h 06m 29.368s and δ −05° 02′ 29.04″ that could be habitable.", "Habitable zone\n\nK2-18b is an exoplanet 124 light-years away, orbiting in the habitable zone of the K2-18, a red dwarf. This planet is significant for water vapor found in its atmosphere; this was announced on September 17, 2019.\nIn September 2020, astronomers identified 24 superhabitable planet (planets better than Earth) contenders, from among more than 4000 confirmed exoplanets at present, based on astrophysical parameters, as well as the natural history of known life forms on the Earth.\nHabitability outside the HZ\nLiquid-water environments have been found to exist in the absence of atmospheric pressure and at temperatures outside the HZ temperature range. For example, Saturn's moons Titan and Enceladus and Jupiter's moons Europa and Ganymede, all of which are outside the habitable zone, may hold large volumes of liquid water in subsurface oceans.\nOutside the HZ, tidal heating and radioactive decay are two possible heat sources that could contribute to the existence of liquid water. Abbot and Switzer (2011) put forward the possibility that subsurface water could exist on rogue planets as a result of radioactive decay-based heating and insulation by a thick surface layer of ice.\nWith some theorising that life on Earth may have actually originated in stable, subsurface habitats, it has been suggested that it may be common for wet subsurface extraterrestrial habitats such as these to 'teem with life'. On Earth itself, living organisms may be found more than below the surface."]], "corpus": "wikipedia"}
{"tstamp": 1725873568.6716, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1725873567.4445, "finish": 1725873568.6716, "ip": "", "conv_id": "7c455e60baa34d00809519b151622546", "model_name": "text-embedding-004", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "corpus": "arxiv"}
{"tstamp": 1725873568.6716, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1725873567.4445, "finish": 1725873568.6716, "ip": "", "conv_id": "9e1cbce0dce2498ca7c1fe1ffdcbf508", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "corpus": "arxiv"}
+{"tstamp": 1725876082.394, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1725876082.0276, "finish": 1725876082.394, "ip": "", "conv_id": "95d6e5eff616405ead0f9025ac669fcc", "model_name": "text-embedding-004", "prompt": "performance of embedding", "output": [["performance of embedding", "Title: Enhancing Embedding Performance through Large Language Model-based Text Enrichment and Rewriting\n\nAbstract: Embedding models are crucial for various natural language processing tasks but can be limited by factors such as limited vocabulary, lack of context, and grammatical errors. This paper proposes a novel approach to improve embedding performance by leveraging large language models (LLMs) to enrich and rewrite input text before the embedding process. By utilizing ChatGPT 3.5 to provide additional context, correct inaccuracies, and incorporate metadata, the proposed method aims to enhance the utility and accuracy of embedding models. The effectiveness of this approach is evaluated on three datasets: Banking77Classification, TwitterSemEval 2015, and Amazon Counter-factual Classification. Results demonstrate significant improvements over the baseline model on the TwitterSemEval 2015 dataset, with the best-performing prompt achieving a score of 85.34 compared to the previous best of 81.52 on the Massive Text Embedding Benchmark (MTEB) Leaderboard. However, performance on the other two datasets was less impressive, highlighting the importance of considering domain-specific characteristics. The findings suggest that LLM-based text enrichment has shown promising results to improve embedding performance, particularly in certain domains. Hence, numerous limitations in the process of embedding can be avoided."]], "corpus": "arxiv"}
+{"tstamp": 1725876082.394, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1725876082.0276, "finish": 1725876082.394, "ip": "", "conv_id": "9fb2642e0b3a4ecca2190f260bebfa19", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "performance of embedding", "output": [["performance of embedding", "Title: Performance Embeddings: A Similarity-based Approach to Automatic Performance Optimization\n\nAbstract: Performance optimization is an increasingly challenging but often repetitive task. While each platform has its quirks, the underlying code transformations rely on data movement and computational characteristics that recur across applications. This paper proposes to leverage those similarities by constructing an embedding space for subprograms. The continuous space captures both static and dynamic properties of loop nests via symbolic code analysis and performance profiling, respectively. Performance embeddings enable direct knowledge transfer of performance tuning between applications, which can result from autotuning or tailored improvements. We demonstrate this transfer tuning approach on case studies in deep neural networks, dense and sparse linear algebra compositions, and numerical weather prediction stencils. Transfer tuning reduces the search complexity by up to four orders of magnitude and outperforms the MKL library in sparse-dense matrix multiplication. The results exhibit clear correspondences between program characteristics and optimizations, outperforming prior specialized state-of-the-art approaches and generalizing beyond their capabilities."]], "corpus": "arxiv"}
data/sts_battle-d2470a18-3fc5-4a21-86ba-77a6dda751e0.jsonl
ADDED
@@ -0,0 +1 @@
+{"tstamp": 1725876170.0737, "task_type": "sts", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "f4baf0ee2d4b4f60ae3c41ef10f3f9ba", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_txt0": "Incoming strays increased by 19 percent over the last three years and is expected to continue to climb.", "0_txt1": "The last three years saw a 19 percent increased in incoming strays.", "0_txt2": "Recent years have seen a reduction in the number of incoming strays.", "0_output": "", "1_conv_id": "1446a504ceb5495e814e5616ef4c2491", "1_model_name": "intfloat/multilingual-e5-large-instruct", "1_txt0": "Incoming strays increased by 19 percent over the last three years and is expected to continue to climb.", "1_txt1": "The last three years saw a 19 percent increased in incoming strays.", "1_txt2": "Recent years have seen a reduction in the number of incoming strays.", "1_output": ""}
data/sts_individual-d2470a18-3fc5-4a21-86ba-77a6dda751e0.jsonl
ADDED
@@ -0,0 +1,2 @@
+{"tstamp": 1725876134.3023, "task_type": "sts", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1725876130.221, "finish": 1725876134.3023, "ip": "", "conv_id": "f4baf0ee2d4b4f60ae3c41ef10f3f9ba", "model_name": "intfloat/e5-mistral-7b-instruct", "txt0": "Incoming strays increased by 19 percent over the last three years and is expected to continue to climb.", "txt1": "The last three years saw a 19 percent increased in incoming strays.", "txt2": "Recent years have seen a reduction in the number of incoming strays.", "output": ""}
+{"tstamp": 1725876134.3023, "task_type": "sts", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1725876130.221, "finish": 1725876134.3023, "ip": "", "conv_id": "1446a504ceb5495e814e5616ef4c2491", "model_name": "intfloat/multilingual-e5-large-instruct", "txt0": "Incoming strays increased by 19 percent over the last three years and is expected to continue to climb.", "txt1": "The last three years saw a 19 percent increased in incoming strays.", "txt2": "Recent years have seen a reduction in the number of incoming strays.", "output": ""}
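
Note: each line in the .jsonl files above is a standalone JSON object, so the records added by this commit can be inspected with a short Python sketch like the one below. The local file path and the choice of fields to print are illustrative assumptions, not part of this commit; the field names (task_type, model_name, prompt) are taken from the records shown above.

import json

# Hypothetical local path to one of the files touched by this commit.
path = "data/retrieval_individual-d2470a18-3fc5-4a21-86ba-77a6dda751e0.jsonl"

with open(path, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)  # one JSON object per line
        # Print a compact summary of each logged retrieval request.
        print(record["task_type"], record["model_name"], record["prompt"][:60])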