Scheduled Commit
data/retrieval_individual-a1f9e5e7-e4fc-4780-ab0e-fe41defd9698.jsonl
CHANGED
@@ -202,3 +202,5 @@
202
{"tstamp": 1724811637.5278, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1724811635.3769, "finish": 1724811637.5278, "ip": "", "conv_id": "22bf22080da24c80b0c56f0297ad8080", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: The classical double copy of non-singular black holes\n\nAbstract: We apply the classical double copy procedure to a class of regular, non-singular black hole solutions. We give several examples, paying particular attention to a string-theory-corrected black hole solution emerging from T-duality. Non-perturbative stringy corrections introduce an ultraviolet (UV) zero-point length cutoff which results in non-singular black hole spacetimes. Apart from the UV regulator, the solution is equivalent to the Bardeen black hole spacetime. We extend this solution to include an asymptotic de Sitter background. All Yang-Mills field theory quantities associated with the double copy are well-behaved and finite for all values of parameters. We present a thorough analysis of the black hole horizon structure, additionally uncovering a simple yet new connection between horizons on the gravity side and electric fields on the gauge theory side of the double copy."]], "corpus": "arxiv"}
203
{"tstamp": 1724828216.5662, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1724828214.2448, "finish": 1724828216.5662, "ip": "", "conv_id": "2432690c72b94bf38ab4f4c511cd66b3", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "when did university stop being free in australia", "output": [["when did university stop being free in australia", "Tertiary education in Australia\n\nBy the mid-1980s, it became the consensus of both major parties that the concept of 'free' tertiary education in Australia was untenable due to the increasing participation rate. Ironically, a subsequent Labor Government (the Bob Hawke/Paul Keating Government) was responsible for gradually re-introducing fees for university study. In a relatively innovative move, the method by which fees were re-introduced proved to be a system accepted by both Federal political parties and consequently is still in place today. The system is known as the Higher Education Contribution Scheme (HECS) and enables students to defer payment of fees until after they commence professional employment, and after their income exceeds a threshold level – at that point, the fees are automatically deducted through income tax.\nBy the late 1980s, the Australian tertiary education system was still a three-tier system, composed of:\nAll tertiary institutions established as universities by acts of parliament (e.g. Sydney, Monash, La Trobe, Griffith)\nA collection of institutes of technology (such as the Royal Melbourne Institute of Technology (RMIT))\nA collection of colleges of Technical and Further Education (TAFE)\nHowever, by this point, the roles of the universities, institutes of technology and the CSIRO had also become blurred. Institutes of technology had moved from their traditional role of undergraduate teaching and industry-consulting towards conducting pure and applied research. They also had the ability to award degrees through to Doctor of Philosophy (PhD) level."]], "corpus": "wikipedia"}
204
{"tstamp": 1724828216.5662, "task_type": "retrieval", "type": "chat", "model": "text-embedding-3-large", "gen_params": {}, "start": 1724828214.2448, "finish": 1724828216.5662, "ip": "", "conv_id": "803eafc8bca347f5bc423819a29873f3", "model_name": "text-embedding-3-large", "prompt": "when did university stop being free in australia", "output": [["when did university stop being free in australia", "Tertiary education in Australia\n\nIn 1958, Monash University was established. States Grants (Universities) Act 1958 allocated funding to States for capital and recurrent expenditure in universities for the triennial 1958 to 1960. In 1959, Australian Universities Commission Act of 1959 was established the AUC as a statutory body to advise the Commonwealth Government on university matters. Between 1958 and 1960 there was more than a 13% annual increase in university enrolments. By 1960, there were 53,000 students in ten universities. There was a spate of universities established in the 1960s and 70s: Macquarie University (1964), La Trobe University (1964), the University of Newcastle (1965), Flinders University (1966), James Cook University (1970), Griffith University (1971), Deakin University (1974), Murdoch University (1975), and the University of Wollongong (1975). By 1960, the number of students enrolled in Australian Universities had reached 53,000. By 1975 there were 148,000 students in 19 universities.\nUntil 1973, university tuition was funded either through Commonwealth scholarships, which were based on merit, or through fees. Tertiary education in Australia was structured into three sectors:\nUniversities\nInstitutes of technology (a hybrid between a university and a technical college)\nTechnical colleges\nDuring the early 1970s, there was a significant push to make tertiary education in Australia more accessible to working and middle-class people. In 1973, the Whitlam Labor Government abolished university fees. This increased the university participation rate.\nIn 1974, the Commonwealth assumed full responsibility for funding higher education (i.e., universities and Colleges of Advanced Education (CAEs)) and established the Commonwealth Tertiary Education Commission (CTEC), which had an advisory role and responsibility for allocating government funding among universities. However, in 1975, in the context of federal political crisis and economic recession, triennial funding of universities was suspended. Demand remained with growth directed to CAEs and State-controlled TAFE colleges."]], "corpus": "wikipedia"}
205 +
{"tstamp": 1724837449.3524, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1724837444.2122, "finish": 1724837449.3524, "ip": "", "conv_id": "966c0753f57c4f38b4b8445a91797bb8", "model_name": "GritLM/GritLM-7B", "prompt": "Research focusing on enhancing federated learning efficiency through dynamic sampling and parameter selection.", "output": [["Research focusing on enhancing federated learning efficiency through dynamic sampling and parameter selection.", "Title: Dynamic Sampling and Selective Masking for Communication-Efficient Federated Learning\n\nAbstract: Federated learning (FL) is a novel machine learning setting that enables on-device intelligence via decentralized training and federated optimization. Deep neural networks' rapid development facilitates the learning techniques for modeling complex problems and emerges into federated deep learning under the federated setting. However, the tremendous amount of model parameters burdens the communication network with a high load of transportation. This paper introduces two approaches for improving communication efficiency by dynamic sampling and top-$k$ selective masking. The former controls the fraction of selected client models dynamically, while the latter selects parameters with top-$k$ largest values of difference for federated updating. Experiments on convolutional image classification and recurrent language modeling are conducted on three public datasets to show our proposed methods' effectiveness."]], "corpus": "arxiv"}
206 +
{"tstamp": 1724837449.3524, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1724837444.2122, "finish": 1724837449.3524, "ip": "", "conv_id": "2c317f8d24b04590880899c2bd809085", "model_name": "BM25", "prompt": "Research focusing on enhancing federated learning efficiency through dynamic sampling and parameter selection.", "output": [["Research focusing on enhancing federated learning efficiency through dynamic sampling and parameter selection.", "Title: Dynamic Sampling and Selective Masking for Communication-Efficient Federated Learning\n\nAbstract: Federated learning (FL) is a novel machine learning setting that enables on-device intelligence via decentralized training and federated optimization. Deep neural networks' rapid development facilitates the learning techniques for modeling complex problems and emerges into federated deep learning under the federated setting. However, the tremendous amount of model parameters burdens the communication network with a high load of transportation. This paper introduces two approaches for improving communication efficiency by dynamic sampling and top-$k$ selective masking. The former controls the fraction of selected client models dynamically, while the latter selects parameters with top-$k$ largest values of difference for federated updating. Experiments on convolutional image classification and recurrent language modeling are conducted on three public datasets to show our proposed methods' effectiveness."]], "corpus": "arxiv"}
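Each line of this file is a self-contained JSON record of one retrieval event: `prompt` is the user query, `output` is a list of `[query, retrieved_text]` pairs, `start`/`finish` are Unix timestamps, and `corpus` names the searched collection. A minimal sketch for reading these records, assuming only the schema visible in the rows above (the loader function is illustrative, not part of this repository):

```python
import json

def load_records(path):
    """Yield one dict per non-empty JSONL line."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

# Illustrative usage against the file changed in this commit.
for rec in load_records("data/retrieval_individual-a1f9e5e7-e4fc-4780-ab0e-fe41defd9698.jsonl"):
    query, retrieved = rec["output"][0]      # [query, retrieved passage] pair
    latency = rec["finish"] - rec["start"]   # seconds, from the logged timestamps
    print(f"{rec['model_name']} ({rec['corpus']}): {latency:.2f}s for {query[:40]!r}")
```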
data/retrieval_side_by_side-a1f9e5e7-e4fc-4780-ab0e-fe41defd9698.jsonl
CHANGED
@@ -8,3 +8,4 @@
8
{"tstamp": 1724741142.2506, "task_type": "retrieval", "type": "tievote", "models": ["GritLM/GritLM-7B", "BAAI/bge-large-en-v1.5"], "ip": "", "0_conv_id": "7ea0ae677b484f6d80ec3218a6363a37", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "0_corpus": "arxiv", "1_conv_id": "8b88dac6ec7e4f8b9e52654a9e1e848e", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "1_corpus": "arxiv"}
9
{"tstamp": 1724741244.9324, "task_type": "retrieval", "type": "tievote", "models": ["GritLM/GritLM-7B", "BAAI/bge-large-en-v1.5"], "ip": "", "0_conv_id": "4222e5ac65f04a3a930fcd58da8eaf03", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "Paper analyzing the performance of BERT vs classical machine learning techniques for NLP", "0_output": [["Paper analyzing the performance of BERT vs classical machine learning techniques for NLP", "Title: Comparing BERT against traditional machine learning text classification\n\nAbstract: The BERT model has arisen as a popular state-of-the-art machine learning model in the recent years that is able to cope with multiple NLP tasks such as supervised text classification without human supervision. Its flexibility to cope with any type of corpus delivering great results has make this approach very popular not only in academia but also in the industry. Although, there are lots of different approaches that have been used throughout the years with success. In this work, we first present BERT and include a little review on classical NLP approaches. Then, we empirically test with a suite of experiments dealing different scenarios the behaviour of BERT against the traditional TF-IDF vocabulary fed to machine learning algorithms. Our purpose of this work is to add empirical evidence to support or refuse the use of BERT as a default on NLP tasks. Experiments show the superiority of BERT and its independence of features of the NLP problem such as the language of the text adding empirical evidence to use BERT as a default technique to be used in NLP problems."]], "0_corpus": "arxiv", "1_conv_id": "ff547e29046c46f383e1e758da91d071", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "Paper analyzing the performance of BERT vs classical machine learning techniques for NLP", "1_output": [["Paper analyzing the performance of BERT vs classical machine learning techniques for NLP", "Title: Comparing BERT against traditional machine learning text classification\n\nAbstract: The BERT model has arisen as a popular state-of-the-art machine learning model in the recent years that is able to cope with multiple NLP tasks such as supervised text classification without human supervision. Its flexibility to cope with any type of corpus delivering great results has make this approach very popular not only in academia but also in the industry. Although, there are lots of different approaches that have been used throughout the years with success. In this work, we first present BERT and include a little review on classical NLP approaches. Then, we empirically test with a suite of experiments dealing different scenarios the behaviour of BERT against the traditional TF-IDF vocabulary fed to machine learning algorithms. Our purpose of this work is to add empirical evidence to support or refuse the use of BERT as a default on NLP tasks. Experiments show the superiority of BERT and its independence of features of the NLP problem such as the language of the text adding empirical evidence to use BERT as a default technique to be used in NLP problems."]], "1_corpus": "arxiv"}
10
{"tstamp": 1724791264.1923, "task_type": "retrieval", "type": "leftvote", "models": ["GritLM/GritLM-7B", "BAAI/bge-large-en-v1.5"], "ip": "", "0_conv_id": "b24a1feed0e14c1aab5f90319457f7de", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "how does montgomery died in the island of dr moreau", "0_output": [["how does montgomery died in the island of dr moreau", "The Island of Doctor Moreau\n\nAs time passes, Prendick becomes inured to the grotesqueness of the Beast Folk. However one day, the half-finished puma woman rips free of her restraints and escapes from the lab. Dr. Moreau pursues her, but the two end up fighting each other, leading to their mutual deaths. Montgomery breaks down and decides to share his alcohol with the Beast Folk. Prendick resolves to leave the island, but later hears a commotion outside in which Montgomery, his servant M'ling, and the Sayer of the Law die after a scuffle with the Beast Folk. At the same time, the compound burns down because Prendick has knocked over a lamp. With no chance of saving any of the provisions stored in the enclosure, Prendick realizes that Montgomery has also destroyed the only boats on the island during the night.\nPrendick lives with the Beast Folk on the island for months after the deaths of Moreau and Montgomery. As the time goes by, the Beast Folk increasingly revert to their original animal instincts, beginning to hunt the island's rabbits, returning to walking on all fours, and leaving their shared living areas for the wild. They cease to follow Prendick's instructions. Eventually, the Hyena-Swine kills Prendick's faithful Dog-Man companion created from a St. Bernard. With help from the Sloth Creature, Prendick shoots the Hyena-Swine in self-defence."]], "0_corpus": "wikipedia", "1_conv_id": "b2b61092525743a6862c78927dcafeb2", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "how does montgomery died in the island of dr moreau", "1_output": [["how does montgomery died in the island of dr moreau", "The Island of Dr. Moreau (1977 film)\n\nPlot In 1911, three men are floating in a lifeboat in the middle of the Pacific Ocean following the wreck of the ship Lady Vain. One dies at sea. After seventeen days at sea, the two remaining survivors land on an island, where one is killed by animals. The last survivor, Lady Vain'''s engineer Braddock, is nursed back to health in the compound governed by the scientist \"Dr. Moreau\". The other inhabitants of the compound include Moreau's associate, mercenary Montgomery; Moreau's mute, misshapen servant, M'Ling; and Maria, a young woman. Moreau warns Braddock not to leave the compound at night.\nMoreau welcomes Braddock as an honored guest and shares his library. One day, Braddock witnesses Moreau and Montgomery manhandling a chained creature who is not human; the island is home to more than just this one. They are the hybrid products of Moreau's experiments upon various species of wild animal. Moreau is injecting them with a serum containing human genetic material. At times, the human/animal hybrids still have their animal instincts and do not behave like a human. This sometimes enrages Moreau, who is left feeling that his experiments have not worked successfully. That night as Braddock is reeling from learning the truth, Maria goes to his room where they have sex. This is intended by Moreau."]], "1_corpus": "wikipedia"}
11 +
{"tstamp": 1724837475.3974, "task_type": "retrieval", "type": "tievote", "models": ["GritLM/GritLM-7B", "BM25"], "ip": "", "0_conv_id": "966c0753f57c4f38b4b8445a91797bb8", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "Research focusing on enhancing federated learning efficiency through dynamic sampling and parameter selection.", "0_output": [["Research focusing on enhancing federated learning efficiency through dynamic sampling and parameter selection.", "Title: Dynamic Sampling and Selective Masking for Communication-Efficient Federated Learning\n\nAbstract: Federated learning (FL) is a novel machine learning setting that enables on-device intelligence via decentralized training and federated optimization. Deep neural networks' rapid development facilitates the learning techniques for modeling complex problems and emerges into federated deep learning under the federated setting. However, the tremendous amount of model parameters burdens the communication network with a high load of transportation. This paper introduces two approaches for improving communication efficiency by dynamic sampling and top-$k$ selective masking. The former controls the fraction of selected client models dynamically, while the latter selects parameters with top-$k$ largest values of difference for federated updating. Experiments on convolutional image classification and recurrent language modeling are conducted on three public datasets to show our proposed methods' effectiveness."]], "0_corpus": "arxiv", "1_conv_id": "2c317f8d24b04590880899c2bd809085", "1_model_name": "BM25", "1_prompt": "Research focusing on enhancing federated learning efficiency through dynamic sampling and parameter selection.", "1_output": [["Research focusing on enhancing federated learning efficiency through dynamic sampling and parameter selection.", "Title: Dynamic Sampling and Selective Masking for Communication-Efficient Federated Learning\n\nAbstract: Federated learning (FL) is a novel machine learning setting that enables on-device intelligence via decentralized training and federated optimization. Deep neural networks' rapid development facilitates the learning techniques for modeling complex problems and emerges into federated deep learning under the federated setting. However, the tremendous amount of model parameters burdens the communication network with a high load of transportation. This paper introduces two approaches for improving communication efficiency by dynamic sampling and top-$k$ selective masking. The former controls the fraction of selected client models dynamically, while the latter selects parameters with top-$k$ largest values of difference for federated updating. Experiments on convolutional image classification and recurrent language modeling are conducted on three public datasets to show our proposed methods' effectiveness."]], "1_corpus": "arxiv"}
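Each record in this file logs one side-by-side comparison: `models` lists the two contenders in left/right order, the `0_`/`1_`-prefixed fields hold each side's conversation, and `type` encodes the verdict (`leftvote` and `tievote` appear in the rows above; `rightvote` is an assumed counterpart). A hedged sketch of tallying wins from this log under that assumption (the helper name is illustrative):

```python
import json
from collections import Counter

def tally_votes(path):
    """Count wins per model from a side-by-side vote log."""
    wins = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            rec = json.loads(line)
            left, right = rec["models"]  # [left model, right model]
            if rec["type"] == "leftvote":
                wins[left] += 1
            elif rec["type"] == "rightvote":
                wins[right] += 1
            # tievote (and any other value) credits neither side here
    return wins

print(tally_votes("data/retrieval_side_by_side-a1f9e5e7-e4fc-4780-ab0e-fe41defd9698.jsonl"))
```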