Scheduled Commit
data/retrieval_individual-5b78c89f-244c-4bc4-9693-2c3c7acb6977.jsonl
CHANGED
@@ -314,3 +314,4 @@
314 |
{"tstamp": 1723175283.263, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1723175282.9443, "finish": 1723175283.263, "ip": "", "conv_id": "df6dd040a3394d9d93ce2b2b39f4f7c0", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "I am searching for identifying the bias in NLP models using deep neural networks.", "output": [["I am searching for identifying the bias in NLP models using deep neural networks.", "GPT-3\n\nSoftware models are trained to learn by using thousands or millions of examples in a \"structure... loosely based on the neural architecture of the brain\". One architecture used in natural language processing (NLP) is a neural network based on a deep learning model that was introduced in 2017—the transformer architecture. There are a number of NLP systems capable of processing, mining, organizing, connecting and contrasting textual input, as well as correctly answering questions.\nOn June 11, 2018, OpenAI researchers and engineers published a paper introducing the first generative pre-trained transformer (GPT)a type of generative large language model that is pre-trained with an enormous and diverse text corpus in datasets, followed by discriminative fine-tuning to focus on a specific task. GPT models are transformer-based deep-learning neural network architectures. Previously, the best-performing neural NLP models commonly employed supervised learning from large amounts of manually-labeled data, which made it prohibitively expensive and time-consuming to train extremely large language models. The first GPT model was known as \"GPT-1,\" and it was followed by \"GPT-2\" in February 2019. Created as a direct scale-up of its predecessor, GPT-2 had both its parameter count and dataset size increased by a factor of 10. It had 1.5 billion parameters, and was trained on a dataset of 8 million web pages.\nIn February 2020, Microsoft introduced its Turing Natural Language Generation (T-NLG), which they claimed was \"largest language model ever published at 17 billion parameters.\" It performed better than any other language model at a variety of tasks, including summarizing texts and answering questions."]], "corpus": "wikipedia"}
315 |
{"tstamp": 1723175779.6365, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1723175777.4894, "finish": 1723175779.6365, "ip": "", "conv_id": "3d7926945c184090b38fb21dcb82c31f", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "What is bias in text embeddings?", "output": [["What is bias in text embeddings?", "Title: An approximate threshold condition for a non-autonomous system: an application to a vector-borne infection\n\nAbstract: A non-autonomous system is proposed to model the seasonal pattern of dengue fever. We found that an approximate threshold condition for infection persistence describes all possible behavior of the system. As far as we know, the kind of analysis here proposed is entirely new. No precise mathematical theorems are demonstrated but we give enough numerical evidence to support the conclusions."]], "corpus": "arxiv"}
316 |
{"tstamp": 1723175940.0238, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1723175939.939, "finish": 1723175940.0238, "ip": "", "conv_id": "9e0fc2417b2e4cf8bc7046231e60f17e", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "What is the evaluation metric to measure the word embedding bias in NLP models?", "output": [["What is the evaluation metric to measure the word embedding bias in NLP models?", "Title: Chaos or Noise - Difficulties of a Distinction\n\nAbstract: In experiments, the dynamical behavior of systems is reflected in time series. Due to the finiteness of the observational data set it is not possible to reconstruct the invariant measure up to arbitrary fine resolution and arbitrary high embedding dimension. These restrictions limit our ability to distinguish between signals generated by different systems, such as regular, chaotic or stochastic ones, when analyzed from a time series point of view. We propose to classify the signal behavior, without referring to any specific model, as stochastic or deterministic on a certain scale of the resolution $\\epsilon$, according to the dependence of the $(\\epsilon,\\tau)$-entropy, $h(\\epsilon, \\tau)$, and of the finite size Lyapunov exponent, $\\lambda(\\epsilon)$, on $\\epsilon$."]], "corpus": "arxiv"}
317 | +
{"tstamp": 1723176708.8336, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1723176708.7374, "finish": 1723176708.8336, "ip": "", "conv_id": "9500f38e078443fa8b24657ad550db29", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "Extract the applications of Large Language Models in text processing.", "output": [["Extract the applications of Large Language Models in text processing.", "Title: Helical vs. fundamental solitons in optical fibers\n\nAbstract: We consider solitons in a nonlinear optical fiber with a single polarization in a region of parameters where it carries exactly two distinct modes, the fundamental one and the first-order helical mode. From the viewpoint of applications to dense-WDM communication systems, this opens way to double the number of channels carried by the fiber. Aside from that, experimental observation of helical (spinning) solitons and collisions between them and with fundamental solitons are issues of fundamental interest. We introduce a system of coupled nonlinear Schroedinger equations for fundamental and helical modes, which have nonstandard values of the cross-phase-modulation coupling constants, and investigate, analytically and numerically, results of \"complete\" and \"incomplete\" collisions between solitons carried by the two modes. We conclude that the collision-induced crosstalk is partly attenuated in comparison with the usual WDM system, which sometimes may be crucially important, preventing merger of the colliding solitons into a breather. The interaction between the two modes is found to be additionally strongly suppressed in comparison with that in the WDM system in the case when a dispersion-shifted or dispersion-compensated fiber is used."]], "corpus": "arxiv"}
data/retrieval_single_choice-5b78c89f-244c-4bc4-9693-2c3c7acb6977.jsonl
CHANGED
@@ -1 +1,3 @@
1 |
{"tstamp": 1723175816.5552, "task_type": "retrieval", "type": "flag", "models": "sentence-transformers/all-MiniLM-L6-v2", "ip": "", "conv_id": "3d7926945c184090b38fb21dcb82c31f", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "What is bias in text embeddings?", "output": [["What is bias in text embeddings?", "Title: An approximate threshold condition for a non-autonomous system: an application to a vector-borne infection\n\nAbstract: A non-autonomous system is proposed to model the seasonal pattern of dengue fever. We found that an approximate threshold condition for infection persistence describes all possible behavior of the system. As far as we know, the kind of analysis here proposed is entirely new. No precise mathematical theorems are demonstrated but we give enough numerical evidence to support the conclusions."]], "corpus": "arxiv"}
2 | +
{"tstamp": 1723176548.8827, "task_type": "retrieval", "type": "downvote", "models": "sentence-transformers/all-MiniLM-L6-v2", "ip": "", "conv_id": "9e0fc2417b2e4cf8bc7046231e60f17e", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "What is the evaluation metric to measure the word embedding bias in NLP models?", "output": [["What is the evaluation metric to measure the word embedding bias in NLP models?", "Title: Chaos or Noise - Difficulties of a Distinction\n\nAbstract: In experiments, the dynamical behavior of systems is reflected in time series. Due to the finiteness of the observational data set it is not possible to reconstruct the invariant measure up to arbitrary fine resolution and arbitrary high embedding dimension. These restrictions limit our ability to distinguish between signals generated by different systems, such as regular, chaotic or stochastic ones, when analyzed from a time series point of view. We propose to classify the signal behavior, without referring to any specific model, as stochastic or deterministic on a certain scale of the resolution $\\epsilon$, according to the dependence of the $(\\epsilon,\\tau)$-entropy, $h(\\epsilon, \\tau)$, and of the finite size Lyapunov exponent, $\\lambda(\\epsilon)$, on $\\epsilon$."]], "corpus": "arxiv"}
3 | +
{"tstamp": 1723176728.8275, "task_type": "retrieval", "type": "downvote", "models": "sentence-transformers/all-MiniLM-L6-v2", "ip": "", "conv_id": "9500f38e078443fa8b24657ad550db29", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "Extract the applications of Large Language Models in text processing.", "output": [["Extract the applications of Large Language Models in text processing.", "Title: Helical vs. fundamental solitons in optical fibers\n\nAbstract: We consider solitons in a nonlinear optical fiber with a single polarization in a region of parameters where it carries exactly two distinct modes, the fundamental one and the first-order helical mode. From the viewpoint of applications to dense-WDM communication systems, this opens way to double the number of channels carried by the fiber. Aside from that, experimental observation of helical (spinning) solitons and collisions between them and with fundamental solitons are issues of fundamental interest. We introduce a system of coupled nonlinear Schroedinger equations for fundamental and helical modes, which have nonstandard values of the cross-phase-modulation coupling constants, and investigate, analytically and numerically, results of \"complete\" and \"incomplete\" collisions between solitons carried by the two modes. We conclude that the collision-induced crosstalk is partly attenuated in comparison with the usual WDM system, which sometimes may be crucially important, preventing merger of the colliding solitons into a breather. The interaction between the two modes is found to be additionally strongly suppressed in comparison with that in the WDM system in the case when a dispersion-shifted or dispersion-compensated fiber is used."]], "corpus": "arxiv"}