Scheduled Commit
data/retrieval_individual-5b78c89f-244c-4bc4-9693-2c3c7acb6977.jsonl
CHANGED
@@ -313,3 +313,4 @@
313 |
{"tstamp": 1723175283.263, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1723175282.9443, "finish": 1723175283.263, "ip": "", "conv_id": "d07a8e0ac5d9401bbd81d7b9f9449ece", "model_name": "text-embedding-004", "prompt": "I am searching for identifying the bias in NLP models using deep neural networks.", "output": [["I am searching for identifying the bias in NLP models using deep neural networks.", "Algorithmic bias\n\nTechnical There have been several attempts to create methods and tools that can detect and observe biases within an algorithm. These emergent fields focus on tools which are typically applied to the (training) data used by the program rather than the algorithm's internal processes. These methods may also analyze a program's output and its usefulness and therefore may involve the analysis of its confusion matrix (or table of confusion). Explainable AI to detect algorithm Bias is a suggested way to detect the existence of bias in an algorithm or learning model. Using machine learning to detect bias is called, \"conducting an AI audit\", where the \"auditor\" is an algorithm that goes through the AI model and the training data to identify biases.\nEnsuring that an AI tool such as a classifier is free from bias is more difficult than just removing the sensitive information\nfrom its input signals, because this is typically implicit in other signals. For example, the hobbies, sports and schools attended\nby a job candidate might reveal their gender to the software, even when this is removed from the analysis. Solutions to this\nproblem involve ensuring that the intelligent agent does not have any information that could be used to reconstruct the protected\nand sensitive information about the subject, as first demonstrated in where a deep learning network was simultaneously trained to learn a task while at the same time being completely agnostic about the protected feature. A simpler method was proposed in the context of word embeddings, and involves removing information that is correlated with the protected characteristic."]], "corpus": "wikipedia"}
314 |
{"tstamp": 1723175283.263, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1723175282.9443, "finish": 1723175283.263, "ip": "", "conv_id": "df6dd040a3394d9d93ce2b2b39f4f7c0", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "I am searching for identifying the bias in NLP models using deep neural networks.", "output": [["I am searching for identifying the bias in NLP models using deep neural networks.", "GPT-3\n\nSoftware models are trained to learn by using thousands or millions of examples in a \"structure... loosely based on the neural architecture of the brain\". One architecture used in natural language processing (NLP) is a neural network based on a deep learning model that was introduced in 2017—the transformer architecture. There are a number of NLP systems capable of processing, mining, organizing, connecting and contrasting textual input, as well as correctly answering questions.\nOn June 11, 2018, OpenAI researchers and engineers published a paper introducing the first generative pre-trained transformer (GPT)a type of generative large language model that is pre-trained with an enormous and diverse text corpus in datasets, followed by discriminative fine-tuning to focus on a specific task. GPT models are transformer-based deep-learning neural network architectures. Previously, the best-performing neural NLP models commonly employed supervised learning from large amounts of manually-labeled data, which made it prohibitively expensive and time-consuming to train extremely large language models. The first GPT model was known as \"GPT-1,\" and it was followed by \"GPT-2\" in February 2019. Created as a direct scale-up of its predecessor, GPT-2 had both its parameter count and dataset size increased by a factor of 10. It had 1.5 billion parameters, and was trained on a dataset of 8 million web pages.\nIn February 2020, Microsoft introduced its Turing Natural Language Generation (T-NLG), which they claimed was \"largest language model ever published at 17 billion parameters.\" It performed better than any other language model at a variety of tasks, including summarizing texts and answering questions."]], "corpus": "wikipedia"}
315 |
{"tstamp": 1723175779.6365, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1723175777.4894, "finish": 1723175779.6365, "ip": "", "conv_id": "3d7926945c184090b38fb21dcb82c31f", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "What is bias in text embeddings?", "output": [["What is bias in text embeddings?", "Title: An approximate threshold condition for a non-autonomous system: an application to a vector-borne infection\n\nAbstract: A non-autonomous system is proposed to model the seasonal pattern of dengue fever. We found that an approximate threshold condition for infection persistence describes all possible behavior of the system. As far as we know, the kind of analysis here proposed is entirely new. No precise mathematical theorems are demonstrated but we give enough numerical evidence to support the conclusions."]], "corpus": "arxiv"}
316 | +
{"tstamp": 1723175940.0238, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1723175939.939, "finish": 1723175940.0238, "ip": "", "conv_id": "9e0fc2417b2e4cf8bc7046231e60f17e", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "What is the evaluation metric to measure the word embedding bias in NLP models?", "output": [["What is the evaluation metric to measure the word embedding bias in NLP models?", "Title: Chaos or Noise - Difficulties of a Distinction\n\nAbstract: In experiments, the dynamical behavior of systems is reflected in time series. Due to the finiteness of the observational data set it is not possible to reconstruct the invariant measure up to arbitrary fine resolution and arbitrary high embedding dimension. These restrictions limit our ability to distinguish between signals generated by different systems, such as regular, chaotic or stochastic ones, when analyzed from a time series point of view. We propose to classify the signal behavior, without referring to any specific model, as stochastic or deterministic on a certain scale of the resolution $\\epsilon$, according to the dependence of the $(\\epsilon,\\tau)$-entropy, $h(\\epsilon, \\tau)$, and of the finite size Lyapunov exponent, $\\lambda(\\epsilon)$, on $\\epsilon$."]], "corpus": "arxiv"}
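For downstream consumers of this dataset, a minimal sketch (assuming the file remains plain JSON Lines, one record per line, with the fields shown in the diff above) of loading the appended retrieval log records:

import json

# Path of the file touched by this commit.
path = "data/retrieval_individual-5b78c89f-244c-4bc4-9693-2c3c7acb6977.jsonl"

with open(path, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # "output" holds [query, retrieved_passage] pairs; take the first pair.
        query, passage = record["output"][0]
        print(record["model_name"], record["corpus"], query[:60], sep=" | ")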