Scheduled Commit
data/retrieval_battle-f36c9f97-e795-4522-8eb7-7dee254b42c7.jsonl
CHANGED
@@ -86,3 +86,4 @@
86 |
{"tstamp": 1723718100.6415, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "58ec6aebb8f145edae728ea5dff27722", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "who came up with the theory of relativity?", "0_output": [["who came up with the theory of relativity?", "Theory of relativity\n\nThe theory transformed theoretical physics and astronomy during the 20th century, superseding a 200-year-old theory of mechanics created primarily by Isaac Newton. It introduced concepts including 4-dimensional spacetime as a unified entity of space and time, relativity of simultaneity, kinematic and gravitational time dilation, and length contraction. In the field of physics, relativity improved the science of elementary particles and their fundamental interactions, along with ushering in the nuclear age. With relativity, cosmology and astrophysics predicted extraordinary astronomical phenomena such as neutron stars, black holes, and gravitational waves.\nDevelopment and acceptance\nAlbert Einstein published the theory of special relativity in 1905, building on many theoretical results and empirical findings obtained by Albert A. Michelson, Hendrik Lorentz, Henri Poincaré and others. Max Planck, Hermann Minkowski and others did subsequent work.\nEinstein developed general relativity between 1907 and 1915, with contributions by many others after 1915. The final form of general relativity was published in 1916.\nThe term \"theory of relativity\" was based on the expression \"relative theory\" () used in 1906 by Planck, who emphasized how the theory uses the principle of relativity. In the discussion section of the same paper, Alfred Bucherer used for the first time the expression \"theory of relativity\" ()."]], "0_corpus": "wikipedia", "1_conv_id": "010cddc19e4d427a834fa0230878e2af", "1_model_name": "embed-english-v3.0", "1_prompt": "who came up with the theory of relativity?", "1_output": [["who came up with the theory of relativity?", "Theory of relativity\n\nThe theory transformed theoretical physics and astronomy during the 20th century, superseding a 200-year-old theory of mechanics created primarily by Isaac Newton. It introduced concepts including 4-dimensional spacetime as a unified entity of space and time, relativity of simultaneity, kinematic and gravitational time dilation, and length contraction. In the field of physics, relativity improved the science of elementary particles and their fundamental interactions, along with ushering in the nuclear age. With relativity, cosmology and astrophysics predicted extraordinary astronomical phenomena such as neutron stars, black holes, and gravitational waves.\nDevelopment and acceptance\nAlbert Einstein published the theory of special relativity in 1905, building on many theoretical results and empirical findings obtained by Albert A. Michelson, Hendrik Lorentz, Henri Poincaré and others. Max Planck, Hermann Minkowski and others did subsequent work.\nEinstein developed general relativity between 1907 and 1915, with contributions by many others after 1915. The final form of general relativity was published in 1916.\nThe term \"theory of relativity\" was based on the expression \"relative theory\" () used in 1906 by Planck, who emphasized how the theory uses the principle of relativity. In the discussion section of the same paper, Alfred Bucherer used for the first time the expression \"theory of relativity\" ()."]], "1_corpus": "wikipedia"}
87 |
{"tstamp": 1723725557.1694, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "c143db0a911448b39011cf8639cdeaa0", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "what is cancer lymphodema", "0_output": [["what is cancer lymphodema", "Title: Microwave lymphedema assessment using deep learning with contour assisted backprojection\n\nAbstract: We present a method for early detection of lymphatic fluid accumulation in lymphedema patients based on microwave imaging of the limb volume across an air gap. The proposed algorithm uses contour information of the imaged limb surface to approximate the wave propagation velocity locally to solve the eikonal equation for implementing the adjoint imaging operator. This modified backprojection algorithm results in focused imagery close to the limb surface where lymphatic fluid accumulation presents itself. Next, a deep neural network based on U-Net architecture is employed to identify the location and extent of the lymphatic fluid. Simulation studies with various upper and lower arm profiles compare the focusing performance of the proposed contour assisted backprojection imaging with the baseline imaging approach that assumes homogeneous media. The empirical results of the simulation experiments show that the proposed imaging method significantly improves the ability of the deepnet model to identify the location and the volume of the excess fluid in the limb."]], "0_corpus": "arxiv", "1_conv_id": "34a8dc28e83e43fda8b5b8bd6fcbd97b", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "what is cancer lymphodema", "1_output": [["what is cancer lymphodema", "Title: Literal Movement Grammars\n\nAbstract: Literal movement grammars (LMGs) provide a general account of extraposition phenomena through an attribute mechanism allowing top-down displacement of syntactical information. LMGs provide a simple and efficient treatment of complex linguistic phenomena such as cross-serial dependencies in German and Dutch---separating the treatment of natural language into a parsing phase closely resembling traditional context-free treatment, and a disambiguation phase which can be carried out using matching, as opposed to full unification employed in most current grammar formalisms of linguistical relevance."]], "1_corpus": "arxiv"}
88 |
{"tstamp": 1723725611.3436, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "15df5739a8a94a79b276fd0c742b31c8", "0_model_name": "embed-english-v3.0", "0_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "0_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "0_corpus": "arxiv", "1_conv_id": "d67244c71dff47dd8578f0ae183983d6", "1_model_name": "intfloat/multilingual-e5-large-instruct", "1_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "1_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "1_corpus": "arxiv"}
89 |
+
{"tstamp": 1723732221.0731, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "6467739d19d74bd18b68bb92509e39f4", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "Employee disengagement in an organisation", "0_output": [["Employee disengagement in an organisation", "Employee engagement\n\nEmployee engagement is a fundamental concept in the effort to understand and describe, both qualitatively and quantitatively, the nature of the relationship between an organization and its employees. An \"engaged employee\" is defined as one who is fully absorbed by and enthusiastic about their work and so takes positive action to further the organization's reputation and interests. An engaged employee has a positive attitude towards the organization and its values. In contrast, a disengaged employee may range from someone doing the bare minimum at work (aka 'coasting'), up to an employee who is actively damaging the company's work output and reputation.\nAn organization with \"high\" employee engagement might therefore be expected to outperform those with \"low\" employee engagement.\nEmployee engagement first appeared as a concept in management theory in the 1990s,\nbecoming widespread in management practice in the 2000s, but it remains contested. Despite academic critiques, employee engagement practices are well established in the management of human resources and of internal communications.\nEmployee engagement today has become synonymous with terms like 'employee experience' and 'employee satisfaction', although satisfaction is a different concept. Whereas engagement refers to work motivation, satisfaction is an employee's attitude about the job--whether they like it or not. The relevance is much more due to the vast majority of new generation professionals in the workforce who have a higher propensity to be 'distracted' and 'disengaged' at work. A recent survey by StaffConnect suggests that an overwhelming number of enterprise organizations today (74.24%) were planning to improve employee experience in 2018."]], "0_corpus": "wikipedia", "1_conv_id": "93a566743f014d7fabae868780558a84", "1_model_name": "BM25", "1_prompt": "Employee disengagement in an organisation", "1_output": [["Employee disengagement in an organisation", "Diffusion of responsibility\n\nMoral disengagement Diffusion of responsibility can negatively affect personal morals. With diffusion of responsibility, it has been found that people feel less accountable for their work. This lack of accountability can be due to the fact that labour is divided amongst members in a group and so no one member feels an overwhelming amount of responsibility for their organisation or their overall project. It has been found that many members get narrowed into focusing on their individual work, that they eventually forget about any moral aspects. Purely focusing on the functional aspects of their jobs is a result of division of labour, which is a mechanism for diffusion of responsibility. This can be highly concerning for organisations since division of labour is a common practice amongst many.\nMoral disengagement is likely to be particularly important in organisations because bureaucratic structures and the division of labour seem to lend themselves to moral disengagement mechanisms such as the diffusion and displacement of responsibility. 
Euphemistic labelling is also common in organisations, such as when managers refer to layoffs as “rightsizing.” Also, with victims out of sight, globalization makes it easier to ignore or distort the harmful consequences of business actions. Thus, moral disengagement seems highly relevant to understanding unethical behaviour in 21st century organizations."]], "1_corpus": "wikipedia"}
data/retrieval_individual-f36c9f97-e795-4522-8eb7-7dee254b42c7.jsonl
CHANGED
@@ -395,3 +395,5 @@
395 |
{"tstamp": 1723725495.6183, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1723725495.3917, "finish": 1723725495.6183, "ip": "", "conv_id": "34a8dc28e83e43fda8b5b8bd6fcbd97b", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "what is cancer lymphodema", "output": [["what is cancer lymphodema", "Title: Literal Movement Grammars\n\nAbstract: Literal movement grammars (LMGs) provide a general account of extraposition phenomena through an attribute mechanism allowing top-down displacement of syntactical information. LMGs provide a simple and efficient treatment of complex linguistic phenomena such as cross-serial dependencies in German and Dutch---separating the treatment of natural language into a parsing phase closely resembling traditional context-free treatment, and a disambiguation phase which can be carried out using matching, as opposed to full unification employed in most current grammar formalisms of linguistical relevance."]], "corpus": "arxiv"}
396 |
{"tstamp": 1723725599.3534, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1723725597.1153, "finish": 1723725599.3534, "ip": "", "conv_id": "15df5739a8a94a79b276fd0c742b31c8", "model_name": "embed-english-v3.0", "prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "corpus": "arxiv"}
397 |
{"tstamp": 1723725599.3534, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1723725597.1153, "finish": 1723725599.3534, "ip": "", "conv_id": "d67244c71dff47dd8578f0ae183983d6", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "corpus": "arxiv"}
398 |
+
{"tstamp": 1723732190.2769, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1723732188.0093, "finish": 1723732190.2769, "ip": "", "conv_id": "6467739d19d74bd18b68bb92509e39f4", "model_name": "GritLM/GritLM-7B", "prompt": "Employee disengagement in an organisation", "output": [["Employee disengagement in an organisation", "Employee engagement\n\nEmployee engagement is a fundamental concept in the effort to understand and describe, both qualitatively and quantitatively, the nature of the relationship between an organization and its employees. An \"engaged employee\" is defined as one who is fully absorbed by and enthusiastic about their work and so takes positive action to further the organization's reputation and interests. An engaged employee has a positive attitude towards the organization and its values. In contrast, a disengaged employee may range from someone doing the bare minimum at work (aka 'coasting'), up to an employee who is actively damaging the company's work output and reputation.\nAn organization with \"high\" employee engagement might therefore be expected to outperform those with \"low\" employee engagement.\nEmployee engagement first appeared as a concept in management theory in the 1990s,\nbecoming widespread in management practice in the 2000s, but it remains contested. Despite academic critiques, employee engagement practices are well established in the management of human resources and of internal communications.\nEmployee engagement today has become synonymous with terms like 'employee experience' and 'employee satisfaction', although satisfaction is a different concept. Whereas engagement refers to work motivation, satisfaction is an employee's attitude about the job--whether they like it or not. The relevance is much more due to the vast majority of new generation professionals in the workforce who have a higher propensity to be 'distracted' and 'disengaged' at work. A recent survey by StaffConnect suggests that an overwhelming number of enterprise organizations today (74.24%) were planning to improve employee experience in 2018."]], "corpus": "wikipedia"}
399 |
+
{"tstamp": 1723732190.2769, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1723732188.0093, "finish": 1723732190.2769, "ip": "", "conv_id": "93a566743f014d7fabae868780558a84", "model_name": "BM25", "prompt": "Employee disengagement in an organisation", "output": [["Employee disengagement in an organisation", "Diffusion of responsibility\n\nMoral disengagement Diffusion of responsibility can negatively affect personal morals. With diffusion of responsibility, it has been found that people feel less accountable for their work. This lack of accountability can be due to the fact that labour is divided amongst members in a group and so no one member feels an overwhelming amount of responsibility for their organisation or their overall project. It has been found that many members get narrowed into focusing on their individual work, that they eventually forget about any moral aspects. Purely focusing on the functional aspects of their jobs is a result of division of labour, which is a mechanism for diffusion of responsibility. This can be highly concerning for organisations since division of labour is a common practice amongst many.\nMoral disengagement is likely to be particularly important in organisations because bureaucratic structures and the division of labour seem to lend themselves to moral disengagement mechanisms such as the diffusion and displacement of responsibility. Euphemistic labelling is also common in organisations, such as when managers refer to layoffs as “rightsizing.” Also, with victims out of sight, globalization makes it easier to ignore or distort the harmful consequences of business actions. Thus, moral disengagement seems highly relevant to understanding unethical behaviour in 21st century organizations."]], "corpus": "wikipedia"}