Muennighoff committed
Commit dc18ef6 · verified · 1 Parent(s): 0257076

Scheduled Commit

data/retrieval_battle-928d45d9-19ab-4d05-9aae-4b6574886d0d.jsonl CHANGED
@@ -102,3 +102,4 @@
  {"tstamp": 1722584191.1787, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "d90a4a25e3d84080a6c7c96354cbc1c2", "0_model_name": "text-embedding-3-large", "0_prompt": "A new fine-tuning method for aligning language models with human preferences that is simpler and more stable than RLHF.", "0_output": [["A new fine-tuning method for aligning language models with human preferences that is simpler and more stable than RLHF.", "Title: Aligning Language Models with Offline Learning from Human Feedback\n\nAbstract: Learning from human preferences is crucial for language models (LMs) to effectively cater to human needs and societal values. Previous research has made notable progress by leveraging human feedback to follow instructions. However, these approaches rely primarily on online learning techniques like Proximal Policy Optimization (PPO), which have been proven unstable and challenging to tune for language models. Moreover, PPO requires complex distributed system implementation, hindering the efficiency of large-scale distributed training. In this study, we propose an offline learning from human feedback framework to align LMs without interacting with environments. Specifically, we explore filtering alignment (FA), reward-weighted regression (RWR), and conditional alignment (CA) to align language models to human preferences. By employing a loss function similar to supervised fine-tuning, our methods ensure more stable model training than PPO with a simple machine learning system~(MLSys) and much fewer (around 9\\%) computing resources. Experimental results demonstrate that conditional alignment outperforms other offline alignment methods and is comparable to PPO."]], "0_corpus": "arxiv", "1_conv_id": "f37e4569c83e4a85b6f92eaaef358f5b", "1_model_name": "embed-english-v3.0", "1_prompt": "A new fine-tuning method for aligning language models with human preferences that is simpler and more stable than RLHF.", "1_output": [["A new fine-tuning method for aligning language models with human preferences that is simpler and more stable than RLHF.", "Title: Transforming and Combining Rewards for Aligning Large Language Models\n\nAbstract: A common approach for aligning language models to human preferences is to first learn a reward model from preference data, and then use this reward model to update the language model. We study two closely related problems that arise in this approach. First, any monotone transformation of the reward model preserves preference ranking; is there a choice that is ``better'' than others? Second, we often wish to align language models to multiple properties: how should we combine multiple reward models? Using a probabilistic interpretation of the alignment procedure, we identify a natural choice for transformation for (the common case of) rewards learned from Bradley-Terry preference models. This derived transformation has two important properties. First, it emphasizes improving poorly-performing outputs, rather than outputs that already score well. This mitigates both underfitting (where some prompts are not improved) and reward hacking (where the model learns to exploit misspecification of the reward model). Second, it enables principled aggregation of rewards by linking summation to logical conjunction: the sum of transformed rewards corresponds to the probability that the output is ``good'' in all measured properties, in a sense we make precise. 
Experiments aligning language models to be both helpful and harmless using RLHF show substantial improvements over the baseline (non-transformed) approach."]], "1_corpus": "arxiv"}
  {"tstamp": 1722584239.5779, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "b0064d40e85943bbb37b3c62593be16f", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "Research on sentence embedding models that balance accuracy and computational resources.", "0_output": [["Research on sentence embedding models that balance accuracy and computational resources.", "Title: Diffusive Lorentz gases and multibaker maps are compatible with irreversible thermodynamics\n\nAbstract: We show that simple diffusive systems, such as the Lorentz gas and multibaker maps are perfectly compatible with the laws of irreversible thermodynamics, despite the fact that the moving particles, or their equivalents, in these models do not interact with each other, and that the dynamics takes place in low-dimensional phase spaces. The interaction of moving particles with scatterers provides the dynamical mechanism responsible for an approach to equilibrium, under appropriate conditions. This analysis provides a refutation of the criticisms expressed recently by Cohen and Rondoni [Physica A 306 (2002) 117-128]."]], "0_corpus": "arxiv", "1_conv_id": "37bcff2bc6024cd0b00562487187d17c", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "Research on sentence embedding models that balance accuracy and computational resources.", "1_output": [["Research on sentence embedding models that balance accuracy and computational resources.", "Title: Simple Techniques for Enhancing Sentence Embeddings in Generative Language Models\n\nAbstract: Sentence Embedding stands as a fundamental task within the realm of Natural Language Processing, finding extensive application in search engines, expert systems, and question-and-answer platforms. With the continuous evolution of large language models such as LLaMA and Mistral, research on sentence embedding has recently achieved notable breakthroughs. However, these advancements mainly pertain to fine-tuning scenarios, leaving explorations into computationally efficient direct inference methods for sentence representation in a nascent stage. This paper endeavors to bridge this research gap. Through comprehensive experimentation, we challenge the widely held belief in the necessity of an Explicit One-word Limitation for deriving sentence embeddings from Pre-trained Language Models (PLMs). We demonstrate that this approach, while beneficial for generative models under direct inference scenario, is not imperative for discriminative models or the fine-tuning of generative PLMs. This discovery sheds new light on the design of manual templates in future studies. Building upon this insight, we propose two innovative prompt engineering techniques capable of further enhancing the expressive power of PLMs' raw embeddings: Pretended Chain of Thought and Knowledge Enhancement. We confirm their effectiveness across various PLM types and provide a detailed exploration of the underlying factors contributing to their success."]], "1_corpus": "arxiv"}
  {"tstamp": 1722584585.5144, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "f62881d6d62a4db0a7a8ed02819c379f", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "I just had my wisdom teeth removed, what should I do?", "0_output": [["I just had my wisdom teeth removed, what should I do?", "Impacted wisdom teeth\n\nLocal treatment Pericoronitis is an infection of the operculum of a partially impacted wisdom tooth. It can be treated with local cleaning, an antiseptic rinse of the area and antibiotics if severe. Definitive treatment can be excision of the operculum, however, recurrence of these infections is high. Pericoronitis, while a small area of tissue, should be viewed with caution, because it lies near the anatomic planes of the neck and can progress to life-threatening neck infections.\nWisdom teeth removal\nWisdom teeth removal (extraction) is the most common treatment for impacted wisdom teeth. In the US, 10 million wisdom teeth are removed annually. The procedure can be either simple or surgical, depending on the depth of the impaction and angle of the tooth. Surgical removal is to create an incision in the mucosa of the mouth, remove bone of the mandible or maxilla adjacent the tooth, extract it or possibly section the tooth and extract it in pieces. This can be completed under local anaesthetic, sedation or general anaesthetic. As of 2020, the evidence is insufficient to recommend one type of surgical practice over another.\nRecovery, risks and complications\nMost people will experience pain and swelling (worst on the first post-operative day) then return to work after 2 to 3 days with the rate of discomfort decreased to about 25% by post-operative day 7 unless affected by dry socket: a disorder of wound healing that prolongs post-operative pain. It can be 4 to 6 weeks before patients are fully recovered with a full range of jaw movements."]], "0_corpus": "wikipedia", "1_conv_id": "6dbb2fcf58014611beabd74355e2f36f", "1_model_name": "BM25", "1_prompt": "I just had my wisdom teeth removed, what should I do?", "1_output": [["I just had my wisdom teeth removed, what should I do?", "Impacted wisdom teeth\n\nOne trial in adolescents who had orthodontic treatment comparing the removal of impacted lower wisdom teeth with retention was identified. It only examined the effect on late lower incisor crowding and was rated 'highly biased' by the authors. The authors concluded that there is not enough evidence to support either the routine removal or retention of asymptomatic impacted wisdom teeth. Another randomised controlled trial done in the UK has suggested that it is not reasonable to remove asymptomatic disease-free impacted wisdom tooth merely to prevent incisor crowding as there is not strong enough evidence to show this association.\nDue to the lack of sufficient evidence to determine whether such teeth should be removed or not, the patient's preference and values should be taken into account with clinical expertise exercised and careful consideration of risks and benefits to determine treatment. If it is decided to retain asymptomatic disease-free impacted wisdom teeth, clinical assessment at regular intervals is advisable to prevent undesirable outcomes (pericoronitis, root resorption, cyst formation, tumour formation, inflammation/infection).\nCoronectomy\nCoronectomy is a procedure where the crown of the impacted wisdom tooth is removed, but the roots are intentionally left in place. 
It is indicated when there is no disease of the dental pulp or infection around the crown of the tooth, and there is a high risk of inferior alveolar nerve injury."]], "1_corpus": "wikipedia"}
+ {"tstamp": 1722584614.2436, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "7b39622e84944baaba2ded12a8401e64", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "0_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "0_corpus": "arxiv", "1_conv_id": "2ff86c912b7f4ad68f8bde33b852808d", "1_model_name": "intfloat/multilingual-e5-large-instruct", "1_prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "1_output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "1_corpus": "arxiv"}
data/retrieval_individual-928d45d9-19ab-4d05-9aae-4b6574886d0d.jsonl CHANGED
@@ -355,3 +355,5 @@
  {"tstamp": 1722584202.8677, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1722584202.7023, "finish": 1722584202.8677, "ip": "", "conv_id": "37bcff2bc6024cd0b00562487187d17c", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "Research on sentence embedding models that balance accuracy and computational resources.", "output": [["Research on sentence embedding models that balance accuracy and computational resources.", "Title: Simple Techniques for Enhancing Sentence Embeddings in Generative Language Models\n\nAbstract: Sentence Embedding stands as a fundamental task within the realm of Natural Language Processing, finding extensive application in search engines, expert systems, and question-and-answer platforms. With the continuous evolution of large language models such as LLaMA and Mistral, research on sentence embedding has recently achieved notable breakthroughs. However, these advancements mainly pertain to fine-tuning scenarios, leaving explorations into computationally efficient direct inference methods for sentence representation in a nascent stage. This paper endeavors to bridge this research gap. Through comprehensive experimentation, we challenge the widely held belief in the necessity of an Explicit One-word Limitation for deriving sentence embeddings from Pre-trained Language Models (PLMs). We demonstrate that this approach, while beneficial for generative models under direct inference scenario, is not imperative for discriminative models or the fine-tuning of generative PLMs. This discovery sheds new light on the design of manual templates in future studies. Building upon this insight, we propose two innovative prompt engineering techniques capable of further enhancing the expressive power of PLMs' raw embeddings: Pretended Chain of Thought and Knowledge Enhancement. We confirm their effectiveness across various PLM types and provide a detailed exploration of the underlying factors contributing to their success."]], "corpus": "arxiv"}
  {"tstamp": 1722584569.2355, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1722584568.5436, "finish": 1722584569.2355, "ip": "", "conv_id": "f62881d6d62a4db0a7a8ed02819c379f", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "I just had my wisdom teeth removed, what should I do?", "output": [["I just had my wisdom teeth removed, what should I do?", "Impacted wisdom teeth\n\nLocal treatment Pericoronitis is an infection of the operculum of a partially impacted wisdom tooth. It can be treated with local cleaning, an antiseptic rinse of the area and antibiotics if severe. Definitive treatment can be excision of the operculum, however, recurrence of these infections is high. Pericoronitis, while a small area of tissue, should be viewed with caution, because it lies near the anatomic planes of the neck and can progress to life-threatening neck infections.\nWisdom teeth removal\nWisdom teeth removal (extraction) is the most common treatment for impacted wisdom teeth. In the US, 10 million wisdom teeth are removed annually. The procedure can be either simple or surgical, depending on the depth of the impaction and angle of the tooth. Surgical removal is to create an incision in the mucosa of the mouth, remove bone of the mandible or maxilla adjacent the tooth, extract it or possibly section the tooth and extract it in pieces. This can be completed under local anaesthetic, sedation or general anaesthetic. As of 2020, the evidence is insufficient to recommend one type of surgical practice over another.\nRecovery, risks and complications\nMost people will experience pain and swelling (worst on the first post-operative day) then return to work after 2 to 3 days with the rate of discomfort decreased to about 25% by post-operative day 7 unless affected by dry socket: a disorder of wound healing that prolongs post-operative pain. It can be 4 to 6 weeks before patients are fully recovered with a full range of jaw movements."]], "corpus": "wikipedia"}
  {"tstamp": 1722584569.2355, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1722584568.5436, "finish": 1722584569.2355, "ip": "", "conv_id": "6dbb2fcf58014611beabd74355e2f36f", "model_name": "BM25", "prompt": "I just had my wisdom teeth removed, what should I do?", "output": [["I just had my wisdom teeth removed, what should I do?", "Impacted wisdom teeth\n\nOne trial in adolescents who had orthodontic treatment comparing the removal of impacted lower wisdom teeth with retention was identified. It only examined the effect on late lower incisor crowding and was rated 'highly biased' by the authors. The authors concluded that there is not enough evidence to support either the routine removal or retention of asymptomatic impacted wisdom teeth. Another randomised controlled trial done in the UK has suggested that it is not reasonable to remove asymptomatic disease-free impacted wisdom tooth merely to prevent incisor crowding as there is not strong enough evidence to show this association.\nDue to the lack of sufficient evidence to determine whether such teeth should be removed or not, the patient's preference and values should be taken into account with clinical expertise exercised and careful consideration of risks and benefits to determine treatment. If it is decided to retain asymptomatic disease-free impacted wisdom teeth, clinical assessment at regular intervals is advisable to prevent undesirable outcomes (pericoronitis, root resorption, cyst formation, tumour formation, inflammation/infection).\nCoronectomy\nCoronectomy is a procedure where the crown of the impacted wisdom tooth is removed, but the roots are intentionally left in place. It is indicated when there is no disease of the dental pulp or infection around the crown of the tooth, and there is a high risk of inferior alveolar nerve injury."]], "corpus": "wikipedia"}
+ {"tstamp": 1722584608.4325, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722584608.1666, "finish": 1722584608.4325, "ip": "", "conv_id": "7b39622e84944baaba2ded12a8401e64", "model_name": "GritLM/GritLM-7B", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "corpus": "arxiv"}
+ {"tstamp": 1722584608.4325, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1722584608.1666, "finish": 1722584608.4325, "ip": "", "conv_id": "2ff86c912b7f4ad68f8bde33b852808d", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "corpus": "arxiv"}