Muennighoff committed
Commit efe1322 · verified · 1 Parent(s): 9013e44

Scheduled Commit

data/retrieval_battle-928d45d9-19ab-4d05-9aae-4b6574886d0d.jsonl CHANGED
@@ -67,3 +67,4 @@
67
  {"tstamp": 1722565919.0151, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "29399bd90bcd4038b96678302feff021", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "0_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "0_corpus": "arxiv", "1_conv_id": "90c94b07b4454fef882b6b56791bc1cb", "1_model_name": "Salesforce/SFR-Embedding-2_R", "1_prompt": "I am looking for a paper to help me deal with data constraints as I scale up my language model.", "1_output": [["I am looking for a paper to help me deal with data constraints as I scale up my language model.", "Title: Scaling Data-Constrained Language Models\n\nAbstract: The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations."]], "1_corpus": "arxiv"}
68
  {"tstamp": 1722567339.9714, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "854aa33c4bef46eaac5aaa7c6b8c13a5", "0_model_name": "text-embedding-004", "0_prompt": "Which llm is best released in 2023", "0_output": [["Which llm is best released in 2023", "Llama (language model)\n\nBackground After the release of large language models such as GPT-3, a focus of research was up-scaling models which in some instances showed major increases in emergent capabilities. The release of ChatGPT and its surprise success caused an increase in attention to large language models.\nCompared with other responses to ChatGPT, Meta's Chief AI scientist Yann LeCun stated that large language models are best for aiding with writing.\nInitial release\nLLaMA was announced on February 24, 2023, via a blog post and a paper describing the model's training, architecture, and performance. The inference code used to run the model was publicly released under the open-source GPLv3 license. Access to the model's weights was managed by an application process, with access to be granted \"on a case-by-case basis to academic researchers; those affiliated with organizations in government, civil society, and academia; and industry research laboratories around the world\".\nLlama was trained on only publicly available information, and was trained at various different model sizes, with the intention to make it more accessible to different hardware.\nMeta AI reported the 13B parameter model performance on most NLP benchmarks exceeded that of the much larger GPT-3 (with 175B parameters), and the largest 65B model was competitive with state of the art models such as PaLM and Chinchilla."]], "0_corpus": "wikipedia", "1_conv_id": "200d3aa951724d9ab075a0d09f66697a", "1_model_name": "intfloat/multilingual-e5-large-instruct", "1_prompt": "Which llm is best released in 2023", "1_output": [["Which llm is best released in 2023", "2023 in science\n\nFurther LLM developments during what has been called an \"AI boom\" include: local or open source versions of LLaMA which was leaked in March, news outlets report on GPT4-based Auto-GPT that given natural language commands uses the Internet and other tools in attempts to understand and achieve its tasks with unclear or so-far little practicality, a systematic evaluation of answers from four \"generative search engines\" suggests their outputs \"appear informative, but frequently contain unsupported statements and inaccurate citations\", a multi-modal open source tool for understanding and generating speech, a data scientist argues that \"researchers need to collaborate to develop open-source LLMs that are transparent\" and independent, Stability AI launches an open source LLM.\nOn 12 April, researchers demonstrate an '' that can create of models of natural phenomena from knowledge axioms and experimental data, showing the software can rediscover using logical reasoning and few data points.\n: a review suggests vitamin D3 may reduce cancer mortality by around 12% (31 Mar), review of experimental phototherapies against dementia cognitive decline (5 Apr), mice-tested L. 
reuteri-and-tryptophan-diet for checkpoint inhibitor potentiation (6 Apr), doxycycline post-exposure prophylaxis against STIs (6 Apr), an engineered probiotic against alcohol-induced damage (11 Apr), phase 2 trialed AXA1125 against long COVID fatigue (14 Apr), review finds cranberry products useful against UTIs in women (17 Apr), and macaques-tested low-intensity focus ultrasound delivery of AAV into brain regions against brain diseases (19 Apr). Progress in screening: an α-synuclein SAA (assay) against Parkinson's disease (12 Apr), and exogenously administered bioengineered sensors that amplify urinary cancer biomarkers for detection (24 Apr)."]], "1_corpus": "wikipedia"}
69
  {"tstamp": 1722573807.4205, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "912f6fc66df2453b9902bbca6492f982", "0_model_name": "mixedbread-ai/mxbai-embed-large-v1", "0_prompt": "A study that uses large transformer language models for high-fidelity image generation and benchmarking with DrawBench.", "0_output": [["A study that uses large transformer language models for high-fidelity image generation and benchmarking with DrawBench.", "Title: DeepArt: A Benchmark to Advance Fidelity Research in AI-Generated Content\n\nAbstract: This paper explores the image synthesis capabilities of GPT-4, a leading multi-modal large language model. We establish a benchmark for evaluating the fidelity of texture features in images generated by GPT-4, comprising manually painted pictures and their AI-generated counterparts. The contributions of this study are threefold: First, we provide an in-depth analysis of the fidelity of image synthesis features based on GPT-4, marking the first such study on this state-of-the-art model. Second, the quantitative and qualitative experiments fully reveals the limitations of the GPT-4 model in image synthesis. Third, we have compiled a unique benchmark of manual drawings and corresponding GPT-4-generated images, introducing a new task to advance fidelity research in AI-generated content (AIGC). The dataset is available at: \\url{https://github.com/rickwang28574/DeepArt}."]], "0_corpus": "arxiv", "1_conv_id": "ddd8d2df2bd24fc0abfb96c04060bd8a", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "A study that uses large transformer language models for high-fidelity image generation and benchmarking with DrawBench.", "1_output": [["A study that uses large transformer language models for high-fidelity image generation and benchmarking with DrawBench.", "Title: Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding\n\nAbstract: We present Imagen, a text-to-image diffusion model with an unprecedented degree of photorealism and a deep level of language understanding. Imagen builds on the power of large transformer language models in understanding text and hinges on the strength of diffusion models in high-fidelity image generation. Our key discovery is that generic large language models (e.g. T5), pretrained on text-only corpora, are surprisingly effective at encoding text for image synthesis: increasing the size of the language model in Imagen boosts both sample fidelity and image-text alignment much more than increasing the size of the image diffusion model. Imagen achieves a new state-of-the-art FID score of 7.27 on the COCO dataset, without ever training on COCO, and human raters find Imagen samples to be on par with the COCO data itself in image-text alignment. To assess text-to-image models in greater depth, we introduce DrawBench, a comprehensive and challenging benchmark for text-to-image models. With DrawBench, we compare Imagen with recent methods including VQ-GAN+CLIP, Latent Diffusion Models, and DALL-E 2, and find that human raters prefer Imagen over other models in side-by-side comparisons, both in terms of sample quality and image-text alignment. See https://imagen.research.google/ for an overview of the results."]], "1_corpus": "arxiv"}
70
+ {"tstamp": 1722573826.7142, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "072810e546564fb18958cc69bc67b16d", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "what does the @ symbol mean before a name?", "0_output": [["what does the @ symbol mean before a name?", "Stackoverflow Stackexchange\n\nQ: What does the @ symbol before a variable name mean in C#? I understand that the @ symbol can be used before a string literal to change how the compiler parses the string. But what does it mean when a variable name is prefixed with the @ symbol?\n\nA: The @ symbol allows you to use reserved word. For example:\nint @class = 15;\n\nThe above works, when the below wouldn't:\nint class = 15;\n\n"]], "0_corpus": "stackexchange", "1_conv_id": "d16af50f94c54574a2840ce74063a272", "1_model_name": "BM25", "1_prompt": "what does the @ symbol mean before a name?", "1_output": [["what does the @ symbol mean before a name?", "Stackoverflow Stackexchange\n\nQ: What does the @ symbol before a variable name mean in C#? I understand that the @ symbol can be used before a string literal to change how the compiler parses the string. But what does it mean when a variable name is prefixed with the @ symbol?\n\nA: The @ symbol allows you to use reserved word. For example:\nint @class = 15;\n\nThe above works, when the below wouldn't:\nint class = 15;\n\n"]], "1_corpus": "stackexchange"}
data/retrieval_individual-928d45d9-19ab-4d05-9aae-4b6574886d0d.jsonl CHANGED
@@ -259,3 +259,7 @@
259
  {"tstamp": 1722567324.1041, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1722567321.9368, "finish": 1722567324.1041, "ip": "", "conv_id": "200d3aa951724d9ab075a0d09f66697a", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "Which llm is best released in 2023", "output": [["Which llm is best released in 2023", "2023 in science\n\nFurther LLM developments during what has been called an \"AI boom\" include: local or open source versions of LLaMA which was leaked in March, news outlets report on GPT4-based Auto-GPT that given natural language commands uses the Internet and other tools in attempts to understand and achieve its tasks with unclear or so-far little practicality, a systematic evaluation of answers from four \"generative search engines\" suggests their outputs \"appear informative, but frequently contain unsupported statements and inaccurate citations\", a multi-modal open source tool for understanding and generating speech, a data scientist argues that \"researchers need to collaborate to develop open-source LLMs that are transparent\" and independent, Stability AI launches an open source LLM.\nOn 12 April, researchers demonstrate an '' that can create of models of natural phenomena from knowledge axioms and experimental data, showing the software can rediscover using logical reasoning and few data points.\n: a review suggests vitamin D3 may reduce cancer mortality by around 12% (31 Mar), review of experimental phototherapies against dementia cognitive decline (5 Apr), mice-tested L. reuteri-and-tryptophan-diet for checkpoint inhibitor potentiation (6 Apr), doxycycline post-exposure prophylaxis against STIs (6 Apr), an engineered probiotic against alcohol-induced damage (11 Apr), phase 2 trialed AXA1125 against long COVID fatigue (14 Apr), review finds cranberry products useful against UTIs in women (17 Apr), and macaques-tested low-intensity focus ultrasound delivery of AAV into brain regions against brain diseases (19 Apr). Progress in screening: an α-synuclein SAA (assay) against Parkinson's disease (12 Apr), and exogenously administered bioengineered sensors that amplify urinary cancer biomarkers for detection (24 Apr)."]], "corpus": "wikipedia"}
260
  {"tstamp": 1722573788.5388, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1722573788.1826, "finish": 1722573788.5388, "ip": "", "conv_id": "912f6fc66df2453b9902bbca6492f982", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "A study that uses large transformer language models for high-fidelity image generation and benchmarking with DrawBench.", "output": [["A study that uses large transformer language models for high-fidelity image generation and benchmarking with DrawBench.", "Title: DeepArt: A Benchmark to Advance Fidelity Research in AI-Generated Content\n\nAbstract: This paper explores the image synthesis capabilities of GPT-4, a leading multi-modal large language model. We establish a benchmark for evaluating the fidelity of texture features in images generated by GPT-4, comprising manually painted pictures and their AI-generated counterparts. The contributions of this study are threefold: First, we provide an in-depth analysis of the fidelity of image synthesis features based on GPT-4, marking the first such study on this state-of-the-art model. Second, the quantitative and qualitative experiments fully reveals the limitations of the GPT-4 model in image synthesis. Third, we have compiled a unique benchmark of manual drawings and corresponding GPT-4-generated images, introducing a new task to advance fidelity research in AI-generated content (AIGC). The dataset is available at: \\url{https://github.com/rickwang28574/DeepArt}."]], "corpus": "arxiv"}
261
  {"tstamp": 1722573788.5388, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1722573788.1826, "finish": 1722573788.5388, "ip": "", "conv_id": "ddd8d2df2bd24fc0abfb96c04060bd8a", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "A study that uses large transformer language models for high-fidelity image generation and benchmarking with DrawBench.", "output": [["A study that uses large transformer language models for high-fidelity image generation and benchmarking with DrawBench.", "Title: Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding\n\nAbstract: We present Imagen, a text-to-image diffusion model with an unprecedented degree of photorealism and a deep level of language understanding. Imagen builds on the power of large transformer language models in understanding text and hinges on the strength of diffusion models in high-fidelity image generation. Our key discovery is that generic large language models (e.g. T5), pretrained on text-only corpora, are surprisingly effective at encoding text for image synthesis: increasing the size of the language model in Imagen boosts both sample fidelity and image-text alignment much more than increasing the size of the image diffusion model. Imagen achieves a new state-of-the-art FID score of 7.27 on the COCO dataset, without ever training on COCO, and human raters find Imagen samples to be on par with the COCO data itself in image-text alignment. To assess text-to-image models in greater depth, we introduce DrawBench, a comprehensive and challenging benchmark for text-to-image models. With DrawBench, we compare Imagen with recent methods including VQ-GAN+CLIP, Latent Diffusion Models, and DALL-E 2, and find that human raters prefer Imagen over other models in side-by-side comparisons, both in terms of sample quality and image-text alignment. See https://imagen.research.google/ for an overview of the results."]], "corpus": "arxiv"}
262
+ {"tstamp": 1722573821.3194, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1722573820.9603, "finish": 1722573821.3194, "ip": "", "conv_id": "072810e546564fb18958cc69bc67b16d", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "what does the @ symbol mean before a name?", "output": [["what does the @ symbol mean before a name?", "Stackoverflow Stackexchange\n\nQ: What does the @ symbol before a variable name mean in C#? I understand that the @ symbol can be used before a string literal to change how the compiler parses the string. But what does it mean when a variable name is prefixed with the @ symbol?\n\nA: The @ symbol allows you to use reserved word. For example:\nint @class = 15;\n\nThe above works, when the below wouldn't:\nint class = 15;\n\n"]], "corpus": "stackexchange"}
263
+ {"tstamp": 1722573821.3194, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1722573820.9603, "finish": 1722573821.3194, "ip": "", "conv_id": "d16af50f94c54574a2840ce74063a272", "model_name": "BM25", "prompt": "what does the @ symbol mean before a name?", "output": [["what does the @ symbol mean before a name?", "Stackoverflow Stackexchange\n\nQ: What does the @ symbol before a variable name mean in C#? I understand that the @ symbol can be used before a string literal to change how the compiler parses the string. But what does it mean when a variable name is prefixed with the @ symbol?\n\nA: The @ symbol allows you to use reserved word. For example:\nint @class = 15;\n\nThe above works, when the below wouldn't:\nint class = 15;\n\n"]], "corpus": "stackexchange"}
264
+ {"tstamp": 1722573834.4428, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1722573834.1414, "finish": 1722573834.4428, "ip": "", "conv_id": "6a00f63771b94b3bbe82e843a5ac9834", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "why are aero bars not allowed?", "output": [["why are aero bars not allowed?", "Stackoverflow Stackexchange\n\nQ: ComboBox QML does not show item text after selection I have a QML ComboBox whose model is defined as a C++ QList < QObject* >.\nWhen I open the drop down list, I can see all items defined in the C++ model, but after selection, the selected item is not shown.\nSo, items are only visible in the drop down element.\nThe relevant part of the qml file is:\nComboBox {\n id: placesCombo\n anchors.top: parent.top\n width: parent.width\n model: myModel\n delegate: ItemDelegate {\n width: placesCombo.width\n contentItem: Text {\n id: placesComboItem\n text: displayLabel\n elide: Text.ElideRight\n verticalAlignment: Text.AlignVCenter\n }\n }\n}\n\nHow to display in the closed combobox the item text previously selected in the drop down element?\n\nA: According to the docs:\n\ntextRole : string \nThis property holds the model role used for\n populating the combo box.\nWhen the model has multiple roles, textRole can be set to determine\n which role should be displayed.\n\nYou have to indicate the role of the model to be displayed through textRole.\nComboBox {\n id: placesCombo\n textRole: \"displayLabel\"\n ...\n}\n\n"]], "corpus": "stackexchange"}
265
+ {"tstamp": 1722573834.4428, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722573834.1414, "finish": 1722573834.4428, "ip": "", "conv_id": "9fd4e347bcf54287872f46a6d67c3e9b", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "why are aero bars not allowed?", "output": [["why are aero bars not allowed?", "Scifi Stackexchange\n\nQ: Why don't bars in star wars allow droids? In A New Hope Luke is told by the bartender at Mos Eisley that C3P0's kind were not served there \nIn Attack of the Clones while Anakin is taking Padme into hiding, they go into a bar and this time a droid bartender tells R2D2 (I think) to get out of the bar \nWhy is there such racism against droids in Star Wars bars?\n"]], "corpus": "stackexchange"}