Scheduled Commit
data/retrieval_individual-ffd1dcbf-aab7-4cd1-bfa8-5b0d8404144b.jsonl
CHANGED
@@ -264,3 +264,5 @@
{"tstamp": 1732946125.0041, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1732946122.6063, "finish": 1732946125.0041, "ip": "", "conv_id": "5de2e9f72e7e4ba484d5c331d15b4ed9", "model_name": "embed-english-v3.0", "prompt": "what awards did the curious case of benjamin button win", "output": [["what awards did the curious case of benjamin button win", "The Curious Case of Benjamin Button (film)\n\nPeter Bradshaw in The Guardian called it \"166 minutes of twee tedium\", giving it one star out of five. Cosmo Landesman of the Sunday Times gave the film two out of five stars, writing: \"The film's premise serves no purpose. It's a gimmick that goes on for nearly three hours ... The Curious Case of Benjamin Button is an anodyne Hollywood film that offers a safe and sanitised view of life and death.\" James Christopher in The Times called it \"a tedious marathon of smoke and mirrors. In terms of the basic requirements of three-reel drama the film lacks substance, credibility, a decent script and characters you might actually care for.\" Derek Malcolm of London's Evening Standard felt that \"never at any point do you feel that there's anything more to it than a very strange story traversed by a film-maker who knows what he is doing but not always why he is doing it.\"\nAccolades\nAt the 81st Academy Awards, The Curious Case of Benjamin Button received a leading 13 Academy Award nominations, including Best Picture, Best Director for Fincher, Best Actor for Pitt, and Best Supporting Actress for Taraji P. Henson, and won three, for Best Art Direction, Best Makeup, and Best Visual Effects.\nTaraji P. Henson won Best Actress at the BET Awards for her role in the film combined with two other performances in Not Easily Broken, and The Family That Preys.\nThe film won all four awards it was nominated for at the 7th Visual Effects Society Awards, the categories of \"Outstanding Visual Effects in a Visual Effects-Driven Feature Motion Picture,\" \"Best Single Visual Effect of the Year,\" \"Outstanding Animated Character in a Live Action Feature Motion Picture,\" and \"Outstanding Compositing in a Feature Motion Picture.\""]], "corpus": "wikipedia"}
{"tstamp": 1732946177.2588, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1732946177.0006, "finish": 1732946177.2588, "ip": "", "conv_id": "c4d06987e156450c9fb708f6dc6f68db", "model_name": "text-embedding-004", "prompt": "How to formulate and teach algorithms as skills in LLMs for improving algorithmic reasoning", "output": [["How to formulate and teach algorithms as skills in LLMs for improving algorithmic reasoning", "Title: Teaching Algorithmic Reasoning via In-context Learning\n\nAbstract: Large language models (LLMs) have shown increasing in-context learning capabilities through scaling up model and data size. Despite this progress, LLMs are still unable to solve algorithmic reasoning problems. While providing a rationale with the final answer has led to further improvements in multi-step reasoning problems, Anil et al. 2022 showed that even simple algorithmic reasoning tasks such as parity are far from solved. In this work, we identify and study four key stages for successfully teaching algorithmic reasoning to LLMs: (1) formulating algorithms as skills, (2) teaching multiple skills simultaneously (skill accumulation), (3) teaching how to combine skills (skill composition) and (4) teaching how to use skills as tools. We show that it is possible to teach algorithmic reasoning to LLMs via in-context learning, which we refer to as algorithmic prompting. We evaluate our approach on a variety of arithmetic and quantitative reasoning tasks, and demonstrate significant boosts in performance over existing prompting techniques. In particular, for long parity, addition, multiplication and subtraction, we achieve an error reduction of approximately 10x, 9x, 5x and 2x respectively compared to the best available baselines."]], "corpus": "arxiv"}
{"tstamp": 1732946177.2588, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1732946177.0006, "finish": 1732946177.2588, "ip": "", "conv_id": "7b2c3ef296d542f2bcfb79baa5a58ac2", "model_name": "embed-english-v3.0", "prompt": "How to formulate and teach algorithms as skills in LLMs for improving algorithmic reasoning", "output": [["How to formulate and teach algorithms as skills in LLMs for improving algorithmic reasoning", "Title: Teaching Algorithmic Reasoning via In-context Learning\n\nAbstract: Large language models (LLMs) have shown increasing in-context learning capabilities through scaling up model and data size. Despite this progress, LLMs are still unable to solve algorithmic reasoning problems. While providing a rationale with the final answer has led to further improvements in multi-step reasoning problems, Anil et al. 2022 showed that even simple algorithmic reasoning tasks such as parity are far from solved. In this work, we identify and study four key stages for successfully teaching algorithmic reasoning to LLMs: (1) formulating algorithms as skills, (2) teaching multiple skills simultaneously (skill accumulation), (3) teaching how to combine skills (skill composition) and (4) teaching how to use skills as tools. We show that it is possible to teach algorithmic reasoning to LLMs via in-context learning, which we refer to as algorithmic prompting. We evaluate our approach on a variety of arithmetic and quantitative reasoning tasks, and demonstrate significant boosts in performance over existing prompting techniques. In particular, for long parity, addition, multiplication and subtraction, we achieve an error reduction of approximately 10x, 9x, 5x and 2x respectively compared to the best available baselines."]], "corpus": "arxiv"}
+{"tstamp": 1732975929.748, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1732975926.6987, "finish": 1732975929.748, "ip": "", "conv_id": "5d8c1698a1df43f5962169c3833398aa", "model_name": "BM25", "prompt": "embedding generation transformer model for ecommerce products sts task", "output": [["embedding generation transformer model for ecommerce products sts task", "Title: BERT Goes Shopping: Comparing Distributional Models for Product Representations\n\nAbstract: Word embeddings (e.g., word2vec) have been applied successfully to eCommerce products through~\\textit{prod2vec}. Inspired by the recent performance improvements on several NLP tasks brought by contextualized embeddings, we propose to transfer BERT-like architectures to eCommerce: our model -- ~\\textit{Prod2BERT} -- is trained to generate representations of products through masked session modeling. Through extensive experiments over multiple shops, different tasks, and a range of design choices, we systematically compare the accuracy of~\\textit{Prod2BERT} and~\\textit{prod2vec} embeddings: while~\\textit{Prod2BERT} is found to be superior in several scenarios, we highlight the importance of resources and hyperparameters in the best performing models. Finally, we provide guidelines to practitioners for training embeddings under a variety of computational and data constraints."]], "corpus": "arxiv"}
+{"tstamp": 1732975929.748, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1732975926.6987, "finish": 1732975929.748, "ip": "", "conv_id": "7ac5b1ca06ab4980a2e904bc50b55a16", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "embedding generation transformer model for ecommerce products sts task", "output": [["embedding generation transformer model for ecommerce products sts task", "Title: Learning-To-Embed: Adopting Transformer based models for E-commerce Products Representation Learning\n\nAbstract: Learning low-dimensional representation for large number of products present in an e-commerce catalogue plays a vital role as they are helpful in tasks like product ranking, product recommendation, finding similar products, modelling user-behaviour etc. Recently, a lot of tasks in the NLP field are getting tackled using the Transformer based models and these deep models are widely applicable in the industries setting to solve various problems. With this motivation, we apply transformer based model for learning contextual representation of products in an e-commerce setting. In this work, we propose a novel approach of pre-training transformer based model on a users generated sessions dataset obtained from a large fashion e-commerce platform to obtain latent product representation. Once pre-trained, we show that the low-dimension representation of the products can be obtained given the product attributes information as a textual sentence. We mainly pre-train BERT, RoBERTa, ALBERT and XLNET variants of transformer model and show a quantitative analysis of the products representation obtained from these models with respect to Next Product Recommendation(NPR) and Content Ranking(CR) tasks. For both the tasks, we collect an evaluation data from the fashion e-commerce platform and observe that XLNET model outperform other variants with a MRR of 0.5 for NPR and NDCG of 0.634 for CR. XLNET model also outperforms the Word2Vec based non-transformer baseline on both the downstream tasks. To the best of our knowledge, this is the first and novel work for pre-training transformer based models using users generated sessions data containing products that are represented with rich attributes information for adoption in e-commerce setting. These models can be further fine-tuned in order to solve various downstream tasks in e-commerce, thereby eliminating the need to train a model from scratch."]], "corpus": "arxiv"}
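Each record in this file follows the same schema: a retrieval event with a timestamp, the embedding model queried, the user prompt, and the retrieved (query, passage) pair plus the source corpus. A minimal sketch of how one might load and inspect the two rows added by this commit (standard-library Python only; field names are taken from the records above):

import json

# Read the retrieval log: one JSON object per line (JSONL).
records = []
with open("data/retrieval_individual-ffd1dcbf-aab7-4cd1-bfa8-5b0d8404144b.jsonl", encoding="utf-8") as f:
    for line in f:
        if line.strip():
            records.append(json.loads(line))

# Inspect the two records appended in this commit.
for rec in records[-2:]:
    query, passage = rec["output"][0]  # each output entry is a [query, passage] pair
    print(rec["model_name"], rec["corpus"], rec["prompt"])
    print(passage[:80], "...")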