Scheduled Commit
data/retrieval_individual-25bfac43-43c3-4e03-a7e4-e33df32ce74f.jsonl
CHANGED
@@ -7,3 +7,4 @@
{"tstamp": 1722570512.7902, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1722570512.5754, "finish": 1722570512.7902, "ip": "", "conv_id": "fa9bcaad1a8342d199e634f1689f6762", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "A paper introducing GLUCOSE, a dataset for modeling implicit commonsense causal knowledge in narratives.", "output": [["A paper introducing GLUCOSE, a dataset for modeling implicit commonsense causal knowledge in narratives.", "Title: GLUCOSE: GeneraLized and COntextualized Story Explanations\n\nAbstract: When humans read or listen, they make implicit commonsense inferences that frame their understanding of what happened and why. As a step toward AI systems that can build similar mental models, we introduce GLUCOSE, a large-scale dataset of implicit commonsense causal knowledge, encoded as causal mini-theories about the world, each grounded in a narrative context. To construct GLUCOSE, we drew on cognitive psychology to identify ten dimensions of causal explanation, focusing on events, states, motivations, and emotions. Each GLUCOSE entry includes a story-specific causal statement paired with an inference rule generalized from the statement. This paper details two concrete contributions. First, we present our platform for effectively crowdsourcing GLUCOSE data at scale, which uses semi-structured templates to elicit causal explanations. Using this platform, we collected a total of ~670K specific statements and general rules that capture implicit commonsense knowledge about everyday situations. Second, we show that existing knowledge resources and pretrained language models do not include or readily predict GLUCOSE's rich inferential content. However, when state-of-the-art neural models are trained on this knowledge, they can start to make commonsense inferences on unseen stories that match humans' mental models."]], "corpus": "arxiv"}
{"tstamp": 1722570531.5163, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722570531.3633, "finish": 1722570531.5163, "ip": "", "conv_id": "ef959b98a3944f128ee5903cf8240661", "model_name": "GritLM/GritLM-7B", "prompt": "A paper that presents a new strategy for designing prompts to improve the performance of LLMs.", "output": [["A paper that presents a new strategy for designing prompts to improve the performance of LLMs.", "Title: Unleashing the Potential of Large Language Models as Prompt Optimizers: An Analogical Analysis with Gradient-based Model Optimizers\n\nAbstract: Automatic prompt optimization is an important approach to improving the performance of large language models (LLMs). Recent research demonstrates the potential of using LLMs as prompt optimizers, which can generate improved task prompts via iterative refinement. In this paper, we propose a novel perspective to investigate the design of LLM-based prompt optimizers, by drawing an analogy with gradient-based model optimizers. To connect these two approaches, we identify two pivotal factors in model parameter learning: update direction and update method. Focused on the two aspects, we borrow the theoretical framework and learning methods from gradient-based optimization to design improved strategies for LLM-based prompt optimizers. By systematically analyzing a rich set of improvement strategies, we further develop a capable Gradient-inspired LLM-based Prompt Optimizer called GPO. At each step, it first retrieves relevant prompts from the optimization trajectory as the update direction. Then, it utilizes the generation-based refinement strategy to perform the update, while controlling the edit distance through a cosine-based decay strategy. Extensive experiments demonstrate the effectiveness and efficiency of GPO. In particular, GPO brings an additional improvement of up to 56.8% on Big-Bench Hard and 55.3% on MMLU compared to baseline methods."]], "corpus": "arxiv"}
{"tstamp": 1722570531.5163, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1722570531.3633, "finish": 1722570531.5163, "ip": "", "conv_id": "b2246019be7c456b939d67fbd7234a37", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "A paper that presents a new strategy for designing prompts to improve the performance of LLMs.", "output": [["A paper that presents a new strategy for designing prompts to improve the performance of LLMs.", "Title: Efficient Prompting Methods for Large Language Models: A Survey\n\nAbstract: Prompting has become a mainstream paradigm for adapting large language models (LLMs) to specific natural language processing tasks. While this approach opens the door to in-context learning of LLMs, it brings the additional computational burden of model inference and human effort of manual-designed prompts, particularly when using lengthy and complex prompts to guide and control the behavior of LLMs. As a result, the LLM field has seen a remarkable surge in efficient prompting methods. In this paper, we present a comprehensive overview of these methods. At a high level, efficient prompting methods can broadly be categorized into two approaches: prompting with efficient computation and prompting with efficient design. The former involves various ways of compressing prompts, and the latter employs techniques for automatic prompt optimization. We present the basic concepts of prompting, review the advances for efficient prompting, and highlight future research directions."]], "corpus": "arxiv"}
+{"tstamp": 1722574411.1033, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1722574410.834, "finish": 1722574411.1033, "ip": "", "conv_id": "bae95da9dfc5444b8484f0525c72a5f8", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "Evaluating fairness of ChatGPT in providing recommendations in music and movies.", "output": [["Evaluating fairness of ChatGPT in providing recommendations in music and movies.", "Title: Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation\n\nAbstract: The remarkable achievements of Large Language Models (LLMs) have led to the emergence of a novel recommendation paradigm -- Recommendation via LLM (RecLLM). Nevertheless, it is important to note that LLMs may contain social prejudices, and therefore, the fairness of recommendations made by RecLLM requires further investigation. To avoid the potential risks of RecLLM, it is imperative to evaluate the fairness of RecLLM with respect to various sensitive attributes on the user side. Due to the differences between the RecLLM paradigm and the traditional recommendation paradigm, it is problematic to directly use the fairness benchmark of traditional recommendation. To address the dilemma, we propose a novel benchmark called Fairness of Recommendation via LLM (FaiRLLM). This benchmark comprises carefully crafted metrics and a dataset that accounts for eight sensitive attributes1 in two recommendation scenarios: music and movies. By utilizing our FaiRLLM benchmark, we conducted an evaluation of ChatGPT and discovered that it still exhibits unfairness to some sensitive attributes when generating recommendations. Our code and dataset can be found at https://github.com/jizhi-zhang/FaiRLLM."]], "corpus": "arxiv"}