Scheduled Commit
data/retrieval_battle-f36c9f97-e795-4522-8eb7-7dee254b42c7.jsonl
CHANGED
@@ -74,3 +74,4 @@
74 |
{"tstamp": 1723685128.0326, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "b30cfb04c6b44968b9df1aa74e3e3b92", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "θͺεδ»»εηζ", "0_output": [["θͺεδ»»εηζ", "Title: Self-Organized Criticality Effect on Stability: Magneto-Thermal Oscillations in a Granular YBCO Superconductor\n\nAbstract: We show that the self-organized criticality of the Bean's state in each of the grains of a granular superconductor results in magneto-thermal oscillations preceding a series of subsequent flux jumps. We find that the frequency of these oscillations is proportional to the external magnetic field sweep rate and is inversely proportional to the square root of the heat capacity. We demonstrate experimentally and theoretically the universality of this dependence that is mainly influenced by the granularity of the superconductor."]], "0_corpus": "arxiv", "1_conv_id": "655f3460f24545069cc21f897dccaa2d", "1_model_name": "voyage-multilingual-2", "1_prompt": "θͺεδ»»εηζ", "1_output": [["θͺεδ»»εηζ", "Title: AutoAgents: A Framework for Automatic Agent Generation\n\nAbstract: Large language models (LLMs) have enabled remarkable advances in automated task-solving with multi-agent systems. However, most existing LLM-based multi-agent approaches rely on predefined agents to handle simple tasks, limiting the adaptability of multi-agent collaboration to different scenarios. Therefore, we introduce AutoAgents, an innovative framework that adaptively generates and coordinates multiple specialized agents to build an AI team according to different tasks. Specifically, AutoAgents couples the relationship between tasks and roles by dynamically generating multiple required agents based on task content and planning solutions for the current task based on the generated expert agents. Multiple specialized agents collaborate with each other to efficiently accomplish tasks. Concurrently, an observer role is incorporated into the framework to reflect on the designated plans and agents' responses and improve upon them. Our experiments on various benchmarks demonstrate that AutoAgents generates more coherent and accurate solutions than the existing multi-agent methods. This underscores the significance of assigning different roles to different tasks and of team cooperation, offering new perspectives for tackling complex tasks. The repository of this project is available at https://github.com/Link-AGI/AutoAgents."]], "1_corpus": "arxiv"}
75 |
{"tstamp": 1723685201.3728, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "99c7605b2b3443e2af5d43d698bcb00e", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "automatic quest generation via LLM", "0_output": [["automatic quest generation via LLM", "Title: AutoManual: Generating Instruction Manuals by LLM Agents via Interactive Environmental Learning\n\nAbstract: Large Language Models (LLM) based agents have shown promise in autonomously completing tasks across various domains, e.g., robotics, games, and web navigation. However, these agents typically require elaborate design and expert prompts to solve tasks in specific domains, which limits their adaptability. We introduce AutoManual, a framework enabling LLM agents to autonomously build their understanding through interaction and adapt to new environments. AutoManual categorizes environmental knowledge into diverse rules and optimizes them in an online fashion by two agents: 1) The Planner codes actionable plans based on current rules for interacting with the environment. 2) The Builder updates the rules through a well-structured rule system that facilitates online rule management and essential detail retention. To mitigate hallucinations in managing rules, we introduce \\textit{case-conditioned prompting} strategy for the Builder. Finally, the Formulator agent compiles these rules into a comprehensive manual. The self-generated manual can not only improve the adaptability but also guide the planning of smaller LLMs while being human-readable. Given only one simple demonstration, AutoManual significantly improves task success rates, achieving 97.4\\% with GPT-4-turbo and 86.2\\% with GPT-3.5-turbo on ALFWorld benchmark tasks. The source code will be available soon."]], "0_corpus": "arxiv", "1_conv_id": "bef5e273ea5b4f33a0cc0d0a09b0933a", "1_model_name": "BM25", "1_prompt": "automatic quest generation via LLM", "1_output": [["automatic quest generation via LLM", "Title: EvoAgent: Towards Automatic Multi-Agent Generation via Evolutionary Algorithms\n\nAbstract: The rise of powerful large language models (LLMs) has spurred a new trend in building LLM-based autonomous agents for solving complex tasks, especially multi-agent systems. Despite the remarkable progress, we notice that existing works are heavily dependent on human-designed frameworks, which greatly limits the functional scope and scalability of agent systems. How to automatically extend the specialized agent to multi-agent systems to improve task-solving capability still remains a significant challenge. In this paper, we introduce EvoAgent, a generic method to automatically extend expert agents to multi-agent systems via the evolutionary algorithm, thereby improving the effectiveness of LLM-based agents in solving tasks. Specifically, we consider the existing agent frameworks as the initial individual and then apply a series of evolutionary operators (e.g., mutation, crossover, selection, etc.) to generate multiple agents with diverse agent settings. EvoAgent can be generalized to any LLM-based agent framework, and can automatically extend the existing agent framework to multi-agent systems without any extra human designs. Experimental results across various tasks have shown that EvoAgent can automatically generate multiple expert agents and significantly enhance the task-solving capabilities of LLM-based agents."]], "1_corpus": "arxiv"}
76 |
{"tstamp": 1723685249.8148, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "57ba620d45ff4bc39ff0987f4d32c3ea", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "automatic game quest generation with LLM", "0_output": [["automatic game quest generation with LLM", "Title: Automatic Bug Detection in LLM-Powered Text-Based Games Using LLMs\n\nAbstract: Advancements in large language models (LLMs) are revolutionizing interactive game design, enabling dynamic plotlines and interactions between players and non-player characters (NPCs). However, LLMs may exhibit flaws such as hallucinations, forgetfulness, or misinterpretations of prompts, causing logical inconsistencies and unexpected deviations from intended designs. Automated techniques for detecting such game bugs are still lacking. To address this, we propose a systematic LLM-based method for automatically identifying such bugs from player game logs, eliminating the need for collecting additional data such as post-play surveys. Applied to a text-based game DejaBoom!, our approach effectively identifies bugs inherent in LLM-powered interactive games, surpassing unstructured LLM-powered bug-catching methods and filling the gap in automated detection of logical and design flaws."]], "0_corpus": "arxiv", "1_conv_id": "82985ff41a6c430db9049ecac2edbe89", "1_model_name": "intfloat/e5-mistral-7b-instruct", "1_prompt": "automatic game quest generation with LLM", "1_output": [["automatic game quest generation with LLM", "Title: Game Generation via Large Language Models\n\nAbstract: Recently, the emergence of large language models (LLMs) has unlocked new opportunities for procedural content generation. However, recent attempts mainly focus on level generation for specific games with defined game rules such as Super Mario Bros. and Zelda. This paper investigates the game generation via LLMs. Based on video game description language, this paper proposes an LLM-based framework to generate game rules and levels simultaneously. Experiments demonstrate how the framework works with prompts considering different combinations of context. Our findings extend the current applications of LLMs and offer new insights for generating new games in the area of procedural content generation."]], "1_corpus": "arxiv"}
77 | +
{"tstamp": 1723687504.7765, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "c7a53620a2b448448040265261c4aa84", "0_model_name": "sentence-transformers/all-MiniLM-L6-v2", "0_prompt": "Abandoned cart retargerting", "0_output": [["Abandoned cart retargerting", "Cart Life\n\nCart Life is Hofmeier's first game, drawing upon his experience as an illustrator. He developed the game in his spare time using Adventure Game Studio while working what he describes as a \"bunch of bad jobs\". He initially planned to finish the development in 30 days but he worked on development for three years. He wanted to make a game which had no high scores, points, or action, and originally envisaged it as a comedy.\nOf the game's pixel art design, Hofmeier said that he did not choose it to be nostalgic but rather because he wanted players to fill in the extra details with their own thoughts and experiences, saying that it took more time and effort than other possible styles. During development Hofmeier spoke to a number of street vendors to research their work who were enthusiastic about the creation of the game.\nDuring development Richard Hofmeier experimented with many elements which were eventually removed, including a fourth character and a number of extra stores and locations. At the 2013 Independent Games Festival, Hofmeier spray-painted his own booth to instead display Porpentine's game Howling Dogs, saying that he thought \"Cart Life had already overstayed its welcome... I wanted people to see this game.\""]], "0_corpus": "wikipedia", "1_conv_id": "893229cb3ec146f08f260faf97d3f551", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "Abandoned cart retargerting", "1_output": [["Abandoned cart retargerting", "Shopping cart\n\nElectronic and magnetic Electronic systems are sometimes used by retailers. Each shopping cart is fitted with an electronic locking wheel clamp, or \"boot\". A transmitter with a thin wire is placed around the perimeter of the parking lot, and the boot locks when the cart leaves the designated area. Store personnel must then deactivate the lock with a handheld remote control to return the cart to stock. Often, a line is painted in front of the broadcast range to warn customers that their cart will stop when rolled past the line. However, these systems are very expensive to install and although helpful, are not foolproof. The wheels can be lifted over the electronic barrier and/or pushed hard enough that the locks break. There are also safety concerns if the person pushing the trolley is running, and also if the trolley doesn't lock and is taken onto a road, locking due to magnetic materials under the road. Some cities have required retailers to install locking wheel systems on their shopping carts. In some cases, electronic systems companies have encouraged passage of such laws to create a captive audience of potential customers.\nPhysical\nA low-tech form of theft prevention utilizes a physical impediment, such as vertical posts at the store entrance to keep carts from being taken into the parking lot. This method also impedes physically disabled customers, which may be illegal in many jurisdictions. For example, in the United States it would be a violation of the Americans with Disabilities Act of 1990."]], "1_corpus": "wikipedia"}
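For orientation, the battle records above all share one schema: a timestamp, a vote type (leftvote / rightvote), and paired 0_*/1_* fields giving each side's model name, prompt, and retrieved passage. Below is a minimal sketch of how such a file could be tallied offline; the local path and the handling of other vote types are assumptions for illustration, not part of this commit.

```python
import json
from collections import Counter

# Tally wins per model from battle records like the ones shown above.
# The path is an assumption; point it at a local copy of the JSONL file.
path = "data/retrieval_battle-f36c9f97-e795-4522-8eb7-7dee254b42c7.jsonl"

wins = Counter()
with open(path, encoding="utf-8") as f:
    for line in f:
        rec = json.loads(line)
        if rec.get("type") == "leftvote":
            wins[rec["0_model_name"]] += 1
        elif rec.get("type") == "rightvote":
            wins[rec["1_model_name"]] += 1
        # Any other vote types, if present, are simply skipped here.

for model, count in wins.most_common():
    print(f"{count:3d}  {model}")
```

The per-side 0_model_name / 1_model_name fields are used because the top-level "models" field is empty in the records above.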
data/retrieval_individual-f36c9f97-e795-4522-8eb7-7dee254b42c7.jsonl
CHANGED
@@ -359,3 +359,7 @@
359 |
{"tstamp": 1723685156.5748, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1723685154.275, "finish": 1723685156.5748, "ip": "", "conv_id": "bef5e273ea5b4f33a0cc0d0a09b0933a", "model_name": "BM25", "prompt": "automatic quest generation via LLM", "output": [["automatic quest generation via LLM", "Title: EvoAgent: Towards Automatic Multi-Agent Generation via Evolutionary Algorithms\n\nAbstract: The rise of powerful large language models (LLMs) has spurred a new trend in building LLM-based autonomous agents for solving complex tasks, especially multi-agent systems. Despite the remarkable progress, we notice that existing works are heavily dependent on human-designed frameworks, which greatly limits the functional scope and scalability of agent systems. How to automatically extend the specialized agent to multi-agent systems to improve task-solving capability still remains a significant challenge. In this paper, we introduce EvoAgent, a generic method to automatically extend expert agents to multi-agent systems via the evolutionary algorithm, thereby improving the effectiveness of LLM-based agents in solving tasks. Specifically, we consider the existing agent frameworks as the initial individual and then apply a series of evolutionary operators (e.g., mutation, crossover, selection, etc.) to generate multiple agents with diverse agent settings. EvoAgent can be generalized to any LLM-based agent framework, and can automatically extend the existing agent framework to multi-agent systems without any extra human designs. Experimental results across various tasks have shown that EvoAgent can automatically generate multiple expert agents and significantly enhance the task-solving capabilities of LLM-based agents."]], "corpus": "arxiv"}
360 |
{"tstamp": 1723685226.6387, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1723685226.5211, "finish": 1723685226.6387, "ip": "", "conv_id": "57ba620d45ff4bc39ff0987f4d32c3ea", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "automatic game quest generation with LLM", "output": [["automatic game quest generation with LLM", "Title: Automatic Bug Detection in LLM-Powered Text-Based Games Using LLMs\n\nAbstract: Advancements in large language models (LLMs) are revolutionizing interactive game design, enabling dynamic plotlines and interactions between players and non-player characters (NPCs). However, LLMs may exhibit flaws such as hallucinations, forgetfulness, or misinterpretations of prompts, causing logical inconsistencies and unexpected deviations from intended designs. Automated techniques for detecting such game bugs are still lacking. To address this, we propose a systematic LLM-based method for automatically identifying such bugs from player game logs, eliminating the need for collecting additional data such as post-play surveys. Applied to a text-based game DejaBoom!, our approach effectively identifies bugs inherent in LLM-powered interactive games, surpassing unstructured LLM-powered bug-catching methods and filling the gap in automated detection of logical and design flaws."]], "corpus": "arxiv"}
361 |
{"tstamp": 1723685226.6387, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1723685226.5211, "finish": 1723685226.6387, "ip": "", "conv_id": "82985ff41a6c430db9049ecac2edbe89", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "automatic game quest generation with LLM", "output": [["automatic game quest generation with LLM", "Title: Game Generation via Large Language Models\n\nAbstract: Recently, the emergence of large language models (LLMs) has unlocked new opportunities for procedural content generation. However, recent attempts mainly focus on level generation for specific games with defined game rules such as Super Mario Bros. and Zelda. This paper investigates the game generation via LLMs. Based on video game description language, this paper proposes an LLM-based framework to generate game rules and levels simultaneously. Experiments demonstrate how the framework works with prompts considering different combinations of context. Our findings extend the current applications of LLMs and offer new insights for generating new games in the area of procedural content generation."]], "corpus": "arxiv"}
362 | +
{"tstamp": 1723687426.6471, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1723687424.4888, "finish": 1723687426.6471, "ip": "", "conv_id": "c7a53620a2b448448040265261c4aa84", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "Abandoned cart retargerting", "output": [["Abandoned cart retargerting", "Cart Life\n\nCart Life is Hofmeier's first game, drawing upon his experience as an illustrator. He developed the game in his spare time using Adventure Game Studio while working what he describes as a \"bunch of bad jobs\". He initially planned to finish the development in 30 days but he worked on development for three years. He wanted to make a game which had no high scores, points, or action, and originally envisaged it as a comedy.\nOf the game's pixel art design, Hofmeier said that he did not choose it to be nostalgic but rather because he wanted players to fill in the extra details with their own thoughts and experiences, saying that it took more time and effort than other possible styles. During development Hofmeier spoke to a number of street vendors to research their work who were enthusiastic about the creation of the game.\nDuring development Richard Hofmeier experimented with many elements which were eventually removed, including a fourth character and a number of extra stores and locations. At the 2013 Independent Games Festival, Hofmeier spray-painted his own booth to instead display Porpentine's game Howling Dogs, saying that he thought \"Cart Life had already overstayed its welcome... I wanted people to see this game.\""]], "corpus": "wikipedia"}
363 | +
{"tstamp": 1723687426.6471, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1723687424.4888, "finish": 1723687426.6471, "ip": "", "conv_id": "893229cb3ec146f08f260faf97d3f551", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "Abandoned cart retargerting", "output": [["Abandoned cart retargerting", "Shopping cart\n\nElectronic and magnetic Electronic systems are sometimes used by retailers. Each shopping cart is fitted with an electronic locking wheel clamp, or \"boot\". A transmitter with a thin wire is placed around the perimeter of the parking lot, and the boot locks when the cart leaves the designated area. Store personnel must then deactivate the lock with a handheld remote control to return the cart to stock. Often, a line is painted in front of the broadcast range to warn customers that their cart will stop when rolled past the line. However, these systems are very expensive to install and although helpful, are not foolproof. The wheels can be lifted over the electronic barrier and/or pushed hard enough that the locks break. There are also safety concerns if the person pushing the trolley is running, and also if the trolley doesn't lock and is taken onto a road, locking due to magnetic materials under the road. Some cities have required retailers to install locking wheel systems on their shopping carts. In some cases, electronic systems companies have encouraged passage of such laws to create a captive audience of potential customers.\nPhysical\nA low-tech form of theft prevention utilizes a physical impediment, such as vertical posts at the store entrance to keep carts from being taken into the parking lot. This method also impedes physically disabled customers, which may be illegal in many jurisdictions. For example, in the United States it would be a violation of the Americans with Disabilities Act of 1990."]], "corpus": "wikipedia"}
364 | +
{"tstamp": 1723687547.1365, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1723687546.9485, "finish": 1723687547.1365, "ip": "", "conv_id": "7c49bfe3653b4fa4944dcb8026b8078e", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "Automatic customer follow up", "output": [["Automatic customer follow up", "Title: Improving Customer Experience in Call Centers with Intelligent Customer-Agent Pairing\n\nAbstract: Customer experience plays a critical role for a profitable organisation or company. A satisfied customer for a company corresponds to higher rates of customer retention, and better representation in the market. One way to improve customer experience is to optimize the functionality of its call center. In this work, we have collaborated with the largest provider of telecommunications and Internet access in the country, and we formulate the customer-agent pairing problem as a machine learning problem. The proposed learning-based method causes a significant improvement in performance of about $215\\%$ compared to a rule-based method."]], "corpus": "arxiv"}
365 | +
{"tstamp": 1723687547.1365, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1723687546.9485, "finish": 1723687547.1365, "ip": "", "conv_id": "645dfee15ed047849cca81f10a31f707", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "Automatic customer follow up", "output": [["Automatic customer follow up", "Title: Intent Matching based Customer Services Chatbot with Natural Language Understanding\n\nAbstract: Customer service is the lifeblood of any business. Excellent customer service not only generates return business but also creates new customers. Looking at the demanding market to provide a 24/7 service to customers, many organisations are increasingly engaged in popular social media and text messaging platforms such as WhatsApp and Facebook Messenger in providing a 24/7 service to customers in the current demanding market. In this paper, we present an intent matching based customer services chatbot (IMCSC), which is capable of replacing the customer service work of sales personnel, whilst interacting in a more natural and human-like manner through the employment of Natural Language Understanding (NLU). The bot is able to answer the most common frequently asked questions and we have also integrated features for the processing and exporting of customer orders to a Google Sheet."]], "corpus": "arxiv"}
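The individual records appended above log each model's single response separately, with start/finish timestamps alongside the prompt and the retrieved passage. A small sketch of reading those fields, again assuming only a local copy of the file:

```python
import json

# Print model, latency, prompt, and the first retrieved document per record.
# The path is an assumption; point it at a local copy of the JSONL file.
path = "data/retrieval_individual-f36c9f97-e795-4522-8eb7-7dee254b42c7.jsonl"

with open(path, encoding="utf-8") as f:
    for line in f:
        rec = json.loads(line)
        latency = rec["finish"] - rec["start"]   # wall-clock seconds
        query, doc = rec["output"][0]            # first (query, passage) pair
        title = doc.splitlines()[0]              # e.g. "Title: ..." or the article name
        print(f"{rec['model_name']:45s} {latency:6.2f}s  {rec['prompt']!r} -> {title}")
```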