---
license: other
license_name: katanemo-research
license_link: >-
  https://huggingface.co/katanemo/Arch-Function-Chat-1.5B/blob/main/LICENSE
base_model:
- Qwen/Qwen2.5-Coder-1.5B-Instruct
language:
- en
pipeline_tag: text-generation
library_name: transformers
---

# katanemo/Arch-Function-Chat-1.5B

## Overview
The Arch-Function-Chat collection builds upon Katanemo's [Arch-Function](https://huggingface.co/collections/katanemo/arch-function-66f209a693ea8df14317ad68) collection by extending its capabilities beyond function calling. This new collection maintains the state-of-the-art (SOTA) function-calling performance of the original while adding features that make it even more versatile in real-world applications.

In addition to function calling capabilities, this collection now offers:

- **Clarify & refine**: Generates natural follow-up questions to collect missing information for function calling (see the illustrative output after this list)
- **Interpret & respond**: Provides human-friendly responses based on function execution results
- **Context management**: Maintains context in complex multi-turn interactions
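
For example (a hypothetical illustration; exact model wording will vary), a query like "What is the weather?" omits the required `location` parameter of the `get_weather` tool defined in the quickstart below, so the model is expected to answer in the clarification format:

```json
{
  "required_functions": ["get_weather"],
  "clarification": "Which city would you like the weather for?"
}
```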

*Note*: Arch-Function-Chat is now the primary LLM used in the open-source [Arch Gateway](https://github.com/katanemo/archgw), an AI-native proxy for agents. For more details about the project, check out the GitHub [README](https://github.com/katanemo/archgw/blob/main/README.md).

## Requirements
The code of Arch-Function-Chat-1.5B has been integrated into the Hugging Face `transformers` library, and we advise you to install the latest version:
```bash
pip install "transformers>=4.37.0"
```
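
To quickly confirm the installed version meets this requirement (a simple check, not part of the model card):

```python
import transformers

# Should print a version >= 4.37.0
print(transformers.__version__)
```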


## How to use
We use the following example to illustrate how to perform function calling with our model. Please note that our model works best with the provided prompt format, which allows us to extract JSON output similar to [OpenAI's function calling](https://platform.openai.com/docs/guides/function-calling).


### Quickstart
````python
import json
from typing import Any, Dict, List
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "katanemo/Arch-Function-Chat-1.5B"
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Please use our provided prompt for best performance
TASK_PROMPT = (
    "You are a helpful assistant designed to assist with the user query by making one or more function calls if needed."
    "\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{tools}\n</tools>"
    "\n\nYour task is to decide which functions are needed and collect missing parameters if necessary."
)

FORMAT_PROMPT = (
    "\n\nBased on your analysis, provide your response in one of the following JSON formats:"
    '\n1. If no functions are needed:\n```json\n{"response": "Your response text here"}\n```'
    '\n2. If functions are needed but some required parameters are missing:\n```json\n{"required_functions": ["func_name1", "func_name2", ...], "clarification": "Text asking for missing parameters"}\n```'
    '\n3. If functions are needed and all required parameters are available:\n```json\n{"tool_calls": [{"name": "func_name1", "arguments": {"argument1": "value1", "argument2": "value2"}},... (more tool calls as required)]}\n```'
)

# Define available tools
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "str",
                        "description": "The city and state, e.g. San Francisco, New York",
                    },
                    "unit": {
                        "type": "str",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The unit of temperature to return",
                    },
                },
                "required": ["location"],
            },
        },
    }
]


# Helper function to create the system prompt for our model
def format_prompt(tools: List[Dict[str, Any]]):
    tools = "\n".join(
        [json.dumps(tool["function"], ensure_ascii=False) for tool in tools]
    )
    return TASK_PROMPT.format(tools=tools) + FORMAT_PROMPT


system_prompt = format_prompt(tools)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "What is the weather in Seattle?"},
]

# Render the chat template to text first, then tokenize, so that
# `model_inputs` is a dict with both `input_ids` and `attention_mask`
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=32768)

# Strip the prompt tokens so only the newly generated ids remain
generated_ids = [
    output_ids[len(input_ids) :]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
````
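
The model returns its decision as a fenced JSON block in one of the three formats defined by `FORMAT_PROMPT`. Below is a minimal parsing sketch, continuing from the quickstart above (it reuses the `json` and typing imports); the `extract_json` helper and the dispatch logic are illustrative assumptions, not an official API:

````python
import re


def extract_json(text: str) -> Dict[str, Any]:
    # Hypothetical helper: pull the first ```json fenced block out of the
    # model output, falling back to parsing the raw text
    match = re.search(r"```json\s*(.*?)```", text, re.DOTALL)
    payload = match.group(1) if match else text
    return json.loads(payload)


result = extract_json(response)

if "tool_calls" in result:
    # Format 3: all required parameters were present; execute each call
    for call in result["tool_calls"]:
        print(f"Would call {call['name']} with {call['arguments']}")
elif "clarification" in result:
    # Format 2: required parameters are missing; relay the follow-up question
    print(result["clarification"])
else:
    # Format 1: no functions needed; the model answered directly
    print(result.get("response", response))
````

To then produce a human-friendly answer from execution results (the "interpret & respond" capability), one plausible pattern is to append the assistant's tool-call message and the function results to `messages` and generate again; the model card does not prescribe an exact message format for tool results, so treat any specific convention as an assumption.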

## License
The Katanemo Arch-Function collection is distributed under the [Katanemo license](https://huggingface.co/katanemo/Arch-Function-Chat-1.5B/blob/main/LICENSE).