nehcgs committed (verified)
Commit 51c4df0 · Parent: 054d8ad

Upload 2 files

Files changed (2)
  1. LICENSE +77 -0
  2. README.md +36 -45
LICENSE ADDED
@@ -0,0 +1,77 @@
+ # Katanemo Labs, Inc. COMMUNITY LICENSE AGREEMENT
+ **Version Release Date:** September 30th, 2024
+
+ This Katanemo Labs, Inc. COMMUNITY LICENSE AGREEMENT is based on the Llama 3.2 Community License, Copyright © Meta Platforms, Inc. The terms and conditions have been adapted to reflect the proprietary nature of Katanemo Labs' materials.
+
+ ---
+
+ 1. Definitions
+ a. "Agreement": The terms and conditions for use, reproduction, distribution, and modification of the Katanemo Materials set forth herein.
+ b. "Documentation": The specifications, manuals, and documentation accompanying Katanemo LLMs v1.
+ c. "Licensee" or "you": The individual or entity entering into this Agreement, including your employer if you are acting on their behalf.
+ d. "Katanemo": The foundational large language models and software provided by Katanemo Labs, Inc., available at https://huggingface.co/katanemolabs.
+ e. "Katanemo Materials": Collectively, Katanemo's proprietary models and Documentation. Some Materials are derived from the Qwen language models licensed under the Qwen RESEARCH LICENSE AGREEMENT.
+ f. "Katanemo Labs" or "we": Katanemo Labs Inc., a Delaware, USA Corporation.
+
+ ---
+
+ 2. Acceptance
+ By clicking "I Accept" or using any part of the Katanemo Materials, you agree to be bound by this Agreement.
+
+ ---
+
+ 3. License Rights and Redistribution
+ a. Grant of Rights
+ You are granted a non-exclusive, worldwide, non-transferable, and royalty-free license to:
+ - Use, reproduce, distribute, and modify the Katanemo Materials.
+ - Create derivative works based on the Katanemo Materials.
+
+ 4. Redistribution and Use
+ a. Distribution:
+ If you distribute the Katanemo Materials or a derivative work:
+ - Include a copy of this Agreement.
+ - Prominently display "Built with Katanemo" on a related website or documentation.
+
+ b. Attribution:
+ Include the following attribution notice:
+ "Katanemo is licensed under the Katanemo Labs Community License, Copyright © Katanemo Labs, Inc. All Rights Reserved."
+
+ c. Compliance:
+ Your use must adhere to the Acceptable Use Policy, available at https://katanemolabs.com/katanemo/use-policy.
+
+ ---
+
+ 5. Additional Commercial Terms
+ If you use the Katanemo Materials commercially, you must request a license from us.
+
+ ---
+
+ 6. Disclaimer of Warranty
+ The Katanemo Materials are provided "AS IS" without warranties of any kind, either express or implied, including but not limited to warranties of title, non-infringement, or fitness for a particular purpose.
+
+ ---
+
+ 7. Limitation of Liability
+ Katanemo Labs is not liable for any indirect, special, or consequential damages arising out of the use of the Katanemo Materials, even if advised of the possibility of such damages.
+
+ ---
+
+ 8. Intellectual Property
+ a. Trademarks
+ No trademark licenses are granted, except as required for attribution as described in Section 1.b. You may use the "Katanemo" mark according to Katanemo Labs' brand guidelines.
+
+ b. Ownership
+ You own any derivative works or modifications you create, except for portions owned by Katanemo Labs.
+
+ c. Litigation
+ If you file a lawsuit against Katanemo Labs regarding intellectual property, your license under this Agreement terminates.
+
+ ---
+
+ 9. Term and Termination
+ This Agreement continues until terminated. Katanemo Labs may terminate the Agreement if you breach any terms. Upon termination, you must cease using the Katanemo Materials.
+
+ ---
+
+ 10. Governing Law and Jurisdiction
+ This Agreement is governed by the laws of the State of Washington, USA. Any disputes will be resolved in the courts of California.
README.md CHANGED
@@ -19,7 +19,7 @@ The Arch-Function-Chat collection builds upon the Katanemo's [Arch-Function](htt
  In addition to function calling capabilities, this collection now offers:

  - **Clarify & refine**: Generates natural follow-up questions to collect missing information for function calling
- - **Result inerpretation**: Provides human-friendly responses based on function execution results
+ - **Interpret & respond**: Provides human-friendly responses based on function execution results
  - **Context management**: Maintains context in complex multi-turn interactions

  # Requirements
@@ -60,69 +60,60 @@ FORMAT_PROMPT = (
  )

  # Define available tools
- get_weather_api = {
-     "type": "function",
-     "function": {
-         "name": "get_weather",
-         "description": "Get the current weather for a location",
-         "parameters": {
-             "type": "object",
-             "properties": {
-                 "location": {
-                     "type": "str",
-                     "description": "The city and state, e.g. San Francisco, New York",
-                 },
-                 "unit": {
-                     "type": "str",
-                     "enum": ["celsius", "fahrenheit"],
-                     "description": "The unit of temperature to return",
-                 },
-             },
-             "required": ["location"],
-         },
-     },
- }
-
- openai_format_tools = [get_weather_api]
-
-
- def convert_tools(tools: List[Dict[str, Any]]):
-     converted = [json.dumps(tool["function"], ensure_ascii=False) for tool in tools]
-     return "\n".join(converted)
+ tools = [
+     {
+         "type": "function",
+         "function": {
+             "name": "get_weather",
+             "description": "Get the current weather for a location",
+             "parameters": {
+                 "type": "object",
+                 "properties": {
+                     "location": {
+                         "type": "str",
+                         "description": "The city and state, e.g. San Francisco, New York",
+                     },
+                     "unit": {
+                         "type": "str",
+                         "enum": ["celsius", "fahrenheit"],
+                         "description": "The unit of temperature to return",
+                     },
+                 },
+                 "required": ["location"],
+             },
+         },
+     }
+ ]

  # Helper function to create the system prompt for our model
  def format_prompt(tools: List[Dict[str, Any]]):
-     tools = convert_tools(tools)
-     return TASK_PROMPT.format(tools=tools) + FORMAT_PROMPT
+     tools = "\n".join(
+         [json.dumps(tool["function"], ensure_ascii=False) for tool in tools]
+     )
+     return TASK_PROMPT.format(tools=tools) + FORMAT_PROMPT

- system_prompt = format_prompt(openai_format_tools)
+ system_prompt = format_prompt(tools)

  messages = [
      {"role": "system", "content": system_prompt},
      {"role": "user", "content": "What is the weather in Seattle?"},
  ]

- inputs = tokenizer.apply_chat_template(
+ model_inputs = tokenizer.apply_chat_template(
      messages, add_generation_prompt=True, return_tensors="pt"
  ).to(model.device)

- outputs = model.generate(
-     inputs,
-     max_new_tokens=512,
-     do_sample=False,
-     num_return_sequences=1,
-     eos_token_id=tokenizer.eos_token_id,
- )
+ generated_ids = model.generate(**model_inputs, max_new_tokens=32768)

- response = tokenizer.decode(outputs[0][len(inputs[0]) :], skip_special_tokens=True)
- print(response)
- ````
-
- Then you should be able to see the following output string in JSON format:
- ````python
- ```json
- {"tool_calls": [{"name": "get_weather", "arguments": {"location": "Seattle"}}]}
- ```
+ generated_ids = [
+     output_ids[len(input_ids):]
+     for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+ ]
+
+ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ print(response)
  ````

  # License
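
The README change above inlines the tool-flattening step into `format_prompt` and trims the prompt tokens from each generated sequence before decoding. A minimal standalone sketch of both steps, with no model involved: `TASK_PROMPT` and `FORMAT_PROMPT` here are placeholder strings (their real definitions sit above the diff hunk and are not shown), and the token IDs in the trimming example are made-up plain lists standing in for tensors.

```python
import json
from typing import Any, Dict, List

# Placeholder templates; the actual TASK_PROMPT/FORMAT_PROMPT are defined
# earlier in the README and are not part of this diff hunk.
TASK_PROMPT = "You have access to the following tools:\n{tools}\n"
FORMAT_PROMPT = "Respond with a JSON tool call."


def format_prompt(tools: List[Dict[str, Any]]) -> str:
    # Flatten each tool's "function" schema to one JSON line, as in the new README
    tool_text = "\n".join(
        json.dumps(tool["function"], ensure_ascii=False) for tool in tools
    )
    return TASK_PROMPT.format(tools=tool_text) + FORMAT_PROMPT


tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "str", "description": "The city and state"},
                },
                "required": ["location"],
            },
        },
    }
]

system_prompt = format_prompt(tools)
print(system_prompt)

# The new README also slices off the prompt tokens from each generated
# sequence before decoding; the same logic with plain lists:
input_ids = [[1, 2, 3]]            # hypothetical prompt token IDs
generated_ids = [[1, 2, 3, 10, 11]]  # prompt IDs echoed back plus new tokens
trimmed = [out[len(inp):] for inp, out in zip(input_ids, generated_ids)]
print(trimmed)  # [[10, 11]]
```

The slicing works because `model.generate` returns each prompt's tokens followed by the newly generated ones, so dropping the first `len(input_ids)` entries leaves only the model's reply.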