---
library_name: ggml
language:
- ru
- en
pipeline_tag: text-generation
license: apache-2.0
license_name: apache-2.0
license_link: https://huggingface.co/MTSAIR/Kodify-Nano-GGUF/blob/main/Apache%20License%20MTS%20AI.docx
---

# Kodify-Nano-GGUF 🤖

Kodify-Nano-GGUF is the GGUF version of [MTSAIR/Kodify-Nano](https://huggingface.co/MTSAIR/Kodify-Nano), optimized for CPU/GPU inference with Ollama and llama.cpp. It is a lightweight LLM for code development tasks with minimal resource requirements.


## Using the Model
You can run Kodify Nano on Ollama in two ways:

1. **Using Docker**  
2. **Locally** (provides faster responses than Docker)

### Method 1: Running Kodify Nano on Ollama in Docker

#### Without NVIDIA GPU:

```bash
docker run -e OLLAMA_HOST=0.0.0.0:8985 -p 8985:8985 --name ollama -d ollama/ollama
```

#### With NVIDIA GPU:

```bash
docker run --runtime nvidia -e OLLAMA_HOST=0.0.0.0:8985 -p 8985:8985 --name ollama -d ollama/ollama
```

> **Important:**  
> - Ensure Docker is installed and running.  
> - If port 8985 is occupied, replace it with any available port and update the plugin configuration accordingly.

#### Load the model:

```bash
docker exec ollama ollama pull hf.co/MTSAIR/Kodify-Nano-GGUF
```

#### Rename the model:
```bash
docker exec ollama ollama cp hf.co/MTSAIR/Kodify-Nano-GGUF kodify_nano
```

#### Start the model:

```bash
docker exec ollama ollama run kodify_nano
```
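
To check that the container is serving the model, you can query the Ollama HTTP API directly (a quick sanity check, assuming the default port mapping of 8985 shown above):

```bash
curl http://localhost:8985/api/generate -d '{
  "model": "kodify_nano",
  "prompt": "Write a hello world script in Python",
  "stream": false
}'
```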
---

### Method 2: Running Kodify Nano on Ollama Locally

1. **Download Ollama:**  
https://ollama.com/download

2. **Set the port:**

```bash
export OLLAMA_HOST=0.0.0.0:8985
```

> **Note:** If port 8985 is occupied, replace it with any available port and update the plugin configuration.

3. **Start OLLAMA server:**  

```bash
ollama serve &
```

4. **Download the model:**  

```bash
ollama pull hf.co/MTSAIR/Kodify-Nano-GGUF
```

5. **Rename the model:**  

```bash
ollama cp hf.co/MTSAIR/Kodify-Nano-GGUF kodify_nano
```

6. **Run the model:**  

```bash
ollama run kodify_nano
```
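
Optionally, confirm that the renamed model is registered before configuring the plugin:

```bash
ollama list   # the output should include kodify_nano
```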

## Plugin Installation

### For Visual Studio Code

1. Download the [latest Kodify plugin](https://mts.ai/ru/product/kodify/?utm_source=huggingface&utm_medium=pr&utm_campaign=post#models) for VS Code.
2. Open the **Extensions** panel on the left sidebar.
3. Click **Install from VSIX...** and select the downloaded plugin file.

### For JetBrains IDEs

1. Download the [latest Kodify plugin](https://mts.ai/ru/product/kodify/?utm_source=huggingface&utm_medium=pr&utm_campaign=post#models) for JetBrains.
2. Open the IDE and go to **Settings > Plugins**.
3. Click the gear icon (⚙️) and select **Install Plugin from Disk...**.
4. Choose the downloaded plugin file.
5. Restart the IDE when prompted.

---

### Changing the Port in Plugin Settings (for Visual Studio Code and JetBrains)

If you changed the default port `8985` (whether in Docker or locally), update the plugin's `config.json`:

1. Open any file in the IDE.
2. Open the Kodify sidebar:
   - **VS Code**: `Ctrl+L` (`Cmd+L` on Mac).
   - **JetBrains**: `Ctrl+J` (`Cmd+J` on Mac).
3. Access the `config.json` file:
   - **Method 1**: Click **Open Settings** (VS Code) or **Kodify Config** (JetBrains), then navigate to **Configuration > Chat Settings > Open Config File**.
   - **Method 2**: Click the gear icon (⚙️) in the Kodify sidebar.
4. Modify the `apiBase` port under `tabAutocompleteModel` and `models` (see the example after this list).
5. Save the file (`Ctrl+S` or **File > Save**).
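
For reference, the relevant part of `config.json` looks roughly like the sketch below. The structure is illustrative: apart from `models`, `tabAutocompleteModel`, and `apiBase` (which the steps above refer to), field names such as `title` and `provider` are assumptions and may differ in your installation.

```json
{
  "models": [
    {
      "title": "Kodify Nano",
      "provider": "ollama",
      "model": "kodify_nano",
      "apiBase": "http://localhost:8985"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Kodify Nano",
    "provider": "ollama",
    "model": "kodify_nano",
    "apiBase": "http://localhost:8985"
  }
}
```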

---


## Available Quantization Variants
- Kodify_Nano_q4_k_s.gguf (balanced)
- Kodify_Nano_q8_0.gguf (high quality)
- Kodify_Nano.gguf (best quality, unquantized)

Download using huggingface_hub:

```bash
pip install huggingface-hub
python -c "from huggingface_hub import hf_hub_download; hf_hub_download(repo_id='MTSAIR/Kodify-Nano-GGUF', filename='Kodify_Nano_q4_k_s.gguf', local_dir='./models')"
```
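
The GGUF files can also be used directly with llama.cpp instead of Ollama (a minimal sketch, assuming a recent llama.cpp build where the interactive CLI binary is named `llama-cli`):

```bash
# Generate a completion from the downloaded quantized model
./llama-cli -m ./models/Kodify_Nano_q4_k_s.gguf \
    -p "Write a Python function to calculate factorial" \
    -n 256
```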


## Python Integration

Install Ollama Python library:

```bash
pip install ollama
```

Example code:

```python
import ollama

# If the Ollama server listens on a non-default port (e.g. 8985), create an
# explicit client instead: client = ollama.Client(host="http://localhost:8985")
response = ollama.generate(
    model="kodify_nano",      # the name given to the model by `ollama cp`
    prompt="Write a Python function to calculate factorial",
    options={
        "temperature": 0.4,   # lower values give more deterministic code
        "top_p": 0.8,
        "num_ctx": 8192       # context window size in tokens
    }
)

print(response['response'])
```
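
The library also exposes a chat-style API for multi-turn conversations (a short sketch using the same model name and `options` format as above):

```python
import ollama

response = ollama.chat(
    model="kodify_nano",
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string"}
    ],
    options={"temperature": 0.4}
)

print(response["message"]["content"])
```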

## Usage Examples

### Code Generation

```python
response = ollama.generate(
    model="kodify_nano",
    prompt="""<s>[INST] 
Write a Python function that:
1. Accepts a list of numbers
2. Returns the median value
[/INST]""",
    options={"num_predict": 512}   # maximum number of tokens to generate
)
print(response['response'])
```

### Code Refactoring

```python
response = ollama.generate(
    model="kodify_nano",
    prompt="""<s>[INST] 
Refactor this Python code:

def calc(a,b):
    s = a + b
    d = a - b
    p = a * b
    return s, d, p
[/INST]""",
    options={"temperature": 0.3}
)
print(response['response'])
```
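
### Streaming Responses

For long completions you can stream tokens as they are generated instead of waiting for the full response (a brief sketch; passing `stream=True` makes `ollama.generate` yield partial responses):

```python
import ollama

# Print the answer token-by-token as it arrives
for chunk in ollama.generate(
    model="kodify_nano",
    prompt="Write a Python class implementing a simple LRU cache",
    options={"temperature": 0.4},
    stream=True,
):
    print(chunk["response"], end="", flush=True)
print()
```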