Disclaimer
Your email will only be used for an anonymous survey. It will NOT be shared with anyone.
Introduction
This model is the GGUF version of OneSQL-v0.2-Qwen-7B.
Performances
Below are the self-evaluation results for each quantization, alongside the corresponding scores of OneSQL-v0.1-Qwen-7B-GGUF for comparison.
| Quantization | EX score | v0.1 EX score |
|---|---|---|
| Q2_K | 39.60 | 29.79 |
| Q3_K_S | 38.79 | 36.31 |
| Q3_K_M | 41.93 | 39.24 |
| Q3_K_L | 45.49 | 40.14 |
| Q4_1 | 39.06 | |
| Q4_K_S | 42.69 | |
| Q4_K_M | 44.19 | 43.95 |
| Q5_0 | 43.63 | 43.84 |
| Q5_1 | 41.00 | |
| Q5_K_S | 42.20 | |
| Q5_K_M | 42.07 | |
| Q6_K | 41.68 | |
| Q8_0 | 41.09 | |
Quick start
To use this model, start your prompt with your database schema in the form of CREATE TABLE statements, followed by your natural-language query as a comment preceded by --. Make sure your prompt ends with SELECT so that the model finishes the query for you. There is no need to set other parameters such as temperature or a max token limit.
```bash
PROMPT="CREATE TABLE students (
  id INTEGER PRIMARY KEY,
  name TEXT,
  age INTEGER,
  grade TEXT
);
-- Find the three youngest students
SELECT "
PROMPT=$(printf "<|im_start|>system\nYou are a SQL expert. Return code only.<|im_end|>\n<|im_start|>user\n%s<|im_end|>\n<|im_start|>assistant\n" "$PROMPT")
llama.cpp/build/bin/llama-run file://OneSQL-v0.2-Qwen-7B-Q4_K_M.gguf "$PROMPT"
```
The model response is the rest of the SQL query, with the leading SELECT omitted:

```
* FROM students ORDER BY age ASC LIMIT 3
```
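Since the prompt ends with SELECT, you need to prepend it back to the model output before executing the query. A minimal sketch of that step in Python with sqlite3, using the example schema and response above (the response is hard-coded here; in practice it would come from the llama.cpp call):

```python
import sqlite3

# Model output from the example above (hard-coded for illustration).
model_response = "* FROM students ORDER BY age ASC LIMIT 3"
query = "SELECT " + model_response  # re-attach the SELECT the prompt ended with

# Build the example schema in an in-memory database and add sample rows.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT, age INTEGER, grade TEXT)"
)
conn.executemany(
    "INSERT INTO students (id, name, age, grade) VALUES (?, ?, ?, ?)",
    [(1, "Ann", 17, "A"), (2, "Bob", 15, "B"), (3, "Cid", 16, "A"), (4, "Dee", 18, "C")],
)

rows = conn.execute(query).fetchall()
print([r[1] for r in rows])  # names of the three youngest students
```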
Caveats
- The performance drop from the original model is due to quantization itself and to the lack of beam-search support in the llama.cpp framework. Use at your own discretion.
- The 2-bit and 3-bit quantizations suffer from repetitive and irrelevant output tokens, and are therefore not recommended for use.