Qwen2.5-Godzilla-Coder-51B

This repo contains the full-precision model in "safetensors" format, which can be used to generate GGUF, GPTQ, EXL2, AWQ, HQQ and other quant formats. The source model can also be used directly.
ABOUT:
"It will pound your programming problems into the pavement... perfectly."
Tipping the scales at 101 layers and 1215 tensors... the monster lives.
Two monsters in fact.
This repo contains V1 source code. V2 will be in a separate repo.
Each model generates stronger, more compact code, with an enhanced understanding of your instructions, and follows what you tell it to the letter.
And then some.
These overpowered CODING ENGINES are based on two of the best coder AIs:
"Qwen2.5-Coder-32B-Instruct"
[ https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct ]
and
"OlympicCoder-32B"
[ https://huggingface.co/open-r1/OlympicCoder-32B ]
These two models are stuffed into one MASSIVE 51B merge that is stronger in performance and understanding than either donor model.
Test quants are here:
[ https://huggingface.co/DavidAU/Qwen2.5-Godzilla-Coder-51B-gguf ]
(full quants coming shortly...)
CONFIGS:
- #1 -> Qwen2.5-Coder-32B-Instruct primary/start, with OlympicCoder-32B as "finalizer". (this REPO)
- #2 -> OlympicCoder-32B as primary/start, with Qwen2.5-Coder-32B-Instruct as "finalizer".
NOTES:
- The two configs/versions differ significantly from each other.
- Tool Calling is supported in both versions.
- Source(s) / full quanting to follow // full repos to follow.
- Model is fully operational at Q2_K - both versions - and stronger than the base donor models in terms of raw performance.
- Final model size (including layers/tensors) / config subject to change.
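Since Tool Calling is supported in both versions: Qwen2.5-style models typically emit tool calls as a JSON payload wrapped in `<tool_call>` tags (the exact wrapper depends on the embedded chat template, so treat the tag names below as an assumption and check the template if parsing fails). A minimal parsing sketch:

```python
import json
import re

def parse_tool_calls(text: str):
    """Extract JSON tool-call payloads wrapped in <tool_call>...</tool_call> tags.

    Assumes the Hermes/Qwen-style convention; adjust the pattern if the
    embedded chat template uses a different wrapper.
    """
    calls = []
    for match in re.findall(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", text, re.DOTALL):
        try:
            calls.append(json.loads(match))
        except json.JSONDecodeError:
            pass  # skip malformed payloads rather than crash the harness
    return calls

output = 'Sure.\n<tool_call>\n{"name": "run_tests", "arguments": {"path": "tests/"}}\n</tool_call>'
print(parse_tool_calls(output))
```

The non-greedy match anchored on the closing tag lets the parser handle nested braces inside the arguments object.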
Config / Settings
Model is set at 32k/32768 context.
Requirements [Qwen 2.5 32B Coder default settings]:
- Temp: 0.5 to 0.7 (or lower)
- top_k: 20, top_p: 0.8, min_p: 0.05
- rep pen: 1.1 (can be lower)
- Jinja template (embedded) or ChatML template.
- A system prompt is not required (tests were run with a blank system prompt).
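The sampler stack above can be sketched end to end. A minimal reference implementation, assuming samplers run in a common top-k -> top-p -> min-p order with temperature applied at softmax time (actual ordering varies by backend, so this is illustrative, not a spec):

```python
import math
import random

def apply_rep_penalty(logits, prev_tokens, penalty=1.1):
    """Divide positive / multiply negative logits of already-seen tokens."""
    out = dict(logits)
    for t in set(prev_tokens):
        if t in out:
            out[t] = out[t] / penalty if out[t] > 0 else out[t] * penalty
    return out

def sample(logits, prev_tokens, temp=0.6, top_k=20, top_p=0.8, min_p=0.05,
           rep_pen=1.1, rng=None):
    """logits: {token_id: logit}. Returns one sampled token id."""
    rng = rng or random.Random(0)
    logits = apply_rep_penalty(logits, prev_tokens, rep_pen)
    # top-k: keep only the k highest logits
    items = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # temperature, then softmax (shifted by the max logit for stability)
    m = max(v for _, v in items)
    probs = [(t, math.exp((v - m) / temp)) for t, v in items]
    z = sum(p for _, p in probs)
    probs = [(t, p / z) for t, p in probs]
    # top-p: smallest prefix of the sorted distribution reaching top_p mass
    kept, mass = [], 0.0
    for t, p in probs:
        kept.append((t, p))
        mass += p
        if mass >= top_p:
            break
    # min-p: drop tokens below min_p * (highest probability)
    floor = min_p * kept[0][1]
    kept = [(t, p) for t, p in kept if p >= floor]
    # draw from the renormalized survivors
    z = sum(p for _, p in kept)
    r = rng.random() * z
    for t, p in kept:
        r -= p
        if r <= 0:
            return t
    return kept[-1][0]
```

With a strongly peaked distribution, these settings collapse to near-greedy decoding, which is usually what you want for code generation.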
Refer to the "Qwen2.5-Coder-32B-Instruct" and/or "OlympicCoder-32B" repos (above) for additional settings, benchmarks and usage.
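The ChatML template mentioned above wraps each turn in `<|im_start|>` / `<|im_end|>` markers. A minimal prompt builder, in case you are constructing prompts by hand instead of using the embedded Jinja template:

```python
def build_chatml(messages, add_generation_prompt=True):
    """Render a list of {"role", "content"} dicts as a ChatML prompt string."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    if add_generation_prompt:
        parts.append("<|im_start|>assistant\n")  # cue the model to answer
    return "".join(parts)

prompt = build_chatml([
    {"role": "user", "content": "Write a function that reverses a string."},
])
print(prompt)
```

A system turn, if you use one, goes first with `"role": "system"` in the same format.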
Help, Adjustments, Samplers, Parameters and More
Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:
In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ;
Set the "Smoothing_factor" to 1.5
: in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F"
: in text-generation-webui -> parameters -> lower right.
: In Silly Tavern this is called: "Smoothing"
NOTE: For "text-generation-webui"
-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)
Source versions (and config files) of my models are here:
OTHER OPTIONS:
Increase rep pen to a value between 1.1 and 1.15 (you don't need to do this if you use "smoothing_factor").
If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted.
Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers
This is a "Class 1" model:
For all settings used for this model (including specifics for its "class"), example generations, and the advanced settings guide (which often addresses model issues), including methods to improve model performance for all use cases, as well as chat, roleplay and other use cases, please see:
You can see all parameters used for generation, in addition to advanced parameters and samplers, to get the most out of this model here: