---
base_model:
- CultriX/Qwen2.5-14B-GeneralReasoning
- allknowingroger/QwenSlerp6-14B
- EVA-UNIT-01/EVA-Qwen2.5-14B-v0.0
- nbeerbower/Qwen2.5-Gutenberg-Doppel-14B
- spacematt/Qwen2.5-Slick-Coder-14B-Instruct
- CultriX/Qwen2.5-14B-Qwentangledv2
- allura-org/TQ2.5-14B-Sugarquill-v1
- sometimesanotion/Qwenvergence-14B-v3-Prose
- CultriX/SeQwence-14B-EvolMerge
- sthenno-com/miscii-14b-0218
- RDson/WomboCombo-R1-Coder-14B-Preview
- prithivMLmods/Equuleus-Opus-14B-Exp
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [spacematt/Qwen2.5-Slick-Coder-14B-Instruct](https://huggingface.co/spacematt/Qwen2.5-Slick-Coder-14B-Instruct) as the base.

### Models Merged

The following models were included in the merge:

* [CultriX/Qwen2.5-14B-GeneralReasoning](https://huggingface.co/CultriX/Qwen2.5-14B-GeneralReasoning)
* [allknowingroger/QwenSlerp6-14B](https://huggingface.co/allknowingroger/QwenSlerp6-14B)
* [EVA-UNIT-01/EVA-Qwen2.5-14B-v0.0](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-14B-v0.0)
* [nbeerbower/Qwen2.5-Gutenberg-Doppel-14B](https://huggingface.co/nbeerbower/Qwen2.5-Gutenberg-Doppel-14B)
* [CultriX/Qwen2.5-14B-Qwentangledv2](https://huggingface.co/CultriX/Qwen2.5-14B-Qwentangledv2)
* [allura-org/TQ2.5-14B-Sugarquill-v1](https://huggingface.co/allura-org/TQ2.5-14B-Sugarquill-v1)
* [sometimesanotion/Qwenvergence-14B-v3-Prose](https://huggingface.co/sometimesanotion/Qwenvergence-14B-v3-Prose)
* [CultriX/SeQwence-14B-EvolMerge](https://huggingface.co/CultriX/SeQwence-14B-EvolMerge)
* [sthenno-com/miscii-14b-0218](https://huggingface.co/sthenno-com/miscii-14b-0218)
* [RDson/WomboCombo-R1-Coder-14B-Preview](https://huggingface.co/RDson/WomboCombo-R1-Coder-14B-Preview)
* [prithivMLmods/Equuleus-Opus-14B-Exp](https://huggingface.co/prithivMLmods/Equuleus-Opus-14B-Exp)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: allura-org/TQ2.5-14B-Sugarquill-v1
  - model: prithivMLmods/Equuleus-Opus-14B-Exp
  - model: sthenno-com/miscii-14b-0218
  - model: CultriX/SeQwence-14B-EvolMerge
  - model: EVA-UNIT-01/EVA-Qwen2.5-14B-v0.0
  - model: CultriX/Qwen2.5-14B-Qwentangledv2
  - model: CultriX/Qwen2.5-14B-GeneralReasoning
  - model: nbeerbower/Qwen2.5-Gutenberg-Doppel-14B
  - model: allknowingroger/QwenSlerp6-14B
  - model: RDson/WomboCombo-R1-Coder-14B-Preview
  - model: sometimesanotion/Qwenvergence-14B-v3-Prose
  - model: spacematt/Qwen2.5-Slick-Coder-14B-Instruct
merge_method: model_stock
base_model: spacematt/Qwen2.5-Slick-Coder-14B-Instruct
normalize: true
int8_mask: true
dtype: bfloat16
```
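
The merge itself can be reproduced by passing this YAML to mergekit's `mergekit-yaml` command. As a usage sketch, the resulting checkpoint loads like any other Qwen2.5-14B-based model with Transformers; the snippet below is a minimal example and not part of the original merge workflow. The repository id `your-namespace/this-merge` is a placeholder, and `bfloat16` mirrors the `dtype` in the configuration above.

```python
# Minimal usage sketch. "your-namespace/this-merge" is a placeholder repository id,
# not the actual location of this merge.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/this-merge"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

# Qwen2.5-style chat formatting via the tokenizer's chat template.
messages = [{"role": "user", "content": "Write a short haiku about model merging."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```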