Plesio-70B


Model Information

70B parameters · Llama-3.3 based · Creative / Fresh Prose · Co-writing / Roleplay / Adventure · Generalist

A simple merge, yet sovl in its own way. This merge sits between Shimamura and Austral Winton; I wanted to give Austral somewhat shorter prose, so FYI for all the 10000+ token reply lovers.

Thanks Auri for testing!

Uses the oh-so-great 0.2 SLERP merge weight, with Winton as the base.
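For reference, a SLERP merge like the one described above is usually expressed as a mergekit config. This is only a sketch: the 0.2 weight and Winton-as-base come from this card, while the model paths and remaining fields are placeholders (see the linked merging config below for the real one).

```yaml
# Hypothetical mergekit sketch of a 0.2 SLERP merge with Winton as base.
# Model paths are placeholders, not the actual repos used.
models:
  - model: <austral-winton-path>   # base model
  - model: <shimamura-path>
merge_method: slerp
base_model: <austral-winton-path>
parameters:
  t: 0.2        # interpolation weight toward the non-base model
dtype: bfloat16
```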

Support me on Ko-Fi: https://ko-fi.com/deltavector

Quantized Versions

Available Downloads

  • GGUF Format: for use with llama.cpp & forks (ty Auri and Bart)
  • EXL3 Format: for use with TabbyAPI (slower on Ampere)

Prompting

The model has been tuned with the Llama-3 Instruct formatting.
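For anyone wiring this up by hand rather than through a chat template, a minimal sketch of the standard Llama-3 Instruct prompt format looks like this (the special token strings are from the stock Llama-3 template; the helper function and example messages are illustrative):

```python
# Minimal sketch of the Llama-3 Instruct prompt format.
# The special tokens are the standard Llama-3 ones; the helper
# and example messages are illustrative, not from this repo.

def build_llama3_prompt(messages):
    """Render a list of {"role", "content"} dicts into a Llama-3 Instruct prompt."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Open the assistant header so the model generates the next turn.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt([
    {"role": "system", "content": "You are a co-writing partner."},
    {"role": "user", "content": "Continue the scene."},
])
print(prompt)
```

Most frontends (SillyTavern, TabbyAPI, llama.cpp chat templates) already ship a Llama-3 Instruct preset, so selecting that preset is equivalent to the above.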

See the merging config: https://files.catbox.moe/yw81rn.yml

Credits

Thank you to Lucy Knada, Auri, Ateron, Alicat, Intervitens, Cgato, Kubernetes Bad and the rest of Anthracite.

Model size: 70.6B params · Tensor type: BF16 (Safetensors)