---
library_name: peft
license: apache-2.0
base_model:
- HuggingFaceH4/zephyr-7b-beta
tags:
- trl
- sft
- unsloth
- generated_from_trainer
model-index:
- name: ZoraBetaA1
  results: []
language:
- en
---

Support us on **Ko-fi**: [![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/J3J61D8NHV)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6633a73004501e16e7896b86/l4aVk8hOAXXeCoGZHMeE2.png)

**ZoraBetaA family**

# ZoraBetaA2 - EmpressOfRoleplay

- ZoraBetaA2 is our brand-new AI model, finetuned from our A1 model on [Iris-Uncensored-Reformat-R2](https://huggingface.co/datasets/N-Bot-Int/Iris-Uncensored-Reformat-R2?not-for-all-audiences=true) with a higher step count. ZoraBetaA2 showcases strong roleplaying capability, with an even stronger finetuned bias toward roleplay.

  Built on **Zephyr Beta 7B**, ZoraBetaA2 roleplays well without hallucinating as much as MistThena7B, which was finetuned from Mistral 7B v0.1. Compared to the A1 model, ZoraBetaA2 is much more biased toward roleplay; because of this, the A2 model performs poorly at purposes other than roleplaying.

  This architecture lets us increase roleplaying capability without building everything from scratch: **Zephyr Beta** already has a strong RP foundation, so we scaffold on top of it and push its roleplaying capability further.
- ZoraBetaA2 was trained on a cleaned dataset, but it is still relatively unstable, so please report any issues, overfitting, or suggestions for future models to [nexus.networkinteractives@gmail.com](mailto:nexus.networkinteractives@gmail.com). Once again, feel free to modify the LoRA to your liking; however, please link back to this page for credit, and if you extend its **dataset**, please handle it with care and ethical consideration.

- ZoraBetaA2 is
  - **Developed by:** N-Bot-Int
  - **License:** apache-2.0
  - **Parent Model:** HuggingFaceH4/zephyr-7b-beta
  - **Dataset Combined Using:** UltraDatasetCleanerAndMoshpit-R1 (proprietary software)

# Notice
- **For a good experience, please use:**
  - temperature = 1.5, min_p = 0.1, and max_new_tokens = 128

# Detail card:
- Parameters
  - 7 billion parameters
  - (Please check with your GPU vendor whether your hardware can run 3B models)
- Training
  - 300 steps on Iris-Dataset-Reformat-R1
- Finetuning tool:
  - Unsloth AI
  - This Zephyr model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
- Fine-tuned using:
  - Google Colab
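
As a minimal usage sketch, the recommended sampling settings above can be applied when loading the LoRA adapter on top of the Zephyr base model with Transformers and PEFT. The adapter repo id `N-Bot-Int/ZoraBetaA2` is an assumption here; replace it with this model's actual Hugging Face repo id.

```python
# Sketch: load the ZoraBetaA2 LoRA adapter on top of Zephyr 7B Beta and
# generate with the settings recommended in the Notice section.
# NOTE: the adapter repo id "N-Bot-Int/ZoraBetaA2" is an assumption;
# point it at the actual adapter repository.

# Recommended sampling settings from the Notice section above.
GENERATION_KWARGS = dict(
    do_sample=True,
    temperature=1.5,
    min_p=0.1,
    max_new_tokens=128,
)


def generate(prompt: str) -> str:
    # Imports are kept local so GENERATION_KWARGS can be reused without
    # pulling in the heavy model dependencies.
    import torch
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    base_id = "HuggingFaceH4/zephyr-7b-beta"
    adapter_id = "N-Bot-Int/ZoraBetaA2"  # assumed repo id

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    model = AutoModelForCausalLM.from_pretrained(
        base_id, torch_dtype=torch.float16, device_map="auto"
    )
    # Attach the LoRA adapter to the frozen base weights.
    model = PeftModel.from_pretrained(model, adapter_id)

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, **GENERATION_KWARGS)
    return tokenizer.decode(output[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("You are Zora, a roleplay companion. Introduce yourself."))
```

Note that `min_p` sampling keeps only tokens whose probability is at least `min_p` times that of the most likely token, which is why a relatively high temperature of 1.5 can remain coherent here.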