A newer version of this model is available: N-Bot-Int/MistThena7BV2-GGUF


GGUF Version

GGUF files in several quantizations, letting you run the model in KoboldCPP and other GGUF-compatible AI environments!
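If you prefer a scriptable setup over a GUI app like KoboldCPP, here is a minimal sketch using llama-cpp-python (one of several GGUF-capable runtimes; the filename and settings below are assumptions, not the repo's exact file names):

```python
# Minimal sketch: load a GGUF quant with llama-cpp-python.
# The .gguf filename below is hypothetical -- point model_path at
# whichever quant file you actually downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="MistThena7B.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

out = llm("Hello! Briefly introduce yourself.", max_tokens=64)
print(out["choices"][0]["text"])
```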

Quantizations:

| Quant Type | Benefits | Cons |
|------------|----------|------|
| Q4_K_M | ✅ Smallest size (fastest inference)<br>✅ Requires the least VRAM/RAM<br>✅ Ideal for edge devices & low-resource setups | ❌ Lowest accuracy compared to other quants<br>❌ May struggle with complex reasoning<br>❌ Can produce slightly degraded text quality |
| Q5_K_M | ✅ Better accuracy than Q4, while still compact<br>✅ Good balance between speed and precision<br>✅ Works well on mid-range GPUs | ❌ Slightly larger model size than Q4<br>❌ Needs a bit more VRAM than Q4<br>❌ Still not as accurate as higher-bit models |
| Q8_0 | ✅ Highest accuracy (closest to the full model)<br>✅ Best for complex reasoning & detailed outputs<br>✅ Suitable for high-end GPUs & serious workloads | ❌ Requires significantly more VRAM/RAM<br>❌ Slower inference compared to Q4 & Q5<br>❌ Larger file size (takes more storage) |
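To grab one of the quant files programmatically, a small sketch using the huggingface_hub client follows; the exact .gguf filename is an assumption, so check the repo's file list for the real names:

```python
# Sketch: download a single quant file from the Hub.
# The filename below is hypothetical -- copy the exact name from
# the "Files and versions" tab of N-Bot-Int/MistThena7B-GGUF.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="N-Bot-Int/MistThena7B-GGUF",
    filename="MistThena7B.Q5_K_M.gguf",  # hypothetical quant file
)
print(f"Saved to: {path}")
```

As a rule of thumb from the table above: pick Q4_K_M for low-VRAM machines, Q5_K_M for a balanced setup, and Q8_0 when accuracy matters more than speed or size.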

Model Details:

Read the full model details on the original model's Hugging Face page!

Format: GGUF
Model size: 7.25B params
Architecture: llama