XCollab / Llama-3.2-1B-4bit-gptq
Tags: Text Generation · Transformers · English · llama · quantized · 4-bit precision · gptq
License: mit
Llama-3.2-1B-4bit-gptq / quant_config.json
codewithdark: Add 4-bit GPTQ model quantized from meta-llama/Llama-3.2-1B (2025-06-25 10:08:23)
Commit 784b2f1 (verified), 23 days ago
File size: 80 Bytes
{
  "bits": 4,
  "group_size": 128,
  "desc_act": true,
  "damp_percent": 0.01
}
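For reference, these fields map onto standard GPTQ quantization settings: "bits" is the weight precision, "group_size" is the number of weights that share one set of quantization parameters, "desc_act" enables activation-order (descending-activation) quantization, and "damp_percent" is the Hessian dampening factor used during calibration. A minimal sketch of reading and sanity-checking this config with the Python standard library follows; the validation rules are illustrative assumptions, not constraints enforced by the repository or by any particular library.

```python
import json

# Contents of quant_config.json as published in this repository.
RAW_CONFIG = """
{
  "bits": 4,
  "group_size": 128,
  "desc_act": true,
  "damp_percent": 0.01
}
"""

def load_quant_config(raw: str) -> dict:
    """Parse a GPTQ quantization config and sanity-check its fields.

    The checks below are illustrative assumptions about typical GPTQ
    settings, not rules imposed by the model repository.
    """
    cfg = json.loads(raw)
    assert cfg["bits"] in (2, 3, 4, 8), "GPTQ commonly uses 2/3/4/8-bit weights"
    assert cfg["group_size"] > 0, "group_size must be positive"
    assert isinstance(cfg["desc_act"], bool)
    assert 0.0 < cfg["damp_percent"] < 1.0
    return cfg

cfg = load_quant_config(RAW_CONFIG)
print(cfg)  # {'bits': 4, 'group_size': 128, 'desc_act': True, 'damp_percent': 0.01}
```

In loaders such as Hugging Face Transformers, these same four values are typically passed to a GPTQ quantization config when instantiating the quantized model.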