XCollab/Llama-3.2-1B-4bit-gptq

Tags: Text Generation · Transformers · English · llama · Quantize · 4-bit precision · gptq
License: mit
Files and versions (branch: main)
1 contributor · History: 3 commits
Latest commit: codewithdark, "Create README.md" (f7923c1, verified), 21 days ago
| File | Size | Last commit | Age |
|---|---|---|---|
| .gitattributes | 1.57 kB | Add 4-bit GPTQ model quantized from meta-llama/Llama-3.2-1B (2025-06-25 10:08:23) | 21 days ago |
| README.md | 147 Bytes | Create README.md | 21 days ago |
| config.json | 1.15 kB | Add 4-bit GPTQ model quantized from meta-llama/Llama-3.2-1B (2025-06-25 10:08:23) | 21 days ago |
| gptq_model-4bit-128g.safetensors (LFS) | 1.56 GB | Add 4-bit GPTQ model quantized from meta-llama/Llama-3.2-1B (2025-06-25 10:08:23) | 21 days ago |
| quant_config.json | 80 Bytes | Add 4-bit GPTQ model quantized from meta-llama/Llama-3.2-1B (2025-06-25 10:08:23) | 21 days ago |
| quantize_config.json | 265 Bytes | Add 4-bit GPTQ model quantized from meta-llama/Llama-3.2-1B (2025-06-25 10:08:23) | 21 days ago |
| special_tokens_map.json | 301 Bytes | Add 4-bit GPTQ model quantized from meta-llama/Llama-3.2-1B (2025-06-25 10:08:23) | 21 days ago |
| tokenizer.json (LFS) | 17.2 MB | Add 4-bit GPTQ model quantized from meta-llama/Llama-3.2-1B (2025-06-25 10:08:23) | 21 days ago |
| tokenizer_config.json | 50.5 kB | Add 4-bit GPTQ model quantized from meta-llama/Llama-3.2-1B (2025-06-25 10:08:23) | 21 days ago |
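The weight filename `gptq_model-4bit-128g.safetensors` indicates 4-bit quantization with a group size of 128, which is what `quantize_config.json` records for GPTQ loaders. The repository's actual 265-byte file is not shown here; the sketch below is a typical GPTQ config of this shape, where only `bits` and `group_size` are inferred from the filename and every other field is an illustrative assumption:

```json
{
  "bits": 4,
  "group_size": 128,
  "damp_percent": 0.01,
  "desc_act": false,
  "sym": true,
  "true_sequential": true
}
```

Loaders such as Transformers' GPTQ integration read this file to reconstruct the packed 4-bit weights at load time, so its values must match how the checkpoint was actually quantized.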