# jusjinuk/Meta-Llama-3-8B-2bit-SqueezeLLM
PyTorch · llama · arXiv:2505.07004 · License: llama3
## Model Card
- Base model: meta-llama/Meta-Llama-3-8B
- Quantization method: SqueezeLLM (see the illustrative sketch after this list)
- Target bit-width: 2
- Backend kernel: Any-Precision-LLM kernel (`ap-gemv`)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
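
SqueezeLLM uses non-uniform (lookup-table) quantization, so at 2 bits each weight is stored as a 2-bit index into a small per-channel table of centroids rather than as a scaled integer. The toy sketch below illustrates only the dequantization idea; the shapes and names are hypothetical and do not reflect this repository's on-disk format.

```python
import torch

def dequantize_2bit(indices: torch.Tensor, luts: torch.Tensor) -> torch.Tensor:
    """Toy SqueezeLLM-style 2-bit dequantization.

    indices: (out_features, in_features) int64 tensor with values in [0, 3],
             the 2-bit code of each weight.
    luts:    (out_features, 4) float tensor of per-output-channel centroids.
    Returns the dequantized (out_features, in_features) weight matrix.
    """
    # Each weight is recovered by a per-row table lookup: W[i, j] = luts[i, indices[i, j]]
    return torch.gather(luts, 1, indices)

# Tiny usage example with random codes and centroids.
indices = torch.randint(0, 4, (8, 16), dtype=torch.int64)
luts = torch.randn(8, 4)
weights = dequantize_2bit(indices, luts)
print(weights.shape)  # torch.Size([8, 16])
```

In practice the `ap-gemv` kernel performs this lookup on the fly inside the matrix-vector product instead of materializing the full-precision weight matrix.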
## How to run
Follow the instructions in https://github.com/snu-mllab/GuidedQuant.
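
If you just want the quantized checkpoint locally before following those instructions, a minimal sketch using `huggingface_hub` is shown below; the actual loading and inference APIs are defined in the GuidedQuant repository, so this only covers the download step.

```python
# Minimal sketch: download the quantized checkpoint with huggingface_hub.
# Running inference on it requires the GuidedQuant setup and the
# Any-Precision-LLM (ap-gemv) kernel; see the repository README for that part.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="jusjinuk/Meta-Llama-3-8B-2bit-SqueezeLLM")
print(f"Checkpoint files downloaded to: {local_dir}")
```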
## References
- Model paper: arXiv:2505.07004 (https://arxiv.org/abs/2505.07004)
- Code: https://github.com/snu-mllab/GuidedQuant