mlx-community/gemma-3-12b-it-qat-4bit

Image-Text-to-Text · Transformers · Safetensors · MLX · multilingual · gemma3 · conversational · text-generation-inference
License: gemma
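
This repository provides Gemma 3 12B IT (quantization-aware trained, 4-bit) converted to MLX format for image-text-to-text inference on Apple silicon. A minimal usage sketch follows, assuming the mlx-vlm package; its load() and generate() functions exist, but exact keyword arguments vary between versions, so treat the arguments and the image path as illustrative:

    # Minimal sketch, assuming the mlx-vlm package (pip install mlx-vlm).
    # load()/generate() exist in mlx_vlm, but exact keyword arguments differ
    # between versions; "example.jpg" is an illustrative placeholder path.
    from mlx_vlm import load, generate

    model, processor = load("mlx-community/gemma-3-12b-it-qat-4bit")

    output = generate(
        model,
        processor,
        prompt="Describe this image.",
        image="example.jpg",
        max_tokens=128,
    )
    print(output)
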
Files and versions
3 contributors · History: 4 commits
Latest commit 34e0dfb by neilmehta24: "convert model as bfloat16" (7 days ago)
File                               Size             Last commit                           Updated
.gitattributes                     1.57 kB          Upload folder using huggingface_hub   14 days ago
README.md                          1.03 kB          Update README.md                      11 days ago
added_tokens.json                  35 Bytes         Upload folder using huggingface_hub   14 days ago
chat_template.json                 1.62 kB          Upload folder using huggingface_hub   14 days ago
config.json                        7.24 kB          convert model as bfloat16             7 days ago
generation_config.json             173 Bytes        Upload folder using huggingface_hub   14 days ago
model-00001-of-00002.safetensors   5.34 GB (LFS)    convert model as bfloat16             7 days ago
model-00002-of-00002.safetensors   2.28 GB (LFS)    convert model as bfloat16             7 days ago
model.safetensors.index.json       109 kB           Upload folder using huggingface_hub   14 days ago
preprocessor_config.json           570 Bytes        Upload folder using huggingface_hub   14 days ago
processor_config.json              70 Bytes         Upload folder using huggingface_hub   14 days ago
special_tokens_map.json            662 Bytes        Upload folder using huggingface_hub   14 days ago
tokenizer.json                     33.4 MB (LFS)    Upload folder using huggingface_hub   14 days ago
tokenizer.model                    4.69 MB (LFS)    Upload folder using huggingface_hub   14 days ago
tokenizer_config.json              1.16 MB          Upload folder using huggingface_hub   14 days ago
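
The sharded safetensors weights, tokenizer files, and processor configs listed above can be fetched in one call with huggingface_hub; a minimal sketch (the printed path is wherever the local cache places the snapshot):

    # Minimal sketch: fetch every file listed above into the local HF cache.
    # Assumes the huggingface_hub package is installed (pip install huggingface_hub).
    from huggingface_hub import snapshot_download

    local_dir = snapshot_download(repo_id="mlx-community/gemma-3-12b-it-qat-4bit")
    print(f"Snapshot downloaded to: {local_dir}")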