QuantFactory/diffullama-GGUF
This is a quantized (GGUF) version of diffusionfamily/diffullama, created with llama.cpp.
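A GGUF file from this repository can be fetched and run locally with the llama.cpp CLI. A minimal sketch, assuming the `huggingface-cli` tool and a built `llama-cli` binary are on the PATH; the exact `.gguf` filename inside the repository is an assumption and should be checked against the repo's file listing:

```shell
# Download one quantized file from the repo (filename is illustrative).
huggingface-cli download QuantFactory/diffullama-GGUF \
  diffullama.Q4_K_M.gguf --local-dir ./models

# Run it with llama.cpp's CLI.
llama-cli -m ./models/diffullama.Q4_K_M.gguf -p "Once upon a time" -n 64
```

Lower-bit quantizations trade output quality for smaller files and lower memory use; 4-bit variants are a common middle ground.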
Original Model Card
diffullama
This model is a fine-tuned version of Llama 2 (meta-llama/Llama-2-7b-hf).
Model description
Details and model-loading instructions can be found at https://github.com/HKUNLP/DiffuLLaMA.
Framework versions
- Transformers 4.44.2
- Pytorch 2.1.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
Citation
@misc{gong2024scalingdiffusionlanguagemodels,
title={Scaling Diffusion Language Models via Adaptation from Autoregressive Models},
author={Shansan Gong and Shivam Agarwal and Yizhe Zhang and Jiacheng Ye and Lin Zheng and Mukai Li and Chenxin An and Peilin Zhao and Wei Bi and Jiawei Han and Hao Peng and Lingpeng Kong},
year={2024},
eprint={2410.17891},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.17891},
}
Available quantizations
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
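The bit widths above roughly determine file size and memory footprint. A minimal sketch of the weight-only arithmetic, assuming a 7B-parameter model; real GGUF files will differ somewhat because k-quant schemes mix bit widths and add scales and metadata:

```python
def approx_gguf_size_gib(n_params: float, bits_per_weight: float) -> float:
    """Rough weight-only size estimate: n_params * bits / 8 bytes, in GiB.

    Ignores quantization scales, tokenizer data, and GGUF metadata,
    so it slightly underestimates real file sizes.
    """
    return n_params * bits_per_weight / 8 / (1024 ** 3)

# A 7B model at 4 bits per weight is roughly 3.26 GiB of raw weights;
# at 8 bits it is roughly 6.52 GiB.
print(approx_gguf_size_gib(7e9, 4))
print(approx_gguf_size_gib(7e9, 8))
```

This is why 2-bit and 3-bit files fit on much smaller GPUs than the 8-bit variant, at the cost of quality.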
Model tree for QuantFactory/diffullama-GGUF
- Base model: meta-llama/Llama-2-7b-hf