---
base_model:
- Qwen/Qwen2.5-Coder-7B-Instruct
datasets:
- luzimu/WebGen-Bench
language:
- en
license: mit
metrics:
- accuracy
pipeline_tag: text-generation
library_name: transformers
---
# WebGen-LM

WebGen-LM is trained on Bolt.diy trajectories generated from a subset of the WebGen-Bench training set (🤗 [luzimu/WebGen-Bench](https://huggingface.co/datasets/luzimu/WebGen-Bench)). It was introduced in the paper [WebGen-Bench: Evaluating LLMs on Generating Interactive and Functional Websites from Scratch](https://arxiv.org/abs/2505.03733).

The training data and code are available in the WebGen-Bench GitHub repository.
The WebGen-LM family of models is as follows:

| Model | HF Link |
|---|---|
| WebGen-LM-7B | 🤗 [luzimu/WebGen-LM-7B](https://huggingface.co/luzimu/WebGen-LM-7B) |
| WebGen-LM-14B | 🤗 [luzimu/WebGen-LM-14B](https://huggingface.co/luzimu/WebGen-LM-14B) |
| WebGen-LM-32B | 🤗 [luzimu/WebGen-LM-32B](https://huggingface.co/luzimu/WebGen-LM-32B) |
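Since the card lists `transformers` as the library and `text-generation` as the pipeline, the models above can be loaded with the standard Hugging Face API. The sketch below is an assumption based on those metadata fields, not an official usage snippet from the authors; the instruction text and generation settings are illustrative.

```python
# Minimal sketch: querying WebGen-LM-7B via the Hugging Face transformers API.
# The model id comes from the table above; prompt and settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "luzimu/WebGen-LM-7B"


def build_messages(instruction: str) -> list[dict]:
    """Wrap a website-generation instruction in a chat-style message list."""
    return [{"role": "user", "content": instruction}]


def generate(instruction: str, max_new_tokens: int = 1024) -> str:
    """Load the model, apply its chat template, and generate a response."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    prompt = tokenizer.apply_chat_template(
        build_messages(instruction), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )


# Example call (downloads the 7B checkpoint on first use):
# print(generate("Create a landing page for a bakery with a contact form."))
```

Swap `MODEL_ID` for the 14B or 32B checkpoint to trade latency for quality; `device_map="auto"` lets `accelerate` place weights across available GPUs.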
## Performance on WebGen-Bench
## Citation
If you find our project useful, please cite:
```bibtex
@misc{lu2025webgenbenchevaluatingllmsgenerating,
      title={WebGen-Bench: Evaluating LLMs on Generating Interactive and Functional Websites from Scratch},
      author={Zimu Lu and Yunqiao Yang and Houxing Ren and Haotian Hou and Han Xiao and Ke Wang and Weikang Shi and Aojun Zhou and Mingjie Zhan and Hongsheng Li},
      year={2025},
      eprint={2505.03733},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.03733},
}
```