- #19 "Getting error when deploying on HF Inference with 2 A100 GPU's of AWS on region us-east" (opened about 13 hours ago by streebo)
- #18 "FP8 Quants Please" (opened 2 days ago by rjmehta)
- #15 "online demo please" (opened 4 days ago by pypry)
- #14 "vram Requirements for full size" (opened 4 days ago by tazomatalax)
- #13 "demo Inference" (opened 6 days ago by devops724)
- #11 "bug in chat template?" (opened 6 days ago by J22)
- #8 "Why is the chat_template mixed with Chinese and English?" (opened 7 days ago by Daucloud)
- #7 "is docker image available?" (opened 7 days ago by ZhifengKong)
- #5 "Deployment support for sglang" (opened 7 days ago by XiChen0415)
- #4 "vllm error: operator _C::marlin_qqq_gemm does not exist" (opened 8 days ago by HourseCircle)
- #3 "Category Error" (opened 8 days ago by CO-IR)
- #2 "Sorry for Askin here" (opened 8 days ago by ryg81)
- #1 "Official vllm support" (opened 8 days ago by shash42)