tiiuae/falcon-40b-instruct

Text Generation · Transformers · PyTorch · English · falcon · custom_code · text-generation-inference
Model card · Files and versions · Community (93)

test case one · 1 comment · #33 opened about 2 years ago by FALCONBoy
A100-80G memory but the call still errors · 6 comments · #32 opened about 2 years ago by leocheung
How many of you are planning on using this as a programming assistant? · 👍 8 · 4 comments · #31 opened about 2 years ago by BliepBlop
Why not add system requirements to the model card? · 9 comments · #28 opened about 2 years ago by johnjohndoedoe
Various Prompt Formats · 8 comments · #24 opened about 2 years ago by gsaivinay
Caching doesn't work on multi-GPU · 4 comments · #23 opened about 2 years ago by srinivasbilla
Error on basic tutorial script provided · 3 comments · #22 opened about 2 years ago by diegoAgher
How to fine-tune falcon-40b · 2 comments · #21 opened about 2 years ago by jiangix
Slow and gibberish output during inference · 11 comments · #20 opened about 2 years ago by srinivasbilla
What is the required GPU memory to load the model? · 7 comments · #15 opened about 2 years ago by nx
Response language issue with FastChat · 1 comment · #14 opened about 2 years ago by manishl127
Is this REALLY viable for commercial use 🤔? · 6 comments · #13 opened about 2 years ago by dataviral
How to try Falcon in HuggingChat? · 🤯 👍 2 · 5 comments · #6 opened about 2 years ago by promptgai
[Bug] Does not run · 😔 1 · 17 comments · #2 opened about 2 years ago by catid
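
Several of the threads above (#15, #22, #32) ask how much GPU memory the model needs and why the basic loading script errors out. As a rough guide, 40B parameters in bfloat16 come to roughly 80 GB for the weights alone, so the model usually has to be sharded across several GPUs or quantized. Below is a minimal loading sketch using the standard transformers text-generation pipeline; the prompt text and sampling settings are illustrative assumptions, not taken from any of the discussions.

from transformers import AutoTokenizer, pipeline
import torch

model_id = "tiiuae/falcon-40b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)

generator = pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,   # half-precision weights: about 2 bytes per parameter
    trust_remote_code=True,       # Falcon ships custom modelling code on the Hub
    device_map="auto",            # shard the weights across whatever GPUs are visible
)

# Illustrative prompt and generation settings (assumptions, not from the model card).
out = generator(
    "Write a short note about peregrine falcons.",
    max_new_tokens=64,
    do_sample=True,
    top_k=10,
)
print(out[0]["generated_text"])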