# Open-Llama

This model is in maintenance mode only; we don't accept any new PRs changing its code.

If you run into any issues running this model, please reinstall the last version that supported it: v4.31.0.
You can do so by running the following command: `pip install -U transformers==4.31.0`.

This model differs from the OpenLLaMA models on the Hugging Face Hub, which primarily use the LLaMA architecture.

## Overview

The Open-Llama model was proposed in the open-source Open-Llama project by community developer s-JoL.

The model is mainly based on LLaMA with some modifications: it incorporates memory-efficient attention from Xformers, stable embedding from Bloom, and shared input-output embedding from PaLM.
The model is also pre-trained on both Chinese and English, which gives it better performance on Chinese-language tasks.

This model was contributed by s-JoL.
The original code was released on GitHub by s-JoL, but has since been removed.
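
Since the original checkpoints are no longer published, the architecture can still be tried out from a configuration with randomly initialized weights. Below is a minimal sketch, assuming `transformers==4.31.0` is installed; the tiny sizes are arbitrary, and the three feature flags are assumed to be the `OpenLlamaConfig` options corresponding to the modifications described above.

```python
import torch
from transformers import OpenLlamaConfig, OpenLlamaModel

# A deliberately tiny configuration so the example runs quickly on CPU.
# The three flags below are assumed to map to the modifications described above.
config = OpenLlamaConfig(
    vocab_size=1000,
    hidden_size=64,
    intermediate_size=256,
    num_hidden_layers=2,
    num_attention_heads=4,
    use_memory_efficient_attention=True,  # Xformers attention; falls back to standard attention if xformers is unavailable
    use_stable_embedding=True,            # stable embedding from Bloom
    shared_input_output_embedding=True,   # shared input-output embedding from PaLM
)

# Randomly initialized weights; nothing is downloaded from the Hub.
model = OpenLlamaModel(config)

input_ids = torch.randint(0, config.vocab_size, (1, 16))
outputs = model(input_ids)
print(outputs.last_hidden_state.shape)  # torch.Size([1, 16, 64])
```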
## OpenLlamaConfig

[[autodoc]] OpenLlamaConfig

## OpenLlamaModel

[[autodoc]] OpenLlamaModel
    - forward

## OpenLlamaForCausalLM

[[autodoc]] OpenLlamaForCausalLM
    - forward
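
As with the base model, no pretrained Open-Llama checkpoint appears to be available, but the causal-LM head can be exercised with randomly initialized weights. A minimal sketch, assuming `transformers==4.31.0`; the sizes are arbitrary.

```python
import torch
from transformers import OpenLlamaConfig, OpenLlamaForCausalLM

config = OpenLlamaConfig(
    vocab_size=1000,
    hidden_size=64,
    intermediate_size=256,
    num_hidden_layers=2,
    num_attention_heads=4,
)
model = OpenLlamaForCausalLM(config)  # randomly initialized

input_ids = torch.randint(0, config.vocab_size, (1, 16))
# For causal language modeling the labels are the input ids themselves;
# the model shifts them internally so each position predicts the next token.
outputs = model(input_ids=input_ids, labels=input_ids)
print(outputs.loss)          # scalar language-modeling loss
print(outputs.logits.shape)  # torch.Size([1, 16, 1000])
```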
## OpenLlamaForSequenceClassification

[[autodoc]] OpenLlamaForSequenceClassification
    - forward
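
The sequence-classification head sits on top of the same backbone; the sketch below again uses randomly initialized weights under `transformers==4.31.0`, with an arbitrary `num_labels` and dummy inputs. Like the LLaMA classification head, for batch sizes greater than one it relies on `pad_token_id` (0 by default) to locate the last non-padding token of each sequence.

```python
import torch
from transformers import OpenLlamaConfig, OpenLlamaForSequenceClassification

config = OpenLlamaConfig(
    vocab_size=1000,
    hidden_size=64,
    intermediate_size=256,
    num_hidden_layers=2,
    num_attention_heads=4,
    num_labels=2,  # binary classification head
)
model = OpenLlamaForSequenceClassification(config)  # randomly initialized

# Token ids start at 1 so the pad id (0 by default) does not appear in the inputs.
input_ids = torch.randint(1, config.vocab_size, (2, 16))
labels = torch.tensor([0, 1])
outputs = model(input_ids=input_ids, labels=labels)
print(outputs.loss)          # cross-entropy classification loss
print(outputs.logits.shape)  # torch.Size([2, 2])
```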