---
license: mit
datasets:
- flwrlabs/code-alpaca-20k
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: text-generation
library_name: peft
tags:
- text-generation-inference
- code
---

## Model Details

This PEFT adapter was trained using [Flower](https://flower.ai/), a friendly federated AI framework.

The adapter and benchmark results have been submitted to the [FlowerTune LLM Code Leaderboard](https://flower.ai/benchmarks/llm-leaderboard/code/).

Please check the following GitHub project for details on how to reproduce the training and evaluation steps:

[FlowerTune-LLM-Labs](https://github.com/ethicalabs-ai/FlowerTune-LLM-Labs/blob/main/workspace/models/README.md)
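
## How to Use

Below is a minimal inference sketch, not taken from the original card: it loads the Qwen/Qwen2.5-7B-Instruct base model with `transformers` and attaches this adapter via `peft`. The `adapter_id` value is a placeholder — replace it with this repository's Hub ID (or a local path to the adapter).

```python
# Minimal sketch: load the base model and attach this PEFT adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-7B-Instruct"
adapter_id = "path/to/this-adapter"  # placeholder: replace with this repo's Hub ID

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Build a chat prompt and generate a completion.
messages = [{"role": "user", "content": "Write a Python function that checks if a number is prime."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids=input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

For standalone deployment, the adapter weights can also be folded into the base model with `model.merge_and_unload()`, which removes the `peft` dependency at inference time.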