Model description
We fine-tuned the base model with LoRA on the ORKG SciQA benchmark for question answering over scholarly knowledge graphs.
We tried several numbers of training epochs: 3, 5, 7, 10, 15, and 20. The best performance was obtained at 15 epochs, so we published the model trained for 15 epochs.
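As a hedged illustration of how the fine-tuned model might be prompted for SciQA-style questions, the sketch below builds a Llama-3-Instruct chat prompt. The system instruction and the exact template are assumptions for illustration, not the format used in training.

```python
# Illustrative sketch (assumption): wrap a SciQA-style question in the
# Llama 3 Instruct chat format before sending it to the fine-tuned model.

SYSTEM = "You answer questions over scholarly knowledge graphs."

def build_prompt(question: str) -> str:
    # Llama 3 instruct header/end tokens; the system text above is
    # a hypothetical placeholder, not the one used for fine-tuning.
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{SYSTEM}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{question}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt("Which model achieves the highest accuracy on SciQA?")
```

The resulting string can be tokenized and passed to the model (e.g. via `transformers`) with the LoRA adapter loaded on top of the base model.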
For more details on the evaluation, please check the FIRESPARQL GitHub repo.
Model tree for Sherry791/Meta-Llama-3-8B-Instruct-ft4sciqa
Base model
meta-llama/Meta-Llama-3-8B-Instruct