---
datasets:
- 2imi9/llama2_7B_data_10G
- 2imi9/llama2_7B_data_Course_materials
language:
- en
- zh
base_model:
- meta-llama/Llama-2-7b
pipeline_tag: question-answering
---
Teaching Assistant for Introduction to Computers
-

This teaching-assistant model was fine-tuned with LlamaFactory (Zheng et al., 2024) on the LLaMA 2 7B architecture. Fine-tuning used roughly 10 GB of open-source bilingual (Chinese and English) data collected from Hugging Face and CSDN, together with specialized datasets on introductory computer science topics that tailor the model for educational use. The result is an AI-powered assistant capable of supporting foundational computer science education.
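
For reference, below is a minimal inference sketch using the Transformers library. The repository id in the example is a placeholder (substitute this model's actual id), and it assumes the fine-tuned weights follow the standard Llama-2 chat prompt format; neither detail is stated by this card.

```python
# Minimal inference sketch. Assumptions (not stated in this card): the model id
# below is a placeholder for this repository, and the fine-tuned weights expect
# the standard Llama-2 [INST] ... [/INST] chat prompt format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "2imi9/llama2_7B_teaching_assistant"  # placeholder: replace with this model's repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision fits a 7B model on a single ~16 GB GPU
    device_map="auto",
)

# A course-style question; the training data is bilingual, so Chinese also works.
question = "What is the difference between RAM and ROM?"
prompt = f"[INST] {question} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
answer = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(answer)
```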

Instructions to Test This Model
-
To test the model, follow the guide in the LLaMA-Factory README: clone the repository from GitHub and launch the LLaMA Board GUI (powered by Gradio) to load and chat with the model. The README is available at:

https://github.com/hiyouga/LLaMA-Factory/blob/main/README.md
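
The launch can also be scripted. The sketch below simply shells out to the `llamafactory-cli webui` entry point described in the README and assumes LLaMA-Factory has already been cloned and installed; it adds nothing beyond running the same command in a terminal.

```python
# Convenience wrapper around the README workflow. It assumes LLaMA-Factory has
# already been cloned and installed (e.g. `pip install -e .` inside the repository),
# which provides the `llamafactory-cli` entry point used below.
import subprocess

# Starts the Gradio-based LLaMA Board server; open the printed local URL in a
# browser and select the fine-tuned checkpoint to chat with it.
subprocess.run(["llamafactory-cli", "webui"], check=True)
```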

Citation
-

@inproceedings{zheng2024llamafactory,
  title={LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models},
  author={Yaowei Zheng and Richong Zhang and Junhao Zhang and Yanhan Ye and Zheyan Luo and Zhangchi Feng and Yongqiang Ma},
  booktitle={Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)},
  address={Bangkok, Thailand},
  publisher={Association for Computational Linguistics},
  year={2024},
  url={http://arxiv.org/abs/2403.13372}
}