Update README.md
README.md
CHANGED

*(Previous revision: the stock Hugging Face model card template with unfilled `[More Information Needed]` placeholder sections, replaced in full by the card below.)*

---
library_name: transformers
datasets:
- SajjadAyoubi/persian_qa
language: fa
metrics:
- f1
- exact_match
base_model: pedramyazdipoor/persian_xlm_roberta_large
pipeline_tag: question-answering
---

# Model Card for AmoooEBI/xlm-roberta-fa-qa-finetuned-on-PersianQA

This model is a version of `pedramyazdipoor/persian_xlm_roberta_large`, fine-tuned with LoRA for extractive question answering in Persian on the PersianQA dataset.

---

## Model Details

### Model Description

This is an XLM-RoBERTa model fine-tuned on the `SajjadAyoubi/persian_qa` dataset for extractive question answering in Persian. It was trained with the parameter-efficient LoRA method, which greatly reduces the number of trainable parameters and speeds up training while preserving strong performance. Given a question and a context passage, the model extracts the answer span directly from the context; on the PersianQA validation set it reaches 84.85% F1 and 69.90% Exact Match.

- **Developed by**: Amir Mohammad Ebrahiminasab
- **Shared by**: Amir Mohammad Ebrahiminasab
- **Model type**: xlm-roberta
- **Language(s)**: fa (Persian)
- **License**: MIT
- **Finetuned from model**: `pedramyazdipoor/persian_xlm_roberta_large`

---

## Model Sources

- **Repository**: [AmoooEBI/xlm-roberta-fa-qa-finetuned-on-PersianQA](https://huggingface.co/AmoooEBI/xlm-roberta-fa-qa-finetuned-on-PersianQA)
- **Demo**: [Persian QA Chatbot – Hugging Face Space](https://huggingface.co/spaces/AmoooEBI/Persian-QA-Chatbot)

---

## Uses

### Direct Use

The model can be used directly for extractive question answering in Persian through the `pipeline` API:

```python
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="AmoooEBI/xlm-roberta-fa-qa-finetuned-on-PersianQA",
    tokenizer="AmoooEBI/xlm-roberta-fa-qa-finetuned-on-PersianQA",
)

# Context: "Mohandas Karamchand Gandhi was the political and spiritual leader of the Indians,
# who led the Indian nation to freedom from the colonial rule of the British Empire."
context = "مهانداس کارامچاند گاندی رهبر سیاسی و معنوی هندیها بود که ملت هند را در راه آزادی از استعمار امپراتوری بریتانیا رهبری کرد."
# Question: "Who was Gandhi?"
question = "گاندی که بود؟"

result = qa_pipeline(question=question, context=context)
print(f"Answer: '{result['answer']}'")
```

---

## Bias, Risks, and Limitations

The model's performance depends heavily on the quality and domain of the context provided. It was trained on the PersianQA dataset, which is largely based on Wikipedia articles, so performance may degrade on text with a different style, such as conversational or highly technical documents.

Like ParsBERT, this model shows a preference for shorter answers: its Exact Match score drops for answers longer than the dataset's average length. Its F1-score remains high, however, indicating that it still identifies spans with substantial token overlap with the reference answer.

---

## Recommendations

Users should be aware of the model's limitations, particularly the lower Exact Match scores on long-form answers. For applications requiring high precision, outputs should be validated against the source context.

---

## How to Get Started with the Model

Use the code below to run the model directly with PyTorch:

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
import torch

tokenizer = AutoTokenizer.from_pretrained("AmoooEBI/xlm-roberta-fa-qa-finetuned-on-PersianQA")
model = AutoModelForQuestionAnswering.from_pretrained("AmoooEBI/xlm-roberta-fa-qa-finetuned-on-PersianQA")

# Context: "The capital of Spain is the city of Madrid." / Question: "Where is the capital of Spain?"
context = "پایتخت اسپانیا شهر مادرید است."
question = "پایتخت اسپانیا کجاست؟"

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start and end token positions and decode that span.
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()

predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
answer = tokenizer.decode(predict_answer_tokens)

print(f"Question: {question}")
print(f"Answer: {answer}")
```

---

## Training Details

### Training Data

The model was fine-tuned on the `SajjadAyoubi/persian_qa` dataset, which contains question-context-answer triplets in Persian, drawn primarily from Wikipedia.
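
As a minimal illustration, the dataset can be loaded from the Hugging Face Hub as sketched below; the exact loading code used for training is not stated in this card, and the split names are assumed.

```python
from datasets import load_dataset

# Load the PersianQA dataset; split names are assumed to be "train" and "validation".
dataset = load_dataset("SajjadAyoubi/persian_qa")
print(dataset)
```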

### Training Procedure

**Preprocessing**

The training data was tokenized with the XLM-RoBERTa tokenizer. Contexts longer than the model's maximum input length were split into overlapping chunks using a sliding window (`doc_stride=128`), and the answer's start and end token positions were mapped into each chunk.
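
A minimal sketch of this sliding-window tokenization is shown below; the `max_length` of 384 is an assumption, since only the stride is stated in this card.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("pedramyazdipoor/persian_xlm_roberta_large")

def preprocess(examples):
    # Split long contexts into overlapping chunks; the offset mapping and the
    # overflow-to-sample mapping are what allow answer character spans to be
    # converted into start/end token positions within each chunk.
    return tokenizer(
        examples["question"],
        examples["context"],
        max_length=384,            # assumed value, not stated in the card
        stride=128,                # the doc_stride used for the sliding window
        truncation="only_second",  # truncate only the context, never the question
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length",
    )
```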

**Training Hyperparameters**

* **Training regime**: LoRA (parameter-efficient fine-tuning); see the configuration sketch after this list
* `r`: 16
* `lora_alpha`: 32
* `lora_dropout`: 0.1
* `target_modules`: `["query", "value"]`
* **Learning Rate**: 2 × 10⁻⁵
* **Epochs**: 8
* **Batch Size**: 8
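
The following is a minimal sketch of how these hyperparameters map onto a `peft` `LoraConfig`; the `task_type` and the base-model loading code are assumptions rather than the original training script.

```python
from transformers import AutoModelForQuestionAnswering
from peft import LoraConfig, TaskType, get_peft_model

# Load the base model and wrap it with a LoRA adapter matching the values above.
base_model = AutoModelForQuestionAnswering.from_pretrained(
    "pedramyazdipoor/persian_xlm_roberta_large"
)

lora_config = LoraConfig(
    task_type=TaskType.QUESTION_ANS,  # assumed task type for extractive QA
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query", "value"],
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # should report roughly 0.281% trainable parameters
```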

**Speeds, Sizes, Times**

* Training Time: ~3 hours on a single GPU
* Trainable Parameters: 0.281% of model parameters

---

## Evaluation

### Testing Data, Factors & Metrics

**Testing Data**

The evaluation was performed on the validation set of the `SajjadAyoubi/persian_qa` dataset.

**Factors**

* Answer Presence: questions with and without an answer in the context
* Answer Length: answers shorter vs. longer than the dataset average (22.78 characters)

**Metrics**

* **F1-Score**: token-level overlap between the predicted and reference answer spans
* **Exact Match (EM)**: percentage of predictions that match a reference answer exactly
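
As an illustration only, EM and F1 can be computed with the `evaluate` library's SQuAD metric, sketched below; the exact evaluation script is not stated in this card, and scoring unanswerable questions would need the `squad_v2` variant.

```python
import evaluate

# SQuAD-style metric returning exact_match and f1 on a 0-100 scale.
squad_metric = evaluate.load("squad")

predictions = [{"id": "0", "prediction_text": "مادرید"}]
references = [{"id": "0", "answers": {"text": ["مادرید"], "answer_start": [15]}}]

results = squad_metric.compute(predictions=predictions, references=references)
print(results)  # e.g. {'exact_match': 100.0, 'f1': 100.0}
```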

---

### Results

**Overall Performance on the Validation Set (LoRA Fine-Tuned)**

| Model Status            | Exact Match | F1-Score |
| ----------------------- | ----------- | -------- |
| Fine-Tuned Model (LoRA) | 69.90%      | 84.85%   |

**Performance on Data Subsets**

| Case Type  | Exact Match | F1-Score |
| ---------- | ----------- | -------- |
| Has Answer | 62.06%      | 83.42%   |
| No Answer  | 88.17%      | 88.17%   |

| Answer Length     | Exact Match | F1-Score |
| ----------------- | ----------- | -------- |
| Longer than Avg.  | 49.22%      | 81.88%   |
| Shorter than Avg. | 62.95%      | 80.20%   |

---

## Environmental Impact

* **Hardware Type**: NVIDIA T4 GPU
* **Training Time**: ~3 hours
* **Cloud Provider**: Google Colab
* **Carbon Emitted**: Not calculated

---

## Technical Specifications

### Model Architecture and Objective

The model uses the XLM-RoBERTa-Large architecture with a span-extraction question-answering head. The training objective minimizes the cross-entropy loss over the predicted start- and end-token positions of the answer span.
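
A minimal sketch of that objective is shown below; it mirrors the loss used by the standard Hugging Face question-answering head rather than any custom training code from this card.

```python
import torch.nn.functional as F

def span_loss(start_logits, end_logits, start_positions, end_positions):
    # Cross-entropy over the start-position distribution and over the
    # end-position distribution, averaged, as in standard extractive QA heads.
    start_loss = F.cross_entropy(start_logits, start_positions)
    end_loss = F.cross_entropy(end_logits, end_positions)
    return (start_loss + end_loss) / 2
```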

### Compute Infrastructure

* **Hardware**: single NVIDIA T4 GPU
* **Software**: `transformers`, `torch`, `datasets`, `evaluate`, `peft`

---

## Model Card Authors

**Amir Mohammad Ebrahiminasab**

---

## Model Card Contact

📧 [ebrahiminasab82@gmail.com](mailto:ebrahiminasab82@gmail.com)