Upload folder using huggingface_hub

- README.md +199 -0
- config.json +52 -0
- merges.txt +0 -0
- model.safetensors +3 -0
- special_tokens_map.json +15 -0
- test_results.json +89 -0
- tokenizer.json +0 -0
- tokenizer_config.json +58 -0
- training_args.bin +3 -0
- vocab.json +0 -0
README.md
ADDED
@@ -0,0 +1,199 @@
---
base_model: roberta-base
language:
- en
license: apache-2.0
tags:
- text
- token-classification
- named-entity-recognition
- encoder-only
- roberta
- fine-tuned
- domain-specific
metrics:
- seqeval
model-index:
- name: roberta-base-group-mention-detector-uk-manifestos
  results:
  - task:
      type: token-classification
      name: Token classification
    dataset:
      type: custom
      name: custom human-labeled sequence annotation dataset (see model card details)
    metrics:
    - type: seqeval
      name: social group (seqeval)
      value: 0.7129859387923904
    - type: seqeval
      name: political group (seqeval)
      value: 0.9230769230769231
    - type: seqeval
      name: political institution (seqeval)
      value: 0.711779448621554
    - type: seqeval
      name: organization, public institution, or collective actor (seqeval)
      value: 0.6354009077155824
    - type: seqeval
      name: implicit social group reference (seqeval)
      value: 0.6906077348066298
---

# roberta-base-group-mention-detector-uk-manifestos

<!-- Provide a quick summary of what the model is/does. -->

[roberta-base](https://huggingface.co/roberta-base) model finetuned for social group mention detection in political texts.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

Token classification model for (social) group mention detection based on [Licht & Sczepanski (2025)](https://doi.org/10.31219/osf.io/ufb96).

This token classification model has been finetuned on human sequence annotations of sentences from British parties' election manifestos for the following entity types:

- social group
- implicit social group reference
- political group
- political institution
- organization, public institution, or collective actor

Please refer to [Licht & Sczepanski (2025)](https://doi.org/10.31219/osf.io/ufb96) for details.

- **Developed by:** Hauke Licht
- **Model type:** roberta
- **Language(s) (NLP):** en
- **License:** apache-2.0
- **Finetuned from model:** roberta-base
- **Funded by:** the *Center for Comparative and International Studies* of the ETH Zurich and the University of Zurich, and the *Deutsche Forschungsgemeinschaft* (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC 2126/1 – 390838866

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/haukelicht/group_mention_detection/release/
- **Paper:** https://doi.org/10.31219/osf.io/ufb96
- **Demo:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

- Evaluation of the classifier on held-out data shows that it makes mistakes (see the *Results* section).
- The model has been finetuned only on human-annotated sentences sampled from British parties' election manifestos. Applying the classifier in other domains can lead to higher error rates than those reported in the *Results* section below.
- The data used to finetune the model come from human annotators. Human annotators can be biased, and factors like gender and social background can influence their annotation judgments. This may lead to bias in the detection of specific social groups.

#### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

- Users who want to apply the model outside its training data domain (British parties' election programs) should evaluate its performance on the target data.
- Users who want to apply the model outside its training data domain (British parties' election programs) should continue to finetune this model on labeled data from that domain.

### How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import pipeline

model_id = "haukelicht/roberta-base-group-mention-detector-uk-manifestos"

classifier = pipeline(task="ner", model=model_id, aggregation_strategy="simple")

text = "Our party fights for the deprived and the vulnerable in our country."
annotations = classifier(text)
print(annotations)

# get the annotations' character start and end indexes
locations = [(anno['start'], anno['end']) for anno in annotations]
locations

# index the source text, using the first annotation as an example
loc = locations[0]
text[slice(*loc)]
```

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

The train, dev, and test splits used for model finetuning and evaluation are available on GitHub: https://github.com/haukelicht/group_mention_detection/release/splits

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Training Hyperparameters

- epochs: 6
- learning rate: 5e-05
- batch size: 16
- weight decay: 0.3
- warmup ratio: 0.1
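For readers reproducing the setup, these hyperparameters map onto the standard `transformers.TrainingArguments` keyword names roughly as sketched below. This mapping is an assumption on our part (the actual training script lives in the linked repository); only the values are taken from the list above.

```python
# Sketch: the hyperparameters above expressed under the standard
# transformers.TrainingArguments keyword names (assumed mapping, not
# taken from the authors' training script).
hyperparameters = {
    "num_train_epochs": 6,
    "learning_rate": 5e-05,
    "per_device_train_batch_size": 16,
    "weight_decay": 0.3,
    "warmup_ratio": 0.1,
}

# usage (requires transformers; output_dir is a placeholder):
# from transformers import TrainingArguments
# args = TrainingArguments(output_dir="output", **hyperparameters)
```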
## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

The train, dev, and test splits used for model finetuning and evaluation are available on GitHub: https://github.com/haukelicht/group_mention_detection/release/splits

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

- seq-eval F1: strict sequence labeling evaluation metric per the CoNLL-2000 shared task, based on https://github.com/chakki-works/seqeval
- "soft" seq-eval F1: a more lenient sequence labeling evaluation metric that reports span-level average performance summarized across examples, per https://github.com/haukelicht/soft-seqeval
- sentence-level F1: binary measure of detection performance that counts a sentence as a positive example/prediction if it contains at least one entity of the given type
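To make the strictness of the first metric concrete, here is a minimal pure-Python sketch of entity-level exact-match F1 over BIO tag sequences (not the seqeval implementation itself; function names are ours): a predicted span only counts as correct if its type, start, and end all match a gold span exactly.

```python
def extract_spans(tags):
    """Extract (type, start, end) entity spans from a BIO tag sequence."""
    spans, etype, start = [], None, None
    for i, tag in enumerate(list(tags) + ["O"]):  # "O" sentinel flushes the last span
        inside = tag.startswith("I-") and tag[2:] == etype
        if etype is not None and not inside:
            spans.append((etype, start, i))
            etype = None
        if tag.startswith("B-") or (tag.startswith("I-") and etype is None):
            etype, start = tag[2:], i
    return spans

def strict_span_f1(y_true, y_pred):
    """F1 over exact-match spans, pooled across sequences (micro average)."""
    gold = {(i, *s) for i, seq in enumerate(y_true) for s in extract_spans(seq)}
    pred = {(i, *s) for i, seq in enumerate(y_pred) for s in extract_spans(seq)}
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

Note that under this metric, predicting only the first token of a two-token gold mention earns no credit at all, which is what motivates reporting the more lenient "soft" and sentence-level scores alongside it.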
### Results

| type                                                   | seq-eval F1 | soft seq-eval F1 | sentence-level F1 |
|--------------------------------------------------------|-------------|------------------|-------------------|
| social group                                           | 0.713       | 0.766            | 0.933             |
| political group                                        | 0.923       | 0.937            | 0.991             |
| political institution                                  | 0.712       | 0.723            | 0.951             |
| organization, public institution, or collective actor  | 0.635       | 0.605            | 0.932             |
| implicit social group reference                        | 0.691       | 0.593            | 0.950             |
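The sentence-level column above reduces span prediction to a per-sentence binary detection task. A minimal sketch of that reduction, assuming BIO-tagged sequences (helper names are ours, not from the paper's code):

```python
def sentence_level_f1(y_true, y_pred, entity_type):
    """Binary detection F1 for one entity type: a sentence counts as a
    positive example/prediction if it contains at least one tag of that
    type (B- or I- prefix)."""
    def contains(tags):
        return any(t != "O" and t[2:] == entity_type for t in tags)
    gold = [contains(seq) for seq in y_true]
    pred = [contains(seq) for seq in y_pred]
    tp = sum(g and p for g, p in zip(gold, pred))
    precision = tp / sum(pred) if sum(pred) else 0.0
    recall = tp / sum(gold) if sum(gold) else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```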
## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

Licht, H., & Sczepanski, R. (2025). Detecting Group Mentions in Political Rhetoric: A Supervised Learning Approach. Forthcoming in the *British Journal of Political Science*. Preprint available at [OSF](https://doi.org/10.31219/osf.io/ufb96).

## More Information

https://github.com/haukelicht/group_mention_detection/release

## Model Card Contact

hauke.licht@uibk.ac.at
config.json
ADDED
@@ -0,0 +1,52 @@
{
  "architectures": [
    "RobertaForTokenClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "id2label": {
    "0": "O",
    "1": "I-social group",
    "2": "I-political group",
    "3": "I-political institution",
    "4": "I-organization, public institution, or collective actor",
    "5": "I-implicit social group reference",
    "6": "B-social group",
    "7": "B-political group",
    "8": "B-political institution",
    "9": "B-organization, public institution, or collective actor",
    "10": "B-implicit social group reference"
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
    "B-implicit social group reference": 10,
    "B-organization, public institution, or collective actor": 9,
    "B-political group": 7,
    "B-political institution": 8,
    "B-social group": 6,
    "I-implicit social group reference": 5,
    "I-organization, public institution, or collective actor": 4,
    "I-political group": 2,
    "I-political institution": 3,
    "I-social group": 1,
    "O": 0
  },
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "roberta",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.51.3",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 50265
}
merges.txt
ADDED
The diff for this file is too large to render.
See raw diff
model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8ed76ec30828f1858cdc3b0732e69813a4dcb5ef280c07bc173f1fb678895589
size 496277924
special_tokens_map.json
ADDED
@@ -0,0 +1,15 @@
{
  "bos_token": "<s>",
  "cls_token": "<s>",
  "eos_token": "</s>",
  "mask_token": {
    "content": "<mask>",
    "lstrip": true,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": "<pad>",
  "sep_token": "</s>",
  "unk_token": "<unk>"
}
test_results.json
ADDED
@@ -0,0 +1,89 @@
{
  "test_loss": 0.23216606676578522,
  "test_seqeval-macro_f1": 0.734770190602616,
  "test_seqeval-macro_precision": 0.710234593555044,
  "test_seqeval-macro_recall": 0.761660613652209,
  "test_seqeval-micro_f1": 0.730981256890849,
  "test_seqeval-micro_precision": 0.7041954328199681,
  "test_seqeval-micro_recall": 0.7598853868194843,
  "test_seqeval-social group_f1": 0.7129859387923904,
  "test_seqeval-social group_precision": 0.6798107255520505,
  "test_seqeval-social group_recall": 0.7495652173913043,
  "test_seqeval-political group_f1": 0.9230769230769231,
  "test_seqeval-political group_precision": 0.9139072847682119,
  "test_seqeval-political group_recall": 0.9324324324324325,
  "test_seqeval-organization, public institution, or collective actor_f1": 0.6354009077155824,
  "test_seqeval-organization, public institution, or collective actor_precision": 0.5982905982905983,
  "test_seqeval-organization, public institution, or collective actor_recall": 0.6774193548387096,
  "test_seqeval-political institution_f1": 0.711779448621554,
  "test_seqeval-political institution_precision": 0.6977886977886978,
  "test_seqeval-political institution_recall": 0.7263427109974424,
  "test_seqeval-implicit social group reference_f1": 0.6906077348066298,
  "test_seqeval-implicit social group reference_precision": 0.6613756613756614,
  "test_seqeval-implicit social group reference_recall": 0.7225433526011561,
  "test_softseqeval-macro_f1": 0.7249429563665621,
  "test_softseqeval-macro_precision": 0.7351167276186958,
  "test_softseqeval-macro_recall": 0.7272877579870108,
  "test_softseqeval-micro_f1": 0.8150108141222834,
  "test_softseqeval-micro_precision": 0.8276780140092342,
  "test_softseqeval-micro_recall": 0.8213445621409171,
  "test_softseqeval-social group_f1": 0.7663127580837202,
  "test_softseqeval-social group_precision": 0.7806733266733267,
  "test_softseqeval-social group_recall": 0.7762571424302194,
  "test_softseqeval-political group_f1": 0.9366854822737176,
  "test_softseqeval-political group_precision": 0.9389978213507625,
  "test_softseqeval-political group_recall": 0.9393246187363836,
  "test_softseqeval-organization, public institution, or collective actor_f1": 0.6052204342608383,
  "test_softseqeval-organization, public institution, or collective actor_precision": 0.622174122174122,
  "test_softseqeval-organization, public institution, or collective actor_recall": 0.600578403078403,
  "test_softseqeval-political institution_f1": 0.7231932152815058,
  "test_softseqeval-political institution_precision": 0.7340427818983123,
  "test_softseqeval-political institution_recall": 0.7262908022501702,
  "test_softseqeval-implicit social group reference_f1": 0.593302891933029,
  "test_softseqeval-implicit social group reference_precision": 0.5996955859969558,
  "test_softseqeval-implicit social group reference_recall": 0.5939878234398781,
  "test_doclevel-micro_precision": 0.9473684210526315,
  "test_doclevel-micro_recall": 0.9473684210526315,
  "test_doclevel-micro_f1": 0.9473684210526315,
  "test_doclevel-social group_precision": 0.9330143540669856,
  "test_doclevel-social group_recall": 0.9330143540669856,
  "test_doclevel-social group_f1": 0.9330143540669856,
  "test_doclevel-political group_precision": 0.9911141490088858,
  "test_doclevel-political group_recall": 0.9911141490088858,
  "test_doclevel-political group_f1": 0.9911141490088858,
  "test_doclevel-organization, public institution, or collective actor_precision": 0.9316473000683527,
  "test_doclevel-organization, public institution, or collective actor_recall": 0.9316473000683527,
  "test_doclevel-organization, public institution, or collective actor_f1": 0.9316473000683527,
  "test_doclevel-political institution_precision": 0.950786056049214,
  "test_doclevel-political institution_recall": 0.950786056049214,
  "test_doclevel-political institution_f1": 0.950786056049214,
  "test_doclevel-implicit social group reference_precision": 0.9501025290498974,
  "test_doclevel-implicit social group reference_recall": 0.9501025290498974,
  "test_doclevel-implicit social group reference_f1": 0.9501025290498974,
  "test_wordlevel-accuracy": 0.9565914819785903,
  "test_wordlevel-macro_f1": 0.835518946077031,
  "test_wordlevel-macro_precision": 0.826862620530972,
  "test_wordlevel-macro_recall": 0.8452987076293729,
  "test_wordlevel-O_f1": 0.9787854680456113,
  "test_wordlevel-O_precision": 0.9808290942221547,
  "test_wordlevel-O_recall": 0.9767503402389234,
  "test_wordlevel-social group_f1": 0.8289473684210527,
  "test_wordlevel-social group_precision": 0.8054474708171206,
  "test_wordlevel-social group_recall": 0.8538597525044196,
  "test_wordlevel-political group_f1": 0.9562563580874873,
  "test_wordlevel-political group_precision": 0.9475806451612904,
  "test_wordlevel-political group_recall": 0.9650924024640657,
  "test_wordlevel-organization, public institution, or collective actor_f1": 0.7248968363136176,
  "test_wordlevel-organization, public institution, or collective actor_precision": 0.7140921409214093,
  "test_wordlevel-organization, public institution, or collective actor_recall": 0.7360335195530726,
  "test_wordlevel-political institution_f1": 0.8176855895196506,
  "test_wordlevel-political institution_precision": 0.8406285072951739,
  "test_wordlevel-political institution_recall": 0.79596174282678,
  "test_wordlevel-implicit social group reference_f1": 0.7065420560747664,
  "test_wordlevel-implicit social group reference_precision": 0.6725978647686833,
  "test_wordlevel-implicit social group reference_recall": 0.7440944881889764,
  "test_runtime": 6.5587,
  "test_samples_per_second": 223.061,
  "test_steps_per_second": 7.014,
  "epoch": 6.0
}
tokenizer.json
ADDED
The diff for this file is too large to render.
See raw diff
tokenizer_config.json
ADDED
@@ -0,0 +1,58 @@
{
  "add_prefix_space": true,
  "added_tokens_decoder": {
    "0": {
      "content": "<s>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<pad>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "3": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50264": {
      "content": "<mask>",
      "lstrip": true,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "cls_token": "<s>",
  "eos_token": "</s>",
  "errors": "replace",
  "extra_special_tokens": {},
  "mask_token": "<mask>",
  "model_max_length": 512,
  "pad_token": "<pad>",
  "sep_token": "</s>",
  "tokenizer_class": "RobertaTokenizer",
  "trim_offsets": true,
  "unk_token": "<unk>"
}
training_args.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9ad307dab848d137b796c404bf75f895a963afd5f5a296b4cc3932da8c6b942f
size 5777
vocab.json
ADDED
The diff for this file is too large to render.
See raw diff