Update README.md
README.md CHANGED
@@ -19,12 +19,12 @@ model-index:
   metrics:
   - name: F1
     type: f1
-    value: 0.
+    value: 0.482
 ---
 
 # Text Classification GoEmotions
 
-This a onnx quantized model
+This is an ONNX-quantized model, fine-tuned from [nreimers/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large) on the [go_emotions](https://huggingface.co/datasets/go_emotions) dataset, using [tasinhoque/text-classification-goemotions](https://huggingface.co/tasinhoque/text-classification-goemotions) as the teacher model.
 
 # Load the Model
 
@@ -114,10 +114,9 @@ The following hyperparameters were used during training:
 
 | Teacher (params) | Student (params) | Set | Score (teacher) | Score (student) |
 |--------------------|-------------|----------|--------| --------|
-| tasinhoque/text-classification-goemotions (355M) | MiniLMv2-L6-H384-goemotions-v2 | Validation | 0.514252 |
-| tasinhoque/text-classification-goemotions (33M) | MiniLMv2-L6-H384-goemotions-v2 (original model) | Test | 0.501937 | 0.
+| tasinhoque/text-classification-goemotions (355M) | MiniLMv2-L6-H384-goemotions-v2-onnx | Validation | 0.514252 | .0478 |
+| tasinhoque/text-classification-goemotions (33M) | MiniLMv2-L6-H384-goemotions-v2-onnx (original model) | Test | 0.501937 | 0.482 |
 
-#
-
-Check
-
+# Deployment
+
+Check [this repository](https://github.com/minuva/emotion-prediction-serverless) to see how to easily deploy this model in a serverless environment with fast CPU inference.
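The scores in this diff are F1 values on GoEmotions, which is a multi-label task. As a minimal sketch only (the README does not state the averaging mode, so macro averaging is an assumption, and the function names here are hypothetical), this is how a macro F1 over multi-label predictions is computed:

```python
from typing import List

def per_label_f1(y_true: List[List[int]], y_pred: List[List[int]], label: int) -> float:
    """F1 for one label over binary multi-label indicator rows."""
    # Count true positives, false positives, false negatives for this label.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t[label] and p[label])
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t[label] and p[label])
    fn = sum(1 for t, p in zip(y_true, y_pred) if t[label] and not p[label])
    if tp == 0:
        # Convention: no true positives means F1 = 0 for this label.
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def macro_f1(y_true: List[List[int]], y_pred: List[List[int]]) -> float:
    """Unweighted mean of per-label F1 scores (macro averaging)."""
    n_labels = len(y_true[0])
    return sum(per_label_f1(y_true, y_pred, i) for i in range(n_labels)) / n_labels
```

For example, with two samples and two labels, `macro_f1([[1, 0], [0, 1]], [[1, 0], [1, 0]])` averages an F1 of 2/3 on the first label with 0 on the second, giving 1/3.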