|
--- |
|
license: cc-by-nc-4.0 |
|
language: |
|
- en |
|
pipeline_tag: text-classification |
|
tags: |
|
- pytorch |
|
- reward_model |
|
- transformers |
|
- RLHF |
|
--- |
|
|
|
This model is part of the Chai reward-model series. It uses the GPT2 architecture with a classification head and is optimised to predict whether a user will accept the completion generated by the base model.
|
|
|
Its training dataset, [retry_and_continue_50m_reward_model](https://huggingface.co/datasets/ChaiML/retry_and_continue_50m_reward_model), consists purely of user-generated content: at each turn, the user has the option to decline the generated response via the retry button or to end the conversation.
|
|
|
## Model details |
|
- Developed by [Chai Research](https://www.chai-research.com/) |
|
- Model type: Transformer-based Classification Model |
|
- Language: English |
|
- License: cc-by-nc-4.0 |
|
- Contact: To ask questions about this model, join the [Chai Discord](https://discord.com/invite/4KPHkeG6VX). For general correspondence, email [hello@chai-research.com](mailto:hello@chai-research.com?subject=Huggingface%20Model%20Inquiry)
|
|
|
## Uses and limitations |
|
### Intended use

This model is intended for scoring and ranking candidate completions produced by a Chai base model, according to the predicted likelihood that a user accepts the response.
|
### Out-of-scope use |
|
### How to use |
|
|
|
This reward model can be loaded with the `AutoModelForSequenceClassification` class and a GPT2 tokenizer whose `pad_token_id` is set to the EOS token id. The truncation and padding sides must match the configuration used during model training.
|
```python |
|
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForSequenceClassification.from_pretrained("ChaiML/gpt2_base_retry_and_continue_5m_reward_model")

# GPT2 has no pad token by default; reuse the EOS token (id 50256)
tokenizer.pad_token_id = 50256
# Match the truncation/padding sides used during training
tokenizer.truncation_side = "left"
tokenizer.padding_side = "right"

candidates = ["Hello, how are you today?"]  # one or more candidate completions
tokens = tokenizer(candidates, return_tensors="pt", return_attention_mask=True, padding="longest", truncation=True, max_length=256)
reward = model(**tokens).logits
``` |
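
The logits can then be turned into acceptance probabilities and used to pick the best of several candidate completions. Below is a minimal sketch, assuming a two-class head where index 1 corresponds to the user accepting the response; check the model's `id2label` config to confirm the label mapping before relying on this.

```python
import torch

with torch.no_grad():
    logits = model(**tokens).logits

# Assumption: class index 1 is "accept"; verify via model.config.id2label.
accept_prob = torch.softmax(logits, dim=-1)[:, 1]

# Best-of-N selection: keep the candidate the reward model scores highest
best_candidate = candidates[accept_prob.argmax().item()]
```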
|
|