---
license: cc-by-nc-4.0
language:
  - en
pipeline_tag: text-classification
tags:
  - pytorch
  - reward_model
  - transformers
  - RLHF
---

This model is part of the Chai reward-model series. It uses the GPT-2 architecture with a classification head and is optimised to predict whether a user will accept the completion generated by the base model.

Its training dataset, retry_and_continue_50m_reward_model, consists purely of user-generated content, where a user has the option to decline a generated response via the retry button or to end the conversation.

## Model details

## Uses and limitations

### Intended use

### Out-of-scope use

## How to use

This reward model can be loaded using `AutoModelForSequenceClassification`, together with the GPT-2 tokenizer:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForSequenceClassification.from_pretrained("ChaiML/gpt2_base_retry_and_continue_5m_reward_model")
tokenizer.pad_token_id = 50256  # GPT-2's EOS token, reused for padding
tokenizer.truncation_side = "left"
tokenizer.padding_side = "right"

# `candidates` is a list of conversation strings to score, e.g.:
candidates = ["User: Hi, how are you?\nBot: I'm doing great, thanks for asking!"]
tokens = tokenizer(candidates, return_tensors="pt", return_attention_mask=True, padding="longest", truncation=True, max_length=256)
reward = model(**tokens).logits
```
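
Since the model is a binary classifier trained on user acceptance, the logits can be converted into an acceptance probability and used to rank several candidate completions. The minimal sketch below assumes the logit at index 1 corresponds to the "accepted" class; this is not stated in the card and should be verified against the model's config:

```python
import torch

# Score candidates and pick the one most likely to be accepted.
# NOTE: class index 1 = "accepted" is an assumption, not confirmed here.
with torch.no_grad():
    logits = model(**tokens).logits
accept_prob = torch.softmax(logits, dim=-1)[:, 1]
best_candidate = candidates[accept_prob.argmax().item()]
```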