WyettZ committed on
Commit 00e246b · verified · 1 Parent(s): 6744347

Update README.md

Files changed (1)
  1. README.md +146 -192
README.md CHANGED
@@ -3,197 +3,151 @@ library_name: transformers
  tags: []
  ---
 
- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-
-
-
- ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
  ## Uses
 
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]
 
+ ```python
+ import torch
+ import torch.nn as nn
+ from transformers import Qwen2ForCausalLM, AutoTokenizer
+ 
+ 
+ class ValueHead(nn.Module):
+     r"""
+     The ValueHead class implements a head that returns a scalar value for each output token.
+     """
+ 
+     def __init__(self, config, **kwargs):
+         super().__init__()
+         if not hasattr(config, "summary_dropout_prob"):
+             summary_dropout_prob = kwargs.pop("summary_dropout_prob", 0.1)
+         else:
+             summary_dropout_prob = config.summary_dropout_prob
+ 
+         self.dropout = (
+             nn.Dropout(summary_dropout_prob) if summary_dropout_prob else nn.Identity()
+         )
+ 
+         # some models such as OPT have a projection layer before the word embeddings - e.g. OPT-350m
+         if hasattr(config, "hidden_size"):
+             hidden_size = config.hidden_size
+         if hasattr(config, "word_embed_proj_dim"):
+             hidden_size = config.word_embed_proj_dim
+         elif hasattr(config, "is_encoder_decoder"):
+             if config.is_encoder_decoder and hasattr(config, "decoder"):
+                 if hasattr(config.decoder, "hidden_size"):
+                     hidden_size = config.decoder.hidden_size
+ 
+         self.summary = nn.Linear(hidden_size, 1)
+ 
+         self.flatten = nn.Flatten()  # note: not used in forward
+ 
+     def forward(self, hidden_states):
+         output = self.dropout(hidden_states)
+ 
+         # For now force upcast in fp32 if needed. Let's keep the
+         # output in fp32 for numerical stability.
+         if output.dtype != self.summary.weight.dtype:
+             output = output.to(self.summary.weight.dtype)
+ 
+         output = self.summary(output)
+         return output
+ 
+ 
+ class Qwen2ForCausalRM(Qwen2ForCausalLM):
+     def __init__(self, config):
+         super().__init__(config)
+         self.v_head = ValueHead(config)
+ 
+     def forward(
+         self,
+         input_ids=None,
+         past_key_values=None,
+         attention_mask=None,
+         return_past_key_values=False,
+         **kwargs,
+     ):
+         r"""
+         Applies a forward pass to the wrapped model and returns the LM logits, the loss,
+         and the scalar values produced by the value head.
+ 
+         Args:
+             input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
+                 Indices of input sequence tokens in the vocabulary.
+             past_key_values (`tuple(tuple(torch.FloatTensor))`, `optional`):
+                 Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model
+                 (see `past_key_values` input) to speed up sequential decoding.
+             attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, `optional`):
+                 Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``:
+                 - 1 for tokens that are **not masked**,
+                 - 0 for tokens that are **masked**.
+             return_past_key_values (bool): A flag indicating if the computed hidden-states should be returned.
+             kwargs (`dict`, `optional`):
+                 Additional keyword arguments, that are passed to the wrapped model.
+         """
+         kwargs["output_hidden_states"] = (
+             True  # this had already been set in the LoRA / PEFT examples
+         )
+         kwargs["past_key_values"] = past_key_values
+ 
+         # if (
+         #     self.is_peft_model
+         #     and self.pretrained_model.active_peft_config.peft_type == "PREFIX_TUNING"
+         # ):
+         #     kwargs.pop("past_key_values")
+ 
+         base_model_output = super().forward(
+             input_ids=input_ids,
+             attention_mask=attention_mask,
+             **kwargs,
+         )
+ 
+         last_hidden_state = base_model_output.hidden_states[-1]
+         lm_logits = base_model_output.logits
+         loss = base_model_output.loss
+ 
+         # the value head may live on a different device under device_map="auto"
+         if last_hidden_state.device != self.v_head.summary.weight.device:
+             last_hidden_state = last_hidden_state.to(self.v_head.summary.weight.device)
+ 
+         value = self.v_head(last_hidden_state).squeeze(-1)
+ 
+         # force upcast in fp32 if logits are in half-precision
+         if lm_logits.dtype != torch.float32:
+             lm_logits = lm_logits.float()
+ 
+         if return_past_key_values:
+             return (lm_logits, loss, value, base_model_output.past_key_values)
+         else:
+             return (lm_logits, loss, value)
+ 
+ 
+ model_path = "CodeDPO/qwen_coder_2.5_rm"
+ model = Qwen2ForCausalRM.from_pretrained(model_path, device_map="auto")
+ tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
+ 
+ input_chat = [
+     {"role": "user", "content": "Hello, how are you?"},
+     {
+         "role": "assistant",
+         "content": "I'm doing great. How can I help you today?",
+     },
+     {
+         "role": "user",
+         "content": "I'd like to show off how chat templating works!",
+     },
+ ]
+ input_tokens = tokenizer.apply_chat_template(
+     input_chat,
+     tokenize=True,
+     return_dict=True,
+     padding=True,
+     return_tensors="pt",
+ ).to(model.device)
+ 
+ _, _, values = model(
+     **input_tokens,
+     output_hidden_states=True,
+     return_dict=True,
+     use_cache=False,
+ )
+ masks = input_tokens["attention_mask"]
+ # find the last non-padding token (eos) in each sequence and take the value
+ # predicted at that position as the sequence-level reward score
+ chosen_scores = values.gather(
+     dim=-1, index=(masks.sum(dim=-1, keepdim=True) - 1)
+ )
+ chosen_scores = chosen_scores.squeeze()
+ print(chosen_scores)
+ ```
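A note on the scoring step at the end: the `gather` call assumes right-padded batches, where `attention_mask.sum(-1) - 1` is the index of each sequence's last real token (with left padding it would pick the wrong position). A minimal, model-free sketch of that indexing, using made-up numbers:

```python
import torch

# Per-token value-head outputs for two right-padded sequences, shape (batch=2, seq_len=4).
values = torch.tensor([[0.1, 0.4, 0.9, 0.0],
                       [0.2, 0.7, 0.0, 0.0]])
# Attention mask: 1 marks real tokens, 0 marks padding.
mask = torch.tensor([[1, 1, 1, 0],
                     [1, 1, 0, 0]])

# Index of the last real token in each row: [[2], [1]].
last_idx = mask.sum(dim=-1, keepdim=True) - 1

# The value predicted at that position is the per-sequence score.
scores = values.gather(dim=-1, index=last_idx).squeeze(-1)
print(scores)  # tensor([0.9000, 0.7000])
```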
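Since this checkpoint is a reward model, the natural downstream use is comparing candidate responses to the same prompt. The sketch below reuses the `model` and `tokenizer` loaded above; the `score_chat` helper and the sample conversations are illustrative assumptions, not part of this repository:

```python
def score_chat(chat):
    """Hypothetical helper: return the scalar reward for one conversation."""
    tokens = tokenizer.apply_chat_template(
        chat, tokenize=True, return_dict=True, return_tensors="pt"
    ).to(model.device)
    with torch.no_grad():
        _, _, values = model(**tokens, use_cache=False)
    mask = tokens["attention_mask"]
    # value at the last real token = sequence-level reward
    return values.gather(dim=-1, index=mask.sum(dim=-1, keepdim=True) - 1).item()


prompt = [{"role": "user", "content": "Write a function that reverses a string."}]
chosen = prompt + [{"role": "assistant", "content": "def reverse(s):\n    return s[::-1]"}]
rejected = prompt + [{"role": "assistant", "content": "Sorry, I cannot help with that."}]

# A higher score means the response is preferred by the reward model.
print(score_chat(chosen), score_chat(rejected))
```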