# CTRL

## Overview
The CTRL model was proposed in [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, Caiming Xiong and
Richard Socher. It's a causal (unidirectional) transformer pretrained using language modeling on a very large corpus
of ~140 GB of text data, with the first token of each sequence reserved as a control code (such as *Links*, *Books*, *Wikipedia* etc.).
The abstract from the paper is the following:
Large-scale language models show promising text generation capabilities, but users cannot easily control particular
aspects of the generated text. We release CTRL, a 1.63 billion-parameter conditional transformer language model,
trained to condition on control codes that govern style, content, and task-specific behavior. Control codes were
derived from structure that naturally co-occurs with raw text, preserving the advantages of unsupervised learning while
providing more explicit control over text generation. These codes also allow CTRL to predict which parts of the
training data are most likely given a sequence. This provides a potential method for analyzing large amounts of data
via model-based source attribution.
This model was contributed by [keskarnitishr](https://huggingface.co/keskarnitishr). The original code can be found
[here](https://github.com/salesforce/ctrl).
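
Because every training sequence begins with a control code, prompts at inference time should too. Below is a minimal generation sketch; the `Salesforce/ctrl` checkpoint identifier is an assumption, not taken from this page:

```python
from transformers import CTRLTokenizer, CTRLLMHeadModel

# Checkpoint name is assumed; substitute the identifier you actually use.
tokenizer = CTRLTokenizer.from_pretrained("Salesforce/ctrl")
model = CTRLLMHeadModel.from_pretrained("Salesforce/ctrl")

# Start the prompt with a control code ("Wikipedia") so that generation
# is conditioned on that domain.
inputs = tokenizer("Wikipedia The history of the internet", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```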
## Usage tips

- CTRL makes use of control codes to generate text: it requires generations to be started by certain words, sentences,
  or links to generate coherent text. Refer to the [original implementation](https://github.com/salesforce/ctrl) for
  more information.
- CTRL is a model with absolute position embeddings, so it's usually advised to pad the inputs on the right rather than
  on the left.
- CTRL was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next
  token in a sequence. Leveraging this feature allows CTRL to generate syntactically coherent text, as can be
  observed in the `run_generation.py` example script.
- The PyTorch models can take `past_key_values` as input; these are the previously computed key/value attention pairs.
  TensorFlow models accept `past` as input. Passing `past_key_values` prevents the model from re-computing values it
  has already computed during text generation (see the sketch after this list). Refer to the `forward`
  method for more information on the usage of this argument.
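
As a rough illustration of the caching behavior described in the last tip (a sketch of a single decoding step, not the library's generation loop; the checkpoint name is again an assumption), the cache returned by one forward pass can be fed back so that only the newest token needs to be processed:

```python
import torch
from transformers import CTRLTokenizer, CTRLLMHeadModel

tokenizer = CTRLTokenizer.from_pretrained("Salesforce/ctrl")  # assumed checkpoint
model = CTRLLMHeadModel.from_pretrained("Salesforce/ctrl")
model.eval()

inputs = tokenizer("Books In the beginning", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, use_cache=True)

# Greedily pick the next token from the logits at the last position.
next_token = out.logits[:, -1:].argmax(dim=-1)

# Next step: feed only the new token plus the cached key/value pairs
# instead of re-encoding the whole prefix.
with torch.no_grad():
    out = model(input_ids=next_token, past_key_values=out.past_key_values)
```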

## Resources

- Text classification task guide (see the classification sketch below)
- Causal language modeling task guide
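
The text classification guide pairs naturally with `CTRLForSequenceClassification`. A minimal sketch, again assuming the `Salesforce/ctrl` checkpoint; the classification head is randomly initialized until fine-tuned, so the prediction below is not meaningful on its own:

```python
import torch
from transformers import CTRLTokenizer, CTRLForSequenceClassification

tokenizer = CTRLTokenizer.from_pretrained("Salesforce/ctrl")
# num_labels=2 is illustrative; the base checkpoint ships without a trained head.
model = CTRLForSequenceClassification.from_pretrained("Salesforce/ctrl", num_labels=2)

# Like generation prompts, inputs start with a control code ("Reviews").
inputs = tokenizer("Reviews This product exceeded my expectations.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()
```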

## CTRLConfig

[[autodoc]] CTRLConfig

## CTRLTokenizer

[[autodoc]] CTRLTokenizer
    - save_vocabulary

## CTRLModel

[[autodoc]] CTRLModel
    - forward

## CTRLLMHeadModel

[[autodoc]] CTRLLMHeadModel
    - forward

## CTRLForSequenceClassification

[[autodoc]] CTRLForSequenceClassification
    - forward

## TFCTRLModel

[[autodoc]] TFCTRLModel
    - call

## TFCTRLLMHeadModel

[[autodoc]] TFCTRLLMHeadModel
    - call

## TFCTRLForSequenceClassification

[[autodoc]] TFCTRLForSequenceClassification
    - call