It's a causal (unidirectional) transformer pre-trained using language modeling on a very large corpus
of ~140 GB of text data with the first token reserved as a control code (such as Links, Books, Wikipedia, etc.).
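Because the first token acts as a control code, prepending one to a prompt steers generation toward the corresponding domain. Below is a minimal sketch, assuming the Hugging Face `transformers` CTRL classes and the `Salesforce/ctrl` checkpoint; the prompt text is illustrative:

```python
# A minimal sketch, assuming the `transformers` CTRL classes are available.
from transformers import CTRLTokenizer, CTRLLMHeadModel

tokenizer = CTRLTokenizer.from_pretrained("Salesforce/ctrl")
model = CTRLLMHeadModel.from_pretrained("Salesforce/ctrl")

# The first token is a control code ("Wikipedia") that conditions the
# style and domain of the generated continuation.
prompt = "Wikipedia The history of the transformer architecture"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# A repetition penalty > 1.0 helps CTRL avoid degenerate repeated text.
output = model.generate(input_ids, max_length=50, repetition_penalty=1.2)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Swapping the control code (e.g. `Books` or `Links`) while keeping the rest of the prompt fixed changes the register of the output, which is the conditioning behavior the control codes were trained to provide.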