The abstract from the paper is the following:

*Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences.*