The abstract from the paper is the following:

Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks.