The abstract from the paper is the following:

Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks.