We fine-tune and evaluate our models on a wide range of knowledge-intensive NLP tasks and set the state of the art on three open-domain QA tasks, outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures.