Check public data from snakmodel-pretraining-data-v0.1
#49 - opened by KennethEnevoldsen
- Check public data from snakmodel-pretraining-data-v0.1 (paper)
- [ ] CulturaX
- [ ] Bookshop
- [ ] Dawiki (compare the sizes)
- [ ] OpenSubtitles (compare sizes)
- [ ] Twitter (which is MIT?)
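For the "compare the sizes" items, a minimal sketch of how the comparison could be done, using whitespace word counts. The helper below is hypothetical; for the actual datasets one could stream each corpus with `datasets.load_dataset(..., streaming=True)` and feed the text column into it, rather than loading everything into memory.

```python
# Hypothetical helper for comparing corpus sizes by word count.
# Dataset IDs and column names would need to be checked against the
# actual snakmodel-pretraining-data-v0.1 configs (an assumption here).

def count_words(texts):
    """Total whitespace-separated word count across an iterable of strings."""
    return sum(len(t.split()) for t in texts)

# Small in-memory example; for a real comparison, replace `sample` with
# e.g. (row["text"] for row in load_dataset(..., streaming=True)["train"]).
sample = ["Hej med dig", "Dette er en test"]
print(count_words(sample))  # 7
```

Whitespace tokenization is crude but enough for an order-of-magnitude comparison like Dawiki vs. the copy in the pretraining data.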
See the issue here on licensing questions:
https://huggingface.co/datasets/NLPnorth/snakmodel-pretraining-data-v0.1/discussions/2
For now, I think Bookshop might be the most promising dataset, consisting of 200M words.