---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: timestamp
    dtype: string
  - name: url
    dtype: string
  - name: source
    dtype: string
  splits:
  - name: train
    num_bytes: 3901684924
    num_examples: 741465
  download_size: 1933038896
  dataset_size: 3901684924
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

This dataset was created as follows: the Greek subset of CulturaX (https://huggingface.co/datasets/uonlp/CulturaX) was loaded, and a sample of roughly 0.5B tokens was taken from it. The result is intended as a general-domain Greek corpus for continual pretraining.
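The subsetting step could be sketched as below. This is a minimal illustration, not the exact script used to build the dataset: the `el` config name for the Greek subset, the use of streaming, and the `count_tokens` callback are assumptions, and the original token count may have used a different tokenizer.

```python
from typing import Callable, Iterable, Iterator


def take_token_budget(
    examples: Iterable[dict],
    count_tokens: Callable[[str], int],
    budget: int,
) -> Iterator[dict]:
    """Yield examples until the cumulative token count reaches `budget`."""
    total = 0
    for ex in examples:
        yield ex
        total += count_tokens(ex["text"])
        if total >= budget:
            break


# Hypothetical usage against CulturaX (requires `datasets` and a tokenizer):
# from datasets import load_dataset
# stream = load_dataset("uonlp/CulturaX", "el", split="train", streaming=True)
# subset = list(take_token_budget(
#     stream,
#     lambda t: len(tokenizer(t)["input_ids"]),
#     500_000_000,  # ~0.5B-token budget
# ))
```

Streaming avoids downloading the full Greek split before truncating it to the token budget.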