Because `predownload` was not specified, it will default to 8*batch_size if batch_size is not None, otherwise 64. Prior to Streaming v0.7.0, `predownload` defaulted to max(batch_size, 256 * batch_size // num_canonical_nodes).
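(For reference, a minimal sketch of the default-`predownload` rule described in the message above; the function names are illustrative, not the Streaming library's actual internals.)

from typing import Optional

def default_predownload(batch_size: Optional[int]) -> int:
    # Streaming >= 0.7.0: 8 * batch_size when batch_size is given, otherwise 64.
    return 8 * batch_size if batch_size is not None else 64

def legacy_predownload(batch_size: int, num_canonical_nodes: int) -> int:
    # Pre-0.7.0 behaviour: max(batch_size, 256 * batch_size // num_canonical_nodes).
    return max(batch_size, 256 * batch_size // num_canonical_nodes)

print(default_predownload(32))       # 256
print(default_predownload(None))     # 64
print(legacy_predownload(32, 8))     # 1024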
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Using local dataset /home/oweller/bert24-data/data/ProLong_text/code_repos/10-2...
Using tokenizer model bclavie/olmo_bert_template...
Using 1 processes for tokenization, with 1000 items per process.

Processing samples:   0%|          | 0/11330 [00:00<?, ?it/s]
Processing samples:   9%|β–‰         | 1000/11330 [00:35<06:06, 28.18it/s]
Processing samples:  18%|β–ˆβ–Š        | 2000/11330 [00:55<04:04, 38.21it/s]
Processing samples:  26%|β–ˆβ–ˆβ–‹       | 3000/11330 [01:10<02:58, 46.79it/s]
Processing samples:  35%|β–ˆβ–ˆβ–ˆβ–Œ      | 4000/11330 [01:32<02:37, 46.55it/s]
Processing samples:  44%|β–ˆβ–ˆβ–ˆβ–ˆβ–     | 5000/11330 [01:46<02:00, 52.69it/s]
Processing samples:  53%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž    | 6000/11330 [01:59<01:29, 59.67it/s]
Processing samples:  62%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–   | 7000/11330 [02:29<01:31, 47.39it/s]
Processing samples:  71%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ   | 8000/11330 [02:51<01:11, 46.75it/s]
Processing samples:  79%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰  | 9000/11330 [03:05<00:44, 52.03it/s]
Processing samples:  88%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š | 10000/11330 [05:19<01:12, 18.35it/s]
Processing samples:  97%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹| 11000/11330 [05:40<00:14, 22.65it/s]
Processing samples: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 11330/11330 [05:45<00:00, 24.56it/s]
Processing samples: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 11330/11330 [05:48<00:00, 32.50it/s]
Finished writing with a total of 152976561 train tokens.
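(For context, a hedged sketch of loading and exercising the tokenizer named in this log, assuming the Hugging Face transformers AutoTokenizer API; the actual conversion script that produced this output is not shown here.)

from transformers import AutoTokenizer

# Load the tokenizer referenced above; loading it may emit the same
# "Special tokens have been added" warning seen in this log.
tokenizer = AutoTokenizer.from_pretrained("bclavie/olmo_bert_template")

sample = "def hello():\n    return 'world'"
ids = tokenizer(sample)["input_ids"]
print(len(ids), tokenizer.convert_ids_to_tokens(ids)[:5])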