nielsr (HF Staff) committed
Commit 1824e5d · verified · 1 parent: 628b395

Improve model card: Add pipeline tag, library name, and key resource links


This PR enhances the model card by:

* Adding the `pipeline_tag: text-generation` to the YAML metadata, improving discoverability for users browsing models by task (e.g., [https://huggingface.co/models?pipeline_tag=text-generation](https://huggingface.co/models?pipeline_tag=text-generation)).
* Specifying `library_name: transformers` in the metadata, which ensures the model is correctly associated with the Hugging Face Transformers library and enables the "how to use" widget on the Hub (a brief loading sketch follows below).
* Adding explicit links to the paper, project page, and GitHub repository directly after the introductory paragraph for easier access to key information.
* Correcting the existing inline "see paper" link in the second paragraph to accurately point to the official Hugging Face paper page (`https://huggingface.co/papers/2508.06601`), replacing the previous misleading link to the project page.

These changes collectively improve the model card's completeness, usability, and adherence to Hub best practices.
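Because the card now declares `library_name: transformers` and `pipeline_tag: text-generation`, the suite's checkpoints load through the standard Transformers causal-LM API. The sketch below is illustrative only: the repo id is borrowed from the `base_model` entry in the metadata and stands in for the id of the specific model this card describes.

```python
# Illustrative sketch of the workflow enabled by `library_name: transformers`.
# The repo id below is the unfiltered baseline listed under `base_model`;
# substitute the id of the model this card actually describes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/deep-ignorance-pretraining-stage-unfiltered"  # assumed example id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short continuation to confirm the checkpoint loads as a causal LM.
inputs = tokenizer("Deep Ignorance is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```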

Files changed (1):
1. README.md (+11 −10)
README.md CHANGED
```diff
@@ -1,6 +1,14 @@
 ---
+base_model:
+- EleutherAI/deep-ignorance-pretraining-stage-unfiltered
+datasets:
+- EleutherAI/deep-ignorance-pretraining-mix
+- EleutherAI/deep-ignorance-annealing-mix
 language:
 - en
+pipeline_tag: text-generation
+license: apache-2.0
+library_name: transformers
 tags:
 - pytorch
 - causal-lm
@@ -25,19 +33,15 @@ tags:
 - safety-research
 - model-diffing
 - training-dynamics
-license: apache-2.0
-datasets:
-- EleutherAI/deep-ignorance-pretraining-mix
-- EleutherAI/deep-ignorance-annealing-mix
-base_model:
-- EleutherAI/deep-ignorance-pretraining-stage-unfiltered
 ---
 
 # Deep Ignorance Model Suite
 
 We explore an intuitive yet understudied question: Can we prevent LLMs from learning unsafe technical capabilities (such as CBRN) by filtering out enough of the relevant pretraining data before we begin training a model? Research into this question resulted in the **Deep Ignorance Suite**. In our experimental setup, we find that filtering pretraining data prevents undesirable knowledge, doesn't sacrifice general performance, and results in models that are resistant to tampering.
 
-Deep Ignorance is a collection of 6.9B models developed to facilitate research into pretraining, interpretability, training data, and unlearning [(see paper)](https://deepignorance.ai). It contains 18 models composing of a baseline model trained on unfiltered data, and 17 models trained on filtered datasets or with other safety interventions being applied. Pretraining stage models have 101 checkpoints and annealing stage have 11.
+📖 [Paper](https://huggingface.co/papers/2508.06601) | 🌐 [Project Page](https://deepignorance.ai/) | 💻 [Code](https://github.com/EleutherAI/deep-ignorance)
+
+Deep Ignorance is a collection of 6.9B models developed to facilitate research into pretraining, interpretability, training data, and unlearning, as detailed in the [Deep Ignorance paper](https://huggingface.co/papers/2508.06601). It contains 18 models composing of a baseline model trained on unfiltered data, and 17 models trained on filtered datasets or with other safety interventions being applied. Pretraining stage models have 101 checkpoints and annealing stage have 11.
 
 > **Support:**
 > The #release-discussion channel in the [EleutherAI Discord](https://discord.gg/eleutherai) is the best place to ask questions. Questions asked in other channels are less likely to be answered. The community section on HuggingFace is less actively monitored. Tag Kyle O'Brien in the EleutherAI Discord for faster response times.
@@ -51,9 +55,6 @@ Our research and model suite open up multiple avenues for future work. For insta
 
 We are also excited for the community to stress test data filtering to determine whether there are some situations where it is less tamper-resistant than our experiments suggest! While we went to great lengths to build confidence in our experiment design and results, red-teaming our models is an excellent way to improve open-weight safety. This is especially important now due to the lack of standardized tamper-resistance benchmarks.
 
-
-
-
 ## Uses and Limitations
 
 ### Quickstart
```