nielsr (HF Staff) committed
Commit: 7c9b86a (verified) · Parent(s): 7e840d7

Add pipeline tag, library name, and project page link


This PR adds the missing `pipeline_tag` and `library_name` to the model card metadata. The `pipeline_tag` is set to `image-text-to-text` given the model's functionality. The `library_name` is set to `open_clip` based on the provided code examples and file structure.
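For reference, the model card's full YAML front matter after this change looks as follows (every field is taken directly from the diff; nothing is assumed):

```yaml
---
datasets:
- UCSC-VLAA/Recap-DataComp-1B
- mlfoundations/datacomp_1b
library_name: open_clip
license: cc-by-4.0
pipeline_tag: image-text-to-text
---
```

The Hub reads this block to populate the model page: `pipeline_tag` controls which task filter the model appears under, and `library_name` determines the code snippet and "Use this model" integration shown to users.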

Files changed (1): README.md (+4 −2)
README.md CHANGED

```diff
@@ -1,11 +1,13 @@
 ---
-license: cc-by-4.0
 datasets:
 - UCSC-VLAA/Recap-DataComp-1B
 - mlfoundations/datacomp_1b
 library_name: open_clip
+license: cc-by-4.0
+pipeline_tag: image-text-to-text
 ---
-[[Paper]](https://arxiv.org/abs/2501.09446) [[github]](https://github.com/zw615/Double_Visual_Defense)
+
+[[Paper]](https://arxiv.org/abs/2501.09446) [[github]](https://github.com/zw615/Double_Visual_Defense) [[Project Page]](https://doublevisualdefense.github.io/)
 
 A DeltaCLIP-H/14-336 Model that is adversarially pre-trained with web-scale image-text data to reach non-robust-VLM helpfulness levels on clean data while being robust on adversarially attacked data.
```