Original Model Link: https://huggingface.co/apple/DFN2B-CLIP-ViT-L-14-39B

---
name: DFN2B-CLIP-ViT-L-14-39B-SAFETENSORS
base_model: openai/clip-vit-large-patch14
license: apple-amlr
license_link: LICENSE
pipeline_tag: zero-shot-image-classification
tags:
- clip
- Apple
- OpenAI
size:
- 1710540580
- 1.7 GB
tasks:
- contrastive image-text
- vision
language: en
papers:
- https://arxiv.org/abs/2309.17425
datasets:
- CommonPool-12.8B
---

# DFN2B-CLIP-ViT-L-14-39B-SAFETENSORS

A drop-in replacement for OpenCLIP, trained on DFN-2B, a dataset derived by a Data Filtering Network from the 12.8B uncurated image-text pairs of CommonPool-12.8B.
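For reference, a minimal zero-shot classification sketch. It assumes these safetensors weights load through transformers' `CLIPModel`, borrows preprocessing from the `openai/clip-vit-large-patch14` base model listed in the metadata, and uses a placeholder image path and label set:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Assumption: this repo's safetensors weights are compatible with transformers' CLIPModel.
model = CLIPModel.from_pretrained("exdysa/DFN2B-CLIP-ViT-L-14-39B-SAFETENSORS")
# Preprocessing is taken from the base_model listed above.
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

image = Image.open("example.jpg")  # placeholder: any local image
labels = ["a diagram", "a dog", "a cat"]  # placeholder label set

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
    # Image-text similarity logits, normalized into per-label probabilities
    probs = outputs.logits_per_image.softmax(dim=-1)

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```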

Safetensors · Model size: 428M params · Tensor types: I64 · F32