---
tags:
- image-classification
- birder
- pytorch
library_name: birder
license: apache-2.0
---

# Model Card for hiera_abswin_base_mim

A Hiera image encoder with an absolute window position embedding strategy, pre-trained using Masked Image Modeling (MIM). This model has *not* been fine-tuned for a specific classification task and is intended to be used as a general-purpose feature extractor or as a backbone for downstream tasks such as object detection, segmentation, or custom classification.

## Model Details

- **Model Type:** Image encoder and detection backbone
- **Model Stats:**
  - Params (M): 50.5
  - Input image size: 224 x 224
- **Dataset:** Trained on a diverse dataset of approximately 12M images, including:
  - iNaturalist 2021 (~3.3M)
  - WebVision-2.0 (~1.5M random subset)
  - imagenet-w21-webp-wds (~1M random subset)
  - SA-1B (~220K random subset of 20 chunks)
  - COCO (~120K)
  - NABirds (~48K)
  - GLDv2 (~40K random subset of 6 chunks)
  - Birdsnap v1.1 (~44K)
  - CUB-200 2011 (~18K)
  - The Birder dataset (~6M, private dataset)

- **Papers:**
  - Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles: <https://arxiv.org/abs/2306.00989>
  - Window Attention is Bugged: How not to Interpolate Position Embeddings: <https://arxiv.org/abs/2311.05613>

## Model Usage

### Image Embeddings

```python
import birder
from birder.inference.classification import infer_image

(net, model_info) = birder.load_pretrained_model("hiera_abswin_base_mim", inference=True)

# Get the image size the model was trained on
size = birder.get_size_from_signature(model_info.signature)

# Create an inference transform
transform = birder.classification_transform(size, model_info.rgb_stats)

image = "path/to/image.jpeg"  # or a PIL image
(out, embedding) = infer_image(net, image, transform, return_embedding=True)
# embedding is a NumPy array with a shape of (1, 768)
```
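A common use for such embeddings is image retrieval via cosine similarity. The sketch below is illustrative only: random arrays stand in for embeddings that would, in practice, come from `infer_image(..., return_embedding=True)` as shown above.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Flatten the (1, 768) embeddings and compute the cosine of the angle between them
    a = a.ravel()
    b = b.ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-ins for embeddings returned by infer_image(..., return_embedding=True)
rng = np.random.default_rng(0)
embedding_a = rng.standard_normal((1, 768))
embedding_b = rng.standard_normal((1, 768))

print(cosine_similarity(embedding_a, embedding_a))  # identical inputs -> 1.0
print(cosine_similarity(embedding_a, embedding_b))  # somewhere in [-1.0, 1.0]
```

Scores close to 1.0 indicate visually similar images; for large galleries, normalize the embeddings once and use a matrix product instead of pairwise calls.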

### Detection Feature Map

```python
from PIL import Image
import birder

(net, model_info) = birder.load_pretrained_model("hiera_abswin_base_mim", inference=True)

# Get the image size the model was trained on
size = birder.get_size_from_signature(model_info.signature)

# Create an inference transform
transform = birder.classification_transform(size, model_info.rgb_stats)

image = Image.open("path/to/image.jpeg")
features = net.detection_features(transform(image).unsqueeze(0))
# features is a dict (stage name -> torch.Tensor)
print([(k, v.size()) for k, v in features.items()])
# Output example:
# [('stage1', torch.Size([1, 96, 56, 56])),
#  ('stage2', torch.Size([1, 192, 28, 28])),
#  ('stage3', torch.Size([1, 384, 14, 14])),
#  ('stage4', torch.Size([1, 768, 7, 7]))]
```
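If a downstream head expects fixed-size vectors rather than spatial maps, each stage can be reduced with global average pooling. The sketch below uses dummy tensors matching the shapes printed above; in practice the dict would come from `net.detection_features`.

```python
import torch

# Dummy feature maps matching the stage shapes shown above
# (in practice: features = net.detection_features(...))
features = {
    "stage1": torch.randn(1, 96, 56, 56),
    "stage2": torch.randn(1, 192, 28, 28),
    "stage3": torch.randn(1, 384, 14, 14),
    "stage4": torch.randn(1, 768, 7, 7),
}

# Global average pooling: (N, C, H, W) -> (N, C)
pooled = {k: v.mean(dim=(2, 3)) for k, v in features.items()}
print([(k, tuple(v.shape)) for k, v in pooled.items()])
# [('stage1', (1, 96)), ('stage2', (1, 192)), ('stage3', (1, 384)), ('stage4', (1, 768))]
```

The pooled `stage4` vector matches the (1, 768) embedding dimensionality; the earlier stages provide coarser but higher-resolution features for detection or segmentation necks such as an FPN.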

## Citation

```bibtex
@misc{ryali2023hierahierarchicalvisiontransformer,
      title={Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles},
      author={Chaitanya Ryali and Yuan-Ting Hu and Daniel Bolya and Chen Wei and Haoqi Fan and Po-Yao Huang and Vaibhav Aggarwal and Arkabandhu Chowdhury and Omid Poursaeed and Judy Hoffman and Jitendra Malik and Yanghao Li and Christoph Feichtenhofer},
      year={2023},
      eprint={2306.00989},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2306.00989},
}

@misc{bolya2023windowattentionbuggedinterpolate,
      title={Window Attention is Bugged: How not to Interpolate Position Embeddings},
      author={Daniel Bolya and Chaitanya Ryali and Judy Hoffman and Christoph Feichtenhofer},
      year={2023},
      eprint={2311.05613},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2311.05613},
}
```