Update description and add analysis of potential test sets

Files changed:
- README.md (+74, -1)
- notebooks/lilabc_test-filter.ipynb (added)
- notebooks/lilabc_test-filter.py (+306, -0)

README.md (CHANGED)
This dataset contains the LILA BC full camera trap information, with a notebook ([`lilabc_CT.ipynb`](https://huggingface.co/datasets/imageomics/lila-bc-camera/blob/main/notebooks/lilabc_CT.ipynb)) exploring the available data. The last run of this notebook (in [commit 010ecf0](https://huggingface.co/datasets/imageomics/lila-bc-camera/commit/010ecf0c6a2e0c99c9481cea793d8b1556b5c71e)) uses and produces the LILA CSVs found [here](https://huggingface.co/datasets/imageomics/lila-bc-camera/tree/010ecf0c6a2e0c99c9481cea793d8b1556b5c71e/data).
More details are below in [Data Instances](#data-instances).

We also look at potential test sets constructed from 7 different LILA datasets, using [data/potential-test-sets/lila_image_urls_and_labels.csv](https://huggingface.co/datasets/imageomics/lila-bc-camera/blob/37b93ddf25c63bc30d8488ef78c1a53b9c4a3115/data/potential-test-sets/lila_image_urls_and_labels.csv) (sha256: 3fdf87ceea75f8720208a95350c3c70831a6c1c745a92bb68c7f2c3239e4c455) to separate them out.
We are specifically interested in the following datasets, identified in the [spreadsheet](https://docs.google.com/spreadsheets/d/1sC90DolAvswDUJ1lNSf0sk_norR24LwzX2O4g9OxMZE/edit?usp=drive_link) as labeled at the image level:

- [Snapshot Safari 2024 Expansion](https://lila.science/datasets/snapshot-safari-2024-expansion/)
- [Ohio Small Animals](https://lila.science/datasets/ohio-small-animals/)
- [Desert Lion Conservation Camera Traps](https://lila.science/datasets/desert-lion-conservation-camera-traps/)
- [Orinoquia Camera Traps](https://lila.science/datasets/orinoquia-camera-traps/)
- [SWG Camera Traps 2018-2020](https://lila.science/datasets/swg-camera-traps)
- [Island Conservation Camera Traps](https://lila.science/datasets/island-conservation-camera-traps/)
- [ENA24-detection](https://lila.science/datasets/ena24detection)

There are 2,867,312 images in this subset (once humans and non-creatures are removed).
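To confirm that a local copy of the CSV matches the revision referenced above, its checksum can be verified before use. A small sketch (the path in the commented line is a placeholder for wherever the file was downloaded):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a (potentially very large) file through sha256 without loading it all."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

EXPECTED = "3fdf87ceea75f8720208a95350c3c70831a6c1c745a92bb68c7f2c3239e4c455"
# assert sha256_of("lila_image_urls_and_labels.csv") == EXPECTED  # placeholder path
```

Streaming in 1 MB chunks keeps memory flat even for the ~16 GB file.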

[NOAA Puget Sound Nearshore Fish 2017-2018](https://lila.science/datasets/noaa-puget-sound-nearshore-fish) could be interesting for its combined categories, though it is _very_ general (it has only three labels: `fish`, `crab`, and `fish_and_crab`). It also isn't included in the CSV, so it is not explored further.

More details are provided in [Test Data Instances](#test-data-instances), below.

**Repo file description at [commit 87e2e4d](https://huggingface.co/datasets/imageomics/lila-bc-camera/tree/87e2e4d46cf1e8daadd74b7738856a1e30754de3), when we were considering it for BioCLIP v1 testing:**

Images have been deduplicated and reduced down to species designation, with the main CSV filtered to just those with species labels and only one animal per image. This was done by pulling the first instance of an animal so that there are no repeat images of the same animal from essentially the same time.

    lila_image_urls_and_labels.csv
    lila_image_urls_and_labels_species.csv  # Outdated
    lila_image_urls_and_labels_wHumans.csv
    potential-test-sets/
        lila-taxonomy-mapping_release.csv
        lila_image_urls_and_labels.csv
notebooks/
    lilabc_CT.ipynb
    lilabc_CT.py
    lilabc_test-EDA.ipynb
    lilabc_test-EDA.py
    lilabc_test-filter.ipynb
    lilabc_test-filter.py
```

Snapshot Camdeboo 3
```

### Test Data Instances

**data/potential-test-sets/lila_image_urls_and_labels.csv:** Reduced down to the datasets of interest listed below; all rows with an `original_label` of "empty" or a null `scientific_name` (these had non-taxa labels) were removed.
Additionally, we added a `multi_species` column (a boolean indicating that multiple species are present in the image; an image is listed once for each species it contains) and a count of how many different species are in each of those images (`num_species` column).

There are 367 unique scientific names in this subset (355 by full 7-rank), and 184 unique among just those labeled at the image level (180 by full 7-rank), as indicated by the CSV.
This was then subdivided into CSVs for each of the target datasets (`data/potential-test-sets/<dataset_name>_image_urls_and_labels.csv`).
These were initially identified from our [master spreadsheet](https://docs.google.com/spreadsheets/d/1sC90DolAvswDUJ1lNSf0sk_norR24LwzX2O4g9OxMZE/edit?gid=0#gid=0) by selecting datasets labeled at the image level and those that provide a meaningful measure of our biodiversity-focused model (e.g., ones that include rare, less-commonly seen species or that target areas with greater biodiversity).

- [Snapshot Safari 2024 Expansion](https://lila.science/datasets/snapshot-safari-2024-expansion/) -- actually labeled by sequence, so not a good choice for testing
- [Ohio Small Animals](https://lila.science/datasets/ohio-small-animals/)
- [Desert Lion Conservation Camera Traps](https://lila.science/datasets/desert-lion-conservation-camera-traps/)
- [Orinoquia Camera Traps](https://lila.science/datasets/orinoquia-camera-traps/)
- [SWG Camera Traps 2018-2020](https://lila.science/datasets/swg-camera-traps) -- actually labeled by sequence, so not a good choice for testing
- [Island Conservation Camera Traps](https://lila.science/datasets/island-conservation-camera-traps/)
- [ENA24-detection](https://lila.science/datasets/ena24detection)
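With the `annotation_level` and `num_species` columns described above, a downstream consumer can carve out a clean, single-species, image-level test pool. A minimal sketch (shown on toy rows; in practice `df` would be the loaded potential-test-sets CSV):

```python
import pandas as pd

def single_species_image_rows(df: pd.DataFrame) -> pd.DataFrame:
    """Keep rows that are image-level annotated and contain exactly one species."""
    mask = (df["annotation_level"] == "image") & (df["num_species"] == 1.0)
    return df.loc[mask].copy()

# Toy rows standing in for the real CSV:
toy = pd.DataFrame({
    "annotation_level": ["image", "image", "sequence"],
    "num_species": [1.0, 2.0, 1.0],
    "scientific_name": ["procyon lotor", "procyon lotor", "vulpes vulpes"],
})
print(len(single_species_image_rows(toy)))  # 1
```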
Multi-species counts (full):
```
num_species
1.0     2753832
2.0      114825
3.0       13995
4.0        1704
5.0         230
14.0         42
```
For image-level labels:
```
num_species
1.0     305821
2.0       1154
3.0          3
```
It looks like we'll have about 306K images across the 5 datasets that have image-level labels.

### Data Fields
[More Information Needed]
<!--

## Additional Information

### Dataset Curators
Elizabeth Campolongo

### Licensing Information
[More Information Needed]

This particular compilation has been marked as dedicated to the public domain by applying the [CC0 Public Domain Waiver](https://creativecommons.org/publicdomain/zero/1.0/). However, images may be licensed under different terms (as noted above).

### Citation Information

For test sets (the citations provided on their LILA BC pages are included):

- [Ohio Small Animals](https://lila.science/datasets/ohio-small-animals/)
  - Balasubramaniam S. [Optimized Classification in Camera Trap Images: An Approach with Smart Camera Traps, Machine Learning, and Human Inference](https://etd.ohiolink.edu/acprod/odb_etd/etd/r/1501/10?clear=10&p10_accession_num=osu1721417695430687). Master's thesis, The Ohio State University. 2024.
- [Desert Lion Conservation Camera Traps](https://lila.science/datasets/desert-lion-conservation-camera-traps/)
- [Orinoquia Camera Traps](https://lila.science/datasets/orinoquia-camera-traps/)
  - Vélez J, McShea W, Shamon H, Castiblanco-Camacho PJ, Tabak MA, Chalmers C, Fergus P, Fieberg J. [An evaluation of platforms for processing camera-trap data using artificial intelligence](https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/2041-210X.14044). Methods in Ecology and Evolution. 2023 Feb;14(2):459-77.
- [Island Conservation Camera Traps](https://lila.science/datasets/island-conservation-camera-traps/)
- [ENA24-detection](https://lila.science/datasets/ena24detection)
  - Yousif H, Kays R, Zhihai H. Dynamic Programming Selection of Object Proposals for Sequence-Level Animal Species Classification in the Wild. IEEE Transactions on Circuits and Systems for Video Technology, 2019. ([bibtex](http://lila.science/wp-content/uploads/2019/12/hayder2019_bibtex.txt))

[More Information Needed]
<!--
If you want to include BibTex, replace "<>"s with your info
notebooks/lilabc_test-filter.ipynb (ADDED)
The diff for this file is too large to render; see the raw file.

notebooks/lilabc_test-filter.py (ADDED)
# ---
# jupyter:
#   jupytext:
#     formats: ipynb,py:percent
#     text_representation:
#       extension: .py
#       format_name: percent
#       format_version: '1.3'
#     jupytext_version: 1.16.0
#   kernelspec:
#     display_name: data-dev
#     language: python
#     name: python3
# ---

# %%
import pandas as pd
import seaborn as sns

sns.set_style("whitegrid")

# %% [markdown]
# Load in the LILA CSV from [this commit](https://huggingface.co/datasets/imageomics/lila-bc-camera/blob/37b93ddf25c63bc30d8488ef78c1a53b9c4a3115/data/potential-test-sets/lila_image_urls_and_labels.csv) (this will take a while).
#
# sha256: 3fdf87ceea75f8720208a95350c3c70831a6c1c745a92bb68c7f2c3239e4c455
# size: 15931383983 bytes

# %%
df = pd.read_csv("../data/potential-test-sets/lila_image_urls_and_labels.csv", low_memory=False)
df.head()

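An editorial aside: at ~16 GB, this CSV can strain memory. Reading only the columns the notebook actually filters on is one way to lighten the load. A sketch, demonstrated on a toy in-memory CSV since the real file is not assumed present (the real call would pass the file path instead):

```python
import io

import pandas as pd

# Columns this notebook filters on (a subset of the full CSV's columns).
USECOLS = ["dataset_name", "annotation_level", "original_label", "scientific_name"]

# Toy stand-in for the 16 GB file; in practice pass the real path instead.
toy_csv = io.StringIO(
    "dataset_name,annotation_level,original_label,scientific_name,url_aws\n"
    "Ohio Small Animals,image,raccoon,procyon lotor,http://example.com/1.jpg\n"
    "ENA24,image,empty,,http://example.com/2.jpg\n"
)
df_small = pd.read_csv(toy_csv, usecols=USECOLS, dtype="string")
print(df_small.columns.tolist())
```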
# %%
df.columns

# %%
df.annotation_level.value_counts()

# %% [markdown]
# Annotation level indicates image vs. sequence (or unknown); we specifically want those annotated at the image level, since they should be "clean" images. Though we will want to label them with how many distinct species are in each image first.
#
# We have 3,533,538 images labeled at the image level.
#
# ### Check Dataset Counts
#
# 1. Make sure we have all the datasets expected. We're specifically interested in:
#     - [Snapshot Safari 2024 Expansion](https://lila.science/datasets/snapshot-safari-2024-expansion/)
#     - [Ohio Small Animals](https://lila.science/datasets/ohio-small-animals/)
#     - [Desert Lion Conservation Camera Traps](https://lila.science/datasets/desert-lion-conservation-camera-traps/)
#     - [Orinoquia Camera Traps](https://lila.science/datasets/orinoquia-camera-traps/)
#     - [SWG Camera Traps 2018-2020](https://lila.science/datasets/swg-camera-traps)
#     - [Island Conservation Camera Traps](https://lila.science/datasets/island-conservation-camera-traps/)
#     - [NOAA Puget Sound Nearshore Fish 2017-2018](https://lila.science/datasets/noaa-puget-sound-nearshore-fish) could be interesting for the combined categories, though it is _very_ general (it has only three labels: `fish`, `crab`, and `fish_and_crab`).
# 2. Check which/how many datasets are labeled at the image level (and check for a match to [Andrey's spreadsheet](https://docs.google.com/spreadsheets/d/1sC90DolAvswDUJ1lNSf0sk_norR24LwzX2O4g9OxMZE/edit?usp=drive_link)).

# %%
df.dataset_name.value_counts()

# %%
df.groupby(["dataset_name"]).annotation_level.value_counts()

# %% [markdown]
# It seems Snapshot Safari 2024 Expansion and SWG Camera Traps are not labeled at the image level, despite the indication in the spreadsheet...
#
# The NOAA one isn't here, but that's okay. Let's also take a look at [ENA24](https://lila.science/datasets/ena24detection).
#
# We'll subset to just the 7 identified, though we'll likely not continue with Snapshot Safari and SWG, since we want to make sure the test set labels are accurate.

# %%
datasets_of_interest = ["Desert Lion Conservation Camera Traps",
                        "Island Conservation Camera Traps",
                        "Ohio Small Animals",
                        "Orinoquia Camera Traps",
                        "SWG Camera Traps",
                        "Snapshot Safari 2024 Expansion",
                        "ENA24"]

# %%
reduced_df = df.loc[df["dataset_name"].isin(datasets_of_interest)].copy()
reduced_df.head()

# %% [markdown]
# Observe that we also now get multiple URL options; `url_aws` will likely be best/fastest for use with [`distributed-downloader`](https://github.com/Imageomics/distributed-downloader) to get the images.

# %%
reduced_df.info(show_counts=True)

# %% [markdown]
# Let's remove empty frames to get a better sense of what we have.

# %%
df_cleaned = reduced_df.loc[reduced_df.original_label != "empty"].copy()
df_cleaned.info(show_counts=True)

# %% [markdown]
# Not all rows have a scientific name, though those could be the non-taxa labels.

# %%
df_cleaned.loc[df_cleaned["scientific_name"].isna(), "original_label"].value_counts()

# %% [markdown]
# These are clearly also labels to remove, so we can simply reduce down to only those with non-null `scientific_name` values as well.

# %%
df_cleaned = df_cleaned.loc[~df_cleaned["scientific_name"].isna()].copy()
df_cleaned.info(show_counts=True)

# %%
df_cleaned.nunique()

# %% [markdown]
# We have 368 unique `scientific_name` values, some of which are definitely just higher ranks (e.g., Aves), but there are 283 species, so our biodiversity is somewhere between the two.
#
# It is also interesting to note that there are duplicate URLs here; these would be the indicators of multiple species in an image, as they correspond to the number of unique image IDs. Though they could also be the by-sequence images that we expected to be by-image.

# %%
# double-check for humans
# NB: "homo sapien" is a typo (missing the trailing "s") -- caught and corrected further down.
df_cleaned.loc[df_cleaned.species == "homo sapien"]

# %% [markdown]
# ## Save the Reduced Data (no more "empty" labels)

# %%
df_cleaned.to_csv("../data/potential-test-sets/lila_image_urls_and_labels.csv", index=False)

# %%
print(df_cleaned.phylum.value_counts())
print()
print(df_cleaned["class"].value_counts())

# %% [markdown]
# All images are in Animalia, as expected; we have 2 phyla represented and 8 classes:
# - Predominantly Chordata, and within that phylum, Mammalia is the vast majority, though aves is about 10%.
# - Note that not every image with a phylum label has a class label.
# - Insecta, malacostraca, and arachnida are all classes in the phylum Arthropoda.
#
# ### Label Multi-Species Images
# We'll go by both the URL and image ID, which do seem to correspond to the same images (for uniqueness).

# %%
df_cleaned["multi_species"] = df_cleaned.duplicated(subset=["url_aws", "image_id"], keep=False)

df_cleaned.loc[df_cleaned["multi_species"]].nunique()

# %% [markdown]
# We've got just under 63K images that have multiple species. We can figure out how many each of them have, and then move on to looking at images per sequence and other labeling info.

# %%
multi_sp_imgs = list(df_cleaned.loc[df_cleaned["multi_species"], "image_id"].unique())

# %%
# NOTE: this per-image loop does a full-frame lookup for every multi-species image,
# so it is slow on millions of rows; it works as a one-off pass.
for img in multi_sp_imgs:
    df_cleaned.loc[df_cleaned["image_id"] == img, "num_species"] = df_cleaned.loc[df_cleaned["image_id"] == img].shape[0]

df_cleaned.head()

# %% [markdown]
# Set all the non-multi-species images to show 1 in the `num_species` column.

# %%
df_cleaned.loc[df_cleaned["num_species"].isna(), "num_species"] = 1.0

df_cleaned.num_species.value_counts()

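An editorial aside: the per-image loop above, plus the fill-with-1 step, can be collapsed into one grouped count. A sketch of the equivalent vectorized computation, demonstrated on toy rows (it assumes only that there is one row per species per image, keyed by `image_id`, as in this notebook):

```python
import pandas as pd

# Toy frame: one row per (image, species) pair, as in the notebook.
toy = pd.DataFrame({
    "image_id": ["a", "b", "b", "c", "c", "c"],
    "scientific_name": ["x", "x", "y", "x", "y", "z"],
})

# One grouped count replaces both the loop and the later fillna-with-1 step:
toy["num_species"] = toy.groupby("image_id")["image_id"].transform("size").astype(float)
print(toy["num_species"].tolist())  # [1.0, 2.0, 2.0, 3.0, 3.0, 3.0]
```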
# %%
df_cleaned.loc[df_cleaned["num_species"] == 14.0].sample(4)

# %% [markdown]
# Found a typo above in the human check ("homo sapien" is missing the trailing "s")... it seems all taxa are lowercase, but let's make sure the corrected name is enough to catch them all.

# %%
print("num homo sapiens: ", df_cleaned.loc[df_cleaned.species == "homo sapiens"].shape)
df_cleaned.loc[df_cleaned["original_label"] == "human"].shape

# %% [markdown]
# Did any of these factor into the multi-species counts?

# %%
df_cleaned.loc[(df_cleaned["species"] == "homo sapiens") & (df_cleaned["multi_species"])].shape

# %%
df_cleaned.loc[(df_cleaned["species"] == "homo sapiens") & (df_cleaned["multi_species"])].sample(4)

# %% [markdown]
# Let's fix those counts then.

# %%
human_multi_species = list(df_cleaned.loc[(df_cleaned["species"] == "homo sapiens") & (df_cleaned["multi_species"]), "image_id"].unique())

for img in human_multi_species:
    df_cleaned.loc[df_cleaned["image_id"] == img, "num_species"] = df_cleaned.loc[df_cleaned["image_id"] == img, "num_species"] - 1

df_cleaned.num_species.value_counts()

# %% [markdown]
# Actually remove the human indicators.

# %%
df_cleaned = df_cleaned.loc[df_cleaned["species"] != "homo sapiens"].copy()

# %% [markdown]
# We need to remove the images that have humans and other species in them, too.

# %%
df_cleaned = df_cleaned.loc[~df_cleaned["image_id"].isin(human_multi_species)].copy()

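An editorial sanity check of the removal logic above, on toy rows (only `image_id` and `species` columns are assumed): after dropping human rows and then every image where a human co-occurred with other species, no trace of those images should remain.

```python
import pandas as pd

toy = pd.DataFrame({
    "image_id": ["a", "b", "b", "c"],
    "species": ["homo sapiens", "homo sapiens", "vulpes vulpes", "vulpes vulpes"],
})

# Images where a human co-occurs with other species:
human_imgs = set(toy.loc[toy["species"] == "homo sapiens", "image_id"])
multi_with_human = human_imgs & set(toy.loc[toy["species"] != "homo sapiens", "image_id"])

# Drop human rows, then drop any image that also contained a human.
cleaned = toy.loc[toy["species"] != "homo sapiens"]
cleaned = cleaned.loc[~cleaned["image_id"].isin(multi_with_human)]
print(cleaned["image_id"].tolist())  # ['c'] -- image "b" is dropped entirely
```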
# %% [markdown]
# #### Save this to CSV now that we have those counts

# %%
df_cleaned.to_csv("../data/potential-test-sets/lila_image_urls_and_labels.csv", index=False)

# %% [markdown]
# ### Generate individual CSVs for the datasets

# %%
for dataset in datasets_of_interest:
    df_cleaned.loc[df_cleaned["dataset_name"] == dataset].to_csv(dataset + "_image_urls_and_labels.csv", index=False)

# Manually moved these to the data/potential-test-sets/ directory and renamed them
# to not have spaces in the filenames (replaced spaces with underscores).

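An editorial aside: the manual move-and-rename step could be avoided by substituting underscores when building each output path. A sketch, writing toy rows to a temporary directory so it runs anywhere (in the notebook the target would be the `data/potential-test-sets/` directory):

```python
import os
import tempfile

import pandas as pd

toy = pd.DataFrame({
    "dataset_name": ["Ohio Small Animals", "ENA24"],
    "scientific_name": ["procyon lotor", "ursus americanus"],
})

out_dir = tempfile.mkdtemp()
written = []
for dataset in toy["dataset_name"].unique():
    # Replace spaces up front so no rename pass is needed afterwards.
    fname = dataset.replace(" ", "_") + "_image_urls_and_labels.csv"
    toy.loc[toy["dataset_name"] == dataset].to_csv(os.path.join(out_dir, fname), index=False)
    written.append(fname)
print(sorted(written))  # ['ENA24_image_urls_and_labels.csv', 'Ohio_Small_Animals_image_urls_and_labels.csv']
```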
# %% [markdown]
# Get some basic stats.

# %%
print(f"there are {df_cleaned.shape[0]} images")
print(f"we have {df_cleaned['scientific_name'].nunique()} unique scientific names")
print(f"when we filter for image-level labels, we have {df_cleaned.loc[df_cleaned['annotation_level'] == 'image', 'scientific_name'].nunique()} scientific names")

# %%
df_cleaned.loc[df_cleaned['annotation_level'] == 'image', 'num_species'].value_counts()

# %% [markdown]
# We will want to dedicate some more time to exploring some of these taxonomic counts, but we'll first look at the number of unique taxa by full Linnean 7-rank (`unique_7_tuple`). We'll compare these to the number of unique scientific and common names, then perhaps add a count of the number of creatures based on one of those labels. At that point we may save another copy of this CSV and start a new analysis notebook.

# %%
df_cleaned.annotation_level.value_counts()

# %% [markdown]
# Let's get a sense of the total number of unique taxa, then separate out the by-image ones for a unique taxa count there. Then we'll separate out each dataset into its own CSV for individual analysis.

# %% [markdown]
# ### Taxonomic String Exploration

# %%
lin_taxa = ['kingdom', 'phylum', 'class', 'order', 'family', 'genus', 'species']

# %% [markdown]
# #### How many have all 7 Linnean ranks?

# %%
df_all_taxa = df_cleaned.dropna(subset=lin_taxa)
df_all_taxa[lin_taxa].info(show_counts=True)

# %%
df_all_taxa_img = df_cleaned.loc[df_cleaned["annotation_level"] == "image"].dropna(subset=lin_taxa)
df_all_taxa_img[lin_taxa].info(show_counts=True)

# %%
df_cleaned.loc[df_cleaned["annotation_level"] == "image"].shape

# %% [markdown]
# That's not too bad, considering some labels are definitely just common names or classes: 2,187,756 out of 2,867,312.
#
# 249,847 when we drop to just image-level annotations (out of 306,978).
#
# Now, how many different 7-tuples are there?
#
# #### How many unique 7-tuples?

# %%
# number of unique 7-tuples in the full dataset
df_cleaned['lin_duplicate'] = df_cleaned.duplicated(subset=lin_taxa, keep='first')
df_unique_lin_taxa = df_cleaned.loc[~df_cleaned['lin_duplicate']].copy()
print(f"unique taxa in all: {df_unique_lin_taxa.shape[0]}")
# NB: single quotes inside the f-string (nested double quotes are a syntax error before Python 3.12)
print(f"unique taxa in image-level labeled: {df_unique_lin_taxa.loc[df_unique_lin_taxa['annotation_level'] == 'image'].shape[0]}")

# %% [markdown]
# Pretty much aligns with the scientific name counts.

# %%
df_unique_lin_taxa.scientific_name.nunique()

# %%
df_unique_lin_taxa.loc[(df_unique_lin_taxa["scientific_name"].isna()) | (df_unique_lin_taxa["common_name"].isna())]

# %% [markdown]
# Let's check out our top ten labels, scientific names, and common names. Then we'll save this cleaned metadata file.

# %%
df_cleaned["original_label"].value_counts()[:10]

# %%
df_cleaned["scientific_name"].value_counts()[:10]

# %%
df_cleaned["common_name"].value_counts()[:10]

# %%
sns.histplot(df_cleaned, y='class')

# %%
sns.histplot(df_cleaned.loc[df_cleaned["class"].isin(["aves", "mammalia", "reptilia"])], y='order')

# %%