Download the class-label mappings from the Hub and create the id2label and label2id dictionaries:
import json
from huggingface_hub import hf_hub_download

# Download the ADE20K label file from the Hub and parse it
repo_id = "huggingface/label-files"
filename = "ade20k-id2label.json"
with open(hf_hub_download(repo_id, filename, repo_type="dataset"), "r") as f:
    id2label = json.load(f)

# JSON keys are strings; convert them to integer class ids
id2label = {int(k): v for k, v in id2label.items()}
label2id = {v: k for k, v in id2label.items()}
num_labels = len(id2label)
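These dictionaries are typically passed to the model so that predicted class indices map back to human-readable labels. A minimal sketch, using nvidia/mit-b0 as an assumed example checkpoint (swap in whichever checkpoint you are fine-tuning):

from transformers import AutoModelForSemanticSegmentation

# id2label/label2id are stored in the model config and used at inference time
model = AutoModelForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b0",  # assumed example checkpoint; replace with your own
    id2label=id2label,
    label2id=label2id,
)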
Custom dataset
You could also create and use your own dataset if you prefer to train with the run_semantic_segmentation.py script instead of a notebook instance.
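A custom dataset of this kind pairs each image with its segmentation map. A minimal sketch of building one with the datasets library, assuming hypothetical file paths and the column names "image" and "label" (the script's expected column names may be configurable, so check its arguments):

from datasets import Dataset, Image

# Hypothetical, sorted lists of matching image/annotation file paths
image_paths = sorted(["data/images/0001.png", "data/images/0002.png"])
label_paths = sorted(["data/annotations/0001.png", "data/annotations/0002.png"])

def create_dataset(image_paths, label_paths):
    # Pair each image with its segmentation map, then cast both
    # columns to the Image feature so the files are decoded on access
    dataset = Dataset.from_dict({"image": image_paths, "label": label_paths})
    dataset = dataset.cast_column("image", Image())
    dataset = dataset.cast_column("label", Image())
    return dataset

dataset = create_dataset(image_paths, label_paths)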