Seg-Zero-7B
This model is based on the paper Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement. Seg-Zero uses a decoupled architecture consisting of a reasoning model and a segmentation model. The reasoning model is trained purely via reinforcement learning (GRPO), without any explicit reasoning data, which leads to robust zero-shot generalization and emergent test-time reasoning.
Code: https://github.com/dvlab-research/Seg-Zero
Description
Seg-Zero-7B introduces a decoupled architecture consisting of a reasoning model (a Qwen2.5-VL fine-tune) and a segmentation model (SAM2). The reasoning model interprets the user's intention, generates an explicit reasoning chain, and produces positional prompts (a bounding box plus click points), which the segmentation model then uses to generate pixel-level masks.
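To make the decoupled flow concrete, here is a minimal sketch of how the reasoning model's text output can be turned into a positional prompt for the segmentation stage. The <think>/<answer> tag format and the JSON keys (bbox, points_1, points_2) are assumptions based on the paper's examples; check the linked repository for the exact output format of the released checkpoint.

import json
import re

def parse_positional_prompt(output_text: str):
    # Assumed output format: chain of thought inside <think>...</think> and a JSON
    # positional prompt inside <answer>...</answer>; key names may differ in practice.
    match = re.search(r"<answer>\s*(\{.*?\})\s*</answer>", output_text, re.DOTALL)
    if match is None:
        raise ValueError("no <answer> block found in the model output")
    answer = json.loads(match.group(1))
    bbox = answer["bbox"]                              # [x1, y1, x2, y2] bounding box
    points = [answer["points_1"], answer["points_2"]]  # click points inside the target object
    return bbox, points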
Usage
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the reasoning model and its tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained(
    "Ricky06662/Seg-Zero-7B",
    torch_dtype=torch.bfloat16,  # half precision keeps the 7B model within typical GPU memory
    device_map="auto",           # requires accelerate; places weights on available devices
)
tokenizer = AutoTokenizer.from_pretrained("Ricky06662/Seg-Zero-7B")
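The reasoning model only produces the positional prompt; the pixel-level mask comes from the segmentation model (SAM2 in the paper). Below is a minimal sketch of that second stage using the SAM2ImagePredictor from the facebookresearch/sam2 package. The checkpoint name, the example image path, and the bbox/points values are illustrative assumptions rather than part of this model card.

import numpy as np
import torch
from PIL import Image
from sam2.sam2_image_predictor import SAM2ImagePredictor

# Hypothetical positional prompt from the reasoning step (see the parsing sketch above).
bbox = np.array([48, 112, 310, 405])         # assumed [x1, y1, x2, y2] box
points = np.array([[120, 200], [250, 330]])  # assumed click points on the target object
labels = np.array([1, 1])                    # 1 marks each click as foreground

predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")  # assumed checkpoint
image = np.array(Image.open("example.jpg").convert("RGB"))                   # illustrative image path

with torch.inference_mode():
    predictor.set_image(image)
    masks, scores, _ = predictor.predict(
        point_coords=points,
        point_labels=labels,
        box=bbox,
        multimask_output=False,
    )
mask = masks[0]  # (H, W) binary mask for the referred object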