HumanF-MarkrAI/Gukbap-Gemma3-12B-VL🍚

Model Details🍚

Model Description

  • Developed by: HumanF-MarkrAI
  • Model type: Korean-VL-Gemma3-12B
  • Language(s): Korean + English
  • Context Length: 2048
  • License: cc-by-4.0
  • Finetuned from model: google/gemma-3-12b-it
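
A minimal usage sketch (our illustration, not from the model card), assuming the checkpoint follows the standard Gemma 3 vision-language API in Hugging Face Transformers; the image URL and prompt are placeholders:

```python
import torch
from transformers import AutoProcessor, Gemma3ForConditionalGeneration

model_id = "HumanF-MarkrAI/Gukbap-Gemma3-12B-VL"

# Assumption: the checkpoint loads with the standard Gemma 3 VL classes.
model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# One image plus a Korean question, in the Gemma 3 chat format.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/food.jpg"},  # placeholder image
            {"type": "text", "text": "이 사진을 설명해 주세요."},  # "Please describe this photo."
        ],
    }
]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device)

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens (mind the 2048-token context limit).
print(processor.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```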

Model Sources

For training, we used 4× H100 80GB GPUs.

Implications🍚

For details about our model, please see the 🔥Gukbap-LMM Blog🔥.

Training Method (SFT)🧐

The following papers describe the foundational methodologies behind our dataset construction and training methods.

SFT Text-Datasets (Private)

To build our open-source-based dataset, we used microsoft/WizardLM-2-8x22B through DeepInfra.
Our datasets were generated with the instruction-evolving (Evol-Instruct) system proposed by WizardLM; a sketch of one evolution step is shown below. For training, we used 1,849 training examples and 200 validation examples.
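
A hedged sketch of one such evolution step (our illustration, not the authors' private pipeline): calling microsoft/WizardLM-2-8x22B through DeepInfra's OpenAI-compatible endpoint. The prompt wording and helper name are assumptions:

```python
from openai import OpenAI

# DeepInfra exposes an OpenAI-compatible API; the key is a placeholder.
client = OpenAI(
    base_url="https://api.deepinfra.com/v1/openai",
    api_key="YOUR_DEEPINFRA_API_KEY",
)

# Illustrative Evol-Instruct-style prompt, not the actual (private) recipe.
EVOLVE_PROMPT = (
    "Rewrite the following instruction so that it is more complex and "
    "requires deeper reasoning, while remaining answerable:\n\n{instruction}"
)

def evolve(instruction: str) -> str:
    """One evolution step: ask WizardLM-2 for a harder version of the instruction."""
    response = client.chat.completions.create(
        model="microsoft/WizardLM-2-8x22B",
        messages=[{"role": "user", "content": EVOLVE_PROMPT.format(instruction=instruction)}],
        temperature=0.7,
    )
    return response.choices[0].message.content

harder = evolve("김치에 대해 설명해 주세요.")  # "Please explain kimchi."
```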

Benchmarks🤗

Korean MM Benchmark Score (Zero-shot)

We evaluated the models internally using 🔥our code🔥.
For the K-LLAVA-W evaluation, we used gpt-4o-2024-08-06 as the judge.

| Model | K-MMBench | K-MMStar | K-DTCBench | K-LLAVA-W | AVG |
| --- | --- | --- | --- | --- | --- |
| Gukbap-Gemma3-12B🍚 | 82.88 | 48.53 | 64.17 | 72.83 | 67.10 |
| gemma-3-12b-it | 82.24 | 48.13 | 66.25 | 68.50 | 66.28 |
| Gukbap-Gemma2-9B-VL🍚 | 80.16 | 54.20 | 52.92 | 63.83 | 62.78 |
| Ovis1.6-Gemma2-9B | 52.46 | 50.40 | 47.08 | 55.67 | 51.40 |
| VARCO-VISION-14B | 87.16 | 58.13 | 85.42 | 51.17 | 70.47 |
| llama-3.2-Korean-Bllossom-AICA-5B | 26.01 | 21.60 | 17.08 | 45.33 | 27.51 |
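
For the K-LLAVA-W column above, a hedged sketch of LLaVA-Bench-in-the-Wild-style judging with gpt-4o-2024-08-06; the rubric prompt and function name are illustrative stand-ins for our actual evaluation code:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative 1-10 rubric; the real K-LLAVA-W prompt may differ.
JUDGE_PROMPT = """You are grading a vision-language model's Korean answer.
Question: {question}
Reference answer: {reference}
Model answer: {candidate}
Rate the model answer from 1 to 10 and reply with only the number."""

def judge(question: str, reference: str, candidate: str) -> int:
    response = client.chat.completions.create(
        model="gpt-4o-2024-08-06",
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(
            question=question, reference=reference, candidate=candidate
        )}],
        temperature=0.0,
    )
    return int(response.choices[0].message.content.strip())
```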

MM Benchmarks

Gukbap-VL Series models🍚🍚

BibTeX

@article{HumanF-MarkrAI,
  title={Gukbap-Gemma3-12B-VL},
  author={MarkrAI},
  year={2025},
  url={https://huggingface.co/HumanF-MarkrAI}
}