This repository contains a fine-tuned version of Gemma-3-1B trained on a custom-made synthetic dataset.

Usage:

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="tabularisai/german-gemma-3-1b-it",
    device="cuda",
    torch_dtype="auto",
)

messages = [
    {"role": "user", "content": "Schreibe ein Haiku über den Frühling."}
]

outputs = pipe(
    messages,
    max_new_tokens=50,
)

# The chat pipeline returns the full conversation;
# the last message is the assistant's reply.
assistant_reply = outputs[0]["generated_text"][-1]
print(f"User: {messages[0]['content']}")
print(f"KI: {assistant_reply['content']}")
```
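
When the pipeline is given a list of chat messages, `outputs[0]["generated_text"]` contains the whole conversation (user turns included), not just the new text. A small helper like the one below — a hypothetical convenience function, not part of the repository — makes the extraction step explicit; the sample output dict mirrors the shape transformers chat pipelines return:

```python
def last_assistant_message(outputs):
    """Return the content of the final assistant turn from a
    text-generation pipeline result in chat format."""
    conversation = outputs[0]["generated_text"]
    reply = conversation[-1]
    if reply.get("role") != "assistant":
        raise ValueError("last message is not an assistant turn")
    return reply["content"]

# Hand-built sample mimicking the pipeline's chat output shape;
# the Haiku text here is illustrative, not real model output.
sample = [{"generated_text": [
    {"role": "user", "content": "Schreibe ein Haiku über den Frühling."},
    {"role": "assistant", "content": "Kirschblüten im Wind"},
]}]
print(last_assistant_message(sample))  # Kirschblüten im Wind
```

This keeps the parsing logic in one place if you later batch several prompts through the pipeline.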

Details are coming soon.

made in Tübingen

Model details: ~1B parameters, BF16 tensors, Safetensors format.