This repository contains a fine-tuned version of Gemma-3-1B for German, trained on a custom synthetic dataset.
Usage:
```python
import torch
from transformers import pipeline

# Load the model as a chat-style text-generation pipeline.
pipe = pipeline(
    "text-generation",
    model="tabularisai/german-gemma-3-1b-it",
    device="cuda",  # set to "cpu" if no GPU is available
    torch_dtype="auto",
)

# "Schreibe ein Haiku über den Frühling." = "Write a haiku about spring."
messages = [
    {"role": "user", "content": "Schreibe ein Haiku über den Frühling."}
]

outputs = pipe(
    messages,
    max_new_tokens=50,
)

# With chat-format input, the pipeline returns the full conversation;
# the last message is the assistant's reply.
assistant_reply = outputs[0]["generated_text"][-1]
print(f"User: {messages[0]['content']}")
print(f"KI: {assistant_reply['content']}")
```
Details are coming soon.
Made in Tübingen.