
Model Overview

Tessa-T1 is a transformer-based React reasoning model fine-tuned from Qwen2.5-Coder-14B-Instruct. Designed specifically for React frontend development, it uses explicit reasoning steps to autonomously generate well-structured, semantic React components. Its integration into agent systems makes it a practical tool for automating web-interface development and frontend code intelligence.


Model Highlights

  • React-specific Reasoning: Accurately generates functional and semantic React components.
  • Agent Integration: Seamlessly fits into AI-driven coding agents and autonomous frontend systems.
  • Context-Aware Generation: Effectively understands and utilizes UI context to provide relevant code solutions.
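
As a rough illustration of the agent-integration point above, the sketch below shows how a coding agent might wrap Tessa-T1 behind a prompt-building step. `ComponentRequest`, `build_prompt`, and `run_agent_step` are hypothetical helpers invented for this sketch; only the ChatML-style prompt layout (user turn, assistant turn, `think` tag) comes from this card's inference example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ComponentRequest:
    description: str          # e.g. "a user profile card"
    context: str = ""         # surrounding UI code the agent has gathered

def build_prompt(req: ComponentRequest) -> str:
    # ChatML-style turns; the trailing `think` tag asks the model
    # to emit its reasoning trace before the component code.
    ctx = f"\nExisting UI context:\n{req.context}" if req.context else ""
    return (
        "<|im_start|>user\n"
        f"Create a React component for {req.description}.{ctx}<|im_end|>\n"
        "<|im_start|>assistant\n"
        "<|im_start|>think\n"
    )

def run_agent_step(req: ComponentRequest, llm: Callable[[str], str]) -> str:
    """One agent step: build the prompt, call the model, return raw output."""
    return llm(build_prompt(req))

# Toy stand-in for the real model call, just to show the wiring:
print(run_agent_step(ComponentRequest("a user profile card"), lambda p: "(model output)"))
```

In a real agent, the `llm` callable would wrap the Transformers inference call shown in the "How to Use" section, and the agent loop would feed gathered UI context back in on each step.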

Example Outputs

See examples demonstrating the powerful reasoning and component creation capabilities of Tessa-T1:

  • AI upload (screenshot)
  • Virtual Machine Console (screenshot)
  • Playlist Management (screenshot)
  • Prompt: "add in a calendar" (screenshot)


Use Cases

Recommended Uses

  • Automatic Component Generation: Quickly produce React components from textual prompts.
  • Agent-based Web Development: Integrate into automated coding systems for faster frontend workflows.
  • Frontend Refactoring: Automate the optimization and semantic enhancement of React code.

Limitations

  • Focused on React: Limited applicability outside the React ecosystem.
  • Complex State Management: May require manual adjustments for highly dynamic state management scenarios.

How to Use

Inference Example

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "smirki/Tessa-T1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# torch_dtype="auto" loads the checkpoint in its native precision (bf16).
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto").to("cuda")

# ChatML-style prompt; the trailing `think` tag triggers the model's reasoning trace.
prompt = """<|im_start|>user
Create a React component for a user profile card.<|im_end|>
<|im_start|>assistant
<|im_start|>think
"""

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=1500, do_sample=True, temperature=0.7)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
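
The decoded output typically interleaves the reasoning trace with the final component. Assuming the model wraps generated code in Markdown fences (common, but not guaranteed), a small post-processing helper like this hypothetical `extract_code_blocks` can pull out just the component source:

```python
import re

def extract_code_blocks(text: str) -> list[str]:
    """Pull fenced code blocks out of a model response.

    Assumes the model wraps generated components in ``` fences,
    optionally tagged with a language (e.g. ```jsx).
    """
    return re.findall(r"```(?:\w+)?\n(.*?)```", text, flags=re.DOTALL)

# Example on a synthetic response:
sample = "Here is the component:\n```jsx\nconst Card = () => <div/>;\n```"
print(extract_code_blocks(sample)[0])
```

If the model emits its `think` block before the code, the last extracted block is usually the final component.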

Performance and Evaluation

  • Strengths:
    • Strong semantic React component generation.
    • Excellent integration with agent-based systems.
  • Weaknesses:
    • Complex JavaScript logic may require manual post-processing.

Technical Specifications

  • Architecture: Transformer-based LLM
  • Base Model: Qwen2.5-Coder-14B-Instruct
  • Parameters: 14.8B
  • Precision: bf16 mixed precision, quantized to q8
  • Hardware Requirements: Recommended 12GB VRAM
  • Software Dependencies:
    • Hugging Face Transformers
    • PyTorch
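
To put the hardware recommendation in context, a back-of-the-envelope, weights-only estimate (ignoring activations, KV cache, and framework overhead) shows how precision drives the memory footprint; these are rough numbers only, not measured figures:

```python
def weights_gb(n_params: float, bits_per_param: int) -> float:
    """Back-of-the-envelope weights-only footprint in GB.

    Ignores activations, KV cache, and framework overhead,
    so real usage is higher.
    """
    return n_params * bits_per_param / 8 / 1e9

# 14.8B parameters at common precisions:
for name, bits in [("bf16", 16), ("q8", 8), ("q4", 4)]:
    print(f"{name}: ~{weights_gb(14.8e9, bits):.1f} GB of weights")
```

By this estimate the full bf16 checkpoint (~29.6GB of weights) needs a large GPU or multi-GPU setup, while lower-bit quantizations bring the weights closer to the 12GB budget recommended above.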

Citation

@misc{smirki_Tessa-T1,
  title={Tessa-T1: React-Focused Reasoning Model for Component Generation},
  author={tesslate},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/tesslate/Tessa-T1}
}

Contact & Community

  • Creator: smirki
  • Repository & Demo: Coming soon!

Sponsored by vichar ai.

