
# aaa-2-sql

This model is a fine-tuned version of Mistral-7B-Instruct-v0.3, trained with LoRA using LitGPT.

## Training Details

- Base Model: mistralai/Mistral-7B-Instruct-v0.3
- Framework: LitGPT
- Fine-tuning Method: Low-Rank Adaptation (LoRA)
- LoRA Parameters (see the sketch after this list):
  - Rank (r): 16
  - Alpha: 32
  - Dropout: 0.05
- Quantization: bnb.nf4
- Context Length: 4098 tokens
- Training Steps: 2000
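
For reference, the rank, alpha, and dropout values above parameterize the standard LoRA update: the frozen weight's output is augmented by a low-rank correction scaled by alpha / r (here 32 / 16 = 2.0). The sketch below is a minimal plain-PyTorch illustration of that update, not LitGPT's actual implementation; the `LoRALinear` class name is hypothetical.

```python
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA illustration: y = W x + (alpha / r) * B A dropout(x)."""

    def __init__(self, base: nn.Linear, r: int = 16, alpha: int = 32, dropout: float = 0.05):
        super().__init__()
        self.base = base                       # frozen pretrained projection
        self.base.weight.requires_grad_(False)
        self.lora_a = nn.Linear(base.in_features, r, bias=False)   # trainable A
        self.lora_b = nn.Linear(r, base.out_features, bias=False)  # trainable B
        nn.init.zeros_(self.lora_b.weight)     # update starts at zero, so training begins from the base model
        self.dropout = nn.Dropout(dropout)
        self.scaling = alpha / r               # 32 / 16 = 2.0

    def forward(self, x):
        return self.base(x) + self.scaling * self.lora_b(self.lora_a(self.dropout(x)))
```

With the bnb.nf4 setting, the frozen base weights are held in 4-bit NF4 precision during training (QLoRA-style) while only the small A and B matrices are updated.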

## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load model and tokenizer (device_map="auto" requires the accelerate package)
model = AutoModelForCausalLM.from_pretrained(
    "exaler/aaa-2-sql",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("exaler/aaa-2-sql")

# Create prompt
prompt = "Your prompt here"

# Generate text
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=1024)
response = tokenizer.decode(output[0], skip_special_tokens=True)
print(response)
```
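
Since the base model is an Instruct checkpoint, prompts should be wrapped in Mistral's chat format, which `tokenizer.apply_chat_template` applies automatically. The sketch below assumes a text-to-SQL prompt (the task the model name suggests); the schema and question are placeholders.

```python
# Hypothetical text-to-SQL prompt; the schema and question are placeholders.
messages = [
    {
        "role": "user",
        "content": (
            "Given the table employees(id, name, salary, department), "
            "write a SQL query that returns the three highest-paid employees."
        ),
    }
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```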
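
Because training quantized the base weights with bnb.nf4, loading the model in 4-bit is a natural option on memory-constrained hardware. This is a hedged sketch using `BitsAndBytesConfig` from transformers (requires the `bitsandbytes` and `accelerate` packages); whether 4-bit loading suits this particular checkpoint is an assumption.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# NF4 4-bit loading, mirroring the bnb.nf4 setting used during training
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "exaler/aaa-2-sql",
    quantization_config=bnb_config,
    device_map="auto",
)
```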