Qwen2.5-Coder-1.5B-Instruct-SFT

The Qwen2.5-Coder-1.5B-Instruct-SFT model has been fine-tuned with Supervised Fine-Tuning (SFT) specifically to improve its performance at generating docstrings, a critical task in coding. It currently generates docstrings following the Google style guide. For differences from the base (pre-trained) model, see the Comparison section.

Note: Since the model has only been supervised fine-tuned (SFT) for roughly 11 epochs and has not been further aligned with reinforcement learning from human feedback (RLHF), it mostly generates docstrings in the Google docstring style but may occasionally produce a different style. In such cases, you may need to regenerate the docstring once or several times; a minimal retry sketch follows this note.
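
As an illustration of that note, the sketch below re-samples until the output contains Google-style section headers. generate_docstring is a hypothetical helper wrapping the generation call shown in the Usage section; it is not part of this repository.

# Minimal retry sketch, assuming a hypothetical generate_docstring() helper
# that wraps the sampling call shown in the Usage section below.
GOOGLE_STYLE_MARKERS = ("Args:", "Returns:")

def generate_google_docstring(source_code: str, max_attempts: int = 3) -> str:
    docstring = ""
    for _ in range(max_attempts):
        docstring = generate_docstring(source_code)  # hypothetical helper
        # Heuristic: Google-style docstrings contain these section headers.
        if all(marker in docstring for marker in GOOGLE_STYLE_MARKERS):
            break
    return docstring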

Note: If you would like to contribute to this repository, please read the CONTRIBUTING guidelines first.


Table of Contents


Usage

Hugging Face

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "bunyaminergen/Qwen2.5-Coder-1.5B-Instruct-SFT"

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4"
)

tokenizer = AutoTokenizer.from_pretrained(model_name)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=quant_config,
    device_map="auto"
)

model.eval()

messages = [
    {"role": "system", "content": "You are a senior Python developer."},
    {"role": "user", "content": "Give me a quick example of bubble sort in Python."}
]

prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

max_new_tokens = 512
temperature = 0.9

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        temperature=temperature,
        do_sample=True,
        top_p=0.95,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id
    )

result = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(result)
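
Since the model is fine-tuned for docstring generation, it can also be prompted to document an existing function. The snippet below is a sketch that reuses the tokenizer, model, and sampling settings from above; the add function is only a placeholder input.

# Sketch: asking the fine-tuned model to write a Google-style docstring
# for an existing function, reusing `tokenizer` and `model` from above.
code_snippet = '''
def add(a, b):
    return a + b
'''

messages = [
    {"role": "system", "content": "You are a senior Python developer."},
    {"role": "user", "content": f"Add a Google-style docstring to this function:\n{code_snippet}"}
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=512,
        temperature=0.9,
        do_sample=True,
        top_p=0.95,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id
    )

print(tokenizer.decode(outputs[0], skip_special_tokens=True))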

Comparison

Base Model

System: You are a senior Python developer.

User: Give me a quick example of bubble sort in Python.

Assistant: Bubble Sort is a simple sorting algorithm that repeatedly steps through the list, compares adjacent elements and swaps them if they are in the wrong order. The pass through the list is repeated until the list is sorted.

Here's a basic implementation of Bubble Sort in Python:

def bubble_sort(arr):
    n = len(arr)
    # Traverse through all array elements
    for i in range(n):
        # Last i elements are already in place
        for j in range(0, n - i - 1):
            # Swap if the element found is greater than the next element
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]


# Example usage:
arr = [64, 34, 25, 12, 22, 11, 90]
bubble_sort(arr)
print("Sorted array:", arr)

In this example:

  • We define a function bubble_sort that takes an array as input.
  • Inside the function, we use two nested loops to iterate over each pair of adjacent elements in the array.
  • If the current element (arr[j]) is greater than the next element (arr[j+1]), we swap them.
  • This process continues until no more swaps are needed, indicating that the array is sorted.
  • Finally, we demonstrate how to use the bubble_sort function by sorting an array and printing the sorted result.

Bubble Sort has a time complexity of O(n^2) in the worst case scenario, making it less efficient than other sorting algorithms like QuickSort or MergeSort. However, it is easy to understand and implement, which makes it suitable for small data sets or educational purposes.

Supervised Fine Tuned Model

System: You are a senior Python developer.

User: Give me a quick example of bubble sort in Python.

Assistant: Here's a simple implementation of the Bubble Sort algorithm in Python, along with its docstring following Google Python Style Guide:

def bubble_sort(arr):
    """
    Bubble sorts an array of integers in ascending order.

    Args:
        arr (list): A list of integers to be sorted.

    Returns:
        list: The sorted list of integers.

    Raises:
        TypeError: If the input is not a list or contains non-integer elements.

    Examples:
        >>> bubble_sort([5, 3, 1, 4, 6])
        [1, 3, 4, 5, 6]
    """

    # Check if the input is a list and contains only integers
    if not isinstance(arr, list) or not all(isinstance(x, int) for x in arr):
        raise TypeError("Input must be a list of integers.")

    n = len(arr)

    # Traverse through all array elements
    for i in range(n):

        # Last i elements are already in place
        for j in range(0, n - i - 1):

            # Traverse the array from 0 to n-i-1
            # Swap if the element found is greater than the next element
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]

    return arr

This implementation follows the Bubble Sort algorithm, which repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order. The pass through the list is repeated until the list is sorted. This implementation includes error handling and examples for usage.
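
Because the Examples section of the generated docstring is in doctest format, it can be sanity-checked with Python's doctest module. A minimal sketch, assuming the generated function has been saved into a module (generated_module is a placeholder name):

# Sanity-check the doctest-style Examples section of a generated docstring.
import doctest
import generated_module  # placeholder: module containing the generated, documented function

results = doctest.testmod(generated_module, verbose=False)
print(f"{results.attempted} example(s) run, {results.failed} failure(s)")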


Dataset


Training

Hyperparameters

  • Base Model: Qwen/Qwen2.5-Coder-1.5B-Instruct
  • Fine-tuning Method: QLoRA (Quantized Low-Rank Adaptation)
  • Task Type: CAUSAL_LM
  • Number of Epochs: 11
  • Batch Size: 8
  • Gradient Accumulation Steps: 2
  • Effective Batch Size: 16 (8 × 2)
  • Learning Rate: 1e-4
  • Optimizer: AdamW
  • Precision: BF16 Mixed Precision
  • Evaluation Strategy: None
  • Max Sequence Length: 1024 tokens
  • Logging Steps: every 1000 steps
  • Save Checkpoint Steps: every 7200 steps
  • Output Directory: overwritten per run
  • Experiment Tracking: MLflow (local tracking)
  • Experiment Name: AssistantFineTuning
  • MLflow Run Name: AssistantFT
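
As a rough sketch, the settings above map onto a transformers.TrainingArguments configuration like the following; the output directory is a placeholder, and the MLflow experiment name would typically be set through the MLFLOW_EXPERIMENT_NAME environment variable rather than here.

# Sketch of a TrainingArguments setup matching the hyperparameters above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="qwen2.5-coder-1.5b-instruct-sft",  # placeholder, overwritten per run
    overwrite_output_dir=True,
    num_train_epochs=11,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=2,   # effective batch size 16
    learning_rate=1e-4,
    optim="adamw_torch",             # AdamW
    bf16=True,                       # BF16 mixed precision
    logging_steps=1000,
    save_steps=7200,
    eval_strategy="no",              # no evaluation during training
    report_to=["mlflow"],            # local MLflow tracking
    run_name="AssistantFT",
)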

PEFT (QLoRA) Configuration

  • LoRA Rank (r): 16
  • LoRA Alpha: 32
  • LoRA Dropout: 0.05
  • Target Modules: all-linear
  • Modules Saved: lm_head, embed_token
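
As a sketch, these settings correspond to a peft.LoraConfig along the lines below; the module names are taken verbatim from the list above.

# Sketch of a LoraConfig matching the QLoRA settings above.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules="all-linear",                  # PEFT shorthand for all linear layers
    modules_to_save=["lm_head", "embed_token"],   # module names as listed above
    task_type="CAUSAL_LM",
)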

Dataset

  • Train/Test Split: 90%/10%
  • Random Seed: 19
  • Train Batched: True
  • Eval Batched: True

Tokenizer Configuration

  • Truncation: Enabled (max_length=1024)
  • Masked Language Modeling (MLM): False
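
Taken together, the split, tokenizer, and collator settings correspond roughly to the sketch below; the dataset path and the text column name are placeholders, since they are not specified here.

# Sketch of the data preparation implied by the Dataset and Tokenizer settings above.
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

dataset = load_dataset("path/to/dataset")["train"]  # placeholder dataset path

# 90%/10% train/test split with the fixed seed from the Dataset section.
split = dataset.train_test_split(test_size=0.1, seed=19)

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-1.5B-Instruct")

def tokenize(batch):
    # Truncate to the 1024-token maximum sequence length; "text" column is an assumption.
    return tokenizer(batch["text"], truncation=True, max_length=1024)

# batched=True for both splits, as noted above.
train_ds = split["train"].map(tokenize, batched=True)
eval_ds = split["test"].map(tokenize, batched=True)

# MLM disabled: labels are set up for causal language modeling.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)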

Speeds, Sizes, Times

  • Total Training Time: ~11 hours
  • Checkpoint Frequency: every 7200 steps
  • Checkpoint Steps:
    • checkpoint-7200
    • checkpoint-14400
    • checkpoint-21600
    • checkpoint-28800
    • checkpoint-36000
    • checkpoint-39600 (final checkpoint)

Compute Infrastructure

Hardware:

  • GPU: 1 × NVIDIA L40S (48 GB VRAM)
  • RAM: 62 GB
  • CPU: 16 vCPU

Software:

  • OS: Ubuntu 22.04
  • Frameworks: PyTorch 2.4.0
  • CUDA Version: 12.4.1

Documentation


Licence


Links


Team


Contact


Reference


Citation

@software{Qwen2.5-Coder-1.5B-Instruct-SFT,
  author = {Bunyamin Ergen},
  title  = {{Qwen2.5-Coder-1.5B-Instruct-SFT}},
  year   = {2025},
  month  = {04},
}
