Built with Axolotl

See the axolotl config used to produce this model (axolotl version: 0.10.0):

base_model: mistralai/Ministral-8B-Instruct-2410
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0001
load_in_8bit: true
load_in_4bit: false
bnb_4bit_use_double_quant: false
bnb_4bit_quant_type: null
bnb_4bit_compute_dtype: null
adapter: lora
lora_model_dir: null
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
- q_proj
- v_proj
- k_proj
datasets:
- path: /workspace/FinLoRA/data/train/finer_train_batched.jsonl
  type:
    system_prompt: ''
    field_system: system
    field_instruction: context
    field_output: target
    format: '[INST] {instruction} [/INST]'
    no_input_format: '[INST] {instruction} [/INST]'
dataset_prepared_path: null
val_set_size: 0.02
output_dir: /workspace/FinLoRA/lora/axolotl-output/finer_mistral_8b_8bits_r8
peft_use_dora: false
peft_use_rslora: false
sequence_len: 4096
sample_packing: false
pad_to_sequence_len: false
wandb_project: finlora_models
wandb_entity: null
wandb_watch: gradients
wandb_name: finer_mistral_8b_8bits_r8
wandb_log_model: 'false'
bf16: auto
tf32: false
gradient_checkpointing: true
resume_from_checkpoint: null
logging_steps: 500
flash_attention: false
deepspeed: deepspeed_configs/zero1.json
warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
weight_decay: 0.0
special_tokens:
  pad_token: <|end_of_text|>
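
For readers more familiar with PEFT than axolotl, the LoRA block above corresponds roughly to the following `LoraConfig`. This is a sketch only; axolotl constructs its own adapter config internally, so treat the mapping as approximate.

```python
# Approximate PEFT equivalent of the LoRA settings in the axolotl config above.
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,                                            # lora_r
    lora_alpha=16,                                  # effective scaling = alpha / r = 2.0
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj", "k_proj"],  # lora_target_modules
    task_type="CAUSAL_LM",
)
```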

workspace/FinLoRA/lora/axolotl-output/finer_mistral_8b_8bits_r8

This model is a fine-tuned version of mistralai/Ministral-8B-Instruct-2410 on the /workspace/FinLoRA/data/train/finer_train_batched.jsonl dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0319
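
To run inference, load the base model in 8-bit (matching `load_in_8bit: true` above) and attach the adapter with PEFT. A minimal sketch, assuming the adapter is published at `ghostof0days/finer_ministral_8b_8bits_r8` and using a hypothetical FiNER-style prompt in the `[INST] ... [/INST]` format from the config:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "mistralai/Ministral-8B-Instruct-2410"
adapter_id = "ghostof0days/finer_ministral_8b_8bits_r8"  # assumed repo id for this card

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # matches load_in_8bit: true
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "[INST] What is the XBRL tag for total revenue? [/INST]"  # hypothetical example
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```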

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 32
  • total_eval_batch_size: 4
  • optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 10
  • training_steps: 1226
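
The reported batch sizes follow directly from the config: each of the 4 GPUs processes a micro-batch of 1 and accumulates gradients over 8 steps, as the arithmetic below shows.

```python
# Effective batch sizes implied by the hyperparameters above.
micro_batch_size = 1
gradient_accumulation_steps = 8
num_devices = 4

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 32, as reported

total_eval_batch_size = micro_batch_size * num_devices  # no gradient accumulation at eval
print(total_eval_batch_size)   # 4, as reported
```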

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0      | 0    | 0.3593          |
| No log        | 0.2513 | 77   | 0.0560          |
| No log        | 0.5027 | 154  | 0.0490          |
| No log        | 0.7540 | 231  | 0.0393          |
| No log        | 1.0033 | 308  | 0.0391          |
| No log        | 1.2546 | 385  | 0.0375          |
| No log        | 1.5059 | 462  | 0.0381          |
| 0.0488        | 1.7572 | 539  | 0.0358          |
| 0.0488        | 2.0065 | 616  | 0.0348          |
| 0.0488        | 2.2579 | 693  | 0.0343          |
| 0.0488        | 2.5092 | 770  | 0.0328          |
| 0.0488        | 2.7605 | 847  | 0.0330          |
| 0.0488        | 3.0098 | 924  | 0.0332          |
| 0.0266        | 3.2611 | 1001 | 0.0327          |
| 0.0266        | 3.5124 | 1078 | 0.0327          |
| 0.0266        | 3.7638 | 1155 | 0.0319          |

Training loss is logged every 500 optimizer steps (logging_steps: 500), so evaluation rows before the first logging step show "No log".
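
The roughly 77-step evaluation cadence in the Step column follows from evals_per_epoch: 4. With 1226 optimizer steps over 4 epochs, each epoch is about 306.5 steps, evaluated four times per epoch:

```python
# Back-of-the-envelope check of the evaluation interval seen in the Step column.
import math

training_steps = 1226
num_epochs = 4
evals_per_epoch = 4

steps_per_epoch = training_steps / num_epochs                 # ~306.5
eval_interval = math.ceil(steps_per_epoch / evals_per_epoch)
print(eval_interval)  # 77
```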

Framework versions

  • PEFT 0.15.2
  • Transformers 4.52.3
  • Pytorch 2.8.0.dev20250319+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.2