The OLoRA residual adapter was initialized with the following PEFT configuration:

```python
from peft import LoraConfig, get_peft_model

config = LoraConfig(
    target_modules="all-linear",
    r=64,
    lora_alpha=64,
    init_lora_weights="olora",
    modules_to_save=["lm_head", "embed_tokens"],
)
# `model` is the base model loaded beforehand (e.g. with transformers)
model = get_peft_model(model, config)
```
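For intuition, `init_lora_weights="olora"` initializes the adapter from a QR decomposition of each pretrained weight matrix and subtracts that component from the base weight, so the adapted layer's output is unchanged at initialization. A minimal NumPy sketch of the idea (matrix sizes and variable names are illustrative, not taken from this model):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 16))  # pretrained weight (illustrative size)
r = 4                              # LoRA rank

# QR-based init: B takes the first r orthonormal columns of Q,
# A takes the first r rows of R.
Q, R = np.linalg.qr(W)
B = Q[:, :r]
A = R[:r, :]

# The base weight is adjusted so that residual + B @ A == W,
# i.e. the model behaves identically before any training step.
residual = W - B @ A

assert np.allclose(residual + B @ A, W)
```

During fine-tuning only `A` and `B` are updated, while `residual` stays frozen in place of the original weight.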
Downloads last month: 13

Safetensors model size: 32.6B params (tensor type: BF16)

Model tree for ConicCat/GLM-4-32B-Base-32K-Residual-r64a64

- Finetunes: 1 model
- Quantizations: 2 models