
cds9: trained on 8 images.

```
Num examples = 8
Num batches each epoch = 8
Num Epochs = 805
Instantaneous batch size per device = 1
Total train batch size (w. parallel, distributed & accumulation) = 4
Gradient Accumulation steps = 4
Total optimization steps = 1610
```

cds91: 8 images, same settings as above. (See the sketch below for how the logged figures relate.)
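A minimal sketch of the bookkeeping behind those log lines, assuming a single-GPU run (the card does not state the device count):

```python
import math

num_examples = 8
per_device_batch = 1   # Instantaneous batch size per device
grad_accum = 4         # Gradient Accumulation steps
num_devices = 1        # assumption: single GPU

# Total train batch size (w. parallel, distributed & accumulation)
total_batch = per_device_batch * grad_accum * num_devices

# Num batches each epoch
batches_per_epoch = math.ceil(num_examples / per_device_batch)

# Two optimizer steps per epoch, so 805 epochs -> 1610 optimization steps
steps_per_epoch = math.ceil(batches_per_epoch / grad_accum)
total_steps = steps_per_epoch * 805

assert (total_batch, batches_per_epoch, total_steps) == (4, 8, 1610)
```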

The Colab training cell:

```python
STEPS = 1200 #@param {type:"slider", min:0, max:10000, step:10}
BATCH_SIZE = 1 #@param {type:"slider", min:0, max:128, step:1}
FP_16 = True #@param {type:"boolean"}

#@markdown ----
#@markdown UNET PARAMS
LEARNING_RATE = 1e-4 #@param {type:"number"}

#@markdown ----
TRAIN_TEXT_ENCODER = True #@param {type:"boolean"}
#@markdown TEXT ENCODER PARAMS
LEARNING_RATE_TEXT_ENCODER = 5e-5 #@param {type:"number"}

# Scale the learning rates by the batch size (a no-op while BATCH_SIZE = 1).
NEW_LEARNING_RATE = LEARNING_RATE / BATCH_SIZE
NEW_LEARNING_RATE_TEXT_ENCODER = LEARNING_RATE_TEXT_ENCODER / BATCH_SIZE

# Map the FP_16 toggle onto accelerate's --mixed_precision argument.
if FP_16:
    fp_16_arg = "fp16"
else:
    fp_16_arg = "no"

# PRETRAINED_MODEL, INSTANCE_DIR, OUTPUT_DIR and PROMPT are set in earlier cells.
# The two branches differ only in passing the --train_text_encoder flag.
if TRAIN_TEXT_ENCODER:
    command = (
        f'accelerate launch lora/training_scripts/train_lora_dreambooth.py '
        f'--pretrained_model_name_or_path="{PRETRAINED_MODEL}" '
        f'--instance_data_dir="{INSTANCE_DIR}" '
        f'--output_dir="{OUTPUT_DIR}" '
        f'--instance_prompt="{PROMPT}" '
        f'--resolution=512 '
        f'--use_8bit_adam '
        f'--mixed_precision="{fp_16_arg}" '
        f'--train_batch_size=1 '
        f'--gradient_accumulation_steps=1 '
        f'--learning_rate={NEW_LEARNING_RATE} '
        f'--lr_scheduler="cosine" '
        f'--lr_warmup_steps=0 '
        f'--max_train_steps={STEPS} '
        f'--train_text_encoder '
        f'--lora_rank=16 '
        f'--learning_rate_text={NEW_LEARNING_RATE_TEXT_ENCODER}'
    )
else:
    command = (
        f'accelerate launch lora/training_scripts/train_lora_dreambooth.py '
        f'--pretrained_model_name_or_path="{PRETRAINED_MODEL}" '
        f'--instance_data_dir="{INSTANCE_DIR}" '
        f'--output_dir="{OUTPUT_DIR}" '
        f'--instance_prompt="{PROMPT}" '
        f'--resolution=512 '
        f'--use_8bit_adam '
        f'--mixed_precision="{fp_16_arg}" '
        f'--train_batch_size=1 '
        f'--gradient_accumulation_steps=1 '
        f'--learning_rate={NEW_LEARNING_RATE} '
        f'--lr_scheduler="cosine" '
        f'--lr_warmup_steps=0 '
        f'--lora_rank=16 '
        f'--max_train_steps={STEPS} '
        f'--learning_rate_text={NEW_LEARNING_RATE_TEXT_ENCODER}'
    )

# Drop stray notebook checkpoints from the instance images, then launch training.
!rm -rf $INSTANCE_DIR/.ipynb_checkpoints
!{command}
```
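With `BATCH_SIZE = 1`, the division leaves both learning rates at their slider values (1e-4 for the UNet, 5e-5 for the text encoder); the scaling only matters at larger batch sizes. Both branches train at LoRA rank 16 with a cosine schedule, no warmup, and 8-bit Adam.

Below is a hedged sketch of loading the resulting weights for inference with the `lora_diffusion` helpers from cloneofsimo/lora, which the `lora/training_scripts/` path suggests is the repo the training script comes from. The base model ID, the output filename, the LoRA scale, and the prompt are illustrative assumptions, not taken from this card:

```python
# Sketch only: paths, base model, scale and prompt below are assumptions.
import torch
from diffusers import StableDiffusionPipeline
from lora_diffusion import patch_pipe, tune_lora_scale

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumption: stand-in for PRETRAINED_MODEL
    torch_dtype=torch.float16,
).to("cuda")

# Patch both the UNet and the text encoder, since --train_text_encoder was used.
patch_pipe(
    pipe,
    "output/lora_weight.safetensors",  # assumption: output name/location
    patch_unet=True,
    patch_text=True,
    patch_ti=False,  # no textual-inversion embeddings were trained here
)
tune_lora_scale(pipe.unet, 0.8)          # illustrative LoRA strength
tune_lora_scale(pipe.text_encoder, 0.8)

# Assumption: the instance prompt used the cds9 token.
image = pipe("a photo of cds9", num_inference_steps=30).images[0]
image.save("cds9.png")
```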
