Training procedure

The following bitsandbytes quantization config was used during training (an equivalent BitsAndBytesConfig is sketched after the list):

  • load_in_8bit: True
  • load_in_4bit: False
  • llm_int8_threshold: 6.0
  • llm_int8_skip_modules: None
  • llm_int8_enable_fp32_cpu_offload: False
  • llm_int8_has_fp16_weight: False
  • bnb_4bit_quant_type: fp4
  • bnb_4bit_use_double_quant: False
  • bnb_4bit_compute_dtype: float32
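For reference, the same settings can be expressed as a transformers BitsAndBytesConfig. This is a minimal sketch built from the values listed above; the actual training code is not included in this card.

import torch
from transformers import BitsAndBytesConfig

# Quantization settings matching the list above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)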

Framework versions

  • PEFT 0.4.0.dev0
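Since 0.4.0.dev0 is a development build, PEFT was presumably installed from source; a comparable setup (an assumption, the card does not state the install method) is:

pip install git+https://github.com/huggingface/peft.git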

Usage

import torch
from peft import PeftModel
from transformers import GenerationConfig, LlamaForCausalLM, LlamaTokenizer

# Generation settings
temperature = 0.1
top_p = 0.75
top_k = 40
num_beams = 4
max_new_tokens = 128

load_8bit = False
base_model = ""  # set to the base LLaMA checkpoint the adapter was trained on (not stated in this card)
lora_weights = "marianna13/alpaca-lora-sum"
device = "cuda" if torch.cuda.is_available() else "cpu"

# The tokenizer was not defined in the original snippet; the base model's LLaMA tokenizer is assumed
tokenizer = LlamaTokenizer.from_pretrained(base_model)

# Load the base model (optionally in 8-bit) and attach the LoRA adapter
model = LlamaForCausalLM.from_pretrained(
    base_model,
    load_in_8bit=load_8bit,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(
    model,
    lora_weights,
    torch_dtype=torch.float16,
)
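# Not part of the original snippet: putting the model in evaluation mode
# (disables dropout) is a common extra step before generation.
model.eval()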

prompt = "..."  # an Alpaca-style instruction prompt; not specified in the original snippet
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].to(device)

generation_config = GenerationConfig(
    temperature=temperature,
    top_p=top_p,
    top_k=top_k,
    num_beams=num_beams,
)

with torch.no_grad():
    generation_output = model.generate(
        input_ids=input_ids,
        generation_config=generation_config,
        return_dict_in_generate=True,
        output_scores=True,
        max_new_tokens=max_new_tokens,
    )
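To turn the generated ids back into text, decode the first returned sequence (this step is an addition to the snippet above; note that for decoder-only models the output sequence also contains the prompt tokens):

summary = tokenizer.decode(generation_output.sequences[0], skip_special_tokens=True)
print(summary)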