Files (last commit message, size):

Upload tokenizer (commit 2c44cb3, verified), 1.57 kB
Upload tokenizer, 5.17 kB
Upload LlamaForCausalLM, 3.83 kB
Upload tokenizer, 1.14 kB
Upload LlamaForCausalLM, 184 Bytes
Upload LlamaForCausalLM, pytorch_model-00001-of-00002.bin, 4.97 GB
Detected pickle imports (20); see the loading sketch after this listing:
- "torchao.quantization.autoquant.AQFloat8WeightOnlyQuantizedLinearWeight",
- "torchao.quantization.autoquant.AQFloat8PerTensorScalingDynamicallyQuantizedLinearWeight",
- "torch.serialization._get_layout",
- "torchao.quantization.autoquant.AQInt8WeightOnlyQuantizedLinearWeight2",
- "torch._utils._rebuild_wrapper_subclass",
- "torchao.quantization.autoquant.AQGemliteInt4G64WeightOnlyQuantizedLinearWeight",
- "torchao.quantization.autoquant.AQBFloat16LinearWeight",
- "torch.device",
- "torch.bfloat16",
- "torchao.quantization.autoquant.AQInt8DynamicallyQuantizedLinearWeight",
- "torchao.quantization.autoquant.AQInt4G64WeightOnlyQuantizedLinearWeight",
- "torchao.quantization.autoquant.AQFloat16LinearWeight",
- "torchao.quantization.autoquant.AQFloat32LinearWeight",
- "torch._utils._rebuild_tensor_v2",
- "torch._tensor._rebuild_from_type_v2",
- "torchao.quantization.autoquant.AQInt8WeightOnlyQuantizedLinearWeight",
- "torch.BFloat16Storage",
- "torchao.quantization.autoquant.AutoQuantizableLinearWeight",
- "collections.OrderedDict",
- "torchao.quantization.autoquant.AQDefaultLinearWeight"
Upload LlamaForCausalLM, pytorch_model-00002-of-00002.bin, 2.25 GB
Detected pickle imports (20):
- "torch._utils._rebuild_tensor_v2",
- "torch.serialization._get_layout",
- "torchao.quantization.autoquant.AQInt8DynamicallyQuantizedLinearWeight",
- "torch._tensor._rebuild_from_type_v2",
- "torchao.quantization.autoquant.AQDefaultLinearWeight",
- "torchao.quantization.autoquant.AQBFloat16LinearWeight",
- "torchao.quantization.autoquant.AQInt8WeightOnlyQuantizedLinearWeight",
- "torch.device",
- "torchao.quantization.autoquant.AQFloat16LinearWeight",
- "torchao.quantization.autoquant.AQFloat32LinearWeight",
- "torchao.quantization.autoquant.AQInt4G64WeightOnlyQuantizedLinearWeight",
- "torch.bfloat16",
- "torch.BFloat16Storage",
- "torchao.quantization.autoquant.AQInt8WeightOnlyQuantizedLinearWeight2",
- "torchao.quantization.autoquant.AQGemliteInt4G64WeightOnlyQuantizedLinearWeight",
- "collections.OrderedDict",
- "torchao.quantization.autoquant.AQFloat8WeightOnlyQuantizedLinearWeight",
- "torchao.quantization.autoquant.AutoQuantizableLinearWeight",
- "torch._utils._rebuild_wrapper_subclass",
- "torchao.quantization.autoquant.AQFloat8PerTensorScalingDynamicallyQuantizedLinearWeight"
Upload LlamaForCausalLM, 21 kB
Upload LlamaForCausalLM, 296 Bytes
Upload tokenizer, 17.2 MB
Upload tokenizer, 50.5 kB
Upload tokenizer
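Both weight shards are pickle-based .bin files, and the scan above reports torchao autoquant tensor-subclass globals inside them, which PyTorch's safe loader (weights_only=True) rejects unless each class is explicitly allowlisted. The sketch below is a minimal, hedged example of loading such shards locally; the shard filenames come from the listing, but nothing here is taken from the uploader's own instructions.

```python
# Minimal loading sketch, NOT the uploader's documented workflow.
# Assumptions: torch and torchao are installed (unpickling will import the
# torchao.quantization.autoquant classes listed in the scan), the two shards
# have been downloaded locally, and you trust the source: weights_only=False
# lets pickle execute arbitrary code, which is exactly what the scan warns about.
import torch

shards = [
    "pytorch_model-00001-of-00002.bin",
    "pytorch_model-00002-of-00002.bin",
]

state_dict = {}
for shard in shards:
    # weights_only=True would refuse the torchao tensor-subclass globals unless
    # each detected class is first registered via
    # torch.serialization.add_safe_globals([...]).
    state_dict.update(torch.load(shard, map_location="cpu", weights_only=False))

print(f"{len(state_dict)} entries loaded")
```

The stricter alternative is to call torch.serialization.add_safe_globals with every class named in the scan output and keep weights_only=True; the permissive path above is only reasonable when the repository is trusted.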