Quantizer: Running into an error with quantization "TypeError: 'dict' object is not callable"

#24
by AaronVogler - opened

I get the following error when trying to load the model on CPU...

Anyone have any idea as to what's going on?

 File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/__init__.py", line 942, in pipeline
    framework, model = infer_framework_load_model(
  File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py", line 291, in infer_framework_load_model
    model = model_class.from_pretrained(model, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py", line 571, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 279, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 4369, in from_pretrained
    hf_quantizer.preprocess_model(
  File "/usr/local/lib/python3.10/dist-packages/transformers/quantizers/base.py", line 224, in preprocess_model
    self._convert_model_for_quantization(model)
  File "/usr/local/lib/python3.10/dist-packages/transformers/quantizers/base.py", line 313, in _convert_model_for_quantization
    parent_module._modules[name] = MODULES_TO_PATCH_FOR_QUANTIZATION[module_class_name](
TypeError: 'dict' object is not callable
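For context, the last frame looks up an entry in `MODULES_TO_PATCH_FOR_QUANTIZATION` and then calls it; the error means the looked-up value is a dict rather than a module class. A minimal sketch of that failure mode (registry contents hypothetical, not the actual transformers internals):

```python
# Hypothetical sketch: if a patch registry maps a class name to a nested
# dict (e.g. config metadata) instead of a callable class, calling the
# looked-up entry raises exactly the error in the traceback above.
MODULES_TO_PATCH = {
    "SomeTextModel": {"module_name": "SomeQuantizedTextModel"},  # dict, not a class
}

entry = MODULES_TO_PATCH["SomeTextModel"]
try:
    entry()  # mirrors: MODULES_TO_PATCH_FOR_QUANTIZATION[module_class_name](...)
except TypeError as exc:
    print(exc)  # 'dict' object is not callable
```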


Thanks,
Aaron

I got the same error. Were you able to fix it?

@zhuokai No, I haven't been able to fix it. The non-quantized versions of the model work on CPU, so I don't know what the deal is; presumably some conflict between compressed-tensors and transformers==4.51.3.
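If it helps narrow down the suspected conflict, the installed versions of both packages can be printed with the standard library alone (no third-party imports needed):

```python
# Print the installed versions of the two packages suspected of conflicting.
from importlib.metadata import PackageNotFoundError, version

for pkg in ("transformers", "compressed-tensors"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```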

Meta Llama org

Can you try installing transformers from source with pip? There is a fix in this commit that may help.
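For anyone else hitting this, installing from source typically means pulling the main branch directly; a sketch, assuming the referenced fix has landed on main:

```shell
# Replace the pinned 4.51.3 release with the current main branch of
# transformers, which should include the quantizer fix mentioned above.
pip install --upgrade git+https://github.com/huggingface/transformers.git
```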
