
assert isinstance(quantization_spec, QuantizationSpec) #12267

Closed

Description

@barbecacov

🐛 Describe the bug

Under the latest executorch (0.8.0a0), I use the quantize function like this (imports shown for completeness; I am assuming the 0.8.0 import path `executorch.backends.xnnpack.quantizer.xnnpack_quantizer`):

```python
from executorch.backends.xnnpack.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)
from torch.ao.quantization.quantize_pt2e import prepare_pt2e

def quantize(model, example_inputs):
    """This is the official recommended flow for quantization in pytorch 2.0 export"""
    print(f"Original model: {model}")
    quantizer = XNNPACKQuantizer()
    # if we set is_per_channel to True, we also need to add out_variant of
    # quantize_per_channel/dequantize_per_channel
    operator_config = get_symmetric_quantization_config(is_dynamic=True, is_per_channel=True)
    quantizer.set_global(operator_config)
    m = prepare_pt2e(model, quantizer)
```

then I get:

```
torch/ao/quantization/fx/prepare.py", line 182, in _create_obs_or_fq_from_qspec
    assert isinstance(quantization_spec, QuantizationSpec)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```

With executorch 0.6.0, the same code works fine.
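For context, the failing check in `_create_obs_or_fq_from_qspec` is a plain `isinstance` gate. Below is a minimal, torch-free sketch of one way such a gate can trip: when the spec object handed to the prepare flow comes from a different class hierarchy than the `QuantizationSpec` class it checks against (e.g. two packages each carrying their own copy of a structurally identical spec class). The classes and function here are hypothetical stand-ins, not the real torch.ao types, and this is an illustration of the failure mode, not a confirmed root cause for this issue:

```python
from dataclasses import dataclass

# Hypothetical stand-ins for two copies of the "same" spec class, e.g. one
# vendored by a quantizer package and one owned by the prepare flow.
@dataclass
class QuantizationSpecA:
    """The class the prepare flow checks against."""
    dtype: str

@dataclass
class QuantizationSpecB:
    """A structurally identical class from another module."""
    dtype: str

def create_observer(quantization_spec):
    # Mirrors the shape of the assert in torch/ao/quantization/fx/prepare.py:
    # identity of the class matters, not the structure of the object.
    assert isinstance(quantization_spec, QuantizationSpecA), (
        f"expected QuantizationSpecA, got {type(quantization_spec).__name__}"
    )
    return f"observer({quantization_spec.dtype})"

print(create_observer(QuantizationSpecA("int8")))  # passes the gate
try:
    create_observer(QuantizationSpecB("int8"))     # same fields, wrong class
except AssertionError as e:
    print("AssertionError:", e)
```

The two dataclasses are field-for-field identical, yet only the first passes, which is why mixing quantizer and prepare components from mismatched releases can surface exactly this kind of bare `AssertionError`.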

Versions

0.8.0a0

cc @digantdesai @mcr229 @cbilgin @kimishpatel @jerryzh168

Metadata

Assignees

Labels

module: quantization (Issues related to quantization)
module: xnnpack (Issues related to xnnpack delegation and the code under backends/xnnpack/)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
