Description
🐛 Describe the bug
Under the latest executorch (0.8.0a0), using a quantize function like this:
```python
def quantize(model, example_inputs):
    """This is the official recommended flow for quantization in pytorch 2.0 export"""
    print(f"Original model: {model}")
    quantizer = XNNPACKQuantizer()
    # if we set is_per_channel to True, we also need to add out_variant of
    # quantize_per_channel/dequantize_per_channel
    operator_config = get_symmetric_quantization_config(is_dynamic=True, is_per_channel=True)
    quantizer.set_global(operator_config)
    m = prepare_pt2e(model, quantizer)
```
I get the following error:
```
torch/ao/quantization/fx/prepare.py", line 182, in _create_obs_or_fq_from_qspec
    assert isinstance(quantization_spec, QuantizationSpec)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
The same code works fine with executorch 0.6.0.
Versions
0.8.0a0