Add __str__ to FqnToConfig to make printing more readable (#3323)
* Adds __str__ to FqnToConfig to make printing more readable
Summary:
Adds a `__str__` method to `FqnToConfig` so that printing is more legible.
For a config like:
```python
config = FqnToConfig({
"model.layers.fig.1.1": Float8DynamicActivationFloat8WeightConfig(
granularity=PerRow(),
),
"model.layers.fig.1.3": Float8DynamicActivationFloat8WeightConfig(
granularity=PerRow(),
),
"model.layers.fig.8.3": Float8DynamicActivationFloat8WeightConfig(
granularity=PerRow(),
),
})
```
the output will be:
```
FqnToConfig({
'model.layers.fig.1.1':
Float8DynamicActivationFloat8WeightConfig(activation_dtype=torch.float8_e4m3fn, weight_dtype=torch.float8_e4m3fn, granularity=[PerRow(dim=-1), PerRow(dim=-1)], mm_config=Float8MMConfig(emulate=False, use_fast_accum=True, pad_inner_dim=False), activation_value_lb=None, activation_value_ub=None, kernel_preference=<KernelPreference.AUTO: 'auto'>, set_inductor_config=True, version=2),
'model.layers.fig.1.3':
Float8DynamicActivationFloat8WeightConfig(activation_dtype=torch.float8_e4m3fn, weight_dtype=torch.float8_e4m3fn, granularity=[PerRow(dim=-1), PerRow(dim=-1)], mm_config=Float8MMConfig(emulate=False, use_fast_accum=True, pad_inner_dim=False), activation_value_lb=None, activation_value_ub=None, kernel_preference=<KernelPreference.AUTO: 'auto'>, set_inductor_config=True, version=2),
'model.layers.fig.8.3':
Float8DynamicActivationFloat8WeightConfig(activation_dtype=torch.float8_e4m3fn, weight_dtype=torch.float8_e4m3fn, granularity=[PerRow(dim=-1), PerRow(dim=-1)], mm_config=Float8MMConfig(emulate=False, use_fast_accum=True, pad_inner_dim=False), activation_value_lb=None, activation_value_ub=None, kernel_preference=<KernelPreference.AUTO: 'auto'>, set_inductor_config=True, version=2),
})
```
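For reference, here is a minimal sketch of how such a `__str__` could be written (an illustrative assumption, not the actual torchao implementation):
```python
# Hypothetical sketch only: FqnToConfigSketch is an illustrative stand-in,
# not torchao's actual FqnToConfig implementation.
from dataclasses import dataclass, field
from typing import Any, Dict


@dataclass
class FqnToConfigSketch:
    fqn_to_config: Dict[str, Any] = field(default_factory=dict)

    def __str__(self) -> str:
        # Print each fqn on its own line, with the config repr indented below it.
        lines = ["FqnToConfig({"]
        for fqn, config in self.fqn_to_config.items():
            lines.append(f"    '{fqn}':")
            lines.append(f"        {config!r},")
        lines.append("})")
        return "\n".join(lines)
```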
Also adds a check (and a corresponding test) so that specifying both `fqn_to_config` and
`module_fqn_to_config` raises an error unless the two are equal.
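A minimal sketch of the kind of check this implies, reusing the error message from the diff (the helper name and signature are assumptions, not the actual torchao code):
```python
# Hypothetical helper (name and signature assumed): reject the case where both
# mappings are provided but disagree, reusing the error message from the diff.
def _check_fqn_to_config_args(fqn_to_config, module_fqn_to_config):
    if (
        fqn_to_config is not None
        and module_fqn_to_config is not None
        and fqn_to_config != module_fqn_to_config
    ):
        raise ValueError(
            "`fqn_to_config` and `module_fqn_to_config` are both specified and are not equal!"
        )
```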
Test Plan:
```
pytest test/quantization/test_quant_api.py -k test_fqn_config_module_config_and_fqn_config_both_specified
```
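Roughly, the new test could look like the sketch below (import paths, constructor signature, and structure are assumptions; the real test in test_quant_api.py may differ):
```python
# Hypothetical test sketch; import paths and the FqnToConfig constructor
# signature are assumptions, and the real test may be structured differently.
import pytest

from torchao.quantization import (
    Float8DynamicActivationFloat8WeightConfig,
    FqnToConfig,
    PerRow,
)


def test_fqn_config_module_config_and_fqn_config_both_specified():
    cfg = Float8DynamicActivationFloat8WeightConfig(granularity=PerRow())
    # Two different mappings passed for both arguments should be rejected.
    with pytest.raises(ValueError):
        FqnToConfig(
            fqn_to_config={"model.layers.fig.1.1": cfg},
            module_fqn_to_config={"model.layers.fig.1.3": cfg},
        )
```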
Reviewers:
Subscribers:
Tasks:
Tags:
* fix ruff check

Excerpts from the diff:
- Raises "`fqn_to_config` and `module_fqn_to_config` are both specified and are not equal!" when both are given but differ.
- A comment notes that the code handles BC compatibility with `ModuleFqnToConfig` by ensuring `self.module_fqn_to_config` and `self.fqn_to_config` share the same object.
- A deprecation warning: "Config Deprecation: _default is deprecated and will no longer be supported in a future release. Please see https://github.com/pytorch/ao/issues/3229 for more details."