Profiling SeparableConv1D and SeparableConv2D layers fails #890

Closed
@qberthet

Description

Prerequisites

Please make sure to check off these prerequisites before submitting a bug report.

  • Test that the bug appears on the current version of the master branch. Make sure to include the commit hash of the commit you checked out.
  • Check that the issue hasn't already been reported, by checking the currently open issues.
  • If there are steps to reproduce the problem, make sure to write them down below.
  • If relevant, please include the hls4ml project files, which were created directly before and/or after the bug.

Quick summary

Similarly to #829, profiling a network that uses a SeparableConv1D or SeparableConv2D layer fails because these layer types have two weight tensors and two biases.
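
For context, even a plain Keras SeparableConv1D exposes several weight tensors (depthwise kernel, pointwise kernel, bias), which is what breaks the profiler's assumption of one kernel/bias pair per layer. The snippet below is purely illustrative and not part of the reproduction:

from keras.layers import Input, SeparableConv1D
from keras.models import Model

# A plain Keras SeparableConv1D already carries a depthwise kernel, a pointwise
# kernel and a bias; the hls4ml representation splits this into the two weights
# and two biases mentioned above.
inp = Input(shape=(32, 3))
out = SeparableConv1D(filters=16, kernel_size=3)(inp)
model = Model(inputs=inp, outputs=out)
for w in model.layers[1].weights:
    print(w.name, w.shape)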

Steps to Reproduce

  1. Clone the hls4ml repository
  2. Check out the master branch at commit hash d36e226
  3a. Run the following conversion script to test SeparableConv1D:
from keras.layers import Input
from keras.models import Model
from qkeras import *
import numpy as np  # used below to generate the random test input
import hls4ml

def get_model():
    # Define a dummy model with only one QSeparableConv1D layer
    input_layer = Input(shape=(32, 3))
    layer = QSeparableConv1D(
                filters=16,
                kernel_size=3,
                depthwise_quantizer=quantized_bits(16, 6, alpha=1),
                pointwise_quantizer=quantized_bits(16, 6, alpha=1),
                bias_quantizer=quantized_bits(16, 6, alpha=1)
            )(input_layer)
    model = Model(inputs=input_layer, outputs=layer)
    return model

model = get_model()

model.summary()

config = hls4ml.utils.config_from_keras_model(model, granularity="name")

# Configure the project to be SeparableConv1D compatible
config['Model']['Precision'] = 'ap_fixed<16,6>'
config['Model']['ReuseFactor'] = 1
config['Model']['Strategy'] = 'Latency'

for layer in config['LayerName'].keys():
    config['LayerName'][layer]['Trace'] = True

# Use the Vivado backend (2020.1)
cfg = hls4ml.converters.create_config(backend='Vivado')
cfg['IOType'] = 'io_stream'
cfg['HLSConfig'] = config
cfg['KerasModel'] = model
cfg['OutputDir'] = 'hls4ml_prj'
cfg['Part'] = 'xcku115-flvb2104-2-i'

hls_model = hls4ml.converters.keras_to_hls(cfg)

hls_model.compile()

# Generate random test data
x = np.random.rand(1, 32, 3)

plots = hls4ml.model.profiling.numerical(
    model=model,
    hls_model=hls_model,
    X=x
)

  3b. Run the following conversion script to test SeparableConv2D:

from keras.layers import Input
from keras.models import Model
from qkeras import *
import numpy as np  # used below to generate the random test input
import hls4ml

def get_model():
    # Define a dummy model with only one QSeparableConv2D layer
    input_layer = Input(shape=(32, 32, 3))
    layer = QSeparableConv2D(
                filters=16,
                kernel_size=3,
                depthwise_quantizer=quantized_bits(16, 6, alpha=1),
                pointwise_quantizer=quantized_bits(16, 6, alpha=1),
                bias_quantizer=quantized_bits(16, 6, alpha=1)
            )(input_layer)
    model = Model(inputs=input_layer, outputs=layer)
    return model

model = get_model()

model.summary()

config = hls4ml.utils.config_from_keras_model(model, granularity="name")

for layer in config['LayerName'].keys():
    config['LayerName'][layer]['Trace'] = True

# Configure the project to be SeparableConv2D compatible
config['Model']['Precision'] = 'ap_fixed<16,6>'
config['Model']['ReuseFactor'] = 1
config['Model']['Strategy'] = 'Latency'

# Use the Vivado backend (2020.1)
cfg = hls4ml.converters.create_config(backend='Vivado')
cfg['IOType'] = 'io_stream'
cfg['HLSConfig'] = config
cfg['KerasModel'] = model
cfg['OutputDir'] = 'hls4ml_prj'
cfg['Part'] = 'xcku115-flvb2104-2-i'

hls_model = hls4ml.converters.keras_to_hls(cfg)

hls_model.compile()

# Generate random test data
x = np.random.rand(1, 32, 32, 3)

plots = hls4ml.model.profiling.numerical(
    model=model,
    hls_model=hls_model,
    X=x
)

Expected behavior

The network is profiled correctly, without errors.

Actual behavior

Profiling fails with the following error (the same error for both test cases):

Creating HLS model
Profiling weights (before optimization)
Traceback (most recent call last):
  File "/home/qberthet/devel/qdips/bug1/bug2.py", line 45, in <module>
    plots = hls4ml.model.profiling.numerical(
  File "/home/qberthet/miniconda3/envs/qdips/lib/python3.10/site-packages/hls4ml/model/profiling.py", line 468, in numerical
    data = weights_hlsmodel(hls_model_unoptimized, fmt='summary', plot=plot)
  File "/home/qberthet/miniconda3/envs/qdips/lib/python3.10/site-packages/hls4ml/model/profiling.py", line 232, in weights_hlsmodel
    label = f'{name}/{suffix[iw]}'
IndexError: list index out of range
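
The IndexError comes from weights_hlsmodel, which labels a layer's weights by indexing into a fixed suffix list. Below is a standalone illustration of the failure mode; the two-entry suffix list and the four weight names are assumptions inferred from the traceback and the summary above, not a copy of the hls4ml source:

# Standalone illustration of the failure mode (not hls4ml code): a fixed
# two-entry suffix list cannot label the four weight tensors that a
# SeparableConv layer exposes (two kernels and two biases).
suffix = ['w', 'b']
separable_weights = ['depthwise_kernel', 'depthwise_bias',
                     'pointwise_kernel', 'pointwise_bias']

name = 'q_separable_conv1d'
for iw, w in enumerate(separable_weights):
    label = f'{name}/{suffix[iw]}'  # raises IndexError once iw reaches 2
    print(label)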

Possible fix

A fix, similar to the one in #833, is proposed in PR #891.
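
PR #891 contains the actual patch; purely as a sketch of the direction (not the PR's code), the labelling could fall back to each weight's own name whenever a layer carries more weights than the generic kernel/bias pair:

# Sketch only, not the code from PR #891: fall back to each weight's own name
# when a layer exposes more weights than the generic kernel/bias pair, so that
# SeparableConv layers (two kernels, two biases) get distinct labels.
def weight_labels(layer_name, weight_names):
    suffix = ['w', 'b']
    if len(weight_names) <= len(suffix):
        return [f'{layer_name}/{suffix[i]}' for i in range(len(weight_names))]
    return [f'{layer_name}/{wn}' for wn in weight_names]

print(weight_labels('conv1d', ['kernel', 'bias']))
# ['conv1d/w', 'conv1d/b']
print(weight_labels('separable_conv1d',
                    ['depthwise_kernel', 'depthwise_bias',
                     'pointwise_kernel', 'pointwise_bias']))
# ['separable_conv1d/depthwise_kernel', 'separable_conv1d/depthwise_bias',
#  'separable_conv1d/pointwise_kernel', 'separable_conv1d/pointwise_bias']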
