
SeparableConv1d fail to synthesize #883

Closed
@qberthet

Description


Prerequisites

Please make sure to check off these prerequisites before submitting a bug report.

  • Test that the bug appears on the current version of the master branch. Make sure to include the commit hash of the commit you checked out.
  • Check that the issue hasn't already been reported, by checking the currently open issues.
  • If there are steps to reproduce the problem, make sure to write them down below.
  • If relevant, please include the hls4ml project files, which were created directly before and/or after the bug.

Quick summary

A project using QSeparableConv1D layers fails to synthesize.

Details

While experimenting with the new SeparableConv1D feature, it appears that every attempt to pass the CSynth step fails with the same error.

Steps to Reproduce

  1. Clone the hls4ml repository
  2. Checkout the master branch, with commit hash d36e226
  3. Run the following script (using a Vivado 2020.1 setup):
from keras.layers import Input
from keras.models import Model
from qkeras import *
import hls4ml

def get_model():
    # Define a dummy model with only one QSeparableConv1D layer
    input_layer = Input(shape=(32, 3))
    layer = QSeparableConv1D(
                filters=16,
                kernel_size=3,
                depthwise_quantizer=quantized_bits(16, 6, alpha=1),
                pointwise_quantizer=quantized_bits(16, 6, alpha=1),
                bias_quantizer=quantized_bits(16, 6, alpha=1)
            )(input_layer)
    model = Model(inputs=input_layer, outputs=layer)
    return model

model = get_model()

model.summary()

config = hls4ml.utils.config_from_keras_model(model, granularity="name")

# Configure the project to be SeparableConv1D compatible
config['Model']['Precision'] = 'ap_fixed<16,6>'
config['Model']['ReuseFactor'] = 1
config['Model']['Strategy'] = 'Latency'

# Use the Vivado backend (2020.1)
cfg = hls4ml.converters.create_config(backend='Vivado')
cfg['IOType'] = 'io_stream'
cfg['HLSConfig'] = config
cfg['KerasModel'] = model
cfg['OutputDir'] = 'hls4ml_prj'
cfg['Part'] = 'xcku115-flvb2104-2-i'

hls_model = hls4ml.converters.keras_to_hls(cfg)

hls_model.compile()

hls_model.build(reset=True, csim=False, synth=True)

Expected behavior

The model should synthesize correctly.

Actual behavior

The CSynth step fails with:

In file included from firmware/myproject.cpp:1:
In file included from firmware/myproject.cpp:4:
In file included from firmware/parameters.h:11:
In file included from firmware/nnet_utils/nnet_sepconv1d_stream.h:7:
firmware/nnet_utils/nnet_sepconv_stream.h:82:9: error: no matching function for call to 'depthwise_product'
        depthwise_product<typename data_T::value_type, typename res_T::value_type, CONFIG_T>(data, res, weights, biases);
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
firmware/nnet_utils/nnet_sepconv_stream.h:131:13: note: in instantiation of function template specialization 'nnet::depthwise_mult_buffer<nnet::array<ap_fixed<16, 6, 5, 3, 0>, 3>, nnet::array<ap_fixed<16, 6, 5, 3, 0>, 3>, config2_depthwise>' requested here
            depthwise_mult_buffer<data_T, res_T, CONFIG_T>(data_window, res_pack, res, outputs_ready, weights, biases);
            ^
firmware/nnet_utils/nnet_sepconv1d_stream.h:39:9: note: in instantiation of function template specialization 'nnet::compute_depthwise_output_encoded<nnet::array<ap_fixed<16, 6, 5, 3, 0>, 3>, nnet::array<ap_fixed<16, 6, 5, 3, 0>, 3>, config2_depthwise>' requested here
        compute_depthwise_output_encoded<data_T, res_T, CONFIG_T>(data.read(), data_window, res, res_pack, outputs_ready,
        ^
firmware/nnet_utils/nnet_sepconv1d_stream.h:70:9: note: in instantiation of function template specialization 'nnet::depthwise_conv_1d_encoded_cl<nnet::array<ap_fixed<16, 6, 5, 3, 0>, 3>, nnet::array<ap_fixed<16, 6, 5, 3, 0>, 3>, config2_depthwise>' requested here
        depthwise_conv_1d_encoded_cl<data_T, res_T, CONFIG_T>(data, res, weights, biases);
        ^
firmware/nnet_utils/nnet_sepconv1d_stream.h:112:2: note: in instantiation of function template specialization 'nnet::depthwise_conv_1d_cl<nnet::array<ap_fixed<16, 6, 5, 3, 0>, 3>, nnet::array<ap_fixed<16, 6, 5, 3, 0>, 3>, config2_depthwise>' requested here
 depthwise_conv_1d_cl<data_T, dw_res_T, typename CONFIG_T::depthwise_config>(data, depthwise_res, depthwise_weights,
 ^
firmware/myproject.cpp:33:2: note: in instantiation of function template specialization 'nnet::separable_conv_1d_cl<nnet::array<ap_fixed<16, 6, 5, 3, 0>, 3>, nnet::array<ap_fixed<16, 6, 5, 3, 0>, 3>, nnet::array<ap_fixed<16, 6, 5, 3, 0>, 16>, config2>' requested here
 nnet::separable_conv_1d_cl<input_t, q_separable_conv1d_dw_out_t, result_t, config2>(input_1, layer2_out, d2, p2, z2, b2);
 ^
firmware/nnet_utils/nnet_sepconv_stream.h:11:6: note: candidate template ignored: substitution failure [with data_T = ap_fixed<16, 6, 5, 3, 0>, res_T = ap_fixed<16, 6, 5, 3, 0>, CONFIG_T = config2_depthwise]
void depthwise_product(data_T data[CONFIG_T::kernel_size * CONFIG_T::n_chan], res_T res[CONFIG_T::n_chan],
     ^
2 errors generated.
Compilation of the preprocessed source 'myproject' failed

Possible fix

A one-liner in nnet_sepconv_stream.h seems to fix the issue (#884), but I am not familiar enough with the codebase to be confident that it does not introduce other issues. A rough sketch of what the error seems to point at is shown below.
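For illustration only (the actual one-line change is the one proposed in #884, which I have not reproduced here): the ignored candidate at nnet_sepconv_stream.h:11 sizes its arrays with CONFIG_T::kernel_size * CONFIG_T::n_chan, so the "substitution failure" with CONFIG_T = config2_depthwise suggests the generated depthwise config is missing one of the members that signature names, kernel_size being the obvious suspect for a 1D layer. A purely hypothetical way to satisfy the template, assuming the rest of the generated config is correct, would be:

// Hypothetical sketch only -- NOT the patch from #884.
// The ignored candidate needs every member it names to exist on CONFIG_T:
//   void depthwise_product(data_T data[CONFIG_T::kernel_size * CONFIG_T::n_chan],
//                          res_T  res[CONFIG_T::n_chan], ...);
// so the generated depthwise config (firmware/parameters.h) would have to
// expose a kernel_size member, e.g.:
struct config2_depthwise_sketch {
    static const unsigned n_chan = 3;               // channels of the dummy model above
    static const unsigned filt_width = 3;           // kernel_size=3 in the QSeparableConv1D layer
    static const unsigned kernel_size = filt_width; // member the candidate template appears to expect
    // ... weight_t, bias_t, accum_t and the rest of the generated config unchanged ...
};

Whether the missing member is really kernel_size, or whether the cleaner fix is on the nnet_sepconv_stream.h side (as in #884), is for someone who knows the codebase to judge.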
