Running through the hls4ml-tutorial with the current hls4ml master branch, I see a numerical problem in the hls4ml-evaluated accuracy of the QKeras model in part 4. The accuracy printout gives:
Accuracy baseline: 0.7502650602409638
Accuracy pruned, quantized: 0.7456385542168674
Accuracy hls4ml: 0.20196385542168674
With the most recent release, hls4ml v0.5.0, I don't see this issue for the same model and the same QKeras version. The accuracy printout gives:
Accuracy baseline: 0.7502650602409638
Accuracy pruned, quantized: 0.7456385542168674
Accuracy hls4ml: 0.7455481927710843
The part 1 model (regular float Keras) achieves good accuracy with hls4ml master.
I will try to dig a bit further, but others may encounter this issue.
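For reference, this is roughly how the part 4 accuracy comparison is produced; a minimal sketch rather than the exact notebook code, assuming the tutorial's quantized/pruned model and test arrays are available as `qmodel`, `X_test`, and `y_test`:

```python
# Sketch of the hls4ml accuracy check in tutorial part 4 (variable names assumed).
import numpy as np
import hls4ml
from sklearn.metrics import accuracy_score

# Build an hls4ml config from the QKeras model and convert it.
config = hls4ml.utils.config_from_keras_model(qmodel, granularity='name')
hls_model = hls4ml.converters.convert_from_keras_model(
    qmodel, hls_config=config, output_dir='model_3/hls4ml_prj'
)
hls_model.compile()

# Evaluate both the QKeras model and the hls4ml emulation on the test set.
y_qkeras = qmodel.predict(X_test)
y_hls = hls_model.predict(np.ascontiguousarray(X_test))

print("Accuracy pruned, quantized:",
      accuracy_score(np.argmax(y_test, axis=1), np.argmax(y_qkeras, axis=1)))
print("Accuracy hls4ml:",
      accuracy_score(np.argmax(y_test, axis=1), np.argmax(y_hls, axis=1)))
```

Running this against master vs. v0.5.0 (same environment otherwise) is how the numbers above were obtained.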