
Update of RNN to get Pipeline of 1 and first try at LSTM. #51


Merged: 1 commit, Feb 24, 2018

Conversation

violatingcp

Adds an LSTM and updates the RNN to reach a pipeline interval of 1. The previous static implementation of the RNN, which uses significantly fewer resources, is kept as well. For comparison we have:

| Name | BRAM_18K | DSP48E | FF | LUT | Interval |
|---|---|---|---|---|---|
| Simple RNN Static Loop 5 | 0 | 0 | 419 | 378 | Full (9) |
| Simple RNN Loop 5 | 0 | 0 | 884 | 1057 | 1 |
| Simple RNN Static Loop 10 | 0 | 0 | 799 | 638 | Full |
| Simple RNN Loop 10 | 0 | 0 | 1804 | 2257 | 1 |

Loop 10 denotes an RNN with 10 iterations.
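To illustrate the structure being compared, here is a minimal sketch of a simple RNN recurrence with the loop pipelined at an initiation interval of 1. This is not the hls4ml implementation: the sizes (`N_IN`, `N_STATE`, `N_LOOP`), plain `float` types, and identity activation are all assumptions for readability; the real code uses fixed-point `ap_fixed<>` types, config templates, and a nonlinear activation.

```cpp
#include <cstddef>

// Hypothetical sizes -- the actual implementation is templated.
constexpr std::size_t N_IN = 4;
constexpr std::size_t N_STATE = 8;
constexpr std::size_t N_LOOP = 5; // "Loop 5" = 5 recurrent iterations

// One recurrent step: h' = act(W*x + U*h). Identity activation here;
// the real code applies tanh/relu on the accumulator.
void rnn_step(const float x[N_IN], float h[N_STATE],
              const float W[N_STATE][N_IN], const float U[N_STATE][N_STATE]) {
#pragma HLS INLINE
    float h_new[N_STATE];
    for (std::size_t i = 0; i < N_STATE; i++) {
        float acc = 0.0f;
        for (std::size_t j = 0; j < N_IN; j++) acc += W[i][j] * x[j];
        for (std::size_t j = 0; j < N_STATE; j++) acc += U[i][j] * h[j];
        h_new[i] = acc;
    }
    for (std::size_t i = 0; i < N_STATE; i++) h[i] = h_new[i];
}

// The recurrence loop; PIPELINE II=1 asks HLS to start a new iteration
// every cycle, which duplicates logic (higher FF/LUT) versus the static
// variant that reuses one instance across iterations.
void simple_rnn(const float x[N_LOOP][N_IN], float h[N_STATE],
                const float W[N_STATE][N_IN], const float U[N_STATE][N_STATE]) {
RECURRENCE:
    for (std::size_t t = 0; t < N_LOOP; t++) {
#pragma HLS PIPELINE II=1
        rnn_step(x[t], h, W, U);
    }
}
```

The resource numbers in the table reflect exactly this trade-off: the pipelined loop roughly doubles FF/LUT relative to the static version in exchange for interval 1.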

The first LSTM is a bit slower than the RNN and involves many multiplications. The minimum pipeline interval is currently 4; it would be nice to see if this can be brought down to 1. We might also want to consider a static version with the two states that get passed back and forth.
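The "two states passed back and forth" are the hidden state h and cell state c. A minimal sketch of a static LSTM step, keeping both states as `static` arrays inside the function (mirroring the static RNN variant, so one hardware instance is reused per call) could look like the following. All names, sizes, and the plain-`float` gate math are illustrative assumptions, not the hls4ml code:

```cpp
#include <cmath>
#include <cstddef>

// Hypothetical sizes; a real implementation would be templated on
// fixed-point types and layer dimensions.
constexpr std::size_t N_IN = 4;
constexpr std::size_t N_STATE = 8;
constexpr std::size_t N_Z = N_IN + N_STATE;

// One LSTM step. h (hidden) and c (cell) persist across calls as static
// state, so both states are "passed back" internally rather than streamed
// through the interface.
void lstm_static_step(const float x[N_IN],
                      const float Wf[N_STATE][N_Z], const float bf[N_STATE],
                      const float Wi[N_STATE][N_Z], const float bi[N_STATE],
                      const float Wo[N_STATE][N_Z], const float bo[N_STATE],
                      const float Wc[N_STATE][N_Z], const float bc[N_STATE],
                      float h_out[N_STATE]) {
    static float h[N_STATE] = {0};
    static float c[N_STATE] = {0};
    float z[N_Z]; // concatenated [x, h] input to all four gates
    for (std::size_t j = 0; j < N_IN; j++) z[j] = x[j];
    for (std::size_t j = 0; j < N_STATE; j++) z[N_IN + j] = h[j];
    for (std::size_t i = 0; i < N_STATE; i++) {
        float f = bf[i], in = bi[i], o = bo[i], g = bc[i];
        for (std::size_t j = 0; j < N_Z; j++) {
            f += Wf[i][j] * z[j];   // forget gate pre-activation
            in += Wi[i][j] * z[j];  // input gate pre-activation
            o += Wo[i][j] * z[j];   // output gate pre-activation
            g += Wc[i][j] * z[j];   // candidate cell pre-activation
        }
        f = 1.0f / (1.0f + std::exp(-f));  // sigmoid gates
        in = 1.0f / (1.0f + std::exp(-in));
        o = 1.0f / (1.0f + std::exp(-o));
        c[i] = f * c[i] + in * std::tanh(g); // cell state update
        h[i] = o * std::tanh(c[i]);          // hidden state update
        h_out[i] = h[i];
    }
}
```

The four gate matrix-vector products are why the multiplier count is roughly four times that of the simple RNN, which in turn is what makes reaching interval 1 harder here.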

@violatingcp violatingcp merged commit 43ee047 into ejk/recursive Feb 24, 2018
github-actions bot pushed a commit that referenced this pull request Mar 19, 2023
* fix uram divide by 0

* add test

* fix parsing of vsynth in 2020.1; add test

* Update test_report.py
jmduarte added a commit that referenced this pull request Mar 24, 2023
* print_vivado_report function for fancier reports

* Fancy reports (#51)

* fix uram divide by 0

* add test

* fix parsing of vsynth in 2020.1; add test

* Update test_report.py

* exclude pregenerated reports

---------

Co-authored-by: Javier Duarte <[email protected]>
github-actions bot pushed a commit that referenced this pull request Mar 25, 2023
* Add quantized sigmoid, fix quantized tanh for QKeras (#569)

* snapshot of beginnings

* make version that works for Vivado, error for Quartus

* Change order of precision from quantizer

* add hard sigmoid and tanh

* fix setting of slope and shift type

* revert config parsing--seems a little strange but works

* fix hard_sigmoid and hard_tanh for streaming

* update pytest for quantized tanh and sigmoid

* remove inadvertently included matoplotlib

* add special case when W == min_width.

* fix merge of main

* Go back to having AP_TRN and AP_WRP as defaults

* handle case when use_real_tanh is not defined

* make the activations use AP_RND_CONV (and AP_SAT) by default

* remove use of use_real_tanh in test since not always supported

* fix incorrect default types for Keras (not QKeras) hard_sigmoid

* Mostly fix up things for Quartus

* get rid of intermediate cast

* fix an i++ compilation issue

* Quartus seems to not like ac_fixed<1,0,false>, so make 2 bits.

* fix activation quantizer

* make sat, round defeult activation parameters, don't need to set again

* Make the slope and shift not be configurable for HardActivation

* some pre-commit fixes

* pre-commint //hls to // hls fixes

* update CI version

* fixes for parsing errors from pre-commits

* remove qactivation from list of activation_layers

* print_vivado_report function for nicer reports (#730)

* print_vivado_report function for fancier reports

* Fancy reports (#51)

* fix uram divide by 0

* add test

* fix parsing of vsynth in 2020.1; add test

* Update test_report.py

* exclude pregenerated reports

---------

Co-authored-by: Javier Duarte <[email protected]>

---------

Co-authored-by: Jovan Mitrevski <[email protected]>
Co-authored-by: Vladimir <[email protected]>
calad0i pushed a commit to calad0i/hls4ml that referenced this pull request Jul 1, 2023
* print_vivado_report function for fancier reports

* Fancy reports (fastmachinelearning#51)

* fix uram divide by 0

* add test

* fix parsing of vsynth in 2020.1; add test

* Update test_report.py

* exclude pregenerated reports

---------

Co-authored-by: Javier Duarte <[email protected]>
calad0i pushed a commit to calad0i/hls4ml that referenced this pull request Jul 1, 2023
* Add quantized sigmoid, fix quantized tanh for QKeras (fastmachinelearning#569)

* snapshot of beginnings

* make version that works for Vivado, error for Quartus

* Change order of precision from quantizer

* add hard sigmoid and tanh

* fix setting of slope and shift type

* revert config parsing--seems a little strange but works

* fix hard_sigmoid and hard_tanh for streaming

* update pytest for quantized tanh and sigmoid

* remove inadvertently included matoplotlib

* add special case when W == min_width.

* fix merge of main

* Go back to having AP_TRN and AP_WRP as defaults

* handle case when use_real_tanh is not defined

* make the activations use AP_RND_CONV (and AP_SAT) by default

* remove use of use_real_tanh in test since not always supported

* fix incorrect default types for Keras (not QKeras) hard_sigmoid

* Mostly fix up things for Quartus

* get rid of intermediate cast

* fix an i++ compilation issue

* Quartus seems to not like ac_fixed<1,0,false>, so make 2 bits.

* fix activation quantizer

* make sat, round defeult activation parameters, don't need to set again

* Make the slope and shift not be configurable for HardActivation

* some pre-commit fixes

* pre-commint //hls to // hls fixes

* update CI version

* fixes for parsing errors from pre-commits

* remove qactivation from list of activation_layers

* print_vivado_report function for nicer reports (fastmachinelearning#730)

* print_vivado_report function for fancier reports

* Fancy reports (fastmachinelearning#51)

* fix uram divide by 0

* add test

* fix parsing of vsynth in 2020.1; add test

* Update test_report.py

* exclude pregenerated reports

---------

Co-authored-by: Javier Duarte <[email protected]>

---------

Co-authored-by: Jovan Mitrevski <[email protected]>
Co-authored-by: Vladimir <[email protected]>
GiuseppeDiGuglielmo pushed a commit that referenced this pull request Oct 13, 2023
Fixed tanh pwl for non-stream example