
Update 20180920-unify-rnn-interface.md #81

Merged 1 commit on Apr 17, 2019
2 changes: 1 addition & 1 deletion rfcs/20180920-unify-rnn-interface.md
@@ -286,7 +286,7 @@ It also has few differences from the original LSTM/GRU implementation:
incompatible with the standard LSTM/GRU. There are internal effort to convert the weights between
a CuDNN implementation and normal TF implementation. See CudnnLSTMSaveable.
1. CuDNN does not support variational recurrent dropout, which is a quite important feature.
-1. CuDNN implementation only support TAN activation which is also the default implementation in the
+1. CuDNN implementation only support TANH activation which is also the default implementation in the
LSTM paper. The Keras one support more activation choices if user don't want the default behavior.

With that, it means when users specify their LSTM/GRU layer, the underlying implementation could be
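
For context on the dispatch behavior this RFC describes, here is a minimal sketch, assuming TensorFlow 2.x and the unified `tf.keras.layers.LSTM` API: the layer can use the fused cuDNN kernel only when its arguments stay within what cuDNN supports (tanh activation, no recurrent dropout), and otherwise it falls back to the generic implementation.

```python
import tensorflow as tf

# Default arguments (activation="tanh", recurrent_activation="sigmoid",
# recurrent_dropout=0.0) stay within the cuDNN constraints, so on a GPU
# this layer may dispatch to the fused cuDNN kernel.
fast_lstm = tf.keras.layers.LSTM(64)

# A non-default activation falls outside what cuDNN supports, so this
# layer uses the generic (non-cuDNN) implementation.
generic_lstm = tf.keras.layers.LSTM(64, activation="relu")

# Variational recurrent dropout is also unsupported by cuDNN, so this
# layer likewise takes the generic code path.
dropout_lstm = tf.keras.layers.LSTM(64, recurrent_dropout=0.2)
```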