forked from tensorflow/addons
Rebase against upstream #1
Merged
Conversation
* fix overflow of int32
* fix links
* add missing import for lamb
* reorder
* Make the first dimension `None` to support a variable batch size.
* Add a test case to check compatibility of WeightNormalization with TimeDistributed.
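A hedged sketch of the compatibility this enables (the wrapped Dense layer and the shapes are illustrative, not from this commit):

```python
import tensorflow as tf
import tensorflow_addons as tfa

# With the first (batch) dimension left as None, WeightNormalization can sit
# inside TimeDistributed, which calls the wrapper with a dynamic batch size.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(5, 4)),  # (timesteps, features); batch is None
    tf.keras.layers.TimeDistributed(
        tfa.layers.WeightNormalization(tf.keras.layers.Dense(8))
    ),
])
model.predict(tf.random.normal((2, 5, 4)))
```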
* Add cyclical learning rate schedulers
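A minimal usage sketch, assuming the tfa.optimizers.TriangularCyclicalLearningRate schedule added here; the rates and step size below are illustrative:

```python
import tensorflow as tf
import tensorflow_addons as tfa

# The learning rate oscillates between the initial and maximal values with a
# triangular waveform, completing one full cycle every 2 * step_size steps.
clr = tfa.optimizers.TriangularCyclicalLearningRate(
    initial_learning_rate=1e-4,
    maximal_learning_rate=1e-2,
    step_size=2000,
)
optimizer = tf.keras.optimizers.SGD(learning_rate=clr)
```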
* Build data_init layer under a name_scope. The original wrapped layer and the non-trainable layer created for data-dependent initialization had a clash in their namespaces; creating the second layer under a name scope of 'data_dep_init' fixes the issue.
* Lint
* Add test for saving
* Use create_tempfile
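An illustrative sketch of the pattern the fix describes, not the actual TFA source; the Dense layer here is a hypothetical stand-in for the layer built for data-dependent initialization:

```python
import tensorflow as tf

# Building the non-trainable helper layer under its own name scope keeps its
# names from clashing with the wrapped layer's namespace.
with tf.name_scope('data_dep_init'):
    data_init_layer = tf.keras.layers.Dense(8, trainable=False)
```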
* sharding over pixels
* robust test on CPU
* fix typo
* add mcc py
* update & test file
* test file revision
* indentation
* revise
* build file
* change dtype
* remove type
* correct numerator multiplication
* code format check
* format
* minor
* minor
* modify doc
* sample weight
* import
* layers
* avoid using get_shape
* multi-class for true_negative
* correct true_negative
* update README and add test case
* minor fixing of multi-lines
* output dtype
* move docstring to exact place and keep data type as optional
* minor change using tf api
* revision
* tf api and minor revision
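A hedged usage sketch of the metric this series adds, assuming the tfa.metrics.MatthewsCorrelationCoefficient API; the binary labels are made up:

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Made-up binary labels and predictions, shaped (samples, num_classes).
y_true = tf.constant([[1.0], [1.0], [1.0], [0.0]])
y_pred = tf.constant([[1.0], [0.0], [1.0], [1.0]])

mcc = tfa.metrics.MatthewsCorrelationCoefficient(num_classes=1)
mcc.update_state(y_true, y_pred)
print(mcc.result().numpy())  # a correlation in [-1, 1]
```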
* fix keras model compile
* checkout pylint change
* disable pylint
* make linter happy
Note that during the transition period tstring is typedef'ed to std::string. See: tensorflow/community#91
* Don't build parse_time till TF r2.1
* Fix TODO, and BUILD
* Remove .py file from build target
* Add GIoU loss
* Refactor GIoU calculation
* Fix doc
* Update README.md
* Format code
* Refactor calculation
* fix document
* fix readme
* fix docs
* Change to official api
* format code
* enhance robustness
* add box format
* add keras test
* add one-bbox test
* add test case for different shapes
* format code
* fix docs
* make private
* add integer test
* format code
* change expression
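A short sketch of the loss this series adds, assuming the tfa.losses.GIoULoss API with boxes encoded as [y_min, x_min, y_max, x_max]; the coordinates are made up:

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Generalized IoU loss between two made-up batches of boxes.
gl = tfa.losses.GIoULoss()
boxes_true = tf.constant([[4.0, 3.0, 7.0, 5.0], [5.0, 6.0, 10.0, 7.0]])
boxes_pred = tf.constant([[3.0, 4.0, 6.0, 8.0], [14.0, 14.0, 15.0, 15.0]])
loss = gl(boxes_true, boxes_pred)  # scalar mean GIoU loss over the batch
```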
* add resampler kernel
* add register op
* namespace and register
* python format
* headers and cleanup
* sanity cleanup
* readme update
* alphabetic order
* gpu test & minor revision
* comment on wrapping part
* cpu test
* miscellaneous fixes
* minor fix
* line removal
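A hedged sketch of the op this kernel exposes, assuming the tfa.image.resampler API; shapes and values are illustrative:

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Bilinearly samples `data` at the floating-point (x, y) points in `warp`.
data = tf.random.normal((1, 4, 4, 3))        # NHWC batch of images
warp = tf.random.uniform((1, 5, 2)) * 3.0    # sample points in pixel space
output = tfa.image.resampler(data, warp)     # shape (1, 5, 3)
```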
* test in eager
* hamming scripts
* adding test
* adding keras test
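A minimal sketch, assuming these scripts landed as the tfa.metrics.HammingLoss metric; the multilabel data is made up:

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Fraction of labels predicted incorrectly, thresholding scores at 0.8.
metric = tfa.metrics.HammingLoss(mode='multilabel', threshold=0.8)
y_true = tf.constant([[1, 0, 1, 0], [0, 1, 0, 1]], dtype=tf.float32)
y_pred = tf.constant([[0.9, 0.1, 0.8, 0.2], [0.2, 0.9, 0.1, 0.9]])
metric.update_state(y_true, y_pred)
print(metric.result().numpy())
```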
* fix distributed training error and NaN result bugs
* reformat py file
* Fix a bug where the GroupNormalization layer was normalizing over the second axis instead of the selected axis.
* Update tests (which seem to be irrelevant anyway)
* Lint
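A small sketch of the behavior after the fix (assuming the tfa.layers.GroupNormalization API): the `axis` argument, not a hard-coded second axis, selects the dimension that is split into groups and normalized:

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Normalizes over the selected channel axis, split into 4 groups of 4.
layer = tfa.layers.GroupNormalization(groups=4, axis=-1)
x = tf.random.normal((2, 8, 8, 16))  # NHWC input
y = layer(x)
```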
* add missing bracket, edit float32 type
* minor docstring corrections
* remove duplicate
* correct result
* Build using bazel > 1.0 and CUDA 10.1
* Fix toolchain name
* Update bazel version for macOS build
* Remove depset iteration for bazel 1.x+
* Working 10.1 build
* Run bazel tests for a single Python environment; otherwise in bazel > 1.0 we would need to install tf-nightly on py2 and py3 in order to test.
* Make CI testing choose the default Python on PATH
* Touch up README
* Correct bazel version being run
* Set CUDA env variable for travis build
* Correct spacing
* Faster tests locally.
* add typing to skip_gram ops
* clean up typing info
* minor
* replace lookup table
* run black on rrelu, filter, and their associated tests
* correction
* run black on layers
* modify pyproject
* Make the tests faster.
* fix kappa
* add more tests and rename regression variable
* add cross_entropy test for binary class model
* Change default values of Yogi hyperparameters. It was found that initial_accumulator_value=1e-6 works better for a range of tasks, so the default is switched.
* Update tests to reflect the change in default values
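A one-line sketch of the new default described above, assuming the tfa.optimizers.Yogi signature; passing the value explicitly is shown only for emphasis:

```python
import tensorflow_addons as tfa

# initial_accumulator_value now defaults to 1e-6, found to work better
# across a range of tasks.
optimizer = tfa.optimizers.Yogi(learning_rate=0.01,
                                initial_accumulator_value=1e-6)
```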
* All the code in configure.py is now in a function.
* CI: test release branches for future cuts from master
* fix mistake
* Update to optionally run without buildkit
* Remove run docker script
* Respect the DOCKER_BUILDKIT env variable.
* Instructions in the pre-commit.
* run black on files that are not being worked on
* Add a backport bot.
* Obfuscating the token.
* type fix
qlzh727 pushed a commit that referenced this pull request on Dec 10, 2020:
* initial setup; need to build tests
* build some tests; need to test them
* fixed typo
* created first test
* created first test
* accidentally messed up another file
* accidentally messed up another file
* accidentally messed up another file
* added run_all_distributed
* fixed formatting
* trying to fix tests not running on GitHub CI
* realized that I should probably add the new optimizer files to the build and init
* added typeguard and docstring
* removed run_all_distributed
* graph and eager testing for SGD
* reformatted
* added distributed tests
* removed distributed tests
* reverted discriminative layer grad adjust back to apply_gradients
* added distributed tests with one-time virtual device init
* increased tolerance for distributed; added comments explaining tests
* changed how distributed is recognized for increasing tolerance
* Redesigned Logic into Optimizer Wrapper (#1)
* redesigned methodology to use multiple optimizers (one per unique LR) and pass grads to these multiple optimizers; should allow complex optimizers to behave properly
* adjusted behavior of resource apply to only return the op if the lr_mult matches the lr_mult of the optimizer; should only return one op for each var
* updated init file; changed training config
* removed variable position and added some more comments
* removed grouped variables as unnecessary
* reformatted
* updated documentation; explicitly defined serialization as not supported
* added typecheck for name
* added typecheck for name
* fixed blank line at end of init file
* realized no new line; meant to add new line. Guessing that the build file needs to be in alpha order?
* ran buildifier
* fixed accidentally affecting moving average
* changed print to logging.info
* changed print to logging.info
* Revert "changed print to logging.info" (this reverts commit 3fa5e19)
* added tutorial; the tutorial doesn't import from tfa, so it may need to be removed from the PR. Please let me know
* refactored to use static method; refactored to use getattr; updated warning on not using lr_mult; expanded on some docstrings
* updated the usage of lr_mult in variables
* renamed discriminative wrapper to disclayeropt
* added note to dissuade directly calling apply_gradients
* updated toy_cnn to use tempdir and no longer call context.eager; implemented toy_rnn function with the same flow as toy_cnn
* added toy_rnn and SGD to the test permutations
* refactored permutations and train results into private fns
* reformatted files and fixed flake8 issues; fixed bad references when lr_mult was changed
* added missing functions in prep for tests
* updated assign lr_mult and explained further why; refactored get_lowest_layers to assign sublayers; explained recursively assigning sublayers better
* forgot to run black, so ran it to reformat
* specified input shape for RNN
* increased size of test; temporarily removed SGD opt. Doubling the opts doubles the number of tests to run, so just need to see how long this one takes
* removed toy_rnn for now
* changed back to medium; maybe large was not actually increasing runtime
* fixed input layer
* fixed input layer being in the wrong place
* virtual device modification issue
* fixed incorrect usage of lr_mult
* added comments explaining the tests better; added toy_rnn for testing
* added new test; fixed toy_rnn initialization
* fixed typo
* added input shape so that the pretrained RNN generates weights
* changed test to allow the head to learn; it should move the loss better
* reformatted
* fixed test for variable assignment; added get_config and from_config
* reformatted
* fixed layer references from 1 to 0, because the input layer isn't counted as an actual layer in the layer list
* reformatted
* increased lr and epochs because learning was happening but the assert tolerance was too low
* attempting to use run_distributed from test utils
* removed tutorial
* switched to alternative distributed training method
* trying to use run_distributed without graph and eager
* trying to use run_distributed
* it seems that doing any tensor stuff before tf.test.main creates the issue; changed models to auto-check whether weights exist and create or load them
* forgot to return a model on first run of model fn
* create model weights on init
* changed how args are passed for testcase
* changed how args are passed for testcase
* try fix init
* trying to init weights on model properly
* trying to init weights on model properly
* just trying all the possibilities
* trying to fix weights setup
* expanded some comments for some tests
* fixed some docstrings and expanded on some comments
* reformatted files; expanded on many comments and added full stops; fixed get/from_config based on OptimizerV2; added model checkpoint test
* capitalized comments properly
* removed SGD, reduced size of training inputs
* simplified checkpoint name
* reformatted
* remove run tests in notebook
* updated README.md; fixed indent for __init__; added test for from_config and to_config
* fixed formatting
* removed distributed tests and added a warning if the optimizer is initialized within a strategy scope
* renamed test_wrap to wrap_test because pytest thought it was a test
* converting tests into the pytest framework
* converted tests and parameterized
* cleaned up code
* added additional checks and docstring for changes in lr multiplier during training
* changed comment
* Simplified discriminative layer training by using a multi-optimizer wrapper class. Removed old tests and added new tests conforming to the pytest standard.
* Refactored code using black and flake8
* updated init file
* fixed typeguard error and usage of private/experimental api
* restructured wrapper serialization and removed unnecessary components
* expanded on docstr and added repr
* cleaned up docstrings, added assertion tests, and added an explicit test for only the serialization
* ran black and flake8
* fixed docstring

Co-authored-by: gabrieldemarmiesse <[email protected]>
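A hedged sketch of the discriminative layer training this commit describes, assuming the multi-optimizer wrapper it was simplified into is the tfa.optimizers.MultiOptimizer API; the model and learning rates are illustrative:

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Illustrative model: a "base" layer trained slowly and a "head" trained fast.
base = tf.keras.layers.Dense(64, activation='relu', name='base')
head = tf.keras.layers.Dense(10, name='head')
model = tf.keras.Sequential([tf.keras.Input(shape=(32,)), base, head])

# One (optimizer, layers) pair per unique learning rate; the wrapper routes
# each layer's gradients to its own optimizer.
optimizer = tfa.optimizers.MultiOptimizer([
    (tf.keras.optimizers.Adam(learning_rate=1e-5), base),
    (tf.keras.optimizers.Adam(learning_rate=1e-3), head),
])
model.compile(
    optimizer=optimizer,
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
```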