Quality-of-Life for Google Colab #3
Thanks for this @woctezuma! This will probably come in handy for me (even if I'm not running on colab). Some questions:
I added:

stylegan2-ada-pytorch/training/training_loop.py, lines 104 to 105 (commit d4b2afe)
stylegan2-ada-pytorch/training/training_loop.py, lines 296 to 303 (commit d4b2afe)

There are two observations to be made in the code released by Nvidia:
stylegan2-ada-pytorch/train.py, lines 313 to 315 (commit d4b2afe)
stylegan2-ada-pytorch/train.py, lines 154 to 161 (commit d4b2afe)
stylegan2-ada-pytorch/train.py, lines 192 to 193 (commit d4b2afe)
I don't think it matters here, due to point n°1 above.
This is your decision here. You could use either
On a side note, beware that the parameters which are automatically set are not that great. For instance:
cf. the README which I quote below:
You will need some computational resources to explore the parameter space.
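To illustrate what exploring the parameter space by hand looks like instead of trusting the auto-set values, one could override the defaults explicitly. This is only a sketch: the flag names come from `train.py` in stylegan2-ada-pytorch, but the paths and values are placeholders to sweep, not recommendations.

```shell
# Sketch: set the key hyperparameters explicitly instead of relying
# on the values chosen by the auto config.
# --gamma (R1 regularization weight) is usually the first flag to sweep.
# The dataset path and all numeric values below are placeholders.
python train.py \
  --outdir=./training-runs \
  --data=./datasets/my-dataset.zip \
  --gpus=1 \
  --cfg=auto \
  --gamma=10 \
  --kimg=5000 \
  --snap=10
```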
Thanks a lot for the explanation and tips @woctezuma.
It has already cost me 10,000 USD, and still NO CONVERGENCE!!!
Don't spend so much money! There is no guarantee that it will work! It depends on the data, the parameters, etc.
This branch is very helpful, thank you 🙏 @woctezuma How would you tune this for transfer learning? I want to "resume" from an existing network file but use a new dataset. Do I then need to adjust the augmentation strength or anything else? Any recommendations?
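For context on the transfer-learning question above: in stylegan2-ada-pytorch, transfer learning is normally started by pointing `--resume` at a pretrained pickle while training on the new dataset, and `--freezed` can freeze the highest-resolution discriminator layers. This is a sketch with placeholder paths and values, not a recommendation from the branch author.

```shell
# Sketch: resume from a pretrained network while training on a new dataset.
# --resume accepts a local .pkl path or a preset name such as ffhq256.
# --freezed=N freezes the N highest-resolution discriminator layers
# (Freeze-D), which can help on small datasets.
# Paths and numeric values are placeholders.
python train.py \
  --outdir=./transfer-runs \
  --data=./datasets/new-dataset.zip \
  --gpus=1 \
  --resume=ffhq256 \
  --freezed=2 \
  --kimg=1000
```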
Force-pushed from ab29705 to 362752f.
…mple_gradfix to the new API. Thanks @timothybrooks for the fix! For NVlabs#145.
Adapt to the newer _jit_get_operation API that changed in pytorch/pytorch#76814. For NVlabs#188, NVlabs#193.
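The _jit_get_operation change mentioned in that commit boils down to a return-type difference between PyTorch versions. A compatibility shim could normalize it as below; this is my sketch of the idea, not the exact code from the fix, and `unwrap_jit_operation` is a hypothetical helper name.

```python
def unwrap_jit_operation(result):
    """Normalize the return value of torch._C._jit_get_operation.

    Older PyTorch versions return the operation directly; after
    pytorch/pytorch#76814 a (operation, overload_names) tuple is
    returned instead. This helper accepts either shape and always
    returns just the operation.
    """
    if isinstance(result, tuple):
        op, _overload_names = result
        return op
    return result
```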
Force-pushed from c6dfbc0 to 2c507a6.
Hello,
I know you don't accept pull requests. However, this could be of interest to others who want to run the code on Google Colab.
To put it in a nutshell, I have ported my changes from NVlabs/stylegan2-ada#6 and a bit more:
- resuming from the latest `.pkl` file with the command-line argument `--resume=latest`,
- `kimg`,
- `auto_norp` to replicate the `auto` config without EMA rampup,
- `--cfg_map`,
- `--cifar_tune`.

I have tested the training with my changes in one of my repositories, in order to fix bugs which I could have introduced. It seems fine.
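For context on the `--resume=latest` behaviour: one way to locate the most recent snapshot is to parse the kimg counter out of the snapshot filenames. The helper below is a hypothetical sketch by me (assuming the usual `network-snapshot-XXXXXX.pkl` naming used by the training loop), not the code from this branch.

```python
import glob
import os
import re

def find_latest_pkl(outdir):
    # Collect all snapshot pickles under the training output directory,
    # including those nested inside per-run subdirectories.
    pattern = os.path.join(outdir, '**', 'network-snapshot-*.pkl')
    pkls = glob.glob(pattern, recursive=True)

    def kimg_count(path):
        # The kimg counter is encoded in the filename,
        # e.g. network-snapshot-000250.pkl -> 250.
        m = re.search(r'network-snapshot-(\d+)\.pkl$', path)
        return int(m.group(1)) if m else -1

    # Return the snapshot with the highest kimg count, or None if
    # no snapshot exists yet.
    return max(pkls, key=kimg_count, default=None)
```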