Thread safety #214
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
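To illustrate the concern, here is a minimal sketch in plain Python (the list below is a hypothetical stand-in for a module-level registry like `set_keep`, not the library's actual structure) showing how shared global state breaks under concurrent model construction:

```python
import threading

# Hypothetical stand-in for a module-level registry such as TL's set_keep.
layers_name_list = []

def build_model(model_id):
    # Every thread appends to the same module-level list, so names from
    # concurrently built models interleave and can collide.
    for i in range(3):
        name = "dense_%d" % i
        if name in layers_name_list:
            raise ValueError("layer name %r already used" % name)
        layers_name_list.append(name)

threads = [threading.Thread(target=build_model, args=(k,)) for k in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# One of the threads will typically hit the ValueError, because the
# name registry is shared process-wide rather than per-model/per-session.
```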
Comments
+1
Hi, if we train different models in different threads, we should define the models with different names; then the global name list in … Could you tell me of a case where the global variables would affect each other in different threads? If it is necessary, to manage those global variables I may use …
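As a concrete (hypothetical) sketch of the suggestion above, assuming the TensorLayer 1.x API, each model can suffix its layer names with a per-model identifier so entries in the global name list never collide:

```python
import tensorflow as tf
import tensorlayer as tl

def build_model(x, model_id):
    # Suffix every layer name with a per-model identifier so that two
    # models (e.g. built in two threads) never register the same name.
    net = tl.layers.InputLayer(x, name='input_%d' % model_id)
    net = tl.layers.DenseLayer(net, n_units=64, act=tf.nn.relu,
                               name='dense1_%d' % model_id)
    return net

x = tf.placeholder(tf.float32, shape=[None, 784])
net_a = build_model(x, model_id=0)
net_b = build_model(x, model_id=1)  # distinct names, no collision
```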
Quoting my comment from #207 (comment):
To summarize, I think it might be cleaner not to worry about layer name collisions and just leave it to TF's variable scopes. For variable sharing, add a …
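For reference, delegating naming and sharing to TF's variable scopes looks roughly like this in TF 1.x (a sketch only, not the API change proposed in #207; `tf.AUTO_REUSE` requires TF >= 1.4):

```python
import tensorflow as tf

def dense(x, n_units, scope):
    # With tf.AUTO_REUSE, the variables are created on the first call
    # and reused by any later call that opens the same scope name.
    with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):
        w = tf.get_variable('W', shape=[int(x.get_shape()[-1]), n_units])
        b = tf.get_variable('b', shape=[n_units],
                            initializer=tf.zeros_initializer())
    return tf.matmul(x, w) + b

x = tf.placeholder(tf.float32, shape=[None, 32])
y1 = dense(x, 16, scope='shared_dense')  # creates shared_dense/W, shared_dense/b
y2 = dense(x, 16, scope='shared_dense')  # reuses the same variables
```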
Hi, I just found a way to train different model settings in one script; hope it helps in some cases:

```python
for ...:  # loop over different hyper-parameter settings
    with tf.Graph().as_default() as graph:  # clear all variables of TF
        tl.layers.clear_layers_name()       # clear all layer names of TL
        sess = tf.InteractiveSession()
        # train a model here
```
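Expanded into a self-contained loop (the layer names and hyper-parameter values below are placeholders, assuming the TensorLayer 1.x API):

```python
import tensorflow as tf
import tensorlayer as tl

for learning_rate in [0.01, 0.001]:      # hypothetical hyper-parameter sweep
    with tf.Graph().as_default():        # fresh graph: no stale TF variables
        tl.layers.clear_layers_name()    # reset TL's global layer-name list
        x = tf.placeholder(tf.float32, shape=[None, 784])
        net = tl.layers.InputLayer(x, name='input')
        net = tl.layers.DenseLayer(net, n_units=10, name='output')
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            # ... train this model with learning_rate ...
```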
I would actually support the raised issues. What are the key reasons to keep in memory and manage a list of layer names? All of this should already be managed by TF internally. Maybe I don't understand something, but I don't see the benefits of doing this.
I think this issue has been fixed.