
Thread safety #214


Closed
ariwaranosai opened this issue Sep 27, 2017 · 6 comments

Comments


ariwaranosai commented Sep 27, 2017

TensorLayer uses some global variables, such as set_keep, at the package level. This makes TensorLayer unsafe in multi-threaded environments.

Can we manage those global variables with some kind of session?
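To make the hazard concrete, here is a minimal sketch (plain Python, not actual TensorLayer code) of the check-then-act race that module-level state such as a global layer-name list can hit when two threads build models at the same time; `layer_names` and `register` are made-up names for illustration:

```python
import threading

layer_names = []  # stands in for a module-level name list such as the one in tl.layers

def register(name):
    # check-then-act on shared state: another thread can run between the two lines,
    # so both threads may pass the check and the "unique" list ends up with duplicates
    if name not in layer_names:
        layer_names.append(name)

threads = [threading.Thread(target=register, args=("dense1",)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(layer_names)  # may print ['dense1', 'dense1'] depending on thread interleaving
```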


haiy commented Sep 28, 2017

+1

@zsdonghao
Member

Hi, if we train different models in different threads, we should define the models with different names; then the entries in the global name list in tl.layers would not affect each other, right?

Could you tell me about a case where the global variables affect each other across threads?

If it is necessary to manage those global variables, I may use a tl.global_dict as follows:

{ 'sess_id' : { 'layer_name_list' : [...], 'name_reuse': ... } }
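As a rough illustration of that layout (all names here are hypothetical, not an existing TensorLayer API), the idea would be to look state up per session id instead of keeping it in module-level variables:

```python
global_dict = {}

def get_state(sess_id):
    # create the per-session entry on first use; each session gets its own
    # layer-name list and name_reuse flag instead of sharing module-level state
    return global_dict.setdefault(sess_id, {'layer_name_list': [], 'name_reuse': False})

state_a = get_state('sess_a')
state_b = get_state('sess_b')
state_a['layer_name_list'].append('dense1')
state_b['layer_name_list'].append('dense1')  # no clash: the two lists are independent
print(state_a['layer_name_list'], state_b['layer_name_list'])
```

In a real implementation, access to global_dict itself would likely still need a lock (or to be keyed by thread id) to stay safe when sessions are created concurrently.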

Contributor

tomtung commented Oct 11, 2017

Quoting my comment from #207 (comment):

I'm also not sure why we need global variables to prevent layer name collisions, since the use of TensorFlow's variable scopes already handles that.

Maybe it's also easier to just pass reuse as a parameter to the constructors of layers, instead of maintaining it as the global state set_keep['name_reuse'], which is barely respected anywhere (except by TimeDistributedLayer) anyway.

To summarize, I think it might be cleaner not to worry about layer name collisions and just leave it to TF's variable scopes. For variable sharing, add a reuse boolean flag to the constructors of all layers that support it. Currently, for example, it doesn't seem possible to have two DenseLayer instances sharing the same W and b, but it should be really easy with the extra constructor argument.
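For reference, here is a sketch of the kind of sharing described above, written with plain TF 1.x variable scopes rather than TensorLayer's actual DenseLayer signature (the `dense` helper and its `reuse` parameter are hypothetical):

```python
import tensorflow as tf

def dense(x, n_units, name, reuse=False):
    # the reuse flag decides whether tf.get_variable creates W/b or returns the existing ones
    with tf.variable_scope(name, reuse=reuse):
        n_in = int(x.get_shape()[-1])
        W = tf.get_variable('W', shape=[n_in, n_units])
        b = tf.get_variable('b', shape=[n_units], initializer=tf.zeros_initializer())
        return tf.matmul(x, W) + b

x1 = tf.placeholder(tf.float32, [None, 784])
x2 = tf.placeholder(tf.float32, [None, 784])
y1 = dense(x1, 256, name='dense1')              # creates dense1/W and dense1/b
y2 = dense(x2, 256, name='dense1', reuse=True)  # reuses the same W and b
```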

@zsdonghao
Member

Hi, I just found a way to train different model settings in one script; hope it helps in some cases.

for hyper_params in hyper_param_settings:   # loop over the different hyper-parameter settings
    with tf.Graph().as_default() as graph:  # fresh graph per run, clears all TF variables
        tl.layers.clear_layers_name()       # clear all layer names kept by TL
        sess = tf.InteractiveSession()
        # build and train a model here

@DEKHTIARJonathan
Member

I would actually support the issues raised. What are the key reasons for keeping in memory and managing a list of layer names? All of this should already be managed by TF internally. Maybe I don't understand something, but I don't see the benefit of doing this.
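For what it's worth, here is a quick check (plain TF 1.x, just for illustration) of what TF already enforces on its own: duplicate op names are auto-uniquified by the graph, and duplicate variables are refused unless the scope is explicitly opened with reuse=True:

```python
import tensorflow as tf

a = tf.constant(1.0, name='c')   # op name: 'c'
b = tf.constant(2.0, name='c')   # TF renames it to 'c_1' automatically
print(a.op.name, b.op.name)      # -> c c_1

with tf.variable_scope('fc'):
    w = tf.get_variable('W', shape=[3, 3])
try:
    with tf.variable_scope('fc'):
        w2 = tf.get_variable('W', shape=[3, 3])  # raises: variable fc/W already exists
except ValueError as e:
    print(e)
```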

@zsdonghao zsdonghao changed the title change global variables and make tensorlayer thread safety Thread Safety & Logging Feb 19, 2018
@luomai luomai changed the title Thread Safety & Logging Thread safety Feb 19, 2018
@zsdonghao
Member

I think this issue has been fixed.
