Parse remote config in a cluster environment #4665

Closed
ranhsd opened this issue Mar 20, 2018 · 2 comments

Comments

@ranhsd
Contributor

ranhsd commented Mar 20, 2018

Hi, I am running my parse-server on GCP Kubernetes Engine. I currently have 3 nodes with parse-server installed on each one of them, and I am experiencing some issues with Parse remote config in this setup.
I use remote config a lot in my app for various things, like feature toggles, granting special access to specific users, and more.

Currently my app only accepts users with specific email domains, so in order to accept a specific email address that is not part of the allowed domains, I simply add it to the "exceptionalEmails" array in my parse-server remote config. I also created a custom cloud code function that is responsible for refreshing my remote config on demand (I just execute it manually whenever I want to refresh it). I decided to go with this approach to avoid a lot of round trips fetching the current remote config from the server: I change the config manually (in parse-dashboard) and then call this function, which refreshes my remote config:

Parse.Cloud.define("RefreshRemoteConfig", async (request, response) => {
    try {
        // Fetch the latest config from the server; this also updates
        // the locally cached copy returned by Parse.Config.current().
        await Parse.Config.get();
        response.success("remote config refreshed successfully!");
    } catch (error) {
        response.error(error);
    }
});
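
For context, this is roughly how I trigger it from a small script (a sketch; it assumes the JS SDK has already been initialized against my server and the function is not restricted to the master key):

// Minimal sketch of triggering the refresh function from a Node script.
// Assumes Parse.initialize() and Parse.serverURL have already been set up.
const Parse = require('parse/node');

async function refresh() {
    // Calls the cloud function defined above; it resolves with the
    // string passed to response.success().
    const result = await Parse.Cloud.run("RefreshRemoteConfig");
    console.log(result);
}

refresh().catch(console.error);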

and then I can access the remote config this way:

let remoteConfig = Parse.Config.current();

without fetching it again (because it's cached).
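
To make the use case concrete, this is roughly how the cached values are read afterwards (a sketch; "exceptionalEmails" is the array parameter I keep in the dashboard, while "allowedDomains" is just a hypothetical name for the domain whitelist):

// Sketch: checking a signup email against the cached remote config.
// No network round trip happens here; Parse.Config.current() reads from memory.
function isEmailAllowed(email) {
    const config = Parse.Config.current();
    const exceptionalEmails = config.get("exceptionalEmails") || [];
    const allowedDomains = config.get("allowedDomains") || []; // hypothetical parameter name

    const domain = email.split("@")[1];
    return allowedDomains.includes(domain) || exceptionalEmails.includes(email);
}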

The problem is that for some reason the remote config is not being refreshed and it still contains the old values from before. The only way I can currently refresh my config is by restarting the pods in my cluster, which I really don't like.

Is anybody here aware of this? Maybe I am doing something wrong? @flovilmart, maybe you will know the reason? I'm reaching out to you because I know that you have the same setup.

Thanks in advance.

@ranhsd
Contributor Author

ranhsd commented Apr 6, 2018

Does anybody know the answer to the question above, or should I close it?

@stale

stale bot commented Sep 18, 2018

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
