BUG: ReactPartialRenderer / New Context pollutes mutable global state #13874


Closed
wants to merge 1 commit into from

Conversation

leebyron
Contributor

The new context API stores the provided values on the shared context instance. When used in a synchronous context, this is not an issue. However, when used in a concurrent context, a "push provider" from one React render can affect an unrelated concurrent React render.

I've encountered this bug in production when using renderToNodeStream, which asks ReactPartialRenderer for bytes up to a high-water mark before yielding. If two Node streams are created and read from in parallel, the state of one can pollute the other.

I wrote a failing test to illustrate the conditions under which this happens.

I'm also concerned that the experimental concurrent/async React rendering on the client could suffer from the same issue.
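The failure mode can be sketched in a few lines (illustrative only, not React's actual internals): a shared mutable `_currentValue` on the context object, with a generator's `yield` standing in for the renderer pausing at the high-water mark.

```javascript
// Hypothetical sketch of the bug: the context's current value lives on
// the shared context instance, so two interleaved renders clobber each other.
function createContext(defaultValue) {
  return { _currentValue: defaultValue };
}

// Each "partial render" pushes its provider value onto the shared context,
// yields (as renderToNodeStream does at the high-water mark), then reads
// the context again when it resumes.
function* partialRender(context, providedValue, tag) {
  context._currentValue = providedValue; // push provider: mutates shared state
  yield; // renderer pauses here
  return `<${tag}>${context._currentValue}</${tag}>`; // may read another render's value
}

const Name = createContext('default');
const amy = partialRender(Name, 'Amy', 'header');
const bob = partialRender(Name, 'Bob', 'header');

amy.next(); // Amy's render pushes 'Amy', then yields
bob.next(); // Bob's render pushes 'Bob', overwriting Amy's value

const amyOutput = amy.next().value; // '<header>Bob</header>' — polluted
const bobOutput = bob.next().value; // '<header>Bob</header>'
console.log(amyOutput, bobOutput);
```

This mirrors the failing test below: Amy's render resumes after Bob's render has mutated the shared context, so Amy's output contains Bob's value.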

streamAmy._read(20);
streamBob._read(20);

expect(streamAmy.read()).toBe('<header>Amy</header><footer>Amy</footer>');
Contributor Author


This test fails with:

<header>Amy</header><footer>Bob</footer>

@sizebot

sizebot commented Oct 17, 2018

Details of bundled changes.

Comparing: 4f0bd45...8bf2ed6

scheduler

| File | Filesize Diff | Gzip Diff | Prev Size | Current Size | Prev Gzip | Current Gzip | ENV |
| --- | --- | --- | --- | --- | --- | --- | --- |
| scheduler.development.js | n/a | n/a | 0 B | 19.17 KB | 0 B | 5.74 KB | UMD_DEV |
| scheduler.production.min.js | n/a | n/a | 0 B | 3.16 KB | 0 B | 1.53 KB | UMD_PROD |

Generated by 🚫 dangerJS

@sebmarkbage
Collaborator

This is a known issue between renderers on the client. Within a renderer, we only operate on one root at a time. We have two separate fields on each context so that "primary" and "secondary" renderers can work concurrently (e.g. ReactDOM and ReactART, or ReactFabric and ReactNative).

This was an oversight on SSR. I noticed it last week and was surprised nobody had reported it yet. :)

I think it's common to run a new VM context per request.

We have a new rewrite of the streaming renderer which will have more advanced features like priority queues and streaming content through script tags.

I'm curious, for your use case, are you dealing with two separate requests or are you streaming to two separate parts of a page through the same connection? The latter might be problematic with the new design.

@sebmarkbage
Collaborator

sebmarkbage commented Oct 17, 2018

I've thought about this in terms of parallel rendering (threads) too. I think we'll have to end up with a thread-local storage model, where each context can store n current values and the "thread id" tells you which field to read in the current renderer.

It's hard to tell how many values to allocate for each context, though, since we'd ideally want to avoid bounds checks in the read path. So we'd want a pretty good guess at startup to avoid deopting later. That's why we only have two right now on the client (it's uncommon to have three different renderers).
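The thread-local model described above might look like this (a hypothetical sketch with made-up names, not React source): each context preallocates one value slot per potential concurrent renderer, and a renderer only ever touches its own slot.

```javascript
// Sketch of a "thread local" context model: N slots per context,
// indexed by a thread id assigned to each concurrent renderer.
const MAX_THREADS = 2; // guessed at startup; fixed size so reads need no bounds check

function createContext(defaultValue) {
  // One slot per potential concurrent renderer, all starting at the default.
  return { _currentValues: new Array(MAX_THREADS).fill(defaultValue) };
}

function pushProvider(context, threadID, value) {
  context._currentValues[threadID] = value; // only this thread's slot changes
}

function readContext(context, threadID) {
  return context._currentValues[threadID];
}

const Name = createContext('default');
pushProvider(Name, 0, 'Amy'); // renderer on "thread" 0
pushProvider(Name, 1, 'Bob'); // concurrent renderer on "thread" 1
const amySees = readContext(Name, 0);
const bobSees = readContext(Name, 1);
console.log(amySees, bobSees); // each renderer reads its own slot
```

Because each renderer writes only its own slot, the interleaved-render pollution described in this issue cannot occur; the trade-off is picking MAX_THREADS up front.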

leebyron added a commit to leebyron/react that referenced this pull request Oct 17, 2018
Fixes facebook#13874

This is just a strawman approach for a fix for facebook#13874 that avoids mutating global context by giving each context instance a unique key it can use to store its value in a context object that's scoped to the current render.
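The commit's strawman might be sketched roughly as follows (hypothetical names; the actual diff is in the linked commit): each context gets a unique key, and provided values live in a plain object created per render rather than on the context instance.

```javascript
// Sketch of the strawman fix: values are stored in a render-scoped
// object, keyed by a unique id stamped onto each context at creation.
let nextContextKey = 0;

function createContext(defaultValue) {
  return { _key: nextContextKey++, _defaultValue: defaultValue };
}

function startRender() {
  return Object.create(null); // fresh value store, scoped to this render
}

function pushProvider(renderState, context, value) {
  renderState[context._key] = value; // no shared state is mutated
}

function readContext(renderState, context) {
  return context._key in renderState
    ? renderState[context._key]
    : context._defaultValue;
}

const Name = createContext('default');
const renderAmy = startRender();
const renderBob = startRender();
pushProvider(renderAmy, Name, 'Amy');
pushProvider(renderBob, Name, 'Bob');
const amyValue = readContext(renderAmy, Name); // each render reads only its own store
const bobValue = readContext(renderBob, Name);
```

Since each render carries its own store, two concurrent renderToNodeStream calls can no longer observe each other's provider pushes.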
@leebyron
Contributor Author

I'm curious, for your use case, are you dealing with two separate requests or are you streaming to two separate parts of a page through the same connection? The latter might be problematic with the new design.

Separate requests running in the same process. One process per request is too costly. This could expose one user's data in a different user's request, which is definitely not a great possibility! I can see how this bug could manifest for pagelet-style streams as well, but those are less likely to be concurrent.

I have one straw-man approach to fix this in #13875, but I also like your idea of using thread-local storage as a conceptual framing.

leebyron added a commit to leebyron/react that referenced this pull request Oct 17, 2018
…storage")

Fixes facebook#13874

Alternative to facebook#13875

This straw-man approach also avoids mutating global context by giving each ReactPartialRenderer its own mutable state and using context instances as keys into that mutable state.
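This alternative might be sketched like so (an illustrative approximation, not the PR's actual code): the renderer owns a Map from context instance to current value, so pushing a provider mutates only that renderer's own state.

```javascript
// Sketch of the per-renderer alternative: context instances themselves
// serve as Map keys into state owned by each partial renderer.
class PartialRenderer {
  constructor() {
    this.contextValues = new Map(); // context instance -> provided value
  }
  pushProvider(context, value) {
    this.contextValues.set(context, value); // renderer-local, not global
  }
  readContext(context) {
    return this.contextValues.has(context)
      ? this.contextValues.get(context)
      : context._defaultValue;
  }
}

const Name = { _defaultValue: 'default' }; // stands in for a context instance
const rendererAmy = new PartialRenderer();
const rendererBob = new PartialRenderer();
rendererAmy.pushProvider(Name, 'Amy');
rendererBob.pushProvider(Name, 'Bob');
const amySees = rendererAmy.readContext(Name); // each renderer sees only its own pushes
const bobSees = rendererBob.readContext(Name);
```

Using the context object itself as the key avoids stamping ids onto contexts, at the cost of a Map lookup on the read path.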
@leebyron
Contributor Author

#13877 is another approach to a fix that's more isolated to ReactPartialRenderer and is a smaller change, using thread-local storage as a conceptual starting point.

@leebyron
Contributor Author

We have a new rewrite of the streaming renderer which will have more advanced features like priority queues and streaming content through script tags.

Can't wait to see that in action. Is there a PR to follow along with?

4 participants