Improve NodesManager locking #3803
Conversation
Hi, I’m Jit, a friendly security platform designed to help developers build secure applications from day zero with an MVS (Minimal Viable Security) mindset. In case there are security findings, they will be communicated to you as a comment inside the PR. Hope you’ll enjoy using Jit. Questions? Comments? Want to learn more? Get in touch with us.

Hi @praboud, thank you for your contribution! We will review it soon.
@petyaslavova hey, just checking in on when you'll have a chance to review this PR. |
Hi @praboud, my todo list for the next several weeks is quite full, and I'll probably be able to get to your several PRs at the beginning of January. |
Pull request overview
This pull request improves thread safety in the NodesManager class of the Redis cluster client by fixing several race conditions related to concurrent state mutations. The changes address issues where multiple threads could corrupt the cluster state when simultaneously handling redirects, initializing, or updating slot assignments.
Key changes:
- Added proper synchronization using `_lock` (RLock) and `_initialization_lock` to protect shared state
- Replaced `update_moved_exception` with `move_slot` to immediately apply slot moves instead of queuing them
- Added epoch-based deduplication to prevent redundant concurrent initializations
- Enhanced LoadBalancer thread safety with its own lock (illustrated in the sketch below)
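As a rough illustration of that last point, here is a minimal, hypothetical sketch (not the code from this PR) of what making a round-robin load balancer thread-safe with its own lock looks like: the shared position must be read and advanced atomically, otherwise two threads can hand out the same replica or skip one.

```python
import threading

class RoundRobinLoadBalancer:
    """Toy example: the round-robin position is shared mutable state, so it
    is read and advanced atomically under a per-instance lock."""

    def __init__(self):
        self._lock = threading.Lock()
        self._positions = {}  # primary name -> next index to hand out

    def get_server_index(self, primary, list_size):
        with self._lock:
            idx = self._positions.get(primary, 0) % list_size
            self._positions[primary] = idx + 1
            return idx
```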
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| redis/cluster.py | Main synchronization improvements: added locks to NodesManager methods, replaced deferred slot updates with immediate move_slot, added initialization deduplication via _initialization_lock and epoch tracking, made LoadBalancer thread-safe |
| redis/asyncio/cluster.py | Similar async improvements: added _initialize_lock for deduplication, updated to use move_slot instead of _moved_exception pattern |
| tests/test_cluster.py | Added comprehensive concurrency tests for slot moves, initialization deduplication, and concurrent operations |
| tests/test_cluster_transaction.py | Updated tests to use new move_slot API and added connection_kwargs to mock |
| tests/test_asyncio/test_cluster_transaction.py | Updated async tests to use new move_slot API |
| dev_requirements.txt | Bumped pytest-asyncio from >=0.23.0 to >=0.24.0 |
Hi @praboud, this is a great addition to the library—thank you! I’ve reviewed your changes and have a few minor requests. Once those are addressed, I think this PR will significantly improve performance when the cluster needs to be re-initialized. Thanks again for the time and effort you’ve put into this work!
Pull request overview
Copilot reviewed 6 out of 6 changed files in this pull request and generated 7 comments.
Pull Request check-list
Please make sure to review and check all of these items:
NOTE: these things are not required to open a PR and can be done afterwards / while the PR is open.
Description of change
`NodesManager` can mutate state (ie: nodes cache + slots cache + startup nodes + default node) from multiple client threads, through both re-initializing from `CLUSTER SLOTS`, and from following a `MOVED`/`ASK` redirect. Right now, there isn't proper synchronization of state across multiple threads, which can result in the state getting corrupted, or the NodesManager otherwise behaving weirdly:

1. `update_moved_exception` just sets an exception on the `NodesManager`, which we expect to trigger an update to the state the next time we fetch a node from the `NodesManager`. But `_moved_exception` isn't synchronized. Suppose two threads A & B sequence like: A calls `update_moved_exception`, B calls `update_moved_exception`, A calls `get_node_from_slot`, B calls `get_node_from_slot`. A's update to `_moved_exception` gets lost, and when A calls `get_node_from_slot`, it doesn't actually follow the redirect. To avoid this problem, I've changed the slot move logic to immediately apply the update to the slot state, rather than queueing it up for later.
2. `_get_or_create_cluster_node` can mutate the `role` of a `ClusterNode`, but the node is referenced from the `slots_cache`. Because we expect `slots_cache[node][0]` to always be the primary, this can cause strange behavior for readers of `slots_cache` between the time `_get_or_create_cluster_node` is called, and when `initialize` resets `slots_cache` at the end of the update.
3. `initialize` & `_update_moved_slots` both mutate `slots_cache`, and aren't synchronized with each other. This can cause the slots cache to get into a weird state, where eg: nodes are deleted from the slots cache, or duplicated.
4. `initialize` allows multiple callers to initialize concurrently, which is both extra load on the cluster, and can cause strange behavior in corner cases.

To fix all of these:
- Take `self._lock` around all places where any mutation happens in `NodesManager`.
- Replace `update_moved_exception` & `_update_moved_slots` with just `move_slot`, to avoid racing multiple slot updates.
- Add `_initialize_lock` to serialize / deduplicate calls to `initialize`.

I've added some tests to try to exercise the situations above, and verified that they fail before / pass after this change.
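To make the first race and its fix concrete, here is a small, self-contained sketch (hypothetical names, not the actual diff): rather than stashing the redirect in an unsynchronized field and applying it on the next lookup, the slot move is applied immediately while holding the manager's lock, so concurrent MOVED redirects from threads A and B cannot overwrite each other.

```python
import threading

class TinySlotTable:
    """Toy model of applying a MOVED redirect immediately under a lock,
    instead of stashing it in an unsynchronized field for a later lookup."""

    def __init__(self):
        self._lock = threading.RLock()
        self._slot_to_node = {}  # slot -> "host:port" of the slot's primary

    def move_slot(self, slot, target_node):
        # Mutate the slot table right away; the lock serializes concurrent
        # redirects, so thread A's move cannot be silently lost before it is
        # applied because thread B reported a different redirect at the same time.
        with self._lock:
            self._slot_to_node[slot] = target_node

    def get_node_from_slot(self, slot):
        with self._lock:
            return self._slot_to_node.get(slot)
```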
This PR mainly focuses on the sync Redis client, but I've tried to update the asyncio one as well. It doesn't (as far as I can tell) suffer from most of these issues, other than issue 4, so the changes there are a bit lighter. This PR is pretty hefty already - I'm happy to split the asyncio changes out into a separate PR if that's easier for review.
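For the asyncio side, a rough sketch of addressing issue 4 (deduplicating concurrent `initialize` calls) might look like the following; the class, the epoch counter, and the helper names are illustrative only, not the code added in this PR.

```python
import asyncio

class AsyncTopologyCache:
    """Illustrative only: concurrent initialize() calls are deduplicated by a
    lock plus an epoch counter, so tasks that queued up behind an in-flight
    refresh skip the redundant CLUSTER SLOTS round trip."""

    def __init__(self):
        self._initialize_lock = asyncio.Lock()
        self._epoch = 0
        self._slots = {}

    async def initialize(self, seen_epoch):
        async with self._initialize_lock:
            if self._epoch != seen_epoch:
                return  # another task already refreshed the topology
            self._slots = await self._fetch_cluster_slots()
            self._epoch += 1

    async def _fetch_cluster_slots(self):
        await asyncio.sleep(0)  # stand-in for issuing CLUSTER SLOTS
        return {}
```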
Also, in general, this PR is a lot easier to review with “ignore whitespace” turned on in the diff options, because a lot of these changes mean indenting big blocks of code inside a `with self._lock` block.