Support watch & unwatch properly #295

Conversation

KJTsanaktsidis
Contributor

This is a PR which adds the concept of an "implicit transaction". The RedisClient::Cluster object will automatically begin a transaction if a WATCH command is issued, and require the user to either call UNWATCH or MULTI ... EXEC/DISCARD to close it.

Between a watch and an exec/discard/unwatch, the client is locked to a particular node/connection and will reject attempts to read or write keys on other nodes, since doing so would break transaction consistency.

The purpose of this work is to make RedisClient::Cluster behave enough like RedisClient that watch and multi from Redis::Commands::Transactions in redis-rb work with the cluster client. I outlined the current issue with that in detail in #294.
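To make the intended flow concrete, here is a rough sketch at the call_v level. The client setup and exact method names are assumptions based on redis-cluster-client's public API, not code from this PR:

```
# Hedged sketch of the implicit-transaction flow described above.
cluster = RedisClient.cluster(nodes: %w[redis://127.0.0.1:7000]).new_client

cluster.call_v(%w[WATCH key])              # opens the "implicit transaction", pinning to key's node
current = cluster.call_v(%w[GET key]).to_i # allowed: same node as the watched key
cluster.multi do |tx|                      # MULTI ... EXEC closes the implicit transaction
  tx.call_v(['SET', 'key', (current + 1).to_s])
end
# cluster.call_v(%w[UNWATCH]) would instead abandon it without queuing commands.
```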

With these changes in redis-cluster-client, these test-cases I wrote for redis-rb pass: redis/redis-rb@master...zendesk:redis-rb:ktsanaktsidis/cluster_txn_tests

@supercaracal , let's keep discussion of this issue in the other PR just so we don't get confused I think :)

KJ Tsanaktsidis and others added 3 commits November 15, 2023 07:55
You might want to do things in the multi block such as read a key that
you passed to `watch:`. Currently, if you do so, the user block actually
runs _twice_, so the key is read twice, and the second copy of your
transaction is the one that actually gets committed.

For example, this transaction adds _two_ to "key", not one:

```
$redis.multi(watch: ["key"]) do |m|
  old_value = $redis.call_v(["GET", "key"]).to_i
  m.call_v(["SET", "key", old_value + 1])
end
```

This patch fixes the issue by batching up the transaction's calls in our
_own_ buffer, and only invoking @node.multi after the user block is
done.
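For illustration, here is a minimal sketch of that buffering idea. It is not the PR's actual implementation; the class and method names are made up:

```
# Commands issued inside the user's block are only recorded; they are replayed
# into the node's MULTI once the block has finished, so the block's side
# effects are not executed a second time.
class BufferedTransaction
  def initialize(node) # node is assumed to be a RedisClient-like object
    @node = node
    @buffer = []
  end

  def call_v(command)
    @buffer << command
    nil # queued commands have no reply until EXEC
  end

  def execute
    yield self                          # the user block runs exactly once
    @node.multi do |tx|                 # only now is MULTI sent to the node
      @buffer.each { |cmd| tx.call_v(cmd) }
    end
  end
end
```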
This is almost certainly what you want, because otherwise the watching
is totally ineffective (you might have already read a stale value from
a replica that was updated before you WATCH'd the primary).
We need this in order to get the list of keys from a WATCH command. It's
probably a little overkill, but we may as well implement the whole thing.
# IMPORTANT: this determines the last key position INCLUSIVE of the last key -
# i.e. command[determine_last_key_position(command)] is a key.
# This is in line with what Redis returns from COMMANDS.
def determine_last_key_position(command, keys_start) # rubocop:disable Metrics/AbcSize

Contributor Author

I didn't end up needing this method, but it was tricky enough to write, and a natural enough analogue of determine_first_key_position, that I'm almost tempted to suggest it be committed in case it's needed in the future.
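For the WATCH case itself, key extraction is simple, since every argument after the command name is a key; something like this illustrative helper (hypothetical, not part of the PR) would do:

```
# Illustrative only: for WATCH, the first/last key positions bound the
# entire argument list after the command name.
def watched_keys(command)
  first = 1                  # keys start immediately after "WATCH"
  last  = command.size - 1   # ...and run to the end of the argument list
  command[first..last]
end

watched_keys(%w[WATCH key1 key2]) #=> ["key1", "key2"]
```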

KJTsanaktsidis and others added 2 commits November 16, 2023 19:07
When using Redis::Cluster#watch from redis-rb, the user block is not
passed down to RedisClient::Cluster. Instead, redis-rb manages the
WATCH/UNWATCH process itself and issues call_v(['WATCH', ...]) and
call_v(['UNWATCH']).

To support this, we need to detect when WATCH is called and begin an
"implicit transaction", within which we pin to a slot (and actually pin
to a connection as well, in the case of the Pooled backend). Until
UNWATCH/EXEC is issued, the connection cannot be used to access keys
that do not live on the transaction's node.

That's implemented by having the client keep a @transaction member, and
set it when a WATCH command is issued.
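Roughly, the dispatch looks like the sketch below. It is illustrative only; build_transaction_for and route_and_call are hypothetical helpers, not the PR's actual methods:

```
# Sketch of detecting WATCH in call_v and opening an implicit transaction.
def call_v(command)
  case command.first.to_s.upcase
  when 'WATCH'
    # Open the implicit transaction, pinned to the node owning the watched keys.
    @transaction ||= build_transaction_for(command[1..])
    @transaction.call_v(command)
  when 'UNWATCH', 'EXEC', 'DISCARD'
    reply = @transaction&.call_v(command)
    @transaction = nil                 # the implicit transaction is now closed
    reply
  else
    if @transaction
      @transaction.call_v(command)     # must stay on the pinned node/connection
    else
      route_and_call(command)          # normal slot-based routing
    end
  end
end
```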
This would be relatively simple to implement, but requires taking a
dependency on concurrent-ruby, which might not be desirable.