# Fix `xpending_range` to return all 4 required fields per Redis spec (#427)
Conversation
Created upstream PR cunla/fakeredis-py#427 to fix xpending_range; we're using a fork temporarily until the PR is merged and released. The fix is required because fakeredis was only returning 2 fields per pending message (message_id, consumer) when the Redis spec requires 4 (message_id, consumer, time_since_delivered, times_delivered). This was causing a KeyError in docket's snapshot() method, which accesses these fields.
Related to chrisguidry/docket#165
The fakeredis fork with xpending_range fix is only needed for testing with the memory backend, not for production use of docket. Users who install docket don't need fakeredis as a standard dependency - it's only used when running tests or development workflows with memory:// URLs. Once cunla/fakeredis-py#427 is merged upstream, we can switch back to the regular fakeredis package in dev dependencies.
Integrates fakeredis as a dev dependency to enable fast, in-memory testing without requiring a real Redis server. Use `memory://` URLs to enable the memory backend.

## Changes

**Dev dependency** (`pyproject.toml`):
- Added `fakeredis[lua]` as a dev dependency (not a standard dependency)
- Uses the forked version with the xpending_range fix until [upstream PR #427](cunla/fakeredis-py#427) is merged
- Added `allow-direct-references = true` to the hatch metadata config

**Backend detection** (`src/docket/docket.py`):
- Check for the `memory://` URL scheme to enable the memory backend
- Create a class-level `_memory_servers` dict keyed by URL for isolation
- Multiple in-memory dockets supported via different URLs (e.g., `memory://test1`, `memory://test2`)
- Provide a helpful error message when fakeredis is not installed

**Worker workaround** (`src/docket/worker.py`):
- Use non-blocking `xreadgroup` (`block=0`) with a manual sleep for the memory backend
- Detect the memory backend via `self.docket.url.startswith("memory://")`
- Necessary because fakeredis's async blocking operations don't properly integrate with asyncio's event loop

**Tests** (`tests/test_memory_backend.py`):
- Test memory backend usage with `memory://` URLs
- Test multiple isolated in-memory dockets
- Test server reuse for the same URL
- Test missing-dependency error handling

**CI integration** (`.github/workflows/ci.yml`):
- Added a full test matrix leg for the memory backend (`REDIS_VERSION=memory`)
- Runs the complete test suite with the memory backend for confidence

**Fixtures** (`tests/conftest.py`):
- Support `REDIS_VERSION=memory` to skip Docker container setup
- Generate unique `memory://` URLs per test worker for isolation

## Usage

```python
# Use the memory:// URL scheme
async with Docket(name="test-docket", url="memory://test") as docket:
    # Use docket normally - backed by in-memory fakeredis
    ...

# Multiple isolated dockets
async with Docket(name="docket-1", url="memory://one") as docket1:
    async with Docket(name="docket-2", url="memory://two") as docket2:
        # Each has separate in-memory data
        ...
```

Or from the command line:

```bash
# No special flags needed - a memory:// URL is sufficient
python script.py  # if the script uses a memory:// URL
```

## Technical Details

### Non-blocking workaround

The non-blocking approach (`block=0` with a manual `asyncio.sleep()`) is necessary because fakeredis's async blocking operations don't properly yield control to the asyncio event loop, preventing concurrent tasks (like the scheduler loop) from running.

### fakeredis xpending_range fix

We use a forked version of fakeredis that fixes the `xpending_range` command to return all 4 required fields (message_id, consumer, time_since_delivered, times_delivered) instead of just 2. This matches real Redis behavior and redis-py's expectations. The fix is in our fork at `zzstoatzz/fakeredis-py@fix-xpending-range-fields` and has been submitted upstream in [PR #427](cunla/fakeredis-py#427).

### URL-based isolation

Different `memory://` URLs create separate FakeServer instances, allowing multiple independent in-memory dockets in the same process. The implementation uses a class-level dictionary keyed by the full URL.

## Test Coverage

- 295 tests passing on Python 3.12 and 3.13
- 100% coverage on all production code with the memory backend
- All pre-commit hooks passing
- Full CI matrix leg for the memory backend

Closes #162

---------

Co-authored-by: Claude <[email protected]>
Co-authored-by: Chris Guidry <[email protected]>
Redis's `xpending_range` returns 4 fields per pending message:

- message_id: bytes
- consumer: bytes
- time_since_delivered: int (milliseconds since last delivery)
- times_delivered: int (number of times the message was delivered)

fakeredis was only returning 2 fields (message_id and consumer).

Changes:

- Updated the pel dict type from `Tuple[bytes, int]` to `Tuple[bytes, int, int]` to track (consumer_name, time_read, times_delivered)
- Initialize `times_delivered=1` when messages are first read
- Increment `times_delivered` when messages are claimed
- Return all 4 fields in the `pending()` method
- Update `_calc_consumer_last_time()` to handle the 3-tuple
…ng_range test

The test_xpending_range test was only validating the message_id and consumer fields, which is why the bug wasn't caught. It now validates all 4 fields that Redis's `xpending_range` is specified to return: message_id, consumer, time_since_delivered, and times_delivered. This ensures the fix properly implements the Redis specification and matches redis-py's parse_xpending_range expectations.
… disconnect

When `FakeConnection.disconnect()` is called, we need to set `_reader` and `_writer` to None in addition to `_sock`. This prevents redis-py's `AbstractConnection.__del__` from raising a ResourceWarning about unclosed connections.

The warning occurs because `AbstractConnection.__del__` checks whether `self._writer` exists and warns if the connection wasn't properly closed. By explicitly setting these to None during disconnect, we signal that the connection has been properly cleaned up. This fix is specifically needed for Python 3.13, which has stricter resource warning handling.
…disconnect()

The redis-py `AbstractConnection.__del__` checks for the `_writer` attribute and raises a ResourceWarning if it's still set. We need to clear `_sock`, `_reader`, and `_writer` BEFORE calling `super().disconnect()` to prevent this warning on Python 3.13.
When FakeConnection instances are garbage collected without disconnect() being called, Python 3.13's stricter resource warning handling detects the unclosed _writer attribute. This __del__ method ensures _writer is always cleared during garbage collection, preventing ResourceWarning even if disconnect() wasn't called.
Changed PEL (Pending Entries List) storage from plain tuple to PelEntry namedtuple with fields: consumer_name, time_read, times_delivered. This improves code readability by using named attributes instead of indices when accessing PEL data throughout the codebase. Addresses review feedback from maintainer.
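A hedged sketch of the `PelEntry` bookkeeping described above. Field names come from the commit messages; the helper functions (`first_read`, `claim`, `pending_row`) are illustrative names modeling the behavior attributed to `group_read()`, `claim()`, and `pending()`, not fakeredis's actual code.

```python
from typing import NamedTuple


class PelEntry(NamedTuple):
    consumer_name: bytes
    time_read: int        # ms timestamp of the last delivery
    times_delivered: int  # how many delivery attempts so far


def first_read(consumer: bytes, now_ms: int) -> PelEntry:
    # group_read(): a freshly delivered message starts at times_delivered=1
    return PelEntry(consumer, now_ms, 1)


def claim(entry: PelEntry, new_consumer: bytes, now_ms: int) -> PelEntry:
    # claim(): reassign ownership, refresh the delivery time, bump the counter
    return PelEntry(new_consumer, now_ms, entry.times_delivered + 1)


def pending_row(msg_id: bytes, entry: PelEntry, now_ms: int) -> tuple:
    # pending(): emit all 4 fields, deriving time_since_delivered from time_read
    return (msg_id, entry.consumer_name,
            now_ms - entry.time_read, entry.times_delivered)
```

Named fields replace positional indexing into the old 2-tuple, which is the readability improvement the review feedback asked for.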
Force-pushed from 0e62978 to 6085591
Can you run
:jinx: was just about to do that 🙂
@cunla are these failures flakes or in my purview? happy to address if there's something to do |
Not you.. I will merge it this weekend
alright, let me know if any updates are needed! |
thanks @cunla - will be on the lookout for a release :) |
Nate's PR fixing xpending_range has been merged into the main fakeredis repo, so we can now switch from his fork to the official repository.

## Changes

- Moved fakeredis from dev dependencies to main dependencies (it's used for the memory backend feature, not just testing)
- Updated from Nate's fork to the main repo at commit ad50a0d
- This commit includes the xpending_range fix that returns all 4 required fields (message_id, consumer, time_since_delivered, times_delivered)
- Added a comment noting we're using a specific commit until a version > 2.32.0 is released (presumably 2.33.0)

## Testing

Verified with the full test suite using the memory backend - all 272 tests pass, including the 4 memory-backend-specific tests.

Related: cunla/fakeredis-py#427

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <[email protected]>
The previous change moved fakeredis to main dependencies, but PyPI doesn't allow git dependencies in published packages. We need to wait for an official release (version > 2.32.0, presumably 2.33.0) before we can use it as a main dependency.

## Changes

- Moved fakeredis back to dev dependencies
- Still using the main repo (not Nate's fork) at the commit with the fix
- Updated the comment to explain we're waiting for a PyPI release
- Once released, we can use `fakeredis[lua]>=2.33.0` as a main dependency

## Why This Matters

PyPI requires all dependencies to be proper versioned packages, not git references. While the fix we need is merged into the main repo, it hasn't been released to PyPI yet. Until then, we'll keep fakeredis as a dev dependency for testing purposes only.

Related: cunla/fakeredis-py#427

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <[email protected]>
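As a rough illustration of the two states described above, the relevant `pyproject.toml` entries might look like this. The git pin uses a PEP 508 direct reference; the exact table (`[dependency-groups]`, a tool-specific dev table, etc.) depends on the project's build tooling, so treat the layout as an assumption.

```toml
# Dev-only for now: git references can't ship in a published PyPI package.
[dependency-groups]
dev = [
    "fakeredis[lua] @ git+https://github.com/cunla/fakeredis-py@ad50a0d",
]

# Once the fix is released, it can become a normal main dependency instead:
# [project]
# dependencies = ["fakeredis[lua]>=2.33.0"]
```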
Thanks for accepting this one, @cunla! I'm a big fan of your work here, and this is pretty crucial for my project chrisguidry/docket to cut its next release. @zzstoatzz and a couple more of us have been cooking on it, and the only thing left is that we need a new PyPI release of
…186)

Version 2.32.1 was released to PyPI on November 6, 2025, including the xpending_range fix that returns all 4 required fields (message_id, consumer, time_since_delivered, times_delivered). This allows us to:

- Switch from the git dependency to a proper PyPI version
- Move fakeredis from dev to main dependencies (it's used for the memory:// backend feature, not just testing)
- Remove the temporary workaround and associated comments

The memory backend is now a production-ready feature with clean dependency management.

## Changes

- Added `fakeredis[lua]>=2.32.1` to main dependencies
- Removed the git reference from dev dependencies
- Cleaned up workaround comments

## Testing

- All 4 memory backend tests pass with the PyPI version
- All 398 tests pass with 100% coverage
- All prek hooks pass

Related: cunla/fakeredis-py#427

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <[email protected]>
## Summary

Fixes `xpending_range` to return all 4 fields per pending message as specified in the Redis documentation, matching the behavior expected by redis-py's parser.

## Problem

Currently, `xpending_range` only returns 2 fields per pending message (message_id and consumer). However, according to the Redis XPENDING documentation, it should return 4 fields (message_id, consumer, time_since_delivered, times_delivered).

The redis-py library's `parse_xpending_range` function expects these 4 fields and zips them with the keys `("message_id", "consumer", "time_since_delivered", "times_delivered")`.

## Changes

Updated the `StreamGroup.pel` dictionary to use a `PelEntry` namedtuple with fields:

- `consumer_name`: the consumer owning the message
- `time_read`: timestamp when the message was delivered
- `times_delivered`: counter for delivery attempts

Modified the following methods:

- `group_read()`: initialize `times_delivered=1` when messages are first read
- `claim()`: increment `times_delivered` when messages are claimed
- `pending()`: return all 4 fields, including `time_since_delivered` and `times_delivered`
- `_calc_consumer_last_time()`: use namedtuple field access

## Testing

Tested against real Redis to confirm the expected format; with this fix, fakeredis now returns the same format.
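For reference, the per-message shape that redis-py's `parse_xpending_range` produces looks like the dict below. The keys are the four named in this PR; the values are purely illustrative.

```python
# Keys come from redis-py's parse_xpending_range; values are made up.
entry = {
    "message_id": b"1526569498055-0",
    "consumer": b"consumer-1",
    "time_since_delivered": 12345,  # ms since the last delivery
    "times_delivered": 2,           # delivery attempts so far
}

# The bug: fakeredis previously produced only the first two keys,
# so code indexing the other two raised KeyError.
required = {"message_id", "consumer", "time_since_delivered", "times_delivered"}
assert set(entry) == required
```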
## Python 3.13+ compatibility fix

Additional change: a `__del__` method for `FakeConnection`.

Added a `__del__` method to `FakeConnection` to prevent `ResourceWarning` on Python 3.13+.

**Background:** Python 3.13 introduced stricter resource warning handling. When `FakeConnection` objects are garbage collected without `disconnect()` being called first, the inherited `AbstractConnection.__del__` method from redis-py detects the unclosed `_writer` attribute and raises a `ResourceWarning`.

**Solution:** The `__del__` method clears the `_writer`, `_reader`, and `_sock` attributes during garbage collection, ensuring no warnings are raised even if proper cleanup didn't occur.

This is a defensive fix that:

- `AbstractConnection.__del__`
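A minimal sketch of the cleanup pattern described above. This is an illustrative stand-in class, not fakeredis's actual `FakeConnection` (which inherits from redis-py's connection classes); it just shows the attribute-clearing behavior in both `disconnect()` and `__del__`.

```python
class FakeConnection:
    """Illustrative stand-in showing the attribute-clearing pattern."""

    def __init__(self) -> None:
        # In the real class these hold socket/stream objects.
        self._sock = object()
        self._reader = object()
        self._writer = object()

    def disconnect(self) -> None:
        # Clear all three references so an inherited __del__ that checks
        # _writer finds nothing left to warn about.
        self._sock = None
        self._reader = None
        self._writer = None

    def __del__(self) -> None:
        # Defensive: even if disconnect() was never called, clear the
        # attributes during garbage collection so no ResourceWarning fires.
        self._writer = None
        self._reader = None
        self._sock = None
```

In the real fix the clearing in `disconnect()` happens before delegating to the parent class, since redis-py's `AbstractConnection.__del__` is what performs the `_writer` check.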