
gh-120608: Make reversed thread-safe #120609


Closed · wants to merge 4 commits

Conversation

@eendebakpt (Contributor) commented Jun 16, 2024:

The reversed builtin is not thread-safe: in reversed_next, the object index can be read and decremented by multiple threads, leading to incorrect results. The check (index >= 0) and the decrement ro->index-- need to happen atomically.

A reproducing example will be included in the tests later.
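
To sketch the race (simplified from the diff further down; error handling elided), two threads can both pass the index check before either performs the decrement:

/* Simplified reversed_next. Threads A and B can both load
 * index == 5 and both pass the check before either runs
 * ro->index--, so item 5 is returned twice and item 4 is
 * skipped. */
Py_ssize_t index = ro->index;               /* unsynchronized load */
if (index >= 0) {                           /* check ...           */
    PyObject *item = PySequence_GetItem(ro->seq, index);
    if (item != NULL) {
        ro->index--;                        /* ... and decrement are separate steps */
        return item;
    }
}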

@eendebakpt requested a review from ethanfurman as a code owner, June 16, 2024 22:28
@eendebakpt changed the title from "gh-20608: Make reversed thread-safe" to "gh-120608: Make reversed thread-safe", Jun 16, 2024
@@ -98,6 +98,8 @@ extern "C" {
 # define Py_END_CRITICAL_SECTION() \
         _PyCriticalSection_End(&_cs); \
     }
+# define Py_EXIT_CRITICAL_SECTION() \
Member commented:

Maybe it is better to define a local API instead of a global one? For example END_TYPE_LOCK() does that.
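
For context, a file-local pair in that spirit could look roughly like this (a sketch; the names are illustrative and the actual typeobject.c macros differ in detail):

#define BEGIN_REVERSED_LOCK(ro)  Py_BEGIN_CRITICAL_SECTION(ro)
#define END_REVERSED_LOCK()      Py_END_CRITICAL_SECTION()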

Contributor commented:

You can just extract the critical section into a helper function and then wrap the helper with the existing macros.

@erlend-aasland (Contributor) commented Jun 17, 2024:

For this PR, I'd just extract most of reversed_next into a reversed_next_impl and wrap that with the Py_{BEGIN,END}_CRITICAL_SECTION macros. See also #120318.
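
A minimal sketch of that split (the _impl body below is the pre-PR reversed_next logic from the diff further down, with error handling elided; this illustrates the suggestion, not code from this PR):

static PyObject *
reversed_next_impl(reversedobject *ro)
{
    Py_ssize_t index = ro->index;
    if (index >= 0) {
        PyObject *item = PySequence_GetItem(ro->seq, index);
        if (item != NULL) {
            ro->index--;            /* safe: the caller holds the lock */
            return item;
        }
    }
    ro->index = -1;                 /* exhausted */
    return NULL;
}

static PyObject *
reversed_next(reversedobject *ro)
{
    PyObject *item;
    Py_BEGIN_CRITICAL_SECTION(ro);
    item = reversed_next_impl(ro);
    Py_END_CRITICAL_SECTION();
    return item;
}

Early returns inside the _impl are then unproblematic, because the macros wrap only the call site; that is the point of the refactor.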

Member commented:

FWIW we ended up not doing that in #120318. There's this comment though that explains the approach and gives an example.

@eendebakpt (Author) commented:

Thanks for the feedback everyone! I added the Py_EXIT_CRITICAL_SECTION in this PR (and some others) to deal with return and goto statements. I am not in favor of adding a local API, since it will leak details of the locking implementation to modules. Changing the Py_EXIT_CRITICAL_SECTION to a more private _Py_EXIT_CRITICAL_SECTION (or something like that) would be fine with me.

In this case we can indeed refactor to create a reversed_next_impl and put a global lock around it. I am not too worried about performance here, and if it turns out performance is an issue we can change later.

Unless more suggestions come in, I will refactor to reversed_next_impl in the coming days.
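
To illustrate the intent of the proposed macro (a sketch; the macro body is truncated in the diff above, and error_case is a placeholder condition):

Py_BEGIN_CRITICAL_SECTION(ro);
if (error_case) {
    Py_EXIT_CRITICAL_SECTION();   /* release the lock on the early-return path */
    return NULL;
}
/* ... normal path ... */
Py_END_CRITICAL_SECTION();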

@corona10 (Member) commented:

Do you have a benchmark for the single-thread case?
And PTAL at this comment: #120496 (comment)

@@ -431,16 +432,20 @@ reversed_next(reversedobject *ro)
     PyObject *item;
     Py_ssize_t index = ro->index;

+    Py_BEGIN_CRITICAL_SECTION(ro);
     if (index >= 0) {
         item = PySequence_GetItem(ro->seq, index);
         if (item != NULL) {
             ro->index--;
Member commented:

We don't declare a critical section for listiter_next:

cpython/Objects/listobject.c, lines 3890 to 3911 (at commit dacc5ac):

listiter_next(PyObject *self)
{
    _PyListIterObject *it = (_PyListIterObject *)self;
    Py_ssize_t index = FT_ATOMIC_LOAD_SSIZE_RELAXED(it->it_index);
    if (index < 0) {
        return NULL;
    }
    PyObject *item = list_get_item_ref(it->it_seq, index);
    if (item == NULL) {
        // out-of-bounds
        FT_ATOMIC_STORE_SSIZE_RELAXED(it->it_index, -1);
#ifndef Py_GIL_DISABLED
        PyListObject *seq = it->it_seq;
        it->it_seq = NULL;
        Py_DECREF(seq);
#endif
        return NULL;
    }
    FT_ATOMIC_STORE_SSIZE_RELAXED(it->it_index, index + 1);
    return item;
}

Why do we have to declare a critical section here? What about just handling it like the out-of-bounds case? And if ro->index needs to be decremented, why not just use an atomic operation here?

cc @colesbury

@eendebakpt (Author) commented:

The check index >= 0 and the decrement ro->index-- need to happen atomically. We could use _Py_atomic_compare_exchange for this, but it would increase code complexity. If performance is a concern, I can update the PR with this approach. The PySequence_GetItem call is not inside the critical section, but I assumed it to be (or to later be made) thread-safe.
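
A hedged sketch of that alternative, using the free-threading atomics from Include/cpython/pyatomic.h (roughly the direction #120971 later took; this snippet is illustrative, not that PR's code):

Py_ssize_t index = _Py_atomic_load_ssize_relaxed(&ro->index);
do {
    if (index < 0) {
        return NULL;              /* iterator already exhausted */
    }
    /* on failure the CAS refreshes 'index' with the current value */
} while (!_Py_atomic_compare_exchange_ssize(&ro->index, &index, index - 1));
/* this thread now owns 'index'; no other thread can return the same item */
PyObject *item = PySequence_GetItem(ro->seq, index);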

@eendebakpt (Author) commented Jun 23, 2024:

@corona10 Here is a benchmark for the PR using a critical section. Based on comments on issue #120496, I believe that using atomics might be a better option; I will look into that a bit later.

Benchmark script:

import pyperf
runner = pyperf.Runner()

setup = """

from collections import UserList

range10 = range(10)
range1000 = range(1000)
x = list(range(100))
t = tuple(range(100))

userlist100 = UserList(x)

"""

runner.timeit(name="reversed(range10)", stmt="reversed(range10)", setup=setup)
runner.timeit(name="list(reversed(range10))", stmt="list(reversed(range10))", setup=setup)
runner.timeit(name="list(reversed(range1000))", stmt="list(reversed(range1000))", setup=setup)
runner.timeit(name="list(reversed(list100))", stmt="list(reversed(x))", setup=setup)
runner.timeit(name="list(reversed(tuple100))", stmt="list(reversed(t))", setup=setup)
runner.timeit(name="list(reversed(userlist100))", stmt="list(reversed(userlist100))", setup=setup)

Results:

list(reversed(range10)): Mean +- std dev: [ft2] 428 ns +- 55 ns -> [ft_pr] 413 ns +- 14 ns: 1.04x faster
list(reversed(list100)): Mean +- std dev: [ft2] 846 ns +- 25 ns -> [ft_pr] 833 ns +- 7 ns: 1.02x faster
list(reversed(tuple100)): Mean +- std dev: [ft2] 1.01 us +- 0.08 us -> [ft_pr] 2.23 us +- 0.02 us: 2.21x slower
list(reversed(userlist100)): Mean +- std dev: [ft2] 34.0 us +- 1.7 us -> [ft_pr] 33.4 us +- 1.8 us: 1.02x faster

Benchmark hidden because not significant (2): reversed(range10), list(reversed(range1000))

Geometric mean: 1.13x slower

The performance for range and list is not affected. This makes sense, as these types have a custom __reversed__ method, so the generic reversed_next path is never used. For a user-defined iterable supporting the sequence protocol (the UserList), the performance impact is very small.
The main performance degradation is for tuple. Maybe this can be resolved by giving tuple a custom __reversed__, just as list has.
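
For context: listobject.c registers list.__reversed__ in its method table, which is why reversed(list) never reaches the generic reversed_next path. A hypothetical tuple equivalent (tuple___reversed__ does not exist in CPython; this entry is purely illustrative) would be registered the same way:

static PyMethodDef tuple_methods[] = {
    /* hypothetical entry mirroring listobject.c's __reversed__ */
    {"__reversed__", (PyCFunction)tuple___reversed__, METH_NOARGS,
     "Return a reverse iterator over the tuple."},
    {NULL, NULL}    /* sentinel */
};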

@eendebakpt (Author) commented:

@corona10 I created #120971 to make reversed thread-safe using atomics. I will close this one.

@eendebakpt closed this Jun 24, 2024