
GH-98363: Have batched() return tuples #100118


Merged: 6 commits merged into python:main on Dec 8, 2022

Conversation

@rhettinger (Contributor) commented on Dec 8, 2022

On python-ideas, I got some pushback on the decision to return lists. After rereading the original thread and looking at the use cases with fresh eyes, it looks like tuples are a marginally better choice. Using a tuple better matches tools like enumerate, zip, zip_longest, pairwise, and the combinatoric iterators. Tuples are better suited for cases where hashing is desired, and they will be a better fit if we ever add zip_longest-style fillvalue support.

The notion that people might want to mutate the result proved to be dubious. A quick code search for more-itertools.chunked() turned up no examples. If the need did arise, a list comprehension would likely be better than being tempted to mutate in place.

Mostly this is a usability/consistency decision, but there are slight performance benefits as well. Tuples store their data inside the tuple object itself rather than in a separate data array, so they require only one memory allocation instead of two. Tuples use less memory than lists (two fewer fields and no losses due to data-array alignment). Lookups are slightly faster because lists have one additional indirection step. Also, tuples have better cache performance: their unibody design lets them fit into a single cache line rather than the two needed for lists.
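
A quick illustration of that behavior, assuming the itertools.batched() shipped in 3.12:

from itertools import batched, pairwise

# batched() yields tuples, matching the other itertools:
print(list(batched('ABCDEFG', 3)))  # [('A', 'B', 'C'), ('D', 'E', 'F'), ('G',)]
print(list(pairwise('ABCD')))       # [('A', 'B'), ('B', 'C'), ('C', 'D')]

# Tuples are hashable, so batches can go straight into a set or dict;
# with lists this would raise TypeError: unhashable type.
unique_batches = set(batched(range(10), 4))
print(sorted(unique_batches))       # [(0, 1, 2, 3), (4, 5, 6, 7), (8, 9)]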

@rhettinger added the "skip news" and "3.12 only security fixes" labels on Dec 8, 2022
@rhettinger self-assigned this on Dec 8, 2022

@sweeneyde (Member) commented

The Py_SET_SIZE(result, i) is more delicate for tuples, since they use Py_SIZE() to determine which freelist they get put onto.

Checking i == 0 is the only thing preventing this from crashing at assert(!PyTuple_CheckExact(op)); in tupledealloc. If I'm understanding right, a tuple allocated for millions of items could linger around on the freelist of 2-tuples.

I think it might be safer to call _PyTuple_Resize in the underfilled case. That's what the tuple() constructor does as well.
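
A toy Python model of that freelist concern (an editorial sketch, not CPython source; the constants mirror CPython's PyTuple_MAXSAVESIZE = 20 and PyTuple_MAXFREELIST = 2000):

# Toy model: a dying tuple is cached on the freelist indexed by Py_SIZE(),
# so a tuple shrunk only via Py_SET_SIZE() is filed under its small logical
# length while still owning its original, much larger allocation.

PyTuple_MAXSAVESIZE = 20      # lengths 1..19 have freelists in CPython
PyTuple_MAXFREELIST = 2000    # at most 2000 cached tuples per length

freelists = {n: [] for n in range(1, PyTuple_MAXSAVESIZE)}

def toy_tupledealloc(logical_len, allocated_words):
    # Cache the dead tuple if its *logical* length is freelist-sized.
    if 0 < logical_len < PyTuple_MAXSAVESIZE:
        bucket = freelists[logical_len]
        if len(bucket) < PyTuple_MAXFREELIST:
            bucket.append(allocated_words)   # the big allocation stays alive
            return 'cached'
    return 'freed'

# A final batch of 2 items cut (via Py_SET_SIZE) from a million-slot tuple:
print(toy_tupledealloc(2, 1_000_000))        # 'cached' on the 2-tuple freelist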

@rhettinger (Contributor, Author) commented

I thought about that as well, but what we're doing is reasonable. It just leaves the tuple over-allocated, which does not defeat any subsequent reuse. The Py_SET_SIZE is faster and simpler than what _PyTuple_Resize() does (which includes possibly building a whole new tuple). The tuple constructor has different concerns; there it is more important not to waste space. Here, we really don't care, even in the rare case where the batch size is large.

@rhettinger merged commit 35cc0ea into python:main on Dec 8, 2022
@rhettinger (Contributor, Author) commented on Dec 8, 2022

Hmph, I pushed the green button without giving you a chance to respond. Sorry about that. If you feel strongly about _PyTuple_Resize(), then we can continue the conversation and I'll look at it again. I don't see any way to view this as "unsafe"; there's just a question of whether it is undesirable for an overallocated tuple to be put in the reuse pool. Since PyTuple_MAXSAVESIZE is appropriately set to a small size (currently 20), it doesn't matter.

@sweeneyde (Member) commented

No worries. I agree that there is no genuine "safety" concern, just a question of memory overuse here and there.

This was my fear: someone may be doing an operation that involves very high per-batch overhead, and so they choose a large batch size, say 0.1% of working memory. I've written code roughly like this before, and probably would have used batched() had it existed:

from itertools import batched  # new in 3.12; SQL_GET_DATA and the file helpers below are placeholders

BATCHSIZE = 1_000_000

def get_data(db_cursor, ids):
    for batch in batched(ids, BATCHSIZE):
        db_cursor.executemany(SQL_GET_DATA, batch)
        yield from db_cursor.fetchall()

def file_pairs():
    ...

for in_file, out_file in file_pairs():
    ids = get_data_from_file(in_file)
    for row in get_data(db_cursor, ids):
        append_data_to_file(out_file, row)

If there are many files with len(ids) % BATCHSIZE < 20, including len(ids) < 20, then there could be up to around 2_000*20*BATCHSIZE extra words caught up in various tuples (2000 freelist entries per length, times 20 freelist lengths, with each dead tuple still owning roughly BATCHSIZE words).

Some mitigating factors though:

  • I imagine BATCHSIZE will often be hand-tuned and hard-coded, so if someone notices too much memory usage, they can shrink the batch size.
  • Freelists will often be full, so the stored tuples may not be added to the freelist anyway. Only specific allocation patterns matter.

But this data-dependence leads to another issue: the code seems like it ought to take a large constant amount of memory, but for some obscure inputs (many small leftover batches), it could take 40000x as much memory, caught up in mysterious places like the file_pairs() tuples or database rows. Even if every file happens to have exactly 1_000_007 ids, we wind up with 2000*1000000 allocated words in the length-7 freelist.
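
A rough check of that figure, assuming 8-byte words on a 64-bit build:

# 2000 cached length-7 tuples, each still owning 1_000_000 slots of storage:
words = 2000 * 1_000_000
print(f"{words * 8 / 2**30:.1f} GiB")   # ~14.9 GiB parked in dead tuples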

I don't really know how to assess how probable these are, especially with the LIFO nature of our freelists, so my thought was to do the more predictable thing and be as conservative as possible with memory usage, giving things back to the operating system as soon as possible.

But maybe my usage of huge batches is playing with fire anyway, so it would be understandable to keep the code as is for that reason.

Another option would be to only do a _PyTuple_Resize if the result will have a length that belongs in a freelist, though I suppose that makes the code uglier. Or do as list_resize does and only realloc if we're under half full.
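
A sketch of those two heuristics as Python pseudocode (hypothetical helper names; the real check would live in the C implementation of batched()):

PyTuple_MAXSAVESIZE = 20   # CPython's freelist cutoff for tuple lengths

def should_shrink_freelist_rule(result_len, allocated_len):
    # Shrink only when the finished tuple is short enough that it could
    # later be cached on a per-length freelist.
    return result_len < allocated_len and result_len < PyTuple_MAXSAVESIZE

def should_shrink_halffull_rule(result_len, allocated_len):
    # list_resize-style: shrink only when less than half the allocation
    # is used, bounding the wasted space to roughly 2x.
    return result_len < allocated_len // 2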

Thoughts?

@rhettinger (Contributor, Author) commented

The scenario seems bogus, but I can't divert any more time to this. So, I've put the resize in.

I quite liked that Py_SET_SIZE compiled to a single instruction, str x23, [x19, #16]. The new code path is 100x larger and will partially blow out the instruction cache. I have mixed feelings about this. On the one hand, it assuages worries and might prevent some future issue from arising. On the other hand, it falls in the category of "make everyone pay for a problem that almost no one has". The notion of an overallocated tuple floating around seems to be too much to bear.
