[FIX] spreadsheet: batch process spreadsheet_revision.commands #284
base: master
Conversation
Good work! :)
src/util/spreadsheet/misc.py (Outdated)

```diff
-from .. import json
+from .. import json, pg
+
+BATCH_SIZE = 10
```
```python
BATCH_SIZE = 10
```
Have you tried larger values too? Is there any impact on the performance?
I did, it takes roughly the same time. Since some records hold up to ~13MB, I thought 100 records could already be too much.
In fact I was not sure how much we could gain by copying those tuples into memory in batches. Answer: whatever it is, it's probably dwarfed by the dominant factor (the processing of those commands). If that's confirmed to be the case, we might as well remove any batching (batch_size = 1), which would let us process any command that can fit in the available memory.
> In fact I was not sure how much we could gain by copying those tuples into memory in batches

It's less about memory copy, it's about network round trips. Would we expect other DBs to potentially have a lot of small tuples instead of a few big ones in this table? If so, it may be worth being clever about it, like selecting the max command size and determining an itersize from that.
Something like this?

```python
itersize = int(available_memory / biggest_command)
```

How can I determine `available_memory`?
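One way to make that concrete, sketched below under stated assumptions: query the size of the largest `commands` payload and divide an assumed memory budget by it. `MEMORY_BUDGET`, `estimate_itersize`, and the hard-coded budget are illustrative names, not part of this PR, and how to obtain the real available memory remains the open question above.

```python
# Hypothetical sketch, not the PR's code: derive an itersize from the
# biggest `commands` payload. The hard-coded budget stands in for the
# still-unresolved `available_memory`.
MEMORY_BUDGET = 256 * 1024 * 1024  # assumed budget in bytes; tune per host

def estimate_itersize(cr):
    cr.execute("SELECT COALESCE(MAX(LENGTH(commands)), 1) FROM spreadsheet_revision")
    [biggest_command] = cr.fetchone()
    # never go below 1, even if a single command exceeds the budget
    return max(1, MEMORY_BUDGET // biggest_command)
```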
src/util/spreadsheet/misc.py (Outdated)
```diff
 def iter_commands(cr, like_all=(), like_any=()):
     if not (bool(like_all) ^ bool(like_any)):
         raise ValueError("Please specify `like_all` or `like_any`, not both")
-    cr.execute(
+    ncr = pg.named_cursor(cr, itersize=BATCH_SIZE)
```
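For context, a hedged sketch of how the batched cursor might then be consumed; only `pg.named_cursor(cr, itersize=BATCH_SIZE)` comes from the diff above, while the query and the `process` helper are placeholders:

```python
# Illustrative only: iterating the named cursor fetches rows from the
# server BATCH_SIZE at a time instead of materializing the whole result set.
ncr = pg.named_cursor(cr, itersize=BATCH_SIZE)
ncr.execute("SELECT id, commands FROM spreadsheet_revision")
for revision_id, commands in ncr:
    process(revision_id, commands)  # hypothetical per-revision handler
ncr.close()  # or use a context manager, as suggested below
```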
Using a context manager you do not need to close it explicitly.
```diff
-ncr = pg.named_cursor(cr, itersize=BATCH_SIZE)
+with pg.named_cursor(cr, itersize=BATCH_SIZE) as ncr:
```
That said, this is just in the name of a more pythonic implementation. IOW: imo you can keep your current version, if you like it better.
Force-pushed from 3752d09 to 327a6f6
src/util/spreadsheet/misc.py (Outdated)
```sql
SELECT id,
       commands
  FROM spreadsheet_revision
 WHERE commands LIKE {}(%s::text[])
```
Keep the formatting.
```suggestion
SELECT id,
       commands
  FROM spreadsheet_revision
 WHERE commands LIKE {}(%s::text[])
```
src/util/spreadsheet/misc.py (Outdated)
if "commands" not in data_loaded: | ||
continue | ||
data_old = json.dumps(data_loaded, sort_keys=True) | ||
with pg.named_cursor(cr, itersize=1) as ncr: |
You can either leave the default `itersize`, or tune it to a value that works for the data. Another alternative is to use `fetchmany` directly.
```diff
-with pg.named_cursor(cr, itersize=1) as ncr:
+with pg.named_cursor(cr) as ncr:
```
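As a rough illustration of the `fetchmany` alternative (the batch size, query, and `process` helper here are assumptions, not values from this PR):

```python
# Sketch only: pull rows in explicit batches instead of relying on itersize.
with pg.named_cursor(cr) as ncr:
    ncr.execute("SELECT id, commands FROM spreadsheet_revision")
    while True:
        rows = ncr.fetchmany(100)  # assumed batch size
        if not rows:
            break
        for revision_id, commands in rows:
            process(revision_id, commands)  # hypothetical handler
```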
Some dbs have `spreadsheet_revision` records with over 10 million characters in `commands`. If the number of records is high, this leads to memory errors. We distribute them in buckets of at most `memory_cap` total size, and use a named cursor to process them bucket by bucket. Commands larger than `memory_cap` get a bucket of their own.
Force-pushed from 327a6f6 to 508732d
[FIX] spreadsheet: batch process spreadsheet_revision.commands

Some dbs have `spreadsheet_revision` records with over 10 million characters in `commands`. If the number of records is high, this leads to memory errors here. We distribute them in buckets of at most `memory_cap` total size, and use a named cursor to process them bucket by bucket. Commands larger than `memory_cap` get a bucket of their own.

Fixes upg-2899961
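A minimal sketch of the bucketing the commit message describes, assuming buckets are built from pre-fetched sizes; `bucket_revisions` and its query are illustrative, not the PR's actual implementation:

```python
# Hedged sketch: group revision ids so each bucket's total `commands`
# length stays under memory_cap; a command larger than memory_cap ends
# up alone in its own bucket.
def bucket_revisions(cr, memory_cap):
    cr.execute("SELECT id, LENGTH(commands) FROM spreadsheet_revision ORDER BY id")
    bucket, bucket_size = [], 0
    for revision_id, size in cr.fetchall():
        if bucket and bucket_size + size > memory_cap:
            yield bucket
            bucket, bucket_size = [], 0
        bucket.append(revision_id)
        bucket_size += size
    if bucket:
        yield bucket
```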