Provide timings for FDW queries #2


Closed
wants to merge 5 commits

Conversation

@pramsey commented Jan 8, 2016

When debug output is visible, this patch will show the amount of time between the opening of a read cursor on an FDW connection and the completion of the cursor read. To see the debug messages in the console, client_min_messages must be raised to at least DEBUG1:

contrib_regression=# set client_min_messages = debug1;

Then, to compare against the overall query time, turn on \timing in psql and run the SQL:

contrib_regression=# SELECT count(c3) FROM ft1 t1 WHERE t1.c1 === t1.c2;
DEBUG:  FDW Query Duration: 5.853 ms
 count 
-------
     6
(1 row)

Time: 7.189 ms
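For context, here is a minimal sketch of the kind of instrumentation involved, using the backend's instr_time.h macros. The helper names and their placement in postgres_fdw are hypothetical, not the patch itself:

```c
#include "postgres.h"
#include "portability/instr_time.h"

static instr_time cursor_open_time;

/* Hypothetical hook: call when the remote read cursor is opened. */
static void
fdw_timing_start(void)
{
	INSTR_TIME_SET_CURRENT(cursor_open_time);
}

/* Hypothetical hook: call when the cursor read completes. */
static void
fdw_timing_stop(void)
{
	instr_time	elapsed;

	INSTR_TIME_SET_CURRENT(elapsed);
	INSTR_TIME_SUBTRACT(elapsed, cursor_open_time);
	elog(DEBUG1, "FDW Query Duration: %.3f ms",
		 INSTR_TIME_GET_MILLISEC(elapsed));
}
```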

tglsfdc and others added 5 commits January 7, 2016 15:22
When we're creating subdirectories of PGDATA during initdb, we know darn
well that the parent directory exists (or should exist) and that the new
subdirectory doesn't (or shouldn't).  There is therefore no need to use
anything more complicated than mkdir().  Using pg_mkdir_p() just opens us
up to unexpected failure modes, such as the one exhibited in bug #13853
from Nuri Boardman.  It's not very clear why pg_mkdir_p() went wrong there,
but it is clear that we didn't need to be trying to create parent
directories in the first place.  We're not even saving any code, as proven
by the fact that this patch nets out at minus five lines.

Since this is a response to a field bug report, back-patch to all branches.
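In outline, the post-patch pattern is just plain mkdir() with a hard failure; a sketch under that assumption, not the verbatim initdb code:

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>

/* Sketch: the parent directory is known to exist and the subdirectory is
 * known not to, so plain mkdir() suffices and any failure is fatal. */
static void
make_subdirectory(const char *path)
{
	if (mkdir(path, S_IRWXU) < 0)
	{
		fprintf(stderr, "could not create directory \"%s\": %s\n",
				path, strerror(errno));
		exit(1);
	}
}
```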
Turns out the only reason initdb -X worked is that pg_mkdir_p won't
whine if you point it at something that's a symlink to a directory.
Otherwise, the attempt to create pg_xlog/ just like all the other
subdirectories would have failed.  Let's be a little more explicit
about what's happening.  Oversight in my patch for bug #13853
(mea culpa for not testing -X ...)

Tatsuro Yamada

The error message wording for AttributeError has changed in Python 3.5.
For the plpython_error test, add a new expected file.  In the
plpython_subtransaction test, we didn't really care what the exception
is, only that it is something coming from Python.  So use a generic
exception instead, which has a message that doesn't vary across
versions.
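For illustration, the wording difference being accommodated looks roughly like this (paraphrased; the exact strings in the regression output may differ):

```
Python 3.4 and earlier:  AttributeError: 'module' object has no attribute 'foo'
Python 3.5 and later:    AttributeError: module 'plpy' has no attribute 'foo'
```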

Back-patch of commit f16d522, which
was previously back-patched into 9.2-9.4, but missed 9.5.
@pramsey pramsey closed this Jan 8, 2016
@pramsey pramsey deleted the REL9_5_STABLE_profile_fdw branch January 8, 2016 19:44
rafatower pushed a commit that referenced this pull request Nov 23, 2016
When a subtransaction is aborted in plpython because of an SPI
exception, plpython looks up a matching Python exception in the
`PLy_spi_exceptions` hash and makes the Python VM raise it.

That hash is built during module initialization, but the exception
objects stored in it are not protected from the Python garbage
collector; once they have been collected, processing any SPI exception
can dereference a dangling pointer and crash with a segmentation fault.
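The standard remedy for this class of bug is to take a strong reference when caching the objects; a sketch of that pattern (hypothetical helper, not the actual patch):

```c
#include <Python.h>

/* Hold an owned reference to each exception object stored in the
 * module-lifetime hash so the Python GC can never reclaim it. */
static void
cache_spi_exception(PyObject **slot, PyObject *exc)
{
	Py_INCREF(exc);		/* reference is held for the life of the backend */
	*slot = exc;
}
```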

PoC to reproduce the issue:

```sql
CREATE OR REPLACE FUNCTION server_crashes()
RETURNS VOID
AS $$
    import gc
    gc.collect()
    plan = plpy.prepare('SELECT raises_an_spi_exception();', [])
    plpy.execute(plan)
$$ LANGUAGE plpythonu;

CREATE OR REPLACE FUNCTION raises_an_spi_exception()
RETURNS VOID
AS $$
DECLARE
  sql TEXT;
BEGIN
  sql = format('%I', NULL); -- NullValueNotAllowed
END
$$ LANGUAGE plpgsql;

SELECT server_crashes(); -- segfault here
```

Stacktrace of the problem (using PostgreSQL `REL9_5_STABLE` and Python
`2.7.3-0ubuntu3.8` on Ubuntu 12.04):

```
 Program received signal SIGSEGV, Segmentation fault.
 0x00007f3155c7670b in PyObject_Call (func=0x7f31b7db2a30, arg=0x7f31b87d17d0, kw=0x0) at ../Objects/abstract.c:2525
 2525    ../Objects/abstract.c: No such file or directory.
 (gdb) bt
 #0  0x00007f3155c7670b in PyObject_Call (func=0x7f31b7db2a30, arg=0x7f31b87d17d0, kw=0x0) at ../Objects/abstract.c:2525
 #1  0x00007f3155d81ab1 in PyEval_CallObjectWithKeywords (func=0x7f31b7db2a30, arg=0x7f31b87d17d0, kw=0x0) at ../Python/ceval.c:3890
 #2  0x00007f3155c766ed in PyObject_CallObject (o=0x7f31b7db2a30, a=0x7f31b87d17d0) at ../Objects/abstract.c:2517
 #3  0x00007f31561e112b in PLy_spi_exception_set (edata=0x7f31b8743d78, excclass=0x7f31b7db2a30) at plpy_spi.c:547
 #4  PLy_spi_subtransaction_abort (oldcontext=<optimized out>, oldowner=<optimized out>) at plpy_spi.c:527
 #5  0x00007f31561e2185 in PLy_spi_execute_plan (ob=0x7f31b87d0cd8, list=0x7f31b7c530d8, limit=0) at plpy_spi.c:307
 #6  0x00007f31561e22d4 in PLy_spi_execute (self=<optimized out>, args=0x7f31b87a6d08) at plpy_spi.c:180
 #7  0x00007f3155cda4d6 in PyCFunction_Call (func=0x7f31b7d29600, arg=0x7f31b87a6d08, kw=0x0) at ../Objects/methodobject.c:81
 #8  0x00007f3155d82383 in call_function (pp_stack=0x7fff9207e710, oparg=2) at ../Python/ceval.c:4021
 #9  0x00007f3155d7cda4 in PyEval_EvalFrameEx (f=0x7f31b8805be0, throwflag=0) at ../Python/ceval.c:2666
 #10 0x00007f3155d82898 in fast_function (func=0x7f31b88b5ed0, pp_stack=0x7fff9207ea70, n=0, na=0, nk=0) at ../Python/ceval.c:4107
 #11 0x00007f3155d82584 in call_function (pp_stack=0x7fff9207ea70, oparg=0) at ../Python/ceval.c:4042
 #12 0x00007f3155d7cda4 in PyEval_EvalFrameEx (f=0x7f31b8805a00, throwflag=0) at ../Python/ceval.c:2666
 #13 0x00007f3155d7f8a9 in PyEval_EvalCodeEx (co=0x7f31b88aa460, globals=0x7f31b8727ea0, locals=0x7f31b8727ea0, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ../Python/ceval.c:3253
 #14 0x00007f3155d74ff4 in PyEval_EvalCode (co=0x7f31b88aa460, globals=0x7f31b8727ea0, locals=0x7f31b8727ea0) at ../Python/ceval.c:667
 #15 0x00007f31561dc476 in PLy_procedure_call (kargs=kargs@entry=0x7f31561e5690 "args", vargs=<optimized out>, proc=0x7f31b873b2d0, proc=0x7f31b873b2d0) at plpy_exec.c:801
 #16 0x00007f31561dd9c6 in PLy_exec_function (fcinfo=fcinfo@entry=0x7f31b7c1f870, proc=0x7f31b873b2d0) at plpy_exec.c:61
 #17 0x00007f31561de9f9 in plpython_call_handler (fcinfo=0x7f31b7c1f870) at plpy_main.c:291
```
Algunenano pushed a commit that referenced this pull request Nov 30, 2018
refresh_by_match_merge() has some issues in the way it builds a SQL
query to construct the "diff" table:

1. It doesn't require the selected unique index(es) to be indimmediate.
2. It doesn't pay attention to the particular equality semantics enforced
by a given index, but just assumes that they must be those of the column
datatype's default btree opclass.
3. It doesn't check that the indexes are btrees.
4. It's insufficiently careful to ensure that the parser will pick the
intended operator when parsing the query.  (This would have been a
security bug before CVE-2018-1058.)
5. It's not careful about indexes on system columns.

The way to fix #4 is to make use of the existing code in ri_triggers.c
for generating an arbitrary binary operator clause.  I chose to move
that to ruleutils.c, since that seems a more reasonable place to be
exporting such functionality from than ri_triggers.c.
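For illustration, the difference is between emitting a bare `newdata.col = mv.col`, which the parser resolves through the search_path, and emitting an explicitly qualified `newdata.col OPERATOR(pg_catalog.=) mv.col`, which pins down exactly which operator is meant. (The table and column names here are illustrative, not the query text the patch generates.)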

While #1, #3, and #5 are just latent given existing feature restrictions,
and #2 doesn't arise in the core system for lack of alternate opclasses
with different equality behaviors, #4 seems like an issue worth
back-patching.  That's the bulk of the change anyway, so just back-patch
the whole thing to 9.4 where this code was introduced.

Discussion: https://postgr.es/m/[email protected]
jvillarf pushed a commit that referenced this pull request Aug 3, 2021
Due to how pg_size_pretty(bigint) was implemented, the value returned for
a negative number of bytes could fail to mirror the value returned for the
equivalent positive number of bytes.  This was due to two separate issues.

1. The function used bit shifting to convert the number of bytes into
larger units.  The rounding performed by bit shifting is not the same as
dividing.  For example, -3 >> 1 = -2, but -3 / 2 = -1.  The two
operations are only equivalent for non-negative numbers.

2. The half_rounded() macro rounded towards positive infinity.  This meant
that negative numbers rounded towards zero and positive numbers rounded
away from zero.

Here we fix #1 by dividing the values instead of bit shifting.  We fix #2
by adjusting the half_rounded() macro to always round away from zero.
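A small standalone demonstration of both problems and the shape of the fix; the macro below matches the behavior described above but is a sketch, not necessarily the committed definition:

```c
#include <stdio.h>

/* Halve x, rounding half away from zero for both signs (sketch). */
#define half_rounded(x) (((x) + ((x) < 0 ? -1 : 1)) / 2)

int
main(void)
{
	/* Shifting and dividing disagree for negative values (right-shifting
	 * a negative value is implementation-defined in C; -2 is the typical
	 * arithmetic-shift result). */
	printf("-3 >> 1 = %d, -3 / 2 = %d\n", -3 >> 1, -3 / 2);	/* -2 vs -1 */

	/* The sign-aware macro rounds symmetrically away from zero. */
	printf("half_rounded(3)  = %d\n", half_rounded(3));	/*  2 */
	printf("half_rounded(-3) = %d\n", half_rounded(-3));	/* -2 */
	return 0;
}
```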

Additionally, adjust the pg_size_pretty(numeric) function to be more
explicit that it's using division rather than bit shifting.  A casual
observer might have believed bit shifting was used due to a static
function being named numeric_shift_right.  However, that function
calculated the divisor from the number of bits and then performed division.
Here we make that more clear.  This change is just cosmetic and does not
affect the return value of the numeric version of the function.

Here we also add a set of regression tests for both versions of
pg_size_pretty() which test the values directly before and after the
function switches to the next unit.
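For illustration, the bytes-to-kB switch happens at 10 * 1024 bytes, so such a test can compare `pg_size_pretty(10239::bigint)` (expected `10239 bytes`) against `pg_size_pretty(10240::bigint)` (expected `10 kB`), along with the negated values; these boundary figures are inferred from the function's documented behavior, not quoted from the committed tests.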

This bug was introduced in 8a1fab3.  Prior to that, negative values were
always displayed in bytes.

Author: Dean Rasheed, David Rowley
Discussion: https://postgr.es/m/CAEZATCXnNW4HsmZnxhfezR5FuiGgp+mkY4AzcL5eRGO4fuadWg@mail.gmail.com
Backpatch-through: 9.6, where the bug was introduced.