Tomas Vondra [Fri, 17 Oct 2025 19:44:42 +0000 (21:44 +0200)]
Fix hashjoin memory balancing logic
Commit a1b4f289beec improved the hashjoin sizing to also consider the
memory used by BufFiles for batches. The code however had multiple
issues, making it ineffective or not working as expected in some cases.
* The amount of memory needed by buffers was calculated using uint32,
so it would overflow for nbatch >= 262144. If this happened, the loop
would exit prematurely and the memory usage would not be reduced.
The nbatch overflow is fixed by reworking the condition to not use a
multiplication at all, so there's no risk of overflow; see the sketch
after this list. An explicit cast was added to a similar calculation
in ExecHashIncreaseBatchSize.
* The loop adjusting the nbatch value used hash_table_bytes to calculate
the old/new size, but then updated only space_allowed. The consequence
is the total memory usage was not reduced, but all the memory saved by
reducing the number of batches was used for the internal hash table.
This was fixed by using only space_allowed. This is also more correct,
because hash_table_bytes does not account for skew buckets.
* The code was also doubling multiple parameters (e.g. the number of
buckets for hash table), but was missing overflow protections.
The loop now checks for overflow, and terminates if needed. It'd be
possible to cap the value and continue the loop, but it's not worth
the complexity. And the overflow implies the in-memory hash table is
already very large anyway.
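As a rough illustration of the first item above (not the actual hashjoin
code; it assumes, for the sake of the example, buffer memory on the order
of nbatch * 2 * BLCKSZ with the usual 8kB BLCKSZ):

    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define BLCKSZ 8192

    static bool
    batch_buffers_exceed_limit(uint32_t nbatch, size_t space_allowed)
    {
        /*
         * Broken variant: the uint32 product nbatch * 2 * BLCKSZ wraps
         * around once nbatch reaches 262144 (2^18 * 2^14 == 2^32), so the
         * check silently stops firing for large batch counts:
         *
         *     return (uint32_t) (nbatch * 2 * BLCKSZ) > space_allowed;
         */

        /* Overflow-safe: compare against a division instead. */
        return nbatch > space_allowed / (2 * BLCKSZ);
    }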
While at it, rework the comment explaining how the memory balancing
works, to make it more concise and easier to understand.
The initial nbatch overflow issue was reported by Vaibhav Jain. The
other issues were noticed by me and Melanie Plageman. Fix by me, with a
lot of review and feedback by Melanie.
Backpatch to 18, where the hashjoin memory balancing was introduced.
Reported-by: Vaibhav Jain <[email protected]>
Reviewed-by: Melanie Plageman <[email protected]>
Backpatch-through: 18
Discussion: https://postgr.es/m/CABa-Az174YvfFq7rLS+VNKaQyg7inA2exvPWmPWqnEn6Ditr_Q@mail.gmail.com
Masahiko Sawada [Fri, 17 Oct 2025 18:28:54 +0000 (11:28 -0700)]
Remove unused data_bufsz from DecodedBkpBlock struct.
Author: Mikhail Gribkov <[email protected]>
Reviewed-by: Nazir Bilal Yavuz <[email protected]>
Discussion: https://postgr.es/m/CAMEv5_sxuaiAfSy1ZyN%3D7UGbHg3C10cwHhEk8nXEjiCsBVs4vQ%40mail.gmail.com
Nathan Bossart [Fri, 17 Oct 2025 16:36:50 +0000 (11:36 -0500)]
Fix privilege checks for pg_prewarm() on indexes.
pg_prewarm() currently checks for SELECT privileges on the target
relation. However, indexes do not have access rights of their own,
so a role may be denied permission to prewarm an index despite
having the SELECT privilege on its parent table. This commit fixes
this by locking the parent table before the index (to avoid
deadlocks) and checking for SELECT on the parent table. Note that
the code is largely borrowed from
amcheck_lock_relation_and_check().
An obvious downside of this change is the extra AccessShareLock on
the parent table during prewarming, but that isn't expected to
cause too much trouble in practice.
Author: Ayush Vatsa <[email protected]>
Co-authored-by: Nathan Bossart <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Reviewed-by: Jeff Davis <[email protected]>
Discussion: https://postgr.es/m/CACX%2BKaMz2ZoOojh0nQ6QNBYx8Ak1Dkoko%3DD4FSb80BYW%2Bo8CHQ%40mail.gmail.com
Backpatch-through: 13
Tom Lane [Fri, 17 Oct 2025 15:25:53 +0000 (11:25 -0400)]
Improve TAP tests by replacing ok() with better Test::More functions
Transpose the changes made by commit fabb33b35 in 002_pg_dump.pl
into its recently-created clone 006_pg_dump_compress.pl.
Daniel Gustafsson [Fri, 17 Oct 2025 12:21:26 +0000 (14:21 +0200)]
Avoid warnings in tests when openssl binary isn't available
The SSL tests for pg_stat_ssl try to exactly match the serial
from the certificate by extracting it with the openssl binary.
If that fails due to the binary not being available, a fallback
match is used, but the attempt to execute a missing binary adds
a warning to the output which readers can mistake for a test
failure. Fix by only attempting the extraction if the openssl
binary was found by autoconf/meson.
Backpatch down to v16 where commit c8e4030d1bdd made the test
use the OPENSSL variable from autoconf/meson instead of a
hard-coded value.
Author: Daniel Gustafsson <[email protected]>
Reported-by: Christoph Berg <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 16
Peter Eisentraut [Fri, 17 Oct 2025 08:33:54 +0000 (10:33 +0200)]
Change config_generic.vartype to be initialized at compile time
Previously, this was initialized at run time so that it did not have
to be maintained by hand in guc_tables.c. But since that table is now
generated anyway, we might as well generate this bit as well.
Reviewed-by: Chao Li <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/8fdfb91e-60fb-44fa-8df6-f5dea47353c9@eisentraut.org
Peter Eisentraut [Fri, 17 Oct 2025 08:29:42 +0000 (10:29 +0200)]
Use designated initializers for guc_tables
This makes the generating script simpler and the output easier to
read. In the future, it will make it easier to reorder and rearrange
the underlying C structures.
Reviewed-by: Chao Li <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/8fdfb91e-60fb-44fa-8df6-f5dea47353c9@eisentraut.org
Daniel Gustafsson [Fri, 17 Oct 2025 08:03:15 +0000 (10:03 +0200)]
ecpg: check return value of replace_variables()
The function returns false if it fails to allocate memory, so
make sure to check the return value at call sites.
Author: Aleksander Alekseev <[email protected]>
Reviewed-by: Daniel Gustafsson <[email protected]>
Discussion: https://postgr.es/m/CAJ7c6TNPrU8ZxgdfN3PyGY1tzo0bgszx+KkqW0Z7zt3heyC1GQ@mail.gmail.com
Daniel Gustafsson [Fri, 17 Oct 2025 07:38:49 +0000 (09:38 +0200)]
Replace defunct URL with stable archive.org URL in rbtree.c
The URL for "Sorting and Searching Algorithms: A Cookbook"
by Thomas Niemann has started returning 404, and since we
refer to the page for license terms, this replaces the
now-defunct link with one to the copy on archive.org.
Author: Chao Li <[email protected]>
Discussion: https://postgr.es/m/6DED3DEF-875E-4D1D-8F8F-7353D5AF7B79@gmail.com
Michael Paquier [Fri, 17 Oct 2025 05:39:09 +0000 (14:39 +0900)]
Improve TAP tests by replacing ok() with better Test::More functions
The TAP tests whose ok() calls are changed in this commit were relying
on perl operators, rather than equivalents available in Test::More. For
example, rather than the following:
ok($data =~ qr/expr/m, "expr matching");
ok($data !~ qr/expr/m, "expr not matching");
The new test code uses this equivalent:
like($data, qr/expr/m, "expr matching");
unlike($data, qr/expr/m, "expr not matching");
A huge benefit of the new formulation is that the values being checked
are reported when a failure happens, making debugging easier, whether
the tests run in the buildfarm, in the CI or locally.
This change leads to more test code overall, as perltidy likes to make
the code pretty the way it is in this commit.
Author: Sadhuprasad Patro <[email protected]>
Discussion: https://postgr.es/m/CAFF0-CHhwNx_Cv2uy7tKjODUbeOgPrJpW4Rpf1jqB16_1bU2sg@mail.gmail.com
Fujii Masao [Fri, 17 Oct 2025 05:03:42 +0000 (14:03 +0900)]
doc: Clarify when backend_xmin in pg_stat_replication can be NULL.
Improve the documentation of pg_stat_replication to explain when
the backend_xmin column becomes NULL. This happens when
a replication slot is used (the xmin is then shown in pg_replication_slots)
or when hot_standby_feedback is disabled.
Author: Renzo Dani <[email protected]>
Reviewed-by: Fujii Masao <[email protected]>
Discussion: https://postgr.es/m/CA+XOKQAMXzskpdUmj2sg03_5fmiXc2Gs0r3TX1_rmcFcqh+=xQ@mail.gmail.com
Michael Paquier [Fri, 17 Oct 2025 04:06:04 +0000 (13:06 +0900)]
Fix matching check in recovery test 042_low_level_backup
042_low_level_backup compared the result of a query two times with an
integer comparison operator, while the result should be compared as a
string.
The outcome of the tests is currently not impacted by this change.
However, the tests could fail to detect future issues if the query
results become different for some reason.
Oversight in 99b4a63bef94.
Author: Sadhuprasad Patro <[email protected]>
Discussion: https://postgr.es/m/CAFF0-CHhwNx_Cv2uy7tKjODUbeOgPrJpW4Rpf1jqB16_1bU2sg@mail.gmail.com
Backpatch-through: 17
Michael Paquier [Fri, 17 Oct 2025 04:01:14 +0000 (13:01 +0900)]
pg_createsubscriber: Fix matching check in TAP test
040_pg_createsubscriber has been calling safe_psql(), which returns the
result of a SQL query, with ok() without checking the generated result
(in this case 't', for a number of publications).
The outcome of the tests is currently not impacted by this change.
However, the test could fail to detect future issues if the query
results become different.
The test is rewritten so that the number of publications is checked.
This is not the fix suggested originally by the author, but it is more
reliable in the long run.
Oversight in e5aeed4b8020.
Author: Sadhuprasad Patro <[email protected]>
Discussion: https://postgr.es/m/CAFF0-CHhwNx_Cv2uy7tKjODUbeOgPrJpW4Rpf1jqB16_1bU2sg@mail.gmail.com
Backpatch-through: 18
Álvaro Herrera [Thu, 16 Oct 2025 18:21:05 +0000 (20:21 +0200)]
Fix update-po for the PGXS case
The original formulation failed to take into account the fact that for
the PGXS case, the source dir is not $(top_srcdir), so it ended up not
doing anything. Handle it explicitly.
Author: Ryo Matsumura <[email protected]>
Reviewed-by: Bryan Green <[email protected]>
Backpatch-through: 13
Discussion: https://postgr.es/m/TYCPR01MB113164770FB0B0BE6ED21E68EE8DCA@TYCPR01MB11316.jpnprd01.prod.outlook.com
Tom Lane [Thu, 16 Oct 2025 16:52:10 +0000 (12:52 -0400)]
Add more TAP test coverage for pg_dump.
Add a test case to cover pg_dump with --compress=none. This
brings the coverage of compress_none.c up from about 64% to 90%,
in particular covering the new code added in a previous patch.
Include compression of toc.dat in manually-compressed test cases.
We would have found the bug fixed in commit a239c4a0c much sooner
if we'd done this. As far as I can tell, this doesn't reduce test
coverage at all, since there are other tests of directory format
that still use an uncompressed toc.dat.
Widen the wide row used to verify correct (de)compression.
Commit 1a05c1d25 advises us (not without reason) to ensure that
this test case fully fills DEFAULT_IO_BUFFER_SIZE, so that loops
within the compression logic will iterate completely. To follow
that advice with the proposed DEFAULT_IO_BUFFER_SIZE of 128K,
we need something close to this. This does indeed increase the
reported code coverage by a few lines.
While here, fix a glitch that I noticed in testing: the
$glob_patterns tests were incapable of failing, because glob()
will return 'foo' as 'foo' whether there is a matching file or
not. (Indeed, the stanza just above that one relies on that.)
In my testing, this patch adds approximately as much runtime
as was saved by the previous patch, so that it's about a wash
compared to the old code. However, we get better test coverage.
Author: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/3515357.1760128017@sss.pgh.pa.us
Tom Lane [Thu, 16 Oct 2025 16:51:55 +0000 (12:51 -0400)]
Split 002_pg_dump.pl into two test files.
Add a new test script 006_pg_dump_compress.pl, containing just
the pg_dump tests specifically concerned with compression, and
remove those tests from 002_pg_dump.pl. We can also drop some
infrastructure in 002_pg_dump.pl that was used only for these tests.
The point of this is to avoid the cost of running these test
cases over and over in all the scenarios (runs) that 002_pg_dump.pl
exercises. We don't learn anything more about the behavior of the
compression code that way, and we expend significant amounts of
time, since one of these test cases is quite large and due to get
larger.
The intent of this specific patch is to provide exactly the same
coverage as before, except that I went back to using --no-sync
in all the test runs moved over to 006_pg_dump_compress.pl.
I think that avoiding that had basically been cargo-culted into
these test cases as a result of modeling them on the
defaults_custom_format test case; again, doing that over and over
isn't going to teach us anything new.
Author: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/3515357.1760128017@sss.pgh.pa.us
Tom Lane [Thu, 16 Oct 2025 16:50:18 +0000 (12:50 -0400)]
Align the data block sizes of pg_dump's various compression modes.
After commit fe8192a95, compress_zstd.c tends to produce data block
sizes around 128K, and we don't really have any control over that
unless we want to overrule ZSTD_CStreamOutSize(). Which seems like
a bad idea. But let's try to align the other compression modes to
produce block sizes roughly comparable to that, so that pg_restore's
skip-data performance isn't enormously different for different modes.
gzip compression can be brought in line simply by setting
DEFAULT_IO_BUFFER_SIZE = 128K, which this patch does. That
increases some unrelated buffer sizes, but none of them seem
problematic for modern platforms.
lz4's idea of appropriate block size is highly nonlinear:
if we just increase DEFAULT_IO_BUFFER_SIZE then the output
blocks end up around 200K. I found that adjusting the slop
factor in LZ4State_compression_init was a not-too-ugly way
of bringing that number roughly into line.
With compress = none you get data blocks the same sizes as the
table rows, which seems potentially problematic for narrow tables.
Introduce a layer of buffering to make that case match the others.
Comments in compress_io.h and 002_pg_dump.pl suggest that if
we increase DEFAULT_IO_BUFFER_SIZE then we need to increase the
amount of data fed through the tests in order to improve coverage.
I've not done that here, leaving it for a separate patch.
Author: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/3515357.1760128017@sss.pgh.pa.us
Nathan Bossart [Thu, 16 Oct 2025 16:31:38 +0000 (11:31 -0500)]
Remove partColsUpdated.
This information appears to have been unused since commit
c5b7ba4e67. We could not find any references in third-party code,
either.
Reviewed-by: Chao Li <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Reviewed-by: Michael Paquier <[email protected]>
Discussion: https://postgr.es/m/aO_CyFRpbVMtgJWM%40nathan
Amit Kapila [Thu, 16 Oct 2025 05:10:50 +0000 (05:10 +0000)]
Refactor logical worker synchronization code into a separate file.
To support the upcoming addition of a sequence synchronization worker,
this patch extracts common synchronization logic shared by table sync
workers and the new sequence sync worker into a dedicated file. This
modularization improves code reuse, maintainability, and clarity in the
logical workers framework.
Author: vignesh C <[email protected]>
Author: Hou Zhijie <[email protected]>
Reviewed-by: shveta malik <[email protected]>
Reviewed-by: Dilip Kumar <[email protected]>
Reviewed-by: Peter Smith <[email protected]>
Reviewed-by: Hayato Kuroda <[email protected]>
Reviewed-by: Chao Li <[email protected]>
Reviewed-by: Amit Kapila <[email protected]>
Discussion: https://postgr.es/m/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
Amit Langote [Thu, 16 Oct 2025 05:01:44 +0000 (14:01 +0900)]
Fix EPQ crash from missing partition directory in EState
EvalPlanQualStart() failed to propagate es_partition_directory into
the child EState used for EPQ rechecks. When execution time partition
pruning ran during the EPQ scan, executor code dereferenced a NULL
partition directory and crashed.
Previously, propagating es_partition_directory into the EPQ EState was
unnecessary because CreatePartitionPruneState(), which sets it on
demand, also initialized the exec-pruning context. After commit
d47cbf474, CreatePartitionPruneState() now initializes only the init-
time pruning context, leaving exec-pruning context initialization to
ExecInitNode(). Since EvalPlanQualStart() runs only ExecInitNode() and
not CreatePartitionPruneState(), it can encounter a NULL
es_partition_directory. Other executor fields initialized during
CreatePartitionPruneState() are already copied into the child EState
thanks to commit 8741e48e5d, but es_partition_directory was missed.
Fix by borrowing the parent estate's es_partition_directory in
EvalPlanQualStart(), and by clearing that field in EvalPlanQualEnd()
so the parent remains responsible for freeing the directory.
Add an isolation test permutation that triggers EPQ with execution-
time partition pruning, the case that reproduces this crash.
Bug: #19078
Reported-by: Yuri Zamyatin <[email protected]>
Diagnosed-by: David Rowley <[email protected]>
Author: David Rowley <[email protected]>
Co-authored-by: Amit Langote <[email protected]>
Discussion: https://postgr.es/m/19078-dfd62f840a2c0766@postgresql.org
Backpatch-through: 18
Michael Paquier [Thu, 16 Oct 2025 02:39:45 +0000 (11:39 +0900)]
Override log_error_verbosity to "default" in test 009_log_temp_files
Per report from buildfarm member prion. The CI does not use this
parameter, and this buildfarm member sets log_error_verbosity to
"verbose". This would generate extra LOCATION entries in the logs,
causing the regexps of the test to fail.
Trying to support log_error_verbosity=verbose in the test would mean
tweaking all the regexps used in the test to detect an optional set of
LOCATION lines, at least. This would not improve the coverage, and
forcing the GUC value is simpler.
Oversight in 76bba033128a.
Discussion: https://postgr.es/m/[email protected]
Michael Paquier [Thu, 16 Oct 2025 00:02:51 +0000 (09:02 +0900)]
Add tests for logging of temporary file removal and statement
Temporary file usage is sometimes attributed to the wrong query in the
log output. One identified reason is that unnamed portal cleanup (and
consequently temp file logging) happens during the next BIND message,
after debug_query_string has already been updated to the new query.
Dropping an unnamed portal in the next BIND message is a rather old
protocol behavior (fe19e56c572f, also mentioned in the docs).
log_temp_files is a bit newer than that, as of be8a4318815.
This commit adds tests to track which query is displayed next to the
temporary file(s) removed when a portal is dropped, and in some cases if
a query is displayed or not. We have not concluded how to improve the
situation yet; these tests will at least help in checking what changes
in the logs depending on the proposal discussed and how it affects the
scenarios tracked by this new test.
Author: Sami Imseih <[email protected]>
Author: Frédéric Yhuel <[email protected]>
Reviewed-by: Mircea Cadariu <[email protected]>
Reviewed-by: Michael Paquier <[email protected]>
Discussion: https://postgr.es/m/3d07ee43-8855-42db-97e0-bad5db82d972@dalibo.com
Nathan Bossart [Wed, 15 Oct 2025 21:32:40 +0000 (16:32 -0500)]
Fix lookup code for REINDEX INDEX.
This commit adjusts RangeVarCallbackForReindexIndex() to handle an
extremely unlikely race condition involving concurrent OID reuse.
In short, if REINDEX INDEX is executed at the same time that the
index is re-created with the same name and OID but a different
parent table OID, we might lock the wrong parent table. To fix,
simply detect when this happens and emit an ERROR. Unfortunately,
we can't gracefully handle this situation because we will have
already locked the index, and we must lock the parent table before
the index to avoid deadlocks.
While at it, I've replaced all but one early return in this
callback function with ERRORs that should be unreachable. While I
haven't verified the presence of a live bug, the checks in question
appear to be unnecessary, and the early returns seem prone to
breaking the parent table locking code in subtle ways. If nothing
else, this simplifies the code a bit.
This is a bug fix and could be back-patched, but given the presumed
rarity of the race condition and the lack of reports, I'm not going
to bother.
Reviewed-by: Tom Lane <[email protected]>
Reviewed-by: Jeff Davis <[email protected]>
Discussion: https://postgr.es/m/Z8zwVmGzXyDdkAXj%40nathan
Jeff Davis [Wed, 15 Oct 2025 19:54:01 +0000 (12:54 -0700)]
Add pg_iswalpha() and related functions.
Per-character pg_locale_t APIs. Useful for tsearch parsing and
potentially other places.
Significant overlap with the regc_wc_isalpha() and related functions
in regc_pg_locale.c, but this change leaves those intact for
now.
Discussion: https://postgr.es/m/0151ad01239e2cc7b3139644358cf8f7b9622ff7[email protected]
Nathan Bossart [Wed, 15 Oct 2025 17:47:33 +0000 (12:47 -0500)]
Fix lookups in pg_{clear,restore}_{attribute,relation}_stats().
Presently, these functions look up the relation's OID, lock it, and
then check privileges. Not only does this approach provide no
guarantee that the locked relation matches the arguments of the
lookup, but it also allows users to briefly lock relations for
which they do not have privileges, which might enable
denial-of-service attacks. This commit adjusts these functions to
use RangeVarGetRelidExtended(), which is purpose-built to avoid
both of these issues. The new RangeVarGetRelidCallback function is
somewhat complicated because it must handle both tables and
indexes, and for indexes, we must check privileges on the parent
table and lock it first. Also, it needs to handle a couple of
extremely unlikely race conditions involving concurrent OID reuse.
A downside of this change is that the coding doesn't allow for
locking indexes in AccessShare mode anymore; everything is locked
in ShareUpdateExclusive mode. Per discussion, the original choice
of lock levels was intended for a now defunct implementation that
used in-place updates, so we believe this change is okay.
Reviewed-by: Jeff Davis <[email protected]>
Discussion: https://postgr.es/m/Z8zwVmGzXyDdkAXj%40nathan
Backpatch-through: 18
Peter Eisentraut [Wed, 15 Oct 2025 13:20:28 +0000 (15:20 +0200)]
Change reset_extra into a config_generic common field
This is not specific to the GUC parameter type, so it can be part of
the generic struct rather than the type-specific struct (like the
related "extra" field). This allows for some code simplifications.
Reviewed-by: Chao Li <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/8fdfb91e-60fb-44fa-8df6-f5dea47353c9@eisentraut.org
Peter Eisentraut [Wed, 15 Oct 2025 12:31:12 +0000 (14:31 +0200)]
Add log_autoanalyze_min_duration
The log output functionality of log_autovacuum_min_duration applies to
both VACUUM and ANALYZE, so it is not possible to separate the VACUUM
and ANALYZE log output thresholds. With a single threshold, logs are
likely to be output only for VACUUM and not for ANALYZE.
Therefore, we decided to separate the threshold for log output of VACUUM
by autovacuum (log_autovacuum_min_duration) from the threshold for log
output of ANALYZE by autovacuum (log_autoanalyze_min_duration).
Author: Shinya Kato <[email protected]>
Reviewed-by: Kasahara Tatsuhito <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/CAOzEurQtfV4MxJiWT-XDnimEeZAY+rgzVSLe8YsyEKhZcajzSA@mail.gmail.com
Etsuro Fujita [Wed, 15 Oct 2025 08:15:00 +0000 (17:15 +0900)]
Fix EvalPlanQual handling of foreign/custom joins in ExecScanFetch.
If inside an EPQ recheck, ExecScanFetch would run the recheck method
function for foreign/custom joins even if they aren't descendant nodes
in the EPQ recheck plan tree, which is problematic at least in the
foreign-join case, because such a foreign join isn't guaranteed to have
an alternative local-join plan required for running the recheck method
function; in the postgres_fdw case this could lead to a segmentation
fault or an assert failure in an assert-enabled build when running the
recheck method function.
Even if inside an EPQ recheck, any scan nodes that aren't descendant
ones in the EPQ recheck plan tree should be normally processed by using
the access method function; fix by modifying ExecScanFetch so that if
inside an EPQ recheck, it runs the recheck method function for
foreign/custom joins that are descendant nodes in the EPQ recheck plan
tree as before and runs the access method function for foreign/custom
joins that aren't.
This fix also adds to postgres_fdw an isolation test for an EPQ recheck
that triggers the issues stated above.
Oversight in commit 385f337c9.
Reported-by: Kristian Lejao <[email protected]>
Author: Masahiko Sawada <[email protected]>
Co-authored-by: Etsuro Fujita <[email protected]>
Reviewed-by: Michael Paquier <[email protected]>
Reviewed-by: Etsuro Fujita <[email protected]>
Discussion: https://postgr.es/m/CAD21AoBpo6Gx55FBOW+9s5X=nUw3Xpq64v35fpDEKsTERnc4TQ@mail.gmail.com
Backpatch-through: 13
Peter Eisentraut [Fri, 3 Oct 2025 06:27:18 +0000 (08:27 +0200)]
Add some const qualifiers
in guc-related source files, in anticipation of some further
restructuring.
Reviewed-by: Chao Li <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/8fdfb91e-60fb-44fa-8df6-f5dea47353c9@eisentraut.org
Peter Eisentraut [Fri, 3 Oct 2025 06:27:18 +0000 (08:27 +0200)]
Modernize some for loops
in guc-related source files, in anticipation of some further
restructuring.
Reviewed-by: Chao Li <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/8fdfb91e-60fb-44fa-8df6-f5dea47353c9@eisentraut.org
Peter Eisentraut [Wed, 15 Oct 2025 06:13:07 +0000 (08:13 +0200)]
plpython: Remove support for major version conflict detection
This essentially reverts commit 866566a690b, which installed
safeguards against loading plpython2 and plpython3 into the same
process. We don't support plpython2 anymore, so this is obsolete.
The Python and PL/Python initialization now happens again in
_PG_init() rather than the first time a PL/Python call handler is
invoked. (Often, these will be very close together.)
I kept the separate PLy_initialize() function introduced by
866566a690b to keep _PG_init() a bit modular.
Reviewed-by: Mario González Troncoso <[email protected]>
Reviewed-by: Nathan Bossart <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/9eb9feb6-1df3-4f0c-a0dc-9bcf35273111%40eisentraut.org
Amit Kapila [Wed, 15 Oct 2025 03:42:27 +0000 (03:42 +0000)]
Standardize use of REFRESH PUBLICATION in code and messages.
This patch replaces ALTER SUBSCRIPTION REFRESH with
ALTER SUBSCRIPTION REFRESH PUBLICATION in comments and error messages to
improve clarity and support future extensibility. The change aligns with
the upcoming addition of REFRESH SEQUENCES for sequence synchronization.
Author: vignesh C <[email protected]>
Author: Hou Zhijie <[email protected]>
Reviewed-by: shveta malik <[email protected]>
Reviewed-by: Dilip Kumar <[email protected]>
Reviewed-by: Peter Smith <[email protected]>
Reviewed-by: Hayato Kuroda <[email protected]>
Reviewed-by: Amit Kapila <[email protected]>
Discussion: https://postgr.es/m/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
Michael Paquier [Wed, 15 Oct 2025 02:11:30 +0000 (11:11 +0900)]
pg_createsubscriber: Use new routine to retrieve data of PG_VERSION
pg_createsubscriber is documented as requiring the same major version as
the target clusters. Attempting to use this tool on a cluster where the
control file version read does not match the version the tool was
compiled with would lead to the following error message:
pg_createsubscriber: error: control file appears to be corrupt
This is confusing as the control file is correct: only the expected
version does not match. This commit integrates pg_createsubscriber
with the facility added by cd0be131ba6f, where the contents of
PG_VERSION are read and compared with the value of PG_MAJORVERSION_NUM
expected by the tool. This puts pg_createsubscriber in line with the
documentation, with a better error message when the version does not
match.
Author: Michael Paquier <[email protected]>
Reviewed-by: Masahiko Sawada <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Michael Paquier [Wed, 15 Oct 2025 01:09:48 +0000 (10:09 +0900)]
pg_resetwal: Use new routine to retrieve data of PG_VERSION
pg_resetwal's custom logic to retrieve the version number of a data
folder's PG_VERSION can be replaced by the facility introduced in
cd0be131ba6f. This removes some code.
One thing specific to pg_resetwal is that the first line of PG_VERSION
is read and reported in the error report generated when the major
version read does not match with the version pg_resetwal has been
compiled with. The new logic preserves this property, without changes
to either the error message or the data used in the error report.
Note that as a chdir() is done within the data folder before checking the
data of PG_VERSION, get_pg_version() needs to be tweaked to look for
PG_VERSION in the current folder.
Author: Michael Paquier <[email protected]>
Reviewed-by: Masahiko Sawada <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Michael Paquier [Wed, 15 Oct 2025 00:54:56 +0000 (09:54 +0900)]
pg_combinebackup: Use new routine to retrieve data of PG_VERSION
pg_combinebackup's custom logic to retrieve the version number of a data
folder's PG_VERSION can be replaced by the facility introduced in
cd0be131ba6f. This removes some code.
One thing specific to this tool is that backend versions older than v10
are not supported. The new code does the same checks as the previous
code.
Author: Michael Paquier <[email protected]>
Reviewed-by: Masahiko Sawada <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Masahiko Sawada [Wed, 15 Oct 2025 00:36:11 +0000 (17:36 -0700)]
Revert "pg_createsubscriber: Add log message when no publications exist to drop."
This reverts commit 74ac377d75135e02064fc4427bec401277b4f60c.
The previous change contained a misconception about how publications
are cleaned up on the subscriber. The newly added log message could
confuse users, particularly when running pg_createsubscriber with
--dry-run - users would see a "dropping publication" message
immediately followed by a "no publications found" message.
Discussion: https://postgr.es/m/CAHut+Pu7xz1LqNvyQyvSHrV0Sw6D=e6T-Jm=gh1MRJrkuWGyBQ@mail.gmail.com
Melanie Plageman [Tue, 14 Oct 2025 21:43:41 +0000 (17:43 -0400)]
Make heap_page_is_all_visible independent of LVRelState
This function only requires a few fields from LVRelState, so pass them
in individually.
This change allows calling heap_page_is_all_visible() from code such as
pruneheap.c, which does not have access to an LVRelState.
Author: Melanie Plageman <[email protected]>
Reviewed-by: Kirill Reshke <[email protected]>
Reviewed-by: Robert Haas <[email protected]>
Reviewed-by: Andres Freund <[email protected]>
Discussion: https://postgr.es/m/2wk7jo4m4qwh5sn33pfgerdjfujebbccsmmlownybddbh6nawl%40mdyyqpqzxjek
Melanie Plageman [Tue, 14 Oct 2025 21:03:48 +0000 (17:03 -0400)]
Inline TransactionIdFollows/Precedes[OrEquals]()
These functions appeared prominently in a profile of a patch that sets
the visibility map on-access. Inline them to remove call overhead and
make them cheaper to use in hot paths.
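A minimal sketch of the kind of wraparound-aware comparison these helpers
perform, written as a static inline so hot-path callers avoid call
overhead (simplified; the real definitions, including handling of special
XIDs, live in the tree):

    #include <stdint.h>
    #include <stdbool.h>

    typedef uint32_t TransactionId;

    /* Does id1 logically follow id2, modulo the 32-bit XID wraparound? */
    static inline bool
    xid_follows(TransactionId id1, TransactionId id2)
    {
        int32_t diff = (int32_t) (id1 - id2);

        return diff > 0;
    }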
Author: Melanie Plageman <[email protected]>
Reviewed-by: Kirill Reshke <[email protected]>
Reviewed-by: Andres Freund <[email protected]>
Discussion: https://postgr.es/m/2wk7jo4m4qwh5sn33pfgerdjfujebbccsmmlownybddbh6nawl%40mdyyqpqzxjek
Melanie Plageman [Tue, 14 Oct 2025 19:07:40 +0000 (15:07 -0400)]
Add helper for freeze determination to heap_page_prune_and_freeze
After scanning the line pointers on a heap page during the first phase
of vacuum, we use the information collected to decide whether to use
the assembled freeze plans.
Move this decision logic into a helper function to improve readability.
While here, rename a PruneState member and disambiguate some local
variables in heap_page_prune_and_freeze().
Author: Melanie Plageman <[email protected]>
Reviewed-by: Andres Freund <[email protected]>
Discussion: https://postgr.es/m/2wk7jo4m4qwh5sn33pfgerdjfujebbccsmmlownybddbh6nawl%40mdyyqpqzxjek
Masahiko Sawada [Tue, 14 Oct 2025 18:45:29 +0000 (11:45 -0700)]
pg_createsubscriber: Add log message when no publications exist to drop.
When specifying --clean=publication to pg_createsubscriber, it drops
all existing publications with a log message "dropping all existing
publications in database "testdb"". Add a new log message "no
publications found" when there are no publications to drop, making the
progress more transparent to users.
Author: Peter Smith <[email protected]>
Reviewed-by: Masahiko Sawada <[email protected]>
Discussion: https://postgr.es/m/CAHut+Ptm+WJwbbYXhC0s6FP_98KzZCR=5CPu8F8N5uV8P7BpqA@mail.gmail.com
Jeff Davis [Tue, 14 Oct 2025 18:04:04 +0000 (11:04 -0700)]
pg_regc_locale.c: rename some static functions.
Use the more specific prefix "regc_" rather than the generic prefix
"pg_".
A subsequent commit will create generic versions of some of these
functions that can be called from other modules.
Discussion: https://postgr.es/m/0151ad01239e2cc7b3139644358cf8f7b9622ff7[email protected]
Nathan Bossart [Tue, 14 Oct 2025 17:20:48 +0000 (12:20 -0500)]
dblink: Avoid locking relation before privilege check.
The present coding of dblink's get_rel_from_relname() predates the
introduction of RangeVarGetRelidExtended(), which provides a way to
check permissions before locking the relation. This commit adjusts
get_rel_from_relname() to use that function.
Reviewed-by: Jeff Davis <[email protected]>
Discussion: https://postgr.es/m/aOgmi6avE6qMw_6t%40nathan
Melanie Plageman [Tue, 14 Oct 2025 14:13:10 +0000 (10:13 -0400)]
Bump XLOG_PAGE_MAGIC after xl_heap_prune change
add323da40a6 altered xl_heap_prune, changing the WAL format, but
neglected to bump XLOG_PAGE_MAGIC. Do so now.
Author: Melanie Plageman <[email protected]>
Reported-by: Kirill Reshke <[email protected]>
Reported-by: Michael Paquier <[email protected]>
Discussion: https://postgr.es/m/aO3Gw6hCAZFUd5ab%40paquier.xyz
Tatsuo Ishii [Tue, 14 Oct 2025 10:15:24 +0000 (19:15 +0900)]
Use ereport rather than elog in WinCheckAndInitializeNullTreatment.
Previously WinCheckAndInitializeNullTreatment() used elog() to emit an
error message. ereport() should be used instead because it's a
user-facing error. Also use the existing get_func_name() to get a
function's name, rather than a custom implementation.
Moreover, add an assertion to validate the winobj parameter, just like
the other window function APIs.
Reported-by: Tom Lane <[email protected]>
Author: Tatsuo Ishii <[email protected]>
Reviewed-by: Chao Li <[email protected]>
Discussion: https://postgr.es/m/2952409.1760023154%40sss.pgh.pa.us
Richard Guo [Tue, 14 Oct 2025 07:35:22 +0000 (16:35 +0900)]
Rename apply_at to apply_agg_at for clarity
The field name "apply_at" in RelAggInfo was a bit ambiguous. Rename
it to "apply_agg_at" to improve clarity and make its purpose clearer.
Per complaints from David Rowley and Robert Haas.
Suggested-by: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/CA+TgmoZ0KR2_XCWHy17=HHcQ3p2Mamc9c6Dnnhf1J6wPYFD9ng@mail.gmail.com
Michael Paquier [Tue, 14 Oct 2025 07:27:13 +0000 (16:27 +0900)]
pg_upgrade: Use new routine to retrieve data of PG_VERSION
Unsurprisingly, this shaves code. get_major_server_version() can be
replaced by the new routine added by cd0be131ba6f, with the contents of
PG_VERSION stored in an allocated buffer instead of a fixed-sized one.
Author: Michael Paquier <[email protected]>
Reviewed-by: Masahiko Sawada <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Michael Paquier [Tue, 14 Oct 2025 07:20:42 +0000 (16:20 +0900)]
Introduce frontend API able to retrieve the contents of PG_VERSION
get_pg_version() is able to return a version number that can be used
for comparisons based on PG_VERSION_NUM. A macro is added to convert
the result to a major version number, to work with PG_MAJORVERSION_NUM.
It is possible to pass to the routine an optional argument, where the
contents retrieved from PG_VERSION are saved. This requirement matters
for some of the frontend code (one example: pg_upgrade wants that for
tablespace paths with a version number strictly older than v10).
This will be used by a set of follow-up patches, to be consumed in
various frontend tools that duplicate logic similar to what this
new routine does, like:
- pg_resetwal
- pg_combinebackup
- pg_createsubscriber
- pg_upgrade
This routine supports both the post-v10 version number and the older
flavor (aka 9.6), as required at least by pg_upgrade.
Author: Michael Paquier <[email protected]>
Reviewed-by: Masahiko Sawada <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Michael Paquier [Mon, 13 Oct 2025 23:30:54 +0000 (08:30 +0900)]
Fix version number calculation for data folder flush in pg_combinebackup
The version number calculated by read_pg_version_file() is multiplied
once by 10000, to be able to do comparisons based on PG_VERSION_NUM or
equivalents with a minor version included. However, the version number
given sync_pgdata() was multiplied by 10000 a second time, leading to an
overestimated number.
This issue was harmless (still incorrect) as pg_combinebackup does not
support versions of Postgres older than v10, and sync_pgdata() only
includes a version check due to the rename of pg_xlog/ to pg_wal/. This
folder rename happened in the development cycle of v10. This would
become a problem if in the future sync_pgdata() is changed to have more
version-specific checks.
Oversight in dc212340058b, so backpatch down to v17.
Reviewed-by: Chao Li <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 17
Melanie Plageman [Mon, 13 Oct 2025 22:01:06 +0000 (18:01 -0400)]
Eliminate XLOG_HEAP2_VISIBLE from vacuum phase III
Instead of emitting a separate XLOG_HEAP2_VISIBLE WAL record for each
page that becomes all-visible in vacuum's third phase, specify the VM
changes in the already emitted XLOG_HEAP2_PRUNE_VACUUM_CLEANUP record.
Visibility checks are now performed before marking dead items unused.
This is safe because the heap page is held under exclusive lock for the
entire operation.
This reduces the number of WAL records generated by VACUUM phase III by
up to 50%.
Author: Melanie Plageman <[email protected]>
Reviewed-by: Andres Freund <[email protected]>
Reviewed-by: Robert Haas <[email protected]>
Reviewed-by: Kirill Reshke <[email protected]>
Discussion: https://postgr.es/m/flat/CAAKRu_ZMw6Npd_qm2KM%2BFwQ3cMOMx1Dh3VMhp8-V7SOLxdK9-g%40mail.gmail.com
Tom Lane [Mon, 13 Oct 2025 21:56:45 +0000 (17:56 -0400)]
Fix incorrect message-printing in win32security.c.
log_error() would probably fail completely if used, and would
certainly print garbage for anything that needed to be interpolated
into the message, because it was failing to use the correct printing
subroutine for a va_list argument.
This bug likely went undetected because the error cases this code
is used for are rarely exercised - they only occur when Windows
security API calls fail catastrophically (out of memory, security
subsystem corruption, etc).
The FRONTEND variant can be fixed just by calling vfprintf()
instead of fprintf(). However, there was no va_list variant
of write_stderr(), so create one by refactoring that function.
Following the usual naming convention for such things, call
it vwrite_stderr().
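For illustration only (hedged sketch with made-up names, not the actual
win32security.c code): a variadic reporter must hand its va_list to a
v*-style printing function such as vfprintf(); passing the va_list to
fprintf() would merely format the va_list object itself and print garbage.

    #include <stdarg.h>
    #include <stdio.h>

    static void
    vlog_error(const char *fmt, va_list ap)
    {
        vfprintf(stderr, fmt, ap);   /* correct: consumes the va_list */
        fputc('\n', stderr);
    }

    static void
    log_error(const char *fmt, ...)
    {
        va_list ap;

        va_start(ap, fmt);
        vlog_error(fmt, ap);         /* forward the va_list, don't re-expand it */
        va_end(ap);
    }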
Author: Bryan Green <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/CAF+pBj8goe4fRmZ0V3Cs6eyWzYLvK+HvFLYEYWG=TzaM+tWPnw@mail.gmail.com
Backpatch-through: 13
David Rowley [Mon, 13 Oct 2025 20:25:02 +0000 (09:25 +1300)]
Doc: clarify n_distinct_inherited setting
There was some confusion around how to adjust the n_distinct estimates
for partitioned tables. Here we try and clarify that
n_distinct_inherited needs to be adjusted rather than n_distinct.
Also fix some slightly misleading text which was talking about table
size rather than table rows, fix a grammatical error, and adjust some
text which indicated that ANALYZE was performing calculations based on
the n_distinct settings. Really it's the query planner that does this
and ANALYZE only stores the overridden n_distinct estimate value in
pg_statistic.
Author: David Rowley <[email protected]>
Reviewed-by: David G. Johnston <[email protected]>
Reviewed-by: Chao Li <[email protected]>
Backpatch-through: 13
Discussion: https://postgr.es/m/CAApHDvrL7a-ZytM1SP8Uk9nEw9bR2CPzVb+uP+bcNj=_q-ZmVw@mail.gmail.com
Tom Lane [Mon, 13 Oct 2025 17:17:45 +0000 (13:17 -0400)]
Fix serious performance problems in LZ4Stream_read_internal.
I was distressed to find that reading an LZ4-compressed toc.dat
file was hundreds of times slower than it ought to be. On
investigation, the blame mostly affixes to LZ4Stream_read_overflow's
habit of memmove'ing all the remaining buffered data after each read
operation. Since reading a TOC file tends to involve a lot of small
(even one-byte) decompression calls, that amounts to an O(N^2) cost.
This could have been fixed with a minimal patch, but to my
eyes LZ4Stream_read_internal and LZ4Stream_read_overflow are
badly-written spaghetti code; in particular the eol_flag logic
is inefficient and duplicative. I chose to throw the code
away and rewrite from scratch. This version is about sixty
lines shorter as well as not having the performance issue.
Fortunately, AFAICT the only way to get to this problem is to
manually LZ4-compress the toc.dat and/or blobs.toc files within a
directory-style archive; in the main data files, we read blocks
that are large enough that the O(N^2) behavior doesn't manifest.
Few people do that, which likely explains the lack of field
complaints. Otherwise this performance bug might be considered
bad enough to warrant back-patching.
Author: Tom Lane <[email protected]>
Reviewed-by: Chao Li <[email protected]>
Discussion: https://postgr.es/m/3515357.1760128017@sss.pgh.pa.us
Tom Lane [Mon, 13 Oct 2025 17:01:45 +0000 (13:01 -0400)]
Fix poor buffering logic in pg_dump's lz4 and zstd compression code.
Both of these modules dumped each bit of output that they got from
the underlying compression library as a separate "data block" in
the emitted archive file. In the case of zstd this'd frequently
result in block sizes well under 100 bytes; lz4 is a little better
but still produces blocks around 300 bytes, at least in the test
case I tried. This bloats the archive file a little bit compared
to larger block sizes, but the real problem is that when pg_restore
has to skip each data block rather than seeking directly to some
target data, tiny block sizes are enormously inefficient.
Fix both modules so that they fill their allocated buffer reasonably
well before dumping a data block. In the case of lz4, also delete
some redundant logic that caused the lz4 frame header to be emitted
as a separate data block. (That saves little, but I see no reason
to expend extra code to get worse results.)
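For illustration only (hedged sketch with made-up names, not the actual
pg_dump code): the buffering pattern described above accumulates
compressed output in a fixed-size buffer and only emits a data block once
the buffer fills up, instead of writing each library callback's output as
its own tiny block.

    #include <string.h>
    #include <stddef.h>

    #define OUT_BUFFER_SIZE (128 * 1024)    /* roughly DEFAULT_IO_BUFFER_SIZE */

    typedef struct OutBuffer
    {
        char    data[OUT_BUFFER_SIZE];
        size_t  used;
        void  (*write_block) (const void *buf, size_t len); /* emits one data block */
    } OutBuffer;

    static void
    out_buffer_flush(OutBuffer *ob)
    {
        if (ob->used > 0)
        {
            ob->write_block(ob->data, ob->used);
            ob->used = 0;
        }
    }

    static void
    out_buffer_append(OutBuffer *ob, const char *src, size_t len)
    {
        while (len > 0)
        {
            size_t  room = OUT_BUFFER_SIZE - ob->used;
            size_t  chunk = (len < room) ? len : room;

            memcpy(ob->data + ob->used, src, chunk);
            ob->used += chunk;
            src += chunk;
            len -= chunk;

            /* only emit a data block once the buffer is completely full */
            if (ob->used == OUT_BUFFER_SIZE)
                out_buffer_flush(ob);
        }
    }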
I fixed the "stream API" code too. In those cases, feeding small
amounts of data to fwrite() probably doesn't have any meaningful
performance consequences. But it seems like a bad idea to leave
the two sets of code doing the same thing in two different ways.
In passing, remove unnecessary "extra paranoia" check in
_ZstdWriteCommon. _CustomWriteFunc (the only possible referent
of cs->writeF) already protects itself against zero-length writes,
and it's really a modularity violation for _ZstdWriteCommon to know
that the custom format disallows empty data blocks. Also, fix
Zstd_read_internal to do less work when passed size == 0.
Reported-by: Dimitrios Apostolou <[email protected]>
Author: Tom Lane <[email protected]>
Reviewed-by: Chao Li <[email protected]>
Discussion: https://postgr.es/m/3515357.1760128017@sss.pgh.pa.us
Tom Lane [Mon, 13 Oct 2025 16:44:20 +0000 (12:44 -0400)]
Fix issue with reading zero bytes in Gzip_read.
pg_dump expects a read request of zero bytes to be a no-op; see for
example ReadStr(). Gzip_read got this wrong and falsely supposed
that the resulting gzret == 0 indicated an error. We could complicate
that error-checking logic some more, but it seems best to just fall
out immediately when passed size == 0.
This bug breaks the nominally-supported case of manually gzip'ing
the toc.dat file within a directory-style dump, so back-patch to v16
where this code came in. (Prior branches already have a short-circuit
for size == 0 before their only gzread call.)
Author: Tom Lane <[email protected]>
Reviewed-by: Chao Li <[email protected]>
Discussion: https://postgr.es/m/3515357.1760128017@sss.pgh.pa.us
Backpatch-through: 16
Magnus Hagander [Mon, 13 Oct 2025 13:31:25 +0000 (15:31 +0200)]
docs: Fix protocol version 3.2 message format of CancelRequest
Since protocol version 3.2 the CancelRequest does not have a fixed
length anymore. The protocol docs still listed the length field as a
constant number though. This fixes that.
Author: Jelte Fennema-Nio <[email protected]>
Reported-by: Dmitry Igrishin <[email protected]>
Backpatch-through: 18
Magnus Hagander [Mon, 13 Oct 2025 13:26:37 +0000 (15:26 +0200)]
Remove extra semicolon in example
Reported-By: Pavel Luzanov <[email protected]>
Discussion: https://postgr.es/m/175976566145.768.4645962241073007347@wrigleys.postgresql.org
Backpatch-through: 18
Peter Geoghegan [Sun, 12 Oct 2025 18:04:08 +0000 (14:04 -0400)]
Remove unused nbtree array advancement variable.
Remove a variable that is no longer in use following commit 9a2e2a28.
It's not immediately clear why there were no compiler warnings about
this oversight.
Author: Peter Geoghegan <[email protected]>
Backpatch-through: 18
Tom Lane [Sat, 11 Oct 2025 20:33:55 +0000 (16:33 -0400)]
Restore test coverage of LZ4Stream_gets().
In commit a45c78e32 I removed the only regression test case that
reaches this function, because it turns out that we only use it
if reading an LZ4-compressed blobs.toc file in a directory dump,
and that is a state that has to be created manually. That seems
like a bad thing to not test, not so much for LZ4Stream_gets()
itself as because it means the squirrely eol_flag logic in
LZ4Stream_read_internal() is not tested.
The reason for the change was that I thought the lz4 program did not
have any way to perform compression without explicit specification
of the output file name. However, it turns out that the syntax
synopsis in its man page is a lie, and if you read enough of the
man page you find out that with "-m" it will do what's needful.
So restore the manual compression step in that test case.
Noted while testing some proposed changes in pg_dump's compression
logic.
Author: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/3515357.1760128017@sss.pgh.pa.us
Backpatch-through: 17
Álvaro Herrera [Sat, 11 Oct 2025 18:30:12 +0000 (20:30 +0200)]
Stop creating constraints during DETACH CONCURRENTLY
Commit 71f4c8c6f74b (which implemented DETACH CONCURRENTLY) added code
to create a separate table constraint when a table is detached
concurrently, identical to the partition constraint, on the theory that
such a constraint was needed in case the optimizer had constructed any
query plans that depended on the constraint being there. However, that
theory was apparently bogus because any such plans would be invalidated.
For hash partitioning, those constraints are problematic, because their
expressions reference the OID of the parent partitioned table, to which
the detached table is no longer related; this causes all sorts of
problems (such as inability of restoring a pg_dump of that table, and
the table no longer working properly if the partitioned table is later
dropped).
We'd like to get rid of all those constraints. In fact, for branch
master, do that -- no longer create any substitute constraints.
However, out of fear that some users might somehow depend on these
constraints for other partitioning strategies, for stable branches
(back to 14, which added DETACH CONCURRENTLY), only do it for hash
partitioning.
(If you repeatedly DETACH CONCURRENTLY and then ATTACH a partition, then
with this constraint addition you don't need to scan the table in the
ATTACH step, which presumably is good. But if users really valued this
feature, they would have requested that it worked for non-concurrent
DETACH also.)
Author: Haiyang Li <[email protected]>
Reported-by: Fei Changhong <[email protected]>
Reported-by: Haiyang Li <[email protected]>
Backpatch-through: 14
Discussion: https://postgr.es/m/18371-7fef49f63de13f02@postgresql.org
Discussion: https://postgr.es/m/19070-781326347ade7c57@postgresql.org
Álvaro Herrera [Sat, 11 Oct 2025 14:39:22 +0000 (16:39 +0200)]
dbase_redo: Fix Valgrind-reported memory leak
Introduced by my (Álvaro's) commit 9e4f914b5eba, which was itself
backpatched to pg10, though only pg15 and up contain the problem
because of commit 9c08aea6a309.
This isn't a particularly significant leak, but given the fix is
trivial, we might as well backpatch to all branches where it applies, so
do that.
Author: Nathan Bossart <[email protected]>
Reported-by: Andres Freund <[email protected]>
Discussion: https://postgr.es/m/x4odfdlrwvsjawscnqsqjpofvauxslw7b4oyvxgt5owoyf4ysn@heafjusodrz7
Peter Geoghegan [Fri, 10 Oct 2025 18:52:25 +0000 (14:52 -0400)]
Remove overzealous _bt_killitems assertion.
An assertion in _bt_killitems expected the scan's currPos state to
contain a valid LSN, saved from when currPos's page was initially read.
The assertion failed to account for the fact that even logged relations
can have leaf pages with an invalid LSN when built with wal_level set to
"minimal". Remove the faulty assertion.
Oversight in commit e6eed40e (though note that the assertion was
backpatched to stable branches before 18 by commit 7c319f54).
Author: Peter Geoghegan <[email protected]>
Reported-By: Matthijs van der Vleuten <[email protected]>
Bug: #19082
Discussion: https://postgr.es/m/19082-628e62160dbbc1c1@postgresql.org
Backpatch-through: 13
Michael Paquier [Fri, 10 Oct 2025 02:51:45 +0000 (11:51 +0900)]
Fix two typos in xlogstats.h and xlogstats.c
Issue found while browsing this area of the code, introduced and
copy-pasted around by 2258e76f90bf.
Backpatch-through: 15
Michael Paquier [Fri, 10 Oct 2025 00:23:59 +0000 (09:23 +0900)]
Remove state.tmp when failing to save a replication slot
An error happening while a slot data is saved on disk in
SaveSlotToPath() could cause a state.tmp file (temporary file holding
the slot state data, renamed to its permanent name at the end of the
function) to remain around after it has been created. This temporary
file is created with O_EXCL, meaning that if an existing state.tmp is
found, its creation would fail. This would prevent the slot data from
being saved, requiring a manual intervention to remove state.tmp before
being able to save a slot again. Possible scenarios where this temporary
file could remain on disk are, for example, an ENOSPC case (no disk
space) while writing, syncing or renaming it. The bug reports point to
a write failure as the principal cause of the problems.
Using O_TRUNC has been argued back in 2019 as a potential solution to
discard any temporary file that could exist. This solution was rejected
as O_EXCL can also act as a safety measure when saving the slot state,
crash recovery offering cleanup guarantees post-crash. This commit uses
the alternative approach that has been suggested by Andres Freund back
in 2019. When the temporary state file cannot be written, synced,
closed or renamed (note: not when created!), an unlink() is used to
remove the temporary state file while holding the in-progress I/O
LWLock, so that any follow-up attempts to save a slot's data would not
choke on an existing file that remained around because of a previous
failure.
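A hedged sketch of this cleanup rule with illustrative names (not the
actual SaveSlotToPath() code): the temporary file is still created with
O_EXCL, but any failure after creation unlinks it so the next save
attempt does not trip over a leftover file.

    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <stdbool.h>
    #include <stddef.h>

    static bool
    save_state_atomically(const char *tmppath, const char *path,
                          const void *data, size_t len)
    {
        int     fd = open(tmppath, O_CREAT | O_EXCL | O_WRONLY, 0600);

        if (fd < 0)
            return false;           /* creation failed: nothing to clean up */

        if (write(fd, data, len) != (ssize_t) len || fsync(fd) != 0)
        {
            close(fd);
            unlink(tmppath);        /* don't leave a stale temp file behind */
            return false;
        }
        close(fd);

        if (rename(tmppath, path) != 0)
        {
            unlink(tmppath);
            return false;
        }
        return true;
    }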
This problem has been reported a few times across the years, going back
to 2019, but for some reason I have never come back to do something
about it and it has been forgotten. A recent report has reminded me
that this was still a problem.
Reported-by: Kevin K Biju <[email protected]>
Reported-by: Sergei Kornilov <[email protected]>
Reported-by: Grigory Smolkin <[email protected]>
Discussion: https://postgr.es/m/CAM45KeHa32soKL_G8Vk38CWvTBeOOXcsxAPAs7Jt7yPRf2mbVA@mail.gmail.com
Discussion: https://postgr.es/m/3559061693910326@qy4q4a6esb2lebnz.sas.yp-c.yandex.net
Discussion: https://postgr.es/m/08bbfab1-a61d-3750-fc18-4ab2c1aa7f09@postgrespro.ru
Backpatch-through: 13
Andres Freund [Thu, 9 Oct 2025 20:27:08 +0000 (16:27 -0400)]
bufmgr: Fix valgrind checking for buffers pinned in StrategyGetBuffer()
In 5e899859287 I made StrategyGetBuffer() pin buffers with a single CAS,
instead of using PinBuffer_Locked(). Unfortunately I missed that
PinBuffer_Locked() marked the page as defined for valgrind.
Fix this oversight by centralizing the valgrind initialization into
TrackNewBufferPin(), which also allows us to reduce the number of places doing
VALGRIND_MAKE_MEM_DEFINED.
Per buildfarm animal skink and Amit Langote.
Discussion: https://postgr.es/m/fvfmkr5kk4nyex56ejgxj3uzi63isfxovp2biecb4bspbjrze7@az2pljabhnff
Discussion: https://postgr.es/m/CA+HiwqGKJ6nEXEPQW7EpykVsEtzxp5-up_xhtcUAkWFtATVQvQ@mail.gmail.com
Michael Paquier [Thu, 9 Oct 2025 22:20:03 +0000 (07:20 +0900)]
test_bitmapset: Improve random function
test_random_operations() did not check the result returned by
bms_is_member() in its last phase, when checking that the contents of
the bitmap match what is expected. This was impacting the
reliability of the function and the coverage it could provide.
This commit improves the whole function, adding more checks based on
bms_is_member(), using a bitmap and a secondary array that tracks the
members added by random additions and deletions.
While at it, more comments are added to document the internals of the
function.
Reported-by: Ranier Vilela <[email protected]>
Author: Greg Burd <[email protected]>
Reviewed-by: Michael Paquier <[email protected]>
Reviewed-by: David Rowley <[email protected]>
Discussion: https://postgr.es/m/CAEudQAq_zOSA2NUQSWePTGV_=90Uw0WcXxGOWnN-vwF046OOqA@mail.gmail.com
Melanie Plageman [Thu, 9 Oct 2025 20:25:50 +0000 (16:25 -0400)]
Eliminate COPY FREEZE use of XLOG_HEAP2_VISIBLE
Instead of emitting a separate WAL XLOG_HEAP2_VISIBLE record for setting
bits in the VM, specify the VM block changes in the
XLOG_HEAP2_MULTI_INSERT record.
This halves the number of WAL records emitted by COPY FREEZE.
Author: Melanie Plageman <[email protected]>
Reviewed-by: Andres Freund <[email protected]>
Reviewed-by: Robert Haas <[email protected]>
Reviewed-by: Kirill Reshke <[email protected]>
Discussion: https://postgr.es/m/flat/CAAKRu_ZMw6Npd_qm2KM%2BFwQ3cMOMx1Dh3VMhp8-V7SOLxdK9-g%40mail.gmail.com
David Rowley [Thu, 9 Oct 2025 20:25:23 +0000 (09:25 +1300)]
Cleanup VACUUM option processing error messages
The processing of the PARALLEL option for VACUUM was not quite
following what the DefElem code had intended. defGetInt32() already has
code to handle missing parameters and returns a perfectly good error
message for when that happens.
Here we get rid of the ExecVacuum() error:
ERROR: parallel option requires a value between 0 and N
and leave defGetInt32() handle it, which will give:
ERROR: parallel requires an integer value
defGetInt32() was already handling the non-integer parameter case, so it
may as well handle the missing parameter case too.
Additionally, parameterize the option name to make translator work easier,
and also use errhint_internal() rather than errhint() for the
BUFFER_USAGE_LIMIT option since there isn't any work for a translator to
do for "%s".
Author: David Rowley <[email protected]>
Reviewed-by: Álvaro Herrera <[email protected]>
Reviewed-by: Masahiko Sawada <[email protected]>
Discussion: https://postgr.es/m/CAApHDvovH14tNWB+WvP6TSbfi7-=TysQ9h5tQ5AgavwyWRWKHA@mail.gmail.com
Tom Lane [Thu, 9 Oct 2025 19:37:42 +0000 (15:37 -0400)]
Clean up memory leakage that occurs in context callback functions.
An error context callback function might leak some memory into
ErrorContext, since those functions are run with ErrorContext as
current context. In the case where the elevel is ERROR, this is
no problem since the code level that catches the error should do
FlushErrorState to clean up, and that will reset ErrorContext.
However, if the elevel is less than ERROR then no such cleanup occurs.
In principle, repeated leaks while emitting log messages or client
notices could accumulate arbitrarily much leaked data, if no ERROR
occurs in the session.
To fix, let errfinish() perform an ErrorContext reset if it is
at the outermost error nesting level. (If it isn't, we'll delay
cleanup until the outermost nesting level is exited.)
The only actual leakage of this sort that I've been able to observe
within our regression tests was recently introduced by commit
f727b63e8. While it seems plausible that there are other such
leaks not reached in the regression tests, the lack of field
reports suggests that they're not a big problem. Accordingly,
I won't take the risk of back-patching this now. We can always
back-patch later if we get field reports of leaks.
Reported-by: Andres Freund <[email protected]>
Author: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/jngsjonyfscoont4tnwi2qoikatpd5hifsg373vmmjvugwiu6g@m6opxh7uisgd
Masahiko Sawada [Thu, 9 Oct 2025 17:59:27 +0000 (10:59 -0700)]
Fix access-to-already-freed-memory issue in pgoutput.
While pgoutput caches relation synchronization information in
RelationSyncCache that resides in CacheMemoryContext, each entry's
information (such as row filter expressions and column lists) is
stored in the entry's private memory context (entry_cxt in
RelationSyncEntry), which is a descendant memory context of the
decoding context. If a logical decoding invoked via SQL functions like
pg_logical_slot_get_binary_changes fails with an error, subsequent
logical decoding executions could access already-freed memory of the
entry's cache, resulting in a crash.
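A rough sketch of the affected usage pattern (slot and publication names
are hypothetical):
SELECT pg_create_logical_replication_slot('test_slot', 'pgoutput');
-- If a call like this fails partway through decoding, a subsequent call
-- in the same session could previously access already-freed cache memory.
SELECT * FROM pg_logical_slot_get_binary_changes('test_slot', NULL, NULL,
    'proto_version', '1', 'publication_names', 'my_pub');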
This change ensures that RelationSyncCache is cleaned up even in error
cases by using a memory context reset callback function.
Backpatch to 15, where entry_cxt was introduced for column filtering
and row filtering.
While the backbranches v13 and v14 have a similar issue where
RelationSyncCache persists even after an error when pgoutput is used
via SQL API, we decided not to backport this fix. This decision was
made because v13 is approaching its final minor release, and we won't
have a chance to fix any new issues that might arise. Additionally,
since using pgoutput via SQL API is not a common use case, the risk
outweighs the benefit. If we receive bug reports, we can consider
backporting the fixes then.
Author: vignesh C <[email protected]>
Co-authored-by: Masahiko Sawada <[email protected]>
Reviewed-by: Zhijie Hou <[email protected]>
Reviewed-by: Euler Taveira <[email protected]>
Discussion: https://postgr.es/m/CALDaNm0x-aCehgt8Bevs2cm=uhmwS28MvbYq1=s2Ekf0aDPkOA@mail.gmail.com
Backpatch-through: 15
Tom Lane [Thu, 9 Oct 2025 14:33:55 +0000 (10:33 -0400)]
Avoid uninitialized-variable warnings from older compilers.
Some of the buildfarm is still unhappy with WinGetFuncArgInPartition
even after 2273fa32b. While it seems to be just very old compilers,
we can suppress the warnings and arguably make the code more readable
by not initializing these variables till closer to where they are
used. While at it, make a couple of cosmetic comment improvements.
Richard Guo [Thu, 9 Oct 2025 08:50:54 +0000 (17:50 +0900)]
Fix comment in eager_aggregate.sql
The comment stated that eager aggregation is disabled by default,
which is no longer true. This patch removes that comment as well as
the related GUC set statement.
Reported-by: David Rowley <[email protected]>
Discussion: https://postgr.es/m/CAApHDvr4YWpiMR3RsgYwJWv-u8xoRqTAKRiYy9zUszjZOqG4Ug@mail.gmail.com
Richard Guo [Thu, 9 Oct 2025 08:49:20 +0000 (17:49 +0900)]
Remove unnecessary include of "utils/fmgroids.h"
In initsplan.c, no macros for built-in function OIDs are used, so this
include is unnecessary and can be removed. This was my oversight in
commit 8e1185910.
Discussion: https://postgr.es/m/CAMbWs4_-sag-cAKrLJ+X+5njL1=oudk=+KfLmsLZ5a2jckn=kg@mail.gmail.com
Michael Paquier [Thu, 9 Oct 2025 05:02:24 +0000 (14:02 +0900)]
Remove duplicated log related to slot creation in pg_createsubscriber
The creation of a replication slot in a specific database on a
publisher was logged twice, with the second log not mentioning the
database where the slot creation happened. This commit removes the
information logged after a slot has been successfully created, moving
the information about the publisher from the second to the first log.
Note that a failed slot creation is also logged, so there is no loss of
information.
Author: Peter Smith <[email protected]>
Reviewed-by: Chao Li <[email protected]>
Discussion: https://postgr.es/m/CAHut+Pv7qDvLbDgc9PQGhULT3rPXTxdu_=w+iW-kMs+zPADR+w@mail.gmail.com
Amit Kapila [Thu, 9 Oct 2025 03:48:54 +0000 (03:48 +0000)]
Add "ALL SEQUENCES" support to publications.
This patch adds support for the ALL SEQUENCES clause in publications,
enabling synchronization/replication of all sequences, which is useful
for upgrades.
Publications can now include all sequences via FOR ALL SEQUENCES.
psql enhancements:
\d shows publications for a given sequence.
\dRp indicates if a publication includes all sequences.
ALL SEQUENCES can be combined with ALL TABLES, but not with other options
like TABLE or TABLES IN SCHEMA. We can extend support for more granular
clauses in future.
The view pg_publication_sequences provides information about the mapping
between publications and sequences.
This patch enables publishing of sequences; subscriber-side support will
be added in upcoming patches.
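A minimal usage sketch (publication names hypothetical; the view's columns
are assumed to be analogous to pg_publication_tables):
CREATE PUBLICATION seq_pub FOR ALL SEQUENCES;
CREATE PUBLICATION everything_pub FOR ALL TABLES, ALL SEQUENCES;
SELECT * FROM pg_publication_sequences WHERE pubname = 'seq_pub';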
Author: vignesh C <[email protected]>
Author: Tomas Vondra <[email protected]>
Reviewed-by: shveta malik <[email protected]>
Reviewed-by: Dilip Kumar <[email protected]>
Reviewed-by: Peter Smith <[email protected]>
Reviewed-by: Hayato Kuroda <[email protected]>
Reviewed-by: Masahiko Sawada <[email protected]>
Reviewed-by: Nisha Moond <[email protected]>
Reviewed-by: Shlok Kyal <[email protected]>
Reviewed-by: Amit Kapila <[email protected]>
Discussion: https://postgr.es/m/CAA4eK1LC+KJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ@mail.gmail.com
Amit Langote [Thu, 9 Oct 2025 05:07:59 +0000 (01:07 -0400)]
Fix internal error from CollateExpr in SQL/JSON DEFAULT expressions
SQL/JSON functions such as JSON_VALUE could fail with "unrecognized
node type" errors when a DEFAULT clause contained an explicit COLLATE
expression. That happened because assign_collations_walker() could
invoke exprSetCollation() on a JsonBehavior expression whose DEFAULT
still contained a CollateExpr, which exprSetCollation() does not
handle.
For example:
SELECT JSON_VALUE('{"a":1}', '$.c' RETURNING text
DEFAULT 'A' COLLATE "C" ON EMPTY);
Fix by validating in transformJsonBehavior() that the DEFAULT
expression's collation matches the enclosing JSON expression’s
collation. In exprSetCollation(), replace the recursive call on the
JsonBehavior expression with an assertion that its collation already
matches the target, since the parser now enforces that condition.
Reported-by: Jian He <[email protected]>
Author: Jian He <[email protected]>
Reviewed-by: Amit Langote <[email protected]>
Discussion: https://postgr.es/m/CACJufxHVwYYSyiVQ6o+PsRX6zQ7rAFinh_fv1kCfTsT1xG4Zeg@mail.gmail.com
Backpatch-through: 17
David Rowley [Wed, 8 Oct 2025 23:38:33 +0000 (12:38 +1300)]
Make truncate_useless_pathkeys() consider WindowFuncs
truncate_useless_pathkeys() seems to have neglected to account for
PathKeys that might be useful for WindowClause evaluation. Modify it so
that it properly accounts for that.
Making this work required adjusting two things:
1. Change from checking query_pathkeys to check sort_pathkeys instead.
2. Add explicit check for window_pathkeys
For #1, query_pathkeys gets set in standard_qp_callback() according to the
sort order requirements for the first operation to be applied after the
join planner is finished, so this changes depending on which upper
planner operations a particular query needs. If the query has window
functions and no GROUP BY, then query_pathkeys gets set to
window_pathkeys. Before this change, this meant PathKeys useful for the
ORDER BY were not accounted for in queries with window functions.
Because of #1, #2 is now required so that we explicitly check to ensure
we don't truncate away PathKeys useful for window functions.
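A hedged example of the query shape this affects (table and columns are
hypothetical): the pathkeys on (b) serve the window clause, those on (a)
serve the final ORDER BY, and neither should be truncated away now:
SELECT a, b, sum(c) OVER (ORDER BY b) AS running_sum
FROM win_tab
ORDER BY a;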
Author: David Rowley <[email protected]>
Discussion: https://postgr.es/m/CAApHDvrj3HTKmXoLMbUjTO=_MNMxM=cnuCSyBKidAVibmYPnrg@mail.gmail.com
Andres Freund [Wed, 8 Oct 2025 21:01:10 +0000 (17:01 -0400)]
bufmgr: Don't lock buffer header in StrategyGetBuffer()
Previously StrategyGetBuffer() acquired the buffer header spinlock for every
buffer, whether it was reusable or not. If reusable, it'd be returned, with
the lock held, to GetVictimBuffer(), which then would pin the buffer with
PinBuffer_Locked(). That's somewhat violating the spirit of the guidelines for
holding spinlocks (i.e. that they are only held for a few lines of consecutive
code) and necessitates using PinBuffer_Locked(), which scales worse than
PinBuffer() due to holding the spinlock. This alone makes it worth changing
the code.
However, the main reason to change this is that a future commit will make
PinBuffer_Locked() slower (due to making UnlockBufHdr() slower), to gain
scalability for the much more common case of pinning a pre-existing buffer. By
pinning the buffer with a single atomic operation, iff the buffer is reusable,
we avoid any potential regression for miss-heavy workloads. There are
strictly fewer atomic operations for each potential buffer after this change.
The price for this improvement is that freelist.c needs two CAS loops and
needs to be able to set up the resource accounting for pinned buffers. The
latter is achieved by exposing a new function for that purpose from bufmgr.c,
that seems better than exposing the entire private refcount infrastructure.
The improvement seems worth the complexity.
Reviewed-by: Robert Haas <[email protected]>
Reviewed-by: Matthias van de Meent <[email protected]>
Discussion: https://postgr.es/m/fvfmkr5kk4nyex56ejgxj3uzi63isfxovp2biecb4bspbjrze7@az2pljabhnff
Andres Freund [Wed, 8 Oct 2025 20:06:19 +0000 (16:06 -0400)]
bufmgr: fewer calls to BufferDescriptorGetContentLock
We're planning to merge buffer content locks into BufferDesc.state. To reduce
the size of that patch, centralize calls to BufferDescriptorGetContentLock().
The biggest part of the change is in assertions, by introducing
BufferIsLockedByMe[InMode]() (and removing BufferIsExclusiveLocked()). This
seems like an improvement even without the aforementioned plans.
Additionally replace some direct calls to LWLockAcquire() with calls to
LockBuffer().
Reviewed-by: Matthias van de Meent <[email protected]>
Discussion: https://postgr.es/m/fvfmkr5kk4nyex56ejgxj3uzi63isfxovp2biecb4bspbjrze7@az2pljabhnff
Andres Freund [Wed, 8 Oct 2025 18:34:30 +0000 (14:34 -0400)]
bufmgr: Fix signedness of mask variable in BufferSync()
BM_PERMANENT is defined as 1U<<31, which is a negative number when interpreted
as a signed integer. Unfortunately the mask variable in BufferSync() was
signed. This has been wrong for a long time, but failed to fail, due to
integer conversion rules.
However, in an upcoming patch the width of the state variable will be
increased, with the wrong signedness leading to never flushing permanent
buffers - luckily caught in a test.
It seems better to fix this separately, instead of doing so as part of a
large, otherwise mechanical, patch.
Reviewed-by: Matthias van de Meent <[email protected]>
Discussion: https://postgr.es/m/fvfmkr5kk4nyex56ejgxj3uzi63isfxovp2biecb4bspbjrze7@az2pljabhnff
Andres Freund [Wed, 8 Oct 2025 18:34:30 +0000 (14:34 -0400)]
bufmgr: Introduce FlushUnlockedBuffer
There were several copies of code locking a buffer, flushing its contents, and
unlocking the buffer. It seems worth centralizing that into a helper function.
Reviewed-by: Matthias van de Meent <[email protected]>
Discussion: https://postgr.es/m/fvfmkr5kk4nyex56ejgxj3uzi63isfxovp2biecb4bspbjrze7@az2pljabhnff
Andres Freund [Wed, 8 Oct 2025 16:57:52 +0000 (12:57 -0400)]
Improve ReadRecentBuffer() scalability
While testing a new potential use for ReadRecentBuffer(), Andres
reported that it scales badly when called concurrently for the same
buffer by many backends. Instead of a naive (but wrong) coding with
PinBuffer(), it used the spinlock, so that it could be careful to pin
only if the buffer was valid and holding the expected block, to avoid
breaking invariants in eg GetVictimBuffer(). Unfortunately that made it
less scalable than PinBuffer(), which uses compare-exchange instead.
We can fix that by giving PinBuffer() a new skip_if_not_valid mode that
doesn't pin invalid buffers. It might occasionally skip when it
shouldn't due to the unlocked read of the header flags, but that's
unlikely and perfectly acceptable for an opportunistic optimisation
routine, and it can only succeed when it really should due to the
compare-exchange loop.
Note that this fixes ReadRecentBuffer()'s failure to bump the usage
count. While this could be seen as a bug, there currently aren't cases
affected by this in core, so it doesn't seem worth backpatching that portion.
Author: Thomas Munro <[email protected]>
Reported-by: Andres Freund <[email protected]>
Reviewed-by: Andres Freund <[email protected]>
Reviewed-by: Matthias van de Meent <[email protected]>
Discussion: https://postgr.es/m/20230627020546.t6z4tntmj7wmjrfh%40awork3.anarazel.de
Discussion: https://postgr.es/m/fvfmkr5kk4nyex56ejgxj3uzi63isfxovp2biecb4bspbjrze7@az2pljabhnff
Masahiko Sawada [Wed, 8 Oct 2025 17:05:04 +0000 (10:05 -0700)]
Add mem_exceeded_count column to pg_stat_replication_slots.
This commit introduces a new column mem_exceeded_count to the
pg_stat_replication_slots view. This counter tracks how often the
memory used by logical decoding exceeds the logical_decoding_work_mem
limit. The new statistic helps users determine whether exceeding the
logical_decoding_work_mem limit is a rare occurrence or a frequent
issue, information that wasn't available through existing statistics.
Bumps catversion.
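For example, to see how often a slot has exceeded the limit relative to
its spill activity (slot name hypothetical):
SELECT slot_name, mem_exceeded_count, spill_count, total_txns
FROM pg_stat_replication_slots
WHERE slot_name = 'my_slot';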
Author: Bertrand Drouvot <[email protected]>
Reviewed-by: Masahiko Sawada <[email protected]>
Reviewed-by: Amit Kapila <[email protected]>
Reviewed-by: shveta malik <[email protected]>
Reviewed-by: Ashutosh Bapat <[email protected]>
Reviewed-by: Chao Li <[email protected]>
Discussion: https://postgr.es/m/978D21E8-9D3B-40EA-A4B1-F87BABE7868C@yesql.se
Tom Lane [Wed, 8 Oct 2025 16:19:53 +0000 (12:19 -0400)]
Cleanup NAN code in float.h, too.
In the same spirit as 3bf905692, assume that all compilers we still
support provide the NAN macro, and get rid of workarounds for that.
The C standard allows implementations to omit NAN if the underlying
float arithmetic lacks quiet (non-signaling) NaNs. However, we've
required that feature for years: the workarounds only supported
lack of the macro, not lack of the functionality. I put in a
compile-time #error if there's no macro, just for clarity.
Also fix up the copies of these functions in ecpglib, and leave
a breadcrumb for the next hacker who touches them.
History of the hacks being removed here can be found in commits
1bc2d544b, 4d17a2146, cec8394b5.
Author: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/1952095.1759764279@sss.pgh.pa.us
Robert Haas [Wed, 8 Oct 2025 13:07:49 +0000 (09:07 -0400)]
Add extension_state member to PlannedStmt.
Extensions can stash data computed at plan time into this list using
planner_shutdown_hook (or perhaps other mechanisms) and then access
it from any code that has access to the PlannedStmt (such as explain
hooks), allowing for extensible debugging and instrumentation of
plans.
Reviewed-by: Andrei Lepikhov <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Discussion: http://postgr.es/m/CA+TgmoYWKHU2hKr62Toyzh-kTDEnMDeLw7gkOOnjL-TnOUq0kQ@mail.gmail.com
Robert Haas [Wed, 8 Oct 2025 13:05:38 +0000 (09:05 -0400)]
Add planner_setup_hook and planner_shutdown_hook.
These hooks allow plugins to get control at the earliest point at
which the PlannerGlobal object is fully initialized, and then just
before it gets destroyed. This is useful in combination with the
extendable plan state facilities (see extendplan.h) and perhaps for
other purposes as well.
Reviewed-by: Andrei Lepikhov <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Discussion: http://postgr.es/m/CA+TgmoYWKHU2hKr62Toyzh-kTDEnMDeLw7gkOOnjL-TnOUq0kQ@mail.gmail.com
Robert Haas [Wed, 8 Oct 2025 12:33:29 +0000 (08:33 -0400)]
Add ExplainState argument to pg_plan_query() and planner().
This allows extensions to have access to any data they've stored
in the ExplainState during planning. Unfortunately, it won't help
when EXPLAIN EXECUTE is used, but since that case is less common,
this still seems like an improvement.
Since planner() has quite a few arguments now, also add some
documentation of those arguments and the return value.
Author: Robert Haas <[email protected]>
Co-authored-by: Tom Lane <[email protected]>
Reviewed-by: Andrei Lepikhov <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Discussion: http://postgr.es/m/CA+TgmoYWKHU2hKr62Toyzh-kTDEnMDeLw7gkOOnjL-TnOUq0kQ@mail.gmail.com
Richard Guo [Wed, 8 Oct 2025 08:04:23 +0000 (17:04 +0900)]
Implement Eager Aggregation
Eager aggregation is a query optimization technique that partially
pushes aggregation past a join, and finalizes it once all the
relations are joined. Eager aggregation may reduce the number of
input rows to the join and thus could result in a better overall plan.
In the current planner architecture, the separation between the
scan/join planning phase and the post-scan/join phase means that
aggregation steps are not visible when constructing the join tree,
limiting the planner's ability to exploit aggregation-aware
optimizations. To implement eager aggregation, we collect information
about aggregate functions in the targetlist and HAVING clause, along
with grouping expressions from the GROUP BY clause, and store it in
the PlannerInfo node. During the scan/join planning phase, this
information is used to evaluate each base or join relation to
determine whether eager aggregation can be applied. If applicable, we
create a separate RelOptInfo, referred to as a grouped relation, to
represent the partially-aggregated version of the relation and
generate grouped paths for it.
Grouped relation paths can be generated in two ways. The first method
involves adding sorted and hashed partial aggregation paths on top of
the non-grouped paths. To limit planning time, we only consider the
cheapest or suitably-sorted non-grouped paths in this step.
Alternatively, grouped paths can be generated by joining a grouped
relation with a non-grouped relation. Joining two grouped relations
is currently not supported.
To further limit planning time, we currently adopt a strategy where
partial aggregation is pushed only to the lowest feasible level in the
join tree where it provides a significant reduction in row count.
This strategy also helps ensure that all grouped paths for the same
grouped relation produce the same set of rows, which is important to
support a fundamental assumption of the planner.
For the partial aggregation that is pushed down to a non-aggregated
relation, we need to consider all expressions from this relation that
are involved in upper join clauses and include them in the grouping
keys, using compatible operators. This is essential to ensure that an
aggregated row from the partial aggregation matches the other side of
the join if and only if each row in the partial group does. This
ensures that all rows within the same partial group share the same
"destiny", which is crucial for maintaining correctness.
One restriction is that we cannot push partial aggregation down to a
relation that is in the nullable side of an outer join, because the
NULL-extended rows produced by the outer join would not be available
when we perform the partial aggregation, while with a
non-eager-aggregation plan these rows are available for the top-level
aggregation. Pushing partial aggregation in this case may result in
the rows being grouped differently than expected, or produce incorrect
values from the aggregate functions.
If we have generated a grouped relation for the topmost join relation,
we finalize its paths at the end. The final paths will compete in the
usual way with paths built from regular planning.
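As an informal illustration (schema hypothetical), a grouped query over a
join may now show a partial aggregate below the join that is finalized
above it, roughly along these lines:
EXPLAIN (COSTS OFF)
SELECT o.customer_id, sum(l.amount)
FROM orders o JOIN order_lines l ON l.order_id = o.order_id
GROUP BY o.customer_id;
-- With eager aggregation applied, the plan contains a partial aggregate
-- on order_lines (grouped by order_id) beneath the join, with a final
-- aggregation step at the top.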
The patch was originally proposed by Antonin Houska in 2017. This
commit reworks various important aspects and rewrites most of the
current code. However, the original patch and reviews were very
useful.
Author: Richard Guo <[email protected]>
Author: Antonin Houska <[email protected]> (in an older version)
Reviewed-by: Robert Haas <[email protected]>
Reviewed-by: Jian He <[email protected]>
Reviewed-by: Tender Wang <[email protected]>
Reviewed-by: Matheus Alcantara <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Reviewed-by: David Rowley <[email protected]>
Reviewed-by: Tomas Vondra <[email protected]> (in an older version)
Reviewed-by: Andy Fan <[email protected]> (in an older version)
Reviewed-by: Ashutosh Bapat <[email protected]> (in an older version)
Discussion: https://postgr.es/m/CAMbWs48jzLrPt1J_00ZcPZXWUQKawQOFE8ROc-ADiYqsqrpBNw@mail.gmail.com
Richard Guo [Wed, 8 Oct 2025 08:01:48 +0000 (17:01 +0900)]
Allow negative aggtransspace to indicate unbounded state size
This patch reuses the existing aggtransspace in pg_aggregate to
signal that an aggregate's transition state can grow unboundedly. If
aggtransspace is set to a negative value, it now indicates that the
transition state may consume unpredictable or large amounts of memory,
such as in aggregates like array_agg or string_agg that accumulate
input rows.
This information can be used by the planner to avoid applying
memory-sensitive optimizations (e.g., eager aggregation) when there is
a risk of excessive memory usage during partial aggregation.
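A hedged sketch of how a user-defined aggregate might mark itself this way
(the aggregate itself is hypothetical; SSPACE maps to aggtransspace):
CREATE AGGREGATE array_accum(anycompatible) (
    SFUNC    = array_append,
    STYPE    = anycompatiblearray,
    INITCOND = '{}',
    SSPACE   = -1   -- transition state can grow without bound
);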
Bump catalog version.
Per idea from Robert Haas, though applied differently than originally
suggested.
Discussion: https://postgr.es/m/CA+TgmoYbkvYwLa+1vOP7RDY7kO2=A7rppoPusoRXe44VDOGBPg@mail.gmail.com
Michael Paquier [Wed, 8 Oct 2025 04:57:04 +0000 (13:57 +0900)]
Improve description of some WAL records for GIN
The following information is added in the description of some GIN
records:
- In INSERT_LISTPAGE, the number of tuples and the right link block.
- In UPDATE_META_PAGE, the number of tuples, the previous tail block,
and the right link block.
- In SPLIT, the left and right children blocks.
Author: Kirill Reshke <[email protected]>
Reviewed-by: Michael Paquier <[email protected]>
Reviewed-by: Andrey Borodin <[email protected]>
Discussion: https://postgr.es/m/CALdSSPgnAt5L=D_xGXRXLYO5FK1H31_eYEESxdU1n-r4g+6GqA@mail.gmail.com
Michael Paquier [Wed, 8 Oct 2025 03:43:40 +0000 (12:43 +0900)]
Add stats_reset to pg_stat_user_functions
It is possible to call pg_stat_reset_single_function_counters() for a
single function, but the reset time was missing the system view showing
its statistics. Like all the fields of pg_stat_user_functions, the GUC
track_functions needs to be enabled to show the statistics about
function executions.
Bump catalog version.
Bump PGSTAT_FILE_FORMAT_ID, as a result of the new field added to
PgStat_StatFuncEntry.
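A quick way to exercise the new column (function name hypothetical):
SET track_functions = 'all';
SELECT funcname, calls, stats_reset FROM pg_stat_user_functions;
SELECT pg_stat_reset_single_function_counters('my_func'::regproc);
-- stats_reset for that function now shows the time of the reset.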
Author: Bertrand Drouvot <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Amit Kapila [Wed, 8 Oct 2025 03:17:05 +0000 (03:17 +0000)]
Fix typo in function header comment.
Reported-by: Robert Haas <[email protected]>
Discussion: https://postgr.es/m/CA+TgmoZYh_nw-2j_Fi9y6ZAvrpN+W1aSOFNM7Rus2Q-zTkCsQw@mail.gmail.com
Tatsuo Ishii [Wed, 8 Oct 2025 00:26:49 +0000 (09:26 +0900)]
Fix Coverity issues reported in commit 25a30bbd423.
Fix several issues pointed out by Coverity (reported by Tom Lane).
- In row_is_in_frame(), the return value of window_gettupleslot() was not
checked.
- WinGetFuncArgInPartition() tried to dereference the "isout" pointer
even if it could be NULL in some places.
Besides the issues, I also fixed a compiler warning reported by Álvaro
Herrera.
Moreover, in WinGetFuncArgInPartition() refactor the do...while loop so
that the code inside the loop is simpler. Also simplify the case when
abs_pos < 0.
Author: Tatsuo Ishii <[email protected]>
Reviewed-by: Paul Ramsey <[email protected]>
Reported-by: Tom Lane <[email protected]>
Reported-by: Álvaro Herrera <[email protected]>
Discussion: https://postgr.es/m/1686755.1759679957%40sss.pgh.pa.us
Discussion: https://postgr.es/m/202510051612.gw67jlc2iqpw%40alvherre.pgsql
David Rowley [Tue, 7 Oct 2025 23:07:17 +0000 (12:07 +1300)]
Cleanup INFINITY code in float.h
The INFINITY macro is always defined per the C99 standard, so this should
mean we can now get rid of the workaround code for when that macro isn't
defined.
Also, delete the (now unneeded) #pragma code which was disabling a
compiler warning in MSVC. There was a comment explaining why the #pragma
was placed outside the function body to work around a MSVC compiler bug,
but the link explaining that was dead, as reported by jian he.
Author: David Rowley <[email protected]>
Reported-by: jian he <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/CACJufxGARYETnNwtCK7QC0zE_7gq-tfN0mME=gT5rTNtC=VSHQ@mail.gmail.com
Robert Haas [Wed, 20 Aug 2025 19:10:52 +0000 (15:10 -0400)]
Remove PlannerInfo's join_search_private member.
Instead, use the new mechanism that allows planner extensions to store
private state inside a PlannerInfo, treating GEQO as an in-core planner
extension. This is a useful test of the new facility, and also buys
back a few bytes of storage.
To make this work, we must remove innerrel_is_unique_ext's hack of
testing whether join_search_private is set as a proxy for whether
the join search might be retried. Add a flag that extensions can
use to explicitly signal their intentions instead.
Reviewed-by: Andrei Lepikhov <[email protected]>
Reviewed-by: Melanie Plageman <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Discussion: http://postgr.es/m/CA+TgmoYWKHU2hKr62Toyzh-kTDEnMDeLw7gkOOnjL-TnOUq0kQ@mail.gmail.com
Robert Haas [Tue, 7 Oct 2025 16:09:30 +0000 (12:09 -0400)]
Allow private state in certain planner data structures.
Extensions that make extensive use of planner hooks may want to
coordinate their efforts, for example to avoid duplicate computation,
but that's currently difficult because there's no really good way to
pass data between different hooks.
To make that easier, allow for storage of extension-managed private
state in PlannerGlobal, PlannerInfo, and RelOptInfo, along very
similar lines to what we have permitted for ExplainState since commit
c65bc2e1d14a2d4daed7c1921ac518f2c5ac3d17.
Reviewed-by: Andrei Lepikhov <[email protected]>
Reviewed-by: Melanie Plageman <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Discussion: http://postgr.es/m/CA+TgmoYWKHU2hKr62Toyzh-kTDEnMDeLw7gkOOnjL-TnOUq0kQ@mail.gmail.com
Tom Lane [Tue, 7 Oct 2025 15:47:27 +0000 (11:47 -0400)]
Adjust new TAP test to work on macOS.
Seems Apple's version of "wc -l" puts spaces before the number.
(I wonder why the cfbot didn't find this.) While here, make
the failure case log what it got, to aid debugging future issues.
Per buildfarm.
Tom Lane [Tue, 7 Oct 2025 14:57:56 +0000 (10:57 -0400)]
Improve psql's ability to select pager mode accurately.
We try to use the pager only when more than a screenful's worth of
data is to be printed. However, the code in print.c that's concerned
with counting the number of lines that will be needed missed a lot of
edge cases:
* While plain aligned mode accounted for embedded newlines in column
headers and table cells, unaligned and vertical output modes did not.
* In particular, since vertical mode repeats the headers for each
record, we need to account for embedded newlines in the headers for
each record.
* Multi-line table titles were not accounted for.
* tuples_only mode (where headers aren't printed) wasn't accounted
for.
* Footers were accounted for as one line per footer, again missing
the possibility of multi-line footers. (In some cases such as
"\d+" on a view, there can be many lines in a footer.) Also,
we failed to account for the default footer.
To fix, move the entire responsibility for counting lines into
IsPagerNeeded (or actually, into a new subroutine count_table_lines),
and then expand the logic as appropriate. Also restructure to make it
perhaps a bit easier to follow. It's still only completely accurate
for ALIGNED/WRAPPED/UNALIGNED formats, but the other formats are not
typically used with interactive output.
Arrange to not run count_table_lines at all unless we will use
its result, and teach it to quit early as soon as it's proven
that the output is long enough to require use of the pager.
When dealing with large tables this should save a noticeable
amount of time, since pg_wcssize() isn't exactly cheap.
In passing, move the "flog" output step to the bottom of printTable(),
rather than running it when we've already opened the pager in some
modes. In principle it shouldn't interfere with the pager because
flog should always point to a non-interactive file; but it seems silly
to risk any interference, especially when the existing positioning
seems to have been chosen with the aid of a dartboard.
Also add a TAP test to exercise pager mode. Up to now, we have had
zero test coverage of these code paths, because they aren't reached
unless isatty(stdout). We do have the test infrastructure to improve
that situation, though. Following the lead of 010_tab_completion.pl,
set up an interactive psql and feed it some test cases. To detect
whether it really did invoke the pager, point PSQL_PAGER to "wc -l".
The test is skipped if that utility isn't available.
Author: Erik Wienhold <[email protected]>
Test-authored-by: Tom Lane <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/2dd2430f-dd20-4c89-97fd-242616a3d768@ewie.name
Robert Haas [Tue, 7 Oct 2025 13:18:54 +0000 (09:18 -0400)]
Assign each subquery a unique name prior to planning it.
Previously, subqueries were given names only after they were planned,
which makes it difficult to use information from a previous execution of
the query to guide future planning. If, for example, you knew something
about how you want "InitPlan 2" to be planned, you wouldn't know whether
the subquery you're currently planning will end up being "InitPlan 2"
until after you've finished planning it, by which point it's too late to
use the information that you had.
To fix this, assign each subplan a unique name before we begin planning
it. To improve consistency, use textual names for all subplans, rather
than, as we did previously, a mix of numbers (such as "InitPlan 1") and
names (such as "CTE foo"), and make sure that the same name is never
assigned more than once.
We adopt the somewhat arbitrary convention of using the type of sublink
to set the plan name; for example, a query that previously had two
expression sublinks shown as InitPlan 2 and InitPlan 1 will now end up
named expr_1 and expr_2. Because names are assigned before rather than
after planning, some of the regression test outputs show the numerical
part of the name switching positions: what was previously SubPlan 2 was
actually the first one encountered, but we finished planning it later.
We assign names even to subqueries that aren't shown as such within the
EXPLAIN output. These include subqueries that are a FROM clause item or
a branch of a set operation, rather than something that will be turned
into an InitPlan or SubPlan. The purpose of this is to make sure that,
below the topmost query level, there's always a name for each subquery
that is stable from one planning cycle to the next (assuming no changes
to the query or the database schema).
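A rough illustration (labels depend on the query, so treat the output as a
sketch): both sublinks below are expression sublinks, so their plans now
carry names like expr_1 and expr_2 rather than bare InitPlan numbers.
EXPLAIN (COSTS OFF)
SELECT * FROM big_tab
WHERE a = (SELECT min(a) FROM big_tab)
  AND b = (SELECT max(b) FROM big_tab);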
Author: Robert Haas <[email protected]>
Co-authored-by: Tom Lane <[email protected]>
Reviewed-by: Alexandra Wang <[email protected]>
Reviewed-by: Richard Guo <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Reviewed-by: Junwang Zhao <[email protected]>
Discussion: http://postgr.es/m/3641043.1758751399@sss.pgh.pa.us
Daniel Gustafsson [Tue, 7 Oct 2025 13:02:20 +0000 (15:02 +0200)]
doc: Add missing parenthesis in pg_stat_progress_analyze docs
Author: Shinya Kato <[email protected]>
Discussion: https://postgr.es/m/CAOzEurRgpAh9dsbEM88FPOhNaV_PkdL6p_9MJatcrNf9wXw1nw@mail.gmail.com
Backpatch-through: 18
Álvaro Herrera [Tue, 7 Oct 2025 08:45:57 +0000 (10:45 +0200)]
Fix compile of src/tutorial/funcs.c
I broke this with recent #include removals. Fix by adding an explicit #include.
Reported-by: Devrim Gündüz <[email protected]>
Discussion: https://postgr.es/m/5e2c2d7c44434f3f0af7523864b27fe4fb590902[email protected]