Andrew Dunstan [Thu, 10 Apr 2025 16:11:36 +0000 (12:11 -0400)]
Further cleanup for directory creation on pg_dump/pg_dumpall
Instead of two separate (and different) implementations, refactor to use
a single common routine.
Along the way, remove use of a hardcoded file permissions constant in
favor of the common project setting for directory creation.
Author: Mahendra Singh Thalor <[email protected]>
Discussion: https://postgr.es/m/CAKYtNApihL8X1h7XO-zOjznc8Ca66Aevgvhc9zOTh6DBh2iaeA@mail.gmail.com
Amit Kapila [Thu, 10 Apr 2025 07:44:40 +0000 (13:14 +0530)]
Fix data loss in logical replication.
Data loss can happen when DDLs like ALTER PUBLICATION ... ADD TABLE ...
or ALTER TYPE ..., which don't take a strong lock on the table, run
concurrently with DMLs on the tables involved in the DDL. This happens
because logical decoding doesn't distribute invalidations to concurrent
transactions, and those transactions use stale cache data to decode the
changes. The problem becomes bigger because we keep using the stale cache
even after those in-progress transactions have finished, and skip the
changes required to be sent to the client.
This commit fixes the issue by distributing invalidation messages from
catalog-modifying transactions to all concurrent in-progress transactions.
This allows the necessary rebuild of the catalog cache when decoding new
changes after concurrent DDL.
We observed performance regression primarily during frequent execution of
*publication DDL* statements that modify the published tables. The
regression is minor or nearly nonexistent for DDLs that do not affect the
published tables or occur infrequently, making this a worthwhile cost to
resolve a longstanding data loss issue.
An alternative approach considered was to take a strong lock on each
affected table during publication modification. However, this would only
address issues related to publication DDLs (but not ALTER TYPE ...) and
would require locking every relation in the database for publications
created as FOR ALL TABLES, which is impractical.
The bug exists in all supported branches, but we are back-patching only
down to 14. The fix for 13 requires somewhat bigger changes than this
one, so the fix for that branch is still under discussion.
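For illustration, a rough sketch of the hazard (publication and table
names are hypothetical):

    -- Session 1: a transaction starts changing a not-yet-published table.
    BEGIN;
    INSERT INTO tbl VALUES (1);

    -- Session 2: publish the table; this takes no strong lock on tbl,
    -- so it does not wait for session 1.
    ALTER PUBLICATION pub ADD TABLE tbl;

    -- Session 1: without the invalidation being distributed, a decoder
    -- that built its cache before the ALTER may skip this change.
    INSERT INTO tbl VALUES (2);
    COMMIT;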
Reported-by: hubert depesz lubaczewski <[email protected]>
Reported-by: Tomas Vondra <[email protected]>
Author: Shlok Kyal <[email protected]>
Author: Hayato Kuroda <[email protected]>
Reviewed-by: Zhijie Hou <[email protected]>
Reviewed-by: Masahiko Sawada <[email protected]>
Reviewed-by: Amit Kapila <[email protected]>
Tested-by: Benoit Lobréau <[email protected]>
Backpatch-through: 14
Discussion: https://postgr.es/m/[email protected]
Discussion: https://postgr.es/m/CAD21AoAenVqiMjpN-PvGHL1N9DWnHSq673bfgr6phmBUzx=kLQ@mail.gmail.com
Peter Eisentraut [Thu, 10 Apr 2025 06:04:35 +0000 (08:04 +0200)]
Fix incorrect format placeholders
for commits 8f427187db7, 6ee3b91bad2
David Rowley [Thu, 10 Apr 2025 05:33:58 +0000 (17:33 +1200)]
Update wording in optimizer/README for EquivalenceClasses
d69d45a5a changed how em_is_child members are stored in
EquivalenceClasses. Children are no longer stored in the ec_members
list. optimizer/README mentioned that most operations "should ignore
child members", but that felt a little untrue now since child members
are now stored in a separate place, they simply won't be found by the
normal means of looking (a foreach loop over ec_members), and if you don't
find them, there's technically no need to "ignore" them.
Here we tweak the wording slightly to reflect the new storage location
for child members.
Reported-by: Amit Langote <[email protected]>
Author: Amit Langote <[email protected]>
Author: David Rowley <[email protected]>
Discussion: https://postgr.es/m/CA+HiwqE8v=EuAP_3F_A2xn8zWx+nG_etW_Fe_DvKO-Fkx=+DdQ@mail.gmail.com
Amit Kapila [Thu, 10 Apr 2025 04:53:01 +0000 (10:23 +0530)]
Cosmetic fixes for pg_createsubscriber's -all option.
Author: Peter Smith <[email protected]>
Reviewed-by: Amit Kapila <[email protected]>
Discussion: https://postgr.es/m/CAHut+PsmSCQ-ENSDQ0YOUcsgzT=GG-E9jyXBvxd51A_dMXH5XA@mail.gmail.com
Tomas Vondra [Wed, 9 Apr 2025 17:34:27 +0000 (19:34 +0200)]
ci: Check for missing dependencies in meson builds
Extends the Linux and Windows meson builds with a check for missing
dependencies by running
ninja -t missingdeps
after the build. This highlights unintended dependencies.
Reviewed-by: Andres Freund <[email protected]>
https://postgr.es/m/CALdSSPi5fj0a7UG7Fmw2cUD1uWuckU_e8dJ+6x-bJEokcSXzqA@mail.gmail.com
Tomas Vondra [Wed, 9 Apr 2025 17:32:17 +0000 (19:32 +0200)]
Cleanup of pg_numa.c
This moves/renames some of the functions defined in pg_numa.c:
* pg_numa_get_pagesize() is renamed to pg_get_shmem_pagesize(), and
moved to src/backend/storage/ipc/shmem.c. The new name better reflects
that the page size is not related to NUMA, and it's specifically about
the page size used for the main shared memory segment.
* move pg_numa_available() to src/backend/storage/ipc/shmem.c, i.e. into
the backend (which is more appropriate for functions callable from SQL).
While at it, improve the comment to explain what page size it returns.
* remove unnecessary includes from src/port/pg_numa.c that were adding
unnecessary dependencies (src/port should be suitable for frontend).
These were either leftovers or unnecessary thanks to the other changes
in this commit.
This eliminates unnecessary dependencies on backend symbols, which we
don't want in src/port.
Reported-by: Kirill Reshke <[email protected]>
Reviewed-by: Andres Freund <[email protected]>
https://postgr.es/m/CALdSSPi5fj0a7UG7Fmw2cUD1uWuckU_e8dJ+6x-bJEokcSXzqA@mail.gmail.com
Nathan Bossart [Wed, 9 Apr 2025 19:27:08 +0000 (14:27 -0500)]
pg_upgrade: Mention that we preserve database OIDs in a comment.
Oversight in commit aa01051418.
Reported-by: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/4055696.1744134682%40sss.pgh.pa.us
Tom Lane [Wed, 9 Apr 2025 16:28:34 +0000 (12:28 -0400)]
Fix performance issue in deadlock-parallel isolation test.
With debug_discard_caches = 1, the runtime of this test script
increased by about a factor of 10 after commit 0dca5d68d. That's
causing some of our buildfarm animals to fail with a timeout.
The reason for the increased time is that now we are re-planning
some intentionally-non-inlineable SQL functions on every execution,
where the previous coding held onto the original plans throughout
the outer query. The previous behavior was arguably quite buggy,
so I don't think 0dca5d68d deserves blame here. But we would
like this test script to not take so long.
To fix, instead of forcing a "parallel safe" label via a
non-inlineable SQL function, apply it directly to the advisory-lock
functions by making internal-language aliases for them. A small
problem is that the advisory-lock functions return void but this
test would really like them to return integer 1. I cheated here by
declaring the aliases as returning "int". That's perhaps undue
familiarity with the implementation of PG_RETURN_VOID(), but that
hasn't changed in twenty years and is unlikely to do so in the next
twenty. That gets us an integer 0 result, and then an inline-able
wrapper to convert that to an integer 1 allows the rest of the script
to remain unchanged.
For me, this reduces the runtime with debug_discard_caches = 1
by about 100x, making the test comfortably faster than before
instead of slower.
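The aliasing trick looks roughly like this (a sketch with hypothetical
function names, not the exact ones used in the test):

    -- Internal-language alias declared as returning int; PG_RETURN_VOID()
    -- actually yields 0, which we rely on below.
    CREATE FUNCTION lock_share(bigint) RETURNS int
      LANGUAGE internal PARALLEL SAFE
      AS 'pg_advisory_xact_lock_shared_int8';

    -- Inline-able SQL wrapper turning that 0 into the 1 the test expects.
    CREATE FUNCTION lock_share_one(bigint) RETURNS int
      LANGUAGE sql PARALLEL SAFE
      AS 'SELECT lock_share($1) + 1';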
Discussion: https://postgr.es/m/136163.1744179562@sss.pgh.pa.us
Noah Misch [Wed, 9 Apr 2025 14:23:39 +0000 (07:23 -0700)]
Fix test races between syscache-update-pruned.spec and autovacuum.
This spec fails ~3% of my Valgrind runs, and the spec has failed on Valgrind
buildfarm member skink at a similar rate. Two problems contributed to that:
- A competing buffer pin triggered VACUUM's lazy_scan_noprune() path, causing
"tuples missed: 1 dead from 1 pages not removed due to cleanup lock
contention". FREEZE fixes that.
- The spec ran lazy VACUUM immediately after VACUUM FULL. The spec implicitly
assumed lazy VACUUM prunes the one tuple that VACUUM FULL made dead. First
wait for old snapshots, making that assumption reliable.
This also adds two forms of defense in depth:
- Wait for snapshots using shared catalog pruning rules (VISHORIZON_SHARED).
This avoids the removable cutoff moving backward when an XID-bearing
autoanalyze process runs in another database. That may never happen in this
test, but it's cheap insurance.
- Use lazy VACUUM option DISABLE_PAGE_SKIPPING. Commit c2dc1a79767a0f947e1145f82eb65dfe4360d25f did this for a related requirement
in other tests, but I suspect FREEZE is necessary and sufficient in all
these tests.
Back-patch to v17, where the test first appeared.
Reported-by: Andres Freund <[email protected]>
Discussion: https://postgr.es/m/sv3taq4e6ea4qckimien3nxp3sz4b6cw6sfcy4nhwl52zpur4g@h6i6tohxmizu
Backpatch-through: 17
Peter Eisentraut [Wed, 9 Apr 2025 10:12:57 +0000 (12:12 +0200)]
Update config.guess and config.sub
Heikki Linnakangas [Wed, 9 Apr 2025 10:11:42 +0000 (13:11 +0300)]
Fix a few oversights in the longer cancel keys patch
Change MyCancelKeyLength's type from uint8 to int. While it always
fits in a uint8, plain int is less surprising, as there's no
particular reason for it to be uint8.
Fix one ProcSignalInit caller that passed 'false' instead of NULL for
the pointer argument.
Author: Peter Eisentraut <[email protected]>
Discussion: https://www.postgresql.org/message-id/[email protected]
Daniel Gustafsson [Wed, 9 Apr 2025 07:29:12 +0000 (09:29 +0200)]
Perform missed catversion bump
Commit c57971034e69ca renamed an argument of a function but neglected
to bump the catversion to reflect this.
Reported-by: David Rowley <[email protected]>
Discussion: https://postgr.es/m/CAApHDvqOega=dPtu3h2C5fJWJEuaGCMDib_sVfhKQqgUNJVmFA@mail.gmail.com
Tom Lane [Wed, 9 Apr 2025 03:03:33 +0000 (23:03 -0400)]
Doc: note that two examples in optimizer/README are oversimplified.
These examples fail to account for join clauses generated by
EquivalenceClasses, but since we haven't mentioned EquivalenceClasses
yet it seems like it'd just add confusion to make them fully accurate.
Instead, parenthetically note that they're oversimplified.
Reported-by: Zeyuan Hu <[email protected]>
Co-authored-by: David Rowley <[email protected]>
Co-authored-by: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/CACvHWmYFo+60yMqKJajDDvKN5EM41YHrCT3oxukwXmGAqpWvyw@mail.gmail.com
Tom Lane [Wed, 9 Apr 2025 00:21:03 +0000 (20:21 -0400)]
Adjust AdjustUpgrade.pm for commit b1720fe63.
Need to delete the functions we no longer have available from
the dumps to be reloaded from old versions.
Per buildfarm.
Tom Lane [Tue, 8 Apr 2025 23:12:03 +0000 (19:12 -0400)]
Move contrib/spi testing from core regression tests to contrib/spi.
It's weird to have the core regression tests depending on contrib
code, and coverage testing shows that those test queries add nothing
to the core-code coverage of the core tests. So pull those test bits
out and put them into ordinary test scripts inside contrib/spi/,
making that more like other contrib modules.
Aside from being structurally nicer, anything we can take out of the
core tests (which are executed multiple times per check-world run)
and put into tests executed only once should be a win. It doesn't
look like this change will buy a whole lot of milliseconds, but a
cycle saved is a cycle earned.
Also, there is some discussion around possibly removing refint and/or
autoinc altogether. I don't know if that will happen, but we'd
certainly need to decouple them from the core tests to do so.
The tests for autoinc were quite intertwined with the undocumented
"ttdummy" trigger in regress.c. That made the tests very hard to
understand and contributed nothing to autoinc's testing either.
So I just deleted ttdummy and rewrote the autoinc tests without it.
I realized while doing this that the description of autoinc in
the SGML docs is not a great description of what the function
actually does, so the patch includes some updates to those docs.
Author: Tom Lane <[email protected]>
Reviewed-by: Heikki Linnakangas <[email protected]>
Discussion: https://postgr.es/m/3872677.1744077559@sss.pgh.pa.us
Daniel Gustafsson [Tue, 8 Apr 2025 21:09:13 +0000 (23:09 +0200)]
Rename argument in pg_get_process_memory_contexts().
During development the third argument to pg_get_process_memory_contexts
was a retry count, but it was changed to a timeout instead. The param
name was accidentally left in pg_proc.dat though. Fix by renaming to
the correct parameter name.
Author: Fujii Masao <[email protected]>
Reviewed-by: Daniel Gustafsson <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Peter Eisentraut [Tue, 8 Apr 2025 17:12:03 +0000 (19:12 +0200)]
Fix incorrect format placeholder
for commit 749a9e20c97
Nathan Bossart [Tue, 8 Apr 2025 15:57:31 +0000 (10:57 -0500)]
Prevent 006_transfer_modes.pl from leaving files behind.
This test was leaving files like delete_old_cluster.{sh,bat} in the
source directory for VPATH and meson builds. To fix, change the
directory to tmp_check before running the test, as was done in
commits 15b6d21553, 8af917be6b, and c462b054ba.
Oversight in commit af0d4901c1.
Reported-by: Andrew Dunstan <[email protected]> (on Discord)
Reviewed-by: Michael Paquier <[email protected]>
Reviewed-by: Andrew Dunstan <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/Z_RHkG770w3SE0yU%40nathan
Daniel Gustafsson [Tue, 8 Apr 2025 13:28:29 +0000 (15:28 +0200)]
ci: Add MBUILD_TARGET for NetBSD and OpenBSD
Commit b2bdb972c0 added MBUILD_TARGET to ensure that meson builds the
tests before running them; this adds MBUILD_TARGET to the OpenBSD and
NetBSD builds as well, where it was missing.
No backpatching since OpenBSD and NetBSD support does not exist
in the backbranch CI.
Author: Nazir Bilal Yavuz <[email protected]>
Reviewed-by: Daniel Gustafsson <[email protected]>
Discussion: https://postgr.es/m/CAN55FZ2LNnRrtL+cpSdEg44fQcLPq_GjJjfNa0vz+xqEdq=ZHw@mail.gmail.com
Tomas Vondra [Tue, 8 Apr 2025 09:52:02 +0000 (11:52 +0200)]
pg_buffercache: Change page_num type to bigint
The page_num was defined as integer, which should be sufficient for the
near future (with 4K pages it's 8TB). But it's virtually free to return
bigint, and get a wider range. This was agreed on the thread, but I
forgot to tweak this in ba2a3c2302f1.
While at it, make the data types in CREATE VIEW a bit more consistent.
Discussion: https://postgr.es/m/CAKZiRmxh6KWo0aqRqvmcoaX2jUxZYb4kGp3N%3Dq1w%2BDiH-696Xw%40mail.gmail.com
Tomas Vondra [Tue, 8 Apr 2025 09:45:01 +0000 (11:45 +0200)]
doc: Correct pg_shmem_allocations_numa.size data type
The code in pg_get_shmem_allocations_numa() returned 'size' as int64,
but the docs said int32.
Report and fix by Noriyoshi Shinoda.
Reported-by: Noriyoshi Shinoda <[email protected]>
Discussion: https://postgr.es/m/DM4PR84MB1734308EB741A6ECFF040C27EEAA2@DM4PR84MB1734.NAMPRD84.PROD.OUTLOOK.COM
Amit Kapila [Tue, 8 Apr 2025 10:05:42 +0000 (15:35 +0530)]
Fix uninitialized index information access during apply.
The issue happens when building conflict information during apply of
INSERT or UPDATE operations that violate unique constraints on leaf
partitions.
The problem was introduced in commit 9ff68679b5, which removed the
redundant calls to ExecOpenIndices/ExecCloseIndices. The previous code was
relying on the redundant ExecOpenIndices call in
apply_handle_tuple_routing() to build the index information required for
unique key conflict detection.
The fix is to delay building the index information until a conflict is
detected instead of relying on ExecOpenIndices to do the same. The
additional benefit of this approach is that it avoids building index
information when there is no conflict.
Author: Hou Zhijie <[email protected]>
Reviewed-by: Amit Kapila <[email protected]>
Discussion: https://postgr.es/m/TYAPR01MB57244ADA33DDA57119B9D26494A62@TYAPR01MB5724.jpnprd01.prod.outlook.com
Thomas Munro [Tue, 8 Apr 2025 10:02:24 +0000 (22:02 +1200)]
Fix typo in docs.
Typo in previous commit.
Thomas Munro [Tue, 8 Apr 2025 08:52:47 +0000 (20:52 +1200)]
Introduce file_copy_method setting.
It can be set to either COPY (the default) or CLONE if the system
supports it. CLONE causes callers of copydir(), currently CREATE
DATABASE ... STRATEGY=FILE_COPY and ALTER DATABASE ... SET TABLESPACE =
..., to use copy_file_range (Linux, FreeBSD) or copyfile (macOS) to copy
files instead of a read-write loop over the contents.
CLONE gives the kernel the opportunity to share block ranges on
copy-on-write file systems and push copying down to storage on others,
depending on configuration. On some systems CLONE can be used to clone
large databases quickly with CREATE DATABASE ... TEMPLATE=source
STRATEGY=FILE_COPY.
Other operating systems could be supported; patches welcome.
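A usage sketch (database names hypothetical, and assuming the GUC can be
set at the session level):

    -- Prefer the kernel's cloning facility for subsequent file copies.
    SET file_copy_method = clone;

    -- Clone a large database quickly on a copy-on-write file system.
    CREATE DATABASE clonedb TEMPLATE = srcdb STRATEGY = FILE_COPY;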
Co-authored-by: Nazir Bilal Yavuz <[email protected]>
Reviewed-by: Robert Haas <[email protected]>
Reviewed-by: Ranier Vilela <[email protected]>
Discussion: https://postgr.es/m/CA%2BhUKGLM%2Bt%2BSwBU-cHeMUXJCOgBxSHLGZutV5zCwY4qrCcE02w%40mail.gmail.com
Daniel Gustafsson [Tue, 8 Apr 2025 09:06:56 +0000 (11:06 +0200)]
Add function to get memory context stats for processes
This adds a function for retrieving memory context statistics
and information from backends as well as auxiliary processes.
The intended use case is cluster debugging when under memory
pressure or with unanticipated memory usage characteristics.
When calling the function it sends a signal to the specified
process to submit statistics regarding its memory contexts
into dynamic shared memory. Each memory context is returned
in detail, followed by a cumulative total in case the number
of contexts exceeds the maximum allocated amount of shared memory.
Each process is limited to using at most 1MB of memory for this.
A summary can also be explicitly requested by the user; this
will return the TopMemoryContext and a cumulative total of
all lower contexts.
In order to not block on busy processes the caller specifies
the number of seconds during which to retry before timing out.
In the case where no statistics are published within the set
timeout, the last known statistics are returned, or NULL if
no previously published statistics exist. This allows
dashboard-type queries to keep reporting even if the target
process is temporarily congested. Context records contain a
timestamp to indicate when they were submitted.
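A hypothetical invocation, assuming the (pid, summary, timeout-in-seconds)
signature described above; the output column names are illustrative:

    -- Ask backend 12345 for detailed context stats, waiting up to 5 seconds.
    SELECT name, level, total_bytes
    FROM pg_get_process_memory_contexts(12345, false, 5)
    ORDER BY total_bytes DESC
    LIMIT 5;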
Author: Rahila Syed <[email protected]>
Reviewed-by: Daniel Gustafsson <[email protected]>
Reviewed-by: Andres Freund <[email protected]>
Reviewed-by: Tomas Vondra <[email protected]>
Reviewed-by: Atsushi Torikoshi <[email protected]>
Reviewed-by: Fujii Masao <[email protected]>
Reviewed-by: Alexander Korotkov <[email protected]>
Discussion: https://postgr.es/m/CAH2L28v8mc9HDt8QoSJ8TRmKau_8FM_HKS41NeO9-6ZAkuZKXw@mail.gmail.com
Andres Freund [Tue, 8 Apr 2025 06:41:03 +0000 (02:41 -0400)]
Increase BAS_BULKREAD based on effective_io_concurrency
Before, BAS_BULKREAD was always of size 256kB. With the default
io_combine_limit of 16, that only allowed 1-2 IOs to be in flight -
insufficient even on very low latency storage.
We don't just want to increase the size to a much larger hardcoded value,
as very large rings (tens of MBs of buffers) appear to have negative
performance effects when reading in data that the OS has cached (but not
when actually needing to do IO).
To address this, increase the size of BAS_BULKREAD to allow for
io_combine_limit * effective_io_concurrency buffers getting read in. To
prevent the ring being much larger than useful, limit the increased size with
GetPinLimit().
The formula outlined above keeps the ring at sizes for which we have not
observed performance regressions, unless very large effective_io_concurrency
values are used together with a large shared_buffers setting.
Reviewed-by: Thomas Munro <[email protected]>
Discussion: https://postgr.es/m/lqwghabtu2ak4wknzycufqjm5ijnxhb4k73vzphlt2a3wsemcd@gtftg44kdim6
Discussion: https://postgr.es/m/uvrtrknj4kdytuboidbhwclo4gxhswwcpgadptsjvjqcluzmah@brqs62irg4dt
Andres Freund [Tue, 8 Apr 2025 06:16:51 +0000 (02:16 -0400)]
Add pg_buffercache_evict_{relation,all} functions
In addition to the added functions, the pg_buffercache_evict() function now
shows whether the buffer was flushed.
pg_buffercache_evict_relation(): Evicts all shared buffers in a
relation at once.
pg_buffercache_evict_all(): Evicts all shared buffers at once.
Both functions provide a mechanism to evict multiple shared buffers at
once. They are designed to address the inefficiency of repeatedly calling
pg_buffercache_evict() for each individual buffer, which can be
time-consuming when dealing with large shared buffer pools (e.g., ~477ms
vs. ~2576ms for 16GB of fully populated shared buffers).
These functions are intended for developer testing and debugging
purposes and are available to superusers only.
Minimal tests for the new functions are included. Also, there was no test
for pg_buffercache_evict(), so a test for it has been added too.
No new extension version is needed, as it was already increased this
release by ba2a3c2302f.
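Usage sketch (superuser only; the relation name is hypothetical):

    -- Evict every shared buffer at once.
    SELECT * FROM pg_buffercache_evict_all();

    -- Evict all shared buffers belonging to a single relation.
    SELECT * FROM pg_buffercache_evict_relation('my_table'::regclass);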
Author: Nazir Bilal Yavuz <[email protected]>
Reviewed-by: Andres Freund <[email protected]>
Reviewed-by: Aidar Imamov <[email protected]>
Reviewed-by: Joseph Koshakow <[email protected]>
Discussion: https://postgr.es/m/CAN55FZ0h_YoSqqutxV6DES1RW8ig6wcA8CR9rJk358YRMxZFmw%40mail.gmail.com
David Rowley [Tue, 8 Apr 2025 06:09:57 +0000 (18:09 +1200)]
Speedup child EquivalenceMember lookup in planner
When planning queries to partitioned tables, we clone all
EquivalenceMembers belonging to the partitioned table into em_is_child
EquivalenceMembers for each non-pruned partition. For partitioned tables
with large numbers of partitions, this meant the ec_members list could
become large and code searching that list would become slow. Effectively,
the more partitions present, the more searches needed to be performed for
operations such as find_ec_member_matching_expr() during create_plan(),
and the longer each of those searches would take, i.e., a quadratic
slowdown.
To fix this, here we adjust how we store EquivalenceMembers for
em_is_child members. Instead of storing these directly in ec_members,
these are now stored in a new array of Lists in the EquivalenceClass,
which is indexed by the relid. When we want to find EquivalenceMembers
belonging to a certain child relation, we can narrow the search to the
array element for that relation.
To make EquivalenceMember lookup easier and to reduce the amount of code
change, this commit provides a pair of functions to allow iteration over
the EquivalenceMembers of an EC which also handles finding the child
members, if required. Callers that never need to look at child members
can remain using the foreach loop over ec_members, which will now often
be faster due to only parent-level members being stored there.
The actual performance increases here are highly dependent on the number
of partitions and the query being planned. Performance increases can be
visible with as few as 8 partitions, but the speedup is marginal for
such low numbers of partitions. The speedups become much more visible
with a few dozen to hundreds of partitions. With some tested queries
using 56 partitions, the planner was around 3x faster than before. For
use cases with thousands of partitions, these are likely to become
significantly faster. Some testing has shown planner speedups of 60x or
more with 8192 partitions.
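A toy shape of the workload that benefits; no SQL changes are needed, and
the planner speedup grows with the partition count (names hypothetical):

    CREATE TABLE measurements (tenant_id int, val numeric)
      PARTITION BY HASH (tenant_id);
    CREATE TABLE measurements_p0 PARTITION OF measurements
      FOR VALUES WITH (MODULUS 2, REMAINDER 0);
    CREATE TABLE measurements_p1 PARTITION OF measurements
      FOR VALUES WITH (MODULUS 2, REMAINDER 1);

    -- Planning of ordered scans and merge joins over many partitions is
    -- where the per-relid child member lookup pays off.
    EXPLAIN SELECT * FROM measurements ORDER BY tenant_id;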
Author: Yuya Watari <[email protected]>
Co-authored-by: David Rowley <[email protected]>
Reviewed-by: David Rowley <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Reviewed-by: Andrey Lepikhov <[email protected]>
Reviewed-by: Alena Rybakina <[email protected]>
Reviewed-by: Dmitry Dolgov <[email protected]>
Reviewed-by: Amit Langote <[email protected]>
Reviewed-by: Ashutosh Bapat <[email protected]>
Tested-by: Thom Brown <[email protected]>
Tested-by: newtglobal postgresql_contributors <[email protected]>
Discussion: https://postgr.es/m/CAJ2pMkZNCgoUKSE%2B_5LthD%2BKbXKvq6h2hQN8Esxpxd%2Bcxmgomg%40mail.gmail.com
Amit Kapila [Tue, 8 Apr 2025 04:08:02 +0000 (09:38 +0530)]
Stabilize 035_standby_logical_decoding.pl.
Some tests try to invalidate logical slots on the standby server by
running VACUUM on the primary. The problem is that xl_running_xacts was
getting generated and replayed before the VACUUM command, leading to the
advancement of the active slot's catalog_xmin. Due to this, active slots
were not getting invalidated, leading to test failures.
We fix it by skipping the generation of xl_running_xacts for the required
tests with the help of injection points. As the required interface for
injection points was not present in back branches, we fixed the failing
tests in them by disallowing the slot to become active for the required
cases (where rows_removed conflict could be generated).
Author: Hayato Kuroda <[email protected]>
Reviewed-by: Bertrand Drouvot <[email protected]>
Reviewed-by: Amit Kapila <[email protected]>
Backpatch-through: 16, where it was introduced
Discussion: https://postgr.es/m/[email protected]
Bruce Momjian [Tue, 8 Apr 2025 01:33:42 +0000 (21:33 -0400)]
Fix PG 17 [NOT] NULL optimization bug for domains
A PG 17 optimization allowed columns with NOT NULL constraints to skip
table scans for IS NULL queries, and to skip IS NOT NULL checks for IS
NOT NULL queries. This didn't work for domain types, since domain types
don't follow the IS NULL/IS NOT NULL constraint logic. To fix, disable
this optimization for domains for PG 17+.
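A rough sketch of the affected pattern (names hypothetical):

    CREATE DOMAIN d_int AS int NOT NULL;
    CREATE TABLE t (a d_int);

    -- Before the fix, PG 17 could apply the NOT NULL shortcut here even
    -- though domain constraints don't justify it.
    SELECT * FROM t WHERE a IS NULL;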
Reported-by: Jan Behrens
Diagnosed-by: Tom Lane
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 17
Michael Paquier [Mon, 7 Apr 2025 22:57:19 +0000 (07:57 +0900)]
Flush the IO statistics of active WAL senders more frequently
WAL senders do not flush their statistics until they exit, limiting the
monitoring possible for live processes. This is penalizing when WAL
senders are running for a long time, like in streaming or logical
replication setups, because it is not possible to know the amount of IO
they generate while running.
This commit makes WAL senders more aggressive with their statistics
flushes, using an interval of 1 second, with the flush timing calculated
based on the existing GetCurrentTimestamp() call done before the sleeps
that wait for some activity.
physical WAL senders happens in two different code paths, so the stats
flushes need to happen in these two places.
One test is added for the physical WAL sender case, and one for the
logical WAL sender case. This can be done in a stable fashion by
relying on the WAL generated by the TAP tests in combination with a
stats reset while a server is running, but only on HEAD as WAL data has
been added to pg_stat_io in a051e71e28a1.
This issue exists since a9c70b46dbe and the introduction of pg_stat_io,
so backpatch down to v16.
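With this change, a live WAL sender's IO activity becomes visible, e.g.
(a sketch; previously these counters only updated when the process
exited):

    SELECT backend_type, object, context, reads, writes
    FROM pg_stat_io
    WHERE backend_type = 'walsender';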
Author: Bertrand Drouvot <[email protected]>
Reviewed-by: vignesh C <[email protected]>
Reviewed-by: Xuneng Zhou <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 16
Tomas Vondra [Mon, 7 Apr 2025 20:56:57 +0000 (22:56 +0200)]
Add pg_buffercache_numa view with NUMA node info
Introduces a new view pg_buffercache_numa, showing NUMA memory nodes
for individual buffers. For each buffer the view returns an entry for
each memory page, with the associated NUMA node.
The database blocks and OS memory pages may have different sizes - the
default block size is 8KB, while the memory page is 4KB (on x86). But
other combinations are possible, depending on configure parameters,
platform, etc. This means buffers may overlap with multiple memory
pages, each associated with a different NUMA node.
To determine the NUMA node for a buffer, we first need to touch the
memory pages using pg_numa_touch_mem_if_required, otherwise we might get
status -2 (ENOENT = The page is not present), indicating the page is
either unmapped or unallocated.
The view may be relatively expensive, especially when accessed for the
first time in a backend, as it touches all memory pages to get reliable
information about the NUMA node. This may also force allocation of the
shared memory.
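A query sketch, assuming the column names shown here:

    -- How shared buffers' memory pages spread across NUMA nodes; the
    -- first call in a backend may be slow, as it touches all pages.
    SELECT numa_node, count(*) AS os_pages
    FROM pg_buffercache_numa
    GROUP BY numa_node
    ORDER BY numa_node;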
Author: Jakub Wartak <[email protected]>
Reviewed-by: Andres Freund <[email protected]>
Reviewed-by: Bertrand Drouvot <[email protected]>
Reviewed-by: Tomas Vondra <[email protected]>
Discussion: https://postgr.es/m/CAKZiRmxh6KWo0aqRqvmcoaX2jUxZYb4kGp3N%3Dq1w%2BDiH-696Xw%40mail.gmail.com
Tomas Vondra [Mon, 7 Apr 2025 20:54:49 +0000 (22:54 +0200)]
Introduce pg_shmem_allocations_numa view
Introduce a new pg_shmem_allocations_numa view with information about how
shared memory is distributed across NUMA nodes. For each shared memory
segment, the view returns one row for each NUMA node backing it, with
the total amount of memory allocated from that node.
The view may be relatively expensive, especially when executed for the
first time in a backend, as it has to touch all memory pages to get
reliable information about the NUMA node. This may also force allocation
of the shared memory.
Unlike pg_shmem_allocations, the view does not show anonymous shared
memory allocations. It also does not show memory allocated using the
dynamic shared memory infrastructure.
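A query sketch, assuming the column names shown here:

    -- Largest shared memory allocations, broken down by NUMA node.
    SELECT name, numa_node, size
    FROM pg_shmem_allocations_numa
    ORDER BY size DESC
    LIMIT 10;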
Author: Jakub Wartak <[email protected]>
Reviewed-by: Andres Freund <[email protected]>
Reviewed-by: Bertrand Drouvot <[email protected]>
Reviewed-by: Tomas Vondra <[email protected]>
Discussion: https://postgr.es/m/CAKZiRmxh6KWo0aqRqvmcoaX2jUxZYb4kGp3N%3Dq1w%2BDiH-696Xw%40mail.gmail.com
Tomas Vondra [Mon, 7 Apr 2025 20:51:49 +0000 (22:51 +0200)]
Add support for basic NUMA awareness
Add basic NUMA awareness routines, using a minimal src/port/pg_numa.c
portability wrapper and an optional build dependency, enabled by
--with-libnuma configure option. For now this is Linux-only, other
platforms may be supported later.
A built-in SQL function pg_numa_available() allows checking NUMA
support, i.e. that the server was built/linked with the NUMA library.
The main function introduced is pg_numa_query_pages(), which allows
determining the NUMA node for individual memory pages. Internally the
function uses the move_pages(2) syscall, as it allows batching and is
more efficient than get_mempolicy(2).
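For example:

    -- True when the server was built with --with-libnuma and NUMA is usable.
    SELECT pg_numa_available();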
Author: Jakub Wartak <[email protected]>
Co-authored-by: Bertrand Drouvot <[email protected]>
Reviewed-by: Andres Freund <[email protected]>
Reviewed-by: Álvaro Herrera <[email protected]>
Reviewed-by: Tomas Vondra <[email protected]>
Discussion: https://postgr.es/m/CAKZiRmxh6KWo0aqRqvmcoaX2jUxZYb4kGp3N%3Dq1w%2BDiH-696Xw%40mail.gmail.com
Álvaro Herrera [Mon, 7 Apr 2025 19:58:06 +0000 (21:58 +0200)]
Use specific collation where needed in new test
Oversight in commit a379061a22a8.
Per Czech buildfarm members jay and hippopotamus.
Tom Lane [Mon, 7 Apr 2025 19:54:09 +0000 (15:54 -0400)]
Fix some issues in contrib/spi/refint.c.
check_foreign_key incorrectly used a single cache entry for its saved
plans for a 'c' (cascade) trigger, although there are two different
queries to execute depending on whether it fires for an update or a
delete. This caused the wrong things to be done if both types of
event occur in one session. (This was indeed visible in the triggers
regression test, but apparently nobody ever questioned it.) To fix,
add the operation type to the cache key.
Its debug log output failed to distinguish update from delete
events, too.
Also, change the intended trigger usage from BEFORE ROW to AFTER ROW,
and add checks insisting on that usage. BEFORE is really rather
unsafe, since if there are other BEFORE triggers they might change or
cancel the operation we are trying to check. AFTER triggers are the
standard way to propagate changes to other rows, so we should follow
that way here.
In passing, remove a useless duplicate lookup of the cache entry.
This code is mostly intended as a documentation example, so we
won't consider a back-patch.
Author: Dmitrii Bondar <[email protected]>
Reviewed-by: Paul Jungwirth <[email protected]>
Reviewed-by: Lilian Ontowhee <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/79755a2b18ed4fe5e29da6a87a1e00d1@postgrespro.ru
Andres Freund [Mon, 7 Apr 2025 19:20:30 +0000 (15:20 -0400)]
aio: Make AIO more compatible with valgrind
In some edge cases valgrind flags issues with the memory referenced by
IOs. All of the cases addressed in this change are false positives.
Most of the false positives are caused by UnpinBuffer[NoOwner] marking buffer
data as inaccessible. This happens even though the AIO subsystem still holds a
pin. That's good, there shouldn't be accesses to the buffer outside of AIO
related code until it is pinned by "user" code again. But it requires some
explicit work - if the buffer is not pinned by the current backend, we need to
explicitly mark the buffer data accessible/inaccessible while executing
completion callbacks.
That however causes a cascading issue in IO workers: after the completion
callbacks for a buffer are executed, the page is marked as inaccessible. If
the same worker subsequently executes IO targeting the same buffer, we
would get an error, as the memory is still marked inaccessible. To avoid that,
we need to explicitly mark the memory as accessible in IO workers.
Another issue is that IO executed in workers or via io_uring will not mark
memory as DEFINED. In the case of workers that is because valgrind does not
track memory definedness across processes. For io_uring that is because
valgrind does not understand io_uring, and therefore its IOs never mark memory
as defined, whether the completions are processed in the defining process or
in another context. It's not entirely clear how to best solve that. The
current user of AIO is not affected, as it explicitly marks buffers as DEFINED
& NOACCESS anyway. Defer solving this issue until we have a user with
different needs.
Per buildfarm animal skink.
Reviewed-by: Noah Misch <[email protected]>
Co-authored-by: Noah Misch <[email protected]>
Discussion: https://postgr.es/m/3pd4322mogfmdd5nln3zphdwhtmq3rzdldqjwb2sfqzcgs22lf@ok2gletdaoe6
Andres Freund [Mon, 7 Apr 2025 19:20:30 +0000 (15:20 -0400)]
localbuf: Add Valgrind buffer access instrumentation
This mirrors 1e0dfd166b3 (+ 46ef520b9566), for temporary table buffers. This
is mainly interesting right now because the AIO work currently triggers
spurious valgrind errors, and the fix for that is cleaner if temp buffers
behave the same as shared buffers.
This requires one change beyond the annotations themselves, namely to pin
local buffers while writing them out in FlushRelationBuffers().
Reviewed-by: Noah Misch <[email protected]>
Co-authored-by: Noah Misch <[email protected]>
Discussion: https://postgr.es/m/3pd4322mogfmdd5nln3zphdwhtmq3rzdldqjwb2sfqzcgs22lf@ok2gletdaoe6
Masahiko Sawada [Mon, 7 Apr 2025 19:13:08 +0000 (12:13 -0700)]
doc: Fix a typo in pg_recvlogical documentation.
Oversight in cf2655a9029a.
Author: Zhijie Hou <[email protected]>
Discussion: https://postgr.es/m/OS3PR01MB5718DD1466E2B9043448AE5094AA2@OS3PR01MB5718.jpnprd01.prod.outlook.com
Tom Lane [Mon, 7 Apr 2025 18:14:28 +0000 (14:14 -0400)]
Follow-up fixes for SHA-2 patch (commit 749a9e20c).
This changes the check for valid characters in the salt string to
only allow plain ASCII letters and digits. The previous coding was
locale-dependent which doesn't really seem like a great idea here;
moreover it could not work correctly in multibyte encodings.
This fixes a careless pointer-use-after-pfree, too.
Reported-by: Tom Lane <[email protected]>
Reported-by: Andres Freund <[email protected]>
Author: Bernd Helmle <[email protected]>
Discussion: https://postgr.es/m/6fab35422df6b6b9727fdcc243c5fa1c667dd3b5[email protected]
Tom Lane [Mon, 7 Apr 2025 17:31:27 +0000 (13:31 -0400)]
Fix erroneous construction of functions' dependencies on transforms.
The list of transform objects that a function should use is specified
in CREATE FUNCTION's TRANSFORM clause, and then represented indirectly
in pg_proc.protrftypes. However, ProcedureCreate completely ignored
that for purposes of constructing pg_depend entries, and instead made
the function depend on any transforms that exist for its parameter or
return data types. This is bad in both directions: the function could
be made dependent on a transform it does not actually use, or it
could try to use a transform that's since been dropped. (The latter
scenario would require use of a transform that's not for any of the
parameter or return types, but that seems legit for cases where the
function performs SQL operations internally.)
To fix, pass in the list of transform objects that CreateFunction
identified, and build pg_depend entries from that not from the
parameter/return types. This results in changes in the expected
test outputs in contrib/bool_plperl, which I guess are due to
different ordering of pg_depend entries -- that test case is
surely not exercising either of the problem scenarios.
This fix is not back-patchable as-is: changing the signature of
ProcedureCreate seems too risky in stable branches. We could
do something like making ProcedureCreate a wrapper around
ProcedureCreateExt or so. However, I'm more inclined to do
nothing in the back branches. We had no field complaints up to
now, so the hazards don't seem to be a big issue in practice.
And we couldn't do anything about existing pg_depend entries,
so a back-patched fix would result in a mishmash of dependencies
created according to different rules. That cure could be worse
than the disease, perhaps.
I bumped catversion just to lay down a marker that the expected
contents of pg_depend are a bit different than before.
Reported-by: Chapman Flack <[email protected]>
Author: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/3112950.1743984111@sss.pgh.pa.us
Álvaro Herrera [Mon, 7 Apr 2025 17:19:50 +0000 (19:19 +0200)]
Allow NOT NULL constraints to be added as NOT VALID
This allows them to be added without scanning the table, and validating
them afterwards without holding access exclusive lock on the table after
any violating rows have been deleted or fixed.
Doing ALTER TABLE ... SET NOT NULL for a column that has an invalid
not-null constraint validates that constraint. ALTER TABLE .. VALIDATE
CONSTRAINT is also supported. There are various checks on whether an
invalid constraint is allowed in a child table when the parent table has
a valid constraint; this should match what we do for enforced/not
enforced constraints.
pg_attribute.attnotnull is now only an indicator for whether a not-null
constraint exists for the column; whether it's valid or invalid must be
queried in pg_constraint. Applications can continue to query
pg_attribute.attnotnull as before, but now it's possible that NULL rows
are present in the column even when that's set to true.
For backend internal purposes, we cache the nullability status in
CompactAttribute->attnullability that each tuple descriptor carries
(replacing CompactAttribute.attnotnull, which was a mirror of
Form_pg_attribute.attnotnull). During the initial tuple descriptor
creation, based on the pg_attribute scan, we set this to UNRESTRICTED if
pg_attribute.attnotnull is false, or to UNKNOWN if it's true; then we
update the latter to VALID or INVALID depending on the pg_constraint
scan. This flag is also copied when tupledescs are copied.
Comparing tuple descs for equality must also compare the
CompactAttribute.attnullability flag and return false in case of a
mismatch.
pg_dump deals with these constraints by storing the OIDs of invalid
not-null constraints in a separate array, and running a query to obtain
their properties. The regular table creation SQL omits them entirely.
They are then dealt with in the same way as "separate" CHECK
constraints, and dumped after the data has been loaded. Because no
additional pg_dump infrastructure was required, we don't bump its
version number.
I decided not to bump catversion either, because the old catalog state
works perfectly in the new world. (Trying to run with new catalog state
and the old server version would likely run into issues, however.)
System catalogs do not support invalid not-null constraints (because
commit 14e87ffa5c54 didn't allow them to have pg_constraint rows anyway).
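A usage sketch (table, column, and constraint names hypothetical):

    -- Add the constraint without scanning the table.
    ALTER TABLE accounts
      ADD CONSTRAINT accounts_balance_not_null NOT NULL balance NOT VALID;

    -- After fixing or deleting any violating rows, validate it.
    ALTER TABLE accounts VALIDATE CONSTRAINT accounts_balance_not_null;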
Author: Rushabh Lathia <[email protected]>
Author: Jian He <[email protected]>
Reviewed-by: Álvaro Herrera <[email protected]>
Tested-by: Ashutosh Bapat <[email protected]>
Discussion: https://postgr.es/m/CAGPqQf0KitkNack4F5CFkFi-9Dqvp29Ro=EpcWt=4_hs-Rt+bQ@mail.gmail.com
Andrew Dunstan [Mon, 7 Apr 2025 16:22:41 +0000 (12:22 -0400)]
Clean up error messages from 1495eff7bdb
Quote file names, and mostly avoid hard coded file names. Along the way
make a few other minor improvements.
Discussion: https://postgr.es/m/20250407.152721.1397761902317499205[email protected]
Tom Lane [Mon, 7 Apr 2025 14:57:26 +0000 (10:57 -0400)]
Add local-address escape "%L" to log_line_prefix.
This escape shows the numeric server IP address that the client
has connected to. Unix-socket connections will show "[local]".
Non-client processes (e.g. background processes) will show "[none]".
We expect that this option will be of interest to only a fairly
small number of users. Therefore the implementation is optimized
for the case where it's not used (that is, we don't do the string
conversion until we have to), and we've not added the field to
csvlog or jsonlog formats.
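A configuration sketch:

    -- Include the local (server) address in each log line.
    ALTER SYSTEM SET log_line_prefix = '%m [%p] %L ';
    SELECT pg_reload_conf();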
Author: Greg Sabino Mullane <[email protected]>
Reviewed-by: Cary Huang <[email protected]>
Reviewed-by: David Steele <[email protected]>
Reviewed-by: Jim Jones <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/CAKAnmmK-U+UicE-qbNU23K--Q5XTLdM6bj+gbkZBZkjyjrd3Ow@mail.gmail.com
Andrew Dunstan [Mon, 7 Apr 2025 15:01:15 +0000 (11:01 -0400)]
Revert "Use workaround of __builtin_setjmp only on MINGW on MSVCRT"
This reverts commit c313fa4602defe1be947370ab5b217ca163a1e3c.
This is found to cause issues on x86_64 Windows even when using UCRT.
Discussion: https://postgr.es/m/3312149.1744001936@sss.pgh.pa.us
Andres Freund [Mon, 7 Apr 2025 12:47:30 +0000 (08:47 -0400)]
read_stream: Fix overflow hazard with large shared buffers
If the limit returned by GetAdditionalPinLimit() is large, the buffer_limit
variable in read_stream_start_pending_read() can overflow. While the code is
careful to limit buffer_limit to PG_INT16_MAX, we subsequently add the number of
forwarded buffers.
The overflow can lead to assertion failures, crashes or wrong query results
when using large shared buffers.
It seems easier to avoid this if we make the buffer_limit variable an int,
instead of an int16. Do so, and clamp buffer_limit after adding the number of
forwarded buffers.
It's possible we might want to address this and related issues more widely by
changing to int instead of int16 more widely, but since the consequences of
this bug can be confusing, it seems better to fix it now.
This bug was introduced in ed0b87caaca.
Discussion: https://postgr.es/m/ewvz3cbtlhrwqk7h6ca6cctiqh7r64ol3pzb3iyjycn2r5nxk5@tnhw3a5zatlr
Alexander Korotkov [Mon, 7 Apr 2025 13:28:54 +0000 (16:28 +0300)]
Remove GUC_NOT_IN_SAMPLE from enable_self_join_elimination
fc069a3a6319 implements Self-Join Elimination (SJE) and provides a new GUC
variable: enable_self_join_elimination. This new GUC variable was marked
as GUC_NOT_IN_SAMPLE. However, enable_self_join_elimination is documented
and is not different from any other enable_* GUCs. Thus, remove
GUC_NOT_IN_SAMPLE from it and add it to the postgresql.conf.sample.
Discussion: https://postgr.es/m/CAPpHfdsqMTEsmxk3aQwt6xPz%2BKpUELO%3D6fzmER9ZRGrbs4uMfA%40mail.gmail.com
Author: Tender Wang <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Daniel Gustafsson [Mon, 7 Apr 2025 11:44:58 +0000 (13:44 +0200)]
psql: Clarify help message for WATCH_INTERVAL
The help message for WATCH_INTERVAL was hard to interpret and didn't
follow the style of other messages. This updates it to make it fit in
better and be easier to interpret.
Author: Daniel Gustafsson <[email protected]>
Reported-by: Kyotaro Horiguchi <[email protected]>
Reviewed-by: David G. Johnston <[email protected]>
Discussion: https://postgr.es/m/20250326.120732.1167093737847500721[email protected]
Michael Paquier [Mon, 7 Apr 2025 06:37:34 +0000 (15:37 +0900)]
Fix grammar in log message of pg_restore.c
Introduced by 1495eff7bdb0.
Author: Kyotaro Horiguchi <[email protected]>
Discussion: https://postgr.es/m/20250407.151359.72428746612514925[email protected]
Michael Paquier [Mon, 7 Apr 2025 03:55:09 +0000 (12:55 +0900)]
libpq: Fix some issues in TAP tests for service files
The valid service file was not correctly shaped, as append_to_file() was
called with an array as input. This is changed so that the parameter and
value pairs from the valid connection string are appended to the valid
service file one by one.
Even with the first issue fixed, the tests should fail. However, they
have been passing because all the connection attempts relied on the
default values given to PGPORT and PGHOST from the node when using
Cluster.pm's connect_ok() and connect_fails(), rather than the data in
the service file. The test is updated to use an interesting trick: a
dummy node is initialized but not started, and all the connection
attempts are done through it. This ensures that the data inside the
service file is used for all the connection tests. Note that breaking
the contents of the valid service file on purpose makes all the tests
that rely on it fail.
Issues introduced by 72c2f36d5727.
Author: Andrew Jackson <[email protected]>
Discussion: https://postgr.es/m/CAKK5BkG_6_YSaebM6gG=8EuKaY7_VX1RFgYeySuwFPh8FZY73g@mail.gmail.com
Michael Paquier [Mon, 7 Apr 2025 01:02:12 +0000 (10:02 +0900)]
Clarify comment for worst-case allocation in quote_literal_cstr()
palloc() is invoked with a specific formula for its allocation size in
quote_literal_cstr(). This wastes some memory, but the size is large
enough to cover even the worst-case scenarios.
No explanations were given about the reasons behind these numbers. This
commit adds more documentation about all that.
Author: Steve Chavez <[email protected]>
Discussion: https://postgr.es/m/CAGRrpzZ9bToRWS+fAnjxDJrxwZN1QcJ-y1Pn2yg=Hst6rydLtw@mail.gmail.com
Michael Paquier [Mon, 7 Apr 2025 00:51:40 +0000 (09:51 +0900)]
Fix use-after-free in pgstat_fetch_stat_backend_by_pid()
stats_fetch_consistency set to "snapshot" causes the backend entry
"beentry" retrieved by pgstat_get_beentry_by_proc_number() to be reset
at the beginning of pgstat_fetch_stat_backend() when fetching the
backend pgstats entry. As coded, "beentry" was being accessed after
being freed. This commit moves all the accesses to "beentry" to happen
before calling pgstat_fetch_stat_backend(), fixing the problem.
This problem could be reached by calling the SQL functions
pg_stat_get_backend_io() or pg_stat_get_backend_wal().
Issue caught by valgrind.
Reported-by: Alexander Lakhin <[email protected]>
Author: Bertrand Drouvot <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Fujii Masao [Mon, 7 Apr 2025 00:27:33 +0000 (09:27 +0900)]
Use XLOG_CONTROL_FILE macro consistently for control file name.
The XLOG_CONTROL_FILE macro (defined in access/xlog_internal.h)
represents the control file name. While some parts of the codebase already
use this macro, others previously hardcoded the file name as a string.
This commit replaces those hardcoded strings with the macro,
ensuring consistent usage throughout the code. This makes future
maintenance easier and improves searchability, for example when
grepping for control file usage.
Author: Anton A. Melnikov <[email protected]>
Reviewed-by: Daniel Gustafsson <[email protected]>
Reviewed-by: Kyotaro Horiguchi <[email protected]>
Reviewed-by: Masao Fujii <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Daniel Gustafsson [Sun, 6 Apr 2025 22:03:18 +0000 (00:03 +0200)]
doc: Clarify project naming
Clarify the project naming in the history section of the docs
to match the recent license preamble changes.
Backpatch to all supported versions.
Author: Dave Page <[email protected]>
Reviewed-by: Daniel Gustafsson <[email protected]>
Discussion: https://postgr.es/m/CA+OCxozLzK2+Jc14XZyWXSp6L9Ot+3efwXUE35FJG=fsbib2EA@mail.gmail.com
Backpatch-through: 13
Andrew Dunstan [Sun, 6 Apr 2025 21:00:58 +0000 (17:00 -0400)]
Clean up checking for pg_dumpall output directory
Coverity objected to the original code, and in any case this is much
cleaner, using the existing routine pg_check_dir() instead of rolling
its own test.
Per suggestion from Tom Lane.
Tom Lane [Sun, 6 Apr 2025 20:27:31 +0000 (16:27 -0400)]
Doc: fix PDF "contents ... exceed the available area" warnings.
Tweak column widths in a new table, similarly to some previous
fixes such as b62381d9a.
Per buildfarm.
Nathan Bossart [Sun, 6 Apr 2025 20:11:41 +0000 (15:11 -0500)]
pg_upgrade: Fix memory leak in check_for_unicode_update().
This function was initializing the "task" variable before a couple
of early returns. To fix, postpone the initialization until just
before it's needed.
Per Coverity.
Discussion: https://postgr.es/m/Z_KMsUH2-FEbiNjC%40nathan
Andres Freund [Fri, 4 Apr 2025 19:15:39 +0000 (15:15 -0400)]
aio: Avoid spurious coverity warning
PgAioResult.result is never accessed in the relevant path, but coverity
complains about an uninitialized access anyway. So just zero-initialize the
whole thing. While at it, reduce the scope of the variable.
Reported-by: Ranier Vilela <[email protected]>
Reviewed-by: Noah Misch <[email protected]>
Discussion: https://postgr.es/m/CAEudQApsKqd-s+fsUQ0OmxJAMHmBSXxrAz3dCs+uvqb3iRtjSw@mail.gmail.com
Tom Lane [Sun, 6 Apr 2025 15:57:22 +0000 (11:57 -0400)]
Fix memory leaks in px_crypt_shacrypt().
Per Coverity. I don't think these are of any actual significance
since the function ought to be invoked in a short-lived context.
Still, if it's trying to be neat it should get it right.
Also const-ify a constant and fix up typedef formatting.
Tom Lane [Sun, 6 Apr 2025 15:37:09 +0000 (11:37 -0400)]
Use "(void)" to mark pgstat_lock_entry(..., false) calls.
This should silence Coverity's complaints about the result being
sometimes ignored.
I'm inclined to think that these routines are simply misdesigned,
because sometimes it's okay to ignore the result and sometimes it
isn't, and we have no way to enforce the latter. But for now
I just added a comment.
Andrew Dunstan [Sun, 6 Apr 2025 13:21:09 +0000 (09:21 -0400)]
Avoid unnecessary copying of a string in pg_restore.c
Coverity complained about a possible overrun in the copy, but there is
no actual need to copy the string at all.
Andrew Dunstan [Sun, 6 Apr 2025 13:08:02 +0000 (09:08 -0400)]
Fix a couple of memory leaks in pg_restore.c
per complaint from Coverity.
Peter Eisentraut [Sun, 6 Apr 2025 12:43:51 +0000 (14:43 +0200)]
Relax ordering-related hardcoded btree requirements in planning
There were several places in ordering-related planning where a
requirement for btree was hardcoded but an amcanorder index could
suffice. This fixes that. We just need to do the necessary mapping
between strategy numbers and compare types and adjust some related
APIs so that this works independent of btree strategy numbers. For
instance, non-btree amcanorder indexes can now be used to support
sorting and merge joins. Also, predtest.c works independent of btree
strategy numbers now.
To avoid performance regressions, some details on btree and other
built-in index types are still hardcoded as shortcuts, but other index
types now have access to the same features by providing the required
flags and callbacks.
Author: Mark Dilger <[email protected]>
Co-authored-by: Peter Eisentraut <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/[email protected]
Alexander Korotkov [Sun, 6 Apr 2025 11:30:20 +0000 (14:30 +0300)]
Revert "Put enable_self_join_elimination into postgresql.conf.sample"
This reverts commit c2d329260cd8.
Reported-by: Daniel Gustafsson <[email protected]>
Discussion: https://postgr.es/m/D292EB44-806E-439A-82A4-491A1BA59E7A%40yesql.se
Alexander Korotkov [Sun, 6 Apr 2025 10:24:16 +0000 (13:24 +0300)]
Put enable_self_join_elimination into postgresql.conf.sample
fc069a3a6319 implements Self-Join Elimination (SJE) and provides a new
GUC variable: enable_self_join_elimination. This commit adds
enable_self_join_elimination to the postgresql.conf.sample, as it was
forgotten in the original commit.
Discussion: https://postgr.es/m/CAHewXN%3D%2Bghd6O6im46q7j2u6c3H6vkXtXmF%3D_v4CfGSnjje8PA%40mail.gmail.com
Author: Tender Wang <[email protected]>
John Naylor [Sun, 6 Apr 2025 07:04:30 +0000 (14:04 +0700)]
Compute CRC32C using AVX-512 instructions where available
The previous implementation of CRC32C on x86 relied on the native
CRC32 instruction from the SSE 4.2 extension, which operates on
up to 8 bytes at a time. We can get a substantial speedup by using
carryless multiplication on SIMD registers, processing 64 bytes per
loop iteration. Shorter inputs fall back to ordinary CRC instructions.
On Intel Tiger Lake hardware (2020), CRC is now 50% faster for inputs
between 64 and 112 bytes, and 3x faster for 256 bytes.
The VPCLMULQDQ instruction on 512-bit registers has been available
on Intel hardware since 2019 and AMD since 2022. There is an older
variant for 128-bit registers, but at least on Zen 2 it performs worse
than normal CRC instructions for short inputs.
We must now do a runtime check, even for builds that target SSE
4.2. This doesn't matter in practice for WAL (arguably the most
critical case), because since commit e2809e3a1 the final computation
with the 20-byte WAL header is inlined and unrolled when targeting
that extension. Compared with two direct function calls, testing
showed equal or slightly faster performance in performing an indirect
function call on several dozen bytes followed by inlined instructions
on constant input of 20 bytes.
The MIT-licensed implementation was generated with the "generate"
program from https://github.com/corsix/fast-crc32/
Based on: "Fast CRC Computation for Generic Polynomials Using PCLMULQDQ
Instruction" V. Gopal, E. Ozturk, et al., 2009
Co-authored-by: Raghuveer Devulapalli <[email protected]>
Co-authored-by: Paul Amonson <[email protected]>
Reviewed-by: Nathan Bossart <[email protected]>
Reviewed-by: Andres Freund <[email protected]> (earlier version)
Reviewed-by: Matthew Sterrett <[email protected]> (earlier version)
Tested-by: Raghuveer Devulapalli <[email protected]>
Tested-by: David Rowley <[email protected]> (earlier version)
Discussion: https://postgr.es/m/BL1PR11MB530401FA7E9B1CA432CF9DC3DC192@BL1PR11MB5304.namprd11.prod.outlook.com
Discussion: https://postgr.es/m/PH8PR11MB82869FF741DFA4E9A029FF13FBF72@PH8PR11MB8286.namprd11.prod.outlook.com
Daniel Gustafsson [Sat, 5 Apr 2025 20:10:28 +0000 (22:10 +0200)]
Quote filename in error message
Project standard is to quote filenames in error and log messages, which
commit 2da74d8d640 missed in two error messages.
Author: Kyotaro Horiguchi <[email protected]>
Reported-by: Kyotaro Horiguchi <[email protected]>
Reviewed-by: Daniel Gustafsson <[email protected]>
Discussion: https://postgr.es/m/20250404.120328.103562371975971823[email protected]
Tom Lane [Sat, 5 Apr 2025 19:01:33 +0000 (15:01 -0400)]
Fix parse_cte.c's failure to examine sub-WITHs in DML statements.
makeDependencyGraphWalker thought that only SelectStmt nodes could
contain a WithClause. Which was true in our original implementation
of WITH, but astonishingly we missed updating this code when we added
the ability to attach WITH to INSERT/UPDATE/DELETE (and later MERGE).
Moreover, since it was coded to deliberately block recursion to a
WithClause, even updating raw_expression_tree_walker didn't save it.
The upshot of this was that we didn't see references to outer CTE
names appearing within an inner WITH, and would neither complain about
disallowed recursion nor account for such references when sorting CTEs
into a usable order. The lack of complaints about this is perhaps not
so surprising, because typical usage of WITH wouldn't hit either case.
Still, it's pretty broken; failing to detect recursion here leads to
assert failures or worse later on.
Fix by factoring out the processing of sub-WITHs into a new function
WalkInnerWith, and invoking that for all the statement types that
can have WITH.
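A hypothetical shape of statement that exercises the fixed path, assuming
a table t(a int) exists: a DML statement carrying its own WITH, nested
inside an outer WITH and referencing an outer CTE:

    WITH outer_cte AS (
      SELECT 1 AS a
    ), ins AS (
      -- the INSERT's own WITH clause, previously not walked
      WITH inner_cte AS (SELECT a FROM outer_cte)
      INSERT INTO t SELECT a FROM inner_cte
      RETURNING a
    )
    SELECT * FROM ins;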
Bug: #18878
Reported-by: Yu Liang <[email protected]>
Author: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 13
Álvaro Herrera [Sat, 5 Apr 2025 17:16:58 +0000 (19:16 +0200)]
Add modern SHA-2 based password hashes to pgcrypto.
This adapts the publicly available reference implementation on
https://www.akkadia.org/drepper/SHA-crypt.txt and adds the new hash
algorithms sha256crypt and sha512crypt to both crypt() and gen_salt().
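A minimal usage sketch (the password, table, and column are
illustrative, assuming the salt type strings match the new algorithm
names):
    -- Generate a SHA-512-crypt salt and hash a password with it
    SELECT crypt('secret password', gen_salt('sha512crypt'));
    -- Verify a password by re-crypting with the stored hash as salt
    SELECT crypt('secret password', stored_hash) = stored_hash
      FROM accounts;  -- hypothetical table/column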
Author: Bernd Helmle <[email protected]>
Reviewed-by: Japin Li <[email protected]>
Discussion: https://postgr.es/m/c763235a2757e2f5f9e3e27268b9028349cef659[email protected]
Tom Lane [Sat, 5 Apr 2025 16:13:35 +0000 (12:13 -0400)]
Avoid double transformation of json_array()'s subquery.
transformJsonArrayQueryConstructor() applied transformStmt() to
the same subquery tree twice. While this causes no issue in many
cases, there are some where it causes a coredump, thanks to the
parser's habit of scribbling on its input.
Fix by making a copy before the first transformation (compare
0f43083d1). This is quite brute-force, but then so is the
whole business of transforming the input twice. Per discussion
in the bug thread, this implementation of json_array() parsing
should be replaced completely. But that will take some work
and will surely not be back-patchable, so for the moment let's
take the easy way out.
Oversight in 7081ac46a. Back-patch to v16 where that came in.
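For reference, the affected constructor is the subquery form of
json_array(); a query of roughly this shape (a sketch) exercises the
twice-transformed path:
    -- json_array() over a subquery; the parser previously fed the
    -- already-scribbled-on subquery tree to transformStmt() twice
    SELECT json_array(SELECT n FROM generate_series(1, 3) AS g(n));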
Bug: #18877
Reported-by: Yu Liang <[email protected]>
Author: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/18877-c3c3ad75845833bb@postgresql.org
Backpatch-through: 16
Andrew Dunstan [Sat, 5 Apr 2025 12:00:24 +0000 (08:00 -0400)]
Clean up from commit 1495eff7bdb
Fix some comments, and remove the hacky way of quoting database names in
favor of appendStringLiteralConn.
Álvaro Herrera [Sat, 5 Apr 2025 09:41:01 +0000 (11:41 +0200)]
Set log_statement=none in t/002_pg_upgrade.pl
This should make the test a wee bit faster on high-load machines (e.g.,
when running under valgrind).
Per complaint from Andres Freund.
Discussion: https://postgr.es/m/cwbcyjp2ts7o7xgy5y5gwtcd4zltvncsj67el7xgci7xbwrhlu@k363vk5tce4g
Álvaro Herrera [Sat, 5 Apr 2025 09:22:40 +0000 (11:22 +0200)]
pg_dump: Tiny header cleanup
In commits 9c02e3a986da and 8ec0aaeae094, Nathan added a duplicate
TocEntry typedef forward declaration (plus assorted #ifdef hackery to
avoid C99 preprocessor issues) to deal with some very old untidiness
regarding the DefnDumperPtr function prototype being located in pg_backup.h.
But there's no reason to have the DefnDumperPtr typedef (and the
accompanying DataDumperPtr typedef) in that file at all; they are better
placed in pg_backup_archiver.h, the internal header, because they are
only used internally. That also requires zero #ifdef hackery, so move
them there.
Reviewed-by: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/202504042140[email protected]
Nathan Bossart [Sat, 5 Apr 2025 02:05:30 +0000 (21:05 -0500)]
pg_dump: Fix query for gathering attribute stats on older versions.
Commit 9c02e3a986 taught pg_dump to retrieve attribute statistics
for 64 relations at a time. pg_dump supports dumping from v9.2 and
newer versions, but our query for retrieving statistics for
multiple relations uses WITH ORDINALITY and multi-argument
UNNEST(), both of which were introduced in v9.4. To fix, we resort
to gathering statistics for a single relation at a time on versions
older than v9.4.
Per buildfarm member crake.
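As a sketch of the v9.4-only constructs involved (identifiers here are
illustrative, not the exact query pg_dump issues):
    -- Multi-argument UNNEST() and WITH ORDINALITY both require v9.4+:
    SELECT u.nspname, u.relname, u.ord
    FROM unnest(ARRAY['public', 'public'],
                ARRAY['t1', 't2']) WITH ORDINALITY AS u (nspname, relname, ord);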
Author: Corey Huinker <[email protected]>
Discussion: https://postgr.es/m/Z_BcWVMvlUIJ_iuZ%40nathan
Tom Lane [Sat, 5 Apr 2025 00:11:48 +0000 (20:11 -0400)]
Repair misbehavior with duplicate entries in FK SET column lists.
Since v15 we've had an option to apply a foreign key constraint's
ON DELETE SET DEFAULT or SET NULL action to just some of the
referencing columns. There was not a check for duplicate entries in
the list of columns-to-set, though. That caused a potential memory
stomp in CreateConstraintEntry(), which incautiously assumed that
the list of columns-to-set couldn't be longer than the number of key
columns. Even after fixing that, the case doesn't work because you
get an error like "multiple assignments to same column" from the SQL
command that is generated to do the update.
We could either raise an error for duplicate columns or silently
suppress the dups, and after a bit of thought I chose to do the
latter. This is motivated by the fact that duplicates in the FK
column list are legal, so it's not really clear why duplicates
in the columns-to-set list shouldn't be. Of course there's no
need to actually set the column more than once.
I left in the fix in CreateConstraintEntry() too, just because
it didn't seem like such low-level code ought to be making
assumptions about what it's handed.
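A sketch of the now-accepted case (schema is illustrative):
    CREATE TABLE parent (a int, b int, PRIMARY KEY (a, b));
    CREATE TABLE child (
      a int, b int,
      FOREIGN KEY (a, b) REFERENCES parent
        ON DELETE SET NULL (a, a)  -- duplicate entry, silently deduplicated
    );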
Bug: #18879
Reported-by: Yu Liang <[email protected]>
Author: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/18879-259fc59d072bd4d7@postgresql.org
Backpatch-through: 15
Tom Lane [Fri, 4 Apr 2025 22:26:51 +0000 (18:26 -0400)]
functions.c: copy trees from source_list before parse analysis etc.
This is yet another bit of fallout from the fact that backend/parser
(like other code) feels free to scribble on the parse tree it's
handed. In this case that resulted in modifying the
relatively-short-lived copy in the cached function's source_list.
That would be fine since we only need each source_list tree once
... except that if the parser fails after making some changes,
the function cache entry remains as-is and will still be there
if the user tries to execute the function again. Then we have
problems because we're feeding a non-pristine tree to the parser.
The most expedient fix is a quick copyObject(). I considered
other answers like somehow marking the cache entry invalid
temporarily, but that would add complexity and I'm not sure
it's worth it. In typical scenarios we'd only do this once
per function query per session.
Reported-by: Alexander Lakhin <[email protected]>
Author: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/6d442183-102c-498a-81d1-eeeb086cdc5a@gmail.com
Andrew Dunstan [Fri, 4 Apr 2025 21:02:02 +0000 (17:02 -0400)]
Fix a couple of error messages and tests for them
oversights in 1495eff7bdb and 289f74d0cb2. Mea culpa.
Nathan Bossart [Fri, 4 Apr 2025 20:56:23 +0000 (15:56 -0500)]
Prevent redeclaration of typedef TocEntry.
Commit 9c02e3a986 added a forward declaration for this typedef that
caused redeclarations, which are not valid in C99. To fix, add some
preprocessor guards to avoid a redefinition, as is done elsewhere
(e.g., commit 382092a0cd).
Per buildfarm.
Andrew Dunstan [Thu, 3 Apr 2025 18:45:52 +0000 (14:45 -0400)]
Add more TAP tests for pg_dumpall
Author: Matheus Alcantara <[email protected]>
Author: Mahendra Singh Thalor <[email protected]>
Andrew Dunstan [Fri, 4 Apr 2025 14:05:38 +0000 (10:05 -0400)]
Non text modes for pg_dumpall, correspondingly change pg_restore
pg_dumpall acquires a new -F/--format option, with the same meanings as
pg_dump. The default is p, meaning plain text. For any other value, a
directory is created containing two files, globals.dat and map.dat. The
first contains SQL for restoring the global data, and the second
contains a map from oids to database names. The directory will also
contain a subdirectory called databases, inside which archives are
created in the specified format, named using the database oids.
In these cases the -f argument is required.
If pg_restore encounters a directory containing globals.dat, and no
toc.dat, it restores the global settings and then restores each
database.
pg_restore acquires two new options: -g/--globals-only, which suppresses
restoration of any databases, and --exclude-database, which inhibits
restoration of particular database(s) in the same way the same option
works in pg_dumpall.
Author: Mahendra Singh Thalor <[email protected]>
Co-authored-by: Andrew Dunstan <[email protected]>
Reviewed-by: jian he <[email protected]>
Reviewed-by: Srinath Reddy <[email protected]>
Reviewed-by: Álvaro Herrera <[email protected]>
Discussion: https://postgr.es/m/cb103623-8ee6-4ba5-a2c9-f32e3a4933fa@dunslane.net
Andrew Dunstan [Fri, 28 Mar 2025 22:10:24 +0000 (18:10 -0400)]
add new list type simple_oid_string_list to fe-utils/simple_list
This type contains both an oid and a string.
This will be used in forthcoming changes to pg_restore.
Author: Andrew Dunstan <[email protected]>
Andrew Dunstan [Fri, 4 Apr 2025 14:04:35 +0000 (10:04 -0400)]
Move common pg_dump code related to connections to a new file
ConnectDatabase is used by pg_dumpall, pg_restore, and pg_dump, so move
the common code to a new file, connectdb.c.
Author: Mahendra Singh Thalor <[email protected]>
Nathan Bossart [Fri, 4 Apr 2025 19:55:04 +0000 (14:55 -0500)]
Remove unused function parameters in pg_backup_archiver.c.
Thanks to commit 9c02e3a986, which modified some of the changes
from commit a0a4601765, we can remove the now-unused ArchiveHandle
parameter from _tocEntryRestorePass() and move_to_ready_heap().
Reviewed-by: Jeff Davis <[email protected]>
Discussion: https://postgr.es/m/Z-3x2AnPCP331JA3%40nathan
Nathan Bossart [Fri, 4 Apr 2025 19:51:08 +0000 (14:51 -0500)]
pg_dump: Retrieve attribute statistics in batches.
Currently, pg_dump gathers attribute statistics with a query per
relation, which can cause pg_dump to take significantly longer,
especially when there are many relations. This commit addresses
this by teaching pg_dump to gather attribute statistics for 64
relations at a time. Some simple tests showed this was the optimal
batch size, but performance may vary depending on the workload.
Our lookahead code determines the next batch of relations by
searching the TOC sequentially for relevant entries. This approach
assumes that we will dump all such entries in TOC order, which
unfortunately isn't true for dump formats that use
RestoreArchive(). RestoreArchive() does multiple passes through
the TOC and selectively dumps certain groups of entries each time.
This is particularly problematic for index stats and a subset of
matview stats; both are in SECTION_POST_DATA, but matview stats
that depend on matview data are dumped in RESTORE_PASS_POST_ACL,
while all other stats are dumped in RESTORE_PASS_MAIN. To handle
this, this commit moves all statistics data entries in
SECTION_POST_DATA to RESTORE_PASS_POST_ACL, which ensures that we
always dump them in TOC order. A convenient side effect of this
change is that we can revert a decent chunk of commit a0a4601765,
but that is left for a follow-up commit.
Author: Corey Huinker <[email protected]>
Co-authored-by: Nathan Bossart <[email protected]>
Reviewed-by: Jeff Davis <[email protected]>
Discussion: https://postgr.es/m/CADkLM%3Dc%2Br05srPy9w%2B-%2BnbmLEo15dKXYQ03Q_xyK%2BriJerigLQ%40mail.gmail.com
Nathan Bossart [Fri, 4 Apr 2025 19:51:08 +0000 (14:51 -0500)]
pg_dump: Reduce memory usage of dumps with statistics.
Right now, pg_dump stores all generated commands for statistics in
memory. These commands can be quite large and therefore can
significantly increase pg_dump's memory footprint. To fix, wait
until we are about to write out the commands before generating
them, and be sure to free the commands after writing. This is
implemented via a new defnDumper callback that works much like the
dataDumper one but is specifically designed for TOC entries.
Custom dumps that include data might write the TOC twice (to update
data offset information), which would ordinarily cause pg_dump to
run the attribute statistics queries twice. However, as a hack, we
save the length of the written-out entry in the first pass and skip
over it in the second. While there is no known technical issue
with executing the queries multiple times and rewriting the
results, it's expensive and feels risky, so let's avoid it.
As an exception, we _do_ execute the queries twice for the tar
format. This format does a second pass through the TOC to generate
the restore.sql file. pg_restore doesn't use this file, so even if
the second round of queries returns different results than the
first, it won't corrupt the output; the archive and restore.sql
file will just have different content. A follow-up commit will
teach pg_dump to gather attribute statistics in batches, which our
testing indicates more than makes up for the added expense of
running the queries twice.
Author: Corey Huinker <[email protected]>
Co-authored-by: Nathan Bossart <[email protected]>
Reviewed-by: Jeff Davis <[email protected]>
Discussion: https://postgr.es/m/CADkLM%3Dc%2Br05srPy9w%2B-%2BnbmLEo15dKXYQ03Q_xyK%2BriJerigLQ%40mail.gmail.com
Nathan Bossart [Fri, 4 Apr 2025 19:51:08 +0000 (14:51 -0500)]
Skip second WriteToc() call for custom-format dumps without data.
Presently, "pg_dump --format=custom" calls WriteToc() twice. The
second call updates the data offset information, which allegedly
makes parallel pg_restore significantly faster. However, if we're
not dumping any data, there are no data offsets to update, so we
can skip this step.
Reviewed-by: Jeff Davis <[email protected]>
Discussion: https://postgr.es/m/Z9c1rbzZegYQTOQE%40nathan
Melanie Plageman [Fri, 4 Apr 2025 19:25:45 +0000 (15:25 -0400)]
Use streaming read I/O in autoprewarm
Make a read stream for each valid fork of each valid relation
represented in the autoprewarm dump file and prewarm those blocks
through the read stream API instead of by directly invoking
ReadBuffer().
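For context, the dump file in question is the block list maintained by
pg_prewarm's autoprewarm worker; a sketch of producing it on demand,
assuming the extension is installed:
    CREATE EXTENSION pg_prewarm;
    -- Write out the current shared-buffers block list that the
    -- autoprewarm machinery later reads back and prewarms:
    SELECT autoprewarm_dump_now();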
Co-authored-by: Nazir Bilal Yavuz <[email protected]>
Co-authored-by: Melanie Plageman <[email protected]>
Reviewed-by: Heikki Linnakangas <[email protected]>
Reviewed-by: Daniel Gustafsson <[email protected]>
Reviewed-by: Andrey M. Borodin <[email protected]> (earlier versions)
Reviewed-by: Kirill Reshke <[email protected]> (earlier versions)
Reviewed-by: Matheus Alcantara <[email protected]> (earlier versions)
Discussion: https://postgr.es/m/flat/CAN55FZ3n8Gd%2BhajbL%3D5UkGzu_aHGRqnn%2BxktXq2fuds%3D1AOR6Q%40mail.gmail.com
Melanie Plageman [Fri, 4 Apr 2025 19:25:27 +0000 (15:25 -0400)]
Refactor autoprewarm_database_main() in preparation for read stream
Autoprewarm prewarms blocks from a dump file representing the contents
of shared buffers at the time it was dumped. It uses a sorted array of
BlockInfoRecords, each representing a block from one of the cluster's
databases and tables.
autoprewarm_database_main() prewarms all the blocks from a single
database. It is optimized to ensure we don't try to open the same
relation or fork over and over again if it has been dropped or is
invalid. The main loop handled this by carefully setting various local
variables to sentinel values when a run of blocks should be skipped.
This method won't work with the read stream API. The read stream
callback must be able to advance the current position in the
BlockInfoRecord array to allow for reading ahead additional blocks;
however, a read stream maps 1:1 with a relation and fork combination. So,
the main loop in autoprewarm_database_main() must also advance the
position in the array of BlockInfoRecords to skip invalid relations and
forks. This split control doesn't fit well with the current flow control
in autoprewarm_database_main().
To make it compatible with the read stream API, change
autoprewarm_database_main() to explicitly fast-forward in the
BlockInfoRecords array past the blocks belonging to an invalid relation
or fork.
This commit only implements the new control flow -- it does not use the
read stream API.
Co-authored-by: Nazir Bilal Yavuz <[email protected]>
Co-authored-by: Melanie Plageman <[email protected]>
Reviewed-by: Heikki Linnakangas <[email protected]>
Reviewed-by: Daniel Gustafsson <[email protected]>
Discussion: https://postgr.es/m/flat/CAN55FZ3n8Gd%2BhajbL%3D5UkGzu_aHGRqnn%2BxktXq2fuds%3D1AOR6Q%40mail.gmail.com
Melanie Plageman [Fri, 4 Apr 2025 19:25:17 +0000 (15:25 -0400)]
Remove superfluous autoprewarm check
autoprewarm_database_main() prewarms blocks from the same database. It
is passed an array of sorted BlockInfoRecords and a start and stop index
into the array. The range represented should include only blocks
belonging to global objects or blocks from a single database. Remove an
unnecessary check that the current block is from the same database and
add an assert to ensure this invariant remains. Doing so removes a
special case that makes future refactoring to accommodate read
streamifying autoprewarm easier.
Noticed off-list by Andres Freund
Peter Geoghegan [Fri, 4 Apr 2025 18:14:08 +0000 (14:14 -0400)]
Avoid extra index searches through preprocessing.
Transform low_compare and high_compare nbtree skip array inequalities
(with opclasses that offer skip support) in such a way as to allow
_bt_first to consistently apply later keys when it descends the tree.
This can lower the number of index searches for multi-column scans that
use a ">" key on one of the index's prefix columns (or use a "<" key,
when scanning backwards) when it precedes some later lower-order key.
For example, an index qual "WHERE a > 5 AND b = 2" will now be converted
to "WHERE a >= 6 AND b = 2" by a new preprocessing step that takes place
after low_compare and high_compare have been finalized. That way, the
initial call to _bt_first can use "WHERE a >= 6 AND b = 2" to find an
initial position, rather than just using "WHERE a > 5" -- "b = 2" can be
applied during every _bt_first call. There's a decent chance that this
will allow such a scan to avoid the extra search that might otherwise be
needed to determine the lowest "a" value still satisfying "WHERE a > 5".
The transformation process can only lower the total number of index
pages read when the use of a more restrictive set of initial positioning
keys in _bt_first actually allows the scan to land on some later leaf
page directly, relative to the unoptimized case (or on an earlier leaf
page directly, when scanning backwards). But the savings can really add
up in cases where an affected skip array comes after some other array.
For example, a scan indexqual "WHERE x IN (1, 2, 3) AND y > 5 AND z = 2"
can save as many as 3 _bt_first calls by applying the new transformation
to its "y" array (up to 1 extra search can be avoided per "x" element).
Follow-up to commit 92fe23d9, which added nbtree skip scan.
Author: Peter Geoghegan <[email protected]>
Reviewed-By: Matthias van de Meent <[email protected]>
Discussion: https://postgr.es/m/CAH2-Wz=FJ78K3WsF3iWNxWnUCY9f=Jdg3QPxaXE=uYUbmuRz5Q@mail.gmail.com
Peter Geoghegan [Fri, 4 Apr 2025 17:58:05 +0000 (13:58 -0400)]
Improve nbtree skip scan primitive scan scheduling.
Don't allow nbtree scans with skip arrays to end any primitive scan on
its first leaf page without giving some consideration to how many times
the scan's arrays advanced while changing at least one skip array
(though continue not caring about the number of array advancements that
only affected SAOP arrays, even during skip scans with SAOP arrays).
Now when a scan performs more than 3 such array advancements in the
course of reading a single leaf page, it is taken as a signal that the
next page is unlikely to be skippable. We'll therefore continue the
ongoing primitive index scan, at least until we can perform a recheck
against the next page's finaltup.
Testing has shown that this new heuristic occasionally makes all the
difference with skip scans that were expected to rely on the "passed
first page" heuristic added by commit
9a2e2a28. Without it, there is a
remaining risk that certain kinds of skip scans will never quite manage
to clear the initial hurdle of performing a primitive scan that lasts
beyond its first leaf page (or that such a skip scan will only clear
that initial hurdle when it has already wasted noticeably-many cycles
due to inefficient primitive scan scheduling).
Follow-up to commits 92fe23d9 and 9a2e2a28.
Author: Peter Geoghegan <[email protected]>
Reviewed-By: Matthias van de Meent <[email protected]>
Discussion: https://postgr.es/m/CAH2-Wz=RVdG3zWytFWBsyW7fWH7zveFvTHed5JKEsuTT0RCO_A@mail.gmail.com
Masahiko Sawada [Fri, 4 Apr 2025 17:39:57 +0000 (10:39 -0700)]
pg_recvlogical: Add --failover option.
This new option instructs pg_recvlogical to create the logical
replication slot with the failover option enabled. It can be used in
conjunction with the --create-slot option.
Author: Hayato Kuroda <[email protected]>
Reviewed-by: Michael Banck <[email protected]>
Reviewed-by: Masahiko Sawada <[email protected]>
Discussion: https://postgr.es/m/OSCPR01MB14966C54097FC83AF19F3516BF5AC2@OSCPR01MB14966.jpnprd01.prod.outlook.com
Jeff Davis [Fri, 4 Apr 2025 17:28:52 +0000 (10:28 -0700)]
Oversight in commit b81ffa13e3.
Should warn if a materialized view may be affected, as well.
Peter Geoghegan [Fri, 4 Apr 2025 16:27:52 +0000 (12:27 -0400)]
Further optimize nbtree search scan key comparisons.
Postgres 17 commit e0b1ee17 added two complementary optimizations to
nbtree: the "prechecked" and "firstmatch" optimizations. _bt_readpage
was made to avoid needlessly evaluating keys that are guaranteed to be
satisfied by applying page-level context. "prechecked" did this for
keys required in the current scan direction, while "firstmatch" did it
for keys required in the opposite-to-scan direction only.
The "prechecked" design had a number of notable issues. It didn't
account for the fact that an = array scan key's sk_argument field might
need to advance at the point of the page precheck (it didn't check the
precheck tuple against the key's array, only the key's sk_argument,
which needlessly made it ineffective in cases involving stepping to a
page having advanced the scan's arrays using a truncated high key).
"prechecked" was also completely ineffective when only one scan key
wasn't guaranteed to be satisfied by every tuple (it didn't recognize
that it was still safe to avoid evaluating other, earlier keys).
The "firstmatch" optimization had similar limitations. It could only be
applied after _bt_readpage found its first matching tuple, regardless of
why any earlier tuples failed to satisfy the scan's index quals. This
allowed unsatisfied non-required scan keys to impede the optimization.
Replace both optimizations with a new optimization, without any of these
limitations: the "startikey" optimization. Affected _bt_readpage calls
generate a page-level key offset ("startikey") that their _bt_checkkeys
calls can then start at. This is an offset to the first key that isn't
known to be satisfied by every tuple on the page.
Although this is independently useful work, its main goal is to avoid
performance regressions with index scans that use skip arrays, but still
never manage to skip over irrelevant leaf pages. We must avoid wasting
CPU cycles on overly granular skip array maintenance in these cases.
The new "startikey" optimization helps with this by selectively
disabling array maintenance for the duration of a _bt_readpage call.
This has no lasting consequences for the scan's array keys (they'll
still reliably track the scan's progress through the index's key space
whenever the scan is "between pages").
Skip scan adds skip arrays during preprocessing using simple, static
rules, and decides how best to navigate/apply the scan's skip arrays
dynamically, at runtime. The "startikey" optimization enables this
approach. As a result of all this, the planner doesn't need to generate
distinct, competing index paths (one path for skip scan, another for an
equivalent traditional full index scan). The overall effect is to make
scan runtime close to optimal, even when the planner works off an
incorrect cardinality estimate. Scans will also perform well given a
skipped column with data skew: individual groups of pages with many
distinct values (in respect of a skipped column) can be read about as
efficiently as before -- without the scan being forced to give up on
skipping over other groups of pages that are provably irrelevant.
Many scans that cannot possibly skip will still benefit from the use of
skip arrays, since they'll allow the "startikey" optimization to be as
effective as possible (by allowing preprocessing to mark all the scan's
keys as required). A scan that uses a skip array on "a" for a qual
"WHERE a BETWEEN 0 AND 1_000_000 AND b = 42" is often much faster now,
even when every tuple read by the scan has its own distinct "a" value.
However, there are still some remaining regressions, affecting certain
trickier cases.
Scans whose index quals have several range skip arrays, each on some
high cardinality column, can still be slower than they were before the
introduction of skip scan -- even with the new "startikey" optimization.
There are also known regressions affecting very selective index scans
that use a skip array. The underlying issue with such selective scans
is that they never get as far as reading a second leaf page, and so will
never get a chance to consider applying the "startikey" optimization.
In principle, all regressions could be avoided by teaching preprocessing
to not add skip arrays whenever they aren't expected to help, but it
seems best to err on the side of robust performance.
Follow-up to commit 92fe23d9, which added nbtree skip scan.
Author: Peter Geoghegan <[email protected]>
Reviewed-By: Heikki Linnakangas <[email protected]>
Reviewed-By: Masahiro Ikeda <[email protected]>
Reviewed-By: Matthias van de Meent <[email protected]>
Discussion: https://postgr.es/m/CAH2-Wz=Y93jf5WjoOsN=xvqpMjRy-bxCE037bVFi-EasrpeUJA@mail.gmail.com
Discussion: https://postgr.es/m/CAH2-WznWDK45JfNPNvDxh6RQy-TaCwULaM5u5ALMXbjLBMcugQ@mail.gmail.com
Peter Geoghegan [Fri, 4 Apr 2025 16:27:04 +0000 (12:27 -0400)]
Add nbtree skip scan optimization.
Teach nbtree multi-column index scans to opportunistically skip over
irrelevant sections of the index given a query with no "=" conditions on
one or more prefix index columns. When nbtree is passed input scan keys
derived from a predicate "WHERE b = 5", new nbtree preprocessing steps
output "WHERE a = ANY(<every possible 'a' value>) AND b = 5" scan keys.
That is, preprocessing generates a "skip array" (and an output scan key)
for the omitted prefix column "a", which makes it safe to mark the scan
key on "b" as required to continue the scan. The scan is therefore able
to repeatedly reposition itself by applying both the "a" and "b" keys.
A skip array has "elements" that are generated procedurally and on
demand, but otherwise works just like a regular ScalarArrayOp array.
Preprocessing can freely add a skip array before or after any input
ScalarArrayOp arrays. Index scans with a skip array decide when and
where to reposition the scan using the same approach as any other scan
with array keys. This design builds on the design for array advancement
and primitive scan scheduling added to Postgres 17 by commit 5bf748b8.
Testing has shown that skip scans of an index with a low cardinality
skipped prefix column can be multiple orders of magnitude faster than an
equivalent full index scan (or sequential scan). In general, the
cardinality of the scan's skipped column(s) limits the number of leaf
pages that can be skipped over.
The core B-Tree operator classes on most discrete types generate their
array elements with the help of their own custom skip support routine.
This infrastructure gives nbtree a way to generate the next required
array element by incrementing (or decrementing) the current array value.
It can reduce the number of index descents in cases where the next
possible indexable value frequently turns out to be the next value
stored in the index. Opclasses that lack a skip support routine fall
back on having nbtree "increment" (or "decrement") a skip array's
current element by setting the NEXT (or PRIOR) scan key flag, without
directly changing the scan key's sk_argument. These sentinel values
behave just like any other value from an array -- though they can never
locate equal index tuples (they can only locate the next group of index
tuples containing the next set of non-sentinel values that the scan's
arrays need to advance to).
A skip array's range is constrained by "contradictory" inequality keys.
For example, a skip array on "x" will only generate the values 1 and 2
given a qual such as "WHERE x BETWEEN 1 AND 2 AND y = 66". Such a skip
array qual usually has near-identical performance characteristics to a
comparable SAOP qual "WHERE x = ANY('{1, 2}') AND y = 66". However,
improved performance isn't guaranteed. Much depends on physical index
characteristics.
B-Tree preprocessing is optimistic about skipping working out: it
applies static, generic rules when determining where to generate skip
arrays, which assumes that the runtime overhead of maintaining skip
arrays will pay for itself -- or lead to only a modest performance loss.
As things stand, these assumptions are much too optimistic: skip array
maintenance will lead to unacceptable regressions with unsympathetic
queries (queries whose scan can't skip over many irrelevant leaf pages).
An upcoming commit will address the problems in this area by enhancing
_bt_readpage's approach to saving cycles on scan key evaluation, making
it work in a way that directly considers the needs of = array keys
(particularly = skip array keys).
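To illustrate with a sketch (table, index, and data are hypothetical):
    CREATE TABLE sales (region text, sold_on date, amount numeric);
    CREATE INDEX sales_region_sold_on_idx ON sales (region, sold_on);
    -- No "=" condition on the leading column "region"; the scan behaves
    -- as if the qual were "WHERE region = ANY(<every region>) AND
    -- sold_on = ...", skipping between regions rather than scanning
    -- the whole index:
    SELECT * FROM sales WHERE sold_on = '2025-01-01';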
Author: Peter Geoghegan <[email protected]>
Reviewed-By: Masahiro Ikeda <[email protected]>
Reviewed-By: Heikki Linnakangas <[email protected]>
Reviewed-By: Matthias van de Meent <[email protected]>
Reviewed-By: Tomas Vondra <[email protected]>
Reviewed-By: Aleksander Alekseev <[email protected]>
Reviewed-By: Alena Rybakina <[email protected]>
Discussion: https://postgr.es/m/CAH2-Wzmn1YsLzOGgjAQZdn1STSG_y8qP__vggTaPAYXJP+G4bw@mail.gmail.com
Tom Lane [Fri, 4 Apr 2025 15:57:26 +0000 (11:57 -0400)]
Stabilize regression test from c0962a113.
Per buildfarm.
Co-authored-by: Alena Rybakina <[email protected]>
Co-authored-by: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/srnuqlttuimzmvoulhsrbgvj4vnul6b65osswvua7sfkqsvmuy@yg7apybpxp34
Melanie Plageman [Fri, 4 Apr 2025 15:34:06 +0000 (11:34 -0400)]
Fix autoprewarm neglect of tablespaces
While prewarming blocks from a dump file, autoprewarm_database_main()
mistakenly ignored tablespace when detecting the beginning of the next
relation to prewarm. Because RelFileNumbers are only unique within a
tablespace, autoprewarm could miss prewarming blocks from a
relation with the same RelFileNumber in a different tablespace.
Though this situation is likely rare in practice, it's best to make the
code correct. Do so by explicitly checking the tablespace, in addition
to the RelFileNumber, when detecting a new relation.
Reported-by: Heikki Linnakangas <[email protected]>
Discussion: https://postgr.es/m/97c36982-603b-494a-95f4-aaf2a12ac27e%40iki.fi
Nathan Bossart [Fri, 4 Apr 2025 14:41:59 +0000 (09:41 -0500)]
Add commit e1a8b1ad58 to .git-blame-ignore-revs.
Nathan Bossart [Fri, 4 Apr 2025 14:38:22 +0000 (09:38 -0500)]
Re-pgindent pg_largeobject.c after commit 0d6c477664.