fix(deps): update module github.com/twmb/franz-go to v1.19.4 (main) #17694
This PR contains the following updates:
v1.18.1 -> v1.19.4
Warning
Some dependencies could not be looked up. Check the Dependency Dashboard for more information.
Release Notes
twmb/franz-go (github.com/twmb/franz-go)
v1.19.4
Compare Source
===
Fixes one bug introduced from the prior release (an obvious data race in
retrospect), and one data race introduced in 1.19.0. I've looped the tests more
in this release and am not seeing further races. I don't mean to downplay the
severity here, but these are races on pointer-sized variables where reading the
before or after state makes little difference. One of the read/write races is
on a context.Context, so there are actually two pointer-sized reads & writes --
but reading the (effectively) type vtable for the new context and then the data
pointer for the old context doesn't really break things here. Anyway, you
should upgrade.
This also adds a workaround for Azure EventHubs, which does not handle
ApiVersions correctly when the broker does not recognize the version we are
sending. The broker should reply with an UNSUPPORTED_VERSION error and reply
with the version the broker can handle. Instead, Azure is resetting the
connection. To work around this, we detect a connection reset twice and then
downgrade the request we send client side to 0.
7910f6b6 kgo: retry "connection reset by peer" from ApiVersions to work around EventHubs
d310cabd kgo: fix data read/write race on ctx variable
7a5ddcec kgo bugfix: guard sink batch field access more

v1.19.3
Compare Source
===
This release fully fixes (and has a positive field report) the KIP-890 problem
that was meant to be fixed in v1.19.2. See the commit description for more
details.
a13f633b kgo: remove pinReq wrapping request

v1.19.2
Compare Source
===
This release fixes two bugs, a data race and a misunderstanding in some of the
implementation of KIP-890.
The data race has existed for years and has only been caught once. It could
only be encountered in a specific section of decoding a fetch response WHILE a
metadata response was concurrently being handled, and the metadata response
indicated a partition changed leaders. The race was benign; it was a read race,
and the decoded response is always discarded because a metadata change
happened. Regardless, metadata handling and fetch response decoding are no
longer concurrent.
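The shape of the fix is simple serialization. Here is a minimal, generic sketch (not franz-go's actual internals) of guarding metadata handling and fetch decoding with one mutex so neither can observe the other's partial state:

```go
package main

import (
	"fmt"
	"sync"
)

// consumer is a toy illustration: one mutex now guards both metadata
// handling and fetch-response decoding.
type consumer struct {
	mu      sync.Mutex
	leaders map[string]int32 // partition leader per topic, simplified
}

// handleMetadata records a leader change from a metadata response.
func (c *consumer) handleMetadata(topic string, leader int32) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.leaders[topic] = leader
}

// decodeFetch reads leader state while decoding; serialized with the above.
func (c *consumer) decodeFetch(topic string) int32 {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.leaders[topic]
}

func main() {
	c := &consumer{leaders: map[string]int32{"foo": 1}}
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); c.handleMetadata("foo", 2) }()
	go func() { defer wg.Done(); _ = c.decodeFetch("foo") }()
	wg.Wait()
	fmt.Println(c.decodeFetch("foo"))
}
```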
For KIP-890, some things were not called out clearly (imo) in the KIP.
If your 4.0 cluster had not yet enabled the transaction.version feature v2+,
then transactions would not work in this client. As it turns out, Kafka 4
finally started using a "features" field, introduced in v2.6, in a way that is
important to clients. In short: I opted into KIP-890 behavior based on whether
a broker could handle requests (produce v12+, end txn v5+, etc). I also needed
to check if "transaction.version" was v2+. Features handling is now supported
in the client, and this single client-relevant feature is now implemented.
See the commits for more details.
dda08fd9 kgo: fix KIP-890 handling of the transaction.version feature
8a364819 kgo: fix data race in fetch response handling

v1.19.1
Compare Source
===
This release fixes a very old bug that finally started being possible to hit in
v1.19.0. The v1.19.0 release does not work for Kafka versions pre-4.0. This
release fixes that (by fixing the bug that has existed since Kafka 2.4) and
adds a GH action to test against Kafka 3.8 to help prevent regressions against
older brokers as this library marches forward.
50aa74f1 kgo bugfix: ApiVersions replies only with key 18, not all keys

v1.19.0
Compare Source
===
This is the largest release of franz-go yet. The last patch release was Jan 20, '25.
The last minor release was Oct 14, '24.
A big reason for delays the past few months has been spin-looping tests
and investigating any issue that popped up. Another big delay is that Kafka has
a full company adding features -- some questionable -- and I'm one person that
spent a significant amount of time catching this library up with the latest
Kafka release. Lastly, Kafka v3.9 was released three weeks after my last
major release, and simultaneously, a few requests came in for new features in
this library that required a lot of time. I wanted a bit of a break and only
resumed development more seriously in late Feb. This release is likely >100hrs
of work over the last ~4mo, from understanding new features and implementing
them, reviewing PRs, and debugging rare test failures.
The next Kafka release is poised to implement more large features (share
groups), which unfortunately will mean even more heads-down time trying to
bolt yet another feature into an already large library. I hope that Confluent
chills with introducing massive client-impacting changes; they've introduced
more in the past year than was introduced from 2019-2023.
Bug fixes / changes / deprecations
- The BasicLogger will no longer panic if only a single key (no val) is used. Thanks @vicluq!
- An internal coding error around managing fetch concurrency was fixed. Thanks @iimos!
- Some off-by-ones with retries were fixed (tldr: we retried one fewer time than configured).
- AllowAutoTopicCreation and ConsumeRegex can now be used together. Previously, topics would not be created if you were producing and consuming from the same client AND if you used the ConsumeRegex option.
- A data race in the consumer code path has been fixed. The race is hard to encounter (which is why it never came up even in my weeks of spin-looping tests with -race). See PR #984 for more details.
- EndBeginTxnUnsafe is deprecated and unused. EndAndBeginTransaction now flushes, and you cannot produce while the function happens (the function will just be stuck flushing). As of KIP-890, the behavior that the library relied on is now completely unsupported. Trying to produce while ending & beginning a transaction very occasionally leads to duplicate messages. The function is now just a shortcut for flush, end, begin.
- The kversion package guts have been entirely reimplemented; version guessing should be more reliable.
- OnBrokerConnect now encompasses the entire SASL flow (if using SASL) rather than just connection dialing. This allows you more visibility into successful or broken connections, as well as visibility into how long it actually takes to initialize a connection. The dialDur arg has been renamed to initDur. You may see the duration increase in your metrics. If enough feedback comes in that this is confusing or unacceptable, I may issue a patch to revert the change and instead introduce a separate hook in the next minor release. I do not aim to create another minor release for a while.
Features / improvements
This release adds support for user-configurable memory pooling in a few select
locations. See any "Pool"-suffixed interface type in the documentation. You can
use this to add bucketed pooling (or whatever strategy you choose) to cut down
on memory waste in a few areas. As well, a few allocations that were previously
many tiny allocs have been converted to slab allocations (slice backed). Lastly,
if you opt into kgo.Record pooling, the Record type has a new Recycle method to
send it and all other pooled slices back to their pools.
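As an illustration of the bucketed-pooling strategy mentioned above, here is a self-contained sketch using the standard library's sync.Pool. The bucketPool type and its sizes are hypothetical, not part of franz-go's Pool interfaces:

```go
package main

import (
	"fmt"
	"sync"
)

// bucketPool hands out byte slices from size-classed sync.Pools; requests
// larger than every bucket fall back to a plain allocation.
type bucketPool struct {
	buckets []sync.Pool
	sizes   []int
}

func newBucketPool(sizes ...int) *bucketPool {
	p := &bucketPool{sizes: sizes, buckets: make([]sync.Pool, len(sizes))}
	for i, sz := range sizes {
		sz := sz
		p.buckets[i].New = func() any { return make([]byte, 0, sz) }
	}
	return p
}

// Get returns an empty slice with capacity for at least n bytes.
func (p *bucketPool) Get(n int) []byte {
	for i, sz := range p.sizes {
		if n <= sz {
			return p.buckets[i].Get().([]byte)[:0]
		}
	}
	return make([]byte, 0, n) // too big for any bucket
}

// Put returns a slice to the smallest bucket that can hold it.
func (p *bucketPool) Put(b []byte) {
	for i, sz := range p.sizes {
		if cap(b) <= sz {
			p.buckets[i].Put(b[:0])
			return
		}
	}
}

func main() {
	pool := newBucketPool(1024, 16384)
	buf := pool.Get(500)
	buf = append(buf, "record bytes"...)
	pool.Put(buf)
	fmt.Println(cap(pool.Get(500)) >= 500)
}
```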
You can now completely override how compression or decompression is done via
the new WithCompressor and WithDecompressor options. This allows you to use
libraries or options that franz-go does not automatically support, perhaps
opting for higher performance libraries or using more memory pooling behind
the scenes.
ConsumeResetOffset has been split into two options, ConsumeResetOffset and
ConsumeStartOffset. The documentation has been cleaned up. I personally always
found it confusing to use the reset offset both for what to start consuming
from and for what to reset to when the client sees an offset out of range
error. The start offset defaults to the reset offset (and vice versa) if you
only set one.
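The mutual defaulting rule can be written out explicitly; the resolve helper below is hypothetical (not franz-go API), only illustrating "set one, the other mirrors it":

```go
package main

import "fmt"

// offsets models the two options; nil means "not configured".
type offsets struct {
	start, reset *int64
}

// resolve applies the defaulting rule: if only the start or only the reset
// offset is configured, the other defaults to the same value. The -1
// placeholder default is illustrative.
func resolve(o offsets) (start, reset int64) {
	const defaultOffset = -1
	switch {
	case o.start == nil && o.reset == nil:
		return defaultOffset, defaultOffset
	case o.start == nil:
		return *o.reset, *o.reset
	case o.reset == nil:
		return *o.start, *o.start
	default:
		return *o.start, *o.reset
	}
}

func main() {
	s := int64(42)
	start, reset := resolve(offsets{start: &s})
	fmt.Println(start, reset)
}
```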
For users that produce infrequently but want latency to be low when producing,
the client now has an EnsureProduceConnectionIsOpen method. You can call this
before producing to force connections to be open.
The client now has a RequestCachedMetadata function, which can be used to
request metadata only if the information you're requesting is not cached, or
is cached but too stale. This can be very useful for admin packages that need
metadata to do anything else -- rather than requesting metadata for every
single admin operation, you can have metadata requested once and use that
repeatedly. Notably, I'll be switching kadm to using this function.

KIP-714 support: the client now internally aggregates a small set of metrics
and sends them to the broker by default. This client implements all required
metrics and a subset of recommended metrics (the ones that make more sense).
To opt out of metrics collection & sending, you can use the new
DisableClientMetrics option. You can also provide your own metrics to send to
the broker via the new UserMetricsFn option. The client does not attempt to
sanitize any user-provided metric names; be sure you provide the names in the
correct format (see docs).
KIP-848 support: this exists but is hidden. You must explicitly opt in by using
the new WithContext option, and the context must have a special string key,
opt_in_kafka_next_gen_balancer_beta. I noticed while testing that if you
repeat ConsumerGroupHeartbeat requests (i.e. what can happen when clients are
on unreliable networks), group members repeatedly get fenced. This is
recoverable, but it happens way more than it should, and I don't believe the
broker implementation to be great at the moment. Confluent historically
ignores any bug reports I create on the KAFKA issue tracker, but if you would
like to follow along or perhaps nudge to help get a reply, please chime in on
KAFKA-19222, KAFKA-19233, and KAFKA-19235.
A few other more niche APIs have been added. See the full breadth of new APIs
below and check pkg.go.dev for docs for any API you're curious about.
API additions
This section contains all net-new APIs in this release. See the documentation
on pkg.go.dev.
Relevant commits
This is a small selection of what I think are the most pertinent commits in
this release. This release is very large, though. Many commits and PRs have
been left out that introduce or change smaller things.
07e57d3e kgo: remove all EndAndBeginTransaction internal "optimizations"
a54ffa96 kgo: add ConsumeStartOffset, expand offset docs, update readme KIPs
PR #988 kgo: add support for KIP-714 (client metrics)
7a17a03c kgo: fix data race in consumer code path
ae96af1d kgo: expose IsRetryableBrokerErr
1eb82fee kgo: add EnsureProduceConnectionIsOpen
fc778ba8 kgo: fix AllowAutoTopicCreation && ConsumeRegex when used together
ae7eea7c kgo: add DisableFetchCRCValidation option
6af90823 kgo: add the ability to pool memory in a few places while consuming
8c7a36db kgo: export utilities for decompressing and parsing partition fetch responses
33400303 kgo: do a slab allocation for Record's when processing a batch
39c2157a kgo: add WithCompressor and WithDecompressor options
9252a6b6 kgo: export Compressor and Decompressor
be15c285 kgo: add Client.RequestCachedMetadata
fc040bc0 kgo: add OnRebootstrapRequired
c8aec00a kversion: document changes through 4.0
718c5606 kgo: remove all code handling EndBeginTxnUnsafe, make it a no-op
5494c59e kversions: entirely reimplement internals
9d266fcd kgo: allow outstanding produce requests to be context canceled if the user disables idempotency
c60bf4c2 kgo: add DefaultProduceTopicAlways ProducerOpt
50cfe060 kgo: fix off-by-one with retries accounting
e9ba83a6, 05099ba0 kgo: add WithContext, Client.Context()
ddb0c0c3 kgo: fix cancellation of a fetch in manageFetchConcurrency
83843a53 kgo: fixed panic when keyvals len equals 1

Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Enabled.
♻ Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.