Add heap usage estimate to ClusterInfo #128723

Merged

Conversation

@nicktindall (Contributor) commented Jun 2, 2025

In preparation for the heap-usage allocation decider, this PR provides the plumbing necessary to put heap usage stats in the ClusterInfo.

Note that in stateful ES, for the moment, there will be no implementation of HeapUsageSupplier; ClusterInfo#getNodesHeapUsage() will just return an empty map.

Relates: ES-11445
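To make the shape of that plumbing concrete, here is a minimal sketch of the supplier seam the description implies. HeapUsageSupplier, HeapUsage, and the getClusterHeapUsage listener signature appear in this PR's diffs; the EMPTY constant is an assumption added for illustration of the stateful (no-op) case:

```java
import org.elasticsearch.action.ActionListener;

import java.util.Map;

// Sketch of the pluggable seam: stateful ES provides no implementation,
// so ClusterInfo#getNodesHeapUsage() ends up returning an empty map.
public interface HeapUsageSupplier {

    // Hypothetical default used when no plugin supplies an implementation.
    HeapUsageSupplier EMPTY = listener -> listener.onResponse(Map.of());

    void getClusterHeapUsage(ActionListener<Map<String, HeapUsage>> listener);
}
```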

@nicktindall added the >non-issue and :Distributed Coordination/Allocation labels and removed the v9.1.0 label Jun 2, 2025
@@ -205,6 +205,7 @@ private static ClusterInfo createClusterInfo(ShardRouting shard, Long size) {
Map.of(ClusterInfo.shardIdentifierFromRouting(shard), size),
Map.of(),
Map.of(),
Map.of(),
@nicktindall (Contributor, Author) commented:
There's a lot of noise due to adding the parameter to the ClusterInfo. We could implement a builder to avoid this next time we add a field?

@nicktindall (Contributor, Author) commented Jun 2, 2025:

Or even do it as a separate PR to reduce the noise in this one?
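For illustration, a builder along the lines floated above might look like the following. This is a sketch of the idea, not code from the PR; the class name is hypothetical and the field set is abbreviated:

```java
import java.util.Map;

// Sketch: a builder with empty-map defaults, so adding a new ClusterInfo
// field only touches the builder and the call sites that actually set it.
public class ClusterInfoBuilder {
    private Map<String, Long> shardSizes = Map.of();
    private Map<String, HeapUsage> nodesHeapUsage = Map.of();
    // ... remaining ClusterInfo fields, all defaulting to empty

    public ClusterInfoBuilder shardSizes(Map<String, Long> shardSizes) {
        this.shardSizes = shardSizes;
        return this;
    }

    public ClusterInfoBuilder nodesHeapUsage(Map<String, HeapUsage> nodesHeapUsage) {
        this.nodesHeapUsage = nodesHeapUsage;
        return this;
    }

    public ClusterInfo build() {
        return new ClusterInfo(/* ..., */ shardSizes, nodesHeapUsage /* , ... */);
    }
}
```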

@elasticsearchmachine added the serverless-linked label Jun 2, 2025
@@ -990,6 +986,9 @@ public Map<String, String> queryFields() {
.map(TerminationHandlerProvider::handler);
terminationHandler = getSinglePlugin(terminationHandlers, TerminationHandler.class).orElse(null);

if (clusterInfoService instanceof InternalClusterInfoService icis) {
icis.setHeapUsageSupplier(serviceProvider.newHeapUsageSupplier(pluginsService));
}
@nicktindall (Contributor, Author) commented Jun 3, 2025:
I really don't love this; I'll dig into whether there's a nicer/more idiomatic way to do it. Suggestions are welcome.

A Member commented:

What I had in mind is to set this in Stateless#createComponents, something like

services.allocationService().setHeapUsageSupplier();

AllocationService has a reference to ClusterInfoService and can then handle it internally. This avoids changes to the Plugin interface, which I think is better since we might want to move the heap decider to stateful in future.
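A sketch of the suggested wiring on the serverless side, under the assumptions that PluginServices exposes the AllocationService as the comment implies and that MemoryMetricsHeapUsageSupplier is the (hypothetical) name of the serverless implementation:

```java
import java.util.Collection;
import java.util.List;

import org.elasticsearch.plugins.Plugin;

// Sketch (not the PR's code): the serverless plugin wires its supplier in
// createComponents, so the core Plugin interface stays untouched.
public class Stateless extends Plugin {

    @Override
    public Collection<?> createComponents(PluginServices services) {
        // AllocationService holds a reference to the ClusterInfoService and
        // forwards the supplier to it internally.
        services.allocationService()
            .setHeapUsageSupplier(new MemoryMetricsHeapUsageSupplier(/* deps */));
        return List.of(/* the plugin's other components */);
    }
}
```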

@nicktindall (Contributor, Author) replied:
Done in 23eb8e6

@nicktindall requested a review from ywangd June 3, 2025 07:03
@nicktindall changed the title from "Add heap memory to cluster info" to "Add heap memory to ClusterInfo" Jun 3, 2025
@nicktindall marked this pull request as ready for review June 3, 2025 07:40
@nicktindall requested a review from a team as a code owner June 3, 2025 07:40
@elasticsearchmachine (Collaborator) commented:
Pinging @elastic/es-distributed-coordination (Team:Distributed Coordination)

@elasticsearchmachine added the Team:Distributed Coordination label Jun 3, 2025
@@ -131,6 +134,16 @@ void setUpdateFrequency(TimeValue updateFrequency) {
this.updateFrequency = updateFrequency;
}

/**
* This can be provided by plugins, which are initialised long after the ClusterInfoService is created
A Member commented:
Why does this need to be provided by plugins? Shouldn't it be backed by JvmInfo/JvmStats?

@nicktindall (Contributor, Author) replied Jun 3, 2025:

The heap usage supplier supplies estimated heap usage for the whole cluster. The single implementation we have of this exists only in Serverless at the moment; we're using the heap usage estimates provided by MemoryMetricsService, which are aggregated on the master using transport-layer messages.

The heap usage in this case is not the JVM-provided heap usage but the estimate for the shards that are allocated to a node and the merges that are running on a node (i.e. not "real" heap usage). The estimate is the same one we use for autoscaling.

@nicktindall (Contributor, Author) added:

So that's why it needed to be pluggable: at this stage it only exists in serverless.
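Illustratively, the estimate described above is a sum over per-shard and per-merge contributions rather than a JvmStats reading. The method below is a purely hypothetical stand-in for the serverless-side MemoryMetricsService logic, which is not part of this PR:

```java
import java.util.List;

// Illustrative only: estimated heap for one node, built from the shard-level
// estimates (the same ones used for autoscaling) plus running merges.
long estimateNodeHeapUsageBytes(List<Long> shardHeapEstimates, List<Long> mergeHeapEstimates) {
    long total = 0;
    for (long shardBytes : shardHeapEstimates) {
        total += shardBytes; // per-shard estimate for shards allocated to this node
    }
    for (long mergeBytes : mergeHeapEstimates) {
        total += mergeBytes; // merges currently running on this node also count
    }
    return total;
}
```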

A Member replied:

> The heap usage in this case is not the JVM-provided heap usage but the estimate for the shards that are allocated to a node

Let's call it something other than "heap usage" then? Overloading terms increases the complexity of understanding the codebase, and "heap usage" already has a well-defined meaning.

I would expect this to use a term that includes "shard", since it is shard-level data.

@ywangd (Member) left a comment:

I left some high-level comments.

Comment on lines 206 to 216
chunk(
(builder, p) -> builder.endArray() // end "reserved_sizes"
.startObject("heap_usage")
),
Iterators.map(nodesHeapUsage.entrySet().iterator(), c -> (builder, p) -> {
builder.startObject(c.getKey());
c.getValue().toShortXContent(builder);
builder.endObject();
return builder;
}),
endObject() // end "heap_usage"
A Member commented:
We probably don't want to include the new heap usage in the xcontent just yet? This will be a visible stateful change via the AllocationExplain API. Since we may still want to iterate on this field once we get more data from serverless, not including it here gives us flexibility.

@nicktindall (Contributor, Author) replied:
Removed in 85fd019

/**
* Record representing the heap usage for a single cluster node
*/
public record HeapUsage(String nodeId, String nodeName, long totalBytes, long freeBytes) implements ToXContentFragment, Writeable {
A Member commented:
I guess freeBytes is probably modelled after DiskUsage, but I feel usedBytes (and in turn usedPercent) is more suitable here, since that is what we actually measure and what is more commonly talked about. I think DiskUsage uses freeBytes due to the methods available via Java NIO's FileStore class.

@nicktindall (Contributor, Author) replied:
Done in 887bcaf
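After that rework, the record presumably stores used bytes and derives the rest. A sketch of the idea, with method names that are assumptions and the serialization/XContent interfaces of the original record omitted:

```java
// Sketch of the reworked record: store what we actually measure (used bytes)
// rather than mirroring DiskUsage's freeBytes, and derive the other values.
public record ShardHeapUsage(String nodeId, long totalBytes, long estimatedUsedBytes) {

    public double estimatedUsedPercent() {
        // Guard against a zero total so the percentage is always well defined.
        return totalBytes == 0 ? 0.0 : 100.0 * estimatedUsedBytes / totalBytes;
    }

    public long estimatedFreeBytes() {
        return totalBytes - estimatedUsedBytes;
    }
}
```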


@nicktindall requested a review from rjernst June 6, 2025 02:06
…ory_to_cluster_info

# Conflicts:
#	server/src/main/java/org/elasticsearch/TransportVersions.java
@rjernst (Member) left a comment:
One last naming nit but otherwise LGTM

*
* @param listener The listener which will receive the results
*/
void getClusterHeapUsage(ActionListener<Map<String, ShardHeapUsage>> listener);
A Member commented:
nit: this doesn't seem like a getter, which would normally return the thing it's getting. Perhaps "collectShardHeapUsages"?

@nicktindall (Contributor, Author) replied:
Good point, addressed in 7b5bf95


import java.util.Map;

public interface ShardHeapUsageSupplier {
A Member commented:
nit: this isn't really a supplier; it's not creating them exactly, it's collecting them from the cluster. So maybe a "Collector" suffix and a collect method?

@nicktindall (Contributor, Author) replied:
Addressed in 7b5bf95
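Combining both nits, the interface after 7b5bf95 plausibly looks like the sketch below; only ShardHeapUsage and the collectShardHeapUsages name suggested above are taken from the review, the rest is an assumption:

```java
import org.elasticsearch.action.ActionListener;

import java.util.Map;

// Sketch: a "collector" that gathers shard heap usages from the cluster,
// rather than a "supplier" that creates them.
public interface ShardHeapUsageCollector {

    /**
     * Collects the estimated shard heap usages for every node in the cluster.
     *
     * @param listener receives a map of node ID to that node's ShardHeapUsage
     */
    void collectShardHeapUsages(ActionListener<Map<String, ShardHeapUsage>> listener);
}
```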

@nicktindall removed the serverless-linked label Jun 7, 2025
@elasticsearchmachine added the serverless-linked label Jun 7, 2025
@nicktindall merged commit 0702e42 into elastic:main Jun 12, 2025
18 checks passed
@nicktindall deleted the ES_11445_add_heap_memory_to_cluster_info branch June 12, 2025 03:46
valeriy42 pushed a commit to valeriy42/elasticsearch that referenced this pull request Jun 12, 2025
Co-authored-by: ywangd <[email protected]>
Co-authored-by: rjernst <[email protected]>
Relates: ES-11445
mridula-s109 pushed a commit to mridula-s109/elasticsearch that referenced this pull request Jun 12, 2025
Co-authored-by: ywangd <[email protected]>
Co-authored-by: rjernst <[email protected]>
Relates: ES-11445
mridula-s109 added a commit that referenced this pull request Jun 12, 2025
* Add heap usage estimate to ClusterInfo (#128723)

Co-authored-by: ywangd <[email protected]>
Co-authored-by: rjernst <[email protected]>
Relates: ES-11445
elasticsearchmachine pushed a commit that referenced this pull request Jun 17, 2025
This PR adds a new setting to toggle the collection of shard heap
usages and wires ShardHeapUsage into ClusterInfoSimulator.

Relates: #128723
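For context, a toggle like that is typically a dynamic boolean cluster setting. A sketch follows; the setting key and default are assumptions, not the values from that PR:

```java
import org.elasticsearch.common.settings.Setting;

// Illustrative only: the real setting key and default may differ.
public static final Setting<Boolean> SHARD_HEAP_USAGE_COLLECTION_ENABLED = Setting.boolSetting(
    "cluster.info.shard_heap_usage.enabled", // hypothetical key
    true,                                    // assumed default
    Setting.Property.Dynamic,                // toggleable at runtime via cluster settings
    Setting.Property.NodeScope
);
```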
Labels
:Distributed Coordination/Allocation: All issues relating to the decision making around placing a shard (both master logic & on the nodes)
>non-issue
serverless-linked: Added by automation, don't add manually
Team:Distributed Coordination: Meta label for Distributed Coordination team
v9.1.0