ESQL: Speed loading stored fields #127348
Conversation
This speeds up loading from stored fields by opting more blocks into the "sequential" strategy. It really kicks in when loading stored fields like `text`, and when you need less than 100% of the documents but more than, say, 10%. It's most useful when you need something like 99.9% of a field's documents. Here are the perf numbers:

```
%100.0 {"took": 403 -> 401,"documents_found":1000000}
%099.9 {"took":3990 -> 436,"documents_found": 999000}
%099.0 {"took":4069 -> 440,"documents_found": 990000}
%090.0 {"took":3468 -> 421,"documents_found": 900000}
%030.0 {"took":1213 -> 152,"documents_found": 300000}
%020.0 {"took": 766 -> 104,"documents_found": 200000}
%010.0 {"took": 397 ->  55,"documents_found": 100000}
%009.0 {"took": 352 -> 375,"documents_found":  90000}
%008.0 {"took": 304 -> 317,"documents_found":  80000}
%007.0 {"took": 273 -> 287,"documents_found":  70000}
%005.0 {"took": 199 -> 204,"documents_found":  50000}
%001.0 {"took":  46 ->  46,"documents_found":  10000}
```

Let's explain this with an example. First, jump to `main` and load a million documents:

```
rm -f /tmp/bulk
for a in {1..1000}; do
  echo '{"index":{}}' >> /tmp/bulk
  echo '{"text":"text '$(printf %04d $a)'"}' >> /tmp/bulk
done
curl -s -uelastic:password -HContent-Type:application/json -XDELETE localhost:9200/test
for a in {1..1000}; do
  echo -n $a:
  curl -s -uelastic:password -HContent-Type:application/json -XPOST localhost:9200/test/_bulk?pretty --data-binary @/tmp/bulk | grep errors
done
curl -s -uelastic:password -HContent-Type:application/json -XPOST localhost:9200/test/_forcemerge?max_num_segments=1
curl -s -uelastic:password -HContent-Type:application/json -XPOST localhost:9200/test/_refresh
echo
```

Now query them all. Run this a few times until it's stable:

```
echo -n "%100.0 "
curl -s -uelastic:password -HContent-Type:application/json -XPOST 'localhost:9200/_query?pretty' -d'{
  "query": "FROM test | STATS SUM(LENGTH(text))",
  "pragma": {
    "data_partitioning": "shard"
  }
}' | jq -c '{took, documents_found}'
```

Now fetch 99.9% of documents:

```
echo -n "%099.9 "
curl -s -uelastic:password -HContent-Type:application/json -XPOST 'localhost:9200/_query?pretty' -d'{
  "query": "FROM test | WHERE NOT text.keyword IN (\"text 0998\") | STATS SUM(LENGTH(text))",
  "pragma": {
    "data_partitioning": "shard"
  }
}' | jq -c '{took, documents_found}'
```

This should spit out something like:

```
%100.0 {"took":403,"documents_found":1000000}
%099.9 {"took":4098,"documents_found":999000}
```

We're loading *fewer* documents but it's slower! What in the world?!
If you dig into the profile you'll see that it's value loading:

```
$ curl -s -uelastic:password -HContent-Type:application/json -XPOST 'localhost:9200/_query?pretty' -d'{
  "query": "FROM test | STATS SUM(LENGTH(text))",
  "pragma": {
    "data_partitioning": "shard"
  },
  "profile": true
}' | jq '.profile.drivers[].operators[] | select(.operator | contains("ValuesSourceReaderOperator"))'
{
  "operator": "ValuesSourceReaderOperator[fields = [text]]",
  "status": {
    "readers_built": {
      "stored_fields[requires_source:true, fields:0, sequential: true]": 222,
      "text:column_at_a_time:null": 222,
      "text:row_stride:BlockSourceReader.Bytes": 1
    },
    "values_loaded": 1000000,
    "process_nanos": 370687157,
    "pages_processed": 222,
    "rows_received": 1000000,
    "rows_emitted": 1000000
  }
}
$ curl -s -uelastic:password -HContent-Type:application/json -XPOST 'localhost:9200/_query?pretty' -d'{
  "query": "FROM test | WHERE NOT text.keyword IN (\"text 0998\") | STATS SUM(LENGTH(text))",
  "pragma": {
    "data_partitioning": "shard"
  },
  "profile": true
}' | jq '.profile.drivers[].operators[] | select(.operator | contains("ValuesSourceReaderOperator"))'
{
  "operator": "ValuesSourceReaderOperator[fields = [text]]",
  "status": {
    "readers_built": {
      "stored_fields[requires_source:true, fields:0, sequential: false]": 222,
      "text:column_at_a_time:null": 222,
      "text:row_stride:BlockSourceReader.Bytes": 1
    },
    "values_loaded": 999000,
    "process_nanos": 3965803793,
    "pages_processed": 222,
    "rows_received": 999000,
    "rows_emitted": 999000
  }
}
```

It jumps from 370ms to almost four seconds, while loading *fewer* values! The other big difference is in the `stored_fields` marker: in the second one it's `sequential: false` and in the first `sequential: true`. `sequential: true` uses Lucene's "merge" stored fields reader instead of the default one, and it's much better optimized for decoding sequences of documents.

Previously we only enabled this reader when loading compact sequences of documents - when the entire block looks like:

```
1, 2, 3, 4, 5, ... 1230, 1231
```

If there were any gaps we wouldn't enable it. That was a very conservative choice we made long ago without running any experiments. We knew it was faster without any gaps, but not otherwise. It turns out it's a lot faster in a lot more cases - I've measured it as faster even with 99% gaps, at least on simple documents. I'm a bit worried that this is too aggressive, so I've made it configurable, with the default set to use the "merge" loader whenever we're loading at least 10% of the documents in a range. So we'd use the merge loader with a block like:

```
1, 11, 21, 31, ..., 1231, 1241
```
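Concretely, the new check boils down to comparing the density of the requested doc ids against a threshold. Here's a minimal sketch of the idea in Java - the names and structure are hypothetical, not the actual implementation:

```java
/**
 * Minimal sketch of the density check described above. Names and structure
 * are hypothetical, not the actual implementation.
 */
public class SequentialDensityCheck {
    /**
     * @param docs       sorted doc ids one block wants to load
     * @param proportion threshold; 0.1 means "at least 10% of the range"
     */
    static boolean useSequentialReader(int[] docs, double proportion) {
        if (docs.length < 2) {
            return false; // a single doc gains nothing from the merge reader
        }
        // How many doc ids the "merge" reader would have to walk through.
        int range = docs[docs.length - 1] - docs[0] + 1;
        return (double) docs.length / range >= proportion;
    }

    public static void main(String[] args) {
        // The gappy block from the description: 1, 11, 21, ..., 1241.
        int[] docs = new int[125];
        for (int i = 0; i < docs.length; i++) {
            docs[i] = 1 + 10 * i;
        }
        // 125 docs over a 1241-id range is ~10.1% dense, so the 10% default
        // now picks the merge reader. The old behavior is the special case
        // proportion == 1.0: any gap at all disables it.
        System.out.println(useSequentialReader(docs, 0.10)); // true
    }
}
```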
Pinging @elastic/es-analytical-engine (Team:Analytics)
Hi @nik9000, I've created a changelog YAML for you.
Thanks, Nik! The changes make a lot of sense to me. We might need to come up with a dynamic ratio instead of the fixed default 10%. With a sequential (i.e., merging) reader, we load and decompress the entire block (14KB for best_speed and 240KB for best_compression); otherwise, we load and decompress only up to the target document. Ideally, we should be able to provide the stored fields reader with an array of document IDs to load, allowing it to make this decision internally, but we don't have that API yet.

Alternatively, we could compute the average size of documents, determine the average number of documents per block, and set the ratio slightly higher than 1 / number_of_documents_per_block. This would ensure that a decompressed block is used by more than one document. Unfortunately, Lucene does not expose these stats either, so we have to use a fixed default ratio.

Are we confident in the default 10%? I think the documents in your test are relatively small, and 10% seems to work well. Would you mind adding more dummy fields to pump the source size to around 2-4KB to see if 10% still performs well? If it does, we can stick with 10%; otherwise, we might need to increase this value.
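For intuition, here's roughly what that heuristic works out to (a hypothetical sketch; Lucene doesn't expose block size or documents-per-block, so the inputs are illustrative):

```java
/**
 * Hypothetical sketch of the dynamic ratio suggested above: pick a threshold
 * slightly above 1 / documents_per_block so a decompressed block is always
 * amortized over more than one document. Illustrative only; Lucene does not
 * expose these stats today.
 */
public class DynamicRatioSketch {
    static double dynamicRatio(long blockSizeBytes, long avgDocSizeBytes) {
        double docsPerBlock = (double) blockSizeBytes / avgDocSizeBytes;
        return 1.05 / docsPerBlock; // "slightly higher than 1 / docs_per_block"
    }

    public static void main(String[] args) {
        // best_speed: ~14KB blocks. With ~2KB docs, ~7 docs fit per block,
        // so the suggested ratio is ~0.15 - above the fixed 10% default.
        System.out.println(dynamicRatio(14 * 1024, 2 * 1024)); // ~0.15
        // best_compression: ~240KB blocks -> ~120 docs per block -> ~0.009.
        System.out.println(dynamicRatio(240 * 1024, 2 * 1024)); // ~0.009
    }
}
```

Note how best_compression pushes the suggested ratio down toward 1%, which lines up with the local measurements mentioned below.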
Not entirely, no. I played with it for an hour with the tiny document and tried one larger one and it looked pretty similar. Locally I see a ratio closer to 1% being right, but I just don't believe it.
I think this is because we are using zstd for snapshot builds, which decompresses entire blocks.
Oh fun. I'll see about switching that off. If we are going with a fixed ratio maybe it should be an index setting to override.
OK folks! I believe I've really finished this one modulo any review feedback. I added a little docs section describing the setting too.
I have a question, but LGTM. Thanks Nik!
```java
 * I'm concerned about the implications of effectively using this all the time.
 * </p>
 */
public static final Setting<Double> STORED_FIELDS_SEQUENTIAL_PROPORTION = Setting.doubleSetting(
```
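For context, a declaration along these lines would complete the excerpt above - a sketch only; the setting key, default, and bounds here are assumptions, not necessarily what was merged:

```java
// Sketch of the full declaration; key, default, and bounds are assumptions.
public static final Setting<Double> STORED_FIELDS_SEQUENTIAL_PROPORTION = Setting.doubleSetting(
    "index.esql.stored_fields_sequential_proportion", // hypothetical key
    0.10, // illustrative default: use the merge reader at >= 10% density
    0.0,  // minimum
    Setting.Property.IndexScope, // per-index, as discussed below
    Setting.Property.Dynamic
);
```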
I think the index setting is appropriate since it should depend on the document size, which is specific to each index. However, if users encounter a performance degradation, they would need to change this setting for every index, which can be difficult, especially for incoming indices. Should we add a cluster setting to control the default value? What do you think?
I feel like that might be overkill. I don't think that many folks are going to mess with it a ton. And we have all this infrastructure for templates?
I am okay with this too.
ESQL: Fix test locale (#127566)

Was formatting a string and didn't include `Locale.ROOT` so sometimes the string would use the Arabic ٩ instead of 9. And JSON doesn't parse those.

Closes #127562
Backporting to 8.19 with: #127721