
add es code comments for testing #124283

Open
wants to merge 14 commits into main
4 changes: 4 additions & 0 deletions docs/reference/data-analysis/aggregations/pipeline.md
@@ -102,6 +102,7 @@ POST /_search
}
}
```
% TEST[setup:sales]

1. `buckets_path` instructs this `max_bucket` aggregation that we want the maximum value of the `sales` aggregation in the `sales_per_month` date histogram (see the sketch below).
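
As a rough sketch of that wiring (the `sales_per_month`/`sales` names come from the callout; the `date` and `price` fields are assumed, as in the sales examples):

```js
"aggs": {
  "sales_per_month": {
    "date_histogram": { "field": "date", "calendar_interval": "month" },
    "aggs": {
      "sales": { "sum": { "field": "price" } }
    }
  },
  "max_monthly_sales": {
    "max_bucket": { "buckets_path": "sales_per_month>sales" }
  }
}
```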

@@ -146,6 +147,7 @@ POST /_search
}
}
```
% TEST[setup:sales]

1. `buckets_path` selects the hat and bag buckets (via `['hat']`/`['bag']`) to use in the script, instead of fetching all the buckets from the `sale_type` aggregation (see the sketch below).
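
A minimal sketch of that selection, assuming a `filters` aggregation named `sale_type` with `hat` and `bag` buckets, each holding a `sum` aggregation named `sales`:

```js
"hat_vs_bag_ratio": {
  "bucket_script": {
    "buckets_path": {
      "hat_sales": "sale_type['hat']>sales",
      "bag_sales": "sale_type['bag']>sales"
    },
    "script": "params.hat_sales / params.bag_sales"
  }
}
```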

@@ -214,6 +216,7 @@ POST /sales/_search
}
}
```
% TEST[setup:sales]

1. By using `_bucket_count` instead of a metric name, we can filter out `histo` buckets that contain no buckets for the `categories` aggregation (a minimal sketch follows below).
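
A minimal sketch of that filter; the `date` and `category` field names are placeholders:

```js
"histo": {
  "date_histogram": { "field": "date", "calendar_interval": "day" },
  "aggs": {
    "categories": { "terms": { "field": "category" } },
    "drop_empty_days": {
      "bucket_selector": {
        "buckets_path": { "count": "categories._bucket_count" },
        "script": "params.count != 0"
      }
    }
  }
}
```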

@@ -226,6 +229,7 @@ An alternate syntax is supported to cope with aggregations or metrics which have
```js
"buckets_path": "my_percentile[99.9]"
```
% NOTCONSOLE
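
For instance, a sibling `max_bucket` aggregation could reference the 99.9th percentile of a `percentiles` metric with this bracket syntax (the aggregation names here are illustrative):

```js
"worst_latency": {
  "max_bucket": {
    "buckets_path": "load_times_per_minute>my_percentile[99.9]"
  }
}
```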


## Dealing with gaps in the data [gap-policy]
@@ -87,6 +87,9 @@ The response contains buckets with document counts for each filter and combinati
}
}
```
% TESTRESPONSE[s/"took": 9/"took": $body.took/]
% TESTRESPONSE[s/"_shards": .../"_shards": $body._shards/]
% TESTRESPONSE[s/"hits": .../"hits": $body.hits/]


## Parameters [adjacency-matrix-agg-params]
@@ -28,6 +28,7 @@ POST /sales/_search?size=0
}
}
```
% TEST[setup:sales]

## Keys [_keys]

Expand All @@ -54,6 +55,7 @@ POST /sales/_search?size=0
}
}
```
% TEST[setup:sales]

1. Supports expressive date [format pattern](/reference/data-analysis/aggregations/search-aggregations-bucket-daterange-aggregation.md#date-format-pattern)

@@ -87,6 +89,7 @@ Response:
}
}
```
% TESTRESPONSE[s/.../"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]


## Intervals [_intervals]
@@ -182,6 +185,7 @@ UTC is used if no time zone is specified, three 1-hour buckets are returned starting
}
}
```
% TESTRESPONSE[s/.../"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]

If a `time_zone` of `-01:00` is specified, then midnight starts at one hour before midnight UTC:

Expand All @@ -199,6 +203,7 @@ GET my-index-000001/_search?size=0
}
}
```
% TEST[continued]

Now three 1-hour buckets are still returned, but the first bucket starts at 11:00pm on 30 September 2015, since that is the local time for the bucket in the specified time zone.

@@ -229,6 +234,7 @@ Now three 1-hour buckets are still returned but the first bucket starts at 11:00
}
}
```
% TESTRESPONSE[s/.../"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]

1. The `key_as_string` value represents midnight on each day in the specified time zone.

@@ -268,6 +274,7 @@ POST /sales/_search?size=0
}
}
```
% TEST[setup:sales]


## Missing value [_missing_value]
Expand All @@ -290,6 +297,7 @@ POST /sales/_search?size=0
}
}
```
% TEST[setup:sales]

1. Documents without a value in the `publish_date` field fall into the same bucket as documents that have the value `2000-01-01` (see the sketch of the `missing` setting below).
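
A minimal sketch of the `missing` setting that produces this behavior (the `calendar_interval` is illustrative):

```js
"publish_dates": {
  "date_histogram": {
    "field": "publish_date",
    "calendar_interval": "year",
    "missing": "2000-01-01"
  }
}
```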

@@ -108,6 +108,7 @@ POST log-messages/_search?filter_path=aggregations
}
}
```
% TEST[setup:categorize_text]

Response:

@@ -161,6 +162,7 @@ POST log-messages/_search?filter_path=aggregations
}
}
```
% TEST[setup:categorize_text]

1. The filters to apply to the analyzed tokens. They filter out tokens like `bar_123`.

@@ -218,6 +220,7 @@ POST log-messages/_search?filter_path=aggregations
}
}
```
% TEST[setup:categorize_text]

1. The filters to apply to the analyzed tokens. They filter out tokens like `bar_123`.
2. Require 11% of token weight to match before adding a message to an existing category rather than creating a new one (both settings are sketched below).
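
Roughly, both settings sit directly on the `categorize_text` aggregation (the `message` field follows the surrounding examples; the regex is one way to drop tokens such as `bar_123`):

```js
"categories": {
  "categorize_text": {
    "field": "message",
    "categorization_filters": ["\\w+_\\d{3}"],
    "similarity_threshold": 11
  }
}
```
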
@@ -280,6 +283,7 @@ POST log-messages/_search?filter_path=aggregations
}
}
```
% TEST[setup:categorize_text]

```console-result
{
@@ -52,6 +52,7 @@ PUT child_example/_doc/1
]
}
```
% TEST[continued]

Examples of `answer` documents:

@@ -86,6 +87,7 @@ PUT child_example/_doc/3?routing=1&refresh
"creation_date": "2009-05-05T13:45:37.030"
}
```
% TEST[continued]

The following request connects the two together:

@@ -117,6 +119,7 @@ POST child_example/_search?size=0
}
}
```
% TEST[continued]

1. The `type` points to the type / mapping with the name `answer`.

@@ -216,6 +219,7 @@ Possible response:
}
}
```
% TESTRESPONSE[s/"took": 25/"took": $body.took/]

1. The number of question documents with the tag `file-transfer`, `windows-server-2003`, etc.
2. The number of answer documents that are related to question documents with the tag `file-transfer`, `windows-server-2003`, etc.
@@ -26,6 +26,7 @@ For example, consider the following document:
"number": [23, 65, 76]
}
```
% NOTCONSOLE

Using `keyword` and `number` as source fields for the aggregation results in the following composite buckets:

Expand All @@ -37,6 +38,7 @@ Using `keyword` and `number` as source fields for the aggregation results in the
{ "keyword": "bar", "number": 65 }
{ "keyword": "bar", "number": 76 }
```
% NOTCONSOLE
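
A rough sketch of the composite `sources` definition that would produce those buckets, using the two fields from the document above:

```js
"my_buckets": {
  "composite": {
    "sources": [
      { "keyword": { "terms": { "field": "keyword" } } },
      { "number": { "terms": { "field": "number" } } }
    ]
  }
}
```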

## Value sources [_value_sources]

@@ -310,6 +312,7 @@ Instead of a single bucket starting at midnight, the above request groups the do
}
}
```
% TESTRESPONSE[s/.../"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]

::::{note}
The start `offset` of each bucket is calculated after `time_zone` adjustments have been made.
@@ -513,6 +516,7 @@ GET /_search
}
}
```
% TEST[s/_search/_search?filter_path=aggregations/]

... returns:

@@ -545,6 +549,7 @@ GET /_search
}
}
```
% TESTRESPONSE[s/...//]

To get the next set of buckets, resend the same aggregation with the `after` parameter set to the `after_key` value returned in the response. For example, this request uses the `after_key` value provided in the previous response:

@@ -709,6 +714,7 @@ GET /_search
}
}
```
% TEST[s/_search/_search?filter_path=aggregations/]

... returns:

@@ -767,6 +773,7 @@ GET /_search
}
}
```
% TESTRESPONSE[s/...//]


## Pipeline aggregations [search-aggregations-bucket-composite-aggregation-pipeline-aggregations]
@@ -68,6 +68,7 @@ A `bucket_correlation` aggregation looks like this in isolation:
}
}
```
% NOTCONSOLE

1. The buckets containing the values to correlate against.
2. The correlation function definition (a minimal sketch follows below).
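
As a hedged sketch, a `count_correlation` function definition pairs the sibling bucket counts with an indicator of expected values (the path and numbers below are placeholders):

```js
"bucket_correlation": {
  "buckets_path": "latency_ranges>_count",
  "function": {
    "count_correlation": {
      "indicator": {
        "expectations": [0, 25, 50, 75, 100],
        "doc_count": 200
      }
    }
  }
}
```
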
@@ -126,6 +127,7 @@ POST correlate_latency/_search?size=0&filter_path=aggregations
}
}
```
% TEST[setup:correlate_latency]

1. The term buckets containing a range aggregation and the bucket correlation aggregation. Both are utilized to calculate the correlation of the term values with the latency.
2. The range aggregation on the latency field. The ranges were created referencing the percentiles of the latency field.
@@ -37,6 +37,7 @@ A `bucket_count_ks_test` aggregation looks like this in isolation:
}
}
```
% NOTCONSOLE

1. The buckets containing the values to test against.
2. The alternatives to calculate (see the sketch below).
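
A minimal sketch of those two settings (the `latency_ranges` path is a placeholder):

```js
"ks_test": {
  "bucket_count_ks_test": {
    "buckets_path": "latency_ranges>_count",
    "alternative": ["less", "greater", "two_sided"]
  }
}
```
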
@@ -89,6 +90,7 @@ POST correlate_latency/_search?size=0&filter_path=aggregations
}
}
```
% TEST[setup:correlate_latency]

1. The term buckets containing a range aggregation and the bucket correlation aggregation. Both are utilized to calculate the correlation of the term values with the latency.
2. The range aggregation on the latency field. The ranges were created referencing the percentiles of the latency field.