Initial pipelines docs #18595
---
title: Cloudflare Pipelines now available in beta
description: Use Cloudflare Pipelines to ingest real-time data streams, and load them into R2.
products:
  - pipelines
  - r2
  - workers
date: 2025-04-10 12:00:00 UTC
hidden: true
---

Cloudflare Pipelines is now available in beta to all users with a Workers Paid plan.

Pipelines let you ingest high volumes of real-time data without managing any infrastructure. A single pipeline can ingest up to 100 MB of data per second, via HTTP or from a [Worker](/workers). Ingested data is automatically batched, written to output files, and delivered to an [R2 bucket](/r2) in your account. You can use Pipelines to build a data lake of clickstream data, or to store events from a Worker.

Create your first pipeline with a single command:

```bash title="Create a pipeline"
$ npx wrangler@latest pipelines create my-clickstream-pipeline --r2-bucket my-bucket

🌀 Authorizing R2 bucket "my-bucket"
🌀 Creating pipeline named "my-clickstream-pipeline"
✅ Successfully created pipeline my-clickstream-pipeline

Id:    0e00c5ff09b34d018152af98d06f5a1xvc
Name:  my-clickstream-pipeline
Sources:
  HTTP:
    Endpoint:       https://0e00c5ff09b34d018152af98d06f5a1xvc.pipelines.cloudflare.com/
    Authentication: off
    Format:         JSON
  Worker:
    Format: JSON
Destination:
  Type:        R2
  Bucket:      my-bucket
  Format:      newline-delimited JSON
  Compression: GZIP
Batch hints:
  Max bytes:    100 MB
  Max duration: 300 seconds
  Max records:  100,000

🎉 You can now send data to your Pipeline!

Send data to your Pipeline's HTTP endpoint:
curl "https://0e00c5ff09b34d018152af98d06f5a1xvc.pipelines.cloudflare.com/" -d '[{ ...JSON_DATA... }]'

To send data to your Pipeline from a Worker, add the following configuration to your config file:
{
  "pipelines": [
    {
      "pipeline": "my-clickstream-pipeline",
      "binding": "PIPELINE"
    }
  ]
}
```
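
Once the binding is configured, a Worker can send events to the pipeline from code. Below is a minimal sketch; the `send()` signature on the binding and the event shape are assumptions for illustration, not a definitive API reference:

```ts
// Minimal Worker that forwards an event to the pipeline binding.
// Assumption: the `PIPELINE` binding exposes an async `send(records)`
// method that accepts an array of JSON-serializable objects.
export interface Env {
  PIPELINE: { send(records: object[]): Promise<void> };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const event = {
      url: request.url,
      method: request.method,
      receivedAt: new Date().toISOString(),
    };
    await env.PIPELINE.send([event]); // the pipeline expects an array of objects
    return new Response("ok");
  },
};
```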

Head over to our [getting started guide](/pipelines/getting-started) for an in-depth tutorial on building with Pipelines.
---
title: Configure HTTP Endpoint
pcx_content_type: concept
sidebar:
  order: 1
head:
  - tag: title
    content: Configure HTTP Endpoint
---

import { Render, PackageManagers } from "~/components";

Pipelines support data ingestion over HTTP. When you create a new pipeline, you'll receive a globally scalable ingestion endpoint. To ingest data, make HTTP POST requests to the endpoint.

```sh
$ npx wrangler@latest pipelines create my-clickstream-pipeline --r2-bucket my-bucket

🌀 Authorizing R2 bucket "my-bucket"
🌀 Creating pipeline named "my-clickstream-pipeline"
✅ Successfully created pipeline my-clickstream-pipeline

Id:    0e00c5ff09b34d018152af98d06f5a1xvc
Name:  my-clickstream-pipeline
Sources:
  HTTP:
    Endpoint:       https://0e00c5ff09b34d018152af98d06f5a1xvc.pipelines.cloudflare.com/
    Authentication: off
    Format:         JSON
  Worker:
    Format: JSON
Destination:
  Type:        R2
  Bucket:      my-bucket
  Format:      newline-delimited JSON
  Compression: GZIP
Batch hints:
  Max bytes:    100 MB
  Max duration: 300 seconds
  Max records:  100,000

🎉 You can now send data to your Pipeline!

Send data to your Pipeline's HTTP endpoint:
curl "https://0e00c5ff09b34d018152af98d06f5a1xvc.pipelines.cloudflare.com/" -d '[{ ...JSON_DATA... }]'
```

## Accepted data formats
Pipelines accept arrays of valid JSON objects. You can send multiple objects in a single request, provided the total data volume is within the [documented limits](/pipelines/platform/limits). Sending data in a different format will result in an error.

For example, you can send data to your pipeline using a curl command like this:
```sh
curl -X POST https://<PIPELINE-ID>.pipelines.cloudflare.com \
-H "Content-Type: application/json" \
-d '[{"foo":"bar"}, {"foo":"bar"}, {"foo":"bar"}]'

{"success":true,"result":{"committed":3}}
```
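
The same request can be made programmatically with `fetch`. A minimal sketch, assuming the response body has the shape shown in the curl output above:

```ts
// Post a batch of JSON records to a pipeline's HTTP endpoint and
// return how many records were committed. The endpoint is a placeholder.
const endpoint = "https://<PIPELINE-ID>.pipelines.cloudflare.com";

async function sendBatch(records: object[]): Promise<number> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(records), // must serialize to an array of JSON objects
  });
  if (!res.ok) throw new Error(`ingestion failed: ${res.status}`);
  // Response shape assumed from the curl example above.
  const body = (await res.json()) as {
    success: boolean;
    result: { committed: number };
  };
  return body.result.committed;
}

// sendBatch([{ foo: "bar" }, { foo: "bar" }, { foo: "bar" }]) -> 3
```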

## Turning HTTP ingestion off
By default, ingestion via HTTP is turned on. You can turn it off by excluding HTTP from the list of sources, using the `--sources` flag when creating or updating a pipeline.

```sh
$ npx wrangler pipelines create [PIPELINE-NAME] --r2-bucket [R2-BUCKET-NAME] --sources worker
```

Ingestion URLs are tied to your pipeline ID. Turning HTTP off, and then turning it back on, will not change the URL.

## Authentication
You can secure your HTTP ingestion endpoint using Cloudflare API tokens. By default, authentication is turned off. To configure authentication, use the `--require-http-auth` flag while creating or updating a pipeline.

```sh
$ npx wrangler pipelines create [PIPELINE-NAME] --r2-bucket [R2-BUCKET-NAME] --require-http-auth true
```

Once authentication is turned on, you will need to include a Cloudflare API token in your request headers.

### Get API token
1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. Navigate to your [API tokens page](https://dash.cloudflare.com/profile/api-tokens).
3. Select *Create Token*.
4. Choose the Workers Pipelines template. Click on *Continue to summary*, and finally on *Create token*. Make sure to copy the API token and save it securely.

### Making authenticated requests
Include the API token you created in the previous step in the headers of your request:

```sh
curl https://<PIPELINE-ID>.pipelines.cloudflare.com \
-H "Content-Type: application/json" \
-H "Authorization: Bearer ${API_TOKEN}" \
-d '[{"foo":"bar"}, {"foo":"bar"}, {"foo":"bar"}]'
```

## Specifying CORS settings
If you want to use your pipeline to ingest client-side data, such as website clicks, you'll need to configure your [Cross-Origin Resource Sharing (CORS) settings](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS).

Without CORS settings, browsers will restrict requests made to your pipeline endpoint. For example, if your website domain is `https://my-website.com`, and you want to post client-side data to your pipeline at `https://<PIPELINE-ID>.pipelines.cloudflare.com`, the request will fail without CORS settings in place.

To fix this, configure your pipeline to accept requests from `https://my-website.com`. You can do so while creating or updating a pipeline, using the `--cors-origins` flag. You can specify multiple domains separated by a space.

```sh
$ npx wrangler pipelines update [PIPELINE-NAME] --cors-origins https://mydomain.com http://localhost:8787
```

You can also specify that all cross-origin requests are accepted. We recommend only using this option in development, and not for production use cases.
```sh
$ npx wrangler pipelines update [PIPELINE-NAME] --cors-origins "*"
```

After `--cors-origins` has been set on your pipeline, your pipeline will respond to preflight requests and POST requests with the appropriate `Access-Control-Allow-Origin` headers set.
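
With the origin allowed, a page served from `https://my-website.com` can post events directly from the browser. A minimal client-side sketch (the pipeline ID and event shape are illustrative):

```ts
// Report click events from the page to the pipeline's HTTP endpoint.
const PIPELINE_URL = "https://<PIPELINE-ID>.pipelines.cloudflare.com/";

document.addEventListener("click", (e: MouseEvent) => {
  const event = {
    type: "click",
    x: e.clientX,
    y: e.clientY,
    page: location.pathname,
    ts: Date.now(),
  };
  // The endpoint expects an array of JSON objects; `keepalive` lets the
  // request complete even if the user navigates away mid-flight.
  fetch(PIPELINE_URL, {
    method: "POST",
    keepalive: true,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify([event]),
  }).catch(() => {
    // Drop the event on network failure.
  });
});
```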
---
title: Build with Pipelines
pcx_content_type: navigation
sidebar:
  order: 3
  group:
    hideIndex: true
---
---
title: Customize output settings
pcx_content_type: concept
sidebar:
  order: 3
head:
  - tag: title
    content: Customize output settings
---

import { Render, PackageManagers } from "~/components";

Pipelines convert a stream of records into output files and deliver the files to an R2 bucket in your account. This guide details how you can change the output destination, and how to customize batch settings to generate query-ready files.

## Configure an R2 bucket as a destination
To create or update a pipeline using Wrangler, run the following command in a terminal:

```sh
npx wrangler pipelines create [PIPELINE-NAME] --r2-bucket [R2-BUCKET-NAME]
```

After running this command, you'll be prompted to authorize Cloudflare Workers Pipelines to create an R2 API token on your behalf. Your pipeline uses the R2 API token to load data into your bucket. You can approve the request through the browser link, which will open automatically.

If you prefer not to authenticate this way, you may pass your [R2 API Token](/r2/api/tokens/) to Wrangler:

```sh
npx wrangler pipelines create [PIPELINE-NAME] --r2-bucket [R2-BUCKET-NAME] --r2-access-key-id [ACCESS-KEY-ID] --r2-secret-access-key [SECRET-ACCESS-KEY]
```

## File format and compression
Output files are generated as newline-delimited JSON (`ndjson`) files. Each line in an output file maps to a single record.
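
For illustration, reading one of these files back after downloading it is a line-by-line parse. A sketch in Node.js; the helper and file name are hypothetical:

```ts
import { createReadStream } from "node:fs";
import { createGunzip } from "node:zlib";
import { createInterface } from "node:readline";

// Read a downloaded, gzip-compressed output file back into memory:
// one JSON record per line.
async function readOutputFile(path: string): Promise<object[]> {
  const lines = createInterface({
    input: createReadStream(path).pipe(createGunzip()),
  });
  const records: object[] = [];
  for await (const line of lines) {
    if (line.trim().length > 0) records.push(JSON.parse(line));
  }
  return records;
}

// Example: await readOutputFile("01JQWBZCZBAQZ7RJNZHN38JQ7V.json.gz");
```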

By default, output files are compressed in the `gzip` format. Compression can be turned off using the `--compression` flag:
```sh
npx wrangler pipelines update [PIPELINE-NAME] --compression none
```

Output files are named using a [ULID](https://github.com/ulid/spec) slug, followed by an extension.

## Customize batch behavior
When configuring your pipeline, you can define how records are batched before they are delivered to R2. Each batch of records is written out to a single output file.

Batching can:
1. Reduce the number of output files written to R2, and thus reduce the [cost of writing data to R2](/r2/pricing/#class-a-operations).
2. Increase the size of output files, making them more efficient to query.

There are three ways to define how ingested data is batched:

1. `batch-max-mb`: The maximum amount of data that will be batched, in megabytes. Default is 10 MB, maximum is 100 MB.
2. `batch-max-rows`: The maximum number of rows or events in a batch before data is written. Default is 10,000 rows, maximum is 10,000,000 rows.
3. `batch-max-seconds`: The maximum duration of a batch before data is written, in seconds. Default is 15 seconds, maximum is 300 seconds.

Batch definitions are hints. A pipeline will follow these hints closely, but batches might not be exact.

All three batch definitions work together. Whichever limit is reached first triggers the delivery of a batch.

For example, if `batch-max-mb` is set to 100 MB and `batch-max-seconds` is set to 100, the batch will be delivered as soon as 100 MB of events are posted to the pipeline. However, if it takes longer than 100 seconds for 100 MB of events to be posted, a batch of all the messages that were posted during those 100 seconds will be created.

### Defining batch settings using Wrangler
You can use the following batch settings flags while creating or updating a pipeline:
* `--batch-max-mb`
* `--batch-max-rows`
* `--batch-max-seconds`

For example:
```sh
npx wrangler pipelines update [PIPELINE-NAME] --batch-max-mb 100 --batch-max-rows 10000 --batch-max-seconds 300
```

#### Batch size limits

| Setting                                   | Default     | Minimum   | Maximum         |
| ----------------------------------------- | ----------- | --------- | --------------- |
| Maximum batch size `batch-max-mb`         | 10 MB       | 0.001 MB  | 100 MB          |
| Maximum batch timeout `batch-max-seconds` | 15 seconds  | 0 seconds | 300 seconds     |
| Maximum batch rows `batch-max-rows`       | 10,000 rows | 1 row     | 10,000,000 rows |

## Deliver partitioned data
Partitioning organizes data into directories based on specific fields to improve query performance. Partitions reduce the amount of data scanned for queries, enabling faster reads.

:::note
By default, Pipelines partition data by event date and time. This will be customizable in the future.
:::

Output files are prefixed with event date and hour. For example, the output from a pipeline in your R2 bucket might look like this:
```sh
- event_date=2025-04-01/hr=15/01JQWBZCZBAQZ7RJNZHN38JQ7V.json.gz
- event_date=2025-04-01/hr=15/01JQWC16FXGP845EFHMG1C0XNW.json.gz
```

## Deliver data to a prefix
You can specify an optional prefix for all the output files stored in your specified R2 bucket, using the `--r2-prefix` flag.

For example:
```sh
npx wrangler pipelines update [PIPELINE-NAME] --r2-prefix test
```

After running the above command, the output files generated by your pipeline will be stored under the prefix `test`. Files will remain partitioned. Your output will look like this:

```sh
- test/event_date=2025-04-01/hr=15/01JQWBZCZBAQZ7RJNZHN38JQ7V.json.gz
- test/event_date=2025-04-01/hr=15/01JQWC16FXGP845EFHMG1C0XNW.json.gz
```
---
pcx_content_type: concept
title: Customize shard count
sidebar:
  order: 11
---

import { Render, PackageManagers } from "~/components";

Shards affect a pipeline's throughput. Increasing the shard count for a pipeline increases the maximum throughput. By default, each pipeline is configured with two shards.

To set the shard count, use the `--shard-count` flag while creating or updating a pipeline:
```sh
$ npx wrangler pipelines update [PIPELINE-NAME] --shard-count 10
```

:::note
The default shard count will be set to `auto` in the future, with support for automatic horizontal scaling.
:::

## How shards work
Each pipeline is composed of stateless, independent shards. These shards are spun up when a pipeline is created. Each shard is composed of layers of [Durable Objects](/durable-objects). The Durable Objects buffer data, replicate it for durability, handle compression, and deliver output to R2.

When a record is sent to a pipeline:
1. The Pipelines [Worker](/workers) receives the record.
2. The record is routed to one of the shards.
3. The record is handled by a set of Durable Objects, which commit the record to storage and replicate it for durability.
4. Records accumulate until the [batch definitions](/pipelines/build-with-pipelines/output-settings/#customize-batch-behavior) are met.
5. The batch is written to an output file and optionally compressed.
6. The output file is delivered to the configured R2 bucket.

Increasing the number of shards will increase the maximum throughput of a pipeline, as well as the number of output files created.

### Example
Your workload might require making 5,000 requests per second to a pipeline. If you create a pipeline with a single shard, all 5,000 requests will be routed to the same shard. If your pipeline has been configured with a maximum batch duration of 1 second, then every second, all 5,000 requests will be batched and a single file will be delivered.

Increasing the shard count to 2 will double the number of output files. The 5,000 requests will be split into 2,500 requests to each shard. Every second, each shard will create a batch of data and deliver it to R2.

## Maximum shard count considerations
Increasing the shard count also increases the number of output files that your pipeline generates. This in turn increases the [cost of writing data to R2](/r2/pricing/#class-a-operations), as each file written to R2 counts as a single class A operation. Additionally, smaller files are slower, and more expensive, to query. Rather than setting the maximum, choose a shard count based on your workload needs.

## Choose the number of shards
Choose a shard count based on these factors:
* The number of requests per second you will make to your pipeline
* The volume of data per second you will send to your pipeline

Each shard is capable of handling approximately 7,000 requests per second, or ingesting 7 MB/s of data. Either factor might act as the bottleneck, so choose the shard count based on the higher requirement, as shown in the sketch below.

For example, if you estimate that you will ingest 70 MB/s while making 70,000 requests per second, set up a pipeline with 10 shards. However, if you estimate that you will ingest 70 MB/s while making 100,000 requests per second, set up a pipeline with 15 shards.
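
That guidance amounts to taking the larger of the two per-shard requirements. A back-of-envelope sketch, using the approximate per-shard figures quoted above:

```ts
// Estimate a shard count, assuming each shard handles roughly
// 7,000 requests per second and 7 MB/s of ingested data.
function estimateShards(requestsPerSecond: number, mbPerSecond: number): number {
  const byRequests = Math.ceil(requestsPerSecond / 7_000);
  const byVolume = Math.ceil(mbPerSecond / 7);
  return Math.max(byRequests, byVolume); // the higher requirement wins
}

estimateShards(70_000, 70); // -> 10
estimateShards(100_000, 70); // -> 15
```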

## Limits
| Setting                           | Default | Minimum | Maximum |
| --------------------------------- | ------- | ------- | ------- |
| Shards per pipeline `shard-count` | 2       | 1       | 15      |