
Commit bd3c905

docs/VictoriaLogs/data-ingestion: small fixes

1 parent ec77d3d commit bd3c905

6 files changed: +96 -85 lines changed

docs/VictoriaLogs/data-ingestion/Filebeat.md

Lines changed: 5 additions & 12 deletions

@@ -1,9 +1,5 @@
 # Filebeat setup
 
-[Filebeat](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-overview.html) log collector supports
-[Elasticsearch output](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html) compatible with
-VictoriaMetrics [ingestion format](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#elasticsearch-bulk-api).
-
 Specify [`output.elasicsearch`](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html) section in the `filebeat.yml`
 for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/):
 
@@ -92,12 +88,9 @@ output.elasticsearch:
     _stream_fields: "host.name,log.file.path"
 ```
 
-More info about output parameters you can find in [these docs](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html).
-
-[Here is a demo](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/filebeat-docker) for
-running Filebeat with VictoriaLogs with docker-compose and collecting logs to VictoriaLogs.
-
-The ingested log entries can be queried according to [these docs](https://docs.victoriametrics.com/VictoriaLogs/querying/).
-
-See also [data ingestion troubleshooting](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#troubleshooting) docs.
+See also:
 
+- [Data ingestion troubleshooting](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#troubleshooting).
+- [How to query VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/querying/).
+- [Filebeat `output.elasticsearch` docs](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html).
+- [Docker-compose demo for Filebeat integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/filebeat-docker).
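For orientation, the second hunk above shows only the tail of the `output.elasticsearch` example (the `_stream_fields` line). A minimal sketch of what such a section in `filebeat.yml` can look like — the endpoint URL and the `_msg_field`/`_time_field` values are illustrative assumptions; only the `_stream_fields` value appears in the diff context:

```yaml
output.elasticsearch:
  # assumed VictoriaLogs Elasticsearch-compatible ingestion endpoint
  hosts: [ "http://localhost:9428/insert/elasticsearch/" ]
  parameters:
    _msg_field: "message"        # illustrative: field holding the log message
    _time_field: "@timestamp"    # illustrative: field holding the timestamp
    _stream_fields: "host.name,log.file.path"
```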
docs/VictoriaLogs/data-ingestion/Fluentbit.md

Lines changed: 41 additions & 22 deletions

@@ -1,10 +1,7 @@
 ## Fluentbit setup
 
-[Fluentbit](https://docs.fluentbit.io/manual) log collector supports [HTTP output](https://docs.fluentbit.io/manual/pipeline/outputs/http) compatible with
-VictoriaMetrics [JSON stream API](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#json-stream-api).
-
-Specify [`output`](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html) section with `Name http` in the `fluentbit.conf`
-for sending the collected logs to VictoriaLogs:
+Specify [http output](https://docs.fluentbit.io/manual/pipeline/outputs/http) section in the `fluentbit.conf`
+for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/):
 
 ```conf
 [Output]
@@ -17,18 +14,42 @@ for sending the collected logs to VictoriaLogs:
     json_date_format iso8601
 ```
 
-Substitute the address (`localhost`) and port (`9428`) inside `Output` section with the real TCP address of VictoriaLogs.
+Substitute the host (`localhost`) and port (`9428`) with the real TCP address of VictoriaLogs.
+
+See [these docs](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters) for details on the query args specified in the `uri`.
 
-The `_msg_field` parameter must contain the field name with the log message generated by Fluentbit. This is usually `message` field.
-See [these docs](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#message-field) for details.
+It is recommended verifying whether the initial setup generates the needed [log fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#data-model)
+and uses the correct [stream fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#stream-fields).
+This can be done by specifying `debug` [parameter](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters) in the `uri`
+and inspecting VictoriaLogs logs then:
 
-The `_time_field` parameter must contain the field name with the log timestamp generated by Fluentbit. This is usually `@timestamp` field.
-See [these docs](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#time-field) for details.
+```conf
+[Output]
+    Name http
+    Match *
+    host localhost
+    port 9428
+    uri /insert/jsonline/?_stream_fields=stream&_msg_field=log&_time_field=date&debug=1
+    format json_lines
+    json_date_format iso8601
+```
 
-It is recommended specifying comma-separated list of field names, which uniquely identify every log stream collected by Fluentbit, in the `_stream_fields` parameter.
-See [these docs](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#stream-fields) for details.
+If some [log fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#data-model) must be skipped
+during data ingestion, then they can be put into `ignore_fields` [parameter](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters).
+For example, the following config instructs VictoriaLogs to ignore `log.offset` and `event.original` fields in the ingested logs:
 
-If the Fluentbit sends logs to VictoriaLogs in another datacenter, then it may be useful enabling data compression via `compress` option.
+```conf
+[Output]
+    Name http
+    Match *
+    host localhost
+    port 9428
+    uri /insert/jsonline/?_stream_fields=stream&_msg_field=log&_time_field=date&ignore_fields=log.offset,event.original
+    format json_lines
+    json_date_format iso8601
+```
+
+If the Fluentbit sends logs to VictoriaLogs in another datacenter, then it may be useful enabling data compression via `compress gzip` option.
 This usually allows saving network bandwidth and costs by up to 5 times:
 
 ```conf
@@ -44,8 +65,8 @@ This usually allows saving network bandwidth and costs by up to 5 times:
 ```
 
 By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#multitenancy).
-If you need storing logs in other tenant, then specify the needed tenant via `headers` at `output.elasticsearch` section.
-For example, the following `fluentbit.conf` config instructs Filebeat to store the data to `(AccountID=12, ProjectID=34)` tenant:
+If you need storing logs in other tenant, then specify the needed tenant via `header` options.
+For example, the following `fluentbit.conf` config instructs Fluentbit to store the data to `(AccountID=12, ProjectID=34)` tenant:
 
 ```conf
 [Output]
@@ -60,11 +81,9 @@ For example, the following `fluentbit.conf` config instructs Filebeat to store t
     header ProjectID 23
 ```
 
-More info about output tuning you can find in [these docs](https://docs.fluentbit.io/manual/pipeline/outputs/http).
-
-[Here is a demo](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/fluentbit-docker)
-for running Fluentbit with VictoriaLogs with docker-compose and collecting logs from docker-containers to VictoriaLogs.
-
-The ingested log entries can be queried according to [these docs](https://docs.victoriametrics.com/VictoriaLogs/querying/).
+See also:
 
-See also [data ingestion troubleshooting](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#troubleshooting) docs.
+- [Data ingestion troubleshooting](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#troubleshooting).
+- [How to query VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/querying/).
+- [Fluentbit HTTP output config docs](https://docs.fluentbit.io/manual/pipeline/outputs/http).
+- [Docker-compose demo for Fluentbit integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/fluentbit-docker).
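The Fluentbit `uri` in the hunks above points at the `/insert/jsonline/` endpoint, which accepts newline-delimited JSON. As a rough illustration of the payload shape that `format json_lines` produces — the field names mirror the `uri` query args (`_msg_field=log`, `_time_field=date`, `_stream_fields=stream`), while the record values are invented for the example:

```python
import json

# Each log record becomes one JSON object per line (NDJSON).
# Keys mirror the query args in the uri above; values are illustrative.
records = [
    {"log": "starting server", "date": "2023-07-20T10:00:00Z", "stream": "stdout"},
    {"log": "listening on :8080", "date": "2023-07-20T10:00:01Z", "stream": "stdout"},
]

# Newline-delimited body, one record per line, with a trailing newline.
payload = "\n".join(json.dumps(r) for r in records) + "\n"

# This body is what an HTTP output would POST to
# /insert/jsonline/?_stream_fields=stream&_msg_field=log&_time_field=date
print(payload)
```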

docs/VictoriaLogs/data-ingestion/Logstash.md

Lines changed: 5 additions & 13 deletions

@@ -1,10 +1,5 @@
 # Logstash setup
 
-[Logstash](https://www.elastic.co/guide/en/logstash/8.8/introduction.html) log collector supports
-[Opensearch output plugin](https://github.com/opensearch-project/logstash-output-opensearch) compatible with
-[Elasticsearch bulk API](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#elasticsearch-bulk-api)
-in VictoriaMetrics.
-
 Specify [`output.elasticsearch`](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html) section in the `logstash.conf` file
 for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/):
 
@@ -100,12 +95,9 @@ output {
 }
 ```
 
-More info about output tuning you can find in [these docs](https://github.com/opensearch-project/logstash-output-opensearch/blob/main/README.md).
-
-[Here is a demo](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/logstash)
-for running Logstash with VictoriaLogs with docker-compose and collecting logs to VictoriaLogs
-(via [Elasticsearch bulk API](https://docs.victoriametrics.com/VictoriaLogs/daat-ingestion/#elasticsearch-bulk-api)).
-
-The ingested log entries can be queried according to [these docs](https://docs.victoriametrics.com/VictoriaLogs/querying/).
+See also:
 
-See also [data ingestion troubleshooting](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#troubleshooting) docs.
+- [Data ingestion troubleshooting](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#troubleshooting).
+- [How to query VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/querying/).
+- [Logstash `output.elasticsearch` docs](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html).
+- [Docker-compose demo for Logstash integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/logstash).
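The hunk above keeps only the closing braces of the `output { elasticsearch { ... } }` example. A minimal sketch of what such a `logstash.conf` output section can look like — the endpoint and all parameter values below are illustrative assumptions, not taken from the diff:

```conf
output {
  elasticsearch {
    # assumed VictoriaLogs Elasticsearch-compatible ingestion endpoint
    hosts => ["http://localhost:9428/insert/elasticsearch/"]
    parameters => {
        "_msg_field" => "message"          # illustrative message field
        "_time_field" => "@timestamp"      # illustrative timestamp field
        "_stream_fields" => "host.name,process.name"  # illustrative stream fields
    }
  }
}
```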

docs/VictoriaLogs/data-ingestion/README.md

Lines changed: 11 additions & 9 deletions

@@ -7,11 +7,13 @@
 - Logstash. See [how to setup Logstash for sending logs to VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Logstash.html).
 - Vector. See [how to setup Vector for sending logs to VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Vector.html).
 
-See also [Log collectors and data ingestion formats](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#log-collectors-and-data-ingestion-formats) in VictoriaMetrics.
-
 The ingested logs can be queried according to [these docs](https://docs.victoriametrics.com/VictoriaLogs/querying/).
 
-See also [data ingestion troubleshooting](#troubleshooting) docs.
+See also:
+
+- [Log collectors and data ingestion formats](#log-collectors-and-data-ingestion-formats).
+- [Data ingestion troubleshooting](#troubleshooting).
+
 
 ## HTTP APIs
 
@@ -122,11 +124,11 @@ VictoriaLogs exposes various [metrics](https://docs.victoriametrics.com/Victoria
 
 ## Log collectors and data ingestion formats
 
-Here is the list of supported collectors and their ingestion formats supported by VictoriaLogs:
+Here is the list of log collectors and their ingestion formats supported by VictoriaLogs:
 
-| Collector | Elasticsearch | JSON Stream |
+| How to setup the collector | Format: Elasticsearch | Format: JSON Stream |
 |------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------|---------------------------------------------------------------|
-| [filebeat](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Filebeat.html) | [Yes](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html) | No |
-| [fluentbit](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Fluentbit.html) | No | [Yes](https://docs.fluentbit.io/manual/pipeline/outputs/http) |
-| [logstash](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Logstash.html) | [Yes](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html) | No |
-| [vector](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Vector.html) | [Yes](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/) | No |
+| [Filebeat](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Filebeat.html) | [Yes](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html) | No |
+| [Fluentbit](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Fluentbit.html) | No | [Yes](https://docs.fluentbit.io/manual/pipeline/outputs/http) |
+| [Logstash](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Logstash.html) | [Yes](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html) | No |
+| [Vector](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Vector.html) | [Yes](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/) | No |
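The "Elasticsearch" column in the table above refers to VictoriaLogs' Elasticsearch-compatible bulk API, which takes newline-delimited pairs of an action line and a document line. A rough sketch of such a request body — the document field names and values are illustrative assumptions:

```python
import json

# Illustrative log documents; field names are an assumption, not from the diff.
docs = [
    {"message": "starting server", "@timestamp": "2023-07-20T10:00:00Z", "host": "web-1"},
    {"message": "listening on :8080", "@timestamp": "2023-07-20T10:00:01Z", "host": "web-1"},
]

# Elasticsearch bulk format: each document is preceded by an action line,
# everything newline-delimited with a trailing newline.
lines = []
for doc in docs:
    lines.append(json.dumps({"create": {}}))  # action line
    lines.append(json.dumps(doc))             # document line

body = "\n".join(lines) + "\n"
print(body)
```

A collector with an Elasticsearch-compatible output produces bodies of this shape, which is why the collectors marked "Yes" in that column can ingest into VictoriaLogs without a dedicated plugin.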

docs/VictoriaLogs/data-ingestion/Vector.md

Lines changed: 33 additions & 28 deletions

@@ -1,11 +1,7 @@
 # Vector setup
 
-[Vector](http://vector.dev) log collector supports
-[Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/) compatible with
-[VictoriaMetrics Elasticsearch bulk API](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#elasticsearch-bulk-api).
-
-Specify [`sinks.vlogs`](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html) with `type=elasticsearch` section in the `vector.toml`
-for sending the collected logs to VictoriaLogs:
+Specify [Elasticsearch sink type](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/) in the `vector.toml`
+for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/):
 
 ```toml
 [sinks.vlogs]
@@ -26,17 +22,32 @@ Substitute the `localhost:9428` address inside `endpoints` section with the real
 
 Replace `your_input` with the name of the `inputs` section, which collects logs. See [these docs](https://vector.dev/docs/reference/configuration/sources/) for details.
 
-The `_msg_field` parameter must contain the field name with the log message generated by Vector. This is usually `message` field.
-See [these docs](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#message-field) for details.
+See [these docs](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters) for details on parameters specified
+in the `[sinks.vlogs.query]` section.
+
+It is recommended verifying whether the initial setup generates the needed [log fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#data-model)
+and uses the correct [stream fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#stream-fields).
+This can be done by specifying `debug` [parameter](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters)
+in the `[sinks.vlogs.query]` section and inspecting VictoriaLogs logs then:
 
-The `_time_field` parameter must contain the field name with the log timestamp generated by Vector. This is usually `@timestamp` field.
-See [these docs](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#time-field) for details.
+```toml
+[sinks.vlogs]
+  inputs = [ "your_input" ]
+  type = "elasticsearch"
+  endpoints = [ "http://localhost:9428/insert/elasticsearch/" ]
+  mode = "bulk"
+  api_version = "v8"
+  healthcheck.enabled = false
 
-It is recommended specifying comma-separated list of field names, which uniquely identify every log stream collected by Vector, in the `_stream_fields` parameter.
-See [these docs](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#stream-fields) for details.
+  [sinks.vlogs.query]
+    _msg_field = "message"
+    _time_field = "timestamp"
+    _stream_fields = "host,container_name"
+    debug = "1"
+```
 
-If some [log fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#data-model) aren't needed,
-then VictoriaLogs can be instructed to ignore them during data ingestion - just pass `ignore_fields` parameter with comma-separated list of fields to ignore.
+If some [log fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#data-model) must be skipped
+during data ingestion, then they can be put into `ignore_fields` [parameter](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters).
 For example, the following config instructs VictoriaLogs to ignore `log.offset` and `event.original` fields in the ingested logs:
 
 ```toml
@@ -55,9 +66,6 @@ For example, the following config instructs VictoriaLogs to ignore `log.offset`
     ignore_fields = "log.offset,event.original"
 ```
 
-More details about `_msg_field`, `_time_field`, `_stream_fields` and `ignore_fields` are
-available [here](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters).
-
 When Vector ingests logs into VictoriaLogs at a high rate, then it may be needed to tune `batch.max_events` option.
 For example, the following config is optimized for higher than usual ingestion rate:
 
@@ -79,7 +87,7 @@ For example, the following config is optimized for higher than usual ingestion r
     max_events = 1000
 ```
 
-If the Vector sends logs to VictoriaLogs in another datacenter, then it may be useful enabling data compression via `compression` option.
+If the Vector sends logs to VictoriaLogs in another datacenter, then it may be useful enabling data compression via `compression = "gzip"` option.
 This usually allows saving network bandwidth and costs by up to 5 times:
 
 ```toml
@@ -99,8 +107,8 @@ This usually allows saving network bandwidth and costs by up to 5 times:
 ```
 
 By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#multitenancy).
-If you need storing logs in other tenant, then specify the needed tenant via `custom_headers` at `output.elasticsearch` section.
-For example, the following `vector.toml` config instructs Logstash to store the data to `(AccountID=12, ProjectID=34)` tenant:
+If you need storing logs in other tenant, then specify the needed tenant via `[sinks.vlogq.request.headers]` section.
+For example, the following `vector.toml` config instructs Vector to store the data to `(AccountID=12, ProjectID=34)` tenant:
 
 ```toml
 [sinks.vlogs]
@@ -121,12 +129,9 @@ For example, the following `vector.toml` config instructs Logstash to store the
     ProjectID = "34"
 ```
 
-More info about output tuning you can find in [these docs](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/).
-
-[Here is a demo](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/vector-docker)
-for running Vector with VictoriaLogs with docker-compose and collecting logs from docker-containers
-to VictoriaLogs (via [Elasticsearch API](https://docs.victoriametrics.com/VictoriaLogs/ingestion/#elasticsearch-bulk-api)).
-
-The ingested log entries can be queried according to [these docs](https://docs.victoriametrics.com/VictoriaLogs/querying/).
+See also:
 
-See also [data ingestion troubleshooting](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#troubleshooting) docs.
+- [Data ingestion troubleshooting](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#troubleshooting).
+- [How to query VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/querying/).
+- [Elasticsearch output docs for Vector.dev](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/).
+- [Docker-compose demo for Filebeat integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/vector-docker).
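The tenant hunk above names a `[sinks.vlogq.request.headers]` section while every surrounding snippet uses the sink id `vlogs`; assuming the intended id is `vlogs`, a full sketch of the tenant-aware sink could look like the following (the query-parameter values are illustrative; `AccountID`/`ProjectID` match the tenant discussed in the hunk):

```toml
[sinks.vlogs]
  inputs = [ "your_input" ]
  type = "elasticsearch"
  endpoints = [ "http://localhost:9428/insert/elasticsearch/" ]
  mode = "bulk"
  api_version = "v8"
  healthcheck.enabled = false

  [sinks.vlogs.query]
    _msg_field = "message"
    _time_field = "timestamp"
    _stream_fields = "host,container_name"

  # Tenant headers: store ingested logs into (AccountID=12, ProjectID=34)
  [sinks.vlogs.request.headers]
    AccountID = "12"
    ProjectID = "34"
```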
