- [Docker-compose demo for Filebeat integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/filebeat-docker).
Specify [http output](https://docs.fluentbit.io/manual/pipeline/outputs/http) section in the `fluentbit.conf`
for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/):
```conf
[Output]
    Name http
    Match *
    host localhost
    port 9428
    uri /insert/jsonline/?_stream_fields=stream&_msg_field=log&_time_field=date
    format json_lines
    json_date_format iso8601
```
Substitute the host (`localhost`) and port (`9428`) with the real TCP address of VictoriaLogs.
See [these docs](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters) for details on the query args specified in the `uri`.
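The query args in the `uri` are ordinary URL query parameters, so they can be assembled programmatically. The following sketch uses a hypothetical `build_insert_uri` helper (not part of any VictoriaLogs client library) to show how the parameters referenced above fit together:

```python
from urllib.parse import urlencode

def build_insert_uri(stream_fields, msg_field, time_field, extra=None):
    """Build a VictoriaLogs jsonline insert URI from HTTP query parameters.

    Hypothetical helper for illustration; the parameter names match the
    VictoriaLogs HTTP parameters linked above."""
    params = {
        "_stream_fields": ",".join(stream_fields),
        "_msg_field": msg_field,
        "_time_field": time_field,
    }
    if extra:
        params.update(extra)
    return "/insert/jsonline/?" + urlencode(params)

# Reproduces the uri used in the Fluentbit config above
uri = build_insert_uri(["stream"], "log", "date")
print(uri)
```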
It is recommended to verify whether the initial setup generates the needed [log fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#data-model)
and uses the correct [stream fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#stream-fields).
This can be done by specifying the `debug` [parameter](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters) in the `uri`
and then inspecting VictoriaLogs logs:
```conf
[Output]
    Name http
    Match *
    host localhost
    port 9428
    uri /insert/jsonline/?_stream_fields=stream&_msg_field=log&_time_field=date&debug=1
    format json_lines
    json_date_format iso8601
```
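With `format json_lines` and `json_date_format iso8601`, Fluentbit sends one JSON object per line. The records below are hypothetical samples sketching that payload shape, with `log`, `date` and `stream` fields matching the `_msg_field`, `_time_field` and `_stream_fields` assumed in the `uri` above:

```python
import json
from datetime import datetime, timezone

# Hypothetical sample records mimicking Fluentbit's json_lines payload
records = [
    {"date": datetime(2023, 6, 20, 15, 4, 5, tzinfo=timezone.utc).isoformat(),
     "stream": "stdout",
     "log": "starting server at :8080"},
    {"date": datetime(2023, 6, 20, 15, 4, 6, tzinfo=timezone.utc).isoformat(),
     "stream": "stderr",
     "log": "failed to open config"},
]

# json_lines format: newline-delimited JSON objects
payload = "\n".join(json.dumps(r) for r in records)
print(payload)
```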
If some [log fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#data-model) must be skipped
during data ingestion, then they can be put into the `ignore_fields` [parameter](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters).
For example, the following config instructs VictoriaLogs to ignore `log.offset` and `event.original` fields in the ingested logs:
```conf
[Output]
    Name http
    Match *
    host localhost
    port 9428
    uri /insert/jsonline/?_stream_fields=stream&_msg_field=log&_time_field=date&ignore_fields=log.offset,event.original
    format json_lines
    json_date_format iso8601
```
If Fluentbit sends logs to VictoriaLogs in another datacenter, then it may be useful to enable data compression via the `compress gzip` option.
This usually allows saving network bandwidth and costs by up to 5 times:
```conf
[Output]
    Name http
    Match *
    host localhost
    port 9428
    uri /insert/jsonline/?_stream_fields=stream&_msg_field=log&_time_field=date
    compress gzip
    format json_lines
    json_date_format iso8601
```
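The "up to 5 times" figure comes from how repetitive typical log lines are. This standalone sketch (synthetic log data, no network involved) illustrates the effect of gzip on such a payload:

```python
import gzip

# Synthetic, highly repetitive jsonline payload resembling access logs
lines = "".join(
    '{"date":"2023-06-20T15:04:%02dZ","stream":"stdout","log":"GET /api/v1/query 200"}\n' % (i % 60)
    for i in range(1000)
)
raw = lines.encode()
compressed = gzip.compress(raw)
ratio = len(raw) / len(compressed)
print(f"raw={len(raw)} bytes, gzip={len(compressed)} bytes, ratio={ratio:.1f}x")
```

Real-world savings depend on how repetitive the logs are; structured logs with many shared field names usually compress well.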
By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#multitenancy).
If you need to store logs in another tenant, then specify the needed tenant via `header` options.
For example, the following `fluentbit.conf` config instructs Fluentbit to store the data to `(AccountID=12, ProjectID=34)` tenant:
```conf
[Output]
    Name http
    Match *
    host localhost
    port 9428
    uri /insert/jsonline/?_stream_fields=stream&_msg_field=log&_time_field=date
    format json_lines
    json_date_format iso8601
    header AccountID 12
    header ProjectID 34
```
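The `header` options simply become HTTP request headers on each insert request. This sketch builds such a request (without sending it) to show where the tenant headers end up; the URL and payload are illustrative:

```python
import urllib.request

# Build (but do not send) the kind of request the config above produces:
# tenant selection happens via the AccountID / ProjectID request headers.
req = urllib.request.Request(
    "http://localhost:9428/insert/jsonline/?_stream_fields=stream&_msg_field=log&_time_field=date",
    data=b'{"date":"2023-06-20T15:04:05Z","stream":"stdout","log":"hello"}\n',
    headers={"AccountID": "12", "ProjectID": "34"},
    method="POST",
)
# urllib stores header names capitalized
print(req.get_header("Accountid"), req.get_header("Projectid"))
```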
See also:

- [How to query VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/querying/).
- [Data ingestion troubleshooting](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#troubleshooting).
- [Docker-compose demo for Fluentbit integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/fluentbit-docker).
Specify [`output.elasticsearch`](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html) section in the `logstash.conf` file
for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/):
```conf
output {
  elasticsearch {
    hosts => ["http://localhost:9428/insert/elasticsearch/"]
    parameters => {
        "_msg_field" => "message"
        "_time_field" => "@timestamp"
        "_stream_fields" => "host.name,process.name"
    }
  }
}
```
See also:

- [Docker-compose demo for Logstash integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/logstash).
`docs/VictoriaLogs/data-ingestion/README.md`
- Logstash. See [how to setup Logstash for sending logs to VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Logstash.html).
- Vector. See [how to setup Vector for sending logs to VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Vector.html).
The ingested logs can be queried according to [these docs](https://docs.victoriametrics.com/VictoriaLogs/querying/).
See also:

- [Log collectors and data ingestion formats](#log-collectors-and-data-ingestion-formats).
- [Data ingestion troubleshooting](#troubleshooting).
| How to setup the collector | Format: Elasticsearch | Format: JSON Stream |
|----------------------------|-----------------------|---------------------|
| [Filebeat](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Filebeat.html) | [Yes](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html) | No |
| [Fluentbit](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Fluentbit.html) | No | [Yes](https://docs.fluentbit.io/manual/pipeline/outputs/http) |
| [Logstash](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Logstash.html) | [Yes](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html) | No |
| [Vector](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Vector.html) | [Yes](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/) | No |
Specify [Elasticsearch sink type](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/) in the `vector.toml`
for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/):
```toml
[sinks.vlogs]
  inputs = [ "your_input" ]
  type = "elasticsearch"
  endpoints = [ "http://localhost:9428/insert/elasticsearch/" ]
  mode = "bulk"
  api_version = "v8"
  healthcheck.enabled = false

  [sinks.vlogs.query]
    _msg_field = "message"
    _time_field = "timestamp"
    _stream_fields = "host,container_name"
```

Substitute the `localhost:9428` address inside `endpoints` section with the real TCP address of VictoriaLogs.
Replace `your_input` with the name of the `inputs` section, which collects logs. See [these docs](https://vector.dev/docs/reference/configuration/sources/) for details.
See [these docs](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters) for details on parameters specified
in the `[sinks.vlogs.query]` section.

It is recommended to verify whether the initial setup generates the needed [log fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#data-model)
and uses the correct [stream fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#stream-fields).
This can be done by specifying the `debug` [parameter](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters)
in the `[sinks.vlogs.query]` section and then inspecting VictoriaLogs logs:
```toml
  [sinks.vlogs.query]
    _msg_field = "message"
    _time_field = "timestamp"
    _stream_fields = "host,container_name"
    debug = "1"
```
If some [log fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#data-model) must be skipped
during data ingestion, then they can be put into the `ignore_fields` [parameter](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters).
For example, the following config instructs VictoriaLogs to ignore `log.offset` and `event.original` fields in the ingested logs:
```toml
  [sinks.vlogs.query]
    _msg_field = "message"
    _time_field = "timestamp"
    _stream_fields = "host,container_name"
    ignore_fields = "log.offset,event.original"
```
When Vector ingests logs into VictoriaLogs at a high rate, then it may be necessary to tune the `batch.max_events` option.
For example, the following config is optimized for higher than usual ingestion rate:
```toml
  [sinks.vlogs.batch]
    max_events = 1000
```
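Conceptually, `batch.max_events` caps how many events a sink accumulates before flushing one payload. This hypothetical chunking sketch (not Vector's actual implementation) shows the effect on payload sizes:

```python
def batch_events(events, max_events=1000):
    """Split a stream of events into batches of at most `max_events`,
    mirroring in a simplified way how a sink flushes batched payloads.
    Hypothetical illustration only."""
    for i in range(0, len(events), max_events):
        yield events[i:i + max_events]

events = [f"log line {i}" for i in range(2500)]
batches = list(batch_events(events, max_events=1000))
print([len(b) for b in batches])  # three flushes instead of 2500 requests
```

Larger batches mean fewer HTTP requests at the cost of more memory and higher per-flush latency.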
If Vector sends logs to VictoriaLogs in another datacenter, then it may be useful to enable data compression via the `compression = "gzip"` option.
This usually allows saving network bandwidth and costs by up to 5 times:
```toml
[sinks.vlogs]
  inputs = [ "your_input" ]
  type = "elasticsearch"
  endpoints = [ "http://localhost:9428/insert/elasticsearch/" ]
  mode = "bulk"
  api_version = "v8"
  healthcheck.enabled = false
  compression = "gzip"
```
By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#multitenancy).
If you need to store logs in another tenant, then specify the needed tenant via the `[sinks.vlogs.request.headers]` section.
For example, the following `vector.toml` config instructs Vector to store the data to `(AccountID=12, ProjectID=34)` tenant:
```toml
[sinks.vlogs]
  inputs = [ "your_input" ]
  type = "elasticsearch"
  endpoints = [ "http://localhost:9428/insert/elasticsearch/" ]
  mode = "bulk"
  api_version = "v8"
  healthcheck.enabled = false

  [sinks.vlogs.request.headers]
    AccountID = "12"
    ProjectID = "34"
```
See also:

- [How to query VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/querying/).
- [Elasticsearch output docs for Vector.dev](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/).
- [Docker-compose demo for Vector integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/vector-docker).