[[memory-queue]]
=== Memory queue

By default, Logstash uses in-memory bounded queues between pipeline stages (inputs → pipeline workers) to buffer events.
If Logstash experiences a temporary machine failure, the contents of the memory queue will be lost.
Temporary machine failures are scenarios where Logstash or its host machine is terminated abnormally but is capable of being restarted.

[[mem-queue-benefits]]
==== Benefits of memory queues

The memory queue might be a good choice if you value throughput over data resiliency.

* Easier configuration
* Easier management and administration
* Faster throughput

[[mem-queue-limitations]]
==== Limitations of memory queues

* Can lose data in abnormal termination
* Don't handle sudden bursts of data well, when extra capacity is needed for {ls} to catch up

TIP: Consider using <<persistent-queues,persistent queues>> to avoid these limitations.

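If you make that switch, the queue implementation is selected with the `queue.type` setting. A minimal `logstash.yml` sketch, shown here only for illustration (all other settings keep their defaults):

[source,yaml]
----
# Use the disk-backed persistent queue instead of the default in-memory queue
queue.type: persisted
----
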
[[sizing-mem-queue]]
==== Memory queue size

Memory queue size is not configured directly.
Instead, it is defined by a maximum number of events, and the size of each event can vary greatly depending on its payload.

The maximum number of events that can be held in each memory queue is equal to
the value of `pipeline.batch.size` multiplied by the value of
`pipeline.workers`.
This value is called the "inflight count."

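For example, with the default `pipeline.batch.size` of 125 on a host where `pipeline.workers` defaults to 8 CPU cores (an assumed host size, used only for illustration), each memory queue can hold at most 125 × 8 = 1,000 in-flight events.
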
NOTE: Each pipeline has its own queue.

See <<tuning-logstash>> for more info on the effects of adjusting `pipeline.batch.size` and `pipeline.workers`.

[[mq-settings]]
===== Settings that affect queue size

These values can be configured in `logstash.yml` and `pipelines.yml`.

pipeline.batch.size::
Number of events to retrieve from inputs before sending them to the filters+outputs stage.
The default is 125.

pipeline.workers::
Number of workers that will, in parallel, execute the filters+outputs stage of the pipeline.
This value defaults to the number of the host's CPU cores.
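
For illustration, a minimal sketch of how both settings might be set in `logstash.yml`. The numbers below are placeholder values, not recommendations; tune them for your own workload and hardware:

[source,yaml]
----
# Larger batches with a fixed worker count raise the inflight count,
# and therefore the maximum memory queue size, to 250 x 4 = 1000 events.
pipeline.batch.size: 250
pipeline.workers: 4
----

The same keys can also be set per pipeline in `pipelines.yml`, where each pipeline entry overrides the defaults from `logstash.yml` for that pipeline.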

[[backpressure-mem-queue]]
==== Back pressure

When the queue is full, Logstash puts back pressure on the inputs to stall data
flowing into Logstash.
This mechanism helps Logstash control the rate of data flow at the input stage
without overwhelming outputs like Elasticsearch.

Each input handles back pressure independently.