
Serilog.Sinks.Elasticsearch
This repository contains two NuGet packages: Serilog.Sinks.Elasticsearch and Serilog.Formatting.Elasticsearch.
Just a heads up that the .NET team @elastic have created their own new Serilog Sink called Elastic.Serilog.Sinks (Package: https://www.nuget.org/packages/Elastic.Serilog.Sinks#readme-body-tab and documentation: https://www.elastic.co/guide/en/ecs-logging/dotnet/current/serilog-data-shipper.html). Although this current sink will still work, I advise you to have a look first at the official Elastic implementation as it is better supported and more up to date.
The Serilog Elasticsearch sink project is a sink (basically a writer) for the Serilog logging framework. Structured log events are written to sinks, and each sink is responsible for writing them to its own backend, database, store, etc. This sink delivers the data to Elasticsearch, a NoSQL search engine. It does this in a structure similar to Logstash and makes it easy to use Kibana for visualizing your logs.
TypeName is handled automatically across major versions 6, 7 and 8. Versions 2 and 5 of Elasticsearch are no longer supported. Version 9.0.0 of the sink targets netstandard2.0 and can therefore run on any framework that supports it (both .NET Core and .NET Framework); however, we focus our testing on .NET 6.0 to keep maintenance simpler.

Install-Package serilog.sinks.elasticsearch
The simplest way to register this sink is to use the default configuration:
var loggerConfig = new LoggerConfiguration()
.WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200")));
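To start writing events, create the logger from this configuration and assign it to the static Log class (a minimal sketch; the message is illustrative):

Log.Logger = loggerConfig.CreateLogger();
Log.Information("Writing structured events to Elasticsearch");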
Or, if using .NET Core with the Serilog.Settings.Configuration NuGet package and appsettings.json, the default configuration would look like this:
{
  "Serilog": {
    "Using": [ "Serilog.Sinks.Elasticsearch" ],
    "MinimumLevel": "Warning",
    "WriteTo": [
      {
        "Name": "Elasticsearch",
        "Args": {
          "nodeUris": "http://localhost:9200"
        }
      }
    ]
  }
}
A more elaborate configuration, using additional NuGet packages (e.g. Serilog.Enrichers.Environment), would look like this:
{
  "Serilog": {
    "Using": [ "Serilog.Sinks.Elasticsearch" ],
    "MinimumLevel": "Warning",
    "WriteTo": [
      {
        "Name": "Elasticsearch",
        "Args": {
          "nodeUris": "http://localhost:9200"
        }
      }
    ],
    "Enrich": [ "FromLogContext", "WithMachineName" ],
    "Properties": {
      "Application": "My app"
    }
  }
}
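The WithMachineName enricher used above ships in the separate Serilog.Enrichers.Environment package, so install it alongside the sink:

Install-Package Serilog.Enrichers.Environment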
This way the sink will detect the version of the Elasticsearch server (DetectElasticsearchVersion is set to true by default) and handle the TypeName behavior correctly, based on the server version (6.x, 7.x or 8.x).
Alternatively, DetectElasticsearchVersion can be set to false and certain options can be configured manually. In that case the sink assumes Elasticsearch version 7, and the configured options may be ignored if there is a version incompatibility. For example, you can configure the sink to force registration of a v6 index template. Be aware that the AutoRegisterTemplate option will not overwrite an existing template.
var loggerConfig = new LoggerConfiguration()
    .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
    {
        DetectElasticsearchVersion = false,
        AutoRegisterTemplate = true,
        AutoRegisterTemplateVersion = AutoRegisterTemplateVersion.ESv6
    });
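If you do need to replace a template that is already registered, the overwriteTemplate setting shown in the configuration examples below can also be set in code. A sketch, assuming the corresponding option on ElasticsearchSinkOptions is named OverwriteTemplate:

var loggerConfig = new LoggerConfiguration()
    .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
    {
        AutoRegisterTemplate = true,
        // Assumed option name; replaces an existing template with the same name.
        OverwriteTemplate = true
    });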
Besides registering the sink in code, it is possible to register it using the appSettings reader (from v2.0.42+), as shown below.
This example shows the options that are currently available when using the appSettings reader.
<appSettings>
<add key="serilog:using" value="Serilog.Sinks.Elasticsearch"/>
<add key="serilog:write-to:Elasticsearch.nodeUris" value="http://localhost:9200;http://remotehost:9200"/>
<add key="serilog:write-to:Elasticsearch.indexFormat" value="custom-index-{0:yyyy.MM}"/>
<add key="serilog:write-to:Elasticsearch.templateName" value="myCustomTemplate"/>
<add key="serilog:write-to:Elasticsearch.typeName" value="myCustomLogEventType"/>
<add key="serilog:write-to:Elasticsearch.pipelineName" value="myCustomPipelineName"/>
<add key="serilog:write-to:Elasticsearch.batchPostingLimit" value="50"/>
<add key="serilog:write-to:Elasticsearch.batchAction" value="Create"/><!-- "Index" is default -->
<add key="serilog:write-to:Elasticsearch.period" value="2"/>
<add key="serilog:write-to:Elasticsearch.inlineFields" value="true"/>
<add key="serilog:write-to:Elasticsearch.restrictedToMinimumLevel" value="Warning"/>
<add key="serilog:write-to:Elasticsearch.bufferBaseFilename" value="C:\Temp\SerilogElasticBuffer"/>
<add key="serilog:write-to:Elasticsearch.bufferFileSizeLimitBytes" value="5242880"/>
<add key="serilog:write-to:Elasticsearch.bufferLogShippingInterval" value="5000"/>
<add key="serilog:write-to:Elasticsearch.bufferRetainedInvalidPayloadsLimitBytes" value="5000"/>
<add key="serilog:write-to:Elasticsearch.bufferFileCountLimit " value="31"/>
<add key="serilog:write-to:Elasticsearch.connectionGlobalHeaders" value="Authorization=Bearer SOME-TOKEN;OtherHeader=OTHER-HEADER-VALUE" />
<add key="serilog:write-to:Elasticsearch.connectionTimeout" value="5" />
<add key="serilog:write-to:Elasticsearch.emitEventFailure" value="WriteToSelfLog" />
<add key="serilog:write-to:Elasticsearch.queueSizeLimit" value="100000" />
<add key="serilog:write-to:Elasticsearch.autoRegisterTemplate" value="true" />
<add key="serilog:write-to:Elasticsearch.autoRegisterTemplateVersion" value="ESv7" />
<add key="serilog:write-to:Elasticsearch.detectElasticsearchVersion" value="false" /><!-- `true` is default -->
<add key="serilog:write-to:Elasticsearch.overwriteTemplate" value="false" />
<add key="serilog:write-to:Elasticsearch.registerTemplateFailure" value="IndexAnyway" />
<add key="serilog:write-to:Elasticsearch.deadLetterIndexName" value="deadletter-{0:yyyy.MM}" />
<add key="serilog:write-to:Elasticsearch.numberOfShards" value="20" />
<add key="serilog:write-to:Elasticsearch.numberOfReplicas" value="10" />
<add key="serilog:write-to:Elasticsearch.formatProvider" value="My.Namespace.MyFormatProvider, My.Assembly.Name" />
<add key="serilog:write-to:Elasticsearch.connection" value="My.Namespace.MyConnection, My.Assembly.Name" />
<add key="serilog:write-to:Elasticsearch.serializer" value="My.Namespace.MySerializer, My.Assembly.Name" />
<add key="serilog:write-to:Elasticsearch.connectionPool" value="My.Namespace.MyConnectionPool, My.Assembly.Name" />
<add key="serilog:write-to:Elasticsearch.customFormatter" value="My.Namespace.MyCustomFormatter, My.Assembly.Name" />
<add key="serilog:write-to:Elasticsearch.customDurableFormatter" value="My.Namespace.MyCustomDurableFormatter, My.Assembly.Name" />
<add key="serilog:write-to:Elasticsearch.failureSink" value="My.Namespace.MyFailureSink, My.Assembly.Name" />
</appSettings>
With the appSettings configuration the nodeUris property is required. Multiple nodes can be specified, separated by , or ;. The <add key="serilog:using" value="Serilog.Sinks.Elasticsearch"/> setting is also required to include this sink. All other properties are optional. If you do not explicitly specify an indexFormat setting, a generic index such as 'logstash-[current_date]' will be used automatically.
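The indexFormat setting can also be applied in code through the sink options. A sketch, assuming the code-level counterpart of indexFormat is the IndexFormat option:

var loggerConfig = new LoggerConfiguration()
    .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
    {
        // Rolls the index monthly, e.g. custom-index-2024.01; omit this to fall back
        // to the default logstash-[current_date] index mentioned above.
        IndexFormat = "custom-index-{0:yyyy.MM}"
    });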
And start writing your events using Serilog.
Install-Package serilog.formatting.elasticsearch
The Serilog.Formatting.Elasticsearch NuGet package contains several formatters:

- ElasticsearchJsonFormatter - a custom JSON formatter that respects the configured property name handling and forces Timestamp to @timestamp.
- ExceptionAsObjectJsonFormatter - a JSON formatter which serializes any exception into an exception object.

Override the default formatter, if the selected sink allows it:
var loggerConfig = new LoggerConfiguration()
.WriteTo.Console(new ElasticsearchJsonFormatter());
Be aware that Elasticsearch applies explicit and implicit type mappings inside an index. A value called X sent as a string will be indexed as a string; sending the same X as an integer in a subsequent log message will not work. Elasticsearch will raise a mapping exception, but because events are submitted in bulk it is not obvious that your log item was not stored.

So be careful about defining and using your fields (and their types). It is easy to miss that you first send a {User} as a simple username (string) and next as a User object; the first mapping dynamically created in the index wins. See also issue #184 for details and a possible solution. There are also limits in Elasticsearch on the number of dynamic fields you can add to an index.
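A hypothetical illustration of that pitfall, assuming a logger that is already configured with this sink:

// The first event causes "User" to be mapped as a string in the index.
Log.Information("Login by {User}", "alice");
// Sending an object for the same field later conflicts with the existing
// mapping, and the event is rejected during the bulk insert.
Log.Information("Login by {@User}", new { Name = "alice", Id = 42 });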
In order to avoid a potentially deeply nested JSON structure for exceptions with inner exceptions, by default the logged exception and its inner exceptions are logged as an array of exceptions in the field exceptions. Use the Depth field to traverse the inner exception chain.
However, not all features in Kibana work well with JSON arrays (for instance, including exception fields on dashboards and visualizations). Therefore, we provide an alternative formatter, ExceptionAsObjectJsonFormatter, which serializes the exception into the exception field as an object with nested InnerException properties. This was also the default behavior of the sink before version 2. To use it, simply specify it as the CustomFormatter when creating the sink:
new ElasticsearchSink(new ElasticsearchSinkOptions(url)
{
    CustomFormatter = new ExceptionAsObjectJsonFormatter(renderMessage: true)
});
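The same formatter can also be supplied through the CustomFormatter option when registering the sink via WriteTo.Elasticsearch (a sketch):

var loggerConfig = new LoggerConfiguration()
    .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
    {
        // Serializes exceptions as a single object with nested InnerException properties.
        CustomFormatter = new ExceptionAsObjectJsonFormatter(renderMessage: true)
    });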
appsettings.json configuration

To use the Elasticsearch sink with Microsoft.Extensions.Configuration, for example with ASP.NET Core or .NET Core, use the Serilog.Settings.Configuration package. First install that package if you have not already done so:
Install-Package Serilog.Settings.Configuration
Instead of configuring the sink directly in code, call ReadFrom.Configuration():
var configuration = new ConfigurationBuilder()
    .SetBasePath(env.ContentRootPath)
    .AddJsonFile("appsettings.json")
    .Build();

var logger = new LoggerConfiguration()
    .ReadFrom.Configuration(configuration)
    .CreateLogger();
In your appsettings.json file, configure the sink under the Serilog node:
{
  "Serilog": {
    "WriteTo": [
      {
        "Name": "Elasticsearch",
        "Args": {
          "nodeUris": "http://localhost:9200;http://remotehost:9200/",
          "indexFormat": "custom-index-{0:yyyy.MM}",
          "templateName": "myCustomTemplate",
          "typeName": "myCustomLogEventType",
          "pipelineName": "myCustomPipelineName",
          "batchPostingLimit": 50,
          "batchAction": "Create",
          "period": 2,
          "inlineFields": true,
          "restrictedToMinimumLevel": "Warning",
          "bufferBaseFilename": "C:/Temp/docker-elk-serilog-web-buffer",
          "bufferFileSizeLimitBytes": 5242880,
          "bufferLogShippingInterval": 5000,
          "bufferRetainedInvalidPayloadsLimitBytes": 5000,
          "bufferFileCountLimit": 31,
          "connectionGlobalHeaders": "Authorization=Bearer SOME-TOKEN;OtherHeader=OTHER-HEADER-VALUE",
          "connectionTimeout": 5,
          "emitEventFailure": "WriteToSelfLog",
          "queueSizeLimit": "100000",
          "autoRegisterTemplate": true,
          "autoRegisterTemplateVersion": "ESv2",
          "overwriteTemplate": false,
          "registerTemplateFailure": "IndexAnyway",
          "deadLetterIndexName": "deadletter-{0:yyyy.MM}",
          "numberOfShards": 20,
          "numberOfReplicas": 10,
          "templateCustomSettings": [ { "index.mapping.total_fields.limit": "10000000" } ],
          "formatProvider": "My.Namespace.MyFormatProvider, My.Assembly.Name",
          "connection": "My.Namespace.MyConnection, My.Assembly.Name",
          "serializer": "My.Namespace.MySerializer, My.Assembly.Name",
          "connectionPool": "My.Namespace.MyConnectionPool, My.Assembly.Name",
          "customFormatter": "My.Namespace.MyCustomFormatter, My.Assembly.Name",
          "customDurableFormatter": "My.Namespace.MyCustomDurableFormatter, My.Assembly.Name",
          "failureSink": "My.Namespace.MyFailureSink, My.Assembly.Name"
        }
      }
    ]
  }
}
See the XML <appSettings> example above for a discussion of the available Args options.
From version 5.5 you have the option to specify how to handle failures when writing to Elasticsearch. Since the sink delivers events in batches, it is possible that one or more events cannot be stored, for example because of a mapping issue, and it is hard to find out what happened. There is an option called EmitEventFailure, a flags enum whose values (such as WriteToSelfLog, WriteToFailureSink and RaiseCallback) can be combined to control what happens in that case.
An example:
.WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
{
    FailureCallback = e => Console.WriteLine("Unable to submit event " + e.MessageTemplate),
    EmitEventFailure = EmitEventFailureHandling.WriteToSelfLog |
                       EmitEventFailureHandling.WriteToFailureSink |
                       EmitEventFailureHandling.RaiseCallback,
    FailureSink = new FileSink("./failures.txt", new JsonFormatter(), null)
})
With the AutoRegisterTemplate option the sink will write a default template to Elasticsearch. If the template could not be registered, you might not want to index at all, as it can affect data quality. Since version 5.5 you can control this with the RegisterTemplateFailure option; set it, for example, to IndexAnyway as in the configuration examples above.
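A sketch of setting this in code; the enum name RegisterTemplateRecovery is an assumption based on the registerTemplateFailure values used in the configuration examples above:

var loggerConfig = new LoggerConfiguration()
    .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
    {
        AutoRegisterTemplate = true,
        // Assumed enum type; IndexAnyway keeps indexing events even if the
        // template could not be registered.
        RegisterTemplateFailure = RegisterTemplateRecovery.IndexAnyway
    });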
Since version 7 you can specify an action to take when a log row is denied by Elasticsearch because of its data (payload), if a durable buffer file is specified, e.g.:
// Requires Newtonsoft.Json (JObject / JsonConvert) for parsing the rejected payload.
BufferCleanPayload = (failingEvent, statuscode, exception) =>
{
    dynamic e = JObject.Parse(failingEvent);
    return JsonConvert.SerializeObject(new Dictionary<string, object>()
    {
        { "@timestamp", e["@timestamp"] },
        { "level", "Error" },
        { "message", "Error: " + e.message },
        { "messageTemplate", e.messageTemplate },
        { "failingStatusCode", statuscode },
        { "failingException", exception }
    });
},
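For context, a sketch of how such a callback can be wired into the sink options together with the durable buffer it depends on (the buffer path is illustrative):

var loggerConfig = new LoggerConfiguration()
    .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
    {
        // A durable buffer file must be configured for BufferCleanPayload to apply.
        BufferBaseFilename = "./buffer/serilog-elastic",
        // Return a trimmed-down replacement document instead of the rejected payload.
        BufferCleanPayload = (failingEvent, statuscode, exception) =>
            "{ \"level\": \"Error\", \"message\": \"Event was rejected by Elasticsearch\" }"
    });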
The IndexDecider did not work well when a durable buffer file was specified, so the BufferIndexDecider option was added. The data type of logEvent is string, e.g.:
BufferIndexDecider = (logEvent, offset) => "log-serilog-" + (new Random().Next(0, 2)),
The BufferFileCountLimit option was added: the maximum number of buffer log files that will be retained, including the current one. For unlimited retention, pass null. The default is 31.
The BufferFileSizeLimitBytes option was added: the maximum size, in bytes, to which the buffer log file for a specific date will be allowed to grow. By default 100L * 1024 * 1024 (100 MB) is applied.
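Both buffer limits can be set in code through the corresponding sink options (a sketch; the values mirror the documented defaults):

var loggerConfig = new LoggerConfiguration()
    .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
    {
        BufferBaseFilename = "./buffer/serilog-elastic", // illustrative path; enables the durable buffer
        BufferFileCountLimit = 31,                       // keep at most 31 buffer files, including the current one
        BufferFileSizeLimitBytes = 100L * 1024 * 1024    // 100 MB per buffer file
    });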
Starting from version 6, the sink has been upgraded to work with Elasticsearch 6.0 and has support for the new templates used by ES 6.
If you use the AutoRegisterTemplate option, you need to set the AutoRegisterTemplateVersion option to ESv6 in order to generate default templates that are compatible with the breaking changes in ES 6.
Starting from version 4, the sink has been upgraded to work with Serilog 2.0 and has .NET Core support.
Starting from version 3, the sink supports the Elasticsearch.Net 2 package and Elasticsearch version 2. If you need Elasticsearch 1.x support, then stick with version 2 of the sink. The function
protected virtual ElasticsearchResponse<T> EmitBatchChecked<T>(IEnumerable<LogEvent> events)
now uses a generic type. This allows you to map to either DynamicResponse when using Elasticsearch.NET or to BulkResponse if you want to use NEST.
We also dropped support for .NET 4, since the Elasticsearch.Net client no longer supports that version of the framework. If you need to use .NET 4, stick with the 2.x version of the sink.
Be aware that version 2 introduces some breaking changes.