Batching doesn't seem to work that well
Step 1: Describe your environment
- Windows 10
- Visual Studio 2017
- .NET Core 2.1
Step 2: Describe the problem
The durable HTTP sink doesn't seem to respect the configured batch size (batchPostingLimit).
Steps to reproduce:
- Create a .NET Core 2.1 ASP.NET Core Web API project and replace the default ValuesController with the following (the POST action records how many events arrive in each request, and the GET action returns those counts so the batch sizes can be inspected in a browser):
[Route("api/[controller]")]
[ApiController]
public class EventsController : ControllerBase
{
static List<int> _counts = new List<int>();
// GET api/events
[HttpGet]
public IEnumerable<string> Get()
{
return _counts.Select(x=>x.ToString());
}
public IActionResult Post([FromBody] EventBatchRequestDto batch)
{
_counts.Add(batch.Events.Count());
return Ok();
}
}
public class EventBatchRequestDto
{
public IEnumerable<EventDto> Events { get; set; }
}
public class EventDto
{
public DateTime Timestamp { get; set; }
public String Level { get; set; }
public String MessageTemplate { get; set; }
public String RenderedMessage { get; set; }
public String Exception { get; set; }
public Dictionary<String, dynamic> Properties { get; set; }
public Dictionary<String, RenderingDto[]> Renderings { get; set; }
}
public class RenderingDto
{
public String Format { get; set; }
public String Rendering { get; set; }
public override Boolean Equals(Object obj)
{
if (!(obj is RenderingDto other))
return false;
return
Format == other.Format &&
Rendering == other.Rendering;
}
public override Int32 GetHashCode()
{
return 0;
}
}
- Create a new .NET Core 2.1 console app and make sure you have these NuGet packages referenced:
<PackageReference Include="Bogus" Version="24.1.0" />
<PackageReference Include="Serilog.Sinks.Console" Version="3.1.1" />
<PackageReference Include="Serilog.Sinks.Http" Version="5.0.1" />
This is the console app's Program.cs. MAKE SURE YOU CHANGE THE HTTP ENDPOINT TO YOUR OWN ONE:
using System;
using System.Net.Http;
using System.Threading;
using Serilog.Sinks.Http.BatchFormatters;

namespace Serilog.Http.Tester
{
    class Program
    {
        static void Main(string[] args)
        {
            Random rand = new Random(5000);

            // CustomerGenerator, OrderGenerator and SerilogHttpSinkHttpClientWrapper are custom
            // types that are not included in this issue (see the sketch after this listing).
            ILogger logger = new LoggerConfiguration()
                .MinimumLevel.Verbose()
                .WriteTo.DurableHttp(
                    requestUri: "http://localhost:52603/api/events",
                    batchPostingLimit: 10,
                    batchFormatter: new DefaultBatchFormatter(),
                    httpClient: new SerilogHttpSinkHttpClientWrapper(
                        new HttpClient(new HttpClientHandler
                        {
                            ClientCertificateOptions = ClientCertificateOption.Manual,
                            ServerCertificateCustomValidationCallback = (_, __, ___, ____) => true
                        }),
                        true))
                .WriteTo.Console()
                .CreateLogger()
                .ForContext<Program>();

            var customerGenerator = new CustomerGenerator();
            var orderGenerator = new OrderGenerator();

            int i = 0;
            while (true)
            {
                var customer = customerGenerator.Generate();
                var order = orderGenerator.Generate();

                logger.Information("{@customer} placed {@order}", customer, order);

                i++;
                Console.WriteLine($"Sent {i} events");
                Thread.Sleep(rand.Next(0, 1000));
            }
        }
    }
}
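Note that CustomerGenerator, OrderGenerator and SerilogHttpSinkHttpClientWrapper are custom types whose definitions were not included in the issue. Below is a minimal sketch of what they might look like, assuming the wrapper is a thin adapter implementing the sink's IHttpClient abstraction (as it looked in Serilog.Sinks.Http 5.x) and the generators are simple Bogus fakers; the author's actual implementations may differ.

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Bogus;
using Serilog.Sinks.Http;

namespace Serilog.Http.Tester
{
    // Assumed shape: adapts System.Net.Http.HttpClient to the sink's IHttpClient interface.
    public class SerilogHttpSinkHttpClientWrapper : IHttpClient
    {
        private readonly HttpClient _client;
        private readonly bool _disposeClient;

        public SerilogHttpSinkHttpClientWrapper(HttpClient client, bool disposeClient)
        {
            _client = client;
            _disposeClient = disposeClient;
        }

        public Task<HttpResponseMessage> PostAsync(string requestUri, HttpContent content)
        {
            return _client.PostAsync(requestUri, content);
        }

        public void Dispose()
        {
            if (_disposeClient)
            {
                _client.Dispose();
            }
        }
    }

    // Hypothetical demo payloads and Bogus-based generators; the real ones are not shown in the issue.
    public class Customer
    {
        public string Name { get; set; }
    }

    public class Order
    {
        public int Quantity { get; set; }
    }

    public class CustomerGenerator
    {
        private readonly Faker<Customer> _faker =
            new Faker<Customer>().RuleFor(c => c.Name, f => f.Name.FullName());

        public Customer Generate() => _faker.Generate();
    }

    public class OrderGenerator
    {
        private readonly Faker<Order> _faker =
            new Faker<Order>().RuleFor(o => o.Quantity, f => f.Random.Int(1, 10));

        public Order Generate() => _faker.Generate();
    }
}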
- Run both of these together in Visual Studio
Observed results
So leave it running for a while, then hit the GET endpoint URI in Chrome (which I am using just to see the batch size seen by the previous POST endpoint calls from this Serilog sink); for me this is http://localhost:52603/api/events.
Expected results
I expected to see the messages batched according to the batchPostingLimit that I have in the console app code above, which is 10.
But instead I see output like this. This is with me sending 29 log messages, which are sent to the Serilog sink with a random 0-1s delay between each one:
Is there some windowing feature at play with the batchPostingLimit?
Even if I leave it out entirely, where the default should be 1000, I get these sorts of results.
Top GitHub Comments
I've modified your first version of the controller to look something like this:

And the console application looks like this:

The response from the route http://localhost:52603/api/events would look something like this.

Please note that I am setting the period of the sink to 2 seconds. This means that the sink, no more often than every other second, investigates whether any log events have been written to disk, awaiting to be posted over the network. If any log events are found, they are batched up according to batchPostingLimit, i.e. a maximum of 30 log events per HTTP request. For 100 log events, that means batches of 30, 30, 30 and finally 10 log events per HTTP request, fired in quick succession. When the sink has sent all batches, given no new log events have been written to disk, the log event shipper goes back to sleep and resumes its responsibility after the given period.

Perhaps batchPostingLimit is in need of clarification in the documentation? It is not describing the size of a buffer that is flushed when it gets full. It is instead a value describing the maximum number of log events that a single HTTP request can contain. It is meant to be a way for the log event producer to limit the potential size of the HTTP packages being sent over the network, if for some reason the receiver has limits regarding package sizes.

Did this bring any clarity to your issue, or can I help you in any other way?
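To make that interplay concrete, here is a minimal sketch of a configuration along the lines described above, assuming the same endpoint as the repro, a period of 2 seconds and a batchPostingLimit of 30; the maintainer's exact code is not shown in the issue, so the names and values here are illustrative only.

using System;
using Serilog;

class BatchingDemo
{
    static void Main()
    {
        // Sketch only: with these settings, writing 100 events in one burst should result in
        // HTTP requests carrying 30, 30, 30 and finally 10 events, sent in quick succession
        // once the next 2-second period elapses.
        Log.Logger = new LoggerConfiguration()
            .MinimumLevel.Verbose()
            .WriteTo.DurableHttp(
                requestUri: "http://localhost:52603/api/events",
                batchPostingLimit: 30,             // maximum number of events per HTTP request
                period: TimeSpan.FromSeconds(2))   // how often buffered events are shipped
            .CreateLogger();

        for (var i = 0; i < 100; i++)
        {
            Log.Information("Event number {Number}", i);
        }

        Log.CloseAndFlush();
    }
}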
Sorry about that