
Bug in Async flush when sending >20,000 events with batch size of 2,000 #4


Open
gabehunt opened this issue Nov 17, 2020 · 2 comments

@gabehunt
There is a bug in the async flush in an ASP.NET Core Web API: only 2,000-3,000 events get sent to the Event Hub when logging >10,000 events with a batch size of 2,000.

My local fix was to change the marked lines below. Now, after the Web API controller returns its immediate response, the async log events are written to the Event Hub in a background process on the web server until they are complete (barring the edge case where the Web API process is intentionally terminated).

    private async Task ReadChannelAsync()
    {
        var tasks = new List<Task>(); // CHANGED: collect send tasks

        while (await this.channel.Reader.WaitToReadAsync())
        {
            // No client is available, so briefly pause before retrying.
            if (this.eventHubClient is null)
            {
                await Task.Delay(TimeSpan.FromSeconds(1)).ConfigureAwait(false);

                continue;
            }

            var eventDataBatch = this.eventHubClient.CreateBatch();

            while (this.channel.Reader.TryRead(out var eventData))
            {
                // Attempt to add the current event data to existing batch.
                if (eventDataBatch.TryAdd(eventData))
                {
                    // There was space available, try to read more event data.
                    continue;
                }

                // There was not enough space available, so send the current batch and create a
                // new one.
                tasks.Add(TrySendAsync(eventDataBatch)); // CHANGED
                eventDataBatch = this.eventHubClient.CreateBatch();

                // Attempt to add the current event data to new batch.
                eventDataBatch.TryAdd(eventData);
            }

            // No more event data is currently available, so send the current batch.
            tasks.Add(TrySendAsync(eventDataBatch)); // CHANGED

        }

        await Task.WhenAll(tasks.ToArray()); // CHANGED: await all outstanding sends
    }
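One subtlety this fix relies on: the outer `WaitToReadAsync` loop only exits, and the final `Task.WhenAll` is only reached, once the channel's writer is completed. A minimal sketch of that shutdown path, assuming the provider owns the channel and the reader task (member names here are illustrative, not the library's actual API):

    public async ValueTask DisposeAsync()
    {
        // Completing the writer lets WaitToReadAsync return false once the
        // remaining queued events have been drained...
        this.channel.Writer.Complete();

        // ...so awaiting the reader task here also awaits the final
        // Task.WhenAll over every in-flight TrySendAsync call.
        await this.readChannelTask.ConfigureAwait(false);
    }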
@gabehunt (Author)

My appsettings.json, for reference (I am using Wait mode to get the blocking behaviour, but it wasn't actually waiting before the local fix to the code).

    {
        "Logging": {
            "AzureEventHubs": {
                "Endpoint": "sb://a....",
                "SharedAccessKeyName": ".....",
                "SharedAccessKey": "........",
                "QueueDepth": 2000,
                "QueueMode": "Wait"
            },
            "LogLevel": {
                "Default": "Information",
                "Microsoft": "Warning",
                "Microsoft.Hosting.Lifetime": "Information"
            }
        }
    }
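For what it's worth, my mental model of those two settings, assuming the logger maps them onto a bounded `System.Threading.Channels` channel (the event type and option names in this sketch are my assumption, not confirmed from the library's source):

    using System.Threading.Channels;

    // "QueueDepth" would be the channel capacity, and "Wait" the full-mode:
    // writers asynchronously wait for space instead of dropping events.
    var channel = Channel.CreateBounded<EventData>(new BoundedChannelOptions(2000)
    {
        FullMode = BoundedChannelFullMode.Wait,
        SingleReader = true
    });

Under `BoundedChannelFullMode.Wait`, `WriteAsync` callers back-pressure rather than lose events, which is why I expected blocking rather than the drops I saw.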

@jamesharling (Contributor)

This would suggest the background processor is getting disposed, or the channel it uses to hold the event stream is not available (which could be a result of that disposal). Could you give some more information on the lifecycle of your app between requests, etc.?
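If disposal between requests does turn out to be the cause, one common mitigation is to tie the reader to the host's lifetime rather than to a scoped service, e.g. via a `BackgroundService`. A sketch under that assumption (illustrative only, not the library's actual registration code):

    using Microsoft.Extensions.Hosting;

    // Hosting the reader as a BackgroundService keeps it alive for the whole
    // app lifetime; it is only stopped and disposed on graceful shutdown.
    public sealed class EventHubFlushService : BackgroundService
    {
        private readonly Func<CancellationToken, Task> readChannelAsync;

        public EventHubFlushService(Func<CancellationToken, Task> readChannelAsync)
            => this.readChannelAsync = readChannelAsync;

        protected override Task ExecuteAsync(CancellationToken stoppingToken)
            => this.readChannelAsync(stoppingToken);
    }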
