Performance issue with large request
NEST/Elasticsearch.Net version: 7.1.0 (also tried 7.10.1)
Elasticsearch version: 7.1.1
Description of the problem including expected versus actual behavior:
We have built an API with .NET and NEST and ran into performance problems, so I added timers and logging to see what takes the most time when we run a search.
Steps to reproduce:
private readonly ElasticClient elasticSearchClient;

public ElasticSearchService(IHttpClientFactory httpClientFactory)
{
    // appSettings holds the Elasticsearch host and default index (injected configuration).
    var settings = new ConnectionSettings(new Uri(appSettings.ElasticSearch.Host))
        .DefaultIndex(appSettings.ElasticSearch.PlaceIndexName);
    elasticSearchClient = new ElasticClient(settings);
}

private async Task<IList<Place>> SearchTextSuggestElasticAsync()
{
    var time1 = DateTime.Now;
    ISearchResponse<Place> searchResponse = await elasticSearchClient.SearchAsync<Place>(/* some query */);
    var requestTime = (DateTime.Now - time1).TotalMilliseconds;

    // Compare the end-to-end client time with the server-side "took" (both in milliseconds).
    if ((requestTime - searchResponse.Took) > 300)
    {
        log.Warn($"the difference : {(requestTime - searchResponse.Took)} End request in {requestTime} took: {searchResponse.Took}");
    }

    var data = searchResponse.Documents.ToList();
    return data;
}
It’s fine with a single request, but when I make about 100 requests per second, requestTime becomes much larger than the took reported by Elasticsearch.
When I use an HttpClient from IHttpClientFactory to make the same number of requests, the problem does not occur, so it does not look like a network issue.
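The HttpClient comparison looked roughly like the sketch below; the request body here is a hypothetical match_all and the host/index come from the same appSettings and httpClientFactory as the snippet above, so treat it as an illustration rather than the exact code that was run:

var httpClient = httpClientFactory.CreateClient();
var time1 = DateTime.Now;

// Hypothetical request body for illustration only.
var body = "{ \"query\": { \"match_all\": {} } }";
var url = $"{appSettings.ElasticSearch.Host}/{appSettings.ElasticSearch.PlaceIndexName}/_search";
var content = new StringContent(body, System.Text.Encoding.UTF8, "application/json");

var httpResponse = await httpClient.PostAsync(url, content);
var json = await httpResponse.Content.ReadAsStringAsync();

var requestTime = (DateTime.Now - time1).TotalMilliseconds;
log.Info($"HttpClient request finished in {requestTime} ms");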
With the NEST code above, a lot of warnings like the following were written to the log:
the difference : 364.1491 End request in 529.1491 took :165
the difference : 351.0123 End request in 531.0123 took :180
the difference : 605.7492 End request in 747.7492 took :142
the difference : 587.7566 End request in 776.7566 took :189
the difference : 389.3551 End request in 492.3551 took :103
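To dig further into where that extra time goes, a minimal sketch of the same measurement using a Stopwatch (more precise than DateTime.Now) with NEST's debug mode enabled, so the response carries an audit trail of each pipeline step; the query is still elided and the settings, appSettings, and log names mirror the snippet above:

var settings = new ConnectionSettings(new Uri(appSettings.ElasticSearch.Host))
    .DefaultIndex(appSettings.ElasticSearch.PlaceIndexName)
    .EnableDebugMode(); // richer diagnostics on every response

var client = new ElasticClient(settings);

var stopwatch = System.Diagnostics.Stopwatch.StartNew();
ISearchResponse<Place> response = await client.SearchAsync<Place>(/* some query */);
stopwatch.Stop();

var requestTime = stopwatch.Elapsed.TotalMilliseconds;
if (requestTime - response.Took > 300)
{
    // Each audit entry covers one pipeline step with start/end timestamps,
    // which helps separate client-side overhead from the server-side "took".
    foreach (var audit in response.ApiCall.AuditTrail)
    {
        log.Warn($"{audit.Event}: {(audit.Ended - audit.Started).TotalMilliseconds} ms");
    }
}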
Expected behavior: requestTime should stay close to the took value reported by Elasticsearch, even at around 100 requests per second.
Top GitHub Comments
Thanks, I will give it a try when I come back to the company.
On Mon, 25 Jan 2021 at 19:50, Steve Gordon notifications@github.com wrote:
@trongvodang Those counters look as I’d expect; no unusually high volume of connections. Thanks for running that. Please kill that cloud instance, as I wouldn’t want those credentials to be misused. As part of our ongoing work for the next major release we will be focusing on performance and doing lots of profiling.
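For context on the counters being discussed: they relate to how many outgoing connections the client keeps open to each Elasticsearch node. If connection saturation were the suspect, the per-node limit can be adjusted through ConnectionSettings; a rough sketch only, where 200 is an arbitrary example value and, per the comment above, connections did not look like the bottleneck in this case:

// Sketch: raising the per-node connection limit, one knob worth checking
// under high request rates (not indicated here, per the counters discussed above).
var settings = new ConnectionSettings(new Uri(appSettings.ElasticSearch.Host))
    .DefaultIndex(appSettings.ElasticSearch.PlaceIndexName)
    .ConnectionLimit(200);

var client = new ElasticClient(settings);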