slowdown since 7.14.0
See original GitHub issue
In https://github.com/elastic/elasticsearch-py/pull/1623 a preflight request to `/` was added.
This has close to doubled response times for our AWS Lambda workflows.
Is there any official guidance on how to address this regression in short-lived environments?
Or is the only course of action to monkeypatch the validation and turn it off?
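For anyone looking for the monkeypatch route mentioned above, here is a minimal sketch. It relies on a private transport attribute (`_verified_elasticsearch`) that, as far as I can tell, gates the preflight check in the 7.14.x line; the attribute name and the placeholder endpoint are assumptions rather than supported API, and may differ in other releases.

```python
# Minimal sketch of the monkeypatch route (assumptions: the private
# _verified_elasticsearch attribute gates the preflight check in 7.14.x;
# the endpoint below is a placeholder). Not a supported API; may break on upgrade.
from elasticsearch import Elasticsearch

client = Elasticsearch(["https://search.example.invalid:9200"])

# Pre-mark the transport as having passed the product check so the extra
# preflight GET / is skipped before the first real request.
client.transport._verified_elasticsearch = True
```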
Issue Analytics
- State:
- Created 2 years ago
- Reactions: 1
- Comments: 10 (4 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@sethmlarson, are you ignoring my question? You closed the PR posted by @OriHoch, but have not replied here explaining the decision not to have a flag for disabling the product check.
@kimchy I, as well as the rest of your customer base, would appreciate having changes like this thoroughly communicated and the rationale behind them explained. Having a product check in place makes sense, to ensure that this repo’s issue tracker does not get bombarded with issues from users running the library against incompatible APIs. However, refusing a change that introduces a flag to disable this check, for users who are heavily impacted by the additional preflight request, makes this change look less like the team protecting themselves against bogus issues and more like a knee-jerk reaction to what AWS is doing with respect to their fork.

I will repeat myself: as it stands, v7.14.0 kills the performance of our service running on AWS Lambda, forcing us to version-lock ourselves to v7.13.3. A massively impactful change like this being implemented without adequate communication from leadership or project managers is, frankly, unacceptable. All we’re left with is one maintainer with a hair trigger, closing issues as soon as they’re opened and not explaining himself whatsoever. I would highly prefer to avoid taking this through official channels, and I would prefer not to have to go through the hassle of migrating away from your hosted solution.

So I ask again: what is the rationale behind not implementing a flag for disabling the product check?
There isn’t a way to disable the product check aside from the route you mentioned. The `info` API is very quick to respond, as no computation is done beyond returning version information, so it shouldn’t impact request latencies beyond a single RTT.
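One practical mitigation for short-lived environments, since (as far as I can tell) the result of the product check is cached on the client’s transport after the first request: create the client once at module scope, so the extra round trip is paid only on a Lambda cold start rather than on every invocation. The endpoint and index name below are placeholders, not values from this thread.

```python
# Sketch (assumptions: the product check result is cached per client instance
# after the first successful request; endpoint and index name are placeholders).
from elasticsearch import Elasticsearch

# Created once per Lambda container and reused across warm invocations,
# so the preflight GET / happens only after a cold start.
client = Elasticsearch(["https://search.example.invalid:9200"])

def handler(event, context):
    # Warm invocations reuse the already-verified client and pay no extra
    # preflight round trip before the query.
    return client.search(index="my-index", body={"query": {"match_all": {}}})
```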