Huge elasticsearch logs with errors and delayed logging
We've experienced a LOT of Elasticsearch logs (around 210 GB) filled with errors such as:
[2015-09-01 00:00:22,844][DEBUG][action.bulk ] [ex-es01] [events-v1-201508][8] failed to execute bulk item (index) index {[events-v1-201508][events][55e4ce76ba3da70e742e1672], source[{"id":"55e4ce76ba3da70e742e1672","organization_id":"55cb599aba3dab163c38f2cc","project_id":"55cb599bba3dab163c38f2cd","stack_id":"55cd8f8eba3da70e688da1c7","is_first_occurrence":false,"is_fixed":false,"is_hidden":true,"created_utc":"2015-08-31T22:00:22.8281566Z","type":"error","date":"2015-09-01T00:00:14.7002972+02:00","message":"A potentially dangerous Request.Path value was detected from the client (&).","geo":"47,6103,-122,3341", <STRIPPED DATA>]}
org.elasticsearch.index.mapper.MapperParsingException: failed to parse
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:565)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:493)
at org.elasticsearch.index.shard.IndexShard.prepareIndex(IndexShard.java:493)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:409)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:148)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.performOnPrimary(TransportShardReplicationOperationAction.java:574)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase$1.doRun(TransportShardReplicationOperationAction.java:440)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Also, our exception logging currently seems to be delayed by about three hours. Are these two things related? When we first set up Exceptionless, logging was instant.
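The stack trace does not say which field failed to parse. One plausible trigger (an assumption, not something the log confirms) is the geo value "47,6103,-122,3341": if the events index maps geo as a geo_point, a coordinate string written with commas as decimal separators cannot be parsed, and because the action logger runs at DEBUG, every failed bulk item is written to the node log together with its full source document, which is what makes the files grow so quickly. A minimal sketch of that failure mode against a throwaway index (the index name and mapping are hypothetical):

# Create a test index whose "geo" field is mapped as geo_point (hypothetical name and mapping).
curl -XPUT 'http://localhost:9200/geo-test' -d '{
  "mappings": { "events": { "properties": { "geo": { "type": "geo_point" } } } }
}'

# A coordinate string with commas as decimal separators has four comma-separated parts
# instead of the expected "lat,lon" pair, so the document mapper rejects the item; the
# bulk response marks it as failed and the node logs it at DEBUG, source included.
curl -XPOST 'http://localhost:9200/_bulk' --data-binary '{ "index": { "_index": "geo-test", "_type": "events", "_id": "1" } }
{ "geo": "47,6103,-122,3341" }
'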
Issue Analytics
- Created 8 years ago
- Comments: 19 (18 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Sure:
Just a heads up, I moved this to the elasticsearch setup guide since it’s kind of related: https://github.com/exceptionless/Exceptionless/wiki/Elasticsearch-setup-guide#logging-config-optional
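For reference, the knob that guide is about (a sketch assuming an Elasticsearch 1.x-era logging.yml, not the guide's exact content): Elasticsearch 1.x ships with the action logger set to DEBUG, and that logger is what writes each failed bulk item, source document included, to the log. It can usually be raised at runtime through the cluster settings API; the persistent equivalent is setting action: WARN under logger: in logging.yml.

# Hedged sketch: raise the "action" logger so failed bulk items are no longer logged
# at DEBUG with their full source documents. Setting names and behaviour may differ
# between Elasticsearch versions.
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": { "logger.action": "WARN" }
}'

This only quiets the log growth; documents that fail to parse are still rejected from the index, so the underlying mapping problem still needs to be fixed.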