
Huge elasticsearch logs with errors and delayed logging

See original GitHub issue

We’ve experienced a LOT of Elasticsearch logs (<210GB) with errors such as:

[2015-09-01 00:00:22,844][DEBUG][action.bulk              ] [ex-es01] [events-v1-201508][8] failed to execute bulk item (index) index {[events-v1-201508][events][55e4ce76ba3da70e742e1672], source[{"id":"55e4ce76ba3da70e742e1672","organization_id":"55cb599aba3dab163c38f2cc","project_id":"55cb599bba3dab163c38f2cd","stack_id":"55cd8f8eba3da70e688da1c7","is_first_occurrence":false,"is_fixed":false,"is_hidden":true,"created_utc":"2015-08-31T22:00:22.8281566Z","type":"error","date":"2015-09-01T00:00:14.7002972+02:00","message":"A potentially dangerous Request.Path value was detected from the client (&).","geo":"47,6103,-122,3341", <STRIPPED DATA>]}
org.elasticsearch.index.mapper.MapperParsingException: failed to parse
    at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:565)
    at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:493)
    at org.elasticsearch.index.shard.IndexShard.prepareIndex(IndexShard.java:493)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:409)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:148)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.performOnPrimary(TransportShardReplicationOperationAction.java:574)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase$1.doRun(TransportShardReplicationOperationAction.java:440)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Also, our exception logging currently seems to be delayed by about three hours. Are these two things related? When we first set up Exceptionless, the logging was instant.
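
A side note on the rejected document itself: its geo value, "47,6103,-122,3341", uses commas as decimal separators, which yields four comma-separated components where a geo_point string expects exactly two ("lat,lon" with dots). If the events mapping declares geo as a geo_point (an assumption; the mapping is not shown in the issue), that alone would explain the MapperParsingException above. A minimal sketch of the value the mapper would accept:

# hypothetical corrected field from the rejected document above;
# a string-form geo_point must be "lat,lon" with dots as decimal separators
geo: "47.6103,-122.3341"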

Issue Analytics

  • State: closed
  • Created: 8 years ago
  • Comments: 19 (18 by maintainers)

Top GitHub Comments

3 reactions
adamzolotarev commented, Oct 29, 2015

Sure:

# you can override this by setting a system property, for example -Des.logger.level=DEBUG
es.logger.level: INFO
rootLogger: ${es.logger.level}, console, file
logger:
  # log action execution errors for easier debugging
  action: DEBUG
  # reduce the logging for aws, too much is logged under the default INFO
  com.amazonaws: WARN
  org.apache.http: INFO

  # gateway
  #gateway: DEBUG
  #index.gateway: DEBUG

  # peer shard recovery
  #indices.recovery: DEBUG

  # discovery
  #discovery: TRACE

  index.search.slowlog: TRACE, index_search_slow_log_file
  index.indexing.slowlog: TRACE, index_indexing_slow_log_file

# additivity false keeps slowlog entries out of the root logger's appenders,
# so they are written only to the dedicated slowlog files defined below
additivity:
  index.search.slowlog: false
  index.indexing.slowlog: false

appender:
  console:
    type: console
    layout:
      type: consolePattern
      # %d{ISO8601} timestamp, %-5p padded level, %-25c logger name, %m message:
      # produces entries shaped like the one quoted at the top of this issue
      conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

  file:
    type: rollingFile
    file: ${path.logs}/${cluster.name}.log
    maxFileSize: 10MB
    # the 10MB current file plus 10 backups caps this appender near 110MB on disk
    maxBackupIndex: 10
    layout:
      type: pattern
      conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

  # Use the following log4j-extras RollingFileAppender to enable gzip compression of log files. 
  # For more information see https://logging.apache.org/log4j/extras/apidocs/org/apache/log4j/rolling/RollingFileAppender.html
  #file:
    #type: extrasRollingFile
    #file: ${path.logs}/${cluster.name}.log
    #rollingPolicy: timeBased
    #rollingPolicy.FileNamePattern: ${path.logs}/${cluster.name}.log.%d{yyyy-MM-dd}.gz
    #layout:
      #type: pattern
      #conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

  index_search_slow_log_file:
    type: rollingFile
    file: ${path.logs}/${cluster.name}_index_search_slowlog.log
    maxFileSize: 10MB
    maxBackupIndex: 10
    layout:
      type: pattern
      conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

  index_indexing_slow_log_file:
    type: rollingFile
    file: ${path.logs}/${cluster.name}_index_indexing_slowlog.log
    maxFileSize: 10MB
    maxBackupIndex: 10
    layout:
      type: pattern
      conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"
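
Worth flagging for anyone landing here with the same symptom: the action: DEBUG line near the top of this config is what makes every failed bulk item get logged at DEBUG together with its full document source, exactly like the entry quoted in the issue, so a steady stream of mapping failures can balloon into huge log files. A hedged sketch of a quieter setting (WARN is an illustrative choice, not something prescribed in this thread):

logger:
  # at DEBUG, each rejected bulk item is logged with its complete source document;
  # WARN still surfaces real action-level failures without the per-document flood
  action: WARN

On Elasticsearch 1.x the same change can also be applied at runtime through the logger.* keys of the cluster update settings API, so no node restart is needed.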

0 reactions
niemyjski commented, Jan 19, 2016

Just a heads up, I moved this to the elasticsearch setup guide since it’s kind of related: https://github.com/exceptionless/Exceptionless/wiki/Elasticsearch-setup-guide#logging-config-optional


Top Results From Across the Web

  • Big delay in writing to Elasticsearch from some log inputs of ...
    I had to migrate my Elasticsearch 7.0.1 in a docker container from ssd to hdd. After that I can see some filebeat log...
  • Huge elasticsearch log files causing server to run out of ...
    We are having problems with elasticsearch log files growing to close to and sometimes over 1GB every day. Because of this, there...
  • Logstash delay of log sending - elasticsearch
    I'm forwarding application logs to elasticsearch, while performing some grok filters before. The application has a timestamp field and ...
  • Elasticsearch operator getting timed out while connecting ...
    Try to reach the Elasticsearch service through the Elasticsearch operator pod. If it did not work, check from the node level. Raw. $...
  • Slack's New Logging Storage Engine Challenges ...
    Resilience and avoiding single point of failure (SPOF) was another criterion. Suman points out that if a single Elasticsearch node goes down, it...
