ElastAlert does not match any documents + es_debug_trace logs have incorrect info
I have a dockerized setup: one container for ElastAlert and another for Elasticsearch. When I run ElastAlert, I see the following messages:
INFO:elastalert:Ran docker-test from 2016-12-08 12:11 UTC to 2016-12-08 12:24 UTC: 0 query hits, 0 matches, 0 alerts sent
INFO:elastalert:Sleeping for 4 seconds
INFO:elastalert:Queried rule docker-test from 2016-12-08 12:11 UTC to 2016-12-08 12:24 UTC: 0 / 0 hits
INFO:elastalert:Skipping writing to ES: {'hits': 0, 'matches': 0, '@timestamp': '2016-12-08T12:24:14.900516Z', 'rule_name': 'docker-test', 'starttime': '2016-12-08T12:11:58.403179Z', 'endtime': '2016-12-08T12:24:14.890224Z', 'time_taken': 0.01020503044128418}
There is connectivity between the two containers.
When I enable --es_debug_trace ES_DEBUG_TRACE, the information in the trace file is incorrect and not useful, because it logs the default Elasticsearch host instead of the configured one:
curl -XGET 'http://localhost:9200/elastalert_status/elastalert/_search?pretty&size=1000' -d '{
  "filter": {
    "range": {
      "alert_time": {
        "from": "2016-12-06T10:49:04.316439Z",
        "to": "2016-12-08T10:49:04.316471Z"
      }
    }
  },
  "query": {
    "query_string": {
      "query": "!_exists_:aggregate_id AND alert_sent:false"
    }
  },
  "sort": {
    "alert_time": {
      "order": "asc"
    }
  }
}'
curl -XGET 'http://localhost:9200/mos-*/_search?pretty&_source_include=%40timestamp%2C%2A&ignore_unavailable=true&scroll=30s&size=1000' -d '{
  "query": {
    "filtered": {
      "filter": {
        "bool": {
          "must": [
            {
              "range": {
                "@timestamp": {
                  "gt": "2016-12-08T10:34:04.324309Z",
                  "lte": "2016-12-08T10:49:04.324309Z"
                }
              }
            },
            {
              "term": {
                "_type": "mos"
              }
            }
          ]
        }
      }
    }
  },
  "sort": [
    {
      "@timestamp": {
        "order": "asc"
      }
    }
  ]
}'
curl -XPOST 'http://localhost:9200/elastalert_status/elastalert_status?pretty&op_type=create' -d '{
  "@timestamp": "2016-12-08T10:49:04.336001Z",
  "endtime": "2016-12-08T10:49:04.324309Z",
  "hits": 0,
  "matches": 0,
  "rule_name": "docker-test",
  "starttime": "2016-12-08T10:34:04.324309Z",
  "time_taken": 0.011664152145385742
}'
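For what it's worth, the localhost:9200 host in the traces above is printed by elasticsearch-py's trace logger, which substitutes a default host so the logged curl commands can be shared without leaking the real cluster address. A minimal sketch of replaying a traced command against the actual es_host from this issue (the replay_command helper is hypothetical, not part of ElastAlert):

```python
# Hypothetical helper: swap the trace logger's default host for the
# real es_host/es_port before replaying a logged curl command by hand.
def replay_command(traced: str, es_host: str, es_port: int = 9200) -> str:
    return traced.replace("localhost:9200", f"{es_host}:{es_port}")

print(replay_command(
    "curl -XGET 'http://localhost:9200/mos-*/_search?pretty&size=1000'",
    "192.168.10.141",
))
# -> curl -XGET 'http://192.168.10.141:9200/mos-*/_search?pretty&size=1000'
```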
The ElastAlert configs are:
config.yaml
# This is the folder that contains the rule yaml files
# Any .yaml file will be loaded as a rule
rules_folder: /home/elastalert/rules
# How often ElastAlert will query Elasticsearch
# The unit can be anything from weeks to seconds
run_every:
  seconds: 5
# ElastAlert will buffer results from the most recent
# period of time, in case some log sources are not in real time
buffer_time:
  minutes: 15
# The Elasticsearch hostname for metadata writeback
# Note that every rule can have its own Elasticsearch host
es_host: 192.168.10.141
# The Elasticsearch port
es_port: 9200
# Optional URL prefix for Elasticsearch
#es_url_prefix: elasticsearch
# Connect with TLS to Elasticsearch
#use_ssl: True
# Verify TLS certificates
#verify_certs: True
# GET request with body is the default option for Elasticsearch.
# If it fails for some reason, you can pass 'GET', 'POST' or 'source'.
# See http://elasticsearch-py.readthedocs.io/en/master/connection.html?highlight=send_get_body_as#transport
# for details
#es_send_get_body_as: GET
# Optional basic-auth username and password for Elasticsearch
#es_username: someusername
#es_password: somepassword
# The index on es_host which is used for metadata storage
# This can be an unmapped index, but it is recommended that you run
# elastalert-create-index to set a mapping
writeback_index: elastalert_status
# If an alert fails for some reason, ElastAlert will retry
# sending the alert until this time period has elapsed
alert_time_limit:
  days: 2
docker-test.yaml
# Alert when the rate of events exceeds a threshold
# (Optional)
# Elasticsearch host
es_host: 192.168.10.141
# (Optional)
# Elasticsearch port
es_port: 9200
# (OptionaL) Connect with SSL to Elasticsearch
#use_ssl: True
# (Optional) basic-auth username and password for Elasticsearch
#es_username: someusername
#es_password: somepassword
# (Required)
# Rule name, must be unique
name: docker-test
# (Required)
# Type of alert.
# the frequency rule type alerts when num_events events occur within timeframe time
type: frequency
# (Required)
# Index to search, wildcard supported
index: mos-*
# (Required, frequency specific)
# Alert when this many documents matching the query occur within a timeframe
num_events: 1
# (Required, frequency specific)
# num_events must occur within this amount of time to trigger an alert
timeframe:
  hours: 5
#kibana_url: "http://127.0.0.1:5601/app/kibana"
#use_kibana4_dashboard: "http://127.0.0.1:5601/app/kibana#/dashboard/New-Dashboard"
max_query_size: 1000
# (Required)
# A list of Elasticsearch filters used to find events
# These filters are joined with AND and nested in a filtered query
# For more info: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl.html
filter:
- term:
    _type: "mos"
# (Required)
# The alert is used when a match is found
alert:
- "slack"
# (required, slack specific)
# the Slack webhook URL(s) to send alerts to
slack_webhook_url:
- "https://hooks.slack.com/services/XXXXX"
slack_username_override: "bot"
The data stored in Elasticsearch is of the following form:
{
  "_index": "mos-2016.12.08",
  "_type": "mos",
  "_id": "AVjeVPP2hn-NJ2Pp7FMA",
  "_version": 1,
  "_score": 1,
  "_source": {
    "mos": 1,
    "@timestamp": "2016-12-08T12:10:24.3N"
  }
}
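One thing worth checking in the sample document above: the @timestamp value "2016-12-08T12:10:24.3N" is not valid ISO8601 (the ".3N" fractional part does not parse), unlike the timestamps ElastAlert itself emits, so a range filter on @timestamp may never match such documents. A quick sketch to spot this (parses_as_iso8601 is a hypothetical helper; the accepted layouts are an assumption based on the timestamps seen elsewhere in this issue):

```python
from datetime import datetime

# Layouts matching the well-formed timestamps seen in this issue,
# e.g. "2016-12-08T12:11:58.403179Z".
FORMATS = (
    "%Y-%m-%dT%H:%M:%S.%fZ",
    "%Y-%m-%dT%H:%M:%SZ",
)

def parses_as_iso8601(ts: str) -> bool:
    """Return True if ts parses with one of the expected layouts."""
    for fmt in FORMATS:
        try:
            datetime.strptime(ts, fmt)
            return True
        except ValueError:
            continue
    return False

print(parses_as_iso8601("2016-12-08T12:10:24.3N"))       # False: ".3N" is invalid
print(parses_as_iso8601("2016-12-08T12:11:58.403179Z"))  # True
```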
Is anybody facing a similar issue, or can anyone help with this?
Issue Analytics
- State:
- Created 7 years ago
- Comments:6 (4 by maintainers)
This caught me out too. Completely crazy that debug logs should intentionally mislead the user in this way.
@Qmando yes, I'm referring to localhost:9200. It probably makes sense to mark it as REDACTED rather than filling in default values, which only causes confusion.
IMO it would be more accurate to patch it to say REDACTED (which is still in line with the original design philosophy of being able to share the logs).