Using es.index.read.missing.as.empty returns an empty result if one out of many indices is missing
If a user specifies multiple indices and one of them is not found, and the es.index.read.missing.as.empty
property is set to true, the entire connector returns an empty result. Instead, the connector should skip the missing index and continue processing the rest of the provided indices.
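As a sketch of the scenario described above (the node address, index names, and Spark session setup are hypothetical; only the option keys and the `EsSparkSQL.esDF` call come from the issue), reading across multiple indices with the missing-as-empty flag might be configured like this:

```scala
// Sketch only: assumes a running Spark session and elasticsearch-hadoop
// on the classpath. Index names and option values are hypothetical.
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.elasticsearch.spark.sql.EsSparkSQL

val session = SparkSession.builder().appName("es-multi-index").getOrCreate()

val elasticSearchOptions = Map(
  "es.nodes"                       -> "localhost:9200",
  // Expected: a missing index contributes an empty result for that index only.
  "es.index.read.missing.as.empty" -> "true",
  // Two indices requested; suppose the second one does not exist.
  "es.resource.read"               -> "logs-2023.01,logs-2023.02"
)

// Behavior reported in the issue: the whole DataFrame comes back empty,
// rather than containing the documents from the index that does exist.
val df: DataFrame = EsSparkSQL.esDF(session, elasticSearchOptions)
```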
Issue Analytics
- State:
- Created 6 years ago
- Comments:7 (3 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Similar issue: fetching data using spark-sql throws an exception when an index does not exist, instead of continuing with the next index, when es.index.read.missing.as.empty is set to true.

ES options:

Querying ES from Spark:

```
val log: DataFrame = EsSparkSQL.esDF(session, elasticSearchOptions)
```

Forcing evaluation:

```
log.cache()
```

This inconsistently runs into:

```
org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: no such index
```

I assume this happens when we get unlucky and the index gets deleted between Spark querying ES and actually fetching the data. Note: this is unrelated to whether the queried data is on the deleted index or not (it never was).
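The behavior both comments ask for amounts to filtering the requested index list down to the indices that actually exist before reading, rather than failing (or returning nothing) on the first miss. A minimal, self-contained sketch of that selection logic, independent of Elasticsearch (the index names are hypothetical; a real implementation would obtain the existing set from the cluster):

```scala
// Sketch of the desired "skip missing indices" behavior: given the
// requested indices and the set known to exist, keep the readable ones
// and report what was skipped.
object IndexSelection {
  def partitionIndices(
      requested: Seq[String],
      existing: Set[String]
  ): (Seq[String], Seq[String]) =
    // partition returns (indices that exist, indices that are missing)
    requested.partition(existing.contains)
}

// Hypothetical example: one of three requested indices is missing.
val requested = Seq("logs-2023.01", "logs-2023.02", "logs-2023.03")
val existing  = Set("logs-2023.01", "logs-2023.03")

val (readable, missing) = IndexSelection.partitionIndices(requested, existing)
// readable == Seq("logs-2023.01", "logs-2023.03")
// missing  == Seq("logs-2023.02")
```

With this split, the connector could read only `readable` and optionally log `missing`, instead of aborting the whole job.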
Yeah the pull request is at #1997.