Prometheus jmx_exporter is not working with Istio
Hello,
The jmx_exporter is not working with Istio. When Prometheus tries to scrape metrics, it gets HTTP 503 errors. After some investigation, we found that the HTTP error is thrown by Envoy (the Istio sidecar proxy embedded in the pod alongside the application). The Envoy log contains the error message below:
[C11] protocol error: http/1.1 protocol error: HPE_UNEXPECTED_CONTENT_LENGTH
"GET /metrics HTTP/1.1" 503 UC 0 57 48 - "-" "Prometheus/2.3.1"
Before forwarding a request to the app container, Envoy checks the HTTP syntax.
We tried to reproduce this issue with curl.
After looking in the Prometheus source code to find the request it sends, we found that the error appears when the Accept-Encoding header is set to gzip.
We launched jmx_prometheus_httpserver locally to debug:
$ curl -v http://localhost:8080/metrics -H "Accept-Encoding: gzip"
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET /metrics HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.54.0
> Accept: */*
> Accept-Encoding: gzip
>
< HTTP/1.1 200 OK
< Date: Thu, 09 Aug 2018 09:15:56 GMT
< Content-encoding: gzip
< Content-length: 627
< Content-type: text/plain; version=0.0.4; charset=utf-8
< Transfer-encoding: chunked
According to the Envoy error message, the issue comes from the Content-Length header.
According to the HTTP/1.1 spec, in the Chunked Transfer Coding section, when Transfer-Encoding is set to chunked, the Content-Length header must not be sent (https://tools.ietf.org/html/rfc7230#section-4.1.2). The response above violates this: it carries both Content-Length and Transfer-Encoding: chunked, which is what Envoy rejects.
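For reference, here is a minimal sketch of how a compliant gzip response could be produced with the JDK's built-in com.sun.net.httpserver API (which, if I understand correctly, is also what the exporter's standalone HTTP server is built on): compress the payload into a buffer first, then send it with a single accurate Content-Length and no chunked transfer coding. The class name and the placeholder metric are mine, not the exporter's actual code.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class CompliantMetricsServer {

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/metrics", new MetricsHandler());
        server.start();
    }

    static class MetricsHandler implements HttpHandler {
        @Override
        public void handle(HttpExchange exchange) throws IOException {
            // Placeholder payload; the real exporter would write the scraped metrics here.
            byte[] body = "# HELP dummy_metric A placeholder metric\ndummy_metric 1\n"
                    .getBytes(StandardCharsets.UTF_8);

            exchange.getResponseHeaders().set("Content-Type",
                    "text/plain; version=0.0.4; charset=utf-8");

            String acceptEncoding = exchange.getRequestHeaders().getFirst("Accept-Encoding");
            if (acceptEncoding != null && acceptEncoding.contains("gzip")) {
                // Compress into a buffer first so the exact compressed length is known
                // before the response headers are sent.
                ByteArrayOutputStream buffer = new ByteArrayOutputStream();
                try (GZIPOutputStream gzip = new GZIPOutputStream(buffer)) {
                    gzip.write(body);
                }
                body = buffer.toByteArray();
                exchange.getResponseHeaders().set("Content-Encoding", "gzip");
            }

            // A positive length here makes the JDK server emit a fixed Content-Length
            // and avoid Transfer-Encoding: chunked, so the two headers never conflict.
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        }
    }
}
```

With a fixed-length response like this, Envoy no longer sees the conflicting Content-Length / Transfer-Encoding: chunked pair that triggers HPE_UNEXPECTED_CONTENT_LENGTH.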
Any chance of a new jmx_exporter release soon that includes the upgrade to the new client_java (0.6.0), so that this issue can be fixed?
I have since rebuilt the cluster and cannot say exactly which version of Istio it was. It was one of [1.1.7, 1.1.10, 1.1.13].