Enable configuring ambassador STDOUT logs as JSON format
Please describe your use case / problem.
After recently setting the `envoy_log_type: json` option as part of enabling Datadog APM Tracing, most access logs are written as JSON. It appears, however, that many log messages are still written to STDOUT as plain text.
This was unexpected, and it makes it significantly more complicated to define a log pipeline configuration that can parse all the different log message formats.
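For reference, a minimal sketch of the configuration in question, using the standard Ambassador `Module` resource (the metadata values here are illustrative):

```yaml
# Ambassador Module enabling JSON-formatted Envoy access logs.
apiVersion: getambassador.io/v2
kind: Module
metadata:
  name: ambassador
  namespace: ambassador
spec:
  config:
    envoy_log_type: json
```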
Describe the solution you’d like
A single option for configuring the log format of the Ambassador pods would be preferred, with the ability to configure all pod logs as JSON.
Describe alternatives you’ve considered
As a workaround I will experiment with additional logging configuration to support the multiple patterns coming from a single pod (see the parsing sketch after the example log lines below).
Additional context
Here are some examples of log lines which, to my surprise, came through as text rather than JSON after configuring `envoy_log_type: json`. They were intermingled with the JSON access log messages and didn’t parse.
2020-10-09 14:37:24 diagd 1.5.1 [P383TThreadPoolExecutor-0_8] INFO: 9B8F4961-FB07-4012-A5E9-484B8A520C3D: <ipaddress> "GET /metrics" 124ms 200 success
[libprotobuf ERROR external/com_google_protobuf/src/google/protobuf/wire_format_lite.cc:578] String field 'google.protobuf.Value.string_value' contains invalid UTF-8 data when serializing a protocol buffer. Use the 'bytes' type if you intend to send raw bytes.
E1008 21:09:38.924448 390 reflector.go:123] k8s.io/client-go@v0.0.0-20191016111102-bec269661e48/tools/cache/reflector.go:96: Failed to list <nil>: Get https://<ipaddress>:443/apis/getambassador.io/v2/namespaces/<namespace>/kubernetesendpointresolvers?limit=500&resourceVersion=0: dial tcp <ipaddress>:443: connect: no route to host
Trace[934878924]: [30.000552118s] [30.000552118s] END
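As a hedged sketch of the workaround mentioned above, a Datadog grok parsing rule along these lines could match the diagd text format in the first example. The rule name and attribute names are illustrative, and the pattern would need verifying against real log samples:

```
# Illustrative Datadog grok rule for the diagd text lines above.
# Attribute names (timestamp, version, thread, level, message) are assumptions.
diagd_line %{date("yyyy-MM-dd HH:mm:ss"):timestamp} diagd %{notSpace:version} \[%{notSpace:thread}\] %{word:level}: %{data:message}
```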
Top GitHub Comments
FWIW, we send all our logs to Datadog. If the logs are NOT JSON formatted, then Datadog treats them as `level=ERROR`, and it is not ideal to see thousands of error logs from Ambassador every hour.
To help mitigate a lot of the noisy “error” logs from Ambassador, we set `envoy_log_type: json` in our Ambassador Module configuration.
We still have quite a few “snapshot” activity logs that we would also like to format as JSON. Creating a custom log pipeline in Datadog would not be ideal, because there is not much of a pattern for these messages and there is no log level in them. (And we’d rather treat the disease, not the symptom.)
Examples:
I worked on https://github.com/datawire/ambassador/pull/3437 recently and was able to address JSON logging for most of the Ambassador control plane code. Unfortunately, it still doesn’t cover the logging emitted by our Kubernetes client-go dependencies and it probably won’t cover the protobuf CPP warning/error messages. I think those might be coming from within Envoy… I’d need to dig deeper.
This change should, however, improve life significantly for those who want “almost all JSON logs”, and it should reduce the noise to a manageable minimum until we can track down and address the remaining rogue logs.
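For anyone wanting to opt in once that PR ships, here is a minimal sketch of enabling JSON control-plane logs from the Deployment side, assuming the `AMBASSADOR_JSON_LOGGING` environment variable based on the linked PR (the variable name, image tag, and labels are assumptions, not confirmed in this thread):

```yaml
# Hedged sketch: stripped-down Ambassador Deployment opting in to
# JSON-formatted control-plane logs via AMBASSADOR_JSON_LOGGING
# (assumed from the PR above; check the release notes before relying on it).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ambassador
spec:
  replicas: 1
  selector:
    matchLabels:
      service: ambassador
  template:
    metadata:
      labels:
        service: ambassador
    spec:
      containers:
        - name: ambassador
          image: docker.io/datawire/ambassador:1.13.0  # assumed release containing the PR
          env:
            - name: AMBASSADOR_JSON_LOGGING
              value: "true"
```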