
Enable configuring ambassador STDOUT logs as JSON format


Please describe your use case / problem.

After recently setting the envoy_log_type: json option as part of enabling Datadog APM tracing, most access logs are now written as JSON. However, it appears that many other log messages are still written to STDOUT as plain text.

This was unexpected and makes it significantly more complicated to define a log pipeline configuration which can parse all the different log message formats.
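
For reference, a minimal sketch of the Module configuration referred to above (the metadata name and namespace here are illustrative assumptions; the relevant setting is envoy_log_type under spec.config):

apiVersion: getambassador.io/v2
kind: Module
metadata:
  name: ambassador
  namespace: ambassador
spec:
  config:
    # Emit Envoy access logs as JSON instead of plain text
    envoy_log_type: json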

Describe the solution you’d like

A single option for configuring the log format of these Ambassador pods, with the ability to configure all pod logs as JSON, would be preferred.

Describe alternatives you’ve considered

As a workaround, I will experiment with additional logging configuration to support the multiple patterns coming from a single pod.
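
One possible shape for that additional configuration, sketched here as an assumption for anyone shipping these logs with the Datadog Agent’s Kubernetes Autodiscovery (the container name ambassador is a placeholder):

metadata:
  annotations:
    # Tag the container's stdout/stderr logs with a source and service,
    # so a custom Datadog pipeline filtered on source:ambassador can apply
    # per-format parsing rules to the mixed text and JSON lines.
    ad.datadoghq.com/ambassador.logs: '[{"source": "ambassador", "service": "ambassador"}]'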

Additional context

Here are some examples of log lines that I was surprised to see come through as text rather than JSON after configuring envoy_log_type: json. They were intermingled with the JSON access log messages and didn’t parse.

2020-10-09 14:37:24 diagd 1.5.1 [P383TThreadPoolExecutor-0_8] INFO: 9B8F4961-FB07-4012-A5E9-484B8A520C3D: <ipaddress> "GET /metrics" 124ms 200 success
[libprotobuf ERROR external/com_google_protobuf/src/google/protobuf/wire_format_lite.cc:578] String field 'google.protobuf.Value.string_value' contains invalid UTF-8 data when serializing a protocol buffer. Use the 'bytes' type if you intend to send raw bytes.
E1008 21:09:38.924448     390 reflector.go:123] k8s.io/client-go@v0.0.0-20191016111102-bec269661e48/tools/cache/reflector.go:96: Failed to list <nil>: Get https://<ipaddress>:443/apis/getambassador.io/v2/namespaces/<namespace>/kubernetesendpointresolvers?limit=500&resourceVersion=0: dial tcp <ipaddress>:443: connect: no route to host
Trace[934878924]: [30.000552118s] [30.000552118s] END

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Reactions: 12
  • Comments: 10 (2 by maintainers)

Top GitHub Comments

3 reactions
jtyberg commented, Apr 19, 2021

FWIW, we send all our logs to Datadog. If the logs are NOT JSON formatted, then Datadog treats them as level=ERROR, and it is not ideal to see thousands of error logs from Ambassador every hour.

We have done the following to help mitigate a lot of the noisy “error” logs from Ambassador.

  • Use envoy_log_type: json in our ambassador module configuration
  • Create a custom Datadog log pipeline for the diagd log messages, using a grok parser; the rule below did the trick for us (a rough breakdown of the fields it extracts follows after it).
aes.diagd %{date("yyyy-MM-dd HH:mm:ss"):date} diagd %{regex("[0-9.]*"):version} \[%{data:process_thread}\] %{word:level}: %{data:message}
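
For illustration, applied to the diagd example from the issue description above, that rule should extract roughly the following attributes (my reading of the pattern, not output captured from Datadog):

date: 2020-10-09 14:37:24
version: 1.5.1
process_thread: P383TThreadPoolExecutor-0_8
level: INFO
message: 9B8F4961-FB07-4012-A5E9-484B8A520C3D: <ipaddress> "GET /metrics" 124ms 200 success

A status remapper on the extracted level attribute should then stop Datadog from defaulting these lines to ERROR.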

We still have quite a few “snapshot” activity logs that we would also like to format as JSON. Creating a custom log pipeline in Datadog would not be ideal, because there is not much of a pattern for these messages and there is no log level in them. (And we’d rather treat the disease, not the symptom).

Examples:

PID 160, 0.06Gi (exited): envoy --config-path /ambassador/snapshots/econf-tmp.json --mode validate 

PID 37, 0.07Gi: /usr/bin/python /usr/bin/diagd /ambassador/snapshots /ambassador/bootstrap-ads.json /ambassador/envoy/envoy.json --notices /ambassador/notices.json --port 8004 --kick kill -HUP 1

PID 19, 0.14Gi: /ambassador/sidecars/amb-sidecar

2021/04/19 18:40:13 Memory Usage 1.63Gi (41%)

2021/04/19 18:40:01 Taking K8sSecret snapshot.SecretRef{Namespace:"ambassador", Name:"fallback-self-signed-cert"}

1 reaction
esmet commented, May 20, 2021

I worked on https://github.com/datawire/ambassador/pull/3437 recently and was able to address JSON logging for most of the Ambassador control plane code. Unfortunately, it still doesn’t cover the logging emitted by our Kubernetes client-go dependencies and it probably won’t cover the protobuf CPP warning/error messages. I think those might be coming from within Envoy… I’d need to dig deeper.

This change should, however, improve life significantly for those who want “almost all JSON logs” and should reduce the noise to a manageable minimum until we can track down and address the remaining rogue logs.
