Question: Capture specific logs for a given container?
Hi all,
Following this example: https://github.com/splunk/splunk-connect-for-kubernetes/blob/develop/helm-chart/splunk-kubernetes-logging/examples/use_logs.yaml
I have some logs (log1 and log2) that live in a container at the path /opt/containername/. I’ve SSH’d into the container and verified the path to these logs.
I want to tail these logs. In my values.yaml I’ve defined this snippet:
```yaml
logs:
  container-name:
    from:
      file:
        path: /opt/containerName/log1.log
```
Retrieving the logs of the splunk-connect pod that lives on the same node as the container:

```
2019-12-17 21:35:16 +0000 [warn]: #0 /opt/containername/log1.log not found. Continuing without tailing it.
```
Any guidance would be appreciated.
Thanks! AJ
Issue Analytics
- State:
- Created 4 years ago
- Comments: 5 (2 by maintainers)
In this scenario, where the logs are not sent to stdout/stderr, you would use a “sidecar” approach.
https://kubernetes.io/docs/concepts/cluster-administration/logging/#sidecar-container-with-a-logging-agent
Splunk Connect for Kubernetes is designed to be a Node Agent collecting all logs on a node.
https://kubernetes.io/docs/concepts/cluster-administration/logging/#using-a-node-logging-agent
The node agent usually covers 90% of all the things you need to collect, then you can use the sidecar or mount the volumes you are looking to monitor into the node agent pod.
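As a sketch of the volume-mount option (every name here is a placeholder I made up, not something from the original thread): the application pod writes its file logs to a hostPath under /var/log, which the node agent’s DaemonSet typically already mounts from the host. Verify that your DaemonSet actually mounts the host path you choose before relying on this.

```yaml
# Hypothetical app pod: redirect the in-container log directory to a hostPath
# so the files become visible on the node's filesystem.
apiVersion: v1
kind: Pod
metadata:
  name: my-app                        # placeholder name
spec:
  containers:
  - name: container-name
    image: my-app:latest              # placeholder image
    volumeMounts:
    - name: app-logs
      mountPath: /opt/containerName   # the app keeps writing log1.log here
  volumes:
  - name: app-logs
    hostPath:
      path: /var/log/my-app           # now readable from the host
      type: DirectoryOrCreate
```

With this in place, the chart’s `logs:` stanza would point at the host-side path (e.g. /var/log/my-app/log1.log) rather than the in-container one.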
To get started, I recommend you try something like this example. It’s a Docker example, but it applies to Kubernetes as well if you translate it to Kubernetes YAML:
https://github.com/matthewmodestino/container_workshop/blob/master/universalforwarder/sidecar/sidecar.yaml
This example demonstrates the concept you are after: the Splunk universal forwarder (UF) monitors a directory from another container by using volume mounts. You could use fluentd in place of the UF here as well. See if this helps…
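The concept in that workshop translates roughly to the manifest below — a minimal sketch of the sidecar pattern from the Kubernetes docs, with placeholder image names, not the workshop file itself:

```yaml
# Sidecar pattern: the app and a log-forwarder share an emptyDir volume,
# so the forwarder can tail files the app writes.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar
spec:
  containers:
  - name: container-name
    image: my-app:latest                     # placeholder app image
    volumeMounts:
    - name: app-logs
      mountPath: /opt/containerName          # app writes log1.log / log2.log here
  - name: log-forwarder
    image: splunk/universalforwarder:latest  # or a fluentd image instead
    volumeMounts:
    - name: app-logs
      mountPath: /opt/containerName          # forwarder tails the same files
      readOnly: true
  volumes:
  - name: app-logs
    emptyDir: {}
```

The forwarder container still needs its own inputs configuration (e.g. a UF inputs.conf or a fluentd tail source) pointed at /opt/containerName.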
Normally, you would have your application print logs to standard out, and the rest is handled automatically by the kubelet and Docker. When an application runs inside a container under Docker, Docker writes everything it prints to a file, i.e. /var/lib/docker/containers/&lt;container-id&gt;/&lt;container-id&gt;-json.log. When that container is created through Kubernetes, the kubelet also creates symlinks to these files under /var/log/pods/… and /var/log/containers/…. This connector then reads all of these container log files and sends them to Splunk.