deadlock; recursive locking
What happened:
I am fairly new to Splunk. I deployed Splunk Connect for Kubernetes (1.4.0) in the cluster and see the error below in the agent that runs on the master node. Kube API server logs are not pushed to the Splunk server.
2020-03-23 08:31:30 +0000 [warn]: #0 dump an error event: error_class=ThreadError error="deadlock; recursive locking" location="/usr/share/gems/gems/fluent-plugin-concat-2.4.0/lib/fluent/plugin/filter_concat.rb:189:in `synchronize'" tag="tail.containers.var.log.containers.kube-apiserver-k8s1m_kube-system_kube-apiserver-f71d1b0e611b1f82d45637a2aaae75b5a0849b966bab165f8fa3078194b55b1a.log" time=2020-03-23 08:31:25.150968430 +0000 record={"log"=>"I0323 08:31:25.150846 1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io\n", "stream"=>"stderr", "source"=>"/var/log/containers/kube-apiserver-k8s1m_kube-system_kube-apiserver-f71d1b0e611b1f82d45637a2aaae75b5a0849b966bab165f8fa3078194b55b1a.log"}
2020-03-23 08:31:30 +0000 [info]: #0 Timeout flush: tail.containers.var.log.containers.kube-apiserver-k8s1m_kube-system_kube-apiserver-f71d1b0e611b1f82d45637a2aaae75b5a0849b966bab165f8fa3078194b55b1a.log:stderr
What you expected to happen:
API server logs get pushed to the Splunk server.
Anything else we need to know?:
Application logs from other containers are being pushed to the Splunk server.
Environment:
- Kubernetes version (use kubectl version): 1.15.4
- Ruby version (use ruby --version): ruby 2.5.5p157
- OS (e.g. cat /etc/os-release): Ubuntu 18.04.2 LTS
- Splunk version: 8.0.1
- SCK version: 1.4.0
I was able to work around this by following the thread I posted above over on the concat repo.
I started by removing all the shipped concat filters from the logging and umbrella chart values.yaml (these should be optional anyway), then updated the configmap sources that we need concat for to point to @label CONCAT (the containers and files sources currently allow you to set concat). Then I moved the concat filter outside the @SPLUNK label in output.conf; a minimal sketch of the resulting routing is shown below.
Because not all logs need the same concat settings (i.e. separator, etc. - in the example above my pod needs separator "\n", some don't), I believe we need to expose more multiline settings in the Helm chart instead of rendering all concat filters with the same settings block.
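For reference, a minimal sketch of that routing, assuming the CONCAT label approach described above. The paths, tag patterns, regexes, and timings here are illustrative and the real SCK configmaps differ; the point is only the shape of the change: the tail source sends events to a dedicated label, concat runs there, and a relabel match hands the events on to @SPLUNK.

# source.containers.conf (sketch) - route tailed container logs to the CONCAT label
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/splunk-fluentd-containers.log.pos
  tag tail.containers.*
  @label @CONCAT
  <parse>
    @type json
    time_key time
    time_format %Y-%m-%dT%H:%M:%S.%NZ
  </parse>
</source>

# output.conf (sketch) - concat happens inside @CONCAT, then events are relabeled to @SPLUNK
<label @CONCAT>
  <filter tail.containers.var.log.containers.kube-apiserver*.log>
    @type concat
    key log
    separator "\n"
    stream_identity_key stream
    multiline_start_regexp /^[IWEF]\d{4}/   # klog lines such as "I0323 08:31:25..."
    flush_interval 5s
    timeout_label @SPLUNK                   # timed-out events are emitted into @SPLUNK instead of back into this label
  </filter>
  <match **>
    @type relabel
    @label @SPLUNK
  </match>
</label>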
So what we need to solve this:
- source.containers.conf & source.files.conf to point to a new label (e.g. CONCAT) which contains the multiline logic in output.conf
- expose separator, as not all concat rules need the same treatment

@szymonpk PR is in for review. Once the team has a chance to take a look I will add another to expose the "separator" option… and will look for any others we think we need to make multiline logs pretty.
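To illustrate why a single shared settings block does not fit every source, here is a hypothetical expansion of the same CONCAT label with two filters. The my-java-app source, its tag, and both regexes are made up; only the differing separator values matter.

<label @CONCAT>
  # klog-style kube-apiserver output: keep a newline between the joined lines
  <filter tail.containers.var.log.containers.kube-apiserver*.log>
    @type concat
    key log
    separator "\n"
    multiline_start_regexp /^[IWEF]\d{4}/
    flush_interval 5s
    timeout_label @SPLUNK
  </filter>

  # hypothetical Java app: its multi-line stack traces are joined with no added separator
  <filter tail.containers.var.log.containers.my-java-app*.log>
    @type concat
    key log
    separator ""
    multiline_start_regexp /^\d{4}-\d{2}-\d{2}/   # a new event starts with a date
    flush_interval 5s
    timeout_label @SPLUNK
  </filter>

  <match **>
    @type relabel
    @label @SPLUNK
  </match>
</label>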