Exception stack trace logs getting split into multiple lines (events) when seen on Splunk
What happened:
We have been migrating an application from EC2-based deployments to Kubernetes-based deployments, and the logs now get split when we view them in Splunk. For example, a Java exception stack trace shows up as multiple events in Splunk. This never happened when the application was hosted directly on EC2 (not in containers).
What you expected to happen:
The entire exception stack trace should arrive as a single event in Splunk (continuation lines merged back into one event, as sketched below).
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
This is on EKS with Docker.
Environment:
- Kubernetes version (use kubectl version): we are on k8s 1.14
- Ruby version (use ruby --version):
- OS (e.g.: cat /etc/os-release): Amazon Linux 2
- Splunk version:
- Others:
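Background on the symptom: a container runtime emits each line of stdout as its own log record, so every frame of a stack trace reaches fluentd (and then Splunk) as a separate event unless the lines are re-assembled first, typically with a concat-style filter keyed on a "start of a new log record" pattern. The snippet below is a minimal, standalone Ruby sketch of that merging logic only; the START_OF_EVENT regex and the merge_multiline helper are illustrative assumptions, not code from Splunk Connect for Kubernetes.

# Illustrative only: the line-merging idea a concat-style fluentd filter applies.
# A line that does not look like the start of a new log record (e.g. a
# "\tat com.example..." frame or a "Caused by: ..." line) is appended to the
# previous event instead of becoming its own event.
START_OF_EVENT = /\A(\d{4}-\d{2}-\d{2}|\{)/  # assumed: records start with a date or a JSON brace

def merge_multiline(lines)
  events = []
  lines.each do |line|
    if events.empty? || line.match?(START_OF_EVENT)
      events << line.dup               # start of a new event
    else
      events.last << "\n" << line      # stack-trace continuation line
    end
  end
  events
end

# A Java stack trace plus the following log line collapse into two events:
raw = [
  '2020-01-01 12:00:00 ERROR request failed',
  'java.lang.NullPointerException: boom',
  "\tat com.example.Foo.bar(Foo.java:42)",
  'Caused by: java.lang.IllegalStateException',
  '2020-01-01 12:00:01 INFO next request'
]
puts merge_multiline(raw).size  # => 2

In a real fluentd pipeline this merging is normally done by a concat-type filter (for example fluent-plugin-concat with its multiline_start_regexp option) rather than hand-rolled code.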
Issue Analytics
- State:
- Created: 3 years ago
- Reactions: 2
- Comments: 20 (2 by maintainers)
Top Results From Across the Web

Why are my multi-line events getting split? - Splunk Community
The default behavior of Splunk is to 1) split lines based on newlines and carriage returns and then 2) merge the lines (if...

splunk is reporting each line of stacktrace as a separate event
You need to update the props.conf settings for that sourcetype so the multiple lines of the traceback are merged into a single event...

Java: Collapsing multiline stack traces into a single log event ...
In this article, I will show you how to configure a Spring Boot application to collapse exceptions into a single line for both...

Splunk Is Reporting Each Line Of Stacktrace As A Separate ...
kubernetes based deployments. We are seeing the logs get split when we view on splunk. Example a java exception stack trace is multiple...

Splunk splitting multi-line log events by date - Server Fault
In the case of the second event, Splunk correctly splits this event in its entirety. For the third event, however, the date we...
Thanks @matthewmodestino. I think we have found our issue now. We also picked up the chat in Slack regarding the timestamp parsing:
Our time_format was slightly different from SCK's, though; we were using time_format %Y-%m-%dT%H:%M:%S.%NZ. So we switched to keeping the time field, removed time_format, and added a filter using the filter_with_time method with essentially:

time = Fluent::EventTime.from_time(Time.iso8601(record['time']))
record.delete('time')
It reduced CPU utilization quite a bit and improved our throughput.
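For anyone landing here later, below is a minimal sketch of how that filter could look as a custom fluentd filter plugin, assuming the record carries its ISO 8601 timestamp in a time key as described above; the plugin name extract_time and the error handling are illustrative, not the commenter's exact code.

# lib/fluent/plugin/filter_extract_time.rb (hypothetical plugin name)
require 'time'
require 'fluent/plugin/filter'

module Fluent
  module Plugin
    class ExtractTimeFilter < Filter
      Fluent::Plugin.register_filter('extract_time', self)

      # filter_with_time lets a filter replace the event time as well as the
      # record, so the record's own "time" field can be promoted to the
      # fluentd event time without a time_format parse in the source/parser.
      def filter_with_time(tag, time, record)
        if (ts = record['time'])
          time = Fluent::EventTime.from_time(Time.iso8601(ts))
          record.delete('time')
        end
        [time, record]
      rescue ArgumentError, TypeError
        # If "time" is missing or not valid ISO 8601, keep the original event time.
        [time, record]
      end
    end
  end
end

It would then be enabled with a <filter> section whose @type is extract_time.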
@rockb1017 have you seen that thread where the user was talking about the differences in performance when doing timestamp parsing?