Support logging configuration in `kubectl cloudflow deploy` and `kubectl cloudflow configure`
Currently it is hard to configure logging for Cloudflow streamlets.
You have to add a logback.xml per sub-project in src/main/resources to get the file on the classpath, which works for Akka streamlets.
For Flink streamlets, overriding the default logging does not work at the moment. Log4j and Logback configuration is set in the task and job managers through:
-Dlog4j.configuration=file:/opt/flink/conf/log4j-console.properties
-Dlogback.configurationFile=file:/opt/flink/conf/logback-console.xml
These files are packaged with Flink by default. Spark, which uses log4j, falls back to defaults from a jar file bundled with Spark if no log4j.properties is found on the classpath. So for Spark you need to add a log4j.properties file to the sub-project's src/main/resources.
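For illustration, a minimal log4j.properties along these lines, placed in the sub-project's src/main/resources, would be picked up by Spark from the classpath. The appenders and log levels shown here are only examples, not anything Cloudflow prescribes:

```properties
# Minimal example; adjust appenders and levels to taste.
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
# Quiet down some of Spark's more verbose loggers.
log4j.logger.org.apache.spark=WARN
```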
All of this is very cumbersome to configure correctly.
It would be better if logging configuration could be provided through kubectl cloudflow deploy and kubectl cloudflow configure, allowing users to provide log4j and logback files.
See https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/issues/753 for an idea on how to get this to work in Spark. The approach will probably work in general (a sketch follows the list below):
- Put the log config files in ConfigMaps.
- Mount the ConfigMaps on the pods under well-known names.
- Add Java options to pass the log config through as system properties.
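As a rough sketch of that approach, assuming the CLI creates a ConfigMap from the user-supplied files and the operator mounts it on the streamlet pods, the resources could look like the following. All names, the mount path, and the JAVA_TOOL_OPTIONS wiring are illustrative assumptions, not existing Cloudflow behavior:

```yaml
# Hypothetical ConfigMap holding the user-supplied logging configuration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-streamlet-log-config   # illustrative name
data:
  logback.xml: |
    <configuration>
      <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
          <pattern>%d{HH:mm:ss} %-5level %logger{36} - %msg%n</pattern>
        </encoder>
      </appender>
      <root level="INFO">
        <appender-ref ref="STDOUT"/>
      </root>
    </configuration>
  log4j.properties: |
    log4j.rootCategory=INFO, console
    log4j.appender.console=org.apache.log4j.ConsoleAppender
    log4j.appender.console.layout=org.apache.log4j.PatternLayout
    log4j.appender.console.layout.ConversionPattern=%d %p %c{1}: %m%n
---
# Relevant fragment of a streamlet pod spec: mount the ConfigMap at a
# well-known path and point the JVM at the files via system properties.
apiVersion: v1
kind: Pod
metadata:
  name: my-streamlet-pod          # illustrative name
spec:
  containers:
    - name: streamlet
      image: my-streamlet-image   # illustrative image
      env:
        # JAVA_TOOL_OPTIONS is read by the JVM at startup, so the logging
        # system properties reach the streamlet without changing its
        # container command line.
        - name: JAVA_TOOL_OPTIONS
          value: >-
            -Dlogback.configurationFile=/etc/cloudflow/logging/logback.xml
            -Dlog4j.configuration=file:/etc/cloudflow/logging/log4j.properties
      volumeMounts:
        - name: log-config
          mountPath: /etc/cloudflow/logging
          readOnly: true
  volumes:
    - name: log-config
      configMap:
        name: my-streamlet-log-config
```

As noted in the comments below, the Java-options step may need runtime-specific handling, especially given Spark's dependency on log4j.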
Top GitHub Comments
Thanks for the feedback @vkorenev! The problem has been fixed here: https://github.com/lightbend/cloudflow/commit/42a96fb39d5dbc374ade14693db5ca900ae94063 but is not yet available in the current documentation.

@franciscolopezsancho yes, you can first tackle being able to change logging through kubectl cloudflow, with the deploy and configure commands, even if it only supports a subset of streaming engines. It’s ok to do that iteratively.
Point 2 might not work with Spark’s dependency on log4j.