Kafka binder logs consumer config on every health check
I noticed that when my application integrates the Spring Cloud Stream Kafka binder, it starts printing logs like the following over and over:
2017-10-30 14:41:26.307 INFO [zipkin,cdf7f5a0035610fa,cdf7f5a0035610fa,false] 1 --- [nio-9411-exec-4] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [kafka.nada.mobike.io:9092]
check.crcs = true
client.id = consumer-1780
connections.max.idle.ms = 540000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id =
heartbeat.interval.ms = 3000
interceptor.classes = null
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
2017-10-30 14:41:26.307 WARN [zipkin,cdf7f5a0035610fa,cdf7f5a0035610fa,false] 1 --- [nio-9411-exec-4] o.a.k.clients.consumer.ConsumerConfig : The configuration 'value.serializer' was supplied but isn't a known config.
2017-10-30 14:41:26.307 WARN [zipkin,cdf7f5a0035610fa,cdf7f5a0035610fa,false] 1 --- [nio-9411-exec-4] o.a.k.clients.consumer.ConsumerConfig : The configuration 'key.serializer' was supplied but isn't a known config.
2017-10-30 14:41:26.307 INFO [zipkin,cdf7f5a0035610fa,cdf7f5a0035610fa,false] 1 --- [nio-9411-exec-4] o.a.kafka.common.utils.AppInfoParser : Kafka version : 0.10.1.1
2017-10-30 14:41:26.307 INFO [zipkin,cdf7f5a0035610fa,cdf7f5a0035610fa,false] 1 --- [nio-9411-exec-4] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : f10ef2720b03b247
This block gets printed every time a health check is triggered. In our environment, health checks run every 5 seconds, so the log files grow very large very quickly.
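If the sheer log volume is the immediate pain point, a possible stopgap is to raise the level of the offending loggers through Spring Boot's standard logging.level properties. A minimal sketch for application.properties; the logger names come straight from the output above:

logging.level.org.apache.kafka.clients.consumer.ConsumerConfig=WARN
logging.level.org.apache.kafka.common.utils.AppInfoParser=WARN

This only suppresses the INFO config dump; a consumer is still created and closed on every probe.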
Code snippet from org.springframework.cloud.stream.binder.kafka.KafkaBinderHealthIndicator:
@Override
public Health health() {
    try (Consumer<?, ?> metadataConsumer = consumerFactory.createConsumer()) { // <===== this line triggers the INFO-level logging on every health check
        Set<String> downMessages = new HashSet<>();
        ...
        ...
    }
    catch (Exception e) {
        return Health.down(e).build();
    }
}
The code above seems to be the culprit. Why does the health check have to create a brand-new consumer on every invocation anyway?
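For illustration, here is a minimal sketch of the alternative: lazily create one metadata consumer and reuse it across checks, so the ConsumerConfig dump appears only once. Class and field names are illustrative, not the binder's actual code:

import org.apache.kafka.clients.consumer.Consumer;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.kafka.core.ConsumerFactory;

public class CachedConsumerHealthIndicator implements HealthIndicator {

    private final ConsumerFactory<?, ?> consumerFactory;

    // Created on first use and reused afterwards, so the ConsumerConfig values
    // are logged once instead of on every probe.
    private Consumer<?, ?> metadataConsumer;

    public CachedConsumerHealthIndicator(ConsumerFactory<?, ?> consumerFactory) {
        this.consumerFactory = consumerFactory;
    }

    private synchronized Consumer<?, ?> metadataConsumer() {
        if (this.metadataConsumer == null) {
            this.metadataConsumer = this.consumerFactory.createConsumer();
        }
        return this.metadataConsumer;
    }

    @Override
    public Health health() {
        try {
            Consumer<?, ?> consumer = metadataConsumer();
            // KafkaConsumer is not thread-safe, so serialize access to it.
            synchronized (consumer) {
                // Probe broker connectivity here, e.g. consumer.partitionsFor(topic)
                // for each bound topic, collecting failures into down messages.
            }
            return Health.up().build();
        }
        catch (Exception e) {
            return Health.down(e).build();
        }
    }
}

Reusing a single consumer also avoids the per-probe connection setup and teardown, which matters when a platform polls health every few seconds. The fix referenced in the comments below appears to take a similar caching approach.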
By the way, I'm running this on a Zipkin server, which incorporates a collector that consumes a Kafka stream.
Top GitHub Comments
Looks like this is fixed via https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/commit/68811cad28b27024ee4b8f25e76dd57604a47ca3 ?
@iNikem sorry for the long delay in responding. @marbon87 Which version are you using? The latest snapshots (2.1.0.BUILD-SNAPSHOT) and 2.1.0.RC1 do not have this issue. Closing this for the time being; please re-open or create another issue with the version of Spring Cloud Stream that you are using.