Spring Cloud Sleuth not propagating b3 traces from kafka records via Kafka Stream Binder
See original GitHub issue.
Issue Description: Logging in the spring-cloud-stream kafka-streams-binder does not include trace IDs. It seems the b3 trace headers are not picked up from the record in the topic.
Steps to replicate
- Bring up environment
docker-compose up -d
./gradlew clean bootRun
In a separate console, tail the topic. If this step fails because the topic does not exist yet, perform step 2 once first to trigger topic auto-creation.
docker-compose exec broker kafka-console-consumer --bootstrap-server localhost:9092 --topic resources --property print.key=true --property print.headers=true --property print.timestamp=true
- Emit the following data into the resources topic (via HTTP -> Producer -> topic)
curl -X PUT 'localhost:25400/namespaces/a/resources/3' \
--header 'Content-Type: application/json' \
--data-raw '{
"name": "C"
}'
- Confirm in application logs that b3 traces are generated/propagated
2021-05-20 10:14:45.103 INFO [,bb4e9443b6bcd085,bb4e9443b6bcd085] 7211 --- [ctor-http-nio-3] c.e.p.kafka.sleuth.ResourcesController : Received request to map resource. namespace=a, resourceId=3, request=MapResourceRequest(name=C)
- Confirm from the record in the topic that the b3 trace headers are propagated
CreateTime:1621476885103 b3:bb4e9443b6bcd085-c06a30a6b67ae651-0,__TypeId__:com.example.poc.kafka.sleuth.Resource a:3 {"namespace":"a","id":"3","name":"C"}
- The Spring Cloud Stream Kafka Streams binder picks up the record, but not the b3 traces (a diagnostic sketch for double-checking the header on the record follows these steps)
2021-05-20 10:14:45.132 INFO [,,] 7211 --- [-StreamThread-1] c.e.poc.kafka.sleuth.ProcessingConfig : Received to materialize resource. a:3=Resource(namespace=a, id=3, name=C)
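As a way to double-check that last observation, the raw b3 header can be read directly off the record inside the topology using only the plain Kafka Streams API. The following is my own diagnostic sketch, not part of the reproduction project, and the traceHeaderLogger helper name is made up.

import org.apache.kafka.streams.kstream.ValueTransformerWithKey
import org.apache.kafka.streams.kstream.ValueTransformerWithKeySupplier
import org.apache.kafka.streams.processor.ProcessorContext

// Hypothetical diagnostic helper: prints the b3 header carried by each incoming record
// and passes the value through unchanged.
fun <K, V> traceHeaderLogger(): ValueTransformerWithKeySupplier<K, V, V> =
    object : ValueTransformerWithKeySupplier<K, V, V> {
        override fun get(): ValueTransformerWithKey<K, V, V> =
            object : ValueTransformerWithKey<K, V, V> {
                private lateinit var context: ProcessorContext

                override fun init(context: ProcessorContext) {
                    this.context = context
                }

                override fun transform(readOnlyKey: K, value: V): V {
                    // lastHeader returns null when the header is absent from the record
                    val b3 = context.headers().lastHeader("b3")?.value()?.let { String(it) }
                    println("b3 header on record $readOnlyKey: $b3")
                    return value
                }

                override fun close() {}
            }
    }

Hooked in before the materialization shown further below, for example it.transformValues(traceHeaderLogger()).toTable(...), it shows whether the header is physically present on the record even though the trace never reaches the logging MDC.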
extra["springCloudVersion"] = "2020.0.2"
...
dependencies {
...
implementation("org.springframework.cloud:spring-cloud-stream")
implementation("org.springframework.cloud:spring-cloud-stream-binder-kafka-streams")
implementation("org.springframework.cloud:spring-cloud-starter-sleuth")
...
}
spring.cloud.stream.function.definition=materializeResources
spring.cloud.stream.kafka.streams.binder.functions.materializeResources.application-id=resourceMaterializerProcessor
spring.cloud.stream.bindings.materializeResources-in-0.destination=resources
...
spring.sleuth.messaging.kafka.enabled=true
spring.sleuth.messaging.kafka.streams.enabled=true
@Bean
fun materializeResources(resourceSerde: JsonSerde<Resource>): Consumer<KStream<String, Resource>> {
    // Materializes the incoming resources stream into a key-value state store
    return Consumer {
        it.peek { key, value -> logger.info { "Received to materialize resource. $key=$value" } }
            .toTable(
                Materialized.`as`<String, Resource, KeyValueStore<Bytes, ByteArray>>("resources-store")
                    .withKeySerde(Serdes.StringSerde())
                    .withValueSerde(resourceSerde)
            )
    }
}
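The two spring.sleuth.messaging.kafka.* toggles above enable Sleuth's Kafka instrumentation, but the Kafka Streams binder builds and manages its own KafkaStreams client. One wiring sometimes suggested for this class of problem is to hand Brave's Kafka Streams instrumentation to the binder-managed StreamsBuilderFactoryBean. The sketch below is my own illustration, not part of the reproduction project; it assumes brave-instrumentation-kafka-streams is on the classpath and that spring-kafka's StreamsBuilderFactoryBeanConfigurer hook is available (the exact customizer type varies by version).

import brave.Tracing
import brave.kafka.streams.KafkaStreamsTracing
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.kafka.config.StreamsBuilderFactoryBeanConfigurer

@Configuration
class KafkaStreamsTracingConfig {

    // Brave's Kafka Streams instrumentation, built from the brave.Tracing bean that the
    // Sleuth Brave bridge exposes.
    @Bean
    fun kafkaStreamsTracing(tracing: Tracing): KafkaStreamsTracing =
        KafkaStreamsTracing.create(tracing)

    // Replace the binder's default client supplier with Brave's tracing-aware one so the
    // Streams consumers and producers participate in tracing.
    @Bean
    fun tracingClientSupplier(kafkaStreamsTracing: KafkaStreamsTracing): StreamsBuilderFactoryBeanConfigurer =
        StreamsBuilderFactoryBeanConfigurer { factoryBean ->
            factoryBean.setClientSupplier(kafkaStreamsTracing.kafkaClientSupplier())
        }
}

Whether the continued span then shows up in the processor's MDC still depends on where the trace is put in scope inside the topology, so treat this as a starting point rather than a confirmed fix.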
Sample
- Please clone https://github.com/richardkabiling/kafka-sleuth (for a Java/Maven sample, clone https://github.com/richardkabiling/kafka-sleuth-maven-java instead)
- Follow the instructions above

@marcingrzejszczak - I rewrote the sample in Java and Maven. Please try cloning this instead: https://github.com/richardkabiling/kafka-sleuth-maven-java
I also updated the issue. For the most part the behavior is still the same: the trace ID (bf864af90968acfb) appears in both the Controller logs and the kafka-console-consumer output, but not in the logs from ProcessingConfig.
Hi guys, I recently faced the same problem and the ReactorSleuth.tracedMono wrapper works fine for me. All the nested .flatMap calls log the provided trace. Here is an example, maybe it'll help you:
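Below is a minimal sketch of my own of what such usage might look like (not the commenter's original snippet). It assumes the ReactorSleuth.tracedMono(tracer, span, supplier) overload available in recent Sleuth 3.x releases, and mapResource/toResource are placeholder names; verify the exact signature against your Sleuth version.

import org.springframework.cloud.sleuth.Tracer
import org.springframework.cloud.sleuth.instrument.reactor.ReactorSleuth
import reactor.core.publisher.Mono

// Hypothetical helper around the controller's reactive pipeline: run it inside an explicit
// span so that nested flatMap calls log the trace. toResource is a placeholder.
fun mapResource(tracer: Tracer, request: MapResourceRequest): Mono<Resource> {
    val span = tracer.nextSpan().name("map-resource").start()
    return ReactorSleuth.tracedMono(tracer, span) {
        Mono.just(request)
            .flatMap { toResource(it) }
            .doFinally { span.end() }
    }
}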