Propagation from Kafka to RestTemplate broken
Hi all,
Here’s a minimal project to demonstrate an issue I’m having: https://github.com/timtebeek/opentracing-demo The project consists of three components: a web frontend, a Kafka backend and a REST backend. The frontend calls the REST backend and puts five messages on Kafka. The Kafka backend picks up the messages and posts to the REST backend as well.
The problem I’m having is in the @KafkaListener here: https://github.com/timtebeek/opentracing-demo/blob/master/demo-backend-kafka/src/main/java/demo/kafkabackend/DemoKafkaListener.java#L42
It correctly picks up and reports the trace ids coming in on Kafka, as evidenced by the logs here:
o.a.k.c.c.i.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=kafka-backend] Discovered group coordinator tim-XPS-15-9560:9092 (id: 2147483647 rack: null)
o.a.k.c.c.i.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=kafka-backend] Revoking previously assigned partitions []
o.s.k.l.KafkaMessageListenerContainer - partitions revoked: []
o.a.k.c.c.i.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=kafka-backend] (Re-)joining group
o.a.k.c.c.i.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=kafka-backend] Successfully joined group with generation 13
o.a.k.c.c.i.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=kafka-backend] Setting newly assigned partitions [opentracing-demo-0]
o.s.k.l.KafkaMessageListenerContainer - partitions assigned: [opentracing-demo-0]
i.j.i.reporters.LoggingReporter - Span reported: 3207769d3be0d5c5:f1854ae1278b9fe8:d01d70634dca85f4:1 - receive
i.j.i.reporters.LoggingReporter - Span reported: 3207769d3be0d5c5:d5e004576fe71064:ea888f922583ab16:1 - receive
i.j.i.reporters.LoggingReporter - Span reported: 3207769d3be0d5c5:f2e75089bb249f5d:c7e1fdc77e1e78cc:1 - receive
i.j.i.reporters.LoggingReporter - Span reported: 3207769d3be0d5c5:994361341b8ad741:5263077bece29bb5:1 - receive
i.j.i.reporters.LoggingReporter - Span reported: 3207769d3be0d5c5:cc7affe6421806b0:bd9bc91d7647b9f8:1 - receive
However, within the @KafkaListener method tracer.activeSpan() remains null, so the trace context is not propagated with the REST calls to the REST backend:
demo.kafkabackend.DemoKafkaListener - Received Message 1
demo.kafkabackend.DemoKafkaListener - Uber trace id: 3207769d3be0d5c5:d01d70634dca85f4:3207769d3be0d5c5:1
demo.kafkabackend.DemoKafkaListener - Second trace id: 3207769d3be0d5c5:f1854ae1278b9fe8:d01d70634dca85f4:1
demo.kafkabackend.DemoKafkaListener - Active span: null
i.j.i.reporters.LoggingReporter - Span reported: 765d81a656a7ff85:765d81a656a7ff85:0:1 - GET
demo.kafkabackend.DemoKafkaListener - REST API call returns OK
demo.kafkabackend.DemoKafkaListener - Received Message 2
demo.kafkabackend.DemoKafkaListener - Uber trace id: 3207769d3be0d5c5:ea888f922583ab16:3207769d3be0d5c5:1
demo.kafkabackend.DemoKafkaListener - Second trace id: 3207769d3be0d5c5:d5e004576fe71064:ea888f922583ab16:1
demo.kafkabackend.DemoKafkaListener - Active span: null
i.j.i.reporters.LoggingReporter - Span reported: 577936584a049514:577936584a049514:0:1 - GET
demo.kafkabackend.DemoKafkaListener - REST API call returns OK
demo.kafkabackend.DemoKafkaListener - Received Message 3
demo.kafkabackend.DemoKafkaListener - Uber trace id: 3207769d3be0d5c5:c7e1fdc77e1e78cc:3207769d3be0d5c5:1
demo.kafkabackend.DemoKafkaListener - Second trace id: 3207769d3be0d5c5:f2e75089bb249f5d:c7e1fdc77e1e78cc:1
demo.kafkabackend.DemoKafkaListener - Active span: null
i.j.i.reporters.LoggingReporter - Span reported: eb966a57e60e4e23:eb966a57e60e4e23:0:1 - GET
demo.kafkabackend.DemoKafkaListener - REST API call returns OK
demo.kafkabackend.DemoKafkaListener - Received Message 4
demo.kafkabackend.DemoKafkaListener - Uber trace id: 3207769d3be0d5c5:5263077bece29bb5:3207769d3be0d5c5:1
demo.kafkabackend.DemoKafkaListener - Second trace id: 3207769d3be0d5c5:994361341b8ad741:5263077bece29bb5:1
demo.kafkabackend.DemoKafkaListener - Active span: null
i.j.i.reporters.LoggingReporter - Span reported: b0d559f98873e047:b0d559f98873e047:0:1 - GET
demo.kafkabackend.DemoKafkaListener - REST API call returns OK
demo.kafkabackend.DemoKafkaListener - Received Message 5
demo.kafkabackend.DemoKafkaListener - Uber trace id: 3207769d3be0d5c5:bd9bc91d7647b9f8:3207769d3be0d5c5:1
demo.kafkabackend.DemoKafkaListener - Second trace id: 3207769d3be0d5c5:cc7affe6421806b0:bd9bc91d7647b9f8:1
demo.kafkabackend.DemoKafkaListener - Active span: null
i.j.i.reporters.LoggingReporter - Span reported: c6cb251ef5a07042:c6cb251ef5a07042:0:1 - GET
demo.kafkabackend.DemoKafkaListener - REST API call returns OK
I would have expected opentracing-kafka-spring to set the activeSpan on my tracer bean, making it available within my @KafkaListener method so it gets propagated correctly by the RestTemplate instrumented through opentracing-spring-jaeger-cloud-starter.
What needs to change to make this work? At present my trace information is lost and a new trace is started, making it unsuitable for full tracing.
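Until the instrumentation does this for me, one workaround seems to be extracting the SpanContext from the Kafka record headers myself and activating a span around the processing, so the instrumented RestTemplate joins the trace. Below is a minimal sketch, not the demo’s actual listener: it assumes opentracing-kafka-client’s TracingKafkaUtils is on the classpath and an OpenTracing 0.32+ Tracer; the class name, operation name and REST URL are illustrative.

```java
import io.opentracing.References;
import io.opentracing.Scope;
import io.opentracing.Span;
import io.opentracing.SpanContext;
import io.opentracing.Tracer;
import io.opentracing.contrib.kafka.TracingKafkaUtils;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

@Component
public class ManuallyTracedKafkaListener {

    private final Tracer tracer;
    private final RestTemplate restTemplate;

    public ManuallyTracedKafkaListener(Tracer tracer, RestTemplate restTemplate) {
        this.tracer = tracer;
        this.restTemplate = restTemplate;
    }

    @KafkaListener(topics = "opentracing-demo")
    public void listen(ConsumerRecord<String, String> record) {
        // Read the SpanContext that the tracing producer injected into the record headers
        SpanContext parent = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);

        Span span = tracer.buildSpan("process-message")
                .addReference(References.FOLLOWS_FROM, parent)
                .start();
        try (Scope ignored = tracer.activateSpan(span)) {
            // With an active span, the instrumented RestTemplate injects this trace
            // into the outgoing HTTP request instead of starting a fresh one.
            // (URL is illustrative.)
            restTemplate.getForEntity("http://localhost:8080/hello", String.class);
        } finally {
            span.finish();
        }
    }
}
```

With the span active for the duration of the method, tracer.activeSpan() is no longer null and the outgoing REST call should carry the original trace context rather than starting a new trace.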
Is there a nice solution already?
There is a good article with a workaround idea: https://zhaohuabing.com/post/2019-07-02-using-opentracing-with-istio-english/
Will the workaround based on AOP (an aspect around methods annotated with @KafkaListener) work properly, or are there pitfalls?
Both the consumer and producer in this library are a bit off: they open and close a span immediately. This is more forgivable for the producer, since writing to Kafka isn’t particularly interesting and the instrumentation does inject the SpanContext into the Kafka headers (although if a span is created around the message, it should track some work). For the consumer, the expectation is that the span covers the processing of a message, but the Kafka batch-based APIs don’t make that simple to implement as a library. The message listener container APIs for Spring seem like they could do this.
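For reference, a rough sketch of the AOP workaround discussed above: the same extract-and-activate logic as the earlier sketch, moved into an aspect that wraps every @KafkaListener method. This is an illustration, not code from the linked article; it assumes the listener methods receive a ConsumerRecord and that opentracing-kafka-client’s TracingKafkaUtils is available.

```java
import io.opentracing.References;
import io.opentracing.Scope;
import io.opentracing.Span;
import io.opentracing.SpanContext;
import io.opentracing.Tracer;
import io.opentracing.contrib.kafka.TracingKafkaUtils;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class KafkaListenerTracingAspect {

    private final Tracer tracer;

    public KafkaListenerTracingAspect(Tracer tracer) {
        this.tracer = tracer;
    }

    @Around("@annotation(org.springframework.kafka.annotation.KafkaListener)")
    public Object traceListener(ProceedingJoinPoint pjp) throws Throwable {
        // Locate the ConsumerRecord argument so we can read the injected trace headers
        ConsumerRecord<?, ?> record = null;
        for (Object arg : pjp.getArgs()) {
            if (arg instanceof ConsumerRecord) {
                record = (ConsumerRecord<?, ?>) arg;
                break;
            }
        }
        if (record == null) {
            // Payload-only or batch signature: nothing to extract here
            return pjp.proceed();
        }

        SpanContext parent = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);
        Span span = tracer.buildSpan("kafka-listener")
                .addReference(References.FOLLOWS_FROM, parent)
                .start();
        try (Scope ignored = tracer.activateSpan(span)) {
            return pjp.proceed();
        } finally {
            span.finish();
        }
    }
}
```

Known pitfalls: this only works when the ConsumerRecord itself is a method argument (payload-only signatures expose no headers to read), and batch listeners taking a List of records need separate handling, for example one span or one FOLLOWS_FROM reference per record.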