Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

Memory leak with Kafka metrics

See original GitHub issue

Hi there,

I have a Spring Boot app whose sole purpose is to log the lag for every Kafka topic/consumer-group pair. That’s all it does.

It runs out of memory, and the memory usage over time looks like this:

(screenshot of the application’s memory usage over time)

Code:

@Service
@ManagedResource
public class KafkaLagWriter {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaLagWriter.class);

    private final KafkaAdmin kafkaAdmin;
    private final String bootstrapServers;

    public KafkaLagWriter(final KafkaAdmin kafkaAdmin,
                          @Value("${spring.kafka.bootstrap-servers}") final String bootstrapServers) {
        this.kafkaAdmin = kafkaAdmin;
        this.bootstrapServers = bootstrapServers;
    }

    @ManagedOperation
    @Scheduled(fixedDelay = 60_000)
    public void logKafkaTopicLags() {

        try (final AdminClient adminClient = AdminClient.create(kafkaAdmin.getConfig())) {
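            // note: a new AdminClient (with its own metrics) is created on every scheduled run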
            final ListConsumerGroupsResult consumerGroupsResult = adminClient.listConsumerGroups();
            final Collection<ConsumerGroupListing> consumerGroupListing = consumerGroupsResult.all().get();
            for (final ConsumerGroupListing consumerGroup : consumerGroupListing) {
                final String consumerGroupId = consumerGroup.groupId();

                final ListConsumerGroupOffsetsResult consumerGroupOffsetsResult = adminClient.listConsumerGroupOffsets(consumerGroup.groupId());
                final KafkaFuture<Map<TopicPartition, OffsetAndMetadata>> futureMap = consumerGroupOffsetsResult.partitionsToOffsetAndMetadata();
                final Map<TopicPartition, OffsetAndMetadata> offSetByPartition = futureMap.get();
                final Map<String, Map<TopicPartition, OffsetAndMetadata>> topicsToOffSetByPartition = new HashMap<>();

                offSetByPartition.forEach((key, value) ->
                        topicsToOffSetByPartition.computeIfAbsent(key.topic(), topic -> new HashMap<>())
                                                 .put(key, value));

                for (final Entry<String, Map<TopicPartition, OffsetAndMetadata>> topicToPartitions : topicsToOffSetByPartition.entrySet()) {
                    logOffSets(consumerGroupId, topicToPartitions.getKey(), topicToPartitions.getValue());
                }
            }
        } catch (final Exception e) {
            final ExceptionEvent event = ExceptionEventBuilder.createBuilder()
                                                              .method(Events.KAFKA_MONITORING)
                                                              .exceptionClassName(e.getClass().getName())
                                                              .exceptionMessage(e.getMessage())
                                                              .stacktrace(ExceptionUtils.getStackTrace(e))
                                                              .build();
            LOGGER.error(event, null);
        }
    }

    private void logOffSets(final String groupId, final String topic, final Map<TopicPartition, OffsetAndMetadata> offSetByPartition) {
        try (final KafkaConsumer<?, ?> consumer = createNewConsumer(groupId)) {
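            // note: a new KafkaConsumer (with its own metrics) is created for every group/topic pair on every run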
            final Map<TopicPartition, Long> endOffSetByPartition = consumer.endOffsets(offSetByPartition.keySet());
            for (final TopicPartition topicPartition : offSetByPartition.keySet()) {
                final Long endOffSet = endOffSetByPartition.get(topicPartition);
                final long currentOffSet = offSetByPartition.get(topicPartition).offset();
                final Long lag = endOffSet != null ? endOffSet - currentOffSet : null;

                final KafkaConsumerStatisticEvent event = KafkaConsumerStatisticEventBuilder.createBuilder()
                                                                                            .topic(topic)
                                                                                            .consumerGroup(groupId)
                                                                                            .partition(topicPartition.partition())
                                                                                            .currentOffSet(currentOffSet)
                                                                                            .endOffSet(endOffSet)
                                                                                            .lag(lag)
                                                                                            .build();
                LOGGER.info(event, null);
            }
        }
    }

    private KafkaConsumer<?, ?> createNewConsumer(final String groupId) {
        final Properties properties = new Properties();
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return new KafkaConsumer<>(properties);
    }
}

Can you provide some help on this matter?

<micrometer.version>1.1.4</micrometer.version>

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 11 (7 by maintainers)

Top GitHub Comments

1 reaction
izeye commented, Dec 10, 2019

I don’t need these metrics

@gstephant If you don’t need them, you can disable them with the following application property:

spring.autoconfigure.exclude=org.springframework.boot.actuate.autoconfigure.metrics.KafkaMetricsAutoConfiguration
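
The same exclusion can also be applied in code rather than via a property; a minimal sketch, assuming a standard Spring Boot 2.x application class:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.actuate.autoconfigure.metrics.KafkaMetricsAutoConfiguration;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Excludes the Kafka metrics auto-configuration, equivalent to setting the
// spring.autoconfigure.exclude property above.
@SpringBootApplication(exclude = KafkaMetricsAutoConfiguration.class)
public class Application {

    public static void main(final String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
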
0 reactions
alevohin commented, May 14, 2020

@jorgheymans @shakuzen I suppose the memory leak is the one described in #2096. Version 1.3.x is still used by Spring Boot 2.2.6.RELEASE.
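
If the leak does track the per-minute creation of AdminClient and KafkaConsumer instances (each of which registers its own metrics and MBeans), one workaround is to build the clients once and reuse them across scheduled runs, closing them only on shutdown. A minimal sketch, assuming the same configuration as the question’s code; the class name and the fixed group.id are illustrative, and endOffsets() does not require membership in the group being monitored:

import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ConsumerGroupListing;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.kafka.core.KafkaAdmin;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;

@Service
public class SharedClientLagWriter implements DisposableBean {

    private static final Logger LOGGER = LoggerFactory.getLogger(SharedClientLagWriter.class);

    private final AdminClient adminClient;
    private final KafkaConsumer<String, String> consumer;

    public SharedClientLagWriter(final KafkaAdmin kafkaAdmin,
                                 @Value("${spring.kafka.bootstrap-servers}") final String bootstrapServers) {
        // Created once at startup, so any metrics/MBeans the clients register
        // are registered once instead of on every scheduled run.
        this.adminClient = AdminClient.create(kafkaAdmin.getConfig());

        final Properties properties = new Properties();
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, "kafka-lag-writer"); // illustrative fixed group.id
        properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        this.consumer = new KafkaConsumer<>(properties);
    }

    @Scheduled(fixedDelay = 60_000)
    public void logKafkaTopicLags() {
        try {
            for (final ConsumerGroupListing group : adminClient.listConsumerGroups().all().get()) {
                final Map<TopicPartition, OffsetAndMetadata> committed =
                        adminClient.listConsumerGroupOffsets(group.groupId()).partitionsToOffsetAndMetadata().get();
                // endOffsets() only asks the brokers for log-end offsets, so the
                // shared consumer can query partitions owned by any group.
                final Map<TopicPartition, Long> ends = consumer.endOffsets(committed.keySet());
                committed.forEach((partition, offset) -> {
                    final Long end = ends.get(partition);
                    final Long lag = end != null ? end - offset.offset() : null;
                    LOGGER.info("group={} partition={} current={} end={} lag={}",
                                group.groupId(), partition, offset.offset(), end, lag);
                });
            }
        } catch (final Exception e) {
            LOGGER.error("Failed to log Kafka topic lags", e);
        }
    }

    @Override
    public void destroy() {
        consumer.close();
        adminClient.close();
    }
}
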


Top Results From Across the Web

Debugging a memory leak in Apache Kafka® | by Javier Navarro
This is one of the most important metrics when monitoring kafka and we assume it as 100% accurate description of a production incident:...
Possible "memory-leak" in KafkaStreamsMetrics #2843 - GitHub
When using KafkaStreamsMetrics the heap-usage seems to be ever-increasing for objects of type io.micrometer.core.instrument.ImmutableTag.
Memory leak in KafkaMetrics registered to MBean - Apache
After close() is called on a KafkaConsumer, some registered MBeans are not unregistered, causing a leak. import static ...
Dawn of the Dead Ends: Fixing a Memory Leak in Apache Kafka
When memory is allocated by mmap, the caller has to call 'munmap' eventually to release it back to the operating system. Not...
Memory Leak in kafka - Stack Overflow
We have built a data ingestion pipeline using Kafka. We have a consumer that reads from a kafka topic and writes to a...