
Consumer spins CPU with 400 topics

See original GitHub issue

Description

Subscribing to 400 topics spins the CPU on my AWS VM (Xeon 2.3 GHz, 1 virtual CPU) to 100% while idle. No messages are sent or received. The code is pretty much what you’ve got in the examples here.

How to reproduce

Subscribe to 400 or more topics. I haven’t yet explored the minimum number of topics at which the CPU starts spinning.

Questions

  • What am I doing wrong?
  • Is the library able to support that many topics in a single consumer?
    • I need to be able to serve 400-1000 topics in a single process, with many processes per VM.

Checklist

Please provide the following information:

  • Confluent.Kafka NuGet version: 1.0.0
  • Apache Kafka version: 1.1.1
  • Operating system: Windows

Code

// Note: 'topics' is a List<string> field (not shown) holding the ~400 topic names;
// usings include Confluent.Kafka, Newtonsoft.Json, System, and System.Threading.
private void ReceiveMessages(string servers, string consumerGroupName)
{
	var config = new ConsumerConfig
	{
		BootstrapServers = servers,
		GroupId = consumerGroupName,
		EnableAutoCommit = true,
		StatisticsIntervalMs = 5000,
		SessionTimeoutMs = 6000,
		AutoOffsetReset = AutoOffsetReset.Earliest,
		EnablePartitionEof = true
	};

	CancellationTokenSource cts = new CancellationTokenSource();
	Console.CancelKeyPress += (_, e) => {
		e.Cancel = true; // prevent the process from terminating.
		cts.Cancel();
	};

	using (var consumer = new ConsumerBuilder<Ignore, string>(config)
		// Note: All handlers are called on the main .Consume thread.
		.SetErrorHandler((_, e) => Console.WriteLine($"Error: {e.Reason}"))
		.Build())
	{
		consumer.Subscribe(topics);
		Console.WriteLine($"###### {DateTime.Now} - Subscribed to {topics.Count} topics");

		try
		{
			int counter = 0;
			while (!cts.IsCancellationRequested)
			{
				try
				{
					var result = consumer.Consume(cts.Token);
					if (!result.IsPartitionEOF)
					{
						var payload = JsonConvert.DeserializeObject<SampleMessage>(result.Value);
						Console.WriteLine($"{DateTime.Now} - {payload.TopicName} - {payload.Offset} - {payload.CreatedAt}");
					}
					else
					{
						Console.WriteLine($"{DateTime.Now} - EOF - {Thread.CurrentThread.ManagedThreadId} - {counter++}");
					}
				}
				catch (ConsumeException e)
				{
					Console.WriteLine($"***** {DateTime.Now} - ConsumeException: {e.Message}");
				}
			}
		}
		catch (OperationCanceledException e)
		{
			consumer.Close();
			Console.WriteLine($"***** {DateTime.Now} - OperationCanceledException: {e.Message}");
		}
	}
}
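
None of the comments below pin down a root cause, so as a first debugging step it may be worth stripping the loop down to rule out two easy suspects: the per-partition EOF events (with 400+ topics there is one per assigned partition, and each one is logged with Console.WriteLine above) and the 5-second statistics emission, neither of which is needed for the subscription itself. The sketch below shows such a stripped-down variant; ReceiveMessagesQuiet is a hypothetical name and this is not a fix confirmed anywhere in the thread:

// Sketch only: the same consume loop with the "noisy" knobs removed, to help
// isolate where the CPU time goes. Names mirror the snippet above; 'topics'
// is the same List<string> field of topic names.
private void ReceiveMessagesQuiet(string servers, string consumerGroupName)
{
	var config = new ConsumerConfig
	{
		BootstrapServers = servers,
		GroupId = consumerGroupName,
		EnableAutoCommit = true,
		SessionTimeoutMs = 6000,
		AutoOffsetReset = AutoOffsetReset.Earliest,
		EnablePartitionEof = false,  // no per-partition EOF events
		StatisticsIntervalMs = 0     // no statistics emission while testing
	};

	var cts = new CancellationTokenSource();
	Console.CancelKeyPress += (_, e) => { e.Cancel = true; cts.Cancel(); };

	using (var consumer = new ConsumerBuilder<Ignore, string>(config)
		.SetErrorHandler((_, e) => Console.WriteLine($"Error: {e.Reason}"))
		.Build())
	{
		consumer.Subscribe(topics);
		try
		{
			while (!cts.IsCancellationRequested)
			{
				try
				{
					// Consume blocks until a message (or error) arrives, so an
					// idle subscription should not busy-wait here.
					var result = consumer.Consume(cts.Token);
					Console.WriteLine($"{result.Topic} [{result.Partition}] @{result.Offset}: {result.Value}");
				}
				catch (ConsumeException e)
				{
					Console.WriteLine($"ConsumeException: {e.Message}");
				}
			}
		}
		catch (OperationCanceledException) { }
		finally
		{
			consumer.Close();
		}
	}
}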

Issue Analytics

  • State: open
  • Created 4 years ago
  • Comments: 11 (5 by maintainers)

Top GitHub Comments

1 reaction
mhowlett commented, May 22, 2019

It’s unusual to use Kafka in this way (subscribing to so many topics) - what is the use case? That said, the limiting factor is typically the number of partitions (independent of the number of topics), and subscribing to that many partitions should be fine. Do you see a similar issue subscribing to a single topic with 400-1000 partitions?
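
For anyone wanting to run that experiment, one way to create a single topic with a large partition count from the same client library is the AdminClient API. This is only a sketch; the topic name, partition count, and broker address below are placeholders:

using Confluent.Kafka;
using Confluent.Kafka.Admin;
using System;
using System.Threading.Tasks;

class CreateManyPartitionsTopic
{
	// Creates one test topic with a large partition count, as suggested above.
	// "partition-spin-test", 1000 partitions, and localhost:9092 are placeholders.
	static async Task Main()
	{
		var adminConfig = new AdminClientConfig { BootstrapServers = "localhost:9092" };
		using (var admin = new AdminClientBuilder(adminConfig).Build())
		{
			await admin.CreateTopicsAsync(new[]
			{
				new TopicSpecification
				{
					Name = "partition-spin-test",
					NumPartitions = 1000,
					ReplicationFactor = 1
				}
			});
			Console.WriteLine("Topic created.");
		}
	}
}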

0 reactions
alexandery commented, May 23, 2019

> Assume 100M messages/day at ~10 KB per message. That’s 1,000 GB/day = 1,000 GB / 86,400 s ≈ 0.01 GB/s = 10 MB/s, × 10 (number of consumers) = 100 MB/s. When benchmarking the Python client I was pushing 100 MB/s with a 3-broker cluster and a single client (both producer and consumer). Based on my recollection of the specs of large customer clusters (and depending on your specific requirements around latency and replication), I’d estimate you’d need maybe 5 or 6 brokers.

Thank you for the details on the calculations and your opinion, Matt. Appreciate it.

> I’m not sure what the client limit on the number of partitions is (I need to look into it), but subscribing to 6400 partitions is certainly quite a way above normal/recommended use.

OK, understood. I’m going to see what the magic number is before CPU consumption gets out of hand.
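
For readers following the arithmetic in the quoted comment, here is a small restatement of the same estimate. All inputs are the assumptions from that comment, not measurements, and the "× 10" step is read here as ten consumer processes each reading the full stream:

// Back-of-envelope restatement of the throughput estimate quoted above.
using System;

class ThroughputEstimate
{
	static void Main()
	{
		const double messagesPerDay  = 100_000_000;  // 100M messages/day (assumed)
		const double bytesPerMessage = 10_000;       // ~10 KB each (assumed)
		const double secondsPerDay   = 86_400;
		const int consumerCount      = 10;           // assumed: each reads the full stream

		double gbPerDay = messagesPerDay * bytesPerMessage / 1e9;  // ~1,000 GB/day
		double mbPerSecond = gbPerDay * 1_000 / secondsPerDay;     // ~11.6 MB/s (rounded to 10 in the comment)
		double totalMbPerSecond = mbPerSecond * consumerCount;     // ~116 MB/s (rounded to 100 in the comment)

		Console.WriteLine($"{gbPerDay:F0} GB/day -> {mbPerSecond:F1} MB/s x {consumerCount} consumers = {totalMbPerSecond:F0} MB/s");
	}
}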

Read more comments on GitHub >

