Stuck on an issue?

Lightrun Answers was designed to reduce the constant Googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

Firehose Netty leaks with Reactor 3.1.2 + 0.7.2; fails with Reactor 3.1.3 + 0.7.3

See original GitHub issue

I’ve been getting some Netty leak messages when connecting to the firehose. I tried upgrading to Reactor 3.1.3 + 0.7.3. With that version I no longer get leak messages, but the firehose stops working after a handful of reconnects.

I’ve provided a sample application. Change the Reactor versions to see the different problems.

Using 3.1.2 + 0.7.2, I usually see a Netty leak message after ~40-200 reconnects. Using 3.1.3 + 0.7.3, I usually stop getting data on a fresh firehose connection after ~20 reconnects.

import java.util.concurrent.atomic.AtomicBoolean;

import org.cloudfoundry.doppler.FirehoseRequest;
import org.cloudfoundry.reactor.DefaultConnectionContext;
import org.cloudfoundry.reactor.TokenProvider;
import org.cloudfoundry.reactor.doppler.ReactorDopplerClient;
import org.cloudfoundry.reactor.tokenprovider.ClientCredentialsGrantTokenProvider;
import reactor.core.Disposable;

// Runs inside a method declared `throws InterruptedException`.
DefaultConnectionContext context = DefaultConnectionContext.builder().apiHost("some.api").build();

TokenProvider tokenProvider = ClientCredentialsGrantTokenProvider.builder().clientId("doppler").clientSecret("secret").build();
ReactorDopplerClient doppler = ReactorDopplerClient.builder()
		.tokenProvider(tokenProvider)
		.connectionContext(context)
		.build();

long noMessageTimeout = 30000;
while (true) {
	long timeout = System.currentTimeMillis() + noMessageTimeout;
	AtomicBoolean messageSent = new AtomicBoolean(false);
	// Subscribe to the firehose and flag the first envelope that arrives.
	Disposable disposable = doppler.firehose(FirehoseRequest.builder().subscriptionId("subscription").build())
			.subscribe(envelope -> messageSent.set(true));
	while (!disposable.isDisposed()) {
		Thread.sleep(100); // Don't peg the processor
		if (messageSent.get()) {
			// Data arrived: dispose and reconnect on the next outer iteration.
			System.out.println("Resetting the firehose.");
			disposable.dispose();
			continue;
		}
		if (System.currentTimeMillis() > timeout) {
			System.out.println("Failed to get anything from firehose in " + noMessageTimeout + "ms. Exiting.");
			return;
		}
	}
}
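
To make the leaks easier to catch while running this repro, Netty's leak detector can be turned all the way up. This is a minimal sketch, not part of the original report, using Netty's standard ResourceLeakDetector API:

import io.netty.util.ResourceLeakDetector;

// Before any buffers are allocated: track every allocation and report
// leaks with full access records (expensive; for debugging only).
ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.PARANOID);

The same effect is available through the io.netty.leakDetection.level system property on Netty 4.1.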

Issue Analytics

  • State: closed
  • Created: 6 years ago
  • Comments: 24 (24 by maintainers)

Top GitHub Comments

1 reaction
nebhale commented, Feb 15, 2018

Great. Thanks for the input. I’m going to leave this open until we get the Reactor releases and do the version upgrades.

1 reaction
nebhale commented, Feb 15, 2018

I stopped it at 80,000 loops this morning and never saw that issue again. So I think you can safely go to production with it, but I’ll keep trying to track down the root cause.
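
For anyone following along with the root-cause hunt, reactor-core ships a general debugging hook (not something prescribed in this thread, just a standard facility) that captures assembly-time stack traces so errors identify the exact operator chain that failed:

import reactor.core.publisher.Hooks;

// Capture assembly stack traces for every operator (significant
// overhead; enable only while debugging).
Hooks.onOperatorDebug();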

Read more comments on GitHub >

Top Results From Across the Web

Reactor Netty Reference Guide
This section provides a brief overview of the Reactor Netty reference documentation. You do not need to read this guide in a linear fashion…

How to handle BubblingException - Stack Overflow
This message says that an exception was thrown somewhere and Reactor channels it in the pipeline. But you're not giving enough information to…

How to Avoid Common Mistakes When Using Reactor Netty
The session includes how to avoid common mistakes and tricks for debugging different error cases, including how to find memory leaks, how to…

How to Avoid Common Mistakes When Using Reactor Netty
"In Spring Boot 2.x, Reactor Netty is the default runtime for creating reactive applications. Since the very first release of Reactor Netty, …
