ZContext.destroy() hangs sporadically when used with ZMQ.proxy
I’m using JeroMQ v3.4 from the Maven repository.
I have absolutely no idea how or why, and I can’t even reproduce it consistently, but ZContext.destroy() is definitely hanging at:
"Thread-1@7288" prio=5 tid=0x11 nid=NA runnable
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(EPollArrayWrapper.java:-1)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- locked <0x1ca9> (a sun.nio.ch.EPollSelectorImpl)
- locked <0x1caa> (a java.util.Collections$UnmodifiableSet)
- locked <0x1cab> (a sun.nio.ch.Util$2)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
at zmq.Signaler.wait_event(Signaler.java:135)
at zmq.Mailbox.recv(Mailbox.java:105)
at zmq.Ctx.terminate(Ctx.java:190)
at org.zeromq.ZMQ$Context.term(ZMQ.java:301)
at org.zeromq.ZContext.destroy(ZContext.java:98)
at org.zeromq.ZContext.close(ZContext.java:243)
at sun.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-1)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.springframework.beans.factory.support.DisposableBeanAdapter.invokeCustomDestroyMethod(DisposableBeanAdapter.java:350)
at org.springframework.beans.factory.support.DisposableBeanAdapter.destroy(DisposableBeanAdapter.java:273)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroyBean(DefaultSingletonBeanRegistry.java:540)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroySingleton(DefaultSingletonBeanRegistry.java:516)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.destroySingleton(DefaultListableBeanFactory.java:827)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroySingletons(DefaultSingletonBeanRegistry.java:485)
at org.springframework.context.support.AbstractApplicationContext.destroyBeans(AbstractApplicationContext.java:921)
at org.springframework.context.support.AbstractApplicationContext.doClose(AbstractApplicationContext.java:895)
at org.springframework.context.support.AbstractApplicationContext$1.run(AbstractApplicationContext.java:809)
"main@1" prio=5 tid=0x1 nid=NA waiting
java.lang.Thread.State: WAITING
at java.lang.Object.wait(Object.java:-1)
at java.lang.Thread.join(Thread.java:1281)
at java.lang.Thread.join(Thread.java:1355)
at java.lang.ApplicationShutdownHooks.runHooks(ApplicationShutdownHooks.java:106)
at java.lang.ApplicationShutdownHooks$1.run(ApplicationShutdownHooks.java:46)
at java.lang.Shutdown.runHooks(Shutdown.java:123)
at java.lang.Shutdown.sequence(Shutdown.java:167)
at java.lang.Shutdown.exit(Shutdown.java:212)
- locked <0x86d> (a java.lang.Class)
at java.lang.Runtime.exit(Runtime.java:109)
at java.lang.System.exit(System.java:962)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:73)
"iothread-2@5863" prio=5 tid=0xd nid=NA runnable
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(EPollArrayWrapper.java:-1)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- locked <0x1c9f> (a sun.nio.ch.EPollSelectorImpl)
- locked <0x1cad> (a java.util.Collections$UnmodifiableSet)
- locked <0x1cae> (a sun.nio.ch.Util$2)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
at zmq.Poller.run(Poller.java:207)
at java.lang.Thread.run(Thread.java:745)
"reaper-1@5859" prio=5 tid=0xc nid=NA runnable
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(EPollArrayWrapper.java:-1)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
- locked <0x1c89> (a sun.nio.ch.EPollSelectorImpl)
- locked <0x1caf> (a java.util.Collections$UnmodifiableSet)
- locked <0x1cb0> (a sun.nio.ch.Util$2)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
at zmq.Poller.run(Poller.java:207)
at java.lang.Thread.run(Thread.java:745)
"pool-1-thread-2@5918" prio=5 tid=0xf nid=NA waiting
java.lang.Thread.State: WAITING
at sun.misc.Unsafe.park(Unsafe.java:-1)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
"pool-1-thread-1@5914" prio=5 tid=0xe nid=NA waiting
java.lang.Thread.State: WAITING
at sun.misc.Unsafe.park(Unsafe.java:-1)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
"Finalizer@5979" daemon prio=8 tid=0x3 nid=NA waiting
java.lang.Thread.State: WAITING
at java.lang.Object.wait(Object.java:-1)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:209)
"Reference Handler@7300" daemon prio=10 tid=0x2 nid=NA waiting
java.lang.Thread.State: WAITING
at java.lang.Object.wait(Object.java:-1)
at java.lang.Object.wait(Object.java:503)
at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:133)
"Signal Dispatcher@7299" daemon prio=9 tid=0x4 nid=NA runnable
java.lang.Thread.State: RUNNABLE
Here’s how I’m creating the proxies:
@PostConstruct
private void init() {
    extEvents = zContext.createSocket(ZMQ.PULL);
    attemptBind(extEvents, "tcp://*:" + EXT_EVENT_PORT);
    intEvents = zContext.createSocket(ZMQ.PUB);
    attemptBind(intEvents, "tcp://*:" + INT_EVENT_PORT);
    executorService.execute(new Runnable() {
        @Override
        public void run() {
            ZMQ.proxy(extEvents, intEvents, null);
            logger.info("Event proxy closed");
        }
    });

    intCommands = zContext.createSocket(ZMQ.PULL);
    attemptBind(intCommands, "tcp://*:" + INT_COMMAND_PORT);
    extCommands = zContext.createSocket(ZMQ.PULL);
    attemptBind(extCommands, "tcp://*:" + EXT_COMMAND_PORT);
    executorService.execute(new Runnable() {
        @Override
        public void run() {
            ZMQ.proxy(intCommands, extCommands, null);
            logger.info("Command proxy closed");
        }
    });

    logger.info("CREATED");
}
// Retries a bind until it works; rethrows if the context is terminated
// or the retries are exhausted.
private void attemptBind(ZMQ.Socket socket, String address) {
    socket.setLinger(2000);
    int retries = 5;
    while (retries > 0) {
        try {
            socket.bind(address);
            break;
        } catch (ZMQException e) {
            if (e.getErrorCode() == ZError.ETERM) {
                // Context is being terminated; no point retrying.
                retries = 0;
            } else {
                retries--;
                logger.warn("Could not set up proxy port for address " + address
                        + (retries > 0 ? ", retrying" : ""), e);
            }
            if (retries == 0) {
                throw e;
            }
            try {
                // Back off before the next attempt.
                Thread.sleep(10000 / (retries * retries));
            } catch (InterruptedException e1) {
                Thread.currentThread().interrupt();
                retries = 0;
            }
        }
    }
}
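For completeness, the teardown side is just Spring calling ZContext.close() as the bean’s destroy method, which is what you can see hanging at the top of the first stack trace. A rough @PreDestroy sketch of what I expect that shutdown to amount to (the method name, the 5-second timeout and the executor handling are illustrative, not my actual code; assumes javax.annotation.PreDestroy and java.util.concurrent.TimeUnit are imported):

```java
@PreDestroy
private void shutdown() {
    // Destroying the context should make the blocked ZMQ.proxy() calls fail with
    // ETERM, letting the two runnables submitted in init() return on their own.
    zContext.destroy();

    // Then stop the executor and give the proxy runnables a moment to finish.
    executorService.shutdown();
    try {
        executorService.awaitTermination(5, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}
```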
Other instantiations seem to work OK. I’m having trouble tracking down the precise cause because the hang is sporadic.
This can’t be intentional behaviour. I’ve tried so many permutations of shutting down JeroMQ and it never works cleanly.
Edit: just tested with 0.3.5-SNAPSHOT and it still freezes.
https://github.com/zeromq/jeromq/wiki/Sharing-ZContext-between-thread
I hope this helps. The basic rule is:
I was indeed using Thread.interrupt() for my PUB nodes, since I hadn’t seen that warning before. Many thanks!
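To illustrate the change (a sketch of my understanding of the recommended pattern, written against the raw ZMQ.Context API rather than ZContext; the socket type, port and message are illustrative, not code from this project): the publishing thread is no longer interrupted; instead, some other thread terminates the context at shutdown, the blocked ZMQ call fails with ETERM, and the worker closes its own socket, which is what allows the terminate call to return.

```java
// Worker thread body: 'ctx' is the shared ZMQ.Context; shutdown is triggered by
// another thread calling ctx.term(), not by interrupting this thread.
ZMQ.Socket pub = ctx.socket(ZMQ.PUB);
pub.bind("tcp://*:5556");
try {
    while (true) {
        pub.send("heartbeat");   // throws ZMQException(ETERM) once ctx.term() runs
    }
} catch (ZMQException e) {
    if (e.getErrorCode() != ZError.ETERM) {
        throw e;                 // anything other than context termination is a real error
    }
} finally {
    pub.close();                 // closing the socket here is what lets ctx.term() return
}
```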