Segmentation fault when using threading
See original GitHub issue.
- Linux version: 64-bit Ubuntu
- PyQ version: PyQ 4.1.3, NumPy 1.13.3, KDB+ 3.5 (2018.02.26) l64, Python 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]
- kdb+ version: 64-bit kdb+, QLIC=/home/zakariyya/q/q64
- Not installed via a virtual environment
- QHOME=/home/zakariyya/q/q64
- Not using Conda
Hi guys,
I’m getting a segmentation fault when running a q process that loads a Python script (.p). The use case is fairly simple: I expose Python subscribe and unsubscribe functions for Redis to q, then call them from q so that the Redis subscription runs in a separate thread. That way q can still do other work in “parallel”.
$ cat redis.q
system "l /home/zakariyya/redis_thread.p";
.z.pc:{ if[x ~ hndl; -2" lost handle to tp"; if[`res in key `.;unsubscribe_redis enlist (res)] ] };
.z.exit:{ if[`res in key `.;unsubscribe_redis enlist (res)]; };
hndl:hopen 6006;
upd:{[t;d] data:d; neg[hndl](`upd;`tbl; data)};
show .z.i
$ q redis.q
KDB+ 3.5 2018.02.26 Copyright (C) 1993-2018 Kx Systems
l64/ 8()core 15999MB zakariyya zakariyya-pc-2193 127.0.1.1 EXPIRE 2018.07.01 zak********* KOD #51578
3645i
q)res:subscribe_redis("quote*";"quote")
And the Python script:
$ cat redis_thread.p
import threading
import sys
import time
import redis
import signal
import os
from functools import partial

from pyq import q, K, _k


class PublisherThread(threading.Thread):
    def __init__(self, channel, r, kdb_table):
        super(PublisherThread, self).__init__()
        self._stopper = threading.Event()
        self.channel = channel
        self.kdb_table = kdb_table
        self.redis = r
        self.pubsub = self.redis.pubsub()
        self.redis.client_setname("kdb-feed-" + self.kdb_table)
        self.pubsub.psubscribe(self.channel)

    def stop(self):
        print('Closing Redis connection...')
        self.pubsub.close()
        self._stopper.set()  # signal the run loop to exit

    @property
    def stopped(self):
        return self._stopper.isSet()

    def run(self):
        while not self.stopped:
            try:
                msg = self.pubsub.get_message()
                if msg:
                    if msg['type'] in ('message', 'pmessage'):
                        # print(msg)
                        qmsg = K.string(msg)
                        q('upd', self.kdb_table, qmsg)  # calls back into q from this worker thread
                time.sleep(0.001)
            except _k.error as e:
                print('Caught Q error. Cannot insert data to table')
                self.stop()
            except Exception as e:
                print('Received unhandled exception. Cannot insert data to table')
                self.stop()


class RedisManager(object):
    def __init__(self, subscriber_init):
        self._subscribers_store = {}
        self.subscriber_init = subscriber_init

    def add(self, feed, kdb_table):
        print('Subscribing')
        key = ':'.join([feed, kdb_table])
        self._subscribers_store[key] = self.subscriber_init(feed, kdb_table)
        return key

    def remove(self, key):
        self._subscribers_store[key].stop()
        del self._subscribers_store[key]
        return True


def subscriber_init(feed, kdb_table, redis_client):
    t = PublisherThread(feed, redis_client, kdb_table)
    t.start()
    return t


# Curry subscriber_init with the Redis client for RedisManager
redis_manager = RedisManager(
    partial(subscriber_init,
            redis_client=redis.StrictRedis(host='XX.XXX.XXX.XXX', port=6399, db=0)))


# Create and expose Python functions as q callables
def q_subscribe_redis(feed, kdb_table):
    return redis_manager.add(str(feed), str(kdb_table))


def q_unsubscribe_redis(key):
    return redis_manager.remove(str(key))


q.subscribe_redis = q_subscribe_redis
q.unsubscribe_redis = q_unsubscribe_redis
And the segmentation fault:
q)Sorry, this application or an associated library has encountered a fatal error and will exit.
If known, please email the steps to reproduce this error to tech@kx.com
with a copy of the kdb+ startup banner.
Thank you.
/home/zakariyya/q/q64/l64/q() [0x47a8b1]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x11390) [0x7fc0aa739390]
/home/zakariyya/q/q64/l64/q(r0+0) [0x41bd40]
/home/zakariyya/q/q64/l64/q() [0x408a9c]
/home/zakariyya/q/q64/l64/q() [0x40fb6f]
/home/zakariyya/q/q64/l64/q() [0x4042c8]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0) [0x7fc0aa37e830]
/home/zakariyya/q/q64/l64/q() [0x4043a1]
rlwrap: warning: q crashed, killed by SIGSEGV (core dumped).
rlwrap itself has not crashed, but for transparency,
it will now kill itself with the same signal
warnings can be silenced by the --no-warnings (-n) option
Segmentation fault (core dumped)
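For reference, a stripped-down version of the crashing pattern (a hypothetical sketch, not code from the report), assuming the trigger is the q('upd', ...) call made from the worker thread rather than anything Redis-specific:

import threading
from pyq import q

q('f:{0N!x}')  # define a q function on the main thread

def worker():
    # q/kdb+ is single-threaded; calling back into it from a
    # non-main thread is what appears to provoke the SIGSEGV above.
    q('f', 'hello from a worker thread')

t = threading.Thread(target=worker)
t.start()
t.join()

If loading this as a .p file under q crashes in the same way as the full Redis script, the Redis machinery itself can be ruled out, which is consistent with the conclusion in the comments below.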
… and finally, I have proof that this issue has nothing to do with PyQ:
(on the remote end, n got incremented to 4)
I am reassigning this issue to @awilson-kx to see what the Kx team has to say about this behavior.
For completeness, the back trace is
Thanks a lot for taking the time to look into this, and glad to know I wasn’t going crazy 😅
I’ve actually already implemented the main-thread solution, which funnily enough needs to make sync calls to the “Tickerplant”, otherwise the packets get sent God knows where. I guess the split-thread approach was still an interesting way of having q and Python run in parallel and manage their own parts (Python for the Redis management, q for IPC and reconnecting to the TP…); a rough sketch of the main-thread variant follows at the end of this section.
Interestingly, the processes are still running on the AWS RHEL 7.4 boxes after 10 hours now (I’ll leave them overnight, but it looks like they’ve passed the threshold of 10,000 cycles).
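For completeness, a minimal sketch of the main-thread approach described above (an illustration under assumptions, not the author’s actual code; MSG_QUEUE, redis_listener, drain_redis and start_listener are hypothetical names): the Redis worker thread only enqueues raw messages, and a drain function invoked from q’s main thread, e.g. on a \t timer via .z.ts, performs the q('upd', ...) call.

import threading
import time
import Queue as queue  # Python 2, matching the environment in the report

import redis
from pyq import q, K

MSG_QUEUE = queue.Queue()  # shared buffer between the Redis thread and the q main thread

def redis_listener(channel):
    # Worker thread: talks to Redis only, never touches q.
    r = redis.StrictRedis(host='XX.XXX.XXX.XXX', port=6399, db=0)
    pubsub = r.pubsub()
    pubsub.psubscribe(channel)
    while True:
        msg = pubsub.get_message()
        if msg and msg['type'] in ('message', 'pmessage'):
            MSG_QUEUE.put(msg)
        time.sleep(0.001)

def drain_redis(kdb_table):
    # Called from q's main thread (e.g. on a .z.ts timer), so the
    # q('upd', ...) call never happens on a worker thread.
    while not MSG_QUEUE.empty():
        msg = MSG_QUEUE.get_nowait()
        q('upd', str(kdb_table), K.string(str(msg)))

def start_listener(channel):
    # Spawn the Redis listener as a daemon thread and return immediately.
    t = threading.Thread(target=redis_listener, args=(str(channel),))
    t.daemon = True
    t.start()
    return True

q.drain_redis = drain_redis
q.start_listener = start_listener

On the q side, the timer rather than the Redis thread then drives the inserts, e.g. start_listener["quote*"]; .z.ts:{drain_redis[`tbl]}; \t 100.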