Separate workers for parsing and database insertions
Is your feature request related to a problem? Please describe.
Decouple UDF processes from the backend/database session.
Right now, when we run UDFRunner.apply_mt(), we create a number of UDF worker processes. Each of these processes owns a SQLAlchemy Session object and adds/commits to the database at the end of its parsing loop.
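As a rough illustration of the current coupling, each worker looks roughly like the sketch below (SessionLocal and parse_document are placeholders for this example, not Fonduer's actual API):

```python
import multiprocessing as mp

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# Placeholder session factory; Fonduer manages its own engine/session setup.
SessionLocal = sessionmaker(bind=create_engine("postgresql:///fonduer"))


def parse_document(doc):
    """Placeholder for the CPU-bound parsing UDF; would return Sentence objects."""
    return []


def udf_worker(in_queue):
    # Each worker owns its own Session, so parsing and persistence
    # are tied together inside the same process.
    session = SessionLocal()
    while True:
        doc = in_queue.get()
        if doc is None:                  # sentinel value -> shut down
            in_queue.task_done()
            break
        sentences = parse_document(doc)  # parsing work
        session.add_all(sentences)       # backend I/O in the same loop
        session.commit()
        in_queue.task_done()
    session.close()
```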
Describe the solution you’d like
Make the UDF processes backend-agnostic, e.g. by having a set of separate BackendWorker processes handle the insertion of sentences. One possible way: connect the output_queue of UDF to the input of BackendWorker, which would receive Sentence lists and handle the SQLAlchemy commits.
This will not fully decouple UDF from the backend, because the parser returns SQLAlchemy-specific Sentence objects, but it could be one step towards that goal.
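A minimal sketch of the proposed split, using the same placeholder names as above (again, not Fonduer's actual API), could look like this:

```python
import multiprocessing as mp

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

SessionLocal = sessionmaker(bind=create_engine("postgresql:///fonduer"))


def parse_document(doc):
    """Placeholder parsing UDF; no database access here."""
    return []


def udf_worker(in_queue, out_queue):
    # Parsing only: no Session, no commits.
    while True:
        doc = in_queue.get()
        if doc is None:                  # sentinel -> tell the backend we are done
            out_queue.put(None)
            break
        out_queue.put(parse_document(doc))


def backend_worker(out_queue, n_udf_workers):
    # The only place that touches SQLAlchemy.
    session = SessionLocal()
    finished = 0
    while finished < n_udf_workers:
        sentences = out_queue.get()
        if sentences is None:
            finished += 1
            continue
        session.add_all(sentences)
        session.commit()
    session.close()
```

With this split, only the BackendWorker side would touch SQLAlchemy, so swapping in a different persistence layer or batching commits would be a change local to that one process.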
Additional context
This feature request concerns the decoupling of parsing from the backend. There is likely more coupling with the backend later in the processing pipeline.

We could use (Py)Spark, Dask, etc. for distributed computing, but the bottleneck would be the data persistence layer, i.e., PostgreSQL. In other words, as long as we use PostgreSQL, it’ll be the bottleneck, and we’ll end up doing ad-hoc performance optimizations here and there.
One idea is to use different appliers for different storage backends: one for in-memory, another for PostgreSQL, yet another for Hive, etc. The snorkel project (not snorkel-extraction) takes this approach for different computing frameworks (LFApplier, DaskLFApplier, SparkLFApplier), but Fonduer has more appliers to take care of, i.e., parser, mention_extractor, candidate_extractor, labeler, featurizer; and Fonduer has to worry about the data persistence layer too.
That’s one idea! I think it would be better to modularize so we can 1) have better support for distributed computing from other parties (e.g., PySpark, Dask); 2) make it easy to extend to other data layers.
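To make the applier idea a bit more concrete, here is a hypothetical sketch of backend-specific appliers for the parser. Fonduer does not ship these classes; all names below are made up for illustration, mirroring snorkel's LFApplier/DaskLFApplier/SparkLFApplier split:

```python
from abc import ABC, abstractmethod


def parse_document(doc):
    """Placeholder parsing function shared by all appliers."""
    return []


class ParserApplier(ABC):
    """Hypothetical base class; concrete subclasses choose the storage backend."""

    @abstractmethod
    def apply(self, docs):
        """Parse docs and hand the results to a storage backend."""


class InMemoryParserApplier(ParserApplier):
    def __init__(self):
        self.sentences = []

    def apply(self, docs):
        # Keep parsed sentences in memory; no database involved.
        for doc in docs:
            self.sentences.extend(parse_document(doc))


class PostgresParserApplier(ParserApplier):
    def __init__(self, session_factory):
        self.session_factory = session_factory

    def apply(self, docs):
        # Persist parsed sentences through SQLAlchemy/PostgreSQL.
        session = self.session_factory()
        for doc in docs:
            session.add_all(parse_document(doc))
        session.commit()
        session.close()
```

The other stages mentioned above (mention_extractor, candidate_extractor, labeler, featurizer) could follow the same pattern, each gaining backend- or framework-specific subclasses.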