`Connect ETIMEDOUT` in Windows 10 worker process
See original GitHub issue

Hi @jbielick – I have been coming across this issue a lot recently on a Windows worker. I just updated from 2.2.0 to 3.0.2 and am still seeing this issue.
Jobs process normally for a while, and then I hit the error described above which takes down the entire process.
On the worker, I’m running Windows 10, Node 10.16, and faktory-worker-node 3.0.2. I do not see this same issue on my Ubuntu workers. Do you have any further insight into this issue on this platform specifically? Is there something I can do in my code to be more defensive? I didn’t want to open a new issue yet since all of the context is here.
Here is an example output log from my app: https://pastebin.com/3r9Sq3KT It shows the process working for a while, sending a variety of ACKs and FAILs, and then crashing.
Here is how I’m initiating the worker, with a few small redactions. I’m not very experienced with Node, so I’m including this in case it shows something obviously incorrect.
```js
const faktory = require('faktory-worker');
const worker = require('./worker-__');
const colors = require('colors');

const concurrency = parseInt(process.argv[2], 10) || 4;

// Set up the worker server & register the faktory worker(s).
(async () => {
  faktory.register('Harvest::__::Scraper', async (zipCode, section, aspect, rim, url = null) => {
    if (url) {
      await worker.extractSingleProduct(zipCode, section, aspect, rim, url);
    } else {
      await worker.run(zipCode, section, aspect, rim);
    }
  });

  faktory.register('Harvest::__::Scraper::InCartPricing', async (zipCode, section, aspect, rim, listing) => {
    await worker.inCartPricing(zipCode, section, aspect, rim, listing);
  });

  // MIDDLEWARE: log each job's start, ACK, and FAIL with timestamps.
  faktory.use(async (ctx, next) => {
    const start = new Date().toISOString();
    try {
      console.log(`${start} ${ctx.job.jid} ${ctx.job.jobtype} Start ${ctx.job.args}`);
      await next();
      const end = new Date().toISOString();
      console.log(`${end} ${ctx.job.jid} ${ctx.job.jobtype} ACK ${ctx.job.args}`.green);
    } catch (e) {
      const errTime = new Date().toISOString();
      console.log(`${errTime} ${ctx.job.jid} ${ctx.job.jobtype} FAIL ${ctx.job.args}`.red);
      throw e; // rethrow so the job is FAILed
    }
  });

  await faktory.work({
    queues: ['www.__.com'],
    concurrency,
  });
})();
```
Please let me know if you’d like me to create a separate issue or provide more information, or if there is something I should change on my side.
_Originally posted by @ttilberg in https://github.com/jbielick/faktory_worker_node/issues/23#issuecomment-496991319_
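On the "be more defensive" question above: one generic option is a process-level safety net. This is ordinary Node practice, not a faktory-worker feature, and it cannot prevent the ETIMEDOUT itself; it only lets you log the failure (and, in production, exit so a supervisor restarts the worker) instead of crashing opaquely. A minimal sketch:

```javascript
// Generic Node safety net (a sketch, not a faktory-worker API): log fatal
// errors and unhandled rejections instead of dying silently. In production
// you would likely also exit and let a supervisor restart the process.
process.on('uncaughtException', (err) => {
  console.log(`fatal: ${err.message}`);
});
process.on('unhandledRejection', (reason) => {
  console.log(`unhandled rejection: ${reason}`);
});

// Demo: trigger an asynchronous throw so the handler above fires.
setImmediate(() => {
  throw new Error('ETIMEDOUT (simulated)');
});
```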
v3.0.3 is published. The timeout has been increased to 10s (I’d like to make this a user-configurable value), and uncaught error events are no longer emitted in a way that can’t be caught (see #31). Still thinking about how to expose the client to jobs in the middleware context.
Hmm. So each time it pushes a job it tries to connect to Faktory? That raises a couple of flags. If it were possible to use the same Faktory client for every call to `writeToFactory`, that would be much better. Opening and closing that many connections could cause some strange behavior, as they’re executing very fast and definitely traveling through a proxy or two (site-to-site VPN). If you create a single Faktory client, you could provide it to your job functions in the middleware. That way you could share one connection pool (the client) between all your workers, avoiding the creation of a new pool each time.