An application with only a RoleTask does not exit after the task completes
Greetings!
I’ve got a Cats IO based application with only a RoleTask bootstrapped. Distage version: 1.0.8.
My RoleTask contains a long, blocking piece of code, but it doesn’t block forever.
I thought that after the role’s start() method completes, the application must exit, but it doesn’t. Am I wrong in assuming this behaviour?
My role looks something like this:
class MyTaskRole(
  sender: MySender[IO],
  @Id("myTask-role-logger") logger: Logger[IO]
) extends RoleTask[IO] {
  override def start(
    roleParameters: RawEntrypointParams,
    freeArgs: Vector[String]
  ): IO[Unit] =
    for {
      _ <- logger.info("Running sending task")
      _ <- sender.sendBatchesUntilEmpty
      _ <- logger.info("Sending task complete!")
    } yield ()
}

object MyTaskRole extends RoleDescriptor {
  override def id: String = "myTaskRole"

  object Plugin extends PluginDef {
    include(
      new RoleModuleDef {
        makeRole[MyTaskRole]
      }
    )
  }
}
sender.sendBatchesUntilEmpty is a blocking call that performs a long-running task.
After it completes, I see the “Sending task complete!” message in the logs.
I also observe this line:
I 2021-08-04T17:48:26.407 (RoleAppEntrypoint.scala:82) …AppEntrypoint.Impl.runRoles [31:ioapp-compute-0] phase=late No services to run, exiting…
But, as I have already mentioned, the application does not exit.
Here is an example of my launcher, in case it helps. I haven’t overridden anything here.
object Launcher extends LauncherCats[IO] {
  implicit val ec: ExecutionContextExecutor =
    ExecutionContext.fromExecutor(
      Executors.newFixedThreadPool(10)
    )
  implicit val contextShiftIO: ContextShift[IO] = IO.contextShift(ec)
  implicit val concurrentEffectIO: ConcurrentEffect[IO] = IO.ioConcurrentEffect
  implicit val timerIO: Timer[IO] = IO.timer(ec)

  override protected def pluginConfig: PluginConfig =
    PluginConfig.const(
      Seq(
        MyTaskRole.Plugin,
        DI
      )
    )

  object DI extends PluginDef {
    // a bunch of include(...) directives
  }
}
Top GitHub Comments
Oh, now I understand. Thanks for the explanation. My investigation took me very deep into the details, but the answer turned out to be much simpler 😄
By the way, #1585 is a great idea, I’ll wait for that feature, but for now I’ll try the second option with the helper)
I suppose this issue can be closed now. Again, thanks for your answers and patience)
@amricko0b Thanks for the reproduction! The problem in this demo is caused by creating a global execution context (a non-daemon one, so it prevents the VM from closing) which is then never shut down.
Solution: https://github.com/amricko0b/distage-1557/pull/1. In detail, this causes the VM to get stuck, and it is generally preventable either by making the thread pool daemonic using a custom ThreadFactory:
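For example, a minimal sketch of such a factory, assuming the same fixed pool of 10 threads as in the launcher above:

import java.util.concurrent.{Executors, ThreadFactory}
import scala.concurrent.{ExecutionContext, ExecutionContextExecutor}

// Sketch: mark every pool thread as a daemon so it cannot keep the JVM
// alive after the role has finished.
val daemonThreadFactory: ThreadFactory = new ThreadFactory {
  override def newThread(r: Runnable): Thread = {
    val t = new Thread(r)
    t.setDaemon(true)
    t
  }
}

implicit val ec: ExecutionContextExecutor =
  ExecutionContext.fromExecutor(
    Executors.newFixedThreadPool(10, daemonThreadFactory)
  )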
Or by shutting down the underlying ExecutorService properly via its .shutdown() method. Distage provides a Lifecycle.fromExecutorService helper for that:
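A hedged sketch of that option, assuming the pool is bound inside a distage module so that its shutdown is tied to the application lifecycle (the exact signature of Lifecycle.fromExecutorService may differ between versions, and ExecutorModule is a made-up name):

import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, ExecutionContextExecutorService}

import distage.{Lifecycle, ModuleDef}

// Sketch: let distage own the thread pool as a resource, so shutdown() is
// called on it when the role completes and the JVM can exit.
object ExecutorModule extends ModuleDef {
  make[ExecutionContextExecutorService].fromResource(
    Lifecycle.fromExecutorService(
      ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(10))
    )
  )
  make[ExecutionContext].using[ExecutionContextExecutorService]
}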
Also, I think this problem can be avoided in the future with this change: https://github.com/7mind/izumi/pull/1585. We already supply default bindings for ExecutionContext, but only for ZIO right now; I fixed that oversight in the PR, so you will be able to use a default ExecutionContext @Id("cpu") without having to make a custom one.
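Hypothetical usage once that default binding is available for cats-effect as well (the "cpu" identifier is taken from the comment above; the constructor shape just reuses the role from the question):

// Hypothetical: inject the framework-provided pool instead of constructing
// one in the launcher, so distage manages its lifecycle.
class MyTaskRole(
  sender: MySender[IO],
  @Id("myTask-role-logger") logger: Logger[IO],
  @Id("cpu") ec: ExecutionContext
) extends RoleTask[IO] {
  // start(...) stays the same as in the snippet above; blocking work can be
  // shifted onto `ec`
}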