StackOverflowError in KafkaConsumerActor
I have a very simple consumer which runs into a StackOverflowError after some time (fs2-kafka 0.16.2). After increasing -XX:MaxJavaStackTraceDepth I located it in KafkaConsumerActor.scala:223.
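A hedged sketch of how such a deeper trace can be obtained (assuming an sbt build; -XX:MaxJavaStackTraceDepth is a HotSpot flag whose default of 1024 frames truncates deep traces):

// build.sbt: fork the JVM so javaOptions take effect, and raise the
// recorded stack-trace depth (100000 is an assumed value; anything
// large enough to reach the repeating frames works)
fork := true
javaOptions += "-XX:MaxJavaStackTraceDepth=100000"

With the limit raised, the trace looks like this: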
Exception in thread "fs2-kafka-consumer-20" java.lang.StackOverflowError
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:505)
[...]
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:507)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:507)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:507)
at scala.collection.MapLike$$anon$1.hasNext(MapLike.scala:186)
at scala.collection.Iterator.foreach(Iterator.scala:937)
at scala.collection.Iterator.foreach$(Iterator.scala:937)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
at scala.collection.MapLike$DefaultKeySet.foreach(MapLike.scala:177)
at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:156)
at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:154)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
at scala.collection.TraversableOnce.$div$colon(TraversableOnce.scala:150)
at scala.collection.TraversableOnce.$div$colon$(TraversableOnce.scala:150)
at scala.collection.AbstractTraversable.$div$colon(Traversable.scala:104)
at scala.collection.generic.Subtractable.$minus$minus(Subtractable.scala:59)
at scala.collection.generic.Subtractable.$minus$minus$(Subtractable.scala:59)
at scala.collection.AbstractSet.$minus$minus(Set.scala:47)
at scala.collection.SetLike.diff(SetLike.scala:179)
at scala.collection.SetLike.diff$(SetLike.scala:179)
at scala.collection.AbstractSet.diff(Set.scala:47)
at fs2.kafka.KafkaConsumerActor.$anonfun$poll$2(KafkaConsumerActor.scala:223)
at cats.effect.internals.IORunLoop$.cats$effect$internals$IORunLoop$$loop(IORunLoop.scala:87)
at cats.effect.internals.IORunLoop$.startCancelable(IORunLoop.scala:41)
at cats.effect.internals.IOBracket$BracketStart.run(IOBracket.scala:86)
at cats.effect.internals.Trampoline.cats$effect$internals$Trampoline$$immediateLoop(Trampoline.scala:70)
at cats.effect.internals.Trampoline.startLoop(Trampoline.scala:36)
at cats.effect.internals.TrampolineEC$JVMTrampoline.super$startLoop(TrampolineEC.scala:93)
at cats.effect.internals.TrampolineEC$JVMTrampoline.$anonfun$startLoop$1(TrampolineEC.scala:93)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)
at cats.effect.internals.TrampolineEC$JVMTrampoline.startLoop(TrampolineEC.scala:93)
at cats.effect.internals.Trampoline.execute(Trampoline.scala:43)
at cats.effect.internals.TrampolineEC.execute(TrampolineEC.scala:44)
at cats.effect.internals.IOBracket$BracketStart.apply(IOBracket.scala:72)
at cats.effect.internals.IOBracket$BracketStart.apply(IOBracket.scala:52)
at cats.effect.internals.IORunLoop$.cats$effect$internals$IORunLoop$$loop(IORunLoop.scala:136)
at cats.effect.internals.IORunLoop$RestartCallback.signal(IORunLoop.scala:345)
at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:366)
at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:312)
at cats.effect.internals.IOShift$Tick.run(IOShift.scala:36)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Consumer:
// Imports assumed for fs2-kafka 0.16.x; circeJsonDeserializer, Measurement,
// processRecord and service are defined elsewhere in the application.
import cats.data.NonEmptyList
import cats.effect.{ExitCode, IO}
import fs2.kafka._
import org.apache.kafka.common.serialization.StringDeserializer
import scala.concurrent.ExecutionContext
import scala.concurrent.duration._

val consumerSettings = (executionContext: ExecutionContext) =>
  ConsumerSettings(
    keyDeserializer = new StringDeserializer,
    valueDeserializer = circeJsonDeserializer[Measurement],
    executionContext = executionContext
  )
    .withAutoOffsetReset(AutoOffsetReset.Latest)
    .withAutoCommitInterval(10.seconds)
    .withBootstrapServers("localhost:9092")
    .withGroupId("group")

val topics = NonEmptyList.one("test")

val stream =
  for {
    executionContext <- consumerExecutionContextStream[IO]
    consumer         <- consumerStream[IO].using(consumerSettings(executionContext))
    _                <- consumer.subscribe(topics)
    _                <- consumer.stream.evalTap[IO](message =>
                          processRecord(List(message.record), service)
                        )
  } yield ()

stream.compile.drain.as(ExitCode.Success)
Issue Analytics
- State:
- Created 5 years ago
- Comments: 13 (7 by maintainers)
Top GitHub Comments
@vlovgr Good news, no more errors so far. Thanks for this great library 👍.
@sebastianvoss Thanks for the update and the stack trace! That's a different issue, though; I will take a look shortly.
Edit: looks like this happens because we use filterKeys, which creates a new Map with the filter applied lazily. Over time, we do many filterKeys on the same Map, and the stack eventually overflows. Will fix this shortly.
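For illustration, a minimal self-contained sketch (not fs2-kafka code; assumes Scala 2.12, where Map#filterKeys returns a lazy view) of how repeated filterKeys calls build up the nested iterators seen in the trace:

object FilterKeysOverflow extends App {
  // An ordinary immutable Map, standing in for the actor's internal state.
  var state: Map[Int, String] = Map(1 -> "a", 2 -> "b")

  // Each filterKeys call wraps the previous Map in another lazy view
  // instead of materializing a filtered copy, so after n calls every
  // iteration recurses through n layers of wrapped iterators.
  for (_ <- 1 to 100000)
    state = state.filterKeys(_ => true)

  // Forcing the view (size, iteration, diff, ...) now recurses through
  // all the layers and throws java.lang.StackOverflowError in
  // Iterator.hasNext, matching the trace above.
  println(state.size)
}

The usual fix is to materialize the filtered Map eagerly, e.g. state.filterKeys(p).toMap or state.filter { case (k, _) => p(k) }, so each update produces a flat Map rather than another layer of laziness.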