
Issue with ConcurrentModificationException for Metadata in StatusMetricsBolt

See original GitHub issue

We have been running StormCrawler with Elasticsearch under heavy load, with parallelism across multiple workers, for some time without any issues. After upgrading from 1.18 to 2.1 (and Storm from 1.2.3 to 2.2.0), we started getting a ConcurrentModificationException from Kryo for the Metadata class.

2021-09-14 17:13:48.797 o.a.s.e.e.ReportError Thread-20-spout-executor[59, 59] [ERROR] Error
java.lang.RuntimeException: com.esotericsoftware.kryo.KryoException: java.util.ConcurrentModificationException
Serialization trace:
md (com.digitalpebble.stormcrawler.Metadata)
	at org.apache.storm.utils.Utils$ ~[storm-client-2.2.0.jar:2.2.0]
	at [?:1.8.0_292]
Caused by: com.esotericsoftware.kryo.KryoException: java.util.ConcurrentModificationException
Serialization trace:
md (com.digitalpebble.stormcrawler.Metadata)
	at com.esotericsoftware.kryo.serializers.ObjectField.write( ~[kryo-3.0.3.jar:?]
	at com.esotericsoftware.kryo.serializers.FieldSerializer.write( ~[kryo-3.0.3.jar:?]
	at com.esotericsoftware.kryo.Kryo.writeClassAndObject( ~[kryo-3.0.3.jar:?]
	at com.esotericsoftware.kryo.serializers.CollectionSerializer.write( ~[kryo-3.0.3.jar:?]
	at com.esotericsoftware.kryo.serializers.CollectionSerializer.write( ~[kryo-3.0.3.jar:?]
	at com.esotericsoftware.kryo.Kryo.writeObject( ~[kryo-3.0.3.jar:?]
	at org.apache.storm.serialization.KryoValuesSerializer.serializeInto( ~[storm-client-2.2.0.jar:2.2.0]
	at org.apache.storm.serialization.KryoTupleSerializer.serialize( ~[storm-client-2.2.0.jar:2.2.0]
	at org.apache.storm.daemon.worker.WorkerTransfer.tryTransferRemote( ~[storm-client-2.2.0.jar:2.2.0]
	at org.apache.storm.daemon.worker.WorkerState.tryTransferRemote( ~[storm-client-2.2.0.jar:2.2.0]
	at org.apache.storm.executor.ExecutorTransfer.tryTransfer( ~[storm-client-2.2.0.jar:2.2.0]
	at org.apache.storm.executor.spout.SpoutOutputCollectorImpl.sendSpoutMsg( ~[storm-client-2.2.0.jar:2.2.0]
	at org.apache.storm.executor.spout.SpoutOutputCollectorImpl.emit( ~[storm-client-2.2.0.jar:2.2.0]
	at org.apache.storm.spout.SpoutOutputCollector.emit( ~[storm-client-2.2.0.jar:2.2.0]
	at com.digitalpebble.stormcrawler.persistence.AbstractQueryingSpout.nextTuple( ~[stormjar.jar:?]
	at org.apache.storm.executor.spout.SpoutExecutor$ ~[storm-client-2.2.0.jar:2.2.0]
	at org.apache.storm.executor.spout.SpoutExecutor$ ~[storm-client-2.2.0.jar:2.2.0]
	at org.apache.storm.utils.Utils$ ~[storm-client-2.2.0.jar:2.2.0]
	... 1 more

After some research, we found that the root cause was the Metadata object being mutated after it was emitted. We were also able to identify that the issue disappeared whenever we removed the status_metrics bolt from the topology. That made sense: the Metadata emitted by the spout was modified later in the default stream while the same tuple was also being serialized for the status_metrics bolt.
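
For intuition, this is the standard fail-fast iterator behavior of java.util collections: Kryo iterates over the entries of the Metadata's backing map while serializing the tuple for a remote worker, and any concurrent structural modification of that map invalidates the iterator. A minimal standalone sketch of the failure mode (the keys are made up for illustration; this is not StormCrawler code):

import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class MetadataCmeSketch {
    public static void main(String[] args) {
        // Stand-in for the map inside com.digitalpebble.stormcrawler.Metadata
        Map<String, String> md = new HashMap<>();
        md.put("url", "https://example.com");

        // Simulates Kryo starting to walk the entries during tuple serialization
        Iterator<Map.Entry<String, String>> it = md.entrySet().iterator();

        // Simulates a bolt on the default stream mutating the same
        // Metadata instance after the tuple was emitted
        md.put("fetch.statusCode", "200");

        // The fail-fast iterator detects the structural modification
        it.next(); // throws java.util.ConcurrentModificationException
    }
}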

We propose connecting the status_metrics bolt to the __system component via the __tick stream, since that bolt only uses tick tuples to perform a kind of cron job. This way we avoid passing it the real tuples, which it never uses and which carry the Metadata that was causing the issue.


  - from: "__system"
    to: "status_metrics"
      type: SHUFFLE
      streamId: "__tick"
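
For context, the bolt only reacts to tick tuples, which Storm delivers on the __tick stream of the __system component. A bolt driven purely by tick tuples typically looks like the following simplified sketch (a stand-in to illustrate the pattern, not StatusMetricsBolt's actual code):

import java.util.Map;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.utils.TupleUtils;

public class TickDrivenBolt extends BaseRichBolt {
    private OutputCollector collector;

    @Override
    public void prepare(Map<String, Object> topoConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple tuple) {
        if (TupleUtils.isTick(tuple)) {
            // Periodic "cron" work, e.g. querying the status index
            // and reporting counts as metrics
            runPeriodicJob();
        }
        // Regular tuples carry nothing this bolt needs; with the
        // __tick wiring above they no longer arrive at all
        collector.ack(tuple);
    }

    private void runPeriodicJob() {
        // placeholder for the periodic work
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // no output streams
    }
}

With this wiring, the bolt receives only tick tuples, so the mutable Metadata never has to be serialized towards it.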

BTW, I’m working with @matiascrespof and @jcruzmartini.


Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 5 (5 by maintainers)

Top GitHub Comments

jnioche commented, Sep 17, 2021

Not that I don’t want to do it myself: I really want you to take full credit for it 😉

juli-alvarez commented, Sep 17, 2021

Hi @jnioche! Glad you like the approach. Yes, we have had the crawler running for the past 24 hours and everything is working as expected: no exceptions, no workers dying, and the Grafana dashboards look good as well. Sure, I will create the PR with the changes in the Flux file.
