[1.16.5] Network set in TransmitterNetworkRegistry constantly growing
Issue description:
Basically, the title.
private final Set<DynamicNetwork<?, ?, ?>> networks = new ObjectOpenHashSet<>();
On my public server this network collection grows larger over time, peaking at 50-100k objects.
This eventually leads to TransmitterNetworkRegistry's onTick method taking a really long time to complete.
While investigating, I found that a network is not always removed from this set even after all of its transmitters were unloaded, so I believe it's some sort of a leak.
Is this behaviour intended? Is there some sort of cleanup over time?
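To make the reported pattern concrete, here is a minimal, hypothetical Java sketch of a registry that ticks every tracked network and only removes entries from an unload callback. The names NetworkRegistry, Network and onChunkUnloaded are illustrative stand-ins rather than Mekanism's actual code, and a plain HashSet stands in for fastutil's ObjectOpenHashSet. If the unload callback never fires for a network, nothing else ever removes it, so the set keeps growing and onTick slows down:

```java
import java.util.HashSet;
import java.util.Set;

/**
 * Minimal, hypothetical model of the pattern described above (not Mekanism's
 * actual code): a registry that tracks every live network in a Set and only
 * ever removes entries from an unload callback.
 */
final class NetworkRegistry {

    static final class Network {
        void tick() {
            // per-network work done every server tick
        }
    }

    // In the real code this is an ObjectOpenHashSet<DynamicNetwork<?, ?, ?>>.
    private final Set<Network> networks = new HashSet<>();

    void register(Network network) {
        networks.add(network);
    }

    // Called every server tick; cost grows linearly with the set size,
    // which is why a 50-100k entry set makes onTick slow.
    void onTick() {
        for (Network network : networks) {
            network.tick();
        }
    }

    // The only removal path in this model. If this callback is delayed or
    // skipped, the network stays in the set until the server stops,
    // which matches the observed leak.
    void onChunkUnloaded(Network network) {
        networks.remove(network);
    }

    int size() {
        return networks.size();
    }
}
```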
Issue Analytics
- State:
- Created 3 years ago
- Reactions: 1
- Comments: 9 (4 by maintainers)
So, I was able to reproduce this issue locally and get to the bottom of it. Mekanism fully relies on the onChunkUnloaded callback to clean up its networks. If the callback isn't executed in time, the affected network stays in memory until the server stops. Performant, on the other hand, does a deliciously cruel thing: it delays onChunkUnloaded execution if the unload is considered laggy from Performant's POV.

I feel like odds are this is probably fixed by 10.3.2, due to our extra unload checks for when a chunk just becomes inaccessible but isn't actually unloaded yet.