Out of memory on ObjectIntMap
Occasionally, after running for a long time, I get an out-of-memory error (heap space) from this code:
if (map.size < 10000)
    map.put(b, dmg);
The exception:
java.lang.OutOfMemoryError: Java heap space
at com.badlogic.gdx.utils.ObjectIntMap.resize(ObjectIntMap.java:465) ~[gdx.jar:?]
at com.badlogic.gdx.utils.ObjectIntMap.putStash(ObjectIntMap.java:264) ~[gdx.jar:?]
at com.badlogic.gdx.utils.ObjectIntMap.push(ObjectIntMap.java:258) ~[gdx.jar:?]
at com.badlogic.gdx.utils.ObjectIntMap.put(ObjectIntMap.java:149) ~[gdx.jar:?]
Weird, as even 10,000 objects shouldn't cause an out-of-memory error. After this error, if I access the map, I get something like this:
java.lang.ArrayIndexOutOfBoundsException: 100489219
at com.badlogic.gdx.utils.ObjectIntMap$Entries.next(ObjectIntMap.java:637) ~[gdx.jar:?]
at com.badlogic.gdx.utils.ObjectIntMap$Entries.next(ObjectIntMap.java:1) ~[gdx.jar:?]
Now I know… a few arrays of size ~100489219 could cause an out-of-memory error (an int[] that long is roughly 383 MB on its own). But with fewer than 10k objects in the map, the backing arrays shouldn't be anywhere near that big.
Bug, perhaps?
Issue Analytics
- State:
- Created 9 years ago
- Comments: 15 (8 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
No problem! For future reference, if anyone comes back to this issue: libGDX used a different internal implementation for ObjectMap/ObjectSet (and all other GDX maps and sets except ArrayMap) from one of the earliest versions up to 1.9.10. That implementation used cuckoo hashing, which, at least in the papers that introduced it a few years earlier, looked really good, and it does perform well. But that performance can only be counted on if the Objects used as keys have well-distributed hashCode() results.

If two keys are unequal (equals() returns false) but have the same hashCode() result (or the same lower bits), those keys are said to collide, and the collision may apply to only some of the bits. With more than a few collisions, the cuckoo hashing approach performs worse, and in extreme cases it would repeatedly try to resolve a collision by using more and more bits of the hashCode(), doubling its memory each time it consumed another bit. It was possible to run out of memory entirely with fewer than 50 maliciously crafted Strings used as keys, where each String had the same hashCode() result, since the ObjectMap would double its required RAM after every two added keys.

The specific problem had to do with the "stash" for problematic keys (those that collide often): only 3 keys in the stash are ever considered when resolving a hash collision with a new key, so if the new key and every key tried from the stash share a hashCode(), the map or set had to double its capacity and try again. The stash idea is a relatively new one, and none of the papers I've seen discussing it mention this problem.
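To make the collision scenario concrete, here is a small standalone demo (not libGDX code, just plain JDK Java) showing how easy it is to mass-produce Strings with identical hashCode() results, which is exactly the kind of input that forced the old cuckoo-hashing maps to keep doubling:

```java
import java.util.HashSet;
import java.util.Set;

public class CollisionDemo {
    public static void main(String[] args) {
        // "Aa" and "BB" are unequal Strings but share hashCode 2112,
        // because 31 * 'A' + 'a' == 31 * 'B' + 'B'.
        System.out.println("Aa".hashCode() == "BB".hashCode()); // true

        // Concatenating the two blocks yields 2^n colliding keys:
        // all 8 three-block strings below hash to the same value.
        String[] blocks = {"Aa", "BB"};
        Set<String> keys = new HashSet<>();
        Set<Integer> hashes = new HashSet<>();
        for (String a : blocks)
            for (String b : blocks)
                for (String c : blocks) {
                    keys.add(a + b + c);
                    hashes.add((a + b + c).hashCode());
                }
        System.out.println(keys.size());   // 8 distinct keys
        System.out.println(hashes.size()); // 1 distinct hashCode
    }
}
```

A handful of such keys is harmless for most maps, but under the old stash-based cuckoo hashing each pair of them forced another capacity doubling.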
The main way I saw around the memory explosion was to reimplement the maps and sets in libGDX using a conceptually simpler approach, one that could be debugged more easily, or rewritten in a more robust way if that ever proved necessary. I switched from cuckoo hashing with a stash to open addressing with linear probing, backwards-shift deletion, and Fibonacci hashing. Fancy names for some decidedly un-fancy concepts. What matters is that other data structure libraries already use this general type of map and set (fastutil is extremely similar except for how it implements Fibonacci hashing, though it still uses something close to it).

Fibonacci hashing matters a lot here; it's a way of mixing a hashCode() into a hopefully better value, so that all of its bits are equally able to affect the hash value actually used. This means the hashCode() implementations in the keys don't matter much at all, and even if they are all identical, it isn't a catastrophe. The linear probing part is why a hashCode() has to be mixed around: clusters of similar hash values cause slowdowns when there are collisions. Fibonacci hashing gives us that mixing, and in exchange linear probing is one of the faster ways of resolving collisions that doesn't hog memory.

Open addressing is in contrast to chained hash tables, like HashMap in the JDK; HashMap can actually be faster, but it uses a lot more memory and isn't at all friendly to primitive-backed maps or sets (the "chained" part refers to the many lists of objects inside a HashMap, one of which gets checked when there's a collision, and all of those objects use RAM). Backwards-shift deletion is just a good trick for lowering how much time deletion and resizing take.
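A rough sketch of the Fibonacci-hashing idea (illustrative only; this is not libGDX's actual code, and the names here are made up for the example): multiply the hashCode() by 2^64 divided by the golden ratio and keep only the top bits, so that every bit of the hashCode() can influence the bucket index even for a small table:

```java
public class FibonacciHashSketch {
    // 0x9E3779B97F4A7C15L is approximately 2^64 / phi (the golden ratio).
    private static final long GOLDEN = 0x9E3779B97F4A7C15L;

    // Map a hashCode to a bucket index in a table of 2^bits entries.
    static int fibonacciBucket(int hashCode, int bits) {
        return (int) ((hashCode * GOLDEN) >>> (64 - bits));
    }

    public static void main(String[] args) {
        // Four hashCodes that differ only in their high bits:
        int[] hashes = {0x10000000, 0x20000000, 0x30000000, 0x40000000};
        for (int h : hashes) {
            int plain = h & 15;              // simple masking: all land in bucket 0
            int fib = fibonacciBucket(h, 4); // Fibonacci hashing: spread out
            System.out.println(plain + " vs " + fib);
        }
    }
}
```

With plain masking, all four keys collide in bucket 0, while the multiplied-and-shifted values spread across the 16-bucket table. Identical hashCode() results still collide, of course, but linear probing then degrades gracefully into a short scan instead of doubling memory.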
So far, the new maps and sets have done their job since they were introduced in libGDX 1.9.11. There were some copy/paste and similar mistakes that got fixed in 1.9.12, and I think one in 1.9.13 too. In the messy benchmarks I was running to verify whether there was a good gain in performance, the current maps and sets shine in memory usage (typically about as good as it gets for RAM), and are about the same in speed as comparable maps and sets in fastutil (without needing the 15+ MB JAR dependency that fastutil requires).

One potentially unexpected change when updating from 1.9.10 or earlier to the current version is that random numbers requested from MathUtils may be different: in some cases, the old maps and sets would request random numbers from MathUtils, which could change the sequence of numbers produced if you had seeded the MathUtils.random field. The new maps and sets stick to deterministic results, avoiding any randomness.

The mentioned issue is not a problem (or at least, shouldn't be) on libGDX 1.9.11 and newer. There are other issues possible with ObjectMap, such as the known problems Atom processors have with any Java program, which can result in ArrayIndexOutOfBoundsExceptions on those older processors. @winrid I'd suggest making a new issue with the details of your situation, because this issue was closed almost 7 years ago, and all of the code involved has probably changed a lot. We'd need to know what version it's still an issue with, on what platforms it's shown up, etc.