The issue is not so much one of wasting space but that we allocate the map in a size that is "too small" to hold all the elements we want it to hold, so it gets resized right away instead of being allocated with the right size from the beginning.
Yes, I had understood that; sorry, I just wrote the wrong thing as I was getting ahead of myself on a related matter.
One can have "each code path take responsibility", but as shown it's very easy to allocate maps too small when e.g. driving the initialization from a list. So having something like Guava's newArrayListWithExpectedSize() would be nice.
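To make the suggestion concrete, here is a minimal sketch of that kind of helper. The helper name and the hardcoded 0.75 load factor are illustrative assumptions, not the actual Guava implementation; Guava's real equivalent for maps is Maps.newHashMapWithExpectedSize(), which performs a similar capacity computation.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MapSizing {
    // Hypothetical helper: choose an initial capacity large enough that
    // expectedSize entries fit without triggering a resize, assuming the
    // default load factor of 0.75.
    static <K, V> Map<K, V> newHashMapWithExpectedSize(int expectedSize) {
        int capacity = (int) Math.ceil(expectedSize / 0.75);
        return new HashMap<>(capacity);
    }

    public static void main(String[] args) {
        List<String> keys = List.of("a", "b", "c", "d");
        // Naive sizing, new HashMap<>(keys.size()), would resize once
        // 0.75 * 4 = 3 entries are inserted; the helper avoids that.
        Map<String, Integer> map = newHashMapWithExpectedSize(keys.size());
        for (String k : keys) {
            map.put(k, k.length());
        }
        System.out.println(map.size());
    }
}
```

This is exactly the "too small" trap described above: passing the element count directly as the initial capacity still forces a resize, because HashMap rehashes once size exceeds capacity times the load factor.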
The analogy with a List implementation isn't a good argument to do this on hashmaps as lists don't use buckets and don't aim to reuse buckets.
Does it make any difference in the larger scheme of things? Probably not. So I'd rather shave different yaks.
Sounds reasonable. I'd rather not merge this either, then: it adds some more arithmetic of questionable value. Merge it if you have proof with e.g. JMH, but then again this seems more like a potential optimisation to suggest to the OpenJDK team (if it's proven valid).