My conceptual understanding of a java.util.HashMap is as follows:
Its main asset over other Map implementations is (near-)constant lookup time, assuming there are few collisions. For this reason the underlying implementation uses an array of fixed length, since an array offers O(1) access by index.
The fixed-length array used to store the Map entries is initialised to a given capacity upon instantiation and expanded (by expanded, I mean a larger array is created and the entries copied across) once the size of the Map exceeds a threshold proportional to the array's length (the load factor).
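To check my understanding of the growth step, here is a simplified sketch. This is not the actual java.util.HashMap source; the class and member names are invented, but the 16-element default capacity and 0.75 load factor are HashMap's documented defaults.

```java
// Hypothetical sketch of the growth step, NOT the JDK implementation.
class GrowthSketch {
    static final float LOAD_FACTOR = 0.75f; // HashMap's documented default
    Object[] table = new Object[16];        // HashMap's documented default capacity
    int size;

    // Checked on insert: once size reaches capacity * loadFactor,
    // the table is grown -- before it is actually full.
    boolean needsResize() {
        return size >= (int) (table.length * LOAD_FACTOR);
    }

    // Growing allocates a larger array and copies everything across: O(n).
    // (A real hash table must also re-bucket each entry here, because an
    // entry's index depends on the capacity.)
    void grow() {
        Object[] bigger = new Object[table.length * 2];
        System.arraycopy(table, 0, bigger, 0, table.length);
        table = bigger;
    }
}
```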
When a value is put into the Map, the key-value pair is placed in an internal linked list stored at the array index derived from the key's hash. When there is a collision, subsequent key-value pairs are appended to that list.
When getting from the Map, the hashCode() of the key is used to derive the array index of the bucket's linked list. You either have your value immediately if the list has size 1, or you iterate through the list, calling equals() on the key of each element, until you find your value.
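The put/get behaviour I describe above can be sketched as a minimal chained hash table. Again, this is my mental model rather than the JDK code; ChainedTable and its members are made-up names.

```java
import java.util.LinkedList;

// Minimal chained hash table sketch -- NOT the java.util.HashMap source.
class ChainedTable<K, V> {
    static class Entry<K, V> {
        final K key;
        V value;
        Entry(K k, V v) { key = k; value = v; }
    }

    // Fixed-length bucket array; each slot holds a chain of entries.
    @SuppressWarnings("unchecked")
    private final LinkedList<Entry<K, V>>[] buckets = new LinkedList[16];

    private int indexFor(Object key) {
        // hashCode() determines the bucket; mask off the sign bit
        // so the modulus is non-negative.
        return (key.hashCode() & 0x7fffffff) % buckets.length;
    }

    public void put(K key, V value) {
        int i = indexFor(key);
        if (buckets[i] == null) buckets[i] = new LinkedList<>();
        for (Entry<K, V> e : buckets[i]) {
            if (e.key.equals(key)) { e.value = value; return; } // replace existing
        }
        buckets[i].add(new Entry<>(key, value)); // collision: append to the chain
    }

    public V get(Object key) {
        LinkedList<Entry<K, V>> chain = buckets[indexFor(key)];
        if (chain == null) return null;
        // O(chain length): walk the list calling equals() on each key.
        for (Entry<K, V> e : chain) {
            if (e.key.equals(key)) return e.value;
        }
        return null;
    }
}
```

Note the linear scan in get(): if many keys land in one bucket, lookup degrades toward O(n), which is exactly what prompts my question below.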
As described above, HashMap already has to expand its array, an operation which is surely linear. Why, then, does it use an internal linked list (O(n) lookup) for collision resolution? Why doesn't it use a data structure with O(log n) lookup, like a binary search tree or a red-black tree, to enhance performance?