Capacity

empty, size and max_size have semantics identical to those described for the standard containers.

Remarks

The load factor of a hash container is the number of elements divided by the number of buckets:

                    size()
    load_factor = --------------
                  bucket_count()

During the lifetime of a container, the load factor is at all times less than or equal to the load factor limit:

        size()
    -------------- <= load_factor_limit()
    bucket_count()

This is a class invariant. When both size() and bucket_count() are zero, the load factor is interpreted as zero. size() cannot be greater than zero while bucket_count() is zero. Client code can directly or indirectly alter size(), bucket_count() and load_factor_limit(), but bucket_count() may be adjusted at any time so that the class invariant is never compromised.

The final item in the bulleted list results in a "shrink to fit" statement:

myhash.bucket_count(0); // shrink to fit

The above statement will reduce the bucket count to the point that the load_factor() is just at or below the load_factor_limit().

bucket_count()

bucket_count() returns the current number of buckets in the container.

bucket_count(size_type num_buckets) sets the number of buckets to the first prime number equal to or greater than num_buckets, subject to the class invariant described above, and returns the actual number of buckets set. This is a relatively expensive operation, as every item in the container must be rehashed into the new buckets. The routine is analogous to vector's reserve, but instead of reserving space for a number of elements directly, it sets the number of buckets, which in turn reserves space for elements subject to the setting of load_factor_limit().

load_factor()

returns size()/bucket_count() as a float.

load_factor_limit()

returns the current load_factor_limit.

The load_factor_limit(float lf) sets the load factor limit. If the new load factor limit is less than the current load factor, the number of buckets may be increased to restore the class invariant.

You can completely block the automatic change of bucket_count with:

myhash.load_factor_limit(INFINITY);

This may be important if you want outstanding iterators not to be invalidated while inserting items into the container. The argument to load_factor_limit must be positive; otherwise an exception of type std::out_of_range is thrown.

The growth_factor functions read and set the growth factor. When setting, the new growth factor must be greater than 1; otherwise an exception of type std::out_of_range is thrown.

The collision(const_iterator) method counts the number of items in the same bucket as the referenced item. This may be helpful in diagnosing a poor hash distribution.