To be quite clear: I do see the danger of hash collisions, and I also understand the risk of higher CPU consumption when hashes collide, since each bucket holds a linear list of entries.
But since the "hashofheaders" table currently has only 32 buckets
(i.e. a 5-bit hash), a large number of headers will produce many
collisions anyway. So I don't quite see the point. Where is my thinking wrong?
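
Just to illustrate what I mean (this is a standalone sketch, not
tinyproxy's actual code; the hash function and the HASH_BUCKETS
constant are my own assumptions): with 32 buckets, N headers still
leave roughly N/32 entries per bucket, so a lookup walks a linear
chain of that length on average.

    /* Sketch: 32-bucket table, average chain length grows with
     * the number of headers regardless of hash quality. */
    #include <stdio.h>

    #define HASH_BUCKETS 32          /* assumed current table size */

    /* hypothetical string hash reduced to 5 bits (0..31) */
    static unsigned int bucket_for(const char *key)
    {
            unsigned int h = 5381;
            while (*key)
                    h = h * 33 + (unsigned char)*key++;
            return h % HASH_BUCKETS;
    }

    int main(void)
    {
            const char *headers[] = {
                    "Host", "Connection", "Accept", "User-Agent",
                    "Accept-Encoding", "Accept-Language", "Cookie",
                    "Referer", "Content-Length", "Content-Type",
            };
            size_t n = sizeof(headers) / sizeof(headers[0]);
            unsigned int counts[HASH_BUCKETS] = { 0 };

            for (size_t i = 0; i < n; i++)
                    counts[bucket_for(headers[i])]++;

            /* Expected chain length is n / HASH_BUCKETS; with hundreds
             * of headers that is already a noticeable linear scan. */
            printf("average chain length: %.2f\n",
                   (double)n / HASH_BUCKETS);
            return 0;
    }
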
Maybe we should use a tree implementation, e.g. an rbtree.
That would give us really quick searching.
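
As a rough sketch of what I have in mind (again, just an illustration
under my own assumptions, not existing code): the POSIX
tsearch()/tfind() family, which glibc implements as a red-black tree,
would already let us keep headers in a balanced tree. The struct and
the case-insensitive comparison below are made up for the example.

    #include <search.h>
    #include <stdio.h>
    #include <strings.h>

    struct header {
            char *name;
            char *value;
    };

    static int header_cmp(const void *a, const void *b)
    {
            const struct header *ha = a, *hb = b;
            return strcasecmp(ha->name, hb->name);
    }

    int main(void)
    {
            void *root = NULL;      /* tree root managed by tsearch() */

            struct header h1 = { "Host", "example.com" };
            struct header h2 = { "Connection", "close" };

            /* insert: O(log n) per header instead of a chain walk */
            tsearch(&h1, &root, header_cmp);
            tsearch(&h2, &root, header_cmp);

            /* lookup by (case-insensitive) header name */
            struct header key = { "host", NULL };
            struct header **found = tfind(&key, &root, header_cmp);
            if (found)
                    printf("%s: %s\n", (*found)->name, (*found)->value);

            return 0;
    }

Insert and lookup would then cost O(log n) per header, independent of
how many buckets we happen to have configured.
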
Cheers - Michael