
Question

It is often said that hash table lookup operates in constant time: you compute the hash value, which gives you an index for an array lookup. Yet this ignores collisions; in the worst case, every item happens to land in the same bucket and the lookup time becomes linear (Θ(n)).
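As a concrete illustration of that worst case, here is a minimal chaining-table sketch in Python; the class and the deliberately degenerate hash function are hypothetical, chosen only to force every key into one bucket:

```python
# Hypothetical illustration: a chaining table whose hash function sends
# every key to bucket 0, so get() degenerates into a linear scan.
class ChainedTable:
    def __init__(self, num_buckets=8, hash_fn=hash):
        self.buckets = [[] for _ in range(num_buckets)]
        self.hash_fn = hash_fn

    def put(self, key, value):
        bucket = self.buckets[self.hash_fn(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)   # overwrite an existing key
                return
        bucket.append((key, value))

    def get(self, key):
        bucket = self.buckets[self.hash_fn(key) % len(self.buckets)]
        for k, v in bucket:                # Theta(n) when all keys collide
            if k == key:
                return v
        raise KeyError(key)

table = ChainedTable(hash_fn=lambda key: 0)   # pathological hash function
for i in range(1000):
    table.put("key%d" % i, i)
assert table.get("key999") == 999             # scans a 1000-item chain
```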

Are there conditions on the data that can make hash table lookup truly O(1)? Is that only on average, or can a hash table have O(1) worst case lookup?

Note: I'm coming from a programmer's perspective here; when I store data in a hash table, it's almost always strings or some composite data structures, and the data changes during the lifetime of the hash table. So while I appreciate answers about perfect hashes, they're cute but anecdotal and not practical from my point of view.

P.S. Follow-up: For what kind of data are hash table operations O(1)?

Explanation / Answer

There are two settings under which you can get O(1) worst-case times.

If your setup is static, then FKS hashing will get you worst-case O(1) guarantees. But as you indicated, your setting isn't static.
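For intuition, here is a rough sketch of the FKS idea (static two-level perfect hashing) in Python. The class name is made up, and Python's built-in hash keyed by a random seed stands in for the universal hash family the real scheme requires; the real scheme also re-draws the first-level seed until the second-level tables take only O(n) total space, which this sketch omits:

```python
import random

class FKSTable:
    """Sketch of FKS hashing: built once from a fixed key set; lookups are
    worst-case O(1) because each second-level bucket is collision-free."""

    def __init__(self, items):
        items = dict(items)
        self.n = max(len(items), 1)
        self.seed = random.randrange(1 << 30)
        # First level: hash keys into n buckets (collisions allowed here).
        groups = [[] for _ in range(self.n)]
        for key, value in items.items():
            groups[self._h(self.seed, key, self.n)].append((key, value))
        # Second level: a bucket holding b keys gets a table of size b^2,
        # re-drawing its seed until no two keys collide.  With a truly
        # universal hash family each retry succeeds with probability >= 1/2;
        # Python's seeded built-in hash is only a stand-in for that.
        self.buckets = []
        for group in groups:
            size = len(group) ** 2
            if size == 0:
                self.buckets.append((0, 0, []))
                continue
            while True:
                seed = random.randrange(1 << 30)
                table = [None] * size
                ok = True
                for key, value in group:
                    slot = self._h(seed, key, size)
                    if table[slot] is not None:
                        ok = False
                        break
                    table[slot] = (key, value)
                if ok:
                    self.buckets.append((seed, size, table))
                    break

    @staticmethod
    def _h(seed, key, m):
        return hash((seed, key)) % m

    def get(self, key):
        # Worst-case O(1): exactly two hash evaluations and two array reads.
        seed, size, table = self.buckets[self._h(self.seed, key, self.n)]
        if size:
            entry = table[self._h(seed, key, size)]
            if entry is not None and entry[0] == key:
                return entry[1]
        raise KeyError(key)

table = FKSTable({"alice": 1, "bob": 2, "carol": 3})
assert table.get("bob") == 2
```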

If you use Cuckoo hashing, then queries and deletes are O(1) worst-case, but insertion is only O(1) expected. Cuckoo hashing works quite well if you have an upper bound on the total number of inserts and set the table size to be roughly 25% larger than that bound.
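Again as a sketch only, assuming two tables and Python's seeded built-in hash in place of properly independent hash functions: each key can live in one of exactly two slots, so get and remove inspect at most two positions, while put may have to evict ("kick") other items and occasionally rehash:

```python
import random

class CuckooTable:
    """Sketch of cuckoo hashing: two tables, two hash functions.
    Lookups and deletes probe at most two slots (worst-case O(1));
    inserts are expected O(1)."""

    def __init__(self, capacity=16):
        self.capacity = capacity
        self.tables = [[None] * capacity, [None] * capacity]
        self.seeds = [random.randrange(1 << 30), random.randrange(1 << 30)]

    def _slot(self, which, key):
        return hash((self.seeds[which], key)) % self.capacity

    def get(self, key):
        for which in (0, 1):               # at most two probes
            entry = self.tables[which][self._slot(which, key)]
            if entry is not None and entry[0] == key:
                return entry[1]
        raise KeyError(key)

    def remove(self, key):
        for which in (0, 1):               # at most two probes
            slot = self._slot(which, key)
            entry = self.tables[which][slot]
            if entry is not None and entry[0] == key:
                self.tables[which][slot] = None
                return
        raise KeyError(key)

    def put(self, key, value, max_kicks=32):
        # Update in place if the key is already stored.
        for which in (0, 1):
            slot = self._slot(which, key)
            entry = self.tables[which][slot]
            if entry is not None and entry[0] == key:
                self.tables[which][slot] = (key, value)
                return
        item, which = (key, value), 0
        for _ in range(max_kicks):
            slot = self._slot(which, item[0])
            if self.tables[which][slot] is None:
                self.tables[which][slot] = item
                return
            # Evict the occupant and try its alternative slot next.
            self.tables[which][slot], item = item, self.tables[which][slot]
            which = 1 - which
        # Too many evictions: grow, pick new seeds, re-insert everything.
        self._rehash()
        self.put(item[0], item[1])

    def _rehash(self):
        old = [e for t in self.tables for e in t if e is not None]
        self.capacity *= 2
        self.tables = [[None] * self.capacity, [None] * self.capacity]
        self.seeds = [random.randrange(1 << 30), random.randrange(1 << 30)]
        for k, v in old:
            self.put(k, v)
```

Usage mirrors a dict (put/get/remove); the headroom mentioned above (table capacity comfortably above the number of inserts) is what keeps the eviction chains short in practice.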

There's more information here.
