November 27, 2011 by timonk

Author's Note: This is part 3.5 of a series of posts about my adventures in building a "large", in-memory hash table. This post is a handful of observations I made while running someone else's benchmark of C/C++ hash table implementations.

I ran across Nick Welch's Hash Table Benchmarks during my research and decided to rerun a subset of his benchmarks with much larger key counts. (You can find my fork of his code on GitHub.) The differences between the benchmarks he ran and the ones I ran are:

- I'm using sparsehash 1.11 (vs. 1.5), Boost 1.41 (vs. 1.38), and Qt 4.6 (vs. 4.5).
- I removed Ruby's hash implementation because of its abysmal performance in his benchmark.
- I increased the insertion count to 1.5 billion and the step size to 100 million (from 40 million and 2 million, respectively); a sketch of a timing loop of this shape appears after the list.
- I only provided the Random Inserts: Execution Time (integers) an...
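To make the shape of that benchmark concrete, here is a minimal sketch of a random-integer insert timing loop, using the key count and reporting step quoted above. This is not Nick Welch's harness (his actual code, and my fork of it, are what the linked repositories contain); the std::unordered_map, the RNG seed, and the timer here are illustrative stand-ins for the sparsehash, Boost, and Qt maps the real benchmark swaps in.

    // Hypothetical sketch of a random-integer insert benchmark in the spirit of
    // "Random Inserts: Execution Time (integers)". Not Nick Welch's code; the
    // map type, RNG, and clock are assumptions chosen for illustration.
    #include <chrono>
    #include <cstdint>
    #include <cstdio>
    #include <random>
    #include <unordered_map>   // stand-in for sparsehash / Boost / Qt maps

    int main() {
        const std::uint64_t total_keys = 1500000000ULL;  // 1.5 billion inserts
        const std::uint64_t step       = 100000000ULL;   // report every 100 million

        std::mt19937_64 rng(42);  // fixed seed so runs are comparable
        std::unordered_map<std::uint64_t, std::uint64_t> table;

        auto start = std::chrono::steady_clock::now();
        for (std::uint64_t i = 1; i <= total_keys; ++i) {
            table[rng()] = i;  // insert a random integer key

            if (i % step == 0) {
                auto now = std::chrono::steady_clock::now();
                double secs = std::chrono::duration<double>(now - start).count();
                std::printf("%llu keys: %.2f s\n", (unsigned long long)i, secs);
            }
        }
        return 0;
    }

Keep in mind that 1.5 billion 64-bit key/value pairs come to roughly 24 GB of raw data before any per-table overhead, so a run like this needs tens of gigabytes of RAM regardless of which implementation you plug in.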