
Nice link! It's not that the algorithms are slow in the Big-O sense; it's that their utilization of hardware resources is typically poor. This is the general problem of balancing a general-purpose solution against an optimized, high-performance one: with more domain knowledge you can do better. A handful of examples:

* An STL vector is typically at least 3 * sizeof(void*): begin, end, and allocated-capacity fields, the price of being a general-purpose solution [0]. If you know your memory is contiguous, you can trivially trim that down by storing deltas rather than full 64-bit pointers, which can be a nice space win (see the first sketch after the list).

* A std::function can blossom to 48 bytes to support every kind of callable, from lambdas to functors to plain function pointers. Taking the callable as a template parameter allows inlining and can be a win here, as can writing your own type that doesn't support all the use cases (see the second sketch after the list).

* The lifetime of a std::string can involve a lot of allocations and deallocations. Fixed-size char arrays can be a win (see the third sketch after the list).
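
To make the first point concrete, here's a minimal sketch of the delta idea, assuming all elements live in one shared contiguous arena (the arena, names, and sizes are my own illustration, not from the linked post):

    #include <cstdint>
    #include <cstdio>

    // Shared contiguous arena -- the assumption that enables the trick.
    static int g_arena[1024];

    // 8-byte handle: two 32-bit offsets instead of three 64-bit pointers.
    struct OffsetVec {
        uint32_t begin;  // offset of the first element from the arena base
        uint32_t size;   // number of elements
    };

    static int* elems(OffsetVec v) { return g_arena + v.begin; }

    int main() {
        g_arena[0] = 1; g_arena[1] = 2; g_arena[2] = 3;
        OffsetVec v{0, 3};
        for (uint32_t i = 0; i < v.size; ++i)
            std::printf("%d\n", elems(v)[i]);
        // 8 bytes vs. the usual 24 for begin/end/capacity on 64-bit.
        static_assert(sizeof(OffsetVec) == 8, "two 32-bit fields");
        return 0;
    }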
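
For the second point, a sketch contrasting a type-erased std::function parameter with a template parameter that lets the compiler see the lambda's concrete type and inline the call (the function names are made up for illustration):

    #include <cstdio>
    #include <functional>

    // Type-erased: accepts any callable, but the call goes through an
    // indirection and large captures may force a heap allocation.
    static int apply_erased(const std::function<int(int)>& f, int x) {
        return f(x);
    }

    // Template: the lambda's concrete type is visible here, so the call
    // can be inlined and nothing is type-erased or heap-allocated.
    template <typename F>
    static int apply_inlined(F&& f, int x) {
        return f(x);
    }

    int main() {
        int k = 10;
        auto add_k = [k](int n) { return n + k; };
        std::printf("%d %d\n", apply_erased(add_k, 5), apply_inlined(add_k, 5));
        std::printf("sizeof(std::function<int(int)>) = %zu\n",
                    sizeof(std::function<int(int)>));
        return 0;
    }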
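
And for the third, a sketch of a fixed-size char buffer that trades a hard length cap for zero heap allocations over the object's lifetime (the 32-byte cap is an arbitrary assumption):

    #include <cstdio>
    #include <cstring>

    // Fixed-size buffer: at most 31 characters, but no heap traffic ever.
    struct FixedName {
        char data[32];
        void set(const char* s) {
            std::strncpy(data, s, sizeof(data) - 1);
            data[sizeof(data) - 1] = '\0';  // truncate rather than allocate
        }
    };

    int main() {
        FixedName n;
        n.set("no heap allocation here");
        std::printf("%s\n", n.data);
        return 0;
    }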

You can address all of these with your own library functions. That's consistent with the rest of the post; in fact, the author suggests some of these approaches themselves.

While none of these attack some fictitious ++i vs. i++ bottleneck, they will improve cache utilization and most likely the actual performance of your application :)

[0] http://info.prelert.com/blog/stl-container-memory-usage


