
For small amounts of data, strictness is better than laziness. There is a similar lesson even in non-pure languages: copying a small amount of data is almost always faster than whatever fancy scheme you devise to avoid copying it.

For large amounts of data in an immutable structure, laziness is pretty much essential. The theory behind strict-by-default is that most of the time you are not dealing with large amounts of data, even if you hold pointers to large data, or pointers to pointers to large data. So even though laziness is vital to a pure language, you only need to reach for it in a relatively small number of places.
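A minimal Haskell sketch of that point (the names `largeList` and `main` are just for illustration): under lazy evaluation, only the demanded prefix of a conceptually huge immutable structure is ever materialized.

  -- Conceptually unbounded, but built only as it is demanded.
  largeList :: [Integer]
  largeList = map (^ 2) [1 ..]

  main :: IO ()
  main = print (sum (take 10 largeList))  -- forces only the first 10 elements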



Wait, why is laziness vital to large data structures? Even with strict functions on strict but persistent data structures, you almost never copy the entire structure.

For example, a strict function on a strict persistent list that swaps the head of the list doesn't have to copy the tail of the list at all; it can share it, as in the sketch below.
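A short Haskell sketch of that sharing (the helper name `swapHead` is hypothetical): replacing the head of a persistent list reuses the existing tail, so the amount of work is constant regardless of list length, with or without laziness.

  -- Replace the head of a list; the tail xs is shared, never copied.
  swapHead :: a -> [a] -> [a]
  swapHead x []       = [x]
  swapHead x (_ : xs) = x : xs

  example :: [Int]
  example = swapHead 0 [1 .. 1000000]  -- O(1) work, the tail is shared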

What laziness does enable is short-circuiting, so e.g. a fold over the list can bail out early if necessary. But that is actually a minority of use cases for large data, since we often bound the size of a data structure explicitly with something like `take n` for some integer n, which processes only n elements even in a strict language.
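A sketch of both patterns in Haskell (the names `anyNegative` and `boundedSum` are made up for the example): a lazy right fold stops as soon as the answer is known, while an explicit `take n` bounds the work up front.

  -- (||) never forces `rest` once x < 0, so the fold bails out early.
  anyNegative :: [Int] -> Bool
  anyNegative = foldr (\x rest -> x < 0 || rest) False

  -- Explicit bounding: at most n elements are ever processed.
  boundedSum :: Int -> [Int] -> Int
  boundedSum n = sum . take n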



