
Those languages were not effective in practice. The kind of loop parallelism that most people focus on is the least interesting and effective kind outside of niche domains. The value was low.

Hardware architectures like the Tera MTA were much more capable, but almost no one could write effective code for them even though the language was vanilla C++ with a couple of extra features. Then we learned how to implement similar software architectures on standard CPUs. The same problem remained: people were bad at it.

The common thread in all of this is people. Humans as a group are terrible at reasoning about non-trivial parallelism. The tools almost don't matter. Reasoning effectively about parallelism involves manipulating a space that is quite evidently beyond most humans' cognitive abilities.

Parallelism was never about the language. Most people can't build the necessary mental model in any language.



This was, I think, the greatest strength of MapReduce. If you could write a basic program, you could understand the map, combine, shuffle, and reduce operations. MR, Hadoop, and the like would take care of recovering from operational failures like disk or network outages behind the scenes (by leaning on idempotent operations), and programmers could focus on how data was being transformed, joined, serialized, etc.
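
For concreteness, here is a minimal word-count sketch of those phases in Haskell (the names and the in-memory shuffle are illustrative; a real framework runs each phase distributed across machines, and a combiner is just the reducer applied locally before the shuffle to cut network traffic):

    import qualified Data.Map.Strict as M

    -- Map phase: emit a (word, 1) pair for every word in a line.
    mapper :: String -> [(String, Int)]
    mapper line = [(w, 1) | w <- words line]

    -- Shuffle: group all emitted values by key. The framework does this
    -- between phases, moving data across the network.
    shuffle :: [(String, Int)] -> M.Map String [Int]
    shuffle = M.fromListWith (++) . map (\(k, v) -> (k, [v]))

    -- Reduce phase: fold the grouped values for each key into a result.
    reducer :: String -> [Int] -> Int
    reducer _word counts = sum counts

    wordCount :: [String] -> M.Map String Int
    wordCount = M.mapWithKey reducer . shuffle . concatMap mapper

    main :: IO ()
    main = print (wordCount ["to be or not to be", "to do"])

Each function is sequential and easy to reason about; all the parallelism lives between the phases, which is exactly the part the framework owns.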

To your point, we also didn't need a new language to adopt this paradigm. A library and a running system were enough (though, semantically, it did offer unique language-like characteristics).

Sure, it's a bit antiquated now that we have more sophisticated successors for the subdomains it was most commonly used in, but it hit a kind of sweet spot between the utility of the parallelism and the knowledge and reasoning it required of its users.


That's why programming languages are important for solving this problem.

The syntax and semantics should constrain the kinds of programs that are easy to write in the language to ones that the compiler can figure out how to run in parallel correctly and efficiently.

That's how you end up with something like Erlang or Elixir.
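
A tiny sketch of that constraint, approximated in Haskell (Erlang processes map only loosely onto forkIO threads and Chans, and the names here are made up): workers share no mutable state and communicate only through messages, so the program that is easy to write is also the one that runs safely in parallel.

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
    import Control.Monad (forM_, replicateM, replicateM_)

    -- A worker owns no shared state; it receives one message and
    -- replies, which is the shape Erlang/Elixir push you toward.
    worker :: Chan Int -> Chan Int -> IO ()
    worker inbox outbox = do
      n <- readChan inbox
      writeChan outbox (n * n)

    main :: IO ()
    main = do
      inbox  <- newChan
      outbox <- newChan
      replicateM_ 4 (forkIO (worker inbox outbox))  -- four concurrent workers
      forM_ [1 .. 4 :: Int] (writeChan inbox)       -- fan out the work
      results <- replicateM 4 (readChan outbox)     -- collect replies
      print (sum results)                           -- 1+4+9+16 = 30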


Maybe we can find better abstractions. Software transactional memory seems like a promising candidate, for example. Sawzall/Dremel and SQL seem to also be capable of expressing some interesting things. And, as RoboToaster mentions, in VHDL and Verilog, people have successfully described parallel computations containing billions of concurrent processes, and even gotten them to work properly.
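
For the STM candidate, a minimal sketch using GHC's stm library, where the abstraction is most mature: two account updates compose into one atomic transaction, and the runtime retries on conflict instead of the programmer juggling locks.

    import Control.Concurrent.STM

    -- Move money between accounts in one atomic transaction. If a
    -- concurrent transaction conflicts, the runtime retries this one;
    -- if funds are short, 'check' blocks until 'from' changes. No
    -- locks appear in user code.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amount = do
      balance <- readTVar from
      check (balance >= amount)
      writeTVar from (balance - amount)
      modifyTVar' to (+ amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 40)
      readTVarIO a >>= print  -- 60
      readTVarIO b >>= print  -- 40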



