Confessions of a Used Programming Language Salesman (2007) (acm.org)
49 points by alokrai on April 13, 2020 | 35 comments


Speaking of Haskell and laziness: I found myself wondering last night, what was the issue with laziness again? I think it was that it makes it difficult to parallelize things. Why would that be? Is it just an implementation artefact, or do the rules of lazy evaluation actually prevent the compiler/runtime from batching things appropriately for parallel computation?

EDIT: Guess I was thinking of this: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.36.3...

> we have shown that lazy languages, even when implemented using a framework well-suited for parallelism, can generate only small amount of parallelism. [..] this result, when combined with the result concerning the low parallelism for really lazy programs, can also be interpreted as saying that there is not much hope for lazy languages as far as parallelism is concerned: in effect, they are saying that we can hope to obtain parallelism from programs written in a lazy language only when the programs do not need to be written in a lazy language!

That's a really old paper though (1995), if there are more recent relevant studies or results, I'd be interested..


The big problem with laziness is that it becomes impossible to reason about performance, because performance becomes noncompositional. If y = g(x) then we expect the computation time of f(g(x)) to be the computation time of g(x) plus the computation time of f(y), but in the presence of laziness that's no longer true. Any sufficiently large codebase eventually hits a performance problem (usually a trivial one that ought to be easy to resolve), and at that point you lose the fearless refactoring that is the great advantage of Haskell-like languages.
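A minimal sketch of that noncompositionality (the names `f` and `g` here are illustrative, not from the article): calling `g` merely builds a thunk, so nearly all of `g`'s running time is paid inside the supposedly cheap `f` when it forces the result.

```haskell
-- g is expensive, but under lazy evaluation calling it just builds a thunk.
g :: Int -> Integer
g x = sum [1 .. fromIntegral x * 1000000]

-- f looks trivially cheap, yet (+ 1) forces the thunk,
-- so profiling will attribute almost all of g's cost to f.
f :: Integer -> Integer
f y = y + 1

main :: IO ()
main = print (f (g 10))
```

Timing `f` and `g` separately on already-evaluated inputs tells you very little about the cost of `f (g x)`.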


Hmm, I'm not sure that's quite right. The time taken by f(g(x)) under lazy evaluation is known to be no more than the time taken under strict evaluation. It's the space usage that can be hard to predict.
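The canonical illustration of hard-to-predict space usage is the lazy left fold (a standard example, not something from the article): both folds below do the same additions, but the lazy one builds a million-deep chain of `(+)` thunks before anything is evaluated.

```haskell
import Data.List (foldl')

-- Lazy left fold: the accumulator is a growing chain of thunks
-- (((0+1)+2)+...), collapsed only when the final result is demanded,
-- so space usage is linear in the list length.
lazySum :: [Integer] -> Integer
lazySum = foldl (+) 0

-- Strict left fold: the accumulator is forced at every step,
-- so the fold runs in constant space.
strictSum :: [Integer] -> Integer
strictSum = foldl' (+) 0

main :: IO ()
main = do
  print (strictSum [1 .. 1000000])
  print (lazySum [1 .. 1000000])  -- same answer, very different space profile
</imports>
```
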


> The time taken by f(g(x)) under lazy evaluation is known to be no more than the time taken under strict evaluation.

True but irrelevant (particularly given that idiomatic Haskell involves a lot of constructions that would take infinite time under strict evaluation). If x < a, y < b, z < c and a + b = c, that doesn't give you any useful relationship between x, y and z.


A fair point. An example would be something like

    take 10 (map (+1) [1..])
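For what it's worth, that expression terminates precisely because `map` is only forced as far as `take` demands; under strict evaluation, `map (+1) [1..]` alone would diverge. A runnable version:

```haskell
main :: IO ()
main = print (take 10 (map (+1) [1 ..]))
-- prints [2,3,4,5,6,7,8,9,10,11]
```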


I don't think that's true in practice because excessive memory usage can lead to slower execution.


It's not more operations, but that extra space usage can mean the difference between creating and freeing objects with bump allocation in Eden that fits in cache, and walking over the heap doing a full GC so you don't run out of memory.

There's a very real difference between what a simplified model of time complexity says should happen, and what actually happens when the layer of abstraction underlying the ivory tower starts to spring a leak.


> To appreciate the difficulties of XML, let’s make a short excursion to the world of XML schema [...], which must be one of the most complex artifacts invented by mankind. The complexity of XSD is especially baffling since the existing solution, DTDs, is already a perfectly fine solution.

Ah, the period when attempts were made to integrate XML as a first-class type/literal into programming languages. I guess the conclusion can only be that what SGML/XML is designed to represent (namely, semistructured text documents with regular content models) was fundamentally misunderstood by folks wanting to represent (relatively benign) business-records-like structures for service-oriented apps. The complexity results for statically type-checking programming languages with XML as literals were disappointing, and the whole thing fell out of fashion. Then, shortly after Mozilla dropped E4X (XML literals in JavaScript), JSX and React happened. I honestly don't know what conclusion should be drawn here :)


Tried to read the article but

"We use cookies to ensure that we give you the best experience on our website. It seems your browser doesn't support them and this affects the site functionality."

Yes indeed, no cookies = no article, displaying text apparently being 'functionality'.

Anyway, going from your quoted extract, I agree XML schemas are among the most revoltingly complex and awful things I've ever seen, and badly designed at that, but to claim DTDs are "a perfectly fine solution" is just ridiculous. DTDs give an overall structure, not any of the typing that XSDs give. DTDs are a crude start, not an endpoint.

Lord how I got to hate XSDs and XSLT. XML by itself is fine (in the right place) and I like it, but it seemed to become the core of an everyone-lost-their-sanity frenzy. It was mad, very wasteful, and I'm glad it's all over.

Edit: only XSLT could be so godawful as to require a new mechanism to pass parameters. https://www.xml.com/pub/a/2004/03/24/tr.html


My conclusions would be: usability in the small matters. Overengineering is the natural fate of standards and must be fought at every turn. Insist ruthlessly on conciseness at the cost of virtually anything else (but not consistency; remember Perl).


Yes, and it's all the more ironic that XML started out as a simplified subset of SGML - only to turn into a monstrosity shortly after with W3C's XML Schema and WS-* "death star" series of standards.


Early Scala's inclusion of XML as a thing baffled me from the start.


Orthogonal to the topic but Erik Meijer is hilarious


The author compares himself with a prophet and apparently believes that the acceptance of functional programming languages requires intensive missionary work, just as with an ideology or religion. However, the history of science shows that the right knowledge always prevails in the end. If it is so difficult to bring functional programming languages to market, either the time is not yet ripe, or there are better solutions.


Programming languages are seldom chosen because of the merit of their core semantics. Successful programming languages are coupled to some platform or ecosystem, and it is the platform that is chosen. For example, JavaScript could have been functional or imperative or whatever; it wouldn't have mattered for its success. Objective-C was obscure until the success of the iOS platform, C# was pushed by MS as the way to write Windows apps, VB was the way to script Office applications. None of these languages became successful due to their core semantics.

Haskell will always remain niche since it is not coupled to a successful platform. But compared to the thousands of other "second tier" languages it is doing pretty well.


I believe that you're right about why languages aren't chosen, but miss the alternative explanation. Languages are chosen for speed of deployment and ease of use, almost nothing else. From the dawn of time, when C and Unix beat out Lisp, all the way to modern JavaScript, the tool that solves a given set of problem quickest and easiest wins regardless of 'merit'. Worse is better, all the way down.


I agree, but there's one more twist: Problems are different. Languages are chosen for ease of writing specific programs, not for "general programming".

I think, then, that FP is great for certain kinds of problems, and not so good for others. If you've got an FP problem, reach for an FP language. If you don't, don't.

What's an FP problem? Someone here (I wish I remembered who, I'd give credit where due) said "if you can think of your program as a pipe", then FP is appropriate. If the nature of your problem has a lot of state, FP probably isn't the answer. (Note well: problems themselves can have a lot of state. It isn't always just the implementation - the problem itself may demand it.)


Fully agree! I just wanted to emphasize that what makes a language the quickest and easiest for a given task is typically more due to ecosystem or platform integration than to the "geeky qualities" like advanced language constructs.


C/C++, Java, Python aren't tied to any specific platform and are some of the most popular languages used today.


C was tied to Unix, and C++ rode on the coattails of C. Java was actively sold to every major corporation in the early aughts. I will agree that Python came up without a platform to support it.


> python came up without a platform to support it

What about numpy (+ its ecosystem, scipy/pandas/etc)? I get the impression that's contributed a lot to Python's growth over the years.


Those are probably the only reason Python still exists.


python is supported by an easy-to-learn basic syntax + great math/science libraries.

as i typed that i thought: python is the excel of programming languages.

it also has very solid web frameworks (Django, Flask, etc.).


Well, the author is Erik Meijer https://en.m.wikipedia.org/wiki/Erik_Meijer_%28computer_scie... He pretty much spent his entire career promoting FP concepts. He was behind Rx (Reactive Extensions) and LINQ, among others. I highly recommend watching his lessons on reactive programming if you can still locate the videos after Coursera removed that course.


Even though I kinda want to agree with that sentiment, there are plenty of examples where this is not true: Betamax vs VHS, IPv4 vs IPv6, Windows vs Unix vs Linux vs BSD vs microkernels, fuzzy logic (from which we kinda inherited neural networks) vs symbolic systems. There are a lot of outside factors that inhibit the dissemination of even good ideas: social stagnation, network effects, or sometimes even cultural preferences in flavour.


It's not always immediately obvious - even in retrospect - why one technology was preferred over another. Perhaps the price was the decisive factor, or perhaps the additional effort required to achieve minimal benefits was considered too high and convenience triumphed. Paradigm shifts are not necessarily rational, but if you make the area of investigation large enough, the overall advantage will be evident in hindsight. You may know these ideas: https://en.wikipedia.org/wiki/Wisdom_of_the_crowd


What you've got there is a circular argument. You're defining "the right choice" as "the choice that was made".


No, you cannot make this conclusion from my statement.


I don't know about science, but the history of engineering shows that, so often, Worse is Better.


Yet, large gains are always taken.

You may not switch the world into your favorite language, but if it is really much better, the world will somehow put the advantages into some other language and that other language will win.

The world is currently working very hard on making Frankenstein monsters out of parts taken from Haskell. At some point one of them will get things right.


History of science, like all history, is written by the winners. But even this history shows that it often takes hundreds of years.


I have faith that knowing what's going on is important to winning, and that therefore it's the winners' accounts we want to rely on ;)


History is usually written by some underpaid scientists or journalists. They are rarely among the winners. And yes, a development often takes a long time.


> They are rarely among the winners

Perhaps, but they will almost invariably be working for them.


That is one misleading title.



