>This is not a good thing. The shorter the learning curve, the quicker you run out of ways to improve your work.
I disagree with this. There are many languages that are perfectly productive which are reducible to a very small core: Haskell, Standard ML, OCaml, Modula-2/3. The size of the ecosystem is an extrinsic property to the language.
Simple languages are easier to learn, easier to implement, and it's easier to understand code written in that language, since the semantics are simpler. Very rarely does language complexity buy you anything except painful surprises or obfuscated code contest entries.
For example: Rust is a much simpler language than C++ and you can use it to do essentially anything reasonable you'd want to do in C++.
Counterexample: Brainf*ck is very easy to learn and very difficult to program in.
You have a point that a large feature set doesn't necessarily mean a productive language, but the same can be said for a small feature set. Your examples only show that there is some set of features that can subsume other features, so the number of features can be effectively reduced. At some point however there would be an irreducible set of features.
There's obviously going to be a happy middle ground between too many features and too few. That middle ground is going to differ for different people too.
That's the beauty of having different languages. If Bob finds Go too simplistic then he can use something else. Personally I like Go and don't want the language to change into yet another C++-like jack of all trades.
The real problem with languages these days is people conflate personal preference with irrefutable fact. Probably because we're taught our profession is a science.
> The real problem with languages these days is people conflate personal preference with irrefutable fact.
This bears repeating. I've found language preference and style depend a huge amount on education, language exposure, and even neurotypes. For example, I personally dislike keeping pieces of state in my head. I have gravitated towards a very functional style: eschewing OOP, short functions, lots of static typing. I need it to stay sane. My kryptonite is long, scripty things with tons of mutable state and magic. Some people can work like that; they can keep a big chunk of varying state in their head. I don't claim that either type is better (though the former is much more approachable for those new to the codebase). It's just different tradeoffs.
> I disagree with this. There are many languages that are perfectly productive which are reducible to a very small core: Haskell, Standard ML, OCaml, Modula-2/3.
In regards to “small core” and Haskell, one of the complaints I’ve heard about it is that any real code will inevitably end up using all sorts of language extensions, which seems to be the case in the admittedly small amount of Haskell code I’ve seen.
Now I’m not a Haskell developer (unfortunately it seems that, at the end of the day, the energy spent learning it would be wasted), so I want to stay away from citing this as fact. But those pragmas tend to scare me away from the language.
I agree that Haskell in practice is the Cartesian product of all sorts of language extensions. But nevertheless Haskell 98 and Haskell 2010 are, in and of themselves, perfectly productive languages without extensions.
>Simple languages are easier to learn, easier to implement, and it's easier to understand code written in that language, since the semantics are simpler.
But "simpler" language specifications can also be harder to use in the real world. I made a previous comment on how the simplicity causes extra complexity in real-world code bases: https://news.ycombinator.com/item?id=14561492
For example, a "simple" language I used did not have bitwise operators like C/C++ (|&^~). However, my problem still had irreducible complexity that required reading individual bits of a byte so I wrote a bit reader using math:
function bitread && returns .T. or .F. value
parameters cByte, nPos
return ! (int( asc(substr(cByte, nPos/8+1, 1))/(2^(nPos%8)) )%2 == 0)
By eschewing the so-called "extra complexity" of the "&" bit operator in C/C++, we end up with a mathematical combination of exponentiation, modulus, and substring extraction.
Yes, one can argue that not having bitwise operators means it's "simpler to learn the language because it's one less piece of syntax to grok" -- but now you've caused extra complexity in the codebase. This extra complexity multiplies in other ways:
- Programmer John spelled his custom bit reader as "BitGet()"
- Programmer Jane spelled her custom bit reader as "bit_fetch()"
- in addition to different spellings in the wild not being interoperable, each may have subtle bugs. (Did the programmer implement the math correctly?!?)
Therefore, adding a bitwise operator adds complexity to the base language spec but also simplifies real-world coding.
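For comparison, here is roughly what the same bit read looks like in C with the built-in operators. This is just a sketch of my own, mirroring the math-based version above (0-based bit index, least significant bit first); the C signature is my adaptation, not the original function:

    #include <stdbool.h>
    #include <stddef.h>

    /* Read bit nPos (0-based, LSB-first) from a buffer, using the
       built-in shift and mask operators instead of exponentiation,
       truncation, and substring extraction. */
    bool bitread(const unsigned char *bytes, size_t nPos)
    {
        return (bytes[nPos / 8] >> (nPos % 8)) & 1;
    }

One index, one shift, and one mask replace the exponentiation, integer truncation, and string slicing that each programmer would otherwise have to get right on their own.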
A lot of the so-called extra complexity (extra keyword concepts in C#, Swift, Rust, JavaScript ES6) in newer languages lets you write simpler programs, because the base language encodes a common pattern that a lot of people were re-inventing.
E.g. the C language doesn't have generics, but that doesn't mean the need to express generic concepts goes away in actual real-world C codebases. See this comment by pcwalton: https://news.ycombinator.com/item?id=14561664
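To sketch that point concretely (my own example, not taken from the linked comment): without generics, real C code leans on void pointers and manual casts, trading compile-time type checking for reuse. The standard library's qsort is the canonical case:

    #include <stdio.h>
    #include <stdlib.h>

    /* The comparator receives untyped pointers; every call site must cast
       back to the "real" element type by hand, with no compiler check. */
    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *)a;
        int y = *(const int *)b;
        return (x > y) - (x < y);
    }

    int main(void)
    {
        int xs[] = { 3, 1, 2 };
        qsort(xs, 3, sizeof xs[0], cmp_int);  /* "generic" only via void* */
        for (int i = 0; i < 3; i++)
            printf("%d\n", xs[i]);
        return 0;
    }

The need for genericity didn't disappear; it just moved into void* plumbing that each codebase reinvents.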
You can very justifiably move bitwise operators to the standard library given that they are all just functions of the form `(bitfield * bitfield) -> bitfield`.
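A minimal sketch of that shape in C (the type alias and function names are mine, and only the genuinely binary operators are shown):

    #include <stdint.h>

    typedef uint32_t bitfield;

    /* Each operator expressed as an ordinary (bitfield, bitfield) -> bitfield
       function that could live in a library rather than in core syntax. */
    static bitfield bit_and(bitfield a, bitfield b) { return a & b; }
    static bitfield bit_or (bitfield a, bitfield b) { return a | b; }
    static bitfield bit_xor(bitfield a, bitfield b) { return a ^ b; }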
I believe you are mixing up 'complex' and 'complicated'.
> productive which are reducible to a very small core
That's like saying first-class continuations are easier than coroutines or generators because they effectively subsume both. I don't think that's the case at all: you now need to understand continuations in addition to coroutines and generators.
> Rust is a much simpler language than C++
C++ is more complicated than Rust, but Rust is more complex than C++. If you don't understand RAII, lifetime annotations are going to be tough to figure out...
I think if both languages were fully formalized Rust would have a smaller formalization. I don't know if Rust's grammar is context-free but it's much closer than C++'s grammar.
The grammar is not a big issue. The assertion is trivially true just from how badly C++ features interact with one another.
Just understanding which special member functions get generated, depending on which ones you implement by hand, is an 8x6 matrix, and that tells you nothing about how they misbehave when you fail to follow the “Rule of Whatever” (variously 0, 3, 5, 6) properly.
(Rust's grammar has one teeny tiny corner that's context sensitive, and so that makes the entire thing context sensitive in a formal sense, but in practice, it is much simpler than that in the vast, vast majority of cases.)