Stages of Denial (beyondloom.com)
225 points by clashmeifyoucan on Aug 9, 2021 | 117 comments



This is a category of cursed knowledge which causes psychic damage to the reader.

When you go down certain rabbit holes you develop a fascination with obscure forms of programming and start to realize it has some powerful benefits of which you can never take advantage because it isn't widely adopted.

To free yourself from this curse, write ten regular for-loops in C and say a prayer to K&R while tightly holding your copy of The C Programming Language.


If you internalize this cursed knowledge, you will see how language[C]-evangelists cherry-pick which dimensions to measure the value/success of a programming language. Don't proceed. It will make you frustrated, cynical, and disillusioned with the state of PL discourse.


Can I write one regular for loop that iterates ten times, instead of ten regular for loops?


I know it's a joke, but I don't really agree that this area is cursed! It just delves way deeper into syntactical terseness than many programmers will tolerate... even compared to assembler it's quite a different beast.

I think still potentially very interesting if you're keen on expressing the most computation using the smallest number of characters though!


I developed this on a societal scale. Every time I go somewhere and see people doing things in very wrong ways (from the normal, like using an old Excel file as a collaborative DB, to the wilder, like printing Excel sheets to mark changes with a pencil and edit the file later...) I get depressed.


I find it rather weird to compare K to C. In what circumstances would the choice be between K and C? A more reasonable comparison would be between say Python with numerical libraries and K.


K&R is a common nickname for the book "The C programming language" by Kernighan and Ritchie, and a reference to the authors:

https://en.m.wikipedia.org/wiki/The_C_Programming_Language


But TFA is about K.


Ah, true!


I'm torn. Not about the conciseness of the language overall, but about using symbols vs. names. I do see how the symbols make things more concise, less clutter, and can even make it easier to grasp a piece of code if you are deeply familiar with them[1].

On the other hand, there might be a limit. I do know how it is to use the Haskell lens package only occasionally, and then having to look up again what the operators read out loud as "percent-percent-at-squiggly" and "hat-dot-dot" stand for (withIndex and toListOf respectively).

But arguably, the ones that are long and seldom used need looking up anyway, and maybe their names wouldn't be that much more useful?

[1] Also compare "2+2*8" with "sum 2 to the product of 2 and 8", or even just "sum(2,prod(2,8))".


The lens library has a reasonably consistent visual language baked into it:

- Operators containing `%` apply a function to the target (mnemonic: % is "mod" in many languages).

- Operators containing `=` have a `MonadState` constraint and operate on the monadic state.

- Operators containing `~` operate directly on values.

- Operators starting with `<` pass through the new value.

- Operators starting with `<<` pass through the old value.

- Operators containing `@` operate on indexed optics (mnemonic: indices tell you where you are "at" in the structure you're traversing).

So an operator like `(<<<>=)` means "semigroup append something in the monadic state and return the old result". This is the power of a well-chosen symbol language, and I don't know how you'd do this ergonomically and compactly with only named operators.


You wouldn't make it compact, so the new dreaming intern joining your codebase would be able to gloriously produce changes rapidly and get hired under the well-intentioned smiles of the rest of the team.

Instead, in the kdb team I joined recently (a one-letter, symbol-heavy language used by quants), nobody can remember what things do, nobody has time to teach you the insanity of code older than 2 months, and you're instead spending weeks "learning" under the heavy sighs of everyone suffering from the burden of the new guy.

Why do you want to do things compactly rather than with clarity?

I wish Google could search for language-specific one-letter symbols so I could ask "what does -9! do on type J vs type C" (nonsensical example, I can't remember types without looking at the type table, but I think -9! transforms a bit array into an object).


Look, the parent comment lays out a regular dictionary that lets you unpack a symbolic name into words, or pack words into a symbolic name.

In your case, the vocabulary is absent. (And yes, the custom of naming K functions with numbers, especially negative numbers, is sort of annoying.)


I think the K team recognized these issues and tried to address them - at least partially - with Q and its readable function names and SQL-like extensions. But AFAIK most kdb programmers keep writing straight K instead. Don't know why either.


As anyone who uses such a language could tell you, you get used to the symbols quite quickly and commit their meaning to memory. For that reason I mostly view the heavy use of symbols in a programming language as a negative only insofar as it makes it intimidating to newcomers.

The only other thing I can find that can be an issue with the heavy use of symbols in a language is that it can lead to a certain degree of inflexibility. There are only so many symbols that can be used, and once they're all used up your options are either to use normal variable names (and reduce the number of ideas that can be expressed concisely), or to make symbols context-dependent.

The designers of k went to some lengths to avoid the use of variable names, so some symbols are better than others in terms of clarity. Some mean the same thing everywhere, but others are heavily context dependent in an effort to reuse the limited amount of symbols they have at their disposal.

k is a great domain-specific language, but struggles at being a good general purpose language for a variety of reasons. I'd love to be able to use it for data processing inside of other languages. Let the other less expressive but more robust languages handle the control flow, library interactions, etc., and then run k code to work with data as vectors and tables. If only k supported n-dimensional arrays, then it'd be very interesting to see what it could do if integrated with something like Numpy, but I could spend all day wishing k was better than it is. I'm very happy with what it's able to do, and generally groan when I find myself having to use other less expressive data processing tools (which is practically everything).


> I'm torn. Not about the conciseness of the language overall, but about using symbols vs. names. I do see how the symbols make things more concise, less clutter, and can even make it easier to grasp a piece of code if you are deeply familiar with them[1].

I think the "if you are deeply familiar with them" is important. Using non-standard symbols makes comprehension more binary: either you've memorized them and understand or you don't; there's no muddling through, relying on common concepts and terms to make up for incomplete memorization.

That probably also results in a much more binary user base: true-believers who dedicated a bunch of time to become proficient, and non-users who were unable or unwilling to, and not a whole lot in between.


Are you talking about vim? I thought this article was about programming languages.


The trouble in Haskell is that an operator like %%@~ is just an arbitrary name.

In languages like K or APL, the combination of symbols is the actual definition.


As APL shows, you then need quite a large alphabet of symbols, though.

But I'm not strongly in either camp. A few common branches of mathematics taken together have quite a large alphabet of symbols, too, and we work well with it. It feels like symbolic notations can seem obtuse at first and be hard to get into, but once you're used to it the alternative may appear worse.


> As APL shows, you then need quite a large alphabet of symbols, though.

I count 71, which doesn't seem like a lot.


That's more than the keywords in most languages, I think.


Quick'n'dirty wetware lookup:

Go: < 30

C: ~ 31

C++: > 70


Python builtins (not even keywords): 68


lens operators in haskell at least have some structure https://news.ycombinator.com/item?id=28124226


how is "%%@~" different from, say, "°"? Since they're both designed to be contentless, couldn't you form a bijection between them without losing anything? (And if your objection is atomicity, how do you feel about "%" and "°"?)


The premise here is wrong, because lens operators are very much designed to carry meaning. See my sibling comment (parent->parent->sibling, I guess), but operators with `@` operate on indexed optics, and the leading `%%` means "modify, and collect summary".


    {x#x{x,+/-2#x}/0 1}
I'm sure if you used K for a year or so, that would be obvious and understandable at a single glance. But all I can think of is that I used to see that in my terminal session right after my modem got disconnected.


I think this hints at the main issue people have when reading more terse code: you need to read it slower. If you try to read K at the same speed as C, it’s going to fly by. I can feel this in my own Python code when I write in a more or less compact style. But on the more dense code, if I slow down, I can understand it faster and more clearly than the verbose code.


It's not only slower to read but requires much more context to understand. Take a look at the Rosetta code page for Fibonacci [1]. For most languages you'll be able to mostly identify what each part is doing. They rely on concepts that most programmers consider intuitive. K relies on a completely different set of knowledge that you need to have to even start to grasp what the code is doing.

1: https://rosettacode.org/wiki/Fibonacci_sequence


I'm not going to deny that learning any array language requires thinking slightly differently (as does any new paradigm), but really this example doesn't use anything particularly strange.

    {x#x{x,+/-2#x}/0 1}
         x                  x is the argument of an anonymous function {}
          ,                 concat
           +/               plus reduce (sum)
             -2#x           last two elements of x
       x f        /0 1      applied x times to 0 1
     x#                     take first x elements

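Assuming the breakdown above is accurate, here is a rough Python transliteration (the name `fib` is mine, just for illustration):

```python
def fib(x):
    # Rough Python rendering of {x#x{x,+/-2#x}/0 1}, read right to left:
    acc = [0, 1]                     # the seed list 0 1
    for _ in range(x):               # x f/ ... : apply f to the seed, x times
        acc = acc + [sum(acc[-2:])]  # x,+/-2#x : append the sum of the last two
    return acc[:x]                   # x# : take the first x elements

print(fib(8))  # [0, 1, 1, 2, 3, 5, 8, 13]
```

Spelled out this way, the scoping question also becomes concrete: the inner x is the accumulating list, while the outer x is both the iteration count and the take length.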

It does have some strange things:

- What's the scope of x? it appears to be the argument of two different anonymous functions.

- Is / both apply and reduce?

- So 'x f list' applies f to list x times. What if I use ',' instead of f? That is, what does 'x,/list-of-lists' do? does it flatten the list of lists and then concatenates x or does it flatten the list of lists x times? It seems confusing to have symbols that act both as operators and as functions.

- Why is '-2#x' "take last two elements of x" and not "negate the first two elements of x"? If I understand correctly, based on the evaluation order you're using it should be the latter, no?

And all of that from a simple Fibonacci function. I can't imagine how difficult it must be to dive into an actually complex codebase. And it's not just about the different paradigm. That's not the problem, the problem is focusing on terseness above all.

I get that once you are deep into a language or codebase you lose sight of the complexities because you get used to them. But it doesn't mean they aren't there.


An anonymous function is defined (can't find a better word) between braces like {x+1}, in python (lambda x: x+1). There is an inner one, {x,+/-2#x}, and the outer one (the entire thing).

/ (like a lot of k symbols) does a few different things depending on context. In this case, if you do n f/x, where f takes a single argument (is a unary/monadic function), it applies f to x n times.

-2#x: yeah, it seems reasonable that it might negate the first two elements; APL uses ¯2 instead of -2 for this reason. In k, however, -2 is parsed as one number, not as - followed by 2#...

Sure, there may be some complexities or questions, but there are in all languages, and in this case they were fairly simple things anyway to me.
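If it helps, the `n f/x` form described above is just a bounded-iteration combinator; a sketch in Python (the name `iterate_n` is mine):

```python
def iterate_n(n, f, x):
    # K's 'n f/ x': apply the unary function f to x, n times
    for _ in range(n):
        x = f(x)
    return x

# doubling three times starting from 1 gives 8
print(iterate_n(3, lambda v: 2 * v, 1))  # 8
```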


> There is an inner one, {x,+/-2#x}, and the outer one (the entire thing)

So in the inner one, x is the argument of the inner function or the argument of the outer one? Is x always an argument to anonymous functions?

> / (like a lot of k symbols) does a few different things depending on context

That's a recipe for confusion.

> Sure, there may be some complexities or questions, but there are in all languages

I can go back to the initial example: just browse other implementations of Fibonacci in different languages. For most of them you can actually understand a bit what's happening, even if it's a different paradigm (e.g, I can understand the Haskell or Clojure implementations without too many issues, and in fact I can learn things about the language from that). But operators that do different things depending on context, insistence on non-standard symbols, weird scope issues... That's not "complexities or questions that are in all languages", that's a recipe for confusion and extra complexity that you need to have in mind on top of the complexity of whatever you are coding.


Default arguments to anonymous functions are x, y, and z. You can also name arguments like this. {[foo; bar] foo+bar} is the same as {x+y}

I'm not denying it is probably more possible to gain a superficial understanding of what code written in other languages does than code written in k to someone who's never seen k before. This just doesn't seem like that important of a language feature. The 'confusion and extra complexity' you mention wouldn't really confuse anyone who'd tried k for more than a couple of hours (at most).
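For anyone mapping this to a more familiar language, the two k forms above correspond roughly to Python lambdas (a sketch, not full k semantics):

```python
# {x+y} with the implicit default arguments x and y:
add = lambda x, y: x + y

# {[foo; bar] foo+bar} with explicitly named arguments:
add_named = lambda foo, bar: foo + bar

print(add(2, 3), add_named(2, 3))  # 5 5
```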


Been doing k for a bit (under a year). A single glance is optimistic, but probably only took me a few seconds. Almost certainly faster than any equivalent code in any other language. It's even easier if you give it a nice name, like "fib".


What kind of projects does K get used in?


Mainly time-series data processing through kdb+. There's also an IDE and language tooling for q (a thin layer over k), itself primarily written in q: https://code.kx.com/developer/

Unfortunately it's closed source so I can't share much about the world of application and library development with q/k. If you want to learn more about how it's used though, I'd recommend checking out KX and kdb+.


I don't use it professionally or anything like that but most of the real world use as far as I know is in finance stuff.


This website has a few companies which use K: https://github.com/interregna/arraylanguage-companies


How many years did you spend on mathematics? How comfortable are you with reading math notation?

I think the same problem applies. If you don't remember the specifics of a symbol, it becomes difficult to understand what the symbol might be without context.


Is it that much worse than, for instance:

  kvPairs.reduce((acc, [k,v]) => ({...acc, [k]: v}), {})
(Which obviously has nothing to do with the K snippet but is the first thing that came to my mind that's equally symbol-heavy)


Yes. I don't know what language that is, but I know what it means: take some key-value pairs, and reduce them with an accumulator, and make a new dictionary with the items from the accumulator, plus another entry mapping a list of the key to a value, starting with {}.

Alternatively:

  mut acc = {}
  for k, v in kvPairs {
    acc[[k]] = v;
  }
  acc
Am I much wrong?


Huh, no you're exactly right. (It's javascript, btw)


  fib = 0 fby ( 1 fby fib + next fib );
That's how this would look in a data-flow language.

"fby" means "flowed by". It constructs a stream.

So you can be quite terse without losing clarity just by using the right abstraction.

Example taken from:

https://en.wikipedia.org/wiki/Lucid_(programming_language)


The /0 1 gives it away practically immediately for me, but that might just be because I've seen/used that particular example so many times.


Just replace K and JavaScript with German and English to see the vacuity of this argument. Either of several possible representations can become native to one’s thinking. The question is which is a better aid in reaching some non-arbitrary goal. The only merit of K presented and emphasized here was the supposed brevity of its programs. Personally I’ve found the habitable zone somewhere that allows for more air between ideas.


How about this: K is an interpreted language, but your whole program's source code plus the interpreter plus the whole database engine fits inside your server's L1 cache, so K programs tend to be faster than their C equivalents (in addition to all the array operations being highly optimized).

And you don't get to waste time scrolling, your typical module's code fits on your single screen.


I know it's all been said before, but the performance take here is somewhere between misleading and wrong. K runs code quickly for an interpreter because it has a simple grammar and a small number of types, but you don't get up to compiled speed just by reducing overhead, so it will lose to C, Javascript, or LuaJIT in this regard. If you can concentrate the program's work in a few operations on large arrays (not always possible), then K might beat idiomatic C. I don't think I've ever seen an example of this.

Anything about the L1 cache and K is just wrong, usually. At 600KB the K4 database engine is much too large to fit in L1 (K9 from Shakti is somewhat smaller but still a few times too large). And L1 instruction cache misses aren't a bottleneck for other languages, so there's little benefit in reducing them even to the extent K does it.

The long version: https://mlochbaum.github.io/BQN/implementation/kclaims.html


> but your whole program's source code plus the interpreter plus the whole database engine fits inside your server's L1 cache, so K programs tend to be faster

Is that really the bottleneck? I've done quite a lot of profiling on high performance code and I've almost never hit a bottleneck in the instruction cache. Data access bottlenecks or branching hit performance harder and sooner than instruction fetching.

> And you don't get to waste time scrolling, your typical module's code fits on your single screen.

How much of the time you save scrolling is spent on decoding an array of symbols and remembering what those symbols are?


Sorry to be pedantic, but even if you said english and chinese, the analogy would still be off IMHO.


> Does giving a K idiom a name make it clearer, or does it obscure what is actually happening?

There's a balancing act around that. It's called "abstraction".

At some point, you cannot afford to know what's actually happening. If you try, the entire problem won't fit in your head. So you cut the thing you're working with to a name and an interface, and forget what's "actually happening". You do that right in the K code, because you e.g. cut your understanding of `/` to "fold the array with the preceding operation", and totally don't think about the way it's implemented in the machine code.

This, of course, does have a cost; the simplest case is inefficiency, and worse are circuitous ways of arriving at logically the same result when a much simpler way exists.

I'd argue that `/` or `+` are very much names, of the same nature as `fold` or `add`, just expressed shorter. So if you prefer point-free style, you can likely do a very similar thing in, say, Haskell (and some people do).

I'd hazard to say that APL and J are DSLs for array/matrix numeric code. They allow you to express certain things in a very succinct way. Where APL traditionally had a, mmm, less-than-ideal experience was I/O, because the language is not optimized for the branchy logic required to handle it robustly. K is a next step that lets you escape into a "normal-looking" language with long names, explicit argument passing, etc. when you feel like it.

Also, I love this idea: «APL has flourished, as a DSL embedded in a language with excellent I/O. It's just got weird syntax and is called Numpy.» (https://news.ycombinator.com/item?id=17176147) The ideas behind APL / J / K are more interesting than the syntax, and haven't been lost.
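In that spirit, an expression like the article's +/!100 has a fairly direct NumPy spelling (a sketch, assuming a standard NumPy install):

```python
import numpy as np

# +/!100 -- plus-reduce the first hundred integers -- in NumPy's "weird syntax":
total = np.arange(100).sum()   # !100 -> np.arange(100), +/ -> .sum()
print(total)  # 4950

# The classic APL/k average idiom, (+/x)%#x, similarly collapses to:
avg = np.arange(100).mean()
print(avg)  # 49.5
```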


You certainly can do points-free programming in Haskell. It's the first place I ever heard of it.

Ironically, points-free programming in Haskell has a lot of '.' in it.


Haha, I never understood why the point-free (aka "pointless") form in Haskell actually is the form that requires lots of "."!


"Points" means something like "elements". When you write

    \x -> f (g x)
you are defining a function that explicitly specifies how each "point" `x` is to be mapped. When you write

    f . g
you don't mention any point. You are abstracting away from the notion of point. That's why it's "point free".


I love the idea of using k as a DSL (domain specific language) for working on time-series data. Something like it would also be great for working on n-dimensional arrays, but unfortunately k doesn't really support working with them in any elegant or efficient way.

Python and Numpy would benefit a lot from having something like k to express vector/matrix operations elegantly.


I recently started using rust-analyzer with vscode.

One common sight is this:

    thing
      .stuff()
      .other()
      .whatevs()

Each of the calls returns a different type. Rust-analyzer displays the return type of each call to the right of it.

I imagine something similar could reconcile the benefits of terseness with readability and discoverability.

The blog post already has the prototype:

    + / ! 100
    plus reduce range 100

Imagine the second line being added in by your ide in a light gray.
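The spelled-out line maps almost word for word onto, say, Python (a sketch; reduce and operator come from the standard library):

```python
from functools import reduce
import operator

# "+ / ! 100", read right to left: range 100 (!100), then plus-reduce (+/)
plus_reduce = reduce(operator.add, range(100))
print(plus_reduce)      # 4950

# ...which is exactly what the named builtin spells out:
print(sum(range(100)))  # 4950
```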


But why not just write

  (0 to 99).sum
That's not much longer, and quite readable even for the uninitiated. (It's a Scala expression, so not made up).


In Rust you would write (0 .. 99).sum() for the range not including 99,

or (0 ..= 99).sum() for the range including 99,

which makes me think that I don't know how to read (0 to 99) unless I learn whether it's exclusive or inclusive (but at least it's googlable, unlike a random-looking operator).


The "to" method on an Int creates an inclusive Range.

There is also an "until" method which would create the noninclusive Range (which would be actually closer to the original code as it would be "0 until 100").

The Rust syntax is not better. Maybe you could work out what means what if you saw the two side by side (this might also work for the Scala code).

The K example even has two implicit assumptions: it's a noninclusive range starting at zero. (The Wikipedia page where I looked this up, as I don't know K, says the range is over "nonnegative integers" lower than the given one. But "nonnegative integers" usually, though not always, means the natural numbers, so zero not included; this makes it even more inconclusive than the Rust or Scala code snippet, imho. I had to look at examples to find out whether zero is included or not in "!".)


Nonnegative always includes zero, unless the author had muddled thinking themselves. Since positive is >0 and negative is <0, their negations are nonpositive for <=0 and nonnegative for >=0.


OH! That's of course right.

I wasn't precise enough. The Wikipedia page actually talks about "positive" integers, and I transformed this in my head to what should be written there.

Original quote:

> !x enumerate the positive integers less than x.


> Imagine the second line being added in by your ide in a light gray.

So what would be the benefit compared to just writing `plus reduce range 100`?


The benefit would be it being optional. You only need it while learning the language, whereas once you're familiar with the notation the terseness becomes a feature.

Of course once you've come up with clear names for each symbol you could do the opposite, let the IDE turn `plus reduce range 100` into `+/!100`. But as long as IDEs are still glorified text editors and devs care about the representation that gets stored on disk I would argue making the terse notation the default is the right choice.


I'm not sure, but imagine once you learned it you parse the sentence like a word. Like some of the Asian alphabets?


fewer keystrokes


It's a very weak benefit. I don't know about you but most of my time programming is spent thinking about the program, not writing it. In fact, this would only increase the time I need to think about how to write things (what was the symbol for reduce again?)


And easier to spot patterns in the source files. Whether someone finds that beneficial or not is subjective I guess, but I think it's useful to be able to immediately recognise what something like +/! is doing whenever you see it in source after using it once or twice.


The uncanny rapport I feel by reading "lazy Tuesday afternoon, visiting The Orange Website, self-consciously averting your eyes from the C compiler output in a nearby terminal" builds enough trust that I'm willing to pay attention to what John Earnest (IJ or RtG) has to say about this. He's a great writer.

Naming things is hard. It's one of the 2 most difficult problems in CS, along with cache invalidation and off-by-one errors. Of course, in this context I mean "CS" as "Computer Science", not "CouchSurfing".

For the K programming language, meanings are specifically defined. In the article:

"The word “ordinal” can mean anything, but the composition << has exactly one meaning."

That's fine for the K compiler, but not for Google Search, or grep. I use "<<" to mean a bit shift to the left, presumably because I was taught C in university.

Unique names are more useful for addressing (e.g. IPv6) but common names are more memorable (e.g. a URL). Translators (e.g. DNS) can't be perfect when there's a one-to-many or many-to-one correlation, but they try their best.

K does well to enforce structure, but in the process, makes it very hard for the programmer to find examples and other documentation. I guess that's why the language hasn't become as popular as other languages, whose syntax is sufficiently familiar to be legible and memorable but unique enough to be searchable.


I guess the ending is supposed to be a joke, but because the piece never explains what | means, I couldn’t figure it out.


I guess it means "max" or something like that (comparison operation?), and |/ means "max of an array".

Author builds this up by showing how long is writing "max" function with other approaches (iteration, reduce).

Can be wrong though


Exactly. From the K2 manual [0] referred to in the article, |/ is "Max-Over". The ostensibly-unreadable K manages in two characters what the article's subject and coworker need either three lines of boring loop or one line of a more cognitively-demanding lambda to accomplish.

...FWIW my understanding of the language is such that I would have suspected the last line of the article to read "|/ list". I thought Max-Over needed an argument.

[0] pg36, http://web.archive.org/web/20050504070651/http://www.kx.com/...
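For comparison, the same Max-Over shape in Python (a sketch mirroring the article's loop-vs-reduce build-up):

```python
from functools import reduce

xs = [3, 1, 4, 1, 5, 9, 2, 6]

# |/ xs ("Max-Over") as an explicit reduce over the binary max:
max_over = reduce(max, xs)
print(max_over)  # 9

# ...which the builtin names outright:
print(max(xs))   # 9
```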


    Math.max(...list)


Got to be careful with the argument spread, since depending on the JS engine and how large the `list` is, you can end up with a RangeError

e.g.,

    Math.max(...Array(1_000_000).fill(0))
results in:

    Uncaught RangeError: Maximum call stack size exceeded


Bizarre. On V8 this seems to be an underlying limit on Function.prototype.apply -- and in the REPL it's not even deterministic, somewhere in the neighborhood of 123,125 (a side effect of optimization?). It's enforced on the caller side as well: an empty function with no arguments still throws when apply'd too large an array.

It's not at all clear to me why this limit needs to exist, as functions can't reasonably have more than a few hundred formal parameters, so passing 100,000 would always imply using rest parameters or `arguments` on the receiver, which could surely trivially handle arbitrarily sized arrays.


In V8 I believe it's limited by the stack size and when you get a stack overflow that gets converted to a RangeError in JS land.

OTOH, WebKit and Firefox have arbitrary limits of 65537 and 500k args respectively.

https://bugs.webkit.org/show_bug.cgi?id=80797 https://github.com/mozilla/gecko-dev/blob/1475b3b0cb274b2a71...


Huh, I had no idea that function had variable arity. Thanks.


Funnily enough, that variable arity is mentioned in the article as a nuisance:

> ... wincing slightly at the lambda notation you need to avoid running afoul of JavaScript’s variadic Math.max()...


It is a typical language-comparison strawman. He is writing

   list.reduce((x,y)=>Math.max(x,y)) 
To get the max number in a list. And he is "wincing" because he can't write:

    list.reduce(Math.max)
Because the variable arity of max does not play well with reduce. But because of the variable arity he can write:

    Math.max(...list)
Which is even simpler.

So he deliberately writes overly complex JavaScript code to show how the other language is more concise.


IIRC the spread syntax only works with relatively small arrays, because it's function application at the end of the day.

So it's worth keeping in mind that the nice JS code falls apart.


Fair point. In any case, if you wanted to do more than trivial array processing in JS, you would use a library like lodash, which has an array max function.

And if you think lodash is still too verbose, you can define your own:

  const å = list => list.reduce((x,y) => Math.max(x, y));
Now you can write å(list) to get the max value of an array!


well yeah I could also write a function in C that does that. I don't think that's the point.


Enum.max(...list) ;)


So are array languages polish notation?

    +/!100
feels like the opposite way you'd write it in some kind of RPN concatenative language

    100 ! [ + ] /


You read it strictly right to left, weird idea to me but it’s common in the array languages - j and APL do it this way too.

I believe the reasoning is something like Iverson (or maybe Whitney) didn’t like the complexity of PEMDAS in maths so decided on this rule.


Iverson wanted consistent symbols and rules for math notation. So it's math first, then programming language next. Makes it extremely strong with describing algorithms, but gets a bit complicated for programming.


In Polish notation all the operators are prefix operators.

In APL family languages there are infix operators and prefix operators.

Prefix operators take as their operand the result of everything to their right. Infix operators take as one operand everything to their right and take as their other operand the first thing on their left.


Not quite. + isn’t the last thing done (which it would be in Polish notation). Instead it’s done during the / operation.

Polish notation would be lisp (or the notion most people have for lisp, special forms and macros break it up a bit).


Lisp is Polish notation for trees of variable arity. Hence all the parens - you need some way to group them.

So in some K flavoured LISP I think it would be

    ((/ +) (! 100))


Or in clojure with the threading macro

    (->> 100 range (reduce +))
which is equivalent to this, without the macro

    (reduce + (range 100))


yes, quite a bit more long winded than K


Programming language syntax is very much like the human languages themselves: it (re-)shapes your thinking.

I want to learn things that shape my thinking in a way that makes me more efficient, and in a way that makes it easier to express ideas.

If your language isn't giving me that then it's bye bye.

I get the appeal but squinting your eyes at a string of single-character symbols is taking it too far. The whole thing has to be somewhat ergonomic as well.


I love the bit about junior Stanford-educated Google employees :)


In my opinion, the conceal feature in Vim provides the best of both worlds for the name-vs-symbol problem.

When you want to review your code, you just go into normal mode, which conceals all lengthy but frequent names behind appealing symbols. When you want to write code, you switch to insert mode and all symbols on the current line turn back into names.

I use this feature mostly with python code. One liners making proficient use of lambdas, maps, reduces, and other functional treats are satisfyingly condensed, reducing the amount of attention required to parse each line.

No need for custom keyboards. People don't have to parse arbitrary symbols to understand what I wrote because the underlying code remains vanilla.

A drawback is that the apparent conciseness tricks me into writing very big one-liners difficult to parse when all terms are expanded.
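For the curious, here is a condensed functional one-liner of the kind described above (plain Python; nothing about it is conceal-specific):

```python
from functools import reduce

# lambda + map + reduce in a single expression: sum of squares of 0..9
total = reduce(lambda a, b: a + b, map(lambda x: x * x, range(10)))
# total == 285
```

Dense enough that concealing `lambda`, `map`, and `reduce` behind single glyphs noticeably shortens the line, and dense enough to show the drawback too.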


This is either the most important paper in Computer Science.... or a trivial side effect of representing function composition by string concatenation...

http://nsl.com/papers/rewritejoy.html


I have come to call:

        [A] a = A
        [A] b = [[A]]
    [A] [B] c = [A B]
        [A] d = [A] [A]
        [A] e =
    [A] [B] f = [B] [A]
This is the minimal Joy.

  [[db]fbcabc[da]cfc]da
This program is the one I like most.

Vg vf gur cnenqbkvpny pbzovangbe.
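Those six rewrite rules are enough for a tiny interpreter. A sketch in Python (quotations as lists; `run` is a made-up name; feeding it the looping program above will, of course, never terminate):

```python
from collections import deque

def run(program):
    todo, stack = deque(program), []
    while todo:
        tok = todo.popleft()
        if isinstance(tok, list):
            stack.append(tok)                      # push a quotation
        elif tok == 'a':                           # [A] a = A        (unquote)
            todo.extendleft(reversed(stack.pop()))
        elif tok == 'b':                           # [A] b = [[A]]    (quote)
            stack.append([stack.pop()])
        elif tok == 'c':                           # [A][B] c = [A B] (cat)
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif tok == 'd':                           # [A] d = [A][A]   (dup)
            stack.append(stack[-1])
        elif tok == 'e':                           # [A] e =          (drop)
            stack.pop()
        elif tok == 'f':                           # [A][B] f = [B][A] (swap)
            b, a = stack.pop(), stack.pop()
            stack += [b, a]
    return stack

run([['b'], ['e'], 'f', 'c'])   # [['e', 'b']]
```

`a` is the only rule that feeds work back into the program, which is what makes unbounded recursion expressible.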


also posted previously as

Stages of denial in encountering K, March 9, 2020, 422 comments https://news.ycombinator.com/item?id=22504106


Wow, this is art

Wonder how many people did not skip to the end and then skim back


Wait... How did that happen.


> list.reduce((x,y)=>Math.max(x,y))

Honestly, while I like the clarity, I would still dislike it in JavaScript for performance reasons, because in JavaScript reduce is not optimized at all. And yes, I do work on a code-base where that difference is significant.

Also, that loop can still be cleaned up a bit in modern JS:

    let max = list[0];
    for (const v of list) {
      if (v > max) max = v;
    }


A slightly more verbose (less abstract) version of k would be q. Kx (proprietary owners of k/q) have a nice reference page if you're interested in learning more about the language, e.g., where it's used and how it's used... https://code.kx.com/q/ref/


Maybe the issue is that the symbols we readily accept are the ones we grew up with when learning Mathematics, + - * / ^ % = ..

If 'reduce' and 'map' had widely used symbols, which were taught in school / university as part of the standard curriculum, how different would coding look today?


Hint: K is the name of the central character in Franz Kafka's The Castle, as well as that of The Trial.

> "The term "Kafkaesque" is used to describe concepts and situations reminiscent of Kafka's work, particularly Der Process (The Trial) and Die Verwandlung (The Metamorphosis). Examples include instances in which bureaucracies overpower people, often in a surreal, nightmarish milieu that evokes feelings of senselessness, disorientation, and helplessness. Characters in a Kafkaesque setting often lack a clear course of action to escape a labyrinthine situation. Kafkaesque elements often appear in existential works, but the term has transcended the literary realm to apply to real-life occurrences and situations that are incomprehensibly complex, bizarre, or illogical."

~ https://en.wikipedia.org/wiki/Franz_Kafka#%22Kafkaesque%22


I remember a discussion on HN about this language a year or so ago. In the end, what I got was some extremely weird snippets for very specific little quizzes that someone claimed were faster than in other languages, plus the arguments "it's shorter" and "but I can work with it".

So yeah, if you like it, I get it, go enjoy it. But this smugness of "it's actually better and I'm better for knowing it", it irks me. One of the most important aspects of programming languages is that they need to be used, and K is the single hardest language that I've tried to understand, even with documentation on the other screen and reading just short snippets. A language that people will ignore because they don't even know where to start reading is not that good of a language.


Where to start reading: https://github.com/JohnEarnest/ok/blob/gh-pages/docs/Manual.... maybe

If you ever feel like trying it again, and get stuck on something, https://chat.stackexchange.com/rooms/90748/the-k-tree is the place to ask :)


I already have problems reading arrow/inline functions... for example where a method is called with a function that returns a function... if there is one more (async) call in there my brain starts smoking.


It's likely a matter of what you're used to.

I start to feel uncomfortable when I see an explicit loop.

But that's actually very weird as I write Scala a lot where you use "for" (instead of "do-notation") for monadic computations.

My brain starts smoking when someone uses a "for" as a regular loop! That usually causes a few seconds of complete confusion. :-D


It's all up to preference, really. It took me a while to get past my muscle memory from imperative programming before I could use languages like K competently. Getting that motivation is definitely hard.


So it is a language with single-character symbols for the most common array operations like map, filter, etc.

I think the article tries to make it sound more mysterious and groundbreaking than it really is.


the symbols compose in both directions and are aware of tacit parameters, so I don't really think your summary is even remotely correct


By "compose in both directions" you mean like arithmetic operators like + and * ? No doubt it is useful to have built-in operators for array operations if you have to do a lot of array operations.
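One concrete reading of "compose in both directions" is ordinary function composition. A rough Python analogue (`compose` and `sum_range` are hypothetical names; real K composition is tacit, with no helper function needed):

```python
from functools import reduce

# Right-to-left function composition, roughly how K verbs chain:
# sum_range behaves like K's +/! applied to a number.
def compose(*fns):
    return lambda x: reduce(lambda acc, f: f(acc), reversed(fns), x)

sum_range = compose(sum, range)
sum_range(100)   # 4950, i.e. 0 + 1 + ... + 99
```

The point is that the composed pipeline is itself a value you can name, pass around, and compose further.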


> a more direct translation into another language might look like: range(100).reduce(plus, 0)

That's...uhh...one way to do it. But it's a lot shittier to read than sum(range(100)).
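Both spellings side by side in Python, for comparison:

```python
from functools import reduce
import operator

# The "direct translation" and the readable built-in agree:
assert reduce(operator.add, range(100), 0) == sum(range(100)) == 4950
```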


Yeah, ever since running into this little snippet eons ago, I've found the "symbols are better" argument for APL-style languages quite weak: http://nsl.com/papers/kisntlisp.htm

Since the arity of all the primitives are known there's less parens than lisp. That seems like the clear sweet spot to me.

And I think this is borne out with K the product. One of the things they added with q is a more text-oriented syntax.

The big a-ha in APL style languages is shifting from thinking about loops and iterating over single elements to transforming tensor like objects. It's a powerful approach no matter what language and syntax you use.
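A tiny illustration of that shift in plain Python (variable names are placeholders): whole-collection transformations instead of index-by-index loops.

```python
# Loop-and-index thinking:
data = list(range(100))
acc = 0
for i in range(len(data)):
    acc += data[i] * data[i]

# Array-at-a-time thinking: one expression over the whole collection.
total = sum(x * x for x in data)
assert acc == total == 328350
```

Same computation, but the second form states *what* is computed over the array rather than *how* to walk it.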


Yes, a direct translation. (Using the reduction)


One important thing to remember about esolangs like this is that theory != implementation. Just because Brainfuck "theoretically" exists as a language doesn't automatically solve issues like memory management or extensibility. Sure, maybe it is more ergonomic to write some functions in K. But, as with most code-golfing languages, the goal isn't to build a better language, it's to build a faster one.

Maybe K can run those programs faster than Lisp/C/$FAV_LANG, but it's ultimately up to a much smarter programmer to implement that bit.


K is not an esolang and was not designed for golf; it has real users. https://en.wikipedia.org/wiki/K_(programming_language)

Also, the primary goal is not always speed... runtime speed or speed of development. There are other things we trade off for all the time.


It should be noted that, according to Wikipedia, the K language isn't a toy language like Brainfuck but an actual language successfully used for a suite of financial software. It was developed for this purpose, not for code golf.


K language and APL family languages are serious business, though. And they get a head start on "extensibility" and other nice practical properties - notice how all those single-character operations are inherently and extremely composable.



