Russ Cox (who worked with Rob Pike on Go for a long time) solved several of last year's Advent of Code puzzles using Ivy. Here's a YouTube playlist where he recorded himself doing so: https://youtube.com/playlist?list=PLrwpzH1_9ufMLOB6BAdzO08Qx.... I don't think he's solving the problems for the first time in these videos; rather, he's demonstrating how they can be solved using Ivy.
It took me a while to understand the syntax in general and the primes example in particular. For those facing the same obstacle, here is what I understood:
1. Read single lines from right to left.
2. Imagine a one-variable stack whose top you keep replacing as you go from right to left, much like concatenative languages (e.g. Forth, Kitten, Factor), but right to left instead of left to right.
3. Here is a step-by-step translation of the primes example to imperative pseudo code:
op primes N = (not T in T o.\* T) sel T = 1 drop iota N
First of all, this is a function definition, so it can be rewritten as

    function primes(N)
        return (not T in T o.\* T) sel T = 1 drop iota N
Now we apply the two rules above:
    function primes(N)
        x = iota(N)              // 1..N
        x = drop(1, x)           // 2..N
        T = x
        ToT = outerproduct(T, T) // a*b for all a, b in T, so 4, 6, ..., N^2
        return [x for x in T if x not in ToT]
So we get an O(N^2) (or maybe O(N^3), depending on how `in` is implemented) algorithm for primes up to N.
Here is a similar algorithm in Python that might help further:

    def primes(N):
        T = range(2, N + 1)  # 2..N
        ToT = [a * b for a in T for b in T]
        return [x for x in T if x not in ToT]
I guess you’ve added a backslash in the outer product to prevent HN formatting weirdness. It isn’t an outer product where the operation is ‘scan product’.
Edit: I see in another comment that you tried to curl https://robpike.io/ivy and got a 302 redirect to pkg.go.dev. That's meant for humans. You need to add ?go-get=1 for the go toolchain:
Personally I don't bother with responding differently based on the go-get query param. I just put a <meta http-equiv="refresh"> alongside the <meta name="go-import"> so that the whole thing can be served statically and still be useful to both humans and machines.
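As an illustration, such a static page could look like the sketch below. The import path, repo URL, and redirect target here are assumptions for the sake of the example, not a copy of the real robpike.io page:

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Tells the go toolchain which VCS and repo back this import path -->
  <meta name="go-import" content="robpike.io/ivy git https://github.com/robpike/ivy">
  <!-- Sends human visitors on to the documentation -->
  <meta http-equiv="refresh" content="0; url=https://pkg.go.dev/robpike.io/ivy">
</head>
<body>
  Redirecting to <a href="https://pkg.go.dev/robpike.io/ivy">pkg.go.dev/robpike.io/ivy</a>.
</body>
</html>
```

One file, served statically, useful to both `go get` and browsers.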
You're not dependent on GitHub. If you ever decide to switch to, say, sourcehut, you can keep the install command and not break anything.
It also looks way nicer, and it's easier to deal with (I'd rather type `import "robpike.io/ivy"` than a GitHub path), but that's a flaw in the way Go does packages imo
The precedence rules are terrifying. I get it, but:
> 3/6*4 is 2 while 3 / 6*4 is 1/8 since the spacing turns the / into a division operator. Use parentheses or spaces to disambiguate: 3/(6*4) or 3 /6*4.
having to be up-front as a warning feels like a sign that your approach to precedence is fundamentally broken and you should require parentheses.
(Also the idea that 1/8 and 1 / 8 are different things feels bonkers to me in general, but maybe that's just taste.)
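The grouping can be mimicked in Python with `fractions` (a sketch of the grouping only, not of Ivy's lexer): the unspaced `3/6` behaves like a rational literal, while the spaced `/` divides by everything to its right.

```python
from fractions import Fraction

# Ivy's 3/6*4: "3/6" lexes as the rational 1/2, which is then multiplied by 4
print(Fraction(3, 6) * 4)     # 2

# Ivy's 3 / 6*4: equal precedence, right to left, so 3 / (6 * 4)
print(Fraction(3) / (6 * 4))  # 1/8
```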
Sounds like a typical Rob Pike design: idiosyncratic, not that user friendly, and mostly a "works for me" affair (considering things like the Sam editor and parts of Golang).
Rubbishing someone's character over a design choice is just too extreme. Sounds like you have something personal against Rob Pike rather than anything constructive to add.
This approach to precedence comes straight from Iverson and APL itself. Iverson originally set out to design a more uniform mathematical notation; it developed into a programming language secondary to that. Note that J, K, and others in this family all largely duplicate this choice. And there are mainstream languages that have chosen the middle ground of allowing basic precedence for + - * / expressions but requiring explicit grouping otherwise.
It's not a particularly outlandish idea.
So attributing it to Rob Pike's personality traits is just off the mark. It's ok to be critical, but it should be specific and informed, not just bashing someone's character as a whole, and if you're going to take it that far, you'd better at least know what you're actually talking about.
And note I've been critical of Rob's choices and behavior surrounding Go's date/time on a particular issue, so note this isn't empty hero worship on my part.
>So attributing it to Rob Pike's personality traits is just off the mark
That's fine, since I didn't speak about Pike's "personality traits", but about his architecture/design tendencies.
And I didn't attribute this design to him. I know of APL, and even if I didn't the influence is right there in the TFA title. I compared it to his designs ("sounds like a typical Rob Pike design"), i.e. he adopted it because he does have these design tendencies, and so this was to his taste.
>not just bashing someone character as a whole
Where exactly are you getting that from? As if I've called him a "bad person" or something?
You can't just cherrypick examples and say this is typical.
Even worse, you (GP) can't say this is typical without providing any examples, when the most obviously relevant analogy (precedence in Go) does not support your statement.
You are misunderstanding his point even though masklinn explained it. It's not about the specific point of operator precedence. It is about designing something that has internally sound logic but is very unexpected by a typical new user. And he provided the example of the time package's format patterns in Go, which imho fits very well. Yeah, it makes sense, but I'm sure there are hordes of users who think "why doesn't this work like in virtually every other language?".
It's probably good to not take the "typical" too literally and more as a "Rob Pike has done a few designs in the past that have this quality".
It's not about the particular interpreter or specific precedence design, it's about the tendencies described: what kind of stuff he often appears to like and/or produce.
Now, you might disagree with that, but at least disagree with what I tried to say, not what I didn't mean :)
The single-dash options, I'd say yes. Although this is vintage UNIX-style behavior, in 2009, when Golang emerged, it should have picked GNU-style. No reason except idiosyncrasy or nostalgia to go with a single dash.
For method capitalization, I'd say no, as this is the case in any other popular languages too (C# to name but one).
The capitalisation itself isn’t idiosyncratic (though it seems uncommon in the unix world); that it has a semantic impact, however, is rather rare.
Not completely unique (e.g. variables v symbols in erlang, concrete type v generic placeholder in haskell) but really rare, and usually the capitalisation isn’t a switchable thing, it’s a very fixed meaning.
Fully agreed. I stopped considering this practical in any way when I read
> 3*4+5 is 27
I'm sorry but if your calculator does not follow standard mathematical rules for operator precedence then I'm out. But the space issue takes it to another level.
I also get it, it derives from APL, which is right to left and where all operators have the same precedence, but it's just so far from practically usable for normal people... there's a reason no one uses APL today. And sure, it's just a plaything, but it seems weird to even create apps for iPhone, iPad and Android and a logo for it.
Quick, what should be the order of precedence among (binary!) flip, transp, rot, rho, and iota?
APL / Ivy have so many operators a conventional precedence order would likely be completely impossible to remember or work with.
(One option I’d consider viable in general would be to define associativity and perhaps compatible groups like {+, -} or {*, /} but force explicit parentheses when operators from different groups are used, but that wouldn’t feel like a calculator.)
I’d rather we actually used postfix notation, as that’s not ambiguous at all. I would like a modern “dc” written in Go if I’m honest. With engineering units, BCD decimal arithmetic, blackjack and hookers.
Actually if there was a postfix notation spreadsheet with that capability it would be an interesting tool.
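For what it's worth, the core of a dc-style evaluator really is tiny; this Python sketch (names are mine, not from any existing tool) shows why postfix needs neither precedence nor parentheses:

```python
def rpn(expr):
    """Evaluate a space-separated postfix expression with a stack."""
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    stack = []
    for tok in expr.split():
        if tok in ops:
            b = stack.pop()   # right operand is on top
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

print(rpn("3 4 * 5 +"))  # 17.0 -- (3*4)+5, no parentheses needed
print(rpn("3 4 5 + *"))  # 27.0 -- 3*(4+5), just reorder the tokens
```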
APL was the first language I learned in school and I was thrown by the precedence of the mathematical operators, but once the other operators were put in play, it was an incredibly simple rule -- and rarely did I or other APL developers have bugs relating to order of operation.
I then moved to C++ and was gobsmacked by the precedence rules and was forced to use parentheses to get things right. I have seen hundreds of bugs produced by programmers who tried to be clever and avoid parens, relying on their faulty memory of the precedence priorities.
Just look at the C++ Operator Precedence table listed at the following URL. It's wild!
One thing I really like about kdb+/q is the fact that it has strict right-to-left execution/precedence. While I know it's a standard, I don't want to deal with BODMAS, or whatever it is. Having right to left makes it clear and unambiguous how it will execute. Saying that, it does also encourage huge one-line abominations. Everything in moderation, I guess...
Yeah, there's a legendary twitter thread that removed all doubt for me. Thousands of teachers, professors, and engineers all confidently asserting that their version of PEDMAS is the correct and unambiguous one.
Stuff like this is very difficult to change, because the conventions are both global and fundamental... but we also shouldn't pretend that history always happens to choose the perfect convention. It's more that we've just learned to live with the warts.
I wonder if programming languages having precedence rules at all is an atavism from the 70s when it was an interesting problem for parsing. Adding parentheses makes things clearer in 99% of the cases.
Programming languages require precedence rules because mathematicians decided that having a screwball precedence would make their notation more concise; over the hundreds of years of being used and taught, these notations have become the expected form of math, and programming languages (all of which tend to be math-heavy) usually try to follow the expected rules.
Honestly, as an interested amateur, I think that much of the notation used in math is one of the weakest parts of the discipline. There are two things that I think would aid the notation greatly:
1. Most complex equations have a lot of moving parts within them, but the notation has no way of indicating what these parts are for and why they are there; you'd better hope that the author has taken the time to document their equation properly.
2. The terrible symbology. You have heard the joke about the two hard things in computer science. It turns out that programmers are usually fairly good at naming things; mathematicians are the true dark masters of naming. If you have an item in your equation that represents the confidence of a rating, they will not name it confidence_rating, no, they will name it σ (a small sigma). Good luck searching for (or even typing) that.
Now, I get why it was done: they aimed for a notation that was as concise as possible, which makes it much faster to manipulate parts when you are working on something. However, I feel this has the opposite effect when trying to teach it to others.
As its designers explain: A bare a * b + c is an invalid expression. You must explicitly write either (a * b) + c or a * (b + c).
Why doesn't WUFFS have precedence? Because it's focused intensely on solving its specific problem, Wrangling Untrusted File Formats Safely. Having solved the buffer problem (you can't write buffer[oops_too_big] type mistakes in WUFFS, they will not compile) and the integer overflow problem (likewise, it doesn't compile), they wanted to also solve the problem where the programmer thought they were making one calculation, but due to precedence rules they were actually making a different one.
Well yeah, having precedence rules makes the notation more concise and can save you from typing lots and lots of brackets. And I don't think multiplication/division having higher precedence than addition/subtraction (and, by extension, AND having higher precedence than OR) is all that "screwball" - and even if it were, if you paid attention in elementary school, you already know it, so programming languages can rely on it. Of course, the moment they start getting "creative" with precedence, those exceptions turn into massive footguns...
> I wonder if programming languages having precedence rules at all is an atavism from the 70s
It's not an "atavism from the 70s", it's a mirror of mathematical conventions, as well as a syntactic convenience.
Language designers have tried to do away with precedence all along by having uniform evaluation (e.g. smalltalk, as well as APL and all its descendents hence Ivy, probably), removing infixes (lisp, forth, assemblies), or requiring explicit prioritisation (I think I saw that a while back though I don't remember the language).
> Adding parentheses makes things clearer in 99% of the cases.
It also adds a significant amount of noise for 99% of the cases, for no value since any schoolchild past 12 or so has integrated the precedence rules.
> It also adds a significant amount of noise for 99% of the cases, for no value since any schoolchild past 12 or so has integrated the precedence rules.
As all the viral "Only 25% of people got this right" stuff that pops up from time to time proves: No they haven't.
Also since we're rarely writing equations composed entirely of single digit numeric literals and are using reasonable variable names, the percentage of characters overhead and therefore noisiness of using brackets is much less than in the short examples being thrown around.
If memory serves well, normal operator precedence was already present in the first FORTRAN, which was proposed in 1953.
And APL's priority system is: there is no priority. Everything is evaluated right to left. Pike's Ivy is no exception. 3/4 differing from 3 / 4 is a lexical convention: 3/4 is a rational number.
The latter example is noisier, and yes, you do learn precedence in high school, yet the latter example is also clearer, because it is the most accurate way of conveying intent.
I really like Ivy as a simple, friendly introduction to APL. There is a surprising lack of APL-derived languages that use words to name things -- most stick with the original symbols; J and friends choose equally-cryptic symbols composed of ASCII characters.
Earlier this year I decided to solve AoC 2021 in Ivy, then watch Russ Cox's videos to see how he did it and use that to learn something about array programming -- a topic I knew absolutely nothing about going into this.
Unfortunately, Ivy really is, as Rob Pike says, a plaything. It is buggy -- if you ever write a function that returns a vector or a higher-rank array, you are entering bizarre undefined behavior territory. The array-language equivalent of "concat_map" or "flat_map" or "mapcat" or whatever you want to call it just produces garbage values, which is very confusing when you're learning about array programming for the first time ("Wait, this vector says its length is 25, but it contains 50 elements...?" or "The transpose of this array is just the first column repeated over and over??").
Beyond that, a very cool thing about array languages is that, you know, functions can implicitly act on entire arrays. You can multiply a vector by 2 and it will know to multiply every element in the vector by 2, because multiplication is defined for scalars.
But in Ivy, this is only true for the built-in functions. There is no way to write user-defined functions that have this implicit act-on-every-element behavior. Which is basically the looping primitive in array languages -- so to do anything nontrivial, you have to write it out with explicit recursion (still with the caveat that your functions can only return scalars, or you enter undefined behavior town) or rewrite your operations as binary operations with an ignored right-hand side and use "fold" to "map" them. It's bad.
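To make the gap concrete, here is a Python sketch of the kind of lifting Ivy's built-ins get for free but user-defined functions don't (the `pervasive` helper is my own illustration, not an Ivy feature):

```python
def pervasive(op):
    """Lift a scalar function so it recurses elementwise into nested lists."""
    def lifted(x):
        if isinstance(x, list):
            return [lifted(e) for e in x]
        return op(x)
    return lifted

double = pervasive(lambda x: x * 2)
print(double(3))                 # 6
print(double([1, 2, 3]))         # [2, 4, 6]
print(double([[1, 2], [3, 4]]))  # [[2, 4], [6, 8]]
```

In an array language this lifting is implicit in the evaluator; Ivy only applies it to built-in operators, which is why user code falls back on recursion or fold tricks.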
The latter is crippling enough that Russ Cox eventually forks Ivy to add support for it, but it is not currently part of the language. https://github.com/robpike/ivy/pull/83
Anyway that's a long comment to say: Ivy is a good, friendly introduction to APL syntax (stranding, ambivalent functions, precedence, etc) and some array language concepts, but it is far more of a calculator than a programming language.
But it's a good arbitrary-precision calculator! And if you're still interested in trying it, maybe check out this thing I made. It's an... Ivy programming environment?... that lets you run Ivy scripts and see the results inline. (Ivy's repl is... very primitive, and has to be wrapped by something like readline. Russ Cox uses 9term to get around this; self-modifying programs are my preferred approach.)
My frustration with Ivy led me to look into other array languages, trying to find one that 1) used English words instead of cryptic symbols and 2) worked. And I really couldn't find any! Someone should do something about that. :)
'There is a surprising lack of APL-derived languages that use words to name things' - This isn't really surprising when you consider the history of APL, and what it was designed for. Arguably you could say something like numpy is APL derived (or at least inspired), and that uses words.
I think if you tried APL for a bit, you'd quickly get over the symbols (and even come to prefer them eventually).
You know, I discounted Nial prematurely because its array model seemed weird and different and -- still not actually knowing anything about array languages -- I became very enamored with J's ability to slice and reassemble arrays based on function rank, which is something that I thought Nial could not do (I don't actually remember why I thought this). I should actually read through the Nial docs and see what I've been missing.
Nial uses the nested array model, which arguably is weird, but is also shared by every surviving APL dialect that supports nesting. J's boxed model comes from SHARP APL, but A+ was the only other language to pick it up and both those have stopped development.
The q and k languages are part of the commercial Kdb+ array database that all the major banks use for rapid analysis. Seriously cool stuff and probably the state of the art in array programming. Kdb+ also has dashboards and other useful things. The original creator left to form his own company called shaktiDB, so that is another commercial product. Dyalog APL is the most advanced true APL out there. Unfortunately, you may notice a trend here that all the best tech in this space is currently commercial and closed source. That's a major deal breaker for some and for good reason.
> Unlike in most other languages, operators always have the same precedence and expressions are evaluated in right-associative order. That is, unary operators apply to everything to the right, and binary operators apply to the operand immediately to the left and to everything to the right. Thus, 3*4+5 is 27 (it groups as 3*(4+5))
This is entirely in line with the APL tradition. I suspect part of the motivation is that APL has a lot of binary operators, none of which have a conventional infix form in traditional mathematics or anywhere else.
(Cf also the Smalltalk tradition, which associates all “binary messages” to the left, but that has the motivation of allowing user-defined names without fixity declarations.)
Now, let's imagine an APL-like language with precedence:
APL has a huge number of operators and worse, you can and will define your own, as the classic distinction between operators and functions does not apply to APL. So a huge precedence table to memorize! Plus, with user defined operators you would either have to force them to be all the same precedence, which is kind of a bummer, or you allow them to define their own precedence. It should be obvious that this would be a huge mess.
So actually, not having precedence is the sanest way forward, and you also gain something: order-of-precedence errors are quite common in both mathematics and programming, and doing away with precedence rules gets rid of a whole class of errors. Awesome! (If you are going to respond that you never make any errors, well, congrats on being smart; I am not. I try to avoid relying on lesser-known precedence rules in my code as best I can.)
So what do we gain from precedence rules again? Shorter and less noisy? Ugh, have you seen APL? I think they are good.
Ivy is described as a "plaything". Rethinking rules that everyone takes for granted and seeing where it leads seems like a fun way to play.
My first CASIO scientific calculator at school didn't understand operator precedence. Since that was how basic calculators worked too, it seemed perfectly natural. I loved that calculator - there was a simplicity and purity to it.
When I got a fancier one at uni that had been taught BIDMAS, it took me a long while to get used to, and I never really enjoyed using it. I don't think it was just because the squidgier keys were less satisfying to tap.