I was interested in how it would make sense to define complex numbers without first fixing the reals, but I'm not terribly convinced by the method here. It seemed kind of suspect to reduce the complex numbers purely to their field properties of addition and multiplication when these aren't even enough to get from the rationals to the reals (some limit-like construction is needed; the article uses Dedekind cuts later on). Anyway, the "algebraic conception" is defined as "up to isomorphism, the unique algebraically closed field of characteristic zero and size continuum", that is, you just declare it has the same size as the reals. And of course now you have no way to tell where π is, since it has no algebraic relation to the distinguished numbers 0 and 1. If I'm reading right, this can be done with any uncountable cardinality, with uniqueness up to isomorphism. It's interesting that algebraic closure is enough to get you this far, but with the arbitrary choice of cardinality and all these "wild automorphisms", doesn't this construction just seem... defective?
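For what it's worth, the uniqueness can be made precise (this is the Steinitz classification, if I'm remembering the name right): an algebraically closed field is determined up to isomorphism by its characteristic and its transcendence degree, and for an uncountable field the transcendence degree is just its cardinality. So the algebraic conception boils down to

    char K = 0  and  trdeg(K/Q) = continuum
      =>  K ≅ the algebraic closure of Q(x_i : i ∈ I) with |I| = continuum

and permuting those indeterminates is one source of the wild automorphisms.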
It feels a bit like the article is leaning on a legitimate debate, whether fixing i versus -i is natural, to push this other definition as an equal contender, but hardly any support is offered for it. I expect the last-place 28% showing in the poll, if it reflects serious mathematicians at all, comes from people who treat the topological structure as a given or didn't think much about the implications of leaving it out.
More on not being able to find π, as I'm piecing it together: given only the field structure, you can't write an equation that identifies π or even narrows it down. If π is the only free variable, then any such equation amounts to finding roots of a polynomial (you only have field operations!), and since π is transcendental the only such polynomial it satisfies is the zero polynomial (if you're allowed to use not-equals instead of equals, you can of course specify that π isn't in various sets of algebraic numbers). With other free variables, because the field's algebraically closed, you can fix π to whatever transcendental you like and still solve for the remaining variables. So it's something like the rationals plus a continuum's worth of arbitrary field extensions? Not terribly surprising that all instances of this are isomorphic as fields, but it's starting to feel about as useful as claiming the real numbers are "up to set isomorphism, the unique set whose cardinality matches the power set of the natural numbers". Of course it's got automorphisms; you didn't finish defining it.
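To make "fix π to whatever transcendental you like" concrete, the argument as I understand it (leaning on the axiom of choice): take a transcendence basis B of C over Q with π ∈ B and another one B' with e ∈ B'; any bijection B → B' sending π to e extends to an isomorphism Q(B) → Q(B'), and then to their algebraic closures, which are both C. So

    there is a field automorphism σ of C with σ(π) = e

and no statement built from field operations and rational constants can tell the two apart.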
You need some notion of order or of metric structure if you want to talk about numbers being "close enough" to π. This is related to the completeness property of the real numbers, which is rather important. Ultimately, the real numbers are also a rigorously defined abstraction of the everyday notion of approximating a quantity that exists but may not be fully known.
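For contrast (standard facts, assuming I'm not misremembering): "close enough to π" means something like |x − π| < ε, which already needs the order or a metric, and once the order is included everything tightens up:

    R is, up to isomorphism, the unique Dedekind-complete ordered field,
    and its only field automorphism is the identity
    (an automorphism fixes Q and preserves squares, hence the order, so it can't move anything).

So the wild automorphisms really are an artifact of throwing that structure away.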
There's a related idea in mathematics: the fact that the real numbers form a vector space over the rational numbers. If you scramble the basis vectors, you obtain an isomorphic vector space, but it is effectively a "permutation" of R. Of course, vector spaces don't even have multiplication, but one interesting thing is that exhibiting a basis at all requires the axiom of choice.
I think that actually constructing a "nontrivial" model of C under the field conception might require choosing a member from each of an infinite family of sets, i.e. applying the axiom of choice, similar to the way you get a basis for R as a vector space over the rationals.
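To sketch what that looks like in the vector-space case, as I understand it: Zorn's lemma hands you a basis B of R over Q (a Hamel basis), and swapping two basis elements b1 and b2 while fixing all the others defines a Q-linear bijection of R,

    f(q1·b1 + q2·b2 + Σ qi·bi) = q1·b2 + q2·b1 + Σ qi·bi    (qi ∈ Q, bi ∈ B, finitely many nonzero)

which isn't x ↦ cx for any c, and in fact is discontinuous with a graph dense in the plane. The field version plays the same game with a transcendence basis of C in place of a vector-space basis, which is why I'd also expect any "nontrivial" model to lean on choice.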
Ooh, I've run into this one before! I'm a big fan of interval index[0], which performs a binary search, so Josh's suggestion is the one I prefer as well (the implementation might sometimes optimize by transforming it into a table lookup like the other solutions). Searching for +`≠¨ in my BQN files turns up a few uses, including one used to group primitives into types in a utility involved in compiling BQN[1] and an annotated function used a few times in the markdown processor that builds BQN's website[2].
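For anyone who hasn't seen it, here's a tiny made-up example of Bins Up (⍋), BQN's interval index: the left argument is the sorted list of boundaries, and each element of the right argument gets the index of the interval it falls into, found by binary search.

    0‿10‿20‿30 ⍋ 5‿12‿29‿34   # ⟨ 1 2 3 4 ⟩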
"Ken Iverson - The dictionary of J contains an introductory comment that J is a dialect of APL, so in a sense the whole debate is Ken's fault! He is flattered to think that he has actually created a new language."
Dunno why electroly is dragging me into this but I believe you've misread the article. When it says "His languages take significantly after APL" it means the languages themselves and not their implementations.
I think the article expresses no position. Most source code implementing array languages is not, in fact, written in an APL-inspired style. I encourage you to check a few random entries at [0]; Kap and April are some particularly wordy implementations, and even A+ mostly consists of code by programmers other than Whitney, in a variety of styles.
I do agree that Whitney was inspired to some extent by APL conventions (not exclusively; he was quite a Lisp fan and that's the source of his indentation style when he writes multi-line functions, e.g. in [1]). The original comment was not just a summary of this claim but more like an elaboration, and began with the much stronger statement "The way to understand Arthur Whitney's C code is to first learn APL", which I moderately disagree with.
I unfortunately glossed over the part of the original comment that gives it substance: "The most obvious of the typographic stylings--the lack of spaces, single-character names, and functions on a single line--are how he writes APL too."
That's backing for a claim.
Also, I haven't once written APL. I think this might've been borderline trolling on my part, given how little real investment I have in the topic. Sorry.
It looks like a weirdo C convention to APLers too, though. Whitney writes K that way, but single-line functions in particular aren't used a lot in production APL, and weren't even possible before dfns were introduced (the classic "tradfn" always starts with a header line). All the stuff like macros with implicit variable names, type punning, and ternary operators just doesn't exist in APL. And what APL's actually about, arithmetic and other primitives that act on whole immutable arrays, is not part of the style at all!
It's just. So gross. Say it. Sudden interruption of slime coming up your throat. Like walking out the door into a spiderweb. Alphabetically I was mistaken but in every way that matters I was right.
Ordinarily I'd make fun of the Germans for giving such an ugly name to a nice concept, but I've always found "comfortable" to be rather unpleasant too (the root "comfort" is fine).
Well, do you know how it works? Don't judge a book by its cover and all. Although none of these are entirely aiming for elegance. The first is code golf and the other two have some performance hacks that I doubt are even good any more, but replacing ∧≢⥊ with ∧⌜ in the last gets you something decent (personally I'm more in the "utilitarian code is never art" camp, but I'd have no reason to direct that at any specific language).
The point that the article is addressing (but you have to ignore the image and study the equations to see this!) is that this sort of shifting can't equalize everything. In the span of 3 white keys C to E at the front, you have 2 black keys at the back, so if you take r to be the ratio of back width to white-key front width then you have 3 = 5r. But in the 4 keys F to B, you've got 3 black keys, so 4 = 7r. No single ratio works! So the article investigates various compromises. The B/12 solution seems the most straightforward to me: divide the white keys within each of the sections C to E and F to B equally at the back, and don't expect anyone to notice the difference.
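Just putting numbers on it, the two sections demand incompatible ratios:

    C to E:  3 = 5r  =>  r = 3/5 = 0.600
    F to B:  4 = 7r  =>  r = 4/7 ≈ 0.571

That's a discrepancy of roughly 5%, and the schemes in the article are different ways of deciding where it ends up.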
I don't see the problem... Use one unit of width per semitone. Then raise the black keys up a bit. Then for the white keys, elongate them and add some extra material to the sides of their fronts so that all the white-key fronts have the same width as well. The front and the back are two separate "problems", not interdependent.
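If I'm reading this right, in numbers: every key, black or white, gets 1 unit at the back, and the white fronts split the same total width evenly,

    back:   12 keys × 1 unit      = 12 units per octave
    front:   7 keys × 12/7 units  = 12 units per octave

so each white front is 12/7 ≈ 1.71 units, and the cut between front and back absorbs the difference on each key.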
The relevant operations for matrix multiply are leading-axis extension, shown near the end of [0], and Insert +˝ shown in [1]. Both for floats; the leading-axis operation is × but it's the same speed as + with floating-point SIMD. We don't handle these all that well, with needless copying in × and a lot of per-row overhead in +˝, but of course it's way better than scalar evaluation.
And the reason +˝ is fairly fast for long rows, despite that page claiming no optimizations, is that ˝ is defined to split its argument into cells, e.g. rows of a matrix, and apply + with those as arguments. So + is able to apply its ordinary vectorization, while it can't in some other situations where it's applied element-wise. This still doesn't make great use of cache, and I do have some special code working for floats that does much better with a tiling pattern, but I wanted to improve +˝ for integers along with it and haven't finished that part (widening on overflow is complicated).
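For reference, here's how those two pieces combine into a matrix product, the usual BQN spelling as far as I know (not necessarily what any particular benchmark ran):

    MatMul ← +˝∘×⎉1‿∞      # per row of 𝕨: × scales the rows of 𝕩 (leading-axis extension), +˝ sums them
    (2‿3⥊↕6) MatMul 3‿2⥊↕6   # 2‿2⥊10‿13‿28‿40

Both primitives see whole rows at a time, which is why it beats scalar evaluation even before any dedicated matmul code.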
To be clear, you are referring to the preface to "An Introduction to Array Programming in Klong", right? Having just checked it, I find this to be a very strange angle of attack, because that section is almost exclusively about why the syntax in particular is important. Obviously you disagree (I also think the syntax is overblown, and wish more writing focused on APL's semantic advantages over other array-oriented languages). I think this is a simple difference in taste and there's no need to reach so far for another explanation.