Still hoping for someone to design a font for J with ligatures so it can match APL's beauty for reading without making it more difficult to type, i.e. automatically rendering /: as ⍋, |: as ⍉, |. as ⌽, etc.
The salient point, I think, is that keyboards are still awfully similar to how they were in the 90s. Representing the symbols in the source isn't really an issue anymore, but typing them still is.
Typing symbols is not really an issue. First off, you can configure your text editing environment to automatically replace digraphs with their associated symbols (e.g. when you type i. replace it with ⍳ automatically). But second of all, memorizing an alternate keyboard layout isn't that big of a deal. Is it really harder to remember that grade up is super+shift+4 (⍋) than that it's slash+shift+; (/:)?
I would argue that difficulty in reading code and knowing which symbols represent which operations is a much more pertinent consideration. And though neither is especially mnemonic (can you really have a mnemonic for something as abstract as 'sort'?), it's much clearer that single-character grade down (⍒) is the reverse of single-character grade up (⍋) than it is for the digraphs (/: and \:). (It's also easier to parse symbols when they're only one character.) Not least because unqualified / and \ represent very different operations (the first is reduce, and the second is either scan or prefix), as do /. and \.; so there's no precedent for reading the pair as opposites.
And so, assuming you accept the obvious superiority of graphical/unicode representation of symbols, the digraph method for typing them becomes superfluous. You now have to associate the mental concept of grade with two completely separate representations: the graphical representation of the completed character (⍋) and the ascii representation which you type (/:). You can't escape the latter, because every time you type a grade, you'll see the '/' on screen for a moment before you type the ':' and the digraph is converted.
I mentioned in the beginning that you can configure your editor (and repl) to automatically replace digraphs with their associated symbols (so /: automatically gets replaced with ⍋). On its face, this seems functionally equivalent to the ligature suggestion, but it's not. Mainly, it affords flexibility. If you want to type digraphs in your environment, you can, all my criticisms aside; but I can configure my environment to use an alternate keyboard layout and avoid complicating my editor with ligatures, and we can still work together seamlessly. Doing it that way also adds flexibility to the language: if ⍋ is the single canonical representation of 'grade up', and / and : are separate symbols with their own unique semantics, then they can be freely juxtaposed. It'd also be somewhat of a pointless indirection to have ascii digraphs underlying what are essentially unicode-pictorial representations.
(Note: I said only 'digraphs', for clarity, but everything I said applies also to trigraphs, of which J has a couple.)
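For concreteness, here's a minimal sketch of that substitution as a one-shot batch rewrite. It's in Python and purely illustrative: the table is a small sample rather than J's full vocabulary, and a real editor hook would trigger at the cursor as you type instead of over whole strings.

    # Illustrative digraph/trigraph -> glyph substitution (sample table only).
    SUBSTITUTIONS = {
        "/:": "⍋",   # grade up
        "\\:": "⍒",  # grade down (backslash escaped for Python)
        "|:": "⍉",   # transpose
        "|.": "⌽",   # reverse
        "i.": "⍳",   # index generator / iota
    }

    def substitute(text):
        # Longest sequences first, so a trigraph is never misread as a
        # digraph followed by a stray character.
        for seq in sorted(SUBSTITUTIONS, key=len, reverse=True):
            text = text.replace(seq, SUBSTITUTIONS[seq])
        return text

    print(substitute("/: data"))  # prints: ⍋ data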
It looks to me like you see no difference between "I need to configure my environment" and "my environment is already configured by default". But, from my point of view, the difference is very important.
I like being able to type some J (or, more often, some K) in an email, in a note I take on my phone, in a comment in a C program, and so on. Yes, I could configure my environment(s) to do all this with a custom input method, but being able to do it on anything I find, without any configuration, is a huge advantage.
I don't know J, but an APL keyboard layout is not a problem for any major current OS, nor is a layout with ISO Layer 3 (Mac Option, Windows AltGraph) mnemonically allocated to useful symbols.
Though modern "emoji pickers" have extended the range of IME and soft-keyboard tools that regular users use (often daily). I've been joking that the next APL will probably be made from emoji, but it's not entirely a joke: the Windows emoji keyboard (Win+. or Win+;) has a pretty full Unicode math-symbol section (click the Omega top-level tab and then flip through the bottom-level tabs), and while it is missing some of the nice search features of the more colorful emoji, it gives relatively quick access to a lot of options.
Is it more demanding than asking them to memorize a collection of (mostly arbitrary) ascii symbols and digraphs? If you learn apl, you have to remember that grade up is ⍋ (s-S-4); if you learn j, you have to remember that grade up is /:. The primary barrier to entry is remembering what operations you can do and how you can do them, not how to type them.
> Requiring users to memorize a second keyboard layout is very demanding
Not really. It's a natural part of learning the language, and very easy. If you are truly learning the language (rather than just messing around), you can get to the point where you are comfortable typing the most common symbols in a week or two.
I used to touch-type APL back in the day. Even though I don't use APL at all these days, every so often, when I do, I am amazed by how quickly I remember where the various symbols are on the keyboard. Some of it just makes sense: for example, "iota" is shift-i, "rho" is shift-r, etc.
I have been working on a "modern" language based on APL.
It implements features such as macro-like custom syntax, first-class functions and closures.
It's developed in Kotlin and can be compiled for the JVM as well as natively, using the new Kotlin multiplatform feature. JavaScript will also be possible once Kotlin multiplatform supports reflection. The project is still nowhere near finished, but it can at least run Conway's Game of Life.
Not only does it have it, it has it in what (I feel) is a really accessible way. Seen a symbol and you don't know what it is or how to reproduce it? Enter "?" and paste the symbol into the repl, and it'll tell you what it's called, the shortcut to make it, what it does, and the equivalent non-symbol function name.
And while not exactly what the parent asked for, the recently released "JuliaMono" font has ligatures for several common combos, including the right arrow, which amounts to practically the same thing.
I've been using Dhall ( https://dhall-lang.org ) and it does this. But it goes a step further: because `dhall format` defines the canonical form of any dhall code, it does the unicode substitution for you. So you get convenient entry with a normal keyboard, plus nice unicode symbols in code.
I don't know if it's intentional, but the clumsy character shapes, the haphazard line weights and the "bleeding ink" effect whenever there is a curve or a corner give it a strong "early DTP" aura. Or possibly even earlier, like a mimeographed pamphlet in the late 1960s, when APL was new.
Kerning on Cyrillic is pretty much awful with this font. Even on the example string it looks like there is a space between Ь and Э, and Ш and Щ are almost glued together.
> Is [...] Comic Sans Serif something someone wanted?
Maybe. What comes to mind is OpenBSD: in the OpenBSD community, developers use Comic Sans exclusively in all official slideshows. One purpose is making an in-joke; another is trolling everyone not genuinely interested in system development ("Weaponized Comic Sans. This page scientifically designed to troll web hipsters.") [0]
When I'm in the right mood, I use Comic Code[1] for my terminal, along with a different palette than my normal "working" environment. Even if it's just placebo, it helps me shake up my thinking a bit.
When I was doing stuff for a pre-Head Start research project, Comic Sans was a great font for forms. It went with the program, and people were less stressed by the forms even though they asked for the same information as the old ones. I really wish someone had studied that.
I sorta dig it; it looks fun and artistic. I don't know what's wrong with the "W" though. Not sure if my browser is rendering it badly or there is a bug, because it doesn't look like the rest of the ligatures; it looks nothing like "M".
But yeah, the "W" and to a lesser extent the "o" look more heavily struck than the rest of the letters. (Same problem shows up in font viewer, so it's definitely not just your browser.)
It makes the code look absolutely gorgeous and readable, and it's very easy to type too. I use Emacs agda-mode, so it just automatically replaces e.g. \r with → or \== with ≡, etc.
I don't use Agda for theorem proving; I make real-life programs in Agda, compile them to Haskell, and then compile with GHC to executables.
That's pretty cool. What kind of problems do you find yourself solving with your Agda code?
(Or, asked another way, what sort of stuff do your programs do?)
Currently my programs parse text. I have a JSON formatter and a Lisp formatter. I'm also writing an Agda preprocessor in Agda, i.e. it parses Agda-like code, processes it, and prints valid Agda code. I use them in my other projects as tooling.
Using Sized Types you can do pretty much anything in Agda, though. It's not technically Turing complete, since all programs are proved to terminate, but you can go very far with it. I really like it because it gives very robust guarantees, which makes it very easy to think about the state of your program.
Julia also allows Unicode characters, which I think is great for math-heavy programs. I love that about it. I always forget the keybindings for them though.
agda-mode in Emacs has a fallback option for LaTeX-like aliases, e.g. \infty gives ∞ and \cdot gives ·. It's very convenient if you use LaTeX often. Other than that, if you have no clue, M-x describe-char explains how to produce a character.
Pretty much, yes. It's not "required" required, but in order to use the standard library (i.e. use Unicode chars) you need to use Emacs, and the Emacs mode is extremely useful for developing Agda code. You can still compile and type-check using the `agda` binary, but that's pretty much the only thing you can do.
There's a full rune reference available at https://urbit.org/docs/reference/hoon-expressions/rune/, but tl;dr: they're grouped into families, with the first character broadly indicating what the rune does. AST nodes that have to do with "conditionals" are ?:, ?., ?@, etc.
I don't think anyone uses non-ASCII digraph fonts for them, but it wouldn't be a big jump.
Obviously it betrays a very Anglo-centric view of the computer-engineering landscape, but frankly I think that's a good thing, as it allows easier exchange.