APL386 Unicode – An APL Font (abrudz.github.io)
136 points by chrispsn on Aug 18, 2020 | 65 comments



Still hoping for someone to design a font for J with ligatures so it can match APL's beauty for reading without making it more difficult to type, i.e. automatically rendering /: as ⍋, |: as ⍉, |. as ⌽, etc.


I made one as an experiment a while ago: https://github.com/sordina/japl


Perfect! Thank you.


An Emacs-specific solution, but there is this[0] example of J and prettify-symbols-mode; the results look something like this:

    quicksort=: (($:@(<#[) , (=#[) , $:@(>#[)) ({~ ?@#)) ^: (1<#)

    quicksort ⤆ ((∇⍛(<#⊣) , (=#⊣) , ∇⍛(>#⊢)) ({⍨ ?⍛#))⍣(1<#)
[0]: https://wjmn.github.io/posts/j-can-look-like-apl/


Ligatures are an abomination. But a Unicode J that uses those symbols natively would be cool; it's not the 90s anymore.


Keyboards are still awfully similar to how they were in the 90s, which I think is the salient point. Representing the symbols in the source isn't really an issue now, but typing them still would be.


Unicomp sells APL keycaps... and the good type of 90s keyboards.

https://www.pckeyboard.com/page/product/USAPLSET


Changing the keycap doesn't change what is typed when you press the key...


Nothing at the keyboard changes that. All the interpretation of keypresses happens at an abstraction layer above the hardware.


Typing symbols is not really an issue. First off, you can configure your text editing environment to automatically replace digraphs with their associated symbols (e.g. when you type i., it gets replaced with ⍳; see the sketch at the end of this comment). But second of all, memorizing an alternate keyboard layout isn't that big of a deal. Is it really harder to remember that grade up is super+shift+4 (⍋) than that it's slash+shift+; (/:)?

I would argue that difficulty in reading code and knowing what symbols represent what operations is a much more pertinent consideration. And though neither is especially mnemonic (can you really have a mnemonic for something as abstract as 'sort'?), it's much clearer that single-character grade down (⍒) is the reverse of single-character grade up (⍋), than for digraphs (/: and \:). (It's also easier to parse symbols when they're only one character.) Not least because unqualified / and \ represent very different operations—the first is reduce, and the second is either scan or prefix—as do /. and \.; so there's no precedent for it.

And so, assuming you accept the obvious superiority of graphical/unicode representation of symbols, the digraph method for typing them becomes superfluous. You now have to associate the mental concept of grade with two completely separate representations: the graphical representation of the completed character (⍋) and the ascii representation which you type (/:). You can't escape the latter, because every time you type a grade, you'll see the '/' on screen for a moment before you type the ':' and the character gets digraphed.

I mentioned in the beginning that you can configure your editor (and repl) to automatically replace digraphs with their associated symbols (so /: automatically gets replaced with ⍋). On its face, this seems functionally equivalent to the ligature suggestion, but it's not. Mainly, it affords flexibility. If you want to type digraphs in your environment, you can, all my criticisms aside. But I can configure my environment to use an alternate keyboard layout, avoid complicating my editor environment by introducing ligatures, and we can work together seamlessly. Doing it that way also adds flexibility to the language; if ⍋ is the single canonical representation of 'grade up', and / and : are separate symbols with their own unique semantics, then they can be freely juxtaposed. It'd also be somewhat of a pointless indirection to have ascii digraphs underlying what are essentially unicode-pictorial representations.

(Note: I said only 'digraphs', for clarity, but everything I said applies also to trigraphs, of which j has a couple.)
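
For concreteness, here's a minimal sketch of the kind of digraph-to-symbol substitution I mean, written as a standalone Haskell function rather than as any particular editor's configuration (the table is just a sample, not a complete J-to-APL mapping):

    import Data.List (isPrefixOf)

    -- Sample digraph table; a real setup would cover all of J's di-/trigraphs.
    digraphs :: [(String, String)]
    digraphs = [("/:", "⍋"), ("\\:", "⍒"), ("i.", "⍳"), ("|:", "⍉"), ("|.", "⌽")]

    -- Replace a leading digraph with its glyph; otherwise keep one character.
    substitute :: String -> String
    substitute [] = []
    substitute s@(c : rest) =
      case [(g, drop (length d) s) | (d, g) <- digraphs, d `isPrefixOf` s] of
        ((g, s') : _) -> g ++ substitute s'
        []            -> c : substitute rest

    main :: IO ()
    main = putStrLn (substitute "/:~ i. 5")   -- prints: ⍋~ ⍳ 5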


It looks to me like you see no difference between "I need to configure my environment" and "my environment is already configured by default". But, from my point of view, the difference is very important.

I like being able to type some J (or, most times, some K) in an email, in a note I take on my phone, in a comment in a C program, ... Yes, I could configure my environment(s) to do all this with a custom input method, but being able to do it with anything I find, without any configuration, is a huge advantage.


I don't know J, but an APL keyboard layout is not a problem for any major current OS, nor is a layout with ISO Layer 3 (Mac Option, Windows AltGraph) mnemonically allocated to useful symbols.


I think it comes down to accessibility. Requiring users to memorize a second keyboard layout is very demanding.


Though modern "emoji pickers" have extended the range of IME and soft keyboard tools that regular users use (often daily). I've been joking that the next APL is probably made from emoji, but it's not entirely a joke: the Windows emoji keyboard (Win+. or Win+;) has a pretty full Unicode math symbol section (click the Omega top-level tab and then flip through the bottom level tabs), and while it is missing some of the nice search features of the more colorful emoji, gives relatively quick access to a lot of options.


Is it more demanding than asking them to memorize a collection of (mostly arbitrary) ascii symbols and digraphs? If you learn apl, you have to remember that grade up is ⍋ (s-S-4); if you learn j, you have to remember that grade up is /:. The primary barrier to entry is remembering what operations you can do and how you can do them, not how to type them.


> Requiring users to memorize a second keyboard layout is very demanding

Not really. It's a natural learning process as you learn the language. Very easy. If you are truly learning the language (rather than just messing around) you can get to the point where you are comfortable typing most common symbols in a week or two.

I used to touch-type APL back in the day. Even though I don't use APL at all these days, every so often, when I do, I am amazed by how quickly I remember where the various symbols are on the keyboard. Some of it just makes sense, for example "iota" is shift-i, "rho" is shift-r, etc.


I have been working on a "modern" language based on APL.

It implements features such as macro-like custom syntax, first-class functions and closures.

It's developed in Kotlin and can be compiled to the JVM as well as natively using the new Kotlin multiplatform feature. JavaScript will also be possible once Kotlin multiplatform supports reflection. The project is still nowhere near finished, but it can at least run Conway's Game of Life:

https://peertube.mastodon.host/videos/watch/4a19ca9e-7ca6-41...

I haven't worked on it for the last several months due to other projects having higher priority, but I will probably get back to it later.

https://github.com/lokedhs/array


Agree this is a good idea.

To clarify: it'd be cool if languages supported multiple lexemes (?) for a single token, so that -> and → (U+2192) are equivalent.


Raku does this enthusiastically:

https://docs.raku.org/language/unicode_ascii



I was going to mention Julia.

Not only does it have it, it has it in (what I feel is) a really accessible way. Seen a symbol and you don't know what it is or how to reproduce it? Enter "?" and paste the symbol into the REPL and it'll tell you what it's called, the shortcut to make it, what it does, and the equivalent non-symbol function name.


And while not exactly what the parent asked for, the recently released "JuliaMono" font has ligatures for several common combos - including the right arrow - which amounts to practically the same thing:

https://juliamono.netlify.app/


I've been using Dhall ( https://dhall-lang.org ) and it does this. But it goes a step further: because `dhall format` defines the canonical form of any dhall code, it does the unicode substitution for you. So you get convenient entry with a normal keyboard, plus nice unicode symbols in code.


This is something Comma, the IDE for the Raku Programming Language, also does: https://commaide.com


Haskell has the UnicodeSyntax language extension, which accomplishes this: https://wiki.haskell.org/Unicode-symbols
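
A minimal sketch of what that looks like under GHC (gradeUp here is just an illustrative stand-in for APL's ⍋, not a library function):

    {-# LANGUAGE UnicodeSyntax #-}

    import Data.List (sortOn)

    -- With UnicodeSyntax, GHC accepts Unicode variants of several ASCII tokens:
    -- ∷ for ::, ⇒ for =>, → for ->, ∀ for forall. The signature below could
    -- equally be written  gradeUp :: Ord a => [a] -> [Int]
    gradeUp ∷ Ord a ⇒ [a] → [Int]
    gradeUp xs = map snd (sortOn fst (zip xs [0 ..]))

    main ∷ IO ()
    main = print (gradeUp "banana")   -- [1,3,5,0,2,4]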


IIRC Scala has this.


Or you could just make a custom input method. It's surprisingly easy.


I don't know if it's intentional, but the clumsy character shapes, the haphazard line weights and the "bleeding ink" effect whenever there is a curve or a corner give it a strong "early DTP" aura. Or possibly even earlier, like a mimeographed pamphlet in the late 1960s, when APL was new.


Kerning on Cyrillic is pretty much awful with this font. Even on the example string it looks like there is a space between Ь and Э, and Ш and Щ are almost glued together.


Isn't this a monospaced font?


It is, but that doesn't excuse it. The Latin characters in it don't have such problems.



I intend to obtain an IBM 2741 terminal and APL typeball before I start learning how to write APL. Anyone got any leads, other than eBay?


I used to own one. Pretty clever technology, particularly for the time.


Ę and ę are present, but it lacks Ą and ą, which Polish and Lithuanian use (mostly; there are other languages that use this particular letter too).


Is "with a fun, whimsical look, inspired by Comic Sans Serif" something someone wanted?


> Is [...] Comic Sans Serif something someone wanted?

Maybe. What comes to mind is OpenBSD. In the OpenBSD community, developers use Comic Sans exclusively in all official slideshows. One purpose is making an in-joke, another purpose is trolling everyone else not genuinely interested in system development ("Weaponized Comic Sans. This page scientifically designed to troll web hipsters.") [0]

[0] https://www.openbsd.org/papers/bsdcan14-libressl/mgp00025.ht...


When I'm in the right mood, I use Comic Code[1] for my terminal, along with a different palette than my normal "working" environment. Even if it's just placebo, it helps me shake up my thinking a bit.

[1] https://www.myfonts.com/fonts/tabular-type-foundry/comic-cod...


I actually use it for real work. It fixes a lot of the problems with Comic Sans and is surprisingly comfortable to read.


When I was doing stuff for a pre-Head Start research project, Comic Sans was a great font to do forms in. It went with the program, and people were less stressed by the forms even though they asked for the same information as the old ones. I really wish someone had studied that.


I sorta dig it; it looks fun and artistic. I don't know what's wrong with "W" though. Not sure if my browser's rendering is bad or there is a bug, because it doesn't look like the rest of the glyphs, and it looks nothing like "M".

EDIT: In case people are wondering this is what I see: https://imgur.com/a/P3npCYW

As you can see, the W looks "bolder" (?) than the others.


> https://imgur.com/a/P3npCYW

Please link to the actual image ( https://i.imgur.com/qMshely.png ), not imgur's gallery crapware.

But yeah, the "W" and to a lesser extent the "o" look more heavily struck than the rest of the letters. (Same problem shows up in font viewer, so it's definitely not just your browser.)


The original APL385 font is one of my favorites for coding.


Is it also a fun, whimsical look inspired by Comic Sans?


It definitely looks like it could be.


It reminds me of some old-school typewriter and printer fonts, and indeed it's a lot like Consolas.

I like it, though some of the spacing is odd (might be able to fix it even if it is a monospaced font.)


Absolutely. See Fantasque for example.


It's interesting to me to look at these projects as alt-history, as the dead-end technical choice that they were.

Has any programming language since then tried to use more than ASCII for its keywords?


I program in Agda pretty often, and the community usually uses Unicode characters for most things, so I do too.

The impl of AVL trees in stdlib: https://github.com/agda/agda-stdlib/blob/master/src/Data/Tre...

Some basic properties of natural numbers: https://github.com/agda/agda-stdlib/blob/master/src/Data/Nat...

It makes the code look absolutely gorgeous, readable and it's very easy to type too. I use Emacs agda-mode so it just automatically replaces e.g. \r with → or \== with ≡ etc...

I don't use Agda for theorem proving; I make real-life programs in Agda, compile them to Haskell, and then compile with GHC to executables.


What font do you use for this? It looks weird with my defaults.

I once seriously considered trying to build something that included math symbols in the syntax. It's pretty cool to see that I don't have to.


Not currently on my computer with that setup but afair I use Inconsolata.


That's pretty cool. What kind of problems do you find yourself solving with your Agda code? (Or, asked another way, what sort of stuff do your programs do?)


Currently my programs parse text. I have a JSON formatter and a Lisp formatter. I'm also writing an Agda preprocessor in Agda, i.e. it parses Agda-like code, processes it, and prints valid Agda code. I use them in my other projects as tooling.

Using Sized Types you can do pretty much anything in Agda, though. It's not technically Turing complete, since all programs are proved to terminate, but you can go very far with it. I really like it because it gives very robust guarantees, which makes it very easy to think about the state of your program.


Julia also allows Unicode characters, which I think is great for math-heavy programs. I love that about it. I always forget the keybindings for them though.


agda-mode in Emacs has a fallback option for LaTeX-like aliases, e.g. \infty gives ∞ and \cdot gives ·; it's very convenient if you use LaTeX often. Other than that, if you have no clue, M-x describe-char explains how to produce a character.


> I use Emacs agda-mode

I think Emacs is basically required to use Agda, right? To the point that the set of Agda users is a strict subset of the set of Emacs users.


Pretty much, yes. It's not "required" required, but in order to use the standard library (i.e. to type the Unicode chars) you need to use Emacs, and the Emacs mode is extremely useful for developing Agda code. You can still compile and type-check using the `agda` binary, but that's pretty much the only thing you can do.


Haskell's GHC compiler has a tiny UnicodeSyntax extension and some packages that explore the idea further.

UnicodeSyntax: https://downloads.haskell.org/~ghc/latest/docs/html/users_gu...

Base Library Symbols: http://hackage.haskell.org/package/base-unicode-symbols

Containers Library Symbols: http://hackage.haskell.org/package/containers-unicode-symbol...
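
Those packages mostly just export Unicode-named aliases for ordinary operators; here's a hand-rolled sketch of the same idea (these definitions are mine, not the packages' actual exports):

    {-# LANGUAGE UnicodeSyntax #-}

    -- GHC allows Unicode symbol characters in operator names, so aliases like
    -- these are plain definitions; the packages above do this systematically.
    (∘) ∷ (b → c) → (a → b) → a → c
    (∘) = (.)

    (≤) ∷ Ord a ⇒ a → a → Bool
    (≤) = (<=)

    main ∷ IO ()
    main = print (filter (≤ 3) [1 .. 10], (negate ∘ abs) (-4))   -- ([1,2,3],-4)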


Mathematica does this, and they solve the keyboard problem with floating palettes of symbols.


They also have ESC symbolname ESC, with autocomplete.


The Z ("zed") notation -- but that is a non-executable specification language:

https://en.wikipedia.org/wiki/Z_notation


Racket uses λ for “lambda” and it’s pretty common in Racket code.


Urbit's programming language Hoon uses two character "runes" instead of keywords. Here's the standard library and compiler source, for example: https://github.com/urbit/urbit/blob/master/pkg/arvo/sys/hoon...

There's a full rune reference available at https://urbit.org/docs/reference/hoon-expressions/rune/, but tl;dr: they're grouped into families, with the first character broadly indicating what the rune does. AST nodes that have to do with "conditionals" are ?:, ?., ?@, etc.

I don't think anyone uses a font that renders them as non-ASCII ligatures, but it wouldn't be a big jump.


Obviously it betrays a very Anglo-centric view of the computer engineering landscape, but frankly I think that's a good thing, as it allows easier exchange.


Would look right at home on a 1990 Mac.



