
Honestly this is the area I'm most excited about. SRS is just one facet---correct choice of input media, datatypes, and mental models are a big deal here.

I'm a math undergrad, and one thing that drives me crazy is how much rewriting I do. What I want is a semi-tangible expression tree that I can manipulate and "snapshot" (to show my work) into LaTeX. The LaTeX rendering was easy, but figuring out what the "core tree manipulation UX actions" are is another thing.
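For what it's worth, the "snapshot into LaTeX" half of this can already be crudely approximated with SymPy; here's a minimal sketch (the particular expression and steps are just illustrative):

```python
import sympy as sp

x = sp.Symbol("x")
expr = (x + 1)**2 - (x - 1)**2

# Each "editor action" is a rewrite on the expression tree...
steps = [expr]
steps.append(sp.expand(steps[-1]))    # expand both squares
steps.append(sp.simplify(steps[-1]))  # collapses to 4*x

# ...and each intermediate tree can be "snapshotted" as LaTeX.
for step in steps:
    print(sp.latex(step))
```

The missing piece is exactly what the comment describes: a tactile way to pick which subtree to rewrite, rather than whole-expression calls like `expand`.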

Similarly: am I the only one tired of typing? I'd kill for a gait analyzer or a haptic glove that would let me program while walking through the park.

Some of this stuff sounds far-fetched, but looking at Emacs, or chorded stenotype machines, or paredit really drives home that we could be doing so, so much better, with very little new tech required.

When I was a kid I had mild synesthesia where I'd tie the words "fourth" and "favorite" to a feeling of "reaching." I still associate Clojure's map with spreading butter, and reduce with "rolling up." Chinese abacus math, developed to the point where the actual abacus is no longer necessary, is a thing to behold. The human mind seems to make these connections easily and automatically. Why not leverage that?

I'd love if text became just the shared human-legible bytecode for our own personal interfaces.

(Check out this clip: https://m.youtube.com/watch?v=cHIo96yBf70 where some people get fed targeting data from an AI plugged into all the camera feeds---and note especially around the middle of the video how the girl customizes the interface on the fly)



I'm currently working on a structured editor that is, at its core, a customizable rewriting engine: you specify a grammar, and essentially all editor actions are rewrites on that grammar. My current target grammars are for simple LISPy languages because of their simplicity, but my original (and eventual) goal was, and is, a tool for exploratory mathematics. I've always felt mathematical manipulation to be more of a tactile, synesthetic phenomenon than a verbal/linguistic one. The current state of haptic input isn't quite up to reifying this yet, though it is improving very rapidly these days, so I'm trying to tackle some of the software side and seeing what can be done toward this goal with extant input devices.
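The "all editor actions are rewrites on the grammar" idea can be sketched in a few lines. This is a hypothetical toy, not the editor described above, and all names are made up:

```python
# Toy model: expressions are nested tuples, and an "editor action"
# is a rewrite rule applied at a cursor path within the tree.

def rewrite_at(tree, path, rule):
    """Apply `rule` to the subtree at `path`; return a new tree."""
    if not path:
        return rule(tree)
    i = path[0]
    return tree[:i] + (rewrite_at(tree[i], path[1:], rule),) + tree[i + 1:]

def commute_add(node):
    """Rewrite rule: (+ a b) -> (+ b a)."""
    op, a, b = node
    assert op == "+"
    return (op, b, a)

expr = ("*", ("+", "x", "y"), "z")
swapped = rewrite_at(expr, (1,), commute_add)
print(swapped)  # ('*', ('+', 'y', 'x'), 'z')
```

Because trees are immutable tuples, every rewrite yields a fresh tree, which makes the "snapshot to show my work" history from the parent comment nearly free.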

What I've done is pretty rudimentary so far, but I have a talk about it here if anyone is interested: https://www.youtube.com/watch?v=CnbVCNIh1NA


Style checking that enforces a single representation for a given AST now seems to be a thing in JavaScript land. One less worry for often-novice programmers.

Which makes code files simply an AST storage format, and makes people's text editors and language modes into representational AST editors. So... it seems industrial use of semantic editing has finally begun?

Relieved of the impedance mismatch between code-as-AST and code-as-richly-expressive-text, perhaps there's an opportunity here to quicken the innovation and deployment of non-traditional editing, at a time when XR seems likely to be creating the next "great software rewrite".


I watched Andrew's talk recently, and I highly recommend it to other readers interested in the above topic. It's full of clearly expressed, interesting ideas.

When I read:

> What I want is a semi-tangible expression tree that I can manipulate and "snapshot" (to show my work) into Latex. The Latex rendering was easy, but figuring out what "core tree manipulation ux actions" are is another thing.

—in the parent comment, it was the first thing that came to mind.


> What I want is a semi-tangible expression tree that I can manipulate and "snapshot" (to show my work) into Latex.

It definitely seems like there's a gap in the computer-math ecosystem: you've got systems like LaTeX that are great for typesetting math you've already mostly figured out, and you've got systems like SymPy that are good if you want the computer to do the vast majority of the work, but I'm not aware of much in between.

It would be really cool if there were something like Photoshop for math, where you could work with expressions in a visual, semi-tangible way and the computer would give you "smart" tools to help you out.
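A rough sketch of what one such "smart" tool might feel like, using SymPy's pattern matching to apply a user-chosen identity only where it matches (the tool itself is hypothetical; only `Wild` and `replace` are real SymPy API):

```python
import sympy as sp

x, y = sp.symbols("x y")
expr = sp.sin(x)**2 + sp.cos(x)**2 + sp.sin(y)

# A "smart tool": the user picks the identity sin(t)^2 = 1 - cos(t)^2,
# and the system applies it wherever the pattern matches.
t = sp.Wild("t")
def apply_identity(e):
    return e.replace(sp.sin(t)**2, 1 - sp.cos(t)**2)

print(apply_identity(expr))  # the sum cancels down to sin(y) + 1
```

The Photoshop analogy would be letting the user lasso a subexpression first, so the identity fires only inside the selection instead of everywhere.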


Being a math undergrad puts you in the top 1% of the population in terms of the ability to manipulate symbols.

So your semi-tangible expression tree will only be comprehensible to maybe ten thousand users worldwide, at most.

This is why "worse is better" is a thing. Most of the population is in the middle of the bell curve, and they struggle with things developers find easy. Most of the developer population is to the right, using tools and languages that are sort-of-workable but not particularly sophisticated.

The people who find symbolic thinking easy and natural are on the extreme right.

I think assistive AI has limited prospects for fixing this, because the XY problem and the Dunning-Kruger effect mean that people don't know what questions they need to ask. And to some extent symbolic perception relies on experiential qualia that most people don't have access to.

AI is going to be go-big-or-go-home. It will either be a physical prosthetic, giving users direct mental experience of patterns and data through brain stimulation, or it will be external, set up to anticipate and solve problems with only the vaguest strategic direction from the user.

Current technologies can't do much, because UIs are low-bandwidth and can only do a limited amount to help people with average memories and average abstract thinking. Which is why most apps and web pages are brochures, forms, or shop fronts: familiar, uncomplicated models that don't require abstract thinking.



