While we're at it, could we please bring back file upload/download support, like terminals used to have 20 years ago? The lrzsz ZMODEM implementation still compiles on modern systems despite not having been touched in 20 years, and in any event server-side packages are readily available. I got it working with iTerm2 (one of the few, if not the only, terminals on macOS that let you hook escape sequences), but this is really something that should be built into terminals.
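For anyone who hasn't used it in a while, a minimal sketch of what that looks like from the remote shell, assuming lrzsz is installed there and the local terminal (or something like an iTerm2 trigger) handles the ZMODEM escape sequences; the filename is just a placeholder:

    sz results.tar.gz    # remote -> local: stream the file down; the terminal prompts where to save it
    rz                   # local -> remote: the terminal prompts for a file, and rz writes it into the remote cwd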
Once upon a time you could easily push print jobs through the terminal, too. Unfortunately the terminals didn't prompt the user before spooling the job locally, so you could send print jobs to logged-in users via write(1) and wall(1). Good times....
Which, come to think of it, is probably an attack vector people should keep in mind when drafting and implementing these protocols. Who here hasn't accidentally cat'd a file downloaded from the internet before resolving to trust it? Perhaps you were cat'ing it to vet it, but forgot to pipe it to od(1) or hexdump(1).
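For instance, the safer habit looks something like this (the flags are one reasonable choice and the filename is made up):

    hexdump -C suspicious-download.txt | less   # hex plus printable ASCII, so escape sequences are shown as text rather than interpreted
    od -c suspicious-download.txt | less        # roughly equivalent view using od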
I vaguely remember file upload/download using Kermit, but it's been so long ... I'm not sure this should be part of "shell integration", though.
However, I could be open to adding it to DomTerm, if I can understand the benefit and usage. What would you use this feature for, and what should the user interface be? It only makes sense when connecting to a remote computer, but I assume you'd use ssh for that, in which case you could use zssh or scp. Please explain why these are insufficient.
It's mostly about convenience and reduced friction. I can't count how many times I've been working interactively on a remote server, which can require non-trivial invocation incantations (e.g. a different ssh config file) or multiple hops, and wanted to be able to quickly download some data or upload a binary. Of course you can usually copy it to a file and download it on the side, but the friction can be stressful.
For example, at my current job a shell session often requires logging into a k8s node through a bastion host, which in turn requires ScaleFT (which causes automation and configuration headaches), and from there maybe entering a container.
To use scp or sftp you need to follow that whole sequence again, which can be a headache if you've got 5 terminal windows open and have been bouncing around machines debugging a problem. Just getting to the same working directory can seem like a chore. I usually copy a file to /tmp if I'm going to scp it, but that's not very secure (e.g. a previously unreadable file might become readable under /tmp because it was 0644 but sat under directories lacking group or other execute permission, or because it was 0600 and owned by a user, like root, without ssh/scp/sftp remote access), and I hate myself for this habit.
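To make that /tmp failure mode concrete, here's a sketch with made-up paths: the file itself is 0644, but the directory above it blocks traversal, and copying it to /tmp silently discards that protection:

    ls -ld /opt/app /opt/app/secrets.conf
    # drwxr-x---  root app   /opt/app               <- no execute bit for 'other', so outsiders can't reach inside
    # -rw-r--r--  root app   /opt/app/secrets.conf  <- world-readable, but only in theory
    cp /opt/app/secrets.conf /tmp/    # with a typical 022 umask the copy lands as 0644 in a world-traversable directory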
zssh is problematic because you need to know ahead of time that you'll want to transfer files (you have to start the session with it), and in many contexts you rarely know that in advance.
A server-side utility like lrzsz might not be installed, but you can install it without having to end your session. The link in the tool chain that truly requires widespread, upstream adoption to deliver the convenience dividend is the local terminal software.
I, and I suspect a great number of developers, would find much more value in easy file transfers across existing sessions than in being able to display graphics or even more featureful command-editing capabilities. I think these newer semantic terminal extensions are an awesome idea (a friend and I almost tried our hand at it 12 years ago, actually), but I know I would immediately and regularly benefit from in-band file transfer support, whereas other improvements are more aspirational. If this specification stands a chance at widespread adoption by mainstream terminal applications, then I pray that it directly addresses the file (or binary blob) transfer pain point, and the easiest way to do that might be to build on ZMODEM, as lrzsz is already widely available in Linux and BSD package repositories.
There are two separate issues, I think: the protocol and the UI. Since DomTerm has extensible escape sequences to send and receive arbitrary data, it might make sense to start with a simple DomTerm-specific protocol and implement a UI for that. I.e. instead of sz/rz you would use a 'domterm put-file'/'domterm get-file' command. That would require installing the domterm command on the remote end, so it's not as convenient as sz/rz. (On the other hand, using domterm at the remote end enables detachable sessions, so that is the long-term plan anyway.)
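A rough sketch of how that could look from the remote shell, assuming put-file mirrors sz and get-file mirrors rz; the subcommands are the hypothetical ones named above and the filenames are placeholders:

    domterm put-file core.tar.gz    # remote -> local: the terminal-side DomTerm prompts where to save it
    domterm get-file                # local -> remote: the terminal prompts for a file and writes it into the remote cwd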
Once the UI is there, if there is demand, we can add the zmodem protocol. There are JavaScript implementations that can be used.
A few terminals support special escape sequences for "shell integration", which allow nicer display and features for shells and other REPLs. In addition to the actual proposed specification (in the main article link), I've written an article with screenshots (http://per.bothner.com/blog/2019/shell-integration-proposal/) to show what functionality this proposal enables, based on the current DomTerm implementation (https://domterm.org). I welcome feedback on the proposed specification, the DomTerm implementation, the feature set, and the blog article.