Typically, if you were writing your hypothetical SQL client in rc, you'd implement an interface that looks something like:
<>/mnt/sql/clone {
	# fd 0 is now the clone file, open for read/write; send the query
	echo 'SELECT * from ...' >[1=0]
	# reading the clone file back yields the connection number
	cat /mnt/sql/^`{read}^/data # or awk, or whatever
}
This is also roughly how webfs works. Making network connections from the shell follows the same pattern, and so, for that matter, does making network connections from C; it's just that the file descriptor management is in C.
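For comparison, here's a minimal sketch of the same clone-file dance in Plan 9 C, assuming the standard IP stack's /net/tcp layout (in practice dial(2) wraps all of this; the netdial name here is just for illustration):

#include <u.h>
#include <libc.h>

/* Open the clone file, which allocates a fresh connection directory
 * and doubles as its ctl file; drive the connection with plain-text
 * control messages; the data file is the byte stream itself. */
int
netdial(char *addr)	/* addr like "1.2.3.4!80" */
{
	char buf[64], path[64];
	int ctl, fd, n;

	ctl = open("/net/tcp/clone", ORDWR);
	if(ctl < 0)
		return -1;
	n = read(ctl, buf, sizeof buf - 1);	/* connection number */
	if(n <= 0){
		close(ctl);
		return -1;
	}
	buf[n] = 0;
	if(fprint(ctl, "connect %s", addr) < 0){
		close(ctl);
		return -1;
	}
	snprint(path, sizeof path, "/net/tcp/%d/data", atoi(buf));
	fd = open(path, ORDWR);
	close(ctl);	/* the open data fd keeps the connection alive */
	return fd;
}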
This is... I don't know. I don't get why I would care to sling SQL over a file system versus a network socket.
I mean, Postgres could offer an SSH interface as a dumb pipe to psql and just have your application push text SQL queries through it. But it doesn't; it offers a binary protocol over a network socket. All the database engines have faced the same decision point and have basically gone down the same path of implementing a wire protocol over a persistent socket connection.
So yeah, I don't get what doing things this way would give me as either a service provider or a service consumer. It looks like video game achievements for OS development nerds, "unlocked 'everything is a file'." But it doesn't look like it actually enables anything meaningful.
But if it requires understanding a data protocol, it doesn't really matter whether it's over the file system, a socket, or a flock of coked-up carrier pigeons. You still need to write custom user-space code somewhere. Exposing it over the file system doesn't magically make composable applications; it just shuffles the code around a bit.
In other words, the transport protocol is just not the hard part of anything.
It's not hard, but it sure is a huge portion of the repeated boilerplate glue. Additionally, the data protocols are also fairly standardized in Plan 9: the typical format is tabular plain text with '%q'-verb quoting.
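To make that convention concrete, a hedged Plan 9 C sketch: quotefmtinstall() from libc installs the %q verb, which single-quotes a field only when it contains spaces or quotes (embedded quotes are doubled, rc-style):

#include <u.h>
#include <libc.h>

void
main(void)
{
	quotefmtinstall();	/* install the %q format verb */
	/* space-separated fields; %q quotes only when necessary */
	print("%q %q %q\n", "alice", "/usr/alice", "likes 'plan 9'");
	/* prints: alice /usr/alice 'likes ''plan 9''' */
	exits(nil);
}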
There's a reason that the 9front implementation of things usually ends up at about 10% the size of the upstream.
The benefit is that you can allocate arbitrary computers to compute arbitrary things. As it is now, you have to use Kubernetes, and it's a comedy. The effect is perhaps the same, but there are dozens of layers of abstraction that will forever sting you.
You're thinking from the perspective of the terminal user, i.e., a drooling, barely-conscious human trying to grasp the syntax and legal oddities of long-dead humans. Instead you need to think from the perspective of a Star Trek captain. Presumably they aren't manually slinging SQL queries; such tasks are best automated. We are all the drooling terminal user in the end, but Plan 9 at least lets you pretend to be competent.
I’ve been playing on and off with 9front on an old laptop. I’ve been having a lot of fun with it; it’s fun to write code for, but I have had a hard time using it as anything but a toy.
I would love to use it as my main desktop, but ultimately (and kind of unsurprisingly), the blocker is the lack of a modern browser and a lack of video acceleration.
I’m sure I could cobble something together with virtualization for the former, but I don’t see a fix for video acceleration coming.
Maybe I could install it on a server or something.
I did the same with an old Thinkpad but somehow found it relies too heavily on the mouse. I might still go back to it because I love how far they've taken the "everything is a file" idea and would like to experiment more with that.
In a different Plan 9 thread on HN (though I'm having a bit of trouble finding it), someone mentioned the idea of using Plan 9 to build a custom router.
I have a very rough mental model of how that could be done, and I think it would be cool to say I have that, but I haven't been bothered to replace my beloved NixOS router yet.
That's interesting, thanks. I feel a need for a simple multitasking/networking OS for a synthesizable RV32I core (not RTOS-like, but more like Unix or CP/M). It would be nice to try Plan 9 on it once a port is out.
Regardless, ASCII encoding isn’t raw data. You’re making software-engineer assumptions. Statistical noise is introduced 4-5 steps before the data is recorded digitally.
Even after it’s digitized, more noise is introduced through recording errors and normalization.
To understand the original distribution, the entire workflow needs to have been recorded.
One of my clients is an AI startup in the security industry. Their business model is to use AI agents to perform the initial assessment and then cut the security contractors' hours by 50% to complete the job.
I don't think AI will completely replace these jobs, but it could reduce job numbers by a very large amount.
%n is the only conversion specifier that actually writes to memory. It's occasionally convenient, but it's also largely unnecessary: the caller can typically make multiple calls to printf, for example, noting the return value for each one. Or use strlen and fputs. And so on.
The C11 Annex K printf_s functions don't support it at all, so it's clearly already on the naughty list even from the standard's perspective.
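A tiny sketch of the %n-free alternative: use printf's return value (the number of characters written) rather than having printf store the count through a pointer.

#include <stdio.h>

int
main(void)
{
	/* with %n: printf("hello%n, world\n", &n); stores 5 into n */
	/* without %n: note the return value of each call instead */
	int n = printf("hello");
	printf(", world\n");
	printf("the first call wrote %d characters\n", n);
	return 0;
}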
As soon as you forget (or your adversary manages to delete) a \0 at the end of any string, you may induce buffer overflows, get the application to leak secrets, and so on. Several standard library functions related to strings are prone to timing attacks, or have weird semantics that may expose you to attack. If you roll your own security-related functions (typical example: a scrubber for strings that hold secrets), you need to make sure these do not get optimised away by the compiler.
There's an awful lot of pitfalls and footguns in there.
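To make the scrubber pitfall concrete, a minimal sketch: a plain memset on a buffer that is never read again is a dead store the compiler may delete, and calling through a volatile function pointer is one common way to keep it (explicit_bzero or memset_s are the platform-provided alternatives where available).

#include <string.h>

/* the volatile function pointer prevents the compiler from proving
 * the call is a dead store and eliminating it */
static void *(*volatile memset_v)(void *, int, size_t) = memset;

void
scrub(void *secret, size_t len)
{
	memset_v(secret, 0, len);
}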
I thought you meant that a hello-world or similar program only handling strings would be fundamentally insecure, but rather you mean that it is hard to write secure code with C strings.
There are indeed a lot of pitfalls and footguns in C in general, but I would argue that has more to do with C's memory-focused design. I always feel like C strings are a bit of an afterthought, though they do conform well to the C design. Perhaps it is more of a syntax issue, where the memory handling of strings is quite abstracted and not very clear to the programmer.
> I thought you meant that a hello-world or similar program only handling strings would be fundamentally insecure, but rather you mean that it is hard to write secure code with C strings.
Disclaimer: I am not the author of the comment, and honestly I am more than happy that OpenBSD broke %n in printf, because it looks awful from a security standpoint.
> you mean that it is hard to write secure code with C strings.
Indeed I do :) It is possible to write a "secure" hello world program in C; the point is that both the language and the standard library make it exceedingly easy to slip in attack vectors when you deal with strings in any serious capacity.
Torrenting is easy, but what are you going to do with the torrented files then? Without additional external hardware you probably won't be able to play your downloaded files on your large TV, and most people prefer a laggy but simple route over having to do more work. I do torrent from time to time, but the hassle associated with the whole process really highlights why streaming apps took over.
Unboxed records are fine, but stack-allocated lists make me nervous. What happens when someone gives you 8 megs of headers, and you run out of stack?
This code seems to put a 32k limit on it, but it's a manual check and error return. What about code that forgets to manually add that limit, or sets it too high? How do you decide when to bump that limit, since 32k is an artificial constraint?
By default in oxcaml, "stack"/local allocations happen on a separate stack that the runtime allocates for you on the heap. If you allocate enough to exceed its capacity, the runtime resizes it dynamically for you.
> Older UIs were built on solid research. They had a ton of subtle UX behaviors that users didn't notice were there, but helped in minor ways. Modern UIs have a tendency to throw out previous learning and to be fashion-first.
Yes. For example, Chrome literally just broke middle-click paste in this box while I was responding. It sets the primary selection on copy, but fails to use it when pasting.
Middle click to open in new tab is also reliably flaky.
I really miss the UI consistency of the 90s and early 2000s.