Programmer's critique of missing structure of operating systems (rfox.eu)
221 points by Bystroushaak on Feb 19, 2020 | 193 comments



This is mostly a critique of UNIX, but several of these concepts are already implemented in other OSes, either in production or experimentally.

The database-as-a-filesystem idea is a classic. WinFS, the filesystem that was supposed to be a key feature of Longhorn/Windows Vista, was based on a relational database.

The "death of text configuration files" is the idea behind the Windows registry.

Powershell (Windows, again) is based on structured data rather than text streams.

For the "programs as a collection of addressable code blocks", when we think about it, we are almost there. An ELF executable for instance is not just a blob that is loaded in memory. It is a collection of blocks with instructions on how to load them, and it usually involves addressing other blocks in other ELF files (i.e. dynamic libraries), with names as keys. We could imagine splitting the executable apart to be saved in a database-like filesystem, that would work, but if wouldn't change the fundamentals.

The problem I have with structure is that it implies a schema. And without that schema there is nothing you can do. And of course, because we all have different needs, there are going to be a lot of schemas. So now you turn one problem into two problems: manage the schemas and manage the data. With a UNIX-style system, even if you need some kind of structure to actually process the data, the system is designed in such a way that for common operations (ex: copy), you don't need an application-specific schema.


Yes, text configuration files are dumb because they require N parser/script editors times M configurable programs. Furthermore, what people really want is universal programmatic and CLI access to configuration.

Microsoft Microsoft'ed configuration management with the way they implemented the registry. The Apple PList way sort-of goes there but doesn't quite master it.

A better way would have been configuration via code and a command line that's easy to interface with for all purposes (a rough sketch follows the list below). It's important to:

- be able to transactionally backup and restore all settings

- have multiple instances of the same program with different settings

- wipe out settings to default

- enumerate all settings within a program or within the whole system

- subscribe and be notified of setting changes

- ACLs or privileges to separate users and processes from each other (similar to the Windows Registry)

- audit and undo setting changes

- hide secrets that come from dynamic providers

- allow dynamic providers for values of settings
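
To make the shape of that concrete, here is a rough sketch in C. Every name in it is invented for illustration (it is not an existing API); it covers only two of the bullets above, enumeration and change notification, and a real OS-level service would put persistence, ACLs, transactions and auditing behind the same kind of calls.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical sketch: tiny in-process settings store with enumeration
       and change notification. All names (cfg_*) are made up. */

    #define MAX_SETTINGS 64

    struct setting { char key[64]; char value[128]; };

    static struct setting store[MAX_SETTINGS];
    static int count;
    static void (*on_change)(const char *key, const char *value);

    void cfg_subscribe(void (*cb)(const char *key, const char *value)) { on_change = cb; }

    void cfg_set(const char *key, const char *value)
    {
        int i;
        for (i = 0; i < count; i++)
            if (strcmp(store[i].key, key) == 0) break;
        if (i == count) {
            if (count == MAX_SETTINGS) return;            /* store full */
            snprintf(store[i].key, sizeof store[i].key, "%s", key);
            count++;
        }
        snprintf(store[i].value, sizeof store[i].value, "%s", value);
        if (on_change) on_change(key, value);             /* notify subscribers */
    }

    void cfg_enumerate(void)  /* a per-program "show running-config" */
    {
        for (int i = 0; i < count; i++)
            printf("%s = %s\n", store[i].key, store[i].value);
    }

    static void notify(const char *key, const char *value)
    {
        printf("changed: %s -> %s\n", key, value);
    }

    int main(void)
    {
        cfg_subscribe(notify);
        cfg_set("editor.tabwidth", "4");
        cfg_set("net.proxy", "http://proxy.local:3128");
        cfg_enumerate();
        return 0;
    }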


Switch and router OSes have a unique CLI configuration system that is both simple and quite powerful. Compared to Linux I especially like the ability to enumerate the entire configuration with one command ("show running-config" in Cisco's IOS, for example).

I would love to be able to see and manage a Linux system's configuration in an analogous way. Obviously there's a lot more to a modern general purpose OS than a simple network device, so I'm not sure, maybe it wouldn't work very well?


> Yes, text configuration files are dumb because they require N parser/script editors times M configurable programs.

No, they don't. It's N editors + M parsers (where M = the number of programs). I suppose if you want really smart editors that are also developed completely independently, such that each requires its own parser, it's N+M parsers plus N editors.

> Furthermore, what people really want is universal programmatic and CLI access to configuration

I'm fairly certain most people don't want anything CLI anything.


I want better CLIs. The author's criticism is spot on when it comes to those. The CLI flag format should be specific to the shell, not specific to the program.


> I'm fairly certain most people don't want anything CLI anything.

I'm fairly certain they all want CLI everything, it's just they don't know it yet.


It's more convenient to export and call the needed functions directly via a C API than to force everybody to make and use another "universal" layer.


The low-hanging fruit is a common parser/serializer, which is usually handled in library code on Unix - handwritten parsers are usually the enemy.

The other bits describe a whole configuration management system, which gets into the realm of questions like "For version X of software Y, value Z can be between A and B, but this changes when using version X+1", which is extremely difficult logic to encode into an external data store. Even worse are relational consistency issues that come up across structures.

Having a "fail if invalid" results in people setting values and then being frustrated at the system.

BTW, this all existed in a language-neutral way in the early 2000s with XML parsing and validation leveraging RelaxNG and Schematron. Unfortunately those were deemed ugly and hard to use, and thrown out in exchange for the half-baked JSON/YAML solutions.


I know it’s a bit hacky, but you can satisfy most of your requirements with environment variables; and most of your other requirements with a mini daemon that sits on top of `env`.
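
For what it's worth, a minimal sketch of the env-var route (plain POSIX calls); what it obviously cannot do is exactly what the mini daemon would add: persistence, change notification, and access control.

    #define _POSIX_C_SOURCE 200809L
    #include <stdio.h>
    #include <stdlib.h>

    extern char **environ;   /* the process "settings store" */

    int main(void)
    {
        setenv("MYAPP_COLOR", "blue", 1);               /* write a setting      */
        printf("color = %s\n", getenv("MYAPP_COLOR"));  /* read it back         */

        for (char **e = environ; *e; e++)               /* enumerate everything */
            puts(*e);

        unsetenv("MYAPP_COLOR");                        /* wipe to default      */
        return 0;
    }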

When I think about it, this is basically what the various “keychain” daemons provide, which IMO are underused despite their terrible ergonomics.


This sounds like your basic configuration management tool plus a service discovery tool.


> So now you turn one problem into two problems: manage the schemas and manage the data

This seems to completely ignore the fact that we are already managing schemas right now in the form of ad-hoc parsers and serializers which are arguably much worse than a more formally specified alternative.


The HTML vs XHTML situation: would you prefer a hard failure on any error, or a graceful degradation system that is therefore always slightly degraded?

We go back and forth on this because tightening the schema only works when you can adequately define the requirements of both ends up front, and there isn't a vendor battleground happening in your standard. Developers end up escaping into an unstructured-but-working zone. Classics like "it's such a hassle to get the DBA to add columns, so we'll add a single text column and keep all the data in there as JSON".


> tightening the schema only works when you can adequately define the requirements of both ends up front

There are degrees of tightening though, aren't there? It would be a matter of necessity (to me, at least) that an interface like he describes be extensible. To me, it's about better OS primitives with more guarantees, etc. Versioned APIs with options to fallback to older versions (to at least some extent) seems like an obvious outcome. The last thing anyone would want would be the equivalent of JSON in a database column, which would completely defeat the purpose.


> The HTML vs XHTML situation: would you prefer a hard failure on any error, or a graceful degradation system that is therefore always slightly degraded?

Definitely would prefer the hard failure. The community will literally fix all of that in a week -- all template engines and DOM generators will adapt to avoid the hard errors and everything has a good shot at becoming better as a result.


"The database for a filesystem is an classic. WinFS, the filesystem that should have been a key feature of Longhorn/Windows Vista is based on a relational database."

I always wondered why Unix never added record-based files, in addition to stream-based files...like mainframes have. That would have simplified many things.


> I always wondered why Unix never added record-based files, in addition to stream-based files...like mainframes have. That would have simplified many things.

You can implement record-based files (RAX, ISAM, etc) on top of stream-based files. The Unix philosophy was like Lego: small parts that do simple things that can be put together in combinations to build greater things. If you can build it from the basic blocks, it's not a basic block.
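
A rough sketch of that, assuming fixed-size records: the "record layer" is nothing more than offset arithmetic over an ordinary byte-stream file with pread/pwrite, which is exactly why Unix never needed records as a kernel primitive.

    #define _POSIX_C_SOURCE 200809L
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Fixed-size records layered on an ordinary byte-stream file. */
    struct record { char name[32]; int balance; };

    static int write_record(int fd, long i, const struct record *r)
    {
        return pwrite(fd, r, sizeof *r, (off_t)i * (off_t)sizeof *r) == (ssize_t)sizeof *r ? 0 : -1;
    }

    static int read_record(int fd, long i, struct record *r)
    {
        return pread(fd, r, sizeof *r, (off_t)i * (off_t)sizeof *r) == (ssize_t)sizeof *r ? 0 : -1;
    }

    int main(void)
    {
        int fd = open("accounts.dat", O_RDWR | O_CREAT, 0644);
        struct record alice = { "alice", 100 }, out;

        write_record(fd, 7, &alice);          /* "record number 7" */
        if (read_record(fd, 7, &out) == 0)
            printf("%s: %d\n", out.name, out.balance);
        close(fd);
        return 0;
    }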

There are plenty of more advanced storage systems available for Unix. You don't need the OS vendor to supply The One.


>If you can build it from the basic blocks, it's not a basic block.

The idea behind the author's post is that a directed acyclic graph is a basic block that is just one layer of abstraction above raw bytes. JSON, YAML, TOML, INI, .properties, and environment variables are all just different ways to represent a graph, and if it were possible to store the graph directly in the filesystem, then one would not have to manually [0] serialize the graph to bytes and deserialize it back later.

[0] graph -> byte conversion is the job of the graph filesystem


Thank you for pointing that out, that's exactly one of my points.


The problem is how trivial it is to implement; a similar problem occurs with TCP and UDP.

Lots of people implement half of TCP over UDP because they want packet-based transmission but also care about lost packets.

Implementing packets on top of a stream is not as trivial as implementing a stream over packets.

Implementing streaming files over record-based files is easier: you just iterate over each record and append it to the stream. Implementing records in stream-based files means you have to define a framing format, internal format, bookkeeping, etc.
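
To show how much bookkeeping even the simplest framing needs, here is a minimal sketch of length-prefixed records in a plain stream file (a 4-byte little-endian length, then the payload); corruption recovery, in-place updates and indexing are where the real work starts.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Append one record: 4-byte little-endian length prefix, then payload. */
    static int append_record(FILE *f, const void *data, uint32_t len)
    {
        uint8_t hdr[4] = { len, len >> 8, len >> 16, len >> 24 };
        return fwrite(hdr, 1, 4, f) == 4 && fwrite(data, 1, len, f) == len ? 0 : -1;
    }

    /* Read the next record into buf; returns its length or -1 at end of file. */
    static long next_record(FILE *f, void *buf, uint32_t cap)
    {
        uint8_t hdr[4];
        if (fread(hdr, 1, 4, f) != 4) return -1;
        uint32_t len = hdr[0] | hdr[1] << 8 | hdr[2] << 16 | (uint32_t)hdr[3] << 24;
        if (len > cap || fread(buf, 1, len, f) != len) return -1;
        return (long)len;
    }

    int main(void)
    {
        FILE *f = fopen("log.rec", "w+b");
        append_record(f, "hello", 5);
        append_record(f, "structured world", 16);

        rewind(f);
        char buf[256];
        long n;
        while ((n = next_record(f, buf, sizeof buf)) >= 0)
            printf("record: %.*s\n", (int)n, buf);
        fclose(f);
        return 0;
    }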

Record-based files would be the proper lego building blocks.


The disk is a block device, i.e. it reads and writes data in blocks; the system just abstracts that into a stream interface.


Yeah and that is IMO the wrong abstraction. Blocks on disk should be organized into records, where multiple blocks can belong to a single or multiple (if using dedup or Copy-on-Write) records. Implementing a performant stream over that can be done in libc with minimal overhead.


Now you want a different block structure.


No? ZFS already organizes blocks on disk this way to some extent, and Btrfs uses extents, which group multiple blocks together and can be referenced multiple times.


The problem with the TCP vs UDP debate is that there exists a middle ground that satisfies most people's requirements.

TCP offers a single reliable globally ordered stream.

UDP offers unreliable packets in arbitrary order.

What people usually want is to send whole packets (or multiple streams) in arbitrary order with configurable reliability.

TCP and UDP offer neither but UDP has the least restrictions which makes it the only protocol on which you can implement your custom protocol.


> Implementing streaming files over record-based files is easier

What a performance nightmare. Goodbye page cache and other zero-copy optimizations.


Not really; the OS still has to abstract it onto the hard disk. This should be trivial to achieve without impacting the page cache and zero-copy optimizations if the on-disk metadata is separated sufficiently. COW filesystems already do it to a lesser degree, and they're certainly fine.


What do you do with clients who want a stream of records? Separated meta + payload allows fast byte streams; combined meta + payload allows fast record streams. A structured implementation can't provide performance for both cases, but a byte-oriented one can. And we see this approach in modern, fast OSes.


Allow clients to choose which they receive?


Maybe I was unclear. If we introduce a record abstraction for file ops, then the OS's low-level implementation has to decide how to store structured data: meta + payload combined, or meta separated from payload. In the latter case there is no way to provide fast record streams; the former case does not allow fast byte streams. The client's decision to receive bytes or records doesn't magically reorder the stored data with zero copies.


Why would the latter cause record streams to be slow? Snapshots in COW FSes aren't that terribly slow either, even if there are quite a few of them, and they basically boil down to the same structure, really.


"You can implement record-based files (RAX, ISAM, etc) on top of stream-based files"

Sure. And we have, many times over. A base implementation that supported fixed and variable records, shared process mmap() access, etc...would have value. And perhaps the kernel being aware of it would have some benefit.


> The Unix philosophy was like Lego: small parts that do simple things that can be put together in combinations to build greater things. If you can build it from the basic blocks, it's not a basic block.

And the normal programmer's logic is: "if it's not already integrated in the OS then I am not going to use it". So stream files it is, forever.

I think history has proven this. People are very risk-averse even if the risk is quite minor (example: using sqlite3 instead of the file system).


Exactly. And with more opinion behind it: the fact that most of those items were shipped in an extraordinarily well-supported, mass-market OS literally decades ago and still didn't catch on maybe says something about the value of the design ideas.


> the fact that most of those items were shipped in an extraordinarily well-supported, mass-market OS literally decades ago and still didn't catch on maybe says something about the value of the design ideas.

That statement seems to draw heavily on the idea that success is the result of a meritocracy. But 'how good something is' often plays a small part in the selection of winners and losers.


In software architecture, it's really the only good evidence we have, though. I'm all for better metrics, but there aren't any. The point is just that none of these ideas are remotely "new", they've been heavily pushed by one of the biggest players in the industry for decades, and if anything was really beneficial in a profound way we'd surely know about it by now.

But we don't. So, no, that doesn't prove they're bad ideas. It does show that whatever benefit they have is underneath the noise floor of the experiment, though.


I think the only thing in the parent's list that didn't catch on was WinFS, which is because it never even shipped, if I'm not mistaken.

Registry and Powershell are integral parts of being a Windows administrator.

I might be leading a little, but just because *nix didn't choose to use these things doesn't imply they are bad. *nix is just "different".


Right, I meant "catch on" in the sense of being clearly superior to the point that everything feels they need to have an answer for it and move to it. In fact the Registry[1] and Powershell are uniformly considered controversial even within their problem domain.

And I'm not arguing these things are "bad", exactly. I'm saying they aren't the panacea claimed in the article, that frankly they provide limited real benefit in a practical sense, and that the proof of this is that the one system that tried to implement them generically is aging glacially into legacy status without these ideas having propagated.

Basically: if these things were that great, at some point other systems would have picked them up while they moved on past windows. But they didn't. So... I guess my point is "why should we care about this stuff specifically?".

[1] Obviously the registry was emulated in the Linux world with the gconf/dconf stuff, which likewise was quite controversial and hasn't had anything close to universal adoption.


File system allocation tables effectively are highly optimised databases already. I think the issue isn't the file system itself but rather the OS syscalls.


> File system allocation tables effectively are highly optimised databases already. I think the issue isn't the file system itself but rather the OS syscalls.

It really is the file systems as well. If allocation tables are highly optimized databases, they're not the kind of safe and robust database we're used to:

https://danluu.com/file-consistency/


> If allocation tables are highly optimized databases, they're not the kind of safe and robust database we're used to

Those two points aren't mutually exclusive (that link, by the way, is discussing syscalls like I'd mentioned).

When I talk about "optimised" I mean in terms of other limits placed on the table that makes sense with regards to storing file metadata but wouldn't be desirable for a RDBMS.


Old operating systems like VMS, MULTICS, OS/400, etc. aimed for this; they were large and contained support for a lot of structure. The problem is, they always evolved to be too complex, and/or the available complexity turned out not to be what was needed, so different complexities had to be built on top of the old, unneeded ones.

Along comes Unix and dispenses with all of that. Files? Bytes only. Devices? Accessible as files in the file system. Etc. This became popular for a reason.


I came here to write this. VM/CMS was designed for processing structured data. Application UIs were designed around forms to be displayed on 3270 terminals and data was structured around what could be input on a single punch card. It was great as long as this fit your model.

What UNIX gave the world was maximum flexibility: an OS that only really cared about streams and got out of your way.


And in my eyes this is the only right way, because it allows you to build structured services on top. If, on the other hand, the structure is already in the underlying system, it's incredibly hard to build something useful on top.

Similarly, think how useful memcpy() is: Because it can be applied anywhere.


I'm not sure it's that they were too complex. More that you couldn't shoehorn them into a 1980s-era microcomputer.


With all the crufty layers now being built on top, perhaps a new simplification is now needed?

On the other hand, it would take quite a lot of effort to reach a parity of capability for a new OS these days.


We already did that. It was called Plan 9 From Bell Labs[1]. And while it gave us UTF-8, procfs, 9P, etc, it failed to become a popular OS.

[1] https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs


The important question here is, why? Was it because of those features? Or something else?

I don't know, but one major factor in OS adoption is network effects. "The rich get richer", and it's very hard to introduce a new and different system, no matter how good it is.


Unlike Unix, which was basically free software and therefore extendable and available for companies to build their own versions of, Plan 9 was proprietary from Bell Labs.


Ah, Plan 9. Vastly superior to its predecessor in every way, except the one that counts: bums on seats.


And, for a long time, price/licensing.


The thing about the cruft being on top instead of inside is that we can replace it from time to time.

(Too bad the replacements aren't always better...)


Having cruft in your top layer is fine and healthy; a sign of a living, evolving system that is still learning how best to do its job and hasn’t prematurely ossified.

The problems occur when a still-crufty layer becomes set in stone, ensuring its base defects go on to pollute everything built on top. That may happen either because its designers lack perspective and cannot accept their precious design is flawed; or because everyone else starts building on top before it is ready for prime-time, then reject all attempts to straighten it out because that would require them to rewrite as well.


So Unix is the dynamic type system of operating systems?


Even subject to similar high-minded criticism: https://www.jwz.org/doc/worse-is-better.html



Not so much dynamically typed; more like stringly typed.


Worth bearing in mind here that files and file systems are themselves a kludge, a 1950s workaround for not having enough primary storage to keep all programs and data live at all times.† (Also, primary storage traditionally tends to be volatile; not good when the power goes out.) I point this out because institutionalizing kludges as The Way Things Should Be Done is an awfully easy mistake to make, and in long-lived systems like OSes it has serious architectural sequelae.

..

What’s interesting about the Unix File System is that it’s a general abstraction: a hierarchical namespace that can be mapped to a wide range of resources; not just “files” in secondary storage but also IO devices, IPC communication points, etc. And that’s all it did: mount resources at locations and define a common API for reading/writing/inspecting all resources, without getting bogged down on the internal details of each resource. Nice high cohesion, low coupling design.

Plan 9 made much fuller use of the namespace idea than Unix did, but the core concept was there from the start and it is excellent… except for one MASSIVE mistake: individual resources are neither typed nor tagged.

Without formal type information there is no way for a client to determine the type of data that a given resource represents. Either there is a common informal agreement that a resource mounted at location X is of type Y (e.g. /dev/*), OR there is an informal resource naming convention (typically DOS-style name extensions), OR the client has to guess (e.g. by sniffing the first few bytes for “tells” or by assuming ASCII).

Formally tagging each resource with, say, a MIME type (as in BeFS, or properly implemented HTTP) would’ve made all the difference. THAT is K&R’s million-dollar mistake, because without that key metadata it is impossible to ensure formal correctness(!) or implement intelligent helpers for data interchange (e.g. automatic coercions).
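
For what it's worth, this can be retrofitted today on Linux with extended attributes; the sketch below tags a file under a "user.mime_type" key (the name follows a freedesktop.org proposal, but any agreed-upon key would do) and reads the tag back. The catch, of course, is that nothing else in the system enforces or even consults it.

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/xattr.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

        /* Tag the file with a formal type... */
        const char *type = "text/plain; charset=utf-8";
        if (setxattr(argv[1], "user.mime_type", type, strlen(type), 0) != 0) {
            perror("setxattr");
            return 1;
        }

        /* ...and read the tag back: no sniffing, no name-extension guessing. */
        char buf[256];
        ssize_t n = getxattr(argv[1], "user.mime_type", buf, sizeof buf - 1);
        if (n < 0) { perror("getxattr"); return 1; }
        buf[n] = '\0';
        printf("%s is tagged as %s\n", argv[1], buf);
        return 0;
    }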

Arguments over the pros and cons of alternative namespacing/association arrangements (e.g. relational vs hierarchical) are all secondary to that one fundamental cockup.

..

Unix became popular not because it was Good, but because it was Just Good Enough to entertain the cowboys who built and used it, who enjoy that sort of pants-down edge-of-the-seat living. And because a lot of them were in education, they spread it; and so made cowboy coders the status quo for much of this profession and culture. And while I admire and largely approve of their Just Do It attitude, I abhor their frightening disregard for safety and accountability.

And while I’m heartened to see some signs of finally growing up (e.g. Rust), there is a LOT of legacy damage already out there still to be undone. And retrofitting fixes and patches to endemic, flawed systems like Unix and C is and will be infinitely more pain and work than if they’d just been built with a little more thought and care in the first place.

--

† If/When memristors finally get off the ground, the primary vs secondary storage split can go away again and we finally get back to a single flat storage space, eliminating lots of complex bureaucracy and ritual. And when it does, there’ll still be the need for some sort of abstract namespace for locating and exchanging data.


> a 1950s workaround for not having enough primary storage to keep all programs and data live at all times

Even today you don't have enough primary storage to keep all your data. And even then you would still need some structure when data outlives the process / application.

Most times there is no best solution, only tradeoffs. Anyone who has done a bit of systems work knows this. And the hierarchical file system was an OK-ish trade-off to make. Perfect is the enemy of good.

> MASSIVE mistake: individual resources are neither typed nor tagged.

It comes with its own set of tradeoffs. I am a huge proponent of static typing when it comes to PLs. But in a system where multiple actors operate on shared resources, it is easy to get lulled into a false sense of correctness. Also, it imposes some extra complexity on the programming model. I am not an experienced systems engineer, but someone here can probably address it better.

> .... entertain the cowboys who built and used it ....

You are going beyond HN standards to justify your anger against a particular methodology or people that embrace it in programming.

The universally accepted point is that Unix succeeded due to political factors (low cost and easy modification compared to proprietary counterparts), simplicity of the API, and being arguably better than others despite lacking some features people love to lament these days. But in many cases, that simplicity is a desirable thing to have. It is nice to objectively point out faults in systems. But what you did was totally dismiss some people's contributions.

It is easy to see some hyped thing and think that's the Next Big Thing(TM) after reading two fanboys preaching on Reddit, while being totally ignorant of tradeoffs.


> Unix became popular not because it was Good, but because it was Just Good Enough to entertain the cowboys who built and used it ... and so made cowboy coders the status quo for much of this profession and culture

I disagree very strongly with these insults directed at programmers from 50 years ago because now, in retrospect, half a century later what they did doesn't live up to some flawless system written in Rust that exists only in one's imagination.

Doesn't it seem a little bit like calling Thomas Edison a cowboy who made the terrible mistake of giving us electric lighting through filament bulbs when LED lights would have been so much better?

In these early days of computer science I read virtually every important published article on programming languages and operating systems, the field was still that small. MIT didn't even think it warranted a separate department, it was just a subsidiary branch of EE like say communications. Researchers like Edsger Dijkstra, Tony Hoare, Niklaus Wirth, Per Brinch Hansen, Leslie Lamport, David Gries, Donald Knuth, Barbara Liskov, and David Parnas were all trying to figure out how to structure an operating system, how to verify that a program was correct, how to solve basic problems of distributed systems, and how to design better programming languages. Practitioners working on operating systems would have been familiar with almost everything written by these giants.

It's easy to insult C, I myself wouldn't choose it for work today. But in 1989, 20 years after the birth of Unix, I did choose it for my company's development of system software--it still made sense. And back in the 1960's what alternatives were there? Fortran? PL/1? Pascal? Lisp? We were still programming on keypunch machines and relational databases hadn't been invented. The real competition back then for system programming was assembly language.


> Formally tagging each resource with, say, a MIME type (as in BeFS, or properly implemented HTTP) would’ve made all the difference.

That gives you a "global naming of things" problem, which is surprisingly hard. Who controls the namespace? Who gets to define new identifiers? Do they end up as intellectual property barriers where company A can't write software to work with files of company B?

> without that key metadata it is impossible to ensure formal correctness(!)

That seems irrelevant - even with the metadata you have to allow for the possibility of a malformed payload or simple metadata mismatch. I don't believe this alone would prevent people from sneaking attack payloads through images or PDFs, for example.

> THAT is K&R’s million-dollar mistake

K&R wrote UNIX before MIME. Not only that, but before the internet, JPEG, PDF, and indeed almost all the file types defined in MIME except plain text.

Refusing to choose also prevented UNIX from being locked into choices that later turned out to be inconvenient, like Windows deciding to standardise on UCS-2 too early rather than wait for everyone to converge on UTF-8.

Even the divergent choice of path separators and line endings has turned out to be a mess.

> cowboys

The "cowboy" system is the one that beat the others, many of which never launched (WinFS, competing hypertext protocols) or were commercially invisible (BeOS, Plan9 etc).

Both Windows and MacOS have alternate file streams which can be used for metadata, but very rarely are.

Memristors aren't going to save you either. Physical space ultimately determines response time. You can only fit so much storage inside a small light cone. We're going to end up with five or six different layers at different distances from the CPU, plus two or three more out over the network, getting cheaper and slower like Glacier.

We probably are going to move to something more content-addressable, in the manner of git blobs or IPFS, and probably a lot closer to immutable or write-once semantics because consistency is such a pain otherwise. It would be interesting to see a device offering S3-style blob interface plus content-addressable search ... over the PCIe interface.

Oh, and there's a whole other paper to be written on how access control has evolved from "how can we protect users from each other, but all the software on the system is trusted" to "everything is single-user now, but we need to protect applications from each other".


> That gives you a "global naming of things" problem, which is surprisingly hard. Who controls the namespace? Who gets to define new identifiers? Do they end up as intellectual property barriers where company A can't write software to work with files of company B?

Well, you could go the sqlite route: if you want the DB to be globally accessible for everyone else, pass an absolute path; if you want it to be anonymous and for your temporary usage only, pass ":memory:" as the DB path.
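
A minimal sketch of that split with the ordinary sqlite3 C API (build with -lsqlite3); the shared path below is a made-up example, and the only thing that differs between "global" and "private" is the name handed to sqlite3_open().

    #include <sqlite3.h>
    #include <stdio.h>

    /* Open either a shared, globally addressable DB or a private scratch one. */
    static sqlite3 *open_db(int shared)
    {
        sqlite3 *db = NULL;
        const char *path = shared ? "/var/lib/myapp/settings.db"  /* hypothetical path  */
                                  : ":memory:";                   /* private, anonymous */
        if (sqlite3_open(path, &db) != SQLITE_OK) {
            fprintf(stderr, "open %s: %s\n", path, sqlite3_errmsg(db));
            sqlite3_close(db);
            return NULL;
        }
        return db;
    }

    int main(void)
    {
        sqlite3 *scratch = open_db(0);
        if (scratch) {
            sqlite3_exec(scratch, "CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)", 0, 0, 0);
            sqlite3_close(scratch);
        }
        return 0;
    }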

Additionally, UUIDs / ULIDs / what-have-you can be extended, but IMO just make 512-bit global identifiers the norm and say "screw it" for the next 1_000 - 100_000 years. (While making a standard library to work with those IDs baked into every popular OS's kernel.)

Sure, it might get complicated. Still, let it be one complicated thing on one specific topic. Still a better future status quo compared to what we have today.

---

I mean, nobody here seems to claim that changing things will be instantly easier and stuff will automagically fall in place in a year. No reasonable person claims that.

But freeing us from the "everything is strings on shell" and the "zero conventions" conundrums themselves will probably solve like 50% of all technical problems in programming and system/network administration.

(While we're at it, we should redefine what the hell a "shell" even is.)


I understand the natural frustrations articulated here, especially given OP’s experience working with files, but it seems to dismiss what is actually a core strength of current operating systems: they work. Given a program supporting 16 bit address spaces from the 1970s, you can load it into a modern x86 OS today and it works. This is an incredible feat and one that deserves a lot more recognition than offered here! Throughout an exponential explosion of complexity in computing systems since the 70s, every rational effort has been made to preserve compatibility.

The system outlined here seems to purposefully avoid it! Some sort of ACID compliant database analogy to a filesystem sounds nice until 20 years down the line when ACIDXYZABC2.4 is released and you have to bend over backwards to remain compatible. Or until Windows has a bug in their OS-native YAML parser (as suggested here) so now your program doesn’t work on Windows until they patch it. But when they do, oh no you can’t just tell your users to download a new binary. Now they have to upgrade their whole OS! Absolute chaos. And if you’re betting on the longevity of YAML/JSON over binary data, well just look at XML.


Want to admire your fancy After Dark win 3.1 screensaver? Just emulate the whole environment! We don't want to keep supporting the broken architectures and leaky abstractions of the past; they drag us down. Microsoft's dedication to backwards compatibility is admirable but IMO misguided and unsustainable in the long run. The IT industry has a huge problem with complexity. We need to simplify the whole computing stack in the interest of reliability, security and future innovation.

The proposed improvement, as I understood it, would be future-proof. It seems trivial to build a rock-solid YAML/XML/JSON/EDN parser at the OS level, and since it would be such a crucial part of the OS, mistakes would be caught and fixed quickly. It shouldn't even matter if the structured data syntax is replaced or expanded in the future, as long as it is versioned and never redacted. Rich Hickey's talk "Spec-ulation" has much wisdom about future-proofing data structures.


> The IT industry has a huge problem with complexity. We need to simplify the whole computing stack in the interest of reliability, security and future innovation.

Yes! I really hope I keep hearing more of this sentiment and that eventually we collectively take action. What would be the first practical step? There's a lot of effort duplicating the same functionality across different languages and frameworks. Is reducing this duplication a good first goal? Should we start at the bottom and convince ARM/x86/AMD64 to use the same instruction set? After that, should we reduce the number of programming languages? It seems there's still a lot of innovation going on, would it be worth stifling that?


The actual non-snarky first step would be to admit that we are out of our depth and can no longer deliver software that is reliable, secure and maintainable. We can only guarantee that our software works for at least some users, on current versions of the OS/browser, and is hopefully secure against some of the poorer attackers.

Countless variants of programming languages and of instruction sets are not the issue. The problem is the lack of well-defined, non-leaky interfaces at the boundaries of abstraction layers.


> What would be the first practical step?

This is too big a topic to reliably cover in a comment (or ten), but standardising on strongly and strictly typed data formats like ASN.1 and EDN, and practically forfeiting everything else (JSON, YAML, TOML, INI, XML) for configuration, might be a good first step.

You cannot innovate if you keep insisting on eternal backwards compatibility. That's just the facts of life. At some point a backwards compatibility breaking move must be made. It's absolutely unavoidable and we'll see such moves in the near future.

> Is reducing this duplication a good first goal? Should we start at the bottom and convince ARM/x86/AMD64 to use the same instruction set?

Not sure about the CPU architectures; it seems they have been stuck in a local maximum for decades, and just in the last few years people finally started asking if there are better ways to do things.

But as for some of the author's points, you can bake certain services directly into the OS (say, utilise SQL for accessing "files" and "directories" instead of having a filesystem), standardise that, and then just make sure you have a good FFI (native interface) to those OS services no matter the programming language you use -- akin to how everybody is able to offload to C libraries, you know?

> After that, should we reduce the number of programming languages?

We absolutely should, even if that leads to street riots. We have too many of them. And practically 90% of all popular languages are held together by spit, duct tape and nostalgia -- let's not kid ourselves at least.

It cannot be that damned hard to identify several desirable traits, identify the languages that possess them, combine that with the knowledge of which runtimes / compilers do the best work (benchmarking the resulting machine code is a very good first step in that), then finally combine that with desirable runtime properties (like the fault tolerance and transparent parallelism of Erlang's BEAM VM). Yes, it sounds complicated. And yes, it's worth it.

> It seems there's still a lot of innovation going on, would it be worth stifling that?

Yes. Not all innovation should see production usage. I can think of at least 10 languages right now that should have remained hobby projects but became huge commercial hits due to misguided notions like "easy to use". And nowadays we no longer want easy to use -- we want guarantees after the program compiles, not the ability to spit out half-working code in 10 minutes (I definitely can't talk about all of IT here, of course, but this is a sentiment / trend that seems to get stronger and stronger with time).

Many languages and frameworks aren't much better than weekend garage tinkering projects and should have stayed that way -- Javascript is the prime example.


> Want to admire your fancy After Dark win 3.1 screensaver? Just emulate the whole environment!

That is literally what Microsoft does.


Most operating systems ship a general-purpose structured binary serialization format parser as an OS component: ASN.1. There have, over the years, been a number of security-critical bugs in there, and everybody hates ASN.1 anyway.


ASN.1 has an amazing idea and an awful implementation. :(

I'd say standardise a subset of ASN.1's binary and text representations and introduce a completely different schema syntax -- LISP seems like a sane choice -- and just stop there.

ASN.1 suffers the same problems that many other technologies suffer: they have way too many things accumulated on top of one another. Somebody has to put their foot down and say: "NO! Floating-point numbers in these binary streams can be 32 bit, 64 bit and arbitrary precision but no more than 1024 bits! I don't care what you need, there's the rest of the world to consider, deal with it". And people will find a way (maybe introduce a composite type that has 2x 1024-bit floats).

We need standard committees with a bit more courage and less corporate influence.


> Given a program supporting 16 bit address spaces from the 1970s, you can load it into a modern x86 OS today and it works.

Actually, it doesn't. It is extremely hard to properly return to 16-bit userspace code from a 64-bit kernel, so Windows removed support for it entirely, and it's not enabled by default on Linux.


Well, I don't want to say anything about the utility, longevity or appeal of yaml/json, but I somehow think a user is going to upgrade their entire operating system before they upgrade my little app.

And if they're inclined to upgrade my app, I mean, nothing stops me from using a third party library to parse yaml. It sounds like we're talking about an app from three operating systems and 20 years ago so it's likely I'm doing that anyway - maybe not in the current Windows version, but in a recent enough version on some other operating system.


Article summary: the biggest CS problem now is the diversity of serialization formats, because most current code consists of parsing various formats, so the OS must do something about it.

No, it is not such a big problem. And no, it will not make our lives easier. Also, the author did not mention the real problem of semantics: how a client should interpret a structure to compose a valid request.

The OS should not know about userspace structures because the OS doesn't do anything with them. It stores and transfers chunks of bytes, and their semantics are defined by userspace. And forcing the currently popular serialization format on the OS level is the dumbest idea ever.


"Dumbest idea ever" is a bit strong, but yeah. If you could get OSes to adopt this, then as an app writer you're going to have to worry about how Microsoft's jacked-up version of the standard broke your content when it was moved between computers or even OS versions. You'd have the UNIX/MS line-ending problem not just in text files, but with every type recognized by the database.


Additionally we've got enough fun already with the case insensitive filesystems.


>And forcing the currently popular serialization format on the OS level is the dumbest idea ever.

The idea is that you take the common elements of all of those serialization formats and when you take a good look you notice that the lowest common denominator isn't actually raw bytes on a disk.


Except when it is.


Preach! I've felt this way too, that adding structured data as a universal feature to operating systems would be a pretty agreeable next step.

I wonder if we're past the point of no return, though, in terms of technical divergence. It sounds like, in the Ancient Times, there was a handful of great programmers whose work created the world we program in now. But now, there are way more programmers being paid to make slightly different versions of this "next step", and it would require widespread agreement/coordination to implement it on a scale where it's a seamless feature that's taken for granted the way the shell/network/fs are.


Problem: We make a lot of CRUD apps.

Solution: The OS should do it.

Ehhh... Why is that the obvious solution? We can't decide on the right way to do it in user space, why does moving the problem to the OS help? This seems to be based on the whimsical idea that having the OS do it would somehow fix the varied problems of structured communication. Are we enforcing WSDLs in the OS? One-size-fits-all structures defined by the OS? I don't think the rambling thoughts really made it back to the thesis.

That said, I suggest to anyone interested in this stuff to try Powershell...no really! I don't use it often but it is a window into another world where everything has a structured definition behind the text output.


> Problem: We make a lot of CRUD apps.

> Solution: The OS should do it.

> Ehhh... Why is that the obvious solution? We can't decide on the right way to do it in user space, why does moving the problem to the OS help?

The article is indeed ridiculous. An OS should not do everything. Hardware storage resources are generally the memory, disk, and network connection (and if you're getting really deep, the cache and registers). A good OS should only provide access to those resources as efficiently as possible across a wide variety of hardware.

There is a vast myriad of ways of utilizing those resources, and it would be a fool's errand to implement a one-size-fits-all approach. The better approach is to provide access to the resources and let higher-level software developers build on top of them.

A disk-only database is far different from a disk database with a memory cache, which is far different from a memory-only database, which is far different from any collection of the above coordinated via a network connection. Further, storing text is different from storing images, which is different from storing video, which is different from storing JSON or XML.

Pushing everything to the OS will often give you worse performance, locks you into a single OS vendor, and slows down innovation from third parties. Bad idea.


> An OS should not do everything. Hardware storage resources are generally the memory, disk, and network connection (and if you're getting really deep, the cache and registers). A good OS should only provide access to those resources as efficiently as possible across a wide variety of hardware.

It sounds like you're talking about kernel only. So I guess your OS of choice is something like LFS?

My view of an operating system is very different; it's supposed to be a complete system ready for productive work as well as a programming environment and platform for additional third party software.

> Pushing everything to the OS will often give you worse performance, locks you into a single OS vendor, and slows down innovation from third parties. Bad idea.

Pushing everything to third parties will often give you massive duplication of effort and dependencies, excessively deep stacks that eat performance and make debugging harder, locks you into a clusterfuck of dependency hell, and slows innovation from the first party because now they must be very sure not to break the huge stack of third-party stuff that everyone critically depends on. There'll be no cohesion because third parties invent their own ways of doing things as the stripped-to-the-bones OS has no unified vision, documentation is going to be all over the place, there's nothing holding back churn... development of third-party applications is slow and frustrating because the lowest common denominator (the underlying OS) is basically a magnetized needle. Bad idea.

This is largely why I prefer BSD over Linux, but I share the author's frustrations with Unix in general.


Logging: structured logging with automatic rotation etc. was implemented in the Windows Event Log.

Structured data passing between programs instead of just text is part of the Powershell concept.

Calling other programs to request specific actions with a smooth UI is what Android intents do.

If you want to store structured data, you should use, well, a database.

So, part of the author's critique is Linux-specific.

But generally I agree with the author: OSes are poor abstractions and really need to be improved.


> Structured data passing between programs instead of just text is part of the Powershell concept.

dbus, CORBA and COM would like to have a couple of words with you.


I don't know about DBus and CORBA but COM is unreasonably hard compared to REST or Protobuf.


Imagine the interoperability nightmare if we could not rely on everything being just bytes being streamed.

I mean, everything has to be stored and transmitted at one time or another. A Word document, an sqlite database, an email and its attachment. Imagine you could not send something as simple as a Word document because the IP protocol assumes a stream of bytes and your operating system talks a custom storage format. Imagine you could not store and use an sqlite database efficiently because the operating system does not present you with efficient, fast, compatible byte storage.


“Imagine the interoperability nightmare if we could not rely on everything being just bytes being streamed.”

As I’ve noted above, the problem isn’t transmittability; the problem is never knowing what the bytes being transmitted represent.

I mean, C is hardly renowned for the robustness or expressivity of its “type” system, but untagged untyped byte streams are tantamount to declaring all data as void*. That is a ridiculously shaky foundation to build on, yet could have been entirely avoided by simple addition of one more piece of metadata and an ftype() API.

K&R were brilliant, but also kinda dumb. I certainly wouldn’t want to eat chicken cooked by either one.


there would still be streamable bytes at the bottom, as the author explained..


I think the author was more advocating for having something like sqlite being a part of the kernel and the de facto way of accessing a lot of data (as opposed to files and directories and byte streams in general).
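
Roughly along these lines, sketched with the ordinary userspace sqlite3 API (the "part of the kernel" bit is the author's idea; the calls themselves are real; build with -lsqlite3). "Files" become rows, and "open by path" becomes a query.

    #include <sqlite3.h>
    #include <stdio.h>

    int main(void)
    {
        sqlite3 *db;
        sqlite3_stmt *st;
        sqlite3_open(":memory:", &db);   /* stand-in for "the OS's data store" */

        sqlite3_exec(db,
            "CREATE TABLE files (path TEXT PRIMARY KEY, data BLOB);", 0, 0, 0);

        /* "write a file": insert a row instead of streaming bytes */
        sqlite3_prepare_v2(db, "INSERT INTO files VALUES (?, ?);", -1, &st, 0);
        sqlite3_bind_text(st, 1, "/etc/motd", -1, SQLITE_STATIC);
        sqlite3_bind_blob(st, 2, "hello", 5, SQLITE_STATIC);
        sqlite3_step(st);
        sqlite3_finalize(st);

        /* "read a file": query by path, get a typed blob back */
        sqlite3_prepare_v2(db, "SELECT data FROM files WHERE path = ?;", -1, &st, 0);
        sqlite3_bind_text(st, 1, "/etc/motd", -1, SQLITE_STATIC);
        if (sqlite3_step(st) == SQLITE_ROW)
            printf("%.*s\n", sqlite3_column_bytes(st, 0),
                   (const char *)sqlite3_column_blob(st, 0));
        sqlite3_finalize(st);
        sqlite3_close(db);
        return 0;
    }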


I suspect that one of the keys to user-friendly structured data will be storing sparse data, in the sense of incompletely defined data (note that's real persistence, not just existing in the editor). It's also important to allow data that doesn't (yet) conform to whatever schema is relevant, but I think sparse data will generalize to that case.

One of the undersold features of plain text is being able to exist in "invalid" states while it's being edited. Structured data UI needs to have at least that level of malleability to have a chance over the current paradigm.


Windows supports storage of sparse data but probably it’s not what you mean.


If you are referring to sparse files, those are also present elsewhere (see the seek parameter to dd).


Things like comments exist purely as bytes in YAML. If you were to deserialize YAML and then serialize it back, you would lose all the comments.


Definitely don't do it that way, then. Give comments first class status.

Comments should actually be independent of the underlying structure they're documenting. I'd like them to be an independent layer of annotations that can be applied to any data.


Wait, isn't the appeal of a stream of bytes that the OS is agnostic to whatever the application layers above are doing with it? That way, you can separate concerns and evolve each layer independently. It's probably the reason the OS has had such long staying power over the decades. It seems to be the same strategy of IP (internet protocol)--that it's a narrow waist architecture--so the layers above and below it can evolve independently.

How would versioning of the data structures work? Would it be append only? I'm pretty sure part of the reason abstractions are hard is because we don't know exactly how to model the domain the first couple of times around. So if these data structures in the OS aren't right the first time, we end up lugging it around and it'd be more expensive than the parsing that we currently do. When I look at the EDI spec, it just has a lot of cruft I don't need.

And lastly, I'm afraid when we invoke meaning in a stream of bytes, different groups with different interests will have different ontologies they want to enforce. We see these in format wars and internet working groups. I'm not sure we can easily agree what these data structures are, and how to evolve them. Maybe there's a good way and I just haven't seen it.


> Wait, isn't the appeal of a stream of bytes that the OS is agnostic to whatever the application layers above are doing with it? That way, you can separate concerns and evolve each layer independently.

Dude, we all know the supposed theoretical ideal state of things. And yet it almost never happens.

> It's probably the reason the OS has had such long staying power over the decades.

Or the fact that without an OS your hardware is a simple paperweight? Not sure what your point is here; can you clarify?

> It seems to be the same strategy of IP (internet protocol)--that it's a narrow waist architecture--so the layers above and below it can evolve independently.

We don't have that much choice there either. Networks have to be streams of bytes due to the physics and realities of our network hardware -- which aren't likely to change anytime soon. So I don't think yours is a transferable analogy.

> How would versioning of the data structures work? Would it be append only?

Yes. That's how FlatBuffers work, and every single team I talked with is grateful that they moved to it and no longer use JSON. That has to mean something.

(FlatBuffers also allows you to deprecate and thus ignore parts of the structure you are transmitting. But indeed, as you alluded to, anything new can only be appended at the end of the byte stream.)

> I'm pretty sure part of the reason abstractions are hard is because we don't know exactly how to model the domain the first couple of times around.

Sure, agreed. But as an area IT already has a lot of experience and can tap into some past lessons already.

Configuration: zero programming language constructs inside it, please! You need config with programming? Here's that small subset of LISP or Lua, you can only use that.

I see nothing wrong with an approach like that. People will grumble a lot and will adapt to it as they always do. It's not like Ansible's YAML programming language is better, no?

> And lastly, I'm afraid when we invoke meaning in a stream of bytes, different groups with different interests will have different ontologies they want to enforce. We see these in format wars and internet working groups. I'm not sure we can easily agree what these data structures are, and how to evolve them. Maybe there's a good way and I just haven't seen it.

I also haven't seen it, and I am sure most people haven't either, but that doesn't mean we shouldn't try. Stuff like RDF was pretty good for describing many types of data, for example. And if somebody needs something drastically different, well, they shouldn't use that format.

---

IMO the author's point isn't that tomorrow we can have the one ultimate programming language and data format to rule them all -- of course not. Their point is that nobody is even trying. And we actually have a lot of low-hanging fruit, the "everything is a string on the shell" being one of the best examples.


I'm not against trying. I'm just saying, ok, if we're going to try it, these are the things that immediately come to mind as hurdles to get over. I wish they addressed those hurdles in the post (or maybe in a later post).


I think the browser is the described OS here: every API passes objects around, IndexedDB is your filesystem, remote REST & GraphQL servers are returning structured JSON objects to your program, you can add a <script> tag to your <html> to pull in your single-function scripts (i.e. leftpad).

Seriously though, the question of "why isn't it done this way" is answered a couple times: "it's all just bytes in the end". Bytes & arrays-of-bytes are really the only thing that you can trust that all systems and all languages interpret the same (even 2-byte integers can be interpreted differently by different systems). Presumably the author just wants to be able to memcpy structures between programs or from disk, or maybe with some high efficiency pre-allocated heap stuff, but you really need to have code to validate the bytes, essentially requiring the parsing of your data-structure which the author is trying to avoid.

Running programs with some common argument notation does sound really nice, but ideally that's just calling a function from your language's binding to their library (as their executable should do, right?).


> I think the browser is the described OS here

I was actually describing something more close to the Smalltalk (http://pharo.org) / Self (http://blog.rfox.eu/en/Series_about_Self.html) image.


> remote REST & GraphQL servers are returning structured JSON objects to your program

They are not. They are sending an HTTP message, which consists of just octets and has to be parsed. The parsers have to be ridiculously complex for various reasons. The HTTP header contains sublanguages such as RFC 7231 §5.3.1 or RFC 8288 §3.

JSON meanwhile has amassed at least six incompatible specifications with the same exact media type, so there is not even a theoretic hope of parsing all messages correctly. https://news.ycombinator.com/item?id=20736665


Yes but that's the underlying system, not your "application code" handling the structure parsing. Maybe I shouldn't have said JSON, but you do always get a JavaScript object back, as opposed to UNIX filesystem/socket operations which only really return application/x-octet-stream data. I thought that was the thesis of the article: it'd be a better world if your operating system supported parsing common structures for you.


Wow that statement is completely untrue.

It links to several RFCs. RFC-7158 and RFC-7159 differ only in (a) that one is RFC-7158 and the other is RFC-7159 (b) that one is dated in March 2013 and the other is March 2014 (c) that one obsoletes RFC-7158 in addition to RFC-4627 which they both obsolete. It appears that 7159 was pushed only to handle an error in the dating of 7158, given that the irrelevant RFC-7157 is dated March 2014.

So to say that there are more than five incompatible specs is just a cruel disregard of the facts. No implementation can differ in its parsing of a text based on the metadata of the spec. There needs to be a substantive difference in the spec.

As to the current RFC 8259, it makes no substantial differences from the previous version. There are a few typos that have been fixed ("though" becoming "through", when "though" was not a possible interpretation). It eliminates the obligation to parse public texts not coded in UTF-8. Previously, documents SHALL be UTF-8 - that's barely a change, but it is a change. It has specified more concretely its relationship with ECMA-404. It's possible that someone reading earlier RFCs would have concluded "a valid RFC 7159 json parser can parse all and only the documents produced by ECMA-404". If there were any discrepancies between ECMA-404 and RFC-7159, therefore, an interpretation along these lines would lead a person to conclude you cannot parse a document according to RFC-7159, so the spec doesn't really count as an incompatible spec. Under the current spec, it identifies that there are possible discrepancies, and RFC-8259 is intended to accept fewer documents than ECMA-404.

The first RFC did not permit the json documents "true", "false" and "null", accepting only json documents that comprise an object or an array. It permitted any Unicode encoding, defaulting to UTF-8, and non-Unicode encodings as well (so a Latin-1 document is valid RFC-4627, but not valid RFC-7158+). It also included some incorrect statements about its relationship to javascript. These were reduced in later versions and eventually eliminated. They do not affect the specification of the language, but merely how you may handle data in the language.

No document that is accepted by the current RFC will be rejected by any RFC other than the first. The only such documents to be rejected by the first are those which consist entirely of "true", "false", or "null". No UTF-8 document that is accepted by the first RFC will be rejected by subsequent RFC parsers, unless a person reads a paragraph called "security considerations" that says "you can kinda do this but it's insecure" as somehow trumping the clear statements of the grammar in earlier sections.

I have not investigated the other specs and I probably won't. But the idea that there are 6 incompatible specs is false. There are no more than 5, and it is almost trivial to accept documents according to 3 of 5 or 4 of 6 specs at once by ignoring the restrictions on character coding.


> It links to several RFCs. RFC-7158 and RFC-7159 differ only in (a) that one is RFC-7158 and the other is RFC-7159 (b) that one is dated in March 2013 and the other is March 2014 (c) that one obsoletes RFC-7158 in addition to RFC-4627 which they both obsolete. It appears that 7159 was pushed only to handle an error in the dating of 7158, given that the irrelevant RFC-7157 is dated March 2014.

"they only differ in", yeah, indeed.

Dude. We all have work to do and grappling with some obscure JSON parsing corner is the last thing we need in our workday. This applies to 95% of commercial programmers, I am willing to bet.

What you described does sound simple and small in isolation. Now multiply it by 50 and see how "they only differ in" is a problem that must not be allowed in the first place.

Where's the-one-and-only JSON spec? Why does no organisation have the courage to step forward and standardise it once and for all?

Given the current realities, just cut your very small and ignorable losses by losing those obscure JSON parsing corners and let people describe JSON simply as:

"RFC #ABCD".

That's it. Nothing else. Only that. Everything is contained in it.

Complexity compounds and makes everything non-deterministic. We should stop allowing that.


Let's reminisce on some OS-bundled databases of Unix: The libdb files, the utmp/wtmp/btmp style files, the /etc/services and /etc/passwd style ascii databases.

I think looking back we can be thankful that structured databasey stuff has been decoupled from OS interfaces and has been allowed to evolve on its own.


> Let's reminisce on some OS-bundled databases of Unix: The libdb files, the utmp/wtmp/btmp style files, the /etc/services and /etc/passwd style ascii databases.

I would argue that you can't really call this an (ASCII) database, because it isn't one (no transactions, no data types, no query language, no parallel writes, nothing). It's what you get with unix - a shitty stream of bytes.


I would disagree.

Take /etc/passwd. It does have a query language. getpwent(), login, passwd

It has data types -- again getpwent() (man getpwent) needs pwd.h

It has "parallel writes". Multiple users can use passwd at the same time. It has "transactions" -- mostly because of inodes.

Similar comments apply to utmp and wtmp (note that utmp and wtmp are not ascii -- they are binary records). /etc/services is ALSO structured but is ascii. Less controlled, because it is almost always "read-only".

Of course, in a very strange sense, you are correct. Any structure imposed is NOT by the kernel -- this is "post init", or user level.

FredW


Plenty of classic databases were missing all of those. In fact, arguably many of the recent NoSQL ones fit the description.


One reason that we don't "just stored structured data as-is" is because there are many kinds of structured data where the in-memory representation is architecture and/or compiler specific.

Many a novice programmer has no doubt made the mistake of thinking that you could, for instance, do this to deliver a C/C++ struct across a socket:

        writer: write(socketfd, &msg, sizeof(msg));  /* msg is some struct */
        reader: read(socketfd, &msg, sizeof(msg));
That works until the reader and writer are on machines with different architectures.

Same observation could be made about filesystems used to store "structured data". We serialize and deserialize it because the in-memory representation is not inherently portable.
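To show what that serialization step actually involves, here is a minimal sketch using Python's struct module with an explicit network byte order and fixed field sizes -- exactly the details the raw write()/read() version leaves to whatever the local architecture happens to do:

    import struct

    # A "record" of one 32-bit unsigned int and one 16-bit unsigned int.
    # "!" forces network byte order and standard sizes, so both ends
    # agree on the layout regardless of their native architecture.
    RECORD = struct.Struct("!IH")

    def serialize(seq, port):
        return RECORD.pack(seq, port)

    def deserialize(payload):
        return RECORD.unpack(payload)

    wire = serialize(1234, 8080)
    assert deserialize(wire) == (1234, 8080)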


The author's point is that if our APIs worked with structured data instead of byte streams, then we wouldn't need to care about the in-memory format. The standard API would do the hard work for us, allowing us to call send(fd, my_struct);

Since there's no standard API (blessed by the OS) and the lowest common denominator is byte streams, we're seeing all these ad-hoc solutions and a hodgepodge of formats and libraries to deal with them. That's lots and lots of time and money spent on rather basic stuff.
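A rough sketch of how such a structured send could feel, built today in user space; the send_struct/recv_struct names, the JSON encoding, and the length prefix are all invented for illustration -- a real OS-blessed API could hide those choices entirely:

    import json
    import struct
    from dataclasses import asdict, dataclass

    @dataclass
    class Greeting:
        user: str
        count: int

    def send_struct(sock, obj):
        # Declared structure in, bytes out: the caller never touches the encoding.
        payload = json.dumps(asdict(obj)).encode("utf-8")
        sock.sendall(struct.pack("!I", len(payload)) + payload)

    def recv_struct(sock, cls):
        (length,) = struct.unpack("!I", _recv_exact(sock, 4))
        return cls(**json.loads(_recv_exact(sock, length)))

    def _recv_exact(sock, n):
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed connection")
            buf += chunk
        return buf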


That's one of my points, yes.


And that's why I think the author is wrong about this (he is probably right about thinking (more often) about filesystems as databases, but that's somewhat orthogonal).

The approach you're describing only works for POD-style "structured data". Once you start using OOP of almost any type (though not every type), you no longer have ... well, POD that you can move to/from a storage medium. You have objects whose in-memory format IS important and compiler dependent.

There are other concerns too. His WAV example (I write pro-audio software for a living) doesn't even begin to touch on the actual complexities of dealing with substantial quantities of audio (or any other data that cannot reasonably be expected to fit into memory). Nor on the cost of changing the data ... does the entire data set need to be rewritten, or just part of it? How would you know which sections matter? Oh wait, the data fundamentally IS a byte stream, so now you have to treat it like that. If you don't care about performance (or storage overhead, but that's less and less of a concern these days), there are all kinds of ways of hiding these sorts of details. But the moment that performance starts to matter, you need to get back to the bytestream level.

And so yes, there's no standard API and yes the lowest common denominator is byte streams ... because the __only__ common denominator is byte streams. Thinking about this any other way is a repeat of a somewhat typical geek dream that the world (or a representation of the world or part of it) can be completely ordered and part of a universal system.


Structured data can be streamed too, and indeed there is software that does it at scale. Data with much more complex structure than audio frames.


Of course it can be streamed! Nobody ever suggested it could not be. The point is that to stream it portably (i.e. without knowing the hardware characteristics - and possibly software characteristics too - of the receiver) you have to first serialize it and then deserialize it, because the in-memory representation within the sender is NOT portable.


You're too hung up on in-memory representation. Yes, if it's not right, then it needs to be converted. That can be done for you, or you can do it manually with byte streams like a caveman. If you can do it manually fast, then it can be done just as fast automatically based on the declared structure.


It isn't possible to be in a world where the in-memory representation doesn't matter. Someone has to write and maintain that code, and if it's you, then you get the fun task of telling some users their workload won't fit into it. And then they go off and write their own in-memory representation.


Different architectures are actually a smaller problem than the evolution of data formats. In-memory formats are not extensible by default.


As I’ve noted above, this problem would go away if data streams were adequately tagged in the first place.

Having that high-level knowledge of data structure enables all sorts of intelligent automation.

In the event that the client uses a different memory layout, it could look up a coercion handler that converts the supplied data from its original layout to the layout required by the client. This is, for instance, how the Apple Event Manager was designed to work: all data is tagged with a type descriptor:

typeSInt16, typeSInt32, typeSInt64, typeUInt16, typeUInt32, typeUInt64

typeIEEE32BitFloatingPoint, typeIEEE64BitFloatingPoint, typeIEEE128BitFloatingPoint

typeUTF8Text, typeUnicodeText (UTF16BE), typeUTF16ExternalRepresentation (UTF16 w. endian mark)

typeAEList (ordered collection), typeAERecord (keyed collection)

and so on. (The tags themselves are encoded as UInt32; nothing so advanced as MIME types, but at least they’re compact.)

The AEM includes a number of standard coercion handlers for converting data between different representations, and clients may also supply their own handlers if needed. Thus the server just packs data in its current representation, and if the client uses the same representation then, great, it can use it as-is. Otherwise the client-side AEM automatically coerces the data to the form the client needs as part of the unpacking process.
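A toy illustration of the tag-plus-coercion-handler principle (this is not the actual AEM API, just a sketch of the idea, reusing the descriptor names above as plain strings):

    # Registry of coercion handlers, keyed by (supplied_tag, wanted_tag).
    COERCIONS = {
        ("typeSInt32", "typeIEEE64BitFloatingPoint"): float,
        ("typeUTF8Text", "typeSInt32"): int,
    }

    def unpack(tag, value, wanted):
        # Same representation? Use the data as-is.
        if tag == wanted:
            return value
        # Otherwise look up a handler that converts it for the client.
        try:
            return COERCIONS[(tag, wanted)](value)
        except KeyError:
            raise TypeError("no coercion handler from %s to %s" % (tag, wanted))

    # The sender packed a 32-bit int; the client asked for a double.
    print(unpack("typeSInt32", 42, "typeIEEE64BitFloatingPoint"))  # -> 42.0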

There are limits in the AEM’s design, not least the inability to describe complex data (arrays and structs) with a single “generic” descriptor, e.g. `AEList(SInt32)`. That would vastly simplify packing and unpacking -- in the best case to simple flat serialization/deserialization, at worst to a single recursive deserialization -- instead of two recursive operations with lots of extra mallocs and frees for interim data. But the basic principle is sound, and adheres well to the “be liberal in what you accept” principle.

I believe Powershell does something similar when connecting outputs to inputs of different (though broadly compatible) types, intelligently coercing the output data to the exact type the input requires. No manual work required; it “Just Works”.

Or, if you don’t mind the extra overhead then content negotiation is also an option, which is something HTTP does very well (though web programmers very badly). That is advantageous when communicating with “less intelligent” clients as it permits the server, which best understands its own data, to pre-convert (e.g. via lossy coercion) its data to a form the client will accept.

Lots of ways that Unix’s “throw its hands up and dump the problem all over the users” non-answer can be massively improved on, in other words, without ever losing the lovely loose coupling that is a Unix system’s strength. It only requires a single piece of essential—yet missing—information: the data’s type.


The problem outlined in the original article isn't about data streams. It is, at bottom, about the contrast between data storage and in-memory representation.

Typed data streams were not invented by Apple. Back in the 1980s, there was (for example) Sun's RPC mechanism, which gave you "seamless" remote procedure call, including transfer of arbitrary structures over a network.

But the original post is much more about filesystems. I used the socket example merely to illustrate the problem, not the actual topic.


Problem: There are N different ways of serializing data and it's a lot of work to support them all.

Solution: Create a new, unified, universal way of serializing data.

Problem: There are N+1 different ways of serializing data and it's a lot of work to support them all.


That's a bit of a knee-jerk reaction, IMO.

Problem: the computer systems we use are insufficiently flexible/too flexible, encouraging the use of N different ways of serializing data.

Solution: make a computer system which encourages centralizing the mechanism by which data is serialized.


New problem: Data serialization is now centralized and there are N ways of converting between the centralized way it's been serialized and everyone's incompatible structural needs.


Hey, at least then we've improved the format conversion problem from O(N^2) to O(N). :)

In practice, though, you end up with something like XML, which (to steal a phrase, I wish I could remember where I read it) is less a common language than it is an alphabet. I guess it's better than CSV.


stop serializing data.


That reminds me of this: https://xkcd.com/927/


I've always disliked file systems being hierarchical databases, with the limitations they have.

Even from the perspective of a simple, non-programmer user (or a "power user"), I don't actually want files the way they are. A file usually needs a name (which is not good for anything) and stores both the data and the metadata inside, so a metadata correction automatically changes the file hash. The file system also doesn't consider the fact that most of my files (or at least the data part of them) are never meant to change (taking this fact into account could provide extra optimization and protection). Ideally a file should only store data (e.g. a pure sound stream for an audio file) + a collection of metadata fields (including tags!) I can query to find the files I need. AFAIK something like that was supposed to be implemented in WinFS but it was cancelled.

In fact you can use extended attributes/streams or even use SQLite as a file system (or a file format). But nobody does that. I'm interested in implementing a file manager that stores all my files in SQLite and provides a convenient GUI to work with them, but it seems like too much work (I don't even have an idea for the UI) and its usefulness is going to be questionable as no application developed by others is going to be able to open files directly from that database anyway. Nevertheless it can be a good idea to use SQLite as a file system for particular projects like those you have described.
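For what it's worth, a minimal sketch of that idea needs nothing but the sqlite3 module from the standard library: the raw payload goes in one column and queryable metadata/tags live alongside it (the table layout here is invented for illustration):

    import sqlite3

    db = sqlite3.connect("mystore.db")
    db.executescript("""
        CREATE TABLE IF NOT EXISTS files (
            id   INTEGER PRIMARY KEY,
            data BLOB NOT NULL            -- the pure payload, e.g. an audio stream
        );
        CREATE TABLE IF NOT EXISTS meta (
            file_id INTEGER REFERENCES files(id),
            key     TEXT,
            value   TEXT
        );
    """)

    # "Save a file": payload and metadata side by side, no filename required.
    fid = db.execute("INSERT INTO files (data) VALUES (?)",
                     (b"raw audio bytes",)).lastrowid
    db.executemany("INSERT INTO meta VALUES (?, ?, ?)",
                   [(fid, "type", "audio/opus"), (fid, "tag", "holiday-2019")])
    db.commit()

    # "Find my files": query the metadata instead of remembering paths.
    rows = db.execute("""
        SELECT f.id, length(f.data) FROM files f
        JOIN meta m ON m.file_id = f.id
        WHERE m.key = 'tag' AND m.value = 'holiday-2019'""").fetchall()
    print(rows)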


What I can conclude, however, is that we shouldn't choose just one. The traditional FS should be kept (for now at least, mostly because it's going to be extremely hard to replace it with anything which would make more sense for the task) to store system files (and code files perhaps), plus a better storage for user files (like pictures, videos, audio, documents, etc.), and this is mostly a matter of inventing a proper file manager. As for the projects where you are to store enormous numbers of files (so the inode number limitation becomes a problem), like those mentioned in the article, there is little point in using a general purpose FS in the first place; one should either use a database or a raw hard drive (or a database on a raw hard drive).


If one application were able to use it, or had a version able to use it, that application's users could format a drive for it. More applications could follow if the gains are big enough. Hardware could be created to fit it better. Again, if the gains are big enough it is worth it. More applications will follow. Different things could migrate there, like those SaaS-like functions. Data could be shared between applications.

Inserting data A into B, serializing it into C, storing it as D then morph it back into C, turn it back into B again so that you can read A from it???

What if A is 10 bytes and B is 10 GB, or we don't know if A exists in C? The ease of use and/or performance gain would be substantial, even without memorizing 1000 manuals' worth of exotic formats.


> its usefulness is going to be questionable as no application developed by others is going to be able to open files directly from that database anyway

Don't most operating systems provide for some kind of vfs nowadays? So you can indeed expose it as a legacy hierarchical filesystem if you want.

I guess the reason every extended database type filesystem has failed is the same one: basically, you need legacy support. Otherwise the user agents don't exist to access your data. Several older operating systems had support for attributes of various type and purpose, but they roughly died as the web became popular, because HTTP has no way to handle that problem.

The nearest exception is on handheld devices that run non-legacy operating systems, but then power users complain they don't have adequate control.


> but then power users complain they don't have adequate control.

They wouldn't if they were actually given adequate control. The problem with iOS is not the fact that it doesn't expose a traditional file system, it's the fact that it doesn't expose any, won't let you control the apps beyond the level Apple likes, and won't let you sideload custom apps you or the free software community might have developed (which certainly isn't a 100% evil policy - it actually protects you from evil agents like the Chinese TSA, which is known to install hidden apps on Android phones it searches). I couldn't even find a way to play OPUS audio files on a recent iPhone (despite the fact the iPhone supports OPUS in hardware). As a power user I could be pretty well satisfied without access to the file system, if only the UIs and APIs above it provided the level of control and functionality I want.


Well, you convinced me. It's an interesting project idea.

Concerning the user interface, I think the ordinary user interface of such a system would have to be application based. Rather than the increasingly impure desktop metaphor, you simply list the applications available to the user. They open the application and select from the available documents. You would have some kind of list of filters, including automatically maintained filters (like "time last opened") and manually maintained tags.

You could probably have a power-user interface that is "the file manager". But it's just the normal file picker, without a default filter of "files I know how to open".

But you still have a problem when you export a file. How do you export that compressed audio stream when you want to send it to your friend? You need to identify all the metadata and reconstruct an MP3 file. Presumably you could implement version 0 of this without changing the file storage, since file importer/exporter is kinda crucial. So you just cache the important data somewhere.

It would all be more important if files were still as relevant as they used to be. I used to have ogg vorbis files and photos and everything. Nowadays, it's really only code and occasionally a document. Otherwise I'm in my web browser or on an app on my phone.


One thing that annoys me is that when "directing" other programs, you need to write tests that check if each program is alive and kicking. If you are, for example, checking an HTTP server, you need an HTTP client to test it. And the most frustrating part is that you don't know when the program is ready to do whatever it's supposed to do, so you have to constantly poll until it answers; but then when you disconnect it might have some warmup/warmdown mechanism, so you can't put it to use right away - you have to do manual testing, then set a timeout with a good margin. Then when your "director" worker crashes (and is restarted by another "init" service), its children are still alive, so when you try to start another instance it will complain about the port being used, so the "director" worker also has to check for existing orphans. It's sometimes easier to just rewrite the program yourself so it becomes a part of your "monolith". I like the unix philosophy and micro-services. But I feel it can be better...


Hah, yeah.


I get where the author is coming from, and I'm often dismayed by how much of programming is just shuffling data from one platform/format/structure/persistence layer to another. It's tedious AF.

Still, there appears to be a lot of conflation of concepts here, and I don't see anything approaching crystallization of a coherent solution.

What do I gain by telling the OS that I want a JPG rather than a sequence of bytes that I know is a JPG? I'm probably already using a library to deserialize this for me, unless I'm a masochist, so the heavy lifting has already been done. And that object/struct/whatever is going to be represented somewhat differently in memory for Java than for C++ than for Python. There is no way to reconcile those differences without creating even more problems. So a programming language would need to either be written or adopted to standardize on.

Most application-specific data is already stored in a database of some sort anyway, which is itself (potentially) platform agnostic. To assert that applications should instead rely on the OS and filesystem to provide this persistence layer is to assert that there is a universally appropriate choice of database -- a bold claim.

Moreover, if structure can (must?) be defined by the application which is interfacing with the OS, then nothing is gained but overhead. Every object would need a description of itself, so we either end up with redundant structural information for every file, or else we have some centralized table of "object types" that the OS has to look at every time we request something.

Maybe I'm missing something, but I don't see the appeal here. I understand the desire to reduce overhead, but as far as I can see, this just creates more.


Ok, today we have shells like bash that can help you navigate around a file system, find files, run files, search files, view files etc. Common tools are grep, less, cat, "|", awk, sed, perl, tr, wc, find, etc. The <cr> is a poor man's record separator.

Experienced people can often do non-trivial things in the shell by combining tools, regular expressions, and manually filtering out false positives. E.g. using grep foo | grep bar for that one email you are looking for.

But as a result things that need more structure require significant coding and create a sandbox that doesn't work well with other systems. Like say thunderbird (an email client).

Now imagine something different that has some higher level of abstraction. Maybe every file gets, by default, a list of functions to help the OS understand it. How about dump (raw byte encoding), list, add, view, delete, and search. Each file type supported by the OS would get those features; one of those types might be jpeg. So if you created an address book, you'd define a record type called person, and a list of fields. One of those fields might even be a JPEG for an image of the person. In a GUI (like thunderbird) you'd just wrap a function called addressbook.insert, much like just about every GUI platform has a file picker.

Ideally every application on this OS would make use of these function calls, so every app that needed an addressbook could share the OS calls to interact with the addressbook. But also, instead of being frustrated at thunderbird, you could use your addressbook for new and different things. Say a map viewer might put icons up for every home address in your addressbook. Or you could query your addressbook for the home address nearest you... from the command line.

Ideally these files are actually objects that include data and code. Said code could inherit from primitives provided by the OS, like records and fields. That would enable things like extending the addressbook to handle a new field like your keybase ID, PGP key, 3D representation of your face or whatever.

Similarly any image viewer, or even a pipeline using standard image tools could iterate over all your JPEGs and extract geo tags.

The object aware "find" replacement could access records/fields for all file types so you could find photos within a distance of a long/lat.

By combining the above with a relational database instead of a filesystem you could mix and match and create virtual folders for things like the top 1000 newest images on your "filesystem". Suddenly replacements for awk, sed, wc, less, etc. would understand email message metadata.

Or make a directory that contains the newest email messages you haven't replied to. Running ls --fields=From,To,Date,subject would give you a summary of email in a folder.

The code replication for understanding a file format, serialization, communication, etc. would be greatly reduced and moved largely from the application space to the OS space, resulting in significantly increased compatibility between applications. Imagine that instead of a zillion files under ~/.thunderbird, it was all in a database compatible with any email client supporting the new record/field standard.
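Nothing like this exists at the OS level today, but the sharing part can already be approximated in user space; here is a toy sketch with sqlite3 standing in for the common record store (schema and helper names invented for illustration):

    import sqlite3

    book = sqlite3.connect("addressbook.db")
    book.execute("""CREATE TABLE IF NOT EXISTS person (
        name TEXT, email TEXT, home_address TEXT, photo BLOB)""")

    def addressbook_insert(name, email, home_address, photo=None):
        # Any app on the system could call the same shared routine.
        book.execute("INSERT INTO person VALUES (?, ?, ?, ?)",
                     (name, email, home_address, photo))
        book.commit()

    def addressbook_search(**fields):
        # e.g. addressbook_search(name="Alice") from a mail client, a map
        # viewer, or the command line alike. (Toy code: field names are
        # interpolated unchecked, don't do this for real.)
        where = " AND ".join("%s = ?" % k for k in fields)
        query = "SELECT name, email, home_address FROM person WHERE " + where
        return book.execute(query, tuple(fields.values())).fetchall()

    addressbook_insert("Alice", "alice@example.org", "1 Main St")
    print(addressbook_search(name="Alice"))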


I get what the programmer is trying to say, but I think the problem is when you think everything is a nail because all you have is an OOP hammer. OOP lets you wire things up in a very unnatural, memory-hungry kind of way.

When you start dealing with 500,000 files in one directory and you try to do a files->getname(10)->size->to_string(), you are bound to run out of physical memory. This is probably why newer operating systems are so slow. Caching will only save you for so long.


I don't actually like the concept of OOP as it is understood today. I mean it more in the Self / Smalltalk sense, which is quite different and, thanks to late binding, more similar to Lisp.

Also your example is weird and you won't run out of memory if you use generators.


Generators are OOP service packs. It feels like the intention is to make things simple and easy to mesh together at the cost of overall flexibility. This would make a case for a new operating system built around this idea.


Well, it's just criticism. I am working on my own programming language (http://blog.rfox.eu/en/tinySelf.html) which could probably be used for experiments in this field, but mostly I am focused on creating personal wiki systems these days.


Interesting and confusing language. If we really want to get new breakthroughs we will eventually have to tackle the root cause of the issues and rewrite the OS, or we will be patching languages forever. For example: imagine every variable were data + meta? It would take up twice the space but allow for stranger types.


I came to the same conclusion and decided to build it. It's called Boomla [0]. Every OS needs a killer app or it will die. I decided to create a website builder or CMS as the first product, hence I call it a Website OS. Please cut through that noise when you look at the docs. It is 2 products, the OS and the website builder built on top of it. They will be properly separated in time.

https://boomla.com/


This looks really impressive. Thanks for posting your work! I'll do a deep dive at some point since it got me curious.


Cool! I'll shoot you an email so that you can easily find me.


I would like to point out that having structured information as "bytes" or "text" is general and universal, but pretty much no one uses it as such. Maybe for a simple append, or maybe for tasks like "count the number of bytes". But every time you want to do something with the actual structure, you end up using ad-hoc parsers.

Unix utilities sound like a great thing because of how universal they are. Want to find something in text? Just use grep, maybe with a regexp. But what this really says is "use a parser made from a simple condition", or "create a parser using a single-character pattern matching language". And OK, this would be fine if it really worked, but it can't really handle the structure. It may be great for an admin who just wants to quickly find something, but it is horrible and broken by design for anyone who really wants to make something more permanent. So you end up writing a better parser / using a library. And you are not working with text anymore, but with typed graphs / trees. And this happens every single time you actually do something even slightly complicated with "just text".


Text is structured: character, word, line, possibly field. Even "bytes" (octets) have some structure. On top of text, these days, I usually layer JSON. Bytes? Usually sqlite3.

Yes, creating a parser with grep isn't usually desired. But, "plain text" is quite useful. And, with JSON and sqlite3 in easy reach, I don't see the massive issue.

Please -- I would like examples where you had issues with doing something even slightly complicated with "just text", and had difficulty. I really want to examine this. Either the text toolkits are not adequate, or I will be convinced that I am wrong, and will investigate structured CLI and OS interfaces.

FredW


Regarding structured logging, the systemd journal supports structured logs, for those who want it.
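For example, with the python-systemd bindings installed (an assumption; the module is systemd.journal), arbitrary key/value fields go straight into the journal and can be filtered later with journalctl MYAPP_USER=alice:

    from systemd import journal  # assumes the python-systemd package is available

    # MESSAGE is the human-readable part; every other (uppercase) field is
    # structured metadata stored alongside it in the journal.
    journal.send("payment processed",
                 PRIORITY=6,              # LOG_INFO
                 MYAPP_USER="alice",
                 MYAPP_ORDER_ID="1234")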


It's hard to really innovate in operating systems because people still want to run existing code, and they want to run it really badly, even if it means compiling from source code.

If an operating system supports fundamentally different APIs, old code won't run. You can build an abstraction layer, but if the OS is different enough inside to be simpler, lighter, and higher performance running native apps, the abstraction layer would run code with awful performance and terrible reliability.

I think there's a market for an OS for systems that don't have a traditional MMU, getting rid of the TLB could save a lot of die area and make memory access more predictable. Smaller systems could live comfortably in a 24-bit address space, but 32-bits and no MMU would also be good.

I have dreamed about stripping down a Linux kernel and distro to the point where it has just a very simple root filesystem and everything is services offered through unix domain sockets (e.g. when you want to access files you use 9P in user space)

When I work it out I start realizing that you lose the benefits of the page cache, some IPC applications really take advantage of memory mapping, and pretty soon you're back to the "not-so-micro" microkernel.

So far as structured serialization, it's a tough nut.

Whenever it is tried, people tend to hate it. Back in the 1980s and 1990s people were obsessed with performance and made inscrutable systems like Sun's RPC and Microsoft's COM based on C. More recent systems are easier to understand, except for the notorious Protocol Buffers from Google. I love the idea of Apache Arrow but like all the others it has no real answer for text (e.g. variable length fields, unicode, etc.)

"Database as part of the OS" was common in mainframe operating systems (which have a filetype which looks like a single SQLlite table) and in minicomputer operating systems (Pick, AS/400) that have something like a relational and/or object db built in.


> OS for systems that don't have a traditional MMU

Running a single program from a single vendor for a single user?


Maybe.

You can do memory protection by having registers that limit memory access to a certain range. Segmentation isn't fashionable these days, but it could implement something just a bit better than that. System/360 had memory protection without an MMU.

Other options would be something like the Java virtual machine or Rust that let you statically check that code is safe.

The worst problem for MMU-less systems is memory fragmentation. This was a problem throughout much of the history of Classic MacOS. An MMU lets you "move" memory without the application knowing anything moved. Without an MMU you either have to live with fragmentation or have some kind of moving garbage collector (possibly quite coarse-grained)

Other advantages of MMU are paging (less popular these days, not viable for real-time) and being able to map the same pages into multiple address spaces.


I agree. I see major practical obstacles:

General purpose editing of structured data. Org-mode is the closest thing I know of but it is still infuriatingly ad-hoc and text-first.

Grepping (search in) and diff/patch of structured data. While many text manipulations are easy to implement in search/replace manner, structured data are more involved.

Errors and other kinds of unexpected input/output. Semantics of unix process failure are clear. State of the system after one process crashes is readily known. We can compare the filesystem before/after.

If we instead throw an exception in object-oriented runtime, the result is not so easy to analyze or compare, there may be dangling references or other complications. I don't know of any language runtime that can dump itself whole in (semi-)readable format for later comparisons.


Smalltalk can.


There is a very entertaining discussion about fs-as-db-and-more from core linux devs: https://yarchive.net/comp/linux/reiser4.html.


Wow. After living on the google tech island for a while, it's surprising how much of this people have tried to "solve". Need a message? Pass a protobuf. Want to log something? Got you covered. Files? What's a file? I only make rpc calls to some database server.

So I think there's definitely some truth to what he is saying. However I'm not sure if we can climb down from our local optimum long enough to all climb up there together. I think even at google where it's pretty restricted you still might have two ways to do some things (the deprecated way and the not yet ready way).


>After living on the google tech island for a while, it's surprising how much of this people have tried to "solve"

You're missing the point of the article. Most people and organizations have written libraries and services that make common development tasks much easier when developing within their own software ecosystems. Google is just one of many.

The question this article is getting towards is solving those problems in a pattern that transcends individual implementations, so that the conceptual model becomes as ubiquitous as the filesystem hierarchy.

Dumping a "this works for $DAYJOB" solution onto the public by publishing a standard isn't the answer. If that worked, those problems would be solved and this article wouldn't exist.


Before it can become as ubiquitous as the filesystem, people need to be using it somewhere. And also talking about it.

It's reasonable to say that "google implements some of those ideas" counts as evidence for "those ideas are right", which is mostly what I wanted to say.


You make a good point, by the way. I really think the next iteration of the OS paradigm will come from this angle. The OS is becoming redundant anyway in the age of cloud services (I don't think it will be a desktop OS, just another OS for the cloud). You are basically only interested in the environment for your app; hardware abstraction is not that important when the whole filesystem is a read-only docker container anyway.


Concept of the memory. Think about what it really is: a limited flat key-value database. It is limited because it not only limits the allowed subset and size of the key (e.g. 32 bit or 64 bit address) but also the value itself: a region of bytes or words starting at that address, of some unknown size, which spans across adjacent addresses. That seems reasonable only until you realize that it is a flat database which doesn't allow you to store structured data directly. How can we improve this shitty, eighty-year-old metaphor?


See https://news.ycombinator.com/item?id=14542595 if you're seriously interested in this.


I'll look into that, thanks.


We certainly need some standardization over binary formats, that's for sure. It could take away a lot of redundant work that's been done over and over again (i.e. serialization/deserialization).

This need not be at O/S level though. It can be a bunch of standards, like HTML, and each O/S could provide the appropriate implementations.

And the data format does not need a particular schema, it just needs a structural description.

The same format could be used to describe schemas.


The sentiment about application developers reinventing languages (the example of Ansible) is understandable. Some standard OS service offering structure could be convenient for some use cases.

However, how would that help this common consumer scenario? I record a video using my camera (some unknown OS). It's stored on an SD Card as a list of bytes. I guess there is little to do about that. I either send the video to my Android phone over WiFi (my backup strategy when travelling, the camera is an AP) or I insert the SD Card into my computer. When the video is on the phone I syncthing it to my computer, so it always ends up there. Then I could process it or not, send it to friends using several different OSes or upload it to YouTube. Friends could receive the video over Google Drive or via an https link to a server of mine.

A video file definitely has some internal structure, mandated by standards, and it must be interoperable across OSes. Maybe a list of bytes is all we need here. Probably most YAML files must be cross-OS (definitely docker, also ansible). The list of bytes is the minimum common denominator for all OSes so I guess it's here to stay.


The article makes a good point about the dumpster fires named ansible and docker.


I like the complaint against the command line interface.

Parsing commands should be the responsibility of the shell and all the program should do is check for the existence of flags and values.

bash: keytool -list -keystore keystore.p12

customshell: keytool --list --keystore=keystore.p12

One could convert the above into this JSON Object (or any other format):

["keytool", {"flag": "list"}, {"flag":"keystore", "value":"keystore.p12"}]


> Parsing commands should be the responsibility of the shell and all the program should do is check for the existence of flags and values.

I would love to see more consistency in command line parsing, and I think argument splitting has helped add consistency for Unix-likes compared to Windows, but going further has some major challenges:

getopt(3) parsing currently depends on the accepted options. For example, the parsing of `command --option value` depends on whether --option requires an argument. Similarly for `command -cmyfile` which depends on whether -c requires an argument (or -m if -c doesn't, etc.).
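A quick demonstration of that dependence with Python's getopt module (which mirrors getopt(3)): the very same argv parses differently depending on whether -c is declared to take an argument:

    import getopt

    argv = ["-cf"]

    # -c takes an argument, so "f" is that argument:
    print(getopt.getopt(argv, "c:"))   # ([('-c', 'f')], [])

    # -c and -f are both plain flags, so "-cf" is a cluster of two options:
    print(getopt.getopt(argv, "cf"))   # ([('-c', ''), ('-f', '')], [])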

For argument's sake, let's say the shell implements the One True CLI Option Syntax™ and passes the parse result to programs. What are the big advantages over the status quo? A stronger encouragement for consistency than getopt(3)? Slightly less work for new option parsing libraries? Unfortunately, most of the code for option handling, which parses and/or validates options and option arguments, prints errors, prints --help, etc., would likely remain largely unchanged.

However, in the same vein, one area where I would love to see more standardization (although probably not at the OS level) is a more expressive way to declare supported options and arguments. There is all sorts of information beyond the `struct option *` table passed to getopt_long(3) which would be useful for parsing/validation, documentation (--help and man pages), and shell autocompletion that is currently ad-hoc.


The goal of the OS is to have the minimal support to do its job and get out of the way. Cramming more features into the OS just increases the complexity unnecessarily.

If you need a database, why not just use a database? Why force the solution on the file system?

Microsoft had WinFS 20 years ago, which is exactly what you're asking for, but it never caught on because the complexity in the kernel wasn't worth it.


https://github.com/apache/arrow/tree/master/format

Those are your in-memory formatted structures. Even better, there are libraries in many languages for accessing them. As such you can consider the data almost as an FFI between languages. Metadata can also be attached to the Arrow data.

As far as the rest goes, A2/Bluebottle defines that standard program unit (similar to COM etc.), it has an object shell similar to Oberon or PowerShell, so it looks to be pretty much what the author is looking for.

Having said that, I'd prefer an OS with managed memory but no GC as per Composita.

https://github.com/btreut/a2

http://www.composita.net/Composita.html


Interesting, thanks for pointers.


I agree especially with the proposition that we should exchange structured data (via files, network connections, ...).

I started this project several years ago to support live code updates in production systems without sacrificing performance, but I eventually hit the same issues with exchanging structured data: https://github.com/Morgan-Stanley/hobbes

I split out structured logging into a single header-only lib, structured RPC into a single header-only lib, etc. It's not quite the same as having native support in the OS, but because the mechanism is so lightweight (don't even need to link anything special), it's a pretty good substitute.


The most practical way to give these ideas an opportunity to grow (IMO) is to start with code. Rather than code being stored as text, we store it as "objects" (as described in the article). To the extent that you are already using high-quality tooling, you already get many of these benefits, with the unfortunate addition of tons of parsing code duplicated between your compiler/interpreter, IDE, text editor (syntax highlighting), and other tools (refactoring, etc.), as well as even more duplicate code covering the mechanics of the internal model of the language(s) you are using (linters, intellisense, etc.).


In the case of the Linux kernel, the internal APIs are much nicer than the user space POSIX API.

People love to praise "everything is a file", but the other side of the coin is "every piece of configuration is a byte array".


Microsoft had a project to make the FS a database. It failed, didn't it ?


WinFS - it had some interesting ideas although I can see why it was abandoned:

https://en.wikipedia.org/wiki/WinFS


A lot of these criticisms are about the OS not doing enough high level stuff for the programmer. If you want to see what happens when the OS does all that, have a look at Android, where the "OS" is what's giving you sockets, key value stores, structured logging, etc. because the only real APIs it's giving you are the Java ones. Now look at the horrible workarounds you need to apply to make old code compatible with newer versions as demands changed over the years to focus more on user privacy and battery life.

For nearly every problem encountered here there's a standard people just ignore. You can implement structured logging at OS level, like Windows does, and see random binary data drops and text files appear all over your logging interface and your file system. Linux config should follow the XDG standard so that configuration for applications can be structured into directories except half the programs don't do that. Windows provides an API for this stuff (the registry) and it's near impossible to configure an application using the registry. The registry is also combined with random config files all over the system, of course, to make it extra difficult to modify a program's behaviour.

Most nice modern features are also missing because operating systems are still being made in C, not C++. The OS doesn't have a concept of objects so yes your socket function will have to fall back to calling select(). A lot of these abstractions for basic communications have been solved on Linux using DBUS, which provides a somewhat standardised interface for many OS daemon and GUI features, all not being used because programmers forget that it exists or because programmers want to use their own solution instead.

On Windows there's COM to try and help with that and well, see where that ended up: a versioned mess of pointers and factories to try to make it easier for everyone, where functionality sometimes completely breaks or needs to be emulated because it turned out the high level concept had a design flaw and now programs won't work if you fix it.

I've done some thought experiments about structuring an OS and a file system to store data consistently and in an easily parsable form, with modern bindings for most concepts. In the end I've had to conclude that the only way to keep the system working as intended would be to either convince everyone to do exactly as I say or to only allow me to write software for such a system. Whatever structure I can think of will inevitably be too constrained for someone else, and the middleware abstraction problem starts all over again.

My only conclusion is that I want the OS to be as simple as possible with people following common standards when they write applications, such as using YAML/ini/XML configuration with Syslogd logging and XDG directory structures for user data, with the technical abstractions left at the library level. If we can just get that, most of the inconsistency problem would be solved, but even this is too difficult to do in practice as it turns out.


It's probably worth the owner checking out the Lisp Machine, Oberon, and Inferon (A descendent of Plan9 where they improved over the original concept). They say Plan9 is a step in the right direction, but rather than attacking a specific aspect of Plan9, they critique how the concepts were imported into Linux, which seems inherently silly to me.

It's like critiquing a master craftsman's tools not by observing how the master craftsman uses them, but by observing how the apprentice uses them. Sure, some of that criticism of the concepts is valid, but they picked the worst implementation of those concepts to critique.

One of the things mentioned in the critique is how the filesystem in question communicates changes back to the user, and how the user is expected to know what parts of the filesystem to alter. While in the examples given it does seem relatively obscure, they decry the fact that they had to check the manual for the position to write to.

However, both of those would still be the case for the system that they talk about. Let's talk about object-oriented systems, which already exist, in limited (compared to the operating system) forms. You still have to pick up a manual to figure out what keys and functions are relevant to you. In Pharo (which is more or less the system they're after!), you can inspect objects that exist and the methods that exist for them -- however -- often, similarly named functions do different things, which also requires reading a manual. In addition, using these tools as a first-time user, I was overwhelmed by the number of functions available, most of which I could only guess at what their purpose was. Pharo integrates the manual into the system, but the manual is still there!

Standardized error codes are given in structures passed back (as in Erlang) or in exceptions, but there are problems with those too. You still have to figure out what those exceptions or errors mean, and there isn't necessarily a standardized format for those errors. As a developer I've recently been working on writing a Python API. As someone who has little experience with the intimate details of the Python language, I do not always know of the exceptions that exist that I am able to throw, nor which exceptions would be most appropriate. The same problem exists (obscure error values), but in a different form. There are existing libraries that I rely on that implement more exceptions, sometimes these aren't documented, and even with the venerable Requests library, I have still had to crack open the objects that exist to find easier methods of passing data that the Requests library has, and uses, but does not document for external use.

Let's look at the windows hive database. That's a database of keys for values in the operating system. As a random developer, would you be able to open it and figure out what it does? I wouldn't. As a Windows power user I often relied on articles from howtogeek without really fully understanding what the keys were doing (although because of my experience as a systems developer, I could guess, but only after the fact). Again, the same problem ("I know what I want to do, but I do not know how to do it") is exposed in a different form, and the methods and practices of the Microsoft organization make that hard to reach. Yet again, the same problems are there, but in a different form.

I do agree that the shell should handle program arguments; a program could expose keys and values and a shell could read that and tell the user about it. I would be interested to see the problems the Oil shell encounters in its implementation of an object-oriented shell.


I've checked Genera (Lisp Machine).

Oberon is on my todolist, Inferon is new for me.

> They say Plan9 is a step in the right direction, but rather than attacking a specific aspect of Plan9, they critique how the concepts were imported into Linux, which seems inherently silly to me.

Well, I've played with Plan9 several times, but I don't really feel like criticizing it, because I don't really know it that much. The only thing worth criticizing is that it almost feels like objects, but not fully implemented.

I agree that criticism of Plan9 would make more sense from a philosophical point of view, but it wouldn't be authentic.

> However, both of those would still be the case for the system that they talk about. Let's talk about object-oriented systems, which already exist, in limited (compared to the operating system) forms. You still have to pick up a manual to figure out what keys and functions are relevant to you. In Pharo (which is more or less the system they're after!), you can inspect objects that exist and the methods that exist for them -- however -- often, similarly named functions do different things, which also requires reading a manual. In addition, using these tools as a first-time user, I was overwhelmed by the number of functions available, most of which I could only guess at what their purpose was. Pharo integrates the manual into the system, but the manual is still there!

That's true, but you can also do type-checking and infer a lot just from the proper naming of the objects and methods, in the "Clean Code" (book) style.

> Let's look at the windows hive database. That's a database of keys for values in the operating system. As a random developer, would you be able to open it and figure out what it does? I wouldn't.

One of the things that is not strongly captured in this article, and which I have since come to consider increasingly important, is the ability to use reflection on the system. A database that you can't open and figure out what it does is not worth using.


oops! I typo'd, it's Inferno :D


>But why do we still use unstructured format of binary data transmission even today when all communication is structured? Even seemingly binary stuff like streamed audio is structured.

This reminds me of Bluetooth with its billions of different profiles. Want to send FLAC or Opus audio streams over Bluetooth? Tough luck, you'd better use the blessed profiles or make your own proprietary implementation (aptX).


I've imagined what a perfect OS would look like in my opinion;

the primary way of exchanging access is handles. Opening a file is obtaining a handle, which represents both the file and the driver/type behind it.

The file handle has some functions associated with it: write, read, flush, close. You can pass the handle to other processes, either cloned (RO or RW) or as ownership (losing access to it entirely).

A printer has a different handle and different functions.

Programs can be the source of handles, so you can write a program that wraps a file behind a different set of primitives. Or that wraps the raw printer network socket and allows passing a PDF file handle into a function to print the PDF.

And if the printer goes away, the printer wrapper is notified that it's gone away, so it can queue up any prints. Or exit and notify everyone owning a PDF printer handle that the handle is now invalid.

Device files like on Linux wouldn't exist, you only need handles to operate on devices and you abstract over those, if you need it, you can abstract the filetree handle to provide a simulation of the /dev/lp0 file in it.

The filesystem would be a database, split into various tables ("default-configuration", "system-configuration", "user-configuration", "user-homes", "executables", etc.) with some tables merged into a view ("configuration"), so you can completely forget about where a config file is. Files themselves can either be formatted as a table themselves (key-value, json/nested data, etc.) or used as text or binary files. Of course a program can choose to use the FS transactionally, including SERIALIZABLE for your backup programs, that need a very consistent view of the FS.

Every interface would be malleable; a program with higher privilege can override it and change it for programs of lower privilege. Chroot becomes as simple as prepending the chroot path onto accesses to the filesystem functions. This goes down to the scheduler itself being runtime-modifiable, so a user can swap to a different scheduler at runtime or define a scheduler for a specific subset of programs.

If everything can be used, manipulated and introspected freely, then the programmer will have to do minimal work. Why use JSON configs when there is a perfectly good database system shipped with the OS? Why bother with logging when there is an interface for the system logger that supports structured logging and can ship logs to anywhere you want in any format at any detail level?

Sadly, some of these ideas aren't very efficient or are hard to implement. But luckily not impossible. Just not wanted by the greybeards of the industry.
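A toy sketch of the handle idea, just to make the shape concrete (Python-flavoured and entirely hypothetical, not a real API):

    from typing import Protocol

    class Handle(Protocol):
        def close(self) -> None: ...

    class FileHandle(Handle, Protocol):
        def read(self, n: int) -> bytes: ...
        def write(self, data: bytes) -> int: ...
        def flush(self) -> None: ...

    class PrinterHandle(Handle, Protocol):
        def print_pdf(self, pdf: FileHandle) -> None: ...
        def on_disconnect(self, callback) -> None: ...

    def print_report(printer: PrinterHandle, report: FileHandle) -> None:
        # The caller never sees device files or raw sockets, only handles
        # whose capabilities were granted (cloned RO/RW or transferred).
        printer.on_disconnect(lambda: print("printer gone, queueing job"))
        printer.print_pdf(report)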


DOS used to have record oriented file functions. More like some kind of legacy from CP/M.

Guess what, they were abandoned. I don't know, maybe they were not flexible enough?

Does the OP suffer from not invented here? Most of the file formats he describes have free (at least to use) libraries available.

Even for sockets there are countless message passing solutions. He just needs to research a bit and pick one.


I think this misses the point in a lot of ways. There are showstoppingly huge practical issues with schematising data, including standardisation, version control, schema access, schema localisation, schema politics - who owns the definitions? - and many more.

But there is a valid point about data ownership. A lot of current privacy issues and political problems are created by the way that applications own your data. If I want to access a Photoshop file ten years from now, chances are good I'm going to be paying Adobe a fee to do it - likely in the form of a subscription.

This is crippling, because it means Adobe (MS, Apple, Amazon, etc, etc) have a choke point over my own personal access to my work.

Of course I can export documents to some other format, but in principle what I can and can't do with my work is controlled by big corporations.

Open source alternatives don't fix this, because outside of developer tool space they're usually poor competitors and never have the leverage - nor the quality - to become an industry standard.

At some point this data siloing became a "personal" computing principle, which is unfortunate because it undermines the idea of a computer as a personal tool.

OS schematising wouldn't really change this. But forcing open access to the internal file structures used by large corporate applications would be a game changer, because it would allow shell-like automation and composition of data that is currently trapped behind corporate paywalls with either no low-level access at all, or high-friction save/load only import/export features.


ITT Software Developers: This is a great idea! System Admins: Why on earth would you want to do this?

I'm not sure this situation (devs and admins disagreeing) will ever go away. Is there some future where our opinions will converge?


I doubt system administrators are against operating systems having better data structures for configuration or storage. In fact administrators might be more in favor of structure since they are the ones who do the most ad-hoc scripting on machines. Developers can largely insulate themselves from the operating system with frameworks, libraries, and databases.

Anyway, system administrators, outside of BOFH types, and developers generally aren't at odds with each other.


Too bad the author doesn't yet seem to have found https://www.unisonweb.org/ - it really covers some of his points.


I'd go further:

- Everything as text - There's a fundamental problem with turning and treating everything as text: N programs having to know M parsing formats. Structured data that is readily usable without de/serializing would be an improvement.

- Logging - logging is broken because of the previous point, because log entries are streams of events poorly serialized to text files, log rotation and loss of structured data.

- Processes - Processes should be pause-able to disk, migrate-able to other systems (as long as I/O and files can be migrated)

- Indexing - there ought to be a tunable Spotlight-like system that doesn't need to constantly reindex because it indexes everything that was changed using file notifications in the background during idle (not on battery)

- Caching - there ought to be a central web cache on a computer that all web browsers and web operations can share

- Hypercard - we need more of this

- Introspection - the system, processes and every variable should be profile-able, inspectable and queryable

- Databases / files - there really shouldn't be any files or database, every program should be able to offer data types and format transformation services that the OS can use to add new functionality to all other programs and services. The notion of a file should be containers of user data that programs can use directly. IOW, a programming language should have a NoSQL/SQL-like query language built-in so there's no need for ORMs... the OS should handle how best to store and index data with the storage allocated.

- Typed memory / VM - it would be much easier to accomplish all of the above if the OS were a lightweight system similar to Erlang BEAM, Pony, LLVM runtime features.

- Kernel security, isolation and performance - something like seL4 but with additional transactional (N > 3)-process support to be able to make a bunch of changes, validate state and commit/rollback on failure.

- Fragmentation - the problem with systems like Linux, which are developed as zillions of piecemeal, semi-interchangeable parts that duplicate functionality and have many options, is the mess and confusion of trying to integrate them. The BSDs have it right in terms of base-system development.

- Complexity - Look at how much stuff is crammed into the Linux kernel or OpenSSL.

- Churn - Deprecating and changing APIs creates incompatibilities. An OS should pick one set of APIs for all time and make it very difficult to change them. Heck, every OS should have the same API and same VM opcode format so that there's never again an obsolete or incompatible program. There's no point in reinventing the same thing over and over in a way that doesn't work with anything that came before.
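
To make the N-programs-times-M-formats point above concrete, here's a minimal Python sketch (mine, not from the article) contrasting the two approaches: scraping the text output of `ps` versus reading structured, named fields. The second half uses the third-party psutil library purely as a stand-in for "structured data without a parser"; nothing here is specific to any OS proposal in this thread.

    # Sketch only: text scraping vs. structured records for process data.
    import subprocess

    # 1) The "everything is text" way: every consumer writes its own parser
    #    and silently breaks if column order, padding, or locale changes.
    out = subprocess.run(["ps", "-eo", "pid,rss,comm"],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines()[1:]:            # skip the header row
        pid, rss_kb, comm = line.split(None, 2)  # fragile positional parsing
        if int(rss_kb) > 100_000:                # RSS is reported in KB
            print(f"text-parsed: {pid} {comm} ~{int(rss_kb) // 1024} MB")

    # 2) A structured interface: named, typed fields, no parsing step at all.
    import psutil
    for p in psutil.process_iter(["pid", "name", "memory_info"]):
        mem = p.info["memory_info"]
        if mem and mem.rss > 100 * 1024 * 1024:
            print(f"structured: {p.info['pid']} {p.info['name']} "
                  f"~{mem.rss // 2**20} MB")

The point isn't psutil itself; it's that the second loop keeps working no matter how the underlying representation changes, which is exactly what text pipelines can't promise.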


The article mentions Plan 9 but says that it left a lot to be desired due to its unstructured text input/output interface.

I want to note that Inferno, the other OS that came from Bell Labs in the '90s, did have a typed shell called alphabet:

http://www.vitanuova.com/inferno/man/1/sh-alphabet.html

Mayhaps you'll find this interesting.


I've come across this several times in the past, but never paid too much attention to it. I'll check it again. Thanks.


Filesystem as a database is complicated: assume you have a network drive, and suddenly a transaction spans more than one computer.
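
To put a number on "spans more than one computer": even the simplest correct version needs something like two-phase commit, sketched below in Python with entirely made-up class and method names. A single cross-drive write turns into a whole protocol, and a real implementation would also need durable logs, timeouts, and crash recovery that this sketch omits.

    # Toy two-phase commit: the minimum ceremony once a write involves two
    # machines. All names are hypothetical; real systems also need durable
    # logs, timeouts, and crash recovery, which this sketch omits.
    class Participant:
        def __init__(self, name):
            self.name, self.staged, self.data = name, None, {}

        def prepare(self, key, value):
            # Phase 1: stage the write and promise to commit if asked.
            self.staged = (key, value)
            return True  # a crashed or partitioned node simply never answers

        def commit(self):
            key, value = self.staged
            self.data[key] = value
            self.staged = None

        def rollback(self):
            self.staged = None

    def transact(participants, key, value):
        if all(p.prepare(key, value) for p in participants):  # phase 1
            for p in participants:                            # phase 2
                p.commit()
            return True
        for p in participants:  # any "no" vote (or silence) aborts everywhere
            p.rollback()
        return False

    local = Participant("local-disk")
    remote = Participant("network-drive")
    print(transact([local, remote], "/home/user/notes.txt", b"contents"))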


Did anyone else have trouble reading this because of the bizarre automatic font sizing of the page?

I wish sites wouldn't try annoying tricks like this.


What device did you use to read the page? The CSS is pretty basic, but I did some hacks to make the text more readable on iPhone.


Not OP, but it's practically unreadable on mobile, with a thin noodle of text covering the left half of the screen.


I read it on my Android phone with Firefox, no problems. Only some unused white space to the right and no margin to the left. Same with Chrome.


I'll look into the issues. Most of the CSS is just an export from Notion.so and I haven't tuned it much yet.


There are a bunch of things that don't really make sense in modern software. For example, we now commonly package things up as docker containers with immutable/ephemeral file systems. For better or for worse, that's how a lot of server-side stuff gets deployed.

The whole point of that is simply emulating what the program is expecting to such an extent that it can run as if it was running on a normal file system. Often that means working around many broken assumptions. For example it might expect some configuration in a particular place in the filesystem. However, since the filesystem is immutable, you can't really modify that after you build the docker container. So you work around it by e.g. using dockerize to template the config file and then inject the actual configuration from the environment variables on the docker command line. Likewise, applications produce logs but writing those to an ephemeral file system is kind of pointless; so you instead write to stderr/stdout and leave it to the host OS, Docker, Kubernetes, or whatever is running your container.
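
For what it's worth, the workaround usually boils down to something like the following Python sketch: configuration read from environment variables that `docker run -e ...` injects, and logs written as JSON lines to stdout for the container runtime to collect. The APP_* variable names are invented for the example.

    # Generic 12-factor-style sketch: config from the environment, logs to
    # stdout. APP_DB_URL and APP_LOG_LEVEL are made-up names.
    import json, logging, os, sys, time

    DB_URL = os.environ.get("APP_DB_URL", "sqlite:///tmp/dev.db")
    LOG_LEVEL = os.environ.get("APP_LOG_LEVEL", "INFO")

    handler = logging.StreamHandler(sys.stdout)  # never a file on the ephemeral FS
    handler.setFormatter(logging.Formatter("%(message)s"))
    log = logging.getLogger("app")
    log.addHandler(handler)
    log.setLevel(LOG_LEVEL)

    def emit(event, **fields):
        # One JSON object per line: structured enough for any log aggregator.
        log.info(json.dumps({"ts": time.time(), "event": event, **fields}))

    emit("startup", db_url=DB_URL)

Run it with something like `docker run -e APP_DB_URL=... image` and the host-side tooling (Docker, Kubernetes, etc.) takes care of shipping the log stream somewhere durable.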

These workarounds are something to avoid if you are writing new software. When you ship as a Docker container, you don't care about the host OS. It might be Linux, BSD, Mac, Windows, or whatever else is capable of running docker containers these days. Chances are that the host OS itself is also running in some VM if you are shipping to a cloud environment. The nice thing with Docker is that you (mostly) don't have to care about any of that.

With WASM this is extending to other places. People are running edge computing functions in WASM form, running WASM programs in a browser, or even in operating system kernels. Most of those places don't necessarily even have filesystems, environment variables, or log files that you can access (or should access).

IMHO modern development requires a bit of upfront thinking on how to configure things, where to send logs, and where to store data. Rarely is the answer for any of those things a file on an (ephemeral) disk. Logs need to be aggregated so they can be analyzed. Local storage is not always available/reliable. Configuration gets injected rather than loaded from a file. Files/state gets written to some specialized service (a networked file bucket, some DB, a queue, REST service, etc.). Most of those things are accessed via networking rather than a file system.

This is also true of most frontend development these days. A browser application does not have a filesystem; it has no access to environment variables; and while it has a console, it's considered bad form to actually log to it in production apps. Instead browser apps write to remote services (including logging and analytics data) and get their configuration from things like cookies, in browser databases, and remotely stored user preferences.

Come to think of it, most modern development is kind of detached from the operating system these days.


I think Android and iOS provide the data structures the author is looking for, e.g. queues, scheduled tasks, relational storage, and key-value stores, mostly typesafe.


Good luck using one as a regular workstation (without resorting to classic Unix, at which point we're back where we started).


My brain hurts from this article. I really tried to make some sense out of it, but man, it seems to me that the author has very little knowledge of how some things work.



