Part of the background for this entire dispute is that prior to the OSI's founding, "open source" was a generic phrase which was broadly understood to just mean "the source code is available". See many documented cases in https://dieter.plaetinck.be/posts/open-source-undefined-part...
So it's a bit ironic to argue that terms cannot be redefined, when that's already what happened with "open source" and what got us here in the first place. If OSI had chosen a novel term (e.g. "Sourceware" was one option they considered), they would have been able to trademark it and avoid this entire multi-decade-long argument.
Your first link (Canada/BC) offers guidelines for BC government usage of open source software. In this type of situation, the OSI's list of approved licenses (and OSD in general) is very helpful, since it avoids massive duplicative legal overhead of evaluating software licenses. But in my opinion, that has little bearing on whether or not people should strictly follow this definition in an international public forum.
As far as I can see, your second link (applies to all EU member states) makes no mention of the OSI whatsoever, and uses a definition that is far briefer and less specific than the OSD.
I cannot evaluate the third link (Germany) as I don't speak German and automatic translation may introduce subtle changes.
They applied for a trademark and were rejected due to the term being too generic/descriptive. It has nothing to do with whether they hold IP.
That list doesn't appear to be "legally binding" in a general sense; the way you worded that implies "there is a law saying the OSD is the definition of open source in this country," which is very far from the case.
Instead, the list appears to cover specific cases/situations, e.g. how some US states evaluate bids from vendors, or how particular government organizations release software. And many entries on that list are just casual references to the OSI/OSD, not laws at all.
I didn't say a trademark isn't a form of IP. I said their application for a trademark was rejected due to "open source" being too generic/descriptive, not due to the reason you directly asserted above ("They don't hold the IP, I don't really see any way they could be granted a trademark on it").
Arguably it is, in the sense that they didn't actually invent the term; there are many documented pre-OSI uses of "open source" (including by high-profile folks like Bill Joy) to mean simply "source available". And the OSI's attempt to trademark the term was rejected.
> if you don’t welcome outside contributions, it isn’t open source
That isn't even part of the OSI's definition, so what are you basing this on?
There's actually a potential solution here, but I haven't personally tested it: transportable tablespaces in either MySQL [1] or MariaDB [2].
The basic idea is that it lets you take pre-existing table data files from the filesystem and use them directly as a table's data. So with a bit of custom automation, you could pre-export fixture table data files, copy them at the filesystem level, and import them as tablespaces before running each test. The key step is making that filesystem copy fast, either by keeping it in memory (tmpfs) or by using a copy-on-write filesystem.
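If it helps, here's a rough sketch of the flow on the MySQL side (untested by me, per the caveat above; the table name t1 and the file-copy steps are placeholders, but the statements themselves are standard InnoDB transportable-tablespace DDL):

    -- one-time export of the fixture data (on a source instance):
    FLUSH TABLES t1 FOR EXPORT;
    -- ...now copy t1.ibd and t1.cfg out of the datadir to a staging area...
    UNLOCK TABLES;

    -- per-test import (the target table must already exist with the same schema):
    ALTER TABLE t1 DISCARD TABLESPACE;
    -- ...copy the staged t1.ibd/t1.cfg into the schema directory (fast on tmpfs/CoW)...
    ALTER TABLE t1 IMPORT TABLESPACE;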
If you have a lot of tables, though, this might not be much faster than the 0.5-2s figure cited above. IIRC there have also been some edge cases and bugs in the transportable tablespace feature over the years, but I'm not up to speed on its status in recent MySQL or MariaDB.
RocksDB / MyRocks is heavily used by Meta at massive scale. For the sake of comparison, what's the largest real-world production deployment of bcachefs?
We're talking about database performance here, not deployment numbers. And personally, I don't much care what Meta does, they're not pushing the envelope on reliability anywhere that I know of.
Many other companies besides Meta use RocksDB; they're just the largest.
Production adoption at scale is always relevant as a measure of stability, as well as a reflection of whether a solution is applicable to general-purpose workloads.
There's more to the story than raw performance anyway; for example, Meta's migration to MyRocks was motivated by its superior compression relative to the alternatives.
Postgres's strategy has traditionally been to focus on pluggable indexing methods which can be provided by extensions, rather than completely replacing the core heap storage engine design for tables.
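As a small illustration of that model (using the bloom index extension that ships in Postgres contrib; the table and columns are just examples):

    -- the extension supplies a new index access method,
    -- not a replacement table storage engine:
    CREATE EXTENSION bloom;
    CREATE TABLE items (a int, b int);
    CREATE INDEX items_bloom_idx ON items USING bloom (a, b);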
That said, there are a few alternative storage engines for Postgres, such as OrioleDB. However due to limitations in Postgres's storage engine API, you need to patch Postgres to be able to use OrioleDB.
MySQL instead focused on pluggable storage engines from the get-go. That has had major pros and cons over the years. On the one hand, MyISAM is awful, so pluggable engines (specifically InnoDB) are the only thing that "saved" MySQL as the web ecosystem matured. It also nicely forced logical replication to be an early design requirement, since with a multi-engine design you need a logical abstraction instead of a physical one.
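For a concrete sense of what pluggable engines look like in practice (the engine names are real; the tables are just examples):

    -- engines are chosen per table, so replication has to work at a
    -- logical level; the on-disk formats have nothing in common:
    CREATE TABLE old_cache (id INT PRIMARY KEY) ENGINE=MyISAM;
    CREATE TABLE orders (id INT PRIMARY KEY) ENGINE=InnoDB;
    SHOW ENGINES;  -- lists the engines available in this server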
But on the other hand, pluggable storage introduces a lot of extra internal complexity, which has arguably been quite detrimental to the software's evolution. For example: which layer implements transactions, foreign keys, partitioning, internal state (data dictionary, users/grants, replication state tracking, etc.)? Often the answer is that both the server layer and the storage engine layer ideally need to care about these concerns, meaning a fully separated abstraction between layers isn't possible. Or consider transactional DDL, which is prohibitively complex in MySQL's design, so it probably won't ever happen.
(I believe) OP's point is about a company being global in terms of its number of users, not just its geography. If you have single-digit thousands of users or fewer, you still don't need those optimizations even if those users are located all around the world.
Non-relational databases existed in the 60s, and many programmers who worked in the 60s presumably continued working into the 70s, so either way I don't see any problems with the timeline GP mentions.
Sure, I never claimed that relational databases were the first ones. I was confused because they were describing a specific timeline that differed from the article's, but mentioned only a single time period, one that didn't seem likely to be what they intended, so I thought it was worth clarifying which period they meant. Maybe it's just me, but it seems surprising not to be explicit about the very information they claimed the article glossed over.
Your docs say live queries for MySQL and MariaDB are "coming soon", but your post here strongly suggests they're already supported. Is this actually implemented yet or not?
Thanks for spotting that. To clarify: the current production implementation of Live Queries is Postgres-only.
MySQL/MariaDB support is in progress (binlog-based), which is why the docs say “coming soon.”
The post wasn’t meant to imply that MySQL/MariaDB are already live; the intention was to describe the overall design rather than claim full parity. I’ll update the wording to avoid that confusion.