I don't think Austral uses second-class references. Even the page you linked says:
> But is it worth it? Again, the tradeoff is expressivity vs. simplicity... Austral’s linear types and borrowing is already so simple. Austral’s equivalent of a borrow checker is ~700 lines of OCaml. The only downside of Austral is right now you have to write the lifetimes of the references you pass to functions, but I will probably implement lifetime (region) elision.
The whole post was about how that doesn't solve the problem perfectly - there is no way to interactively edit the output.
> by embracing LISP principles directly
This could just as easily have been JavaScript+JSON or Erlang+BERT. There's no Lisp magic. The core idea in the post was just finding a way for code to edit its own constants so that I don't need a separate datastore.
Eventually I couldn't get this working the way I wanted with Clojure and I had to write a simple language from scratch to embed provenance in values - https://news.ycombinator.com/item?id=43303314.
This is an old prototype. I ended up making a language for it from scratch so that I could attach provenance metadata to values, making them directly editable even when far removed from their original source.
"Database query compilation: our journey" by Thomas Neumann & Viktor Leis
"A YJIT interview" by Maxime Chevalier-Boisvert
"A quick ramp-up on ramping up quickly" by Iain Ireland
"Can we democratize JIT compilers?" by Haoran Xu
"Safe and productive performance with user-schedulable languages" by Jonathan Ragan-Kelley
1000-1100: Understanding programs
"How debuggers work" by Sy Brand
"Debugging compiler-optimized code: how it works and doesn't" by Stephen Kell
"Side-Eye: ask your programs anything" by Andrei Matei
"Let's run a million benchmarks" by Yao Yue
"Rocket science of simulation testing!" by Alex / matklad
1100-1200: Wild ideas
"Back to modularity" by Daniel Jackson
"DB usability: as if" by Jonathan Edwards
"Twizzler and far out memory sharing: precise abstractions" by Daniel Bittman
"Programming without pointers" by Andrew Kelley
"Throwing it all away - how extreme rewriting changed the way I build databases" by Tyler Neely
1200-1230: Programmers are people
"A case for feminism in programming language design" by Felienne Hermans
"Malloy, mic drop, peace!" by Michael Toy
1230-1300: Lightning talk buffet
"Learning about the odd bits of SQL by reading the PostgreSQL docs" by Chris Zetter
"Hacking Observable notebooks from within" by Tom Larkworthy
"Zero copy data structures" by Evan Chan
"Reliable serverless needs distributed transactions" by Stu Hood
"Pangeo is a database" by Alexander Merose
"Rubbing a database on a language server" by Philip Zeyliger
"Language agnostic simulation testing on a budget" by Stevan A
"Shapeshifter: using LLMs inside a database for schema flexibility" by David Nachman
"Why S3's conditional writes made people excited" by Miikka Koskinen
"pghttp: backend-free, lowest latency web apps" by Damir Simunic
1300-1400: Query languages
"Pipe syntax in SQL; it's time" by Jeff Shute
"PRQL: a modern, pipelined SQL replacement" by Tobias Brandt
"AquaLang - a streaming dataflow programming language" by Klas Segeljakt
"A polymorphic data model for SQL using algebraic types" by Steve McCanne
1400-1430: Databases
"Use of time in distributed databases - don't fall behind the times" by Murat Demirbas
"Enough with all the Raft" by Alex Miller
"Database ideas in Convex" by Thomas Ballinger
"Serverless primitives for the shared log architecture" by Stephen Balogh
"Good thing we're not writing a database" by Peter van Hardenberg
1430-1500: Wasm
"Thinking in Wit" by Dan Gohman
"Bringing the WebAssembly standard up to speed with SpecTec" by Dongjun Youn
It matches my anecdotal experience with Graal. It's a huge, complex machine with a lot of surface area for weird performance bugs.
> TruffleRuby beats cruby on the same benchmark by 4.38x
That benchmark only looks at peak performance, which TR unarguably dominates at, and ignores other important considerations like warmup time. Here's another railsbench (https://i.imgur.com/IzMRKdQ.png) from the same paper (https://dl.acm.org/doi/pdf/10.1145/3617651.3622982). TR achieves the best peak performance, but only after 3 minutes of warmup and it starts out 17x slower than cruby. If every time you deploy new code your latency spikes by 17x for 3 minutes, that's gonna be painful.
I don't know of anyone that is using TR in production. Shopify were doing some work on it for a while, but they've since developed and shipped yjit so I don't know if they still have any investment in TR.
The tradeoff in SingleStore is interesting. By default, and unlike eg postgres, it plans parameterized queries ignoring the values of the parameters. This allows caching the compiled query but prevents adapting the query plan - for the example in the post SingleStore would pick one plan for both queries.
If it ignores the parameter values, then how does it estimate cardinality? Does it optimize for the worst case (which risks choosing join implementations that pull far more data from other tables than makes sense when the parameters only match a few rows in the base table), or does it assume a row volume for a more average parameter, which risks long running times or timeouts if the parameter happens to be an outlier?
Consider that Microsoft SQL Server, for example, caches query plans ignoring the parameter values (and will auto-parameterize queries by default), but it does use the parameter values to estimate cardinalities (and thus join types, etc) when first creating the plan, under the assumption that the first parameters it sees for a query are reasonably representative of future parameters.
That approach can work, but if the first query it sees has outlier parameter values, it may cache a plan that is terrible for more common values, and users may have preferred the plan for the more typical values. Of course, it can be the reverse, where users want the plan that handles the outliers well, because that specific plan is still good enough with the more common values.
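A toy sketch of that failure mode (the planner, thresholds, and selectivity numbers here are all made up for illustration, not SQL Server's actual behavior): the plan is chosen from the first parameter values seen and then reused, so an outlier first call poisons later typical calls.

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, Copy, PartialEq)]
enum Plan {
    IndexSeek,
    FullScan,
}

// Hypothetical cost rule: seek the index only when the estimated match
// count is a small fraction of the table.
fn choose_plan(estimated_rows: u64, total_rows: u64) -> Plan {
    if estimated_rows * 20 < total_rows {
        Plan::IndexSeek
    } else {
        Plan::FullScan
    }
}

struct PlanCache {
    cached: HashMap<String, Plan>,
}

impl PlanCache {
    // "Parameter sniffing" style: the plan is built from the FIRST
    // parameter's estimate and then reused for every later call.
    fn plan_for(&mut self, query: &str, estimated_rows: u64, total_rows: u64) -> Plan {
        *self
            .cached
            .entry(query.to_string())
            .or_insert_with(|| choose_plan(estimated_rows, total_rows))
    }
}

fn main() {
    let mut cache = PlanCache { cached: HashMap::new() };
    let total = 1_000_000;
    // First call arrives with an outlier parameter matching half the table,
    // so a FullScan plan gets cached...
    let p1 = cache.plan_for("SELECT ... WHERE status = ?", 500_000, total);
    // ...and a later, typical call matching ~10 rows is stuck with it,
    // even though choose_plan would have picked IndexSeek for it.
    let p2 = cache.plan_for("SELECT ... WHERE status = ?", 10, total);
    println!("{:?} {:?}", p1, p2); // FullScan FullScan
}
```

The reverse ordering (typical parameter first, outlier later) caches the IndexSeek plan instead, which is the "users want the plan for typical values" case above.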
On the other hand zig errors can't have any associated value (https://github.com/ziglang/zig/issues/2647). I often find this requires me to store those values in some other big sum type somewhere which leads to all the same problems/boilerplate that the special error type should have saved me from.
If I have multiple errors then that in-out parameter has to be a union(enum). And then I'm back to creating dozens of slightly different unions for functions which return slightly different sets of errors. Which is the same problem I have in rust. All of the nice inference that zig does doesn't apply to my in-out parameter either. And the compiler won't check that every path that returns error.Foo always initializes error_info.Foo.
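For comparison, here's a minimal sketch of what that same problem looks like in Rust (the `ParseError`/`ConfigError` enums and functions are invented for illustration): each function with a slightly different error set gets its own enum, plus a `From` impl so `?` can convert between them.

```rust
// Each function that returns a slightly different set of errors
// needs its own enum...
#[derive(Debug)]
enum ParseError {
    Empty,
    BadDigit(char),
}

#[derive(Debug)]
enum ConfigError {
    Parse(ParseError),
    MissingKey(String),
}

// ...and boilerplate From impls so `?` can convert one into the other.
impl From<ParseError> for ConfigError {
    fn from(e: ParseError) -> Self {
        ConfigError::Parse(e)
    }
}

fn parse_port(s: &str) -> Result<u16, ParseError> {
    if s.is_empty() {
        return Err(ParseError::Empty);
    }
    s.parse()
        .map_err(|_| ParseError::BadDigit(s.chars().next().unwrap()))
}

fn load_port(raw: Option<&str>) -> Result<u16, ConfigError> {
    let s = raw.ok_or_else(|| ConfigError::MissingKey("port".into()))?;
    Ok(parse_port(s)?) // ParseError -> ConfigError via From
}

fn main() {
    println!("{:?}", load_port(Some("8080"))); // Ok(8080)
    println!("{:?}", load_port(None)); // Err(MissingKey("port"))
}
```

Unlike Zig's inferred error sets, the payload travels with the error, but every new combination of errors means another enum and another `From` impl.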
> ...the compiler won't help you check that your function only throws the errors that you think it does, or that your try block is handling all the errors that can be thrown inside it.
It will do both of those:
const std = @import("std");

fn throws(i: usize) !void {
    return switch (i) {
        0 => error.zero,
        1 => error.one,
        else => error.many,
    };
}

fn catches(i: usize) !void {
    throws(i) catch |err| {
        return switch (err) {
            error.one => error.uno,
            else => |other| other,
        };
    };
}

pub fn main() void {
    catches(std.os.argv.len) catch |err| {
        switch (err) {
            // Type error if you comment out any of these:
            // note: unhandled error value: 'error.zero'
            error.zero => std.debug.print("0\n", .{}),
            error.uno => std.debug.print("1\n", .{}),
            error.many => std.debug.print("2\n", .{}),
            // Type error if you uncomment this:
            // 'error.one' not a member of destination error set
            //error.one => std.debug.print("1\n", .{}),
        }
    };
}
It wouldn't hurt to just read the docs before making confident claims.