Exactly. This is like buying a smoothie blender when you already have an all-purpose mixer-blender. This whole space is at best an open-source project, not a whole company (let alone several!).
It's very unlikely that any of these tools are getting better results than simply prompting verbatim "review these code changes" in your branch with the SOTA model du jour.
What we need is a programming language that defines the diff to be applied to the existing codebase with the same degree of unambiguity as the codebase itself.
That is, in the same way that event sourcing materializes a state from a series of change events, this language needs to materialize a codebase from a series of "modification instructions". Different models may materialize a different codebase from the same series of instructions (like different compilers), or under different "environmental factors" (e.g. which database or cloud provider is available). It's as if the codebase itself is no longer the important artifact; the sequence of prompts is. You would also use this sequence of prompts to generate a testing suite completely independent of the codebase.
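To make the event-sourcing analogy concrete, here's a toy sketch (the types and events are made up; it's just the shape of the idea): the stored artifact is the event log, and the current state only ever exists as a fold over it. Swap the apply function for a model and the events for prompts, and you get the codebase-as-materialized-view picture.

```python
from dataclasses import dataclass

# Toy event-sourcing sketch: the log is the source of truth,
# state is derived by replaying (folding over) the log.
@dataclass
class Account:
    balance: int = 0

def apply(state: Account, event: dict) -> Account:
    if event["type"] == "deposit":
        return Account(state.balance + event["amount"])
    if event["type"] == "withdraw":
        return Account(state.balance - event["amount"])
    return state

def materialize(events: list[dict]) -> Account:
    state = Account()
    for e in events:
        state = apply(state, e)
    return state

log = [{"type": "deposit", "amount": 100}, {"type": "withdraw", "amount": 30}]
print(materialize(log))  # Account(balance=70)
```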
I am working on that https://github.com/gritzko/librdx
Conflictless merge and overlay branches (i.e. freely attachable/detachable with a click). That has been the pie in the sky of the CRDT community for maybe 15 years. My current approach is an RDX tree CRDT that effectively maps onto the program's AST. Think CRDT DOM for the AST, because line-based diffs are too clumsy for that.
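A drastically simplified sketch of the flavor (leaving out child ordering, moves, and the actual RDX types, which is where the real work is): every AST node gets a stable identity and per-field last-writer-wins stamps, so concurrent edits to different parts of the tree merge without ever producing a textual conflict.

```python
from dataclasses import dataclass, field

# Toy sketch only: one CRDT element per AST node, identified by a stable id
# rather than by line/column, with last-writer-wins (timestamp, replica)
# stamps per field.
@dataclass
class AstNode:
    node_id: str                                # stable identity, survives reformatting
    kind: str                                   # e.g. "FunctionDef"
    fields: dict = field(default_factory=dict)  # name -> (value, (ts, replica))

def merge(a: AstNode, b: AstNode) -> AstNode:
    assert a.node_id == b.node_id
    merged = dict(a.fields)
    for name, (val, stamp) in b.fields.items():
        if name not in merged or stamp > merged[name][1]:
            merged[name] = (val, stamp)          # higher (ts, replica) wins
    return AstNode(a.node_id, a.kind, merged)

# Replica A renames the function, replica B adds a return type; both survive.
a = AstNode("n42", "FunctionDef", {"name": ("fetch_user", (2, "A"))})
b = AstNode("n42", "FunctionDef", {"name": ("fetch", (1, "B")), "returns": ("User", (1, "B"))})
print(merge(a, b).fields)
```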
Back in the day, JetBrains tried revision-controlling AST trees, or PSI nodes in their parlance. That project was cancelled as it became a research challenge; that was 10 years ago or so. At this point things may work out well, time will tell.
Just a clarifying question to check whether I understand librdx correctly: it seems you've implemented a language explicitly designed to be easy to use as a CRDT for syncing purposes (with specific types/structures for communicating just the changes too?), rather than taking an existing language and layering that stuff on top?
It depends on what you mean by a language. If JSON, then yes. There are ways to implement CRDTs by layering on top of JSON (see Automerge), but the result is really far from the idiomatic, readable JSON people expect to see.
RDX is more like CRDT JSON DOM in fact, not just JSON+. If that makes sense.
Ah, I see. You mean they were trying to build a custom VCS that had special support for AST merging. MPS uses regular git with custom merge drivers to do AST-level merging instead of textual merging, but that's a bit different.
I think this could be very useful even for regular old programming. We could treat the diffs to the code as the main source of truth (instead of the textual snapshot each diff creates).
Jonathan Edwards (Subtext lang) has a lot of great research on this.
You can literally have a 20-line Python script on cron that verifies everything ran properly and fires off a PagerDuty alert if it didn't. And it looks like PagerDuty even supports heartbeats, so even if your Python script itself failed, you'd still get alerted.
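Something like this for the cron side (the routing key, the output path, and what "ran properly" means are all placeholders; the PagerDuty call is their Events API v2 trigger):

```python
#!/usr/bin/env python3
"""Cron sanity check: verify last night's job produced fresh output,
page via PagerDuty's Events API v2 if it didn't."""
import json
import time
import pathlib
import urllib.request

ROUTING_KEY = "YOUR_PAGERDUTY_ROUTING_KEY"          # placeholder
OUTPUT = pathlib.Path("/data/exports/nightly.csv")  # placeholder

def job_ran_properly() -> bool:
    # Whatever "ran properly" means for you; here: output exists and is < 24h old.
    return OUTPUT.exists() and (time.time() - OUTPUT.stat().st_mtime) < 24 * 3600

def page(summary: str) -> None:
    body = json.dumps({
        "routing_key": ROUTING_KEY,
        "event_action": "trigger",
        "payload": {"summary": summary, "source": "cron-checker", "severity": "error"},
    }).encode()
    req = urllib.request.Request(
        "https://events.pagerduty.com/v2/enqueue",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)

if __name__ == "__main__":
    if not job_ran_properly():
        page("Nightly export missing or stale")
    # The heartbeat is the inverse guard: on every successful run this script would
    # also ping a heartbeat / dead-man's-switch endpoint, so silence from the script
    # itself becomes an alert too.
```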
>There is already partisan support within those provinces, and Trump is going to offer money to push it. If that happens, Yukon and the Northwest Territories are next. (Side note: these are Republican voters, which gives Republicans the Senate for years to come.)
Disagree.
1. If any Canadian province becomes an American state (with electoral votes), the Republicans won't win an election for the next 100 years. Even if it's Alberta.
2. Alberta likely won't secede unless they get full statehood. Nobody wants to be another Puerto Rico.
3. I think if you did a referendum in Alberta today (even with full US statehood on offer), the votes to secede would barely clear 10%.
Remember, Quebec in 1995: 50.58% voted to stay, with a turnout of 93.52%. And they were all but ready to leave to the point of engaging in IRA-style terrorism.
Also, the famous failure of Brexit all but precludes any such referendum from gaining serious momentum in our lifetimes.
The FLQ killing two politicians (one of them accidentally*) is very far removed from the scope of the IRA's terrorism. They were infiltrated to the bone by the RCMP, which was trying to get them to escalate so that the War Measures Act could be invoked and a massive intimidation campaign waged against the largely peaceful and liberal part of the independence movement, something quite reminiscent of what is currently happening in Minneapolis.
*They did kidnap him, but they didn't intend to kill him; they were dumb, rebellious teenagers who fucked up.
Such a terrible business decision considering the crashes and their impact on Boeing's reputation. If you think a feature will keep the product from catastrophic failure, it should be standard on every unit you sell.
Assuming it's the US we're talking about, the federal minimum wage is $7.25/hour, which means that if every worker involved were paid minimum wage, you'd incur a cent of labour cost every 4.97 person-seconds. AFAICT, most Amazon workers are paid substantially more than the federal minimum wage. And that's just labour costs.
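The arithmetic, in case anyone wants to check the 4.97 figure:

```python
# How many person-seconds of federal-minimum-wage labour does one cent buy?
wage_dollars_per_hour = 7.25
cents_per_second = wage_dollars_per_hour * 100 / 3600
print(1 / cents_per_second)  # ~4.97 person-seconds per cent
```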
While Amazon is efficient, "fractions of a cent" is probably the wrong order of magnitude for even the most efficient order.
You might be a 140-mile round trip from the nearest fulfilment centre, but you're almost certainly closer to your nearest neighbours who regularly buy stuff from Amazon, so the van is probably coming pretty close to you anyway.
Exactly. AI is of minimal use for coding something you couldn't have coded yourself, given enough time, not counting time spent on generic learning that isn't specific to that codebase or task.
Although calling AI "just autocomplete" is almost a slur now, it really is just that, in the sense that you need to A) have a decent mental picture of what you want, and B) recognize a correct output when you see it.
On a tangent, the inability to identify correct output is also why I don't recommend using LLMs to teach you anything serious. When we use a search engine to learn something, we know when we've stumbled on a really good piece of pedagogy through various signals: information density, logical consistency, structure and clarity of thought, consensus, reviews, the author's credentials, etc. With LLMs we lose those critical-analysis signals.
While you're correct, I truly believe the velocity offered outweighs this consideration for 90% of application teams and startups. I've personally never worked in a clean codebase, and I was convinced long ago that they're mythical. I don't see an issue with an LLM spitting out bad / barely maintainable code, because that's basically every codebase I've ever seen in production.
Although your criticisms are totally contrived, if it makes you feel better, nobody considers this real programming. Things like v0 are just an evolution on the no-code front, which has some utility but will always remain a niche without possibility of serious scale.
I would guess it's because people blamed the device/OS manufacturer when their device got infected with malware (which is almost always due to user error).
Through the '00s, Apple practically built their reputation on being "virus-free", which really just meant they locked the user out of doing anything too extreme.