I like it, and it indeed neatly shows the power of Lisp. The JS variety (well, the one that I could come up with) is far less elegant, but works [0] (well, mostly). It really shows how the different LLMs stack up; some really cannot get anything right, but something like openai/gpt-4o-mini seems to get it mostly right (8/10).
Similar to others: the unevenness of prompt results makes pricing per message quite tricky. It generated a nice-looking app for me on the first go; I asked for an enhancement and it spent the rest of my free messages trying, and failing, to fix one TS error that came from that enhancement. As this is using one or more of the OpenAI/Anthropic/Google models, I also know that it probably won't be able to actually fix that error without my explicit help (either in code or telling it specifically what to do, which is coding) and will just loop until I burn through whatever plan I have.
I got a cheap one-year account on Replit and it suffers from this too, of course: the first few prompts yield amazing results and then it gets stuck. That is fine for me, as I can fix it myself and by now I have a good feel for which types of errors the current LLMs will just loop on forever, but the per-message pricing model is hard to justify for people who cannot fix it themselves and need to vibe on it: spend 50 bucks/month looping over one error for a day and that's that; pay more, or wait, or eject and go to another tool. I see non-devs (vibe-only devs?) doing the latter, so you never retain clients; they pick the good deals every month and move from one to the other.
So far only Cursor is reasonable (that I know of), because it continues with slower requests after the paid messages run out; you can go on forever.
That was what I tried on the train [0] a few weeks ago. I used Groq to get something very fast, to see if it would work at least somewhat; it gives you a PDF in the end. Plugging in a better model gave much better results (still not really readable if you actually try; at a glance it's convincing though), but it was so slow that testing was kind of impossible. You cannot really have things done in parallel either, because it does need to know what it pushed out before, or at least a summary of it.
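To make that sequential dependency concrete, here is a hedged Go sketch (not the actual tool; generateChapter and summarize are hypothetical stand-ins for whatever model calls it really makes): each chapter prompt needs a summary of everything generated so far, so chapter N cannot start before chapter N-1 is done.

    package main

    import "fmt"

    // generateChapter is a hypothetical stand-in for an LLM call
    // (e.g. via Groq's OpenAI-compatible API).
    func generateChapter(outline, summarySoFar string) string {
        return "chapter for " + outline + " (given: " + summarySoFar + ")"
    }

    // summarize is a hypothetical stand-in for a cheaper summarization call.
    func summarize(chapter string) string {
        return "summary of " + chapter
    }

    func main() {
        outlines := []string{"Chapter 1", "Chapter 2", "Chapter 3"}
        summary := ""
        for _, o := range outlines {
            ch := generateChapter(o, summary) // needs the running summary...
            summary += summarize(ch) + "\n"   // ...so the next chapter cannot start earlier
            fmt.Println(ch)
        }
    }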
I tried to vibe code this (I don't like the term and normally would never use it), as someone here suggested, so here [0]. It did not 'quite' take less than an hour and it's not usable for writing yet (just reading); it does have a working dark mode though.
I only helped out when it was completely stuck (twice); other than that I didn't read or check the code, just talked (voice) ideas to it when it was done, while doing other (non-vibe) coding as it looped away. I did, I guess stupidly, dictate the tech: I wanted to try Go + htmx, which probably didn't help; Python and/or React would probably have led to better results.
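For anyone wondering what that stack looks like, here is a minimal Go + htmx sketch; the routes and markup are made up for illustration and not taken from the generated app.

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    // index serves a page whose button asks htmx to GET /posts and swap
    // the returned fragment into #list.
    func index(w http.ResponseWriter, r *http.Request) {
        page := `<!doctype html><html><body>` +
            `<script src="https://unpkg.com/htmx.org@1.9.12"></script>` +
            `<button hx-get="/posts" hx-target="#list">Load posts</button>` +
            `<div id="list"></div></body></html>`
        fmt.Fprint(w, page)
    }

    // posts returns a plain HTML fragment; the server never ships JSON.
    func posts(w http.ResponseWriter, r *http.Request) {
        fmt.Fprint(w, "<ul><li>first post</li><li>second post</li></ul>")
    }

    func main() {
        http.HandleFunc("/", index)
        http.HandleFunc("/posts", posts)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }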
I, for one, partially wrote a few mobile games with LOAD81; I was travelling and only had my Pandora with me, and I wrote and tested all the logic with LOAD81. Of course I then had to get them onto an actual mobile device and create a better UI, but all the logic and most of the UX was done in Lua on LOAD81.
This is great; I have been thinking about this for a long time. I like reading about past and current implementations that try to improve on SQL, from a programming as well as a data science and performance perspective. I am aware of the ones you linked and some others, like 'Real' SQL (shakti.com) and some enhancements from papers.
just to make them easier to instrument for my experiments. The first one I made because, after trying all the tutorials on haproxy/nginx to make proxying work without the target domain being resolvable (so no DNS entry until it's up), I got annoyed (nothing worked, while everything online and GPT said it should) and just did it like this. It also makes it very easy to throw in my own rules, as complex as I want them, plus logging and telemetry (to whatever I want) at any level.
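A hedged sketch of the idea, not the actual tool: a few dozen lines of Go give you a reverse proxy that maps Host headers straight to backend IP:port pairs, so the public domain needs no DNS entry, and any rule or logging is just ordinary code. The hostname and address below are made up.

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // Host header -> backend; backends are plain IP:port, so the
        // public domain needs no DNS entry yet (addresses are made up).
        backends := map[string]string{
            "app.example.com": "http://10.0.0.5:8080",
        }

        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            target, ok := backends[r.Host]
            if !ok {
                http.Error(w, "unknown host", http.StatusBadGateway)
                return
            }
            // Any rule, logging or telemetry is just ordinary Go from here on.
            log.Printf("%s %s %s -> %s", r.RemoteAddr, r.Method, r.Host, target)
            u, err := url.Parse(target)
            if err != nil {
                http.Error(w, "bad backend", http.StatusInternalServerError)
                return
            }
            httputil.NewSingleHostReverseProxy(u).ServeHTTP(w, r)
        })

        log.Fatal(http.ListenAndServe(":80", handler))
    }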
The second one I needed to test an existing piece of software that I cannot change (they have an old version running that they cannot update at the moment, don't ask) against a Redis Sentinel setup.
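For context, and not the test tool itself: this is roughly what the client side of a Sentinel setup looks like with go-redis v9, i.e. what the legacy software is expected to do. The master name and Sentinel address here are assumptions.

    package main

    import (
        "context"
        "fmt"

        "github.com/redis/go-redis/v9"
    )

    func main() {
        ctx := context.Background()

        // The client asks Sentinel where the current master is and
        // follows failovers; master name and address are assumptions.
        rdb := redis.NewFailoverClient(&redis.FailoverOptions{
            MasterName:    "mymaster",
            SentinelAddrs: []string{"127.0.0.1:26379"},
        })

        if err := rdb.Set(ctx, "probe", "ok", 0).Err(); err != nil {
            panic(err)
        }
        val, err := rdb.Get(ctx, "probe").Result()
        if err != nil {
            panic(err)
        }
        fmt.Println("read back:", val)
    }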
The main thing is that I am going more and more towards having the programming language as config, instead of Greenspun's tenth rule, where everyone over time builds some annoying and limited 'almost (usually esoteric) programming language' as a config language. I want that hard lifting to be done in my native language (Lisp, Go, Rust) and have the config be for IP addresses and ports only.
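A minimal Go sketch of what I mean, with illustrative names only: the "config" is a struct holding addresses and ports, and everything with actual logic in it is plain code.

    package main

    import (
        "log"
        "net/http"
    )

    // Config carries only what actually differs per environment:
    // addresses and ports. Everything else is code.
    type Config struct {
        ListenAddr string
    }

    // rules is where routing, auth, telemetry etc. live, as plain Go
    // instead of a bespoke config language.
    func rules(cfg Config) http.Handler {
        mux := http.NewServeMux()
        mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
            w.WriteHeader(http.StatusOK)
        })
        return mux
    }

    func main() {
        cfg := Config{ListenAddr: ":8080"} // the entire "config"
        log.Fatal(http.ListenAndServe(cfg.ListenAddr, rules(cfg)))
    }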
[0] https://github.com/tluyben/pseudo-js