Exactly, and it's the main reason I've stopped using GPT for serious work. LLMs start to break down and inject garbage toward the end, and usually my prompt is abandoned before the work is complete, so I fix it up manually after.
GPT stores the incomplete chat and treats it as truth in memory. And it's very difficult to get it to un-learn something that's wrong. You have to layer new context on top of the bad information and it can sometimes run with the wrong knowledge even when corrected.
Reminds me of one time asking ChatGPT (months ago now) to create a team logo with a team name. Now anytime I bring up something it asks me if it has to do with that team name. That team name wasn’t even chosen. It was one prompt. One time. Sigh.
So a thing with claude.ai chats is that after long enough they add a long context reminder (LCR) injection on every single turn.
That injection (for various reasons) will essentially eat up a massive amount of the model's attention budget and most of the extended thinking trace if present.
For the models themselves, I haven't really seen lower-quality responses from modern Claudes at long context, but in the web/app the LCR injections make the conversation go to shit very quickly.
And yeah, LCRs becoming part of the memory is one (of several) things that's probably going to bite Anthropic in the ass with the implementation here.
Plugging away with reviews of Generative AI tech with detailed comparisons. I announced the launch on HN a while ago; thought I'd use this month's for a status update.
I just took Qwen-Image and Google’s image AIs for a spin and I keep a side by side comparison of many of them.
Thanks, the 3D asset creators are very interesting. I'm working on an LLM -> CAD tool (for 3D printing), and your post confirms that I should keep my focus, because there are so many other things to do (UV unwrapping!) if you are targeting games, for example.
this happened to my mother-in-law, where she was the crasher.
in North London there is a large Turkish centre that hosts Turkish weddings. She was invited to a wedding there.
Traditionally, the bride and groom stand in the centre of the room and family members line up next to them in a procession.
As you enter the room to reach the bride and groom, you must shake hands, in turn, with all of the people in the procession.
When my mother-in-law eventually got to the bride and groom, she realised that they were strangers. The actual wedding was taking place upstairs at the same time.
There are multiple wedding venues in that particular Turkish Centre.
Changing domain to writing, images, and video: you can see LinkedIn is awash with everyone generating everything with LLMs. The posting cadence has quickened too, as people shout louder to raise their AI-assisted voice over other people’s.
We’ve all seen and heard the AI images and video tsunami
So why not software (yet but soon)??
Firstly, software usually has to function, and AI tool creations often can't deliver that. Lovable/Bolt etc. are too flaky to live up to their text-to-app promises. A shedload of horror debugging or a lottery win of luck is required to fashion a working app out of them. This will improve over time, but the big question is: by enough?
And secondly, as with LinkedIn above: perhaps the standards of the users will drop? LinkedIn readers now tolerate the LLM posts; it is not a mark of shame. Will the same lowering of standards among software users open the door to good-enough shovelware?
I mean, LinkedIn, even before the advent of LLMs, has been the worst and most bullshit-heavy of the social networks. There's a reason that r/LinkedInLunatics exists. "It can write a LinkedIn post" is not necessarily good evidence that it can do anything useful.
Exactly what I wanted to say. LinkedIn was slop before there was AI slop. So that's probably where LLM generated stuff fits the best. That, and maybe Medium.
Even on Medium, you'll sometimes see people who can write properly. LinkedIn is kind of fascinating in that, even before LLMs, everything highly rated there was in that grotesque, almost content-free style beloved by LLMs.
MS Windows has excellent multi-window management with Alt-Tab, Win-Tab, etc. Far superior to others.
I have all my terminals with distinct icons and background colours to tell them apart. The operating system (Windows) does the heavy lifting.
I tried Mac for about five years but missed MS Windows’ “every window can be alt-tabbed to”. Mac has “every app can be command-tabbed to, and therein each app has its own subwindow management”.
> MS Windows has excellent multi-window management with Alt-Tab, Win-Tab, etc. Far superior to others.
If by "others" you mean Mac, okay, but KDE and some other Linux desktops are at least as good as Windows at this out of the box, and much more customisable.
Windows has basic window and desktop management, but I would hardly describe it as excellent. Most tiling window managers would provide those features and then more.
Just don’t minimize the window. Removing a window from the alt-Tab list is basically the only reason to minimize it in the first place on Mac. (Not reflexively minimizing windows does take some time to get used to if you’re coming from Windows, admittedly.)
You can use workspaces for that. For comparison, GNOME on Linux doesn't even support minimizing windows any more; you move windows/apps that you don't want to use right now to a different workspace.
On Windows there are applications that minimize to the tray instead of remaining on the task bar. That’s my most common reason to minimize, so that it disappears from the task bar when not in use.
I have used WindowMaker, and also the original NeXTstep, for years, and WindowMaker's integration with GNUstep apps and its emulation of the NeXTstep interface always felt incomplete.
Do you know if there is a way to quickly switch between only two individual windows in different applications? A very common paradigm for me is swapping between two windows, for example a terminal session for code editing and a browser window for reference. On Windows and most Linux WMs, this is just a quick alt-tab to toggle between the two most recently focused windows. As far as I know, there is no way to do this on macOS without bringing _all_ the windows to the foreground, which is not what I want. This is my #1 complaint about macOS; I'd be so happy if there were just some shortcut I'm missing to accomplish this.
I'm pretty sure that's part of what stage manager is for — you can drag windows in the same stage and they operate how you want — but there's too much manual setup required for me to realistically suggest it as an alternative.
There are a bunch of third-party tools you can use though; [AltTab](1) is free and tries to replicate the Windows experience on Mac. [Raycast](2) has a Switch Windows command which also allows direct access to any window via the keyboard (bind it to alt+tab if you like), amongst many other features.
Control+F4 (‘move focus to the active or next window’) is essentially that. (With the caveat that if your keyboard focus winds up on the menu bar or otherwise not on a window at all, Control+F4 shifts focus back to the window rather than switching windows. The main way to make that happen is with the other Control+F-key shortcuts that no one uses, though.)
If you’re going to use it I’d probably rebind it in the keyboard shortcuts settings.
Longer answer: It can do an okay job if you prompt it certain specific ways.
I write a blog, https://generative-ai.review, and some of my posts walk through the exact prompts I used, with the output there for you to see right in the browser[1]. Take a look for some hand-holding advice.
I personally treat AI helpers as an 'external' internal voice: the voice you have inside your own head when you're assessing a situation. That internal dialogue doesn't get it right every time, and neither does the external version (the LLM).
I've had very poor results with one-stop-shop builders like Bolt and Lovable, and even ran a survey yesterday here on HN asking who had magically gotten them to work[2]. The response was tepid.
My suggestion: paste your HN comment into the tool (OpenAI/Gemini/Claude etc.), prefixed with "A little bit about me", then after your comment add the original coding request. The tool will naturally adopt the approach you are asking for, within limits.
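That prefix trick is just prompt assembly, so here is a minimal sketch of it as a plain chat payload. All names here (`build_messages`, the example strings) are illustrative, not any specific vendor's SDK; the actual API call is omitted.

```python
# Hypothetical sketch of the "a little bit about me" prefix technique:
# front-load personal context, then the actual coding request, so the
# model tailors its approach. Helper and field names are illustrative.

def build_messages(about_me: str, coding_request: str) -> list[dict]:
    """Assemble a single-turn chat payload with personal context first."""
    return [
        {
            "role": "user",
            "content": (
                "A little bit about me:\n"
                f"{about_me}\n\n"
                f"Now, my actual request:\n{coding_request}"
            ),
        },
    ]

messages = build_messages(
    about_me="I treat AI helpers as an 'external' internal voice...",
    coding_request="Help me scaffold a small CLI tool in Python.",
)
```

You would then hand `messages` to whichever chat API you use; the point is only that the model sees your background before the task.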
Q: Has anyone on HN built anything meaningful with Lovable/Bolt? Something that works as intended?
I’ve tried several proofs of concept with Bolt, and every time I just get into a doom loop: a cycle of breakage, with each ‘fix’ resurrecting a previous ‘break’.
I had a trip with my family and used v0 to create an itinerary app with a timeline view of our flights, hotel/airbnb bookings, activities, etc.
It was the only thing I’ve 100% vibe-coded without writing a line of code myself. It worked pretty well. In an earlier era I might have used a shared google doc but this was definitely a better experience.
If you’re looking for things to use lovable/bolt for, I’d say don’t use it for software you otherwise would have written by hand; use it for the software you would never have written at all.
Even better use it to prototype, to play, to fling spaghetti at the wall. If something works but the AI code sucks, rewrite the thing that works by hand.
Currently, my team and I use v0, (and try Lovable, or Bolt) as tools for fast prototyping. Mostly, Product Owners and Architects create functional prototypes to support Epics. We use these prototypes to communicate with stakeholders, suggest solutions, and verify requirements. We discard the code from these tools and sometimes only take screenshots.
I was quite impressed after building my label maker application [1] and StyleX playground [2]. I had some real-world needs, and both were built in Bolt with 99% of edits made through prompts. My tips would mostly center around:
- don't try to fix mistakes, revert and try with an updated prompt. the context seems to get polluted otherwise.
- don't treat it as a black box: inspect the generated code and prompt it to write specific abstractions. don't just tell it what to build, but also how. this is where experienced programmers will get way more mileage.
I built a daily newsletter with myself as the only recipient using v0. It hits the Gemini API and returns a short story based on a historical event from that day in the language(s) that I'm learning, along with a translation, transliteration where applicable, vocabulary list from the story, and grammar tips.
I've had work in the past where I spent way too long building email templates, so having all that done for me, along with the script for sending the mail, was useful. It turned an afternoon project that I probably would have abandoned into an hour project.
With that said, I'm pretty bearish on these platforms, because I think you can't build anything beyond a toy like that. And for toys or short scripts, Claude, Gemini, ChatGPT are usually good enough.
https://generative-ai.review/2025/09/september-2025-image-ge... (non-pro Nano Banana)