Is this correct? My assumption is that all the data collected during usage is part of the RLHF loop of LLM providers. That assumption is based on books like Empire of AI, which specifically mention AI providers' intent to further train/tune their models based on usage feedback (e.g., whenever I say the model is wrong in its response, that's human feedback which gets fed back into improving the model).
Design patterns are one of those things where you have to go through the full cycle to really use them effectively. It goes through the stages:
No patterns. -> Everything must follow the Gang of Four's patterns!!!! -> OMG, I can't read code anymore, I'm just looking at factories. No more patterns!!! -> Patterns are useful as a response to very specific contexts.
I remember being religious about the strategy pattern on an app I developed once, where I kept the DB layer separate from the rest of the code so that I could treat data access as a strategy. Theoretically this meant that if I ever switched DBs, it would be effortless to write a new strategy and swap it in via config. I could even run tests against in-memory structures instead of a DB, which made TDD ultra fast.
The DB switchover never happened, and the effort I put into maintaining the pattern was more than the effort it would have taken me to swap a DB out later :,) .
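For anyone who hasn't seen this setup, here's a minimal sketch of that kind of strategy-pattern DB layer in TypeScript. All the names (UserStore, InMemoryUserStore, PostgresUserStore) are hypothetical, not from the original app:

    // One interface, multiple interchangeable "strategies" for data access.
    interface UserStore {
      save(id: string, name: string): Promise<void>;
      find(id: string): Promise<string | undefined>;
    }

    // In-memory strategy: fast, great for TDD, but not a real database.
    class InMemoryUserStore implements UserStore {
      private users = new Map<string, string>();
      async save(id: string, name: string) { this.users.set(id, name); }
      async find(id: string) { return this.users.get(id); }
    }

    // Real-database strategy; the app picks an implementation via config.
    class PostgresUserStore implements UserStore {
      constructor(private pool: { query(sql: string, params: unknown[]): Promise<{ rows: { name: string }[] }> }) {}
      async save(id: string, name: string) {
        await this.pool.query('INSERT INTO users (id, name) VALUES ($1, $2)', [id, name]);
      }
      async find(id: string) {
        const res = await this.pool.query('SELECT name FROM users WHERE id = $1', [id]);
        return res.rows[0]?.name;
      }
    }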
In my experience, there is no in-memory database replacement that correctly replicates the behavior of your database. You need to use a real database, or at least an emulator of one.
For example, I have an app that uses Postgres as the database. I have a lot of functions, schemas, triggers, and constraints in Postgres for modifying database state, because the database is 100x faster at this than my application will ever be. If I had an in-memory version of Postgres, it would need to replicate all of those features, and at that point I really should just be standing up a real database and testing against it.
I have worked with people claiming that unit tests need to run hermetically in-memory, because reasons. Ok, I don't disagree, but if my bug or feature requires testing that the database is modified correctly, I need to test against a real database! Your in-memory mock will not replicate the behavior of a database, *ask me how I know*...
These days, Docker makes this so easy that it's just lazy to not stand up a database container and write tests against it.
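As a rough sketch of what that looks like, assuming the Testcontainers library for Node (@testcontainers/postgresql) and the pg client, with the test-framework wiring left out:

    import { PostgreSqlContainer } from '@testcontainers/postgresql';
    import { Client } from 'pg';

    async function testAgainstRealPostgres() {
      // Spin up a throwaway Postgres container just for this test run.
      const container = await new PostgreSqlContainer().start();
      const client = new Client({ connectionString: container.getConnectionUri() });
      await client.connect();
      try {
        // Real schema, real constraints, real triggers; not a mock.
        await client.query('CREATE TABLE users (id TEXT PRIMARY KEY, name TEXT NOT NULL)');
        await client.query('INSERT INTO users (id, name) VALUES ($1, $2)', ['1', 'Ada']);
        const res = await client.query('SELECT name FROM users WHERE id = $1', ['1']);
        console.assert(res.rows[0].name === 'Ada');
      } finally {
        await client.end();
        await container.stop();
      }
    }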
Yea. I think people underestimate this. Yesterday I was writing an Obsidian plugin using the latest and most powerful Gemini model, and I wanted it to make use of the new keychain in Obsidian to retrieve values for my plugin. Despite reading the docs first at my request, it still used a non-existent method (retrieveSecret) to get the individual secret value. When it ran into an error, instead of checking its assumptions, it assumed the method simply wasn't declared in the interface, so it wrote an obsidian.shim.ts file that declared a retrieveSecret interface. The plugin compiled but obviously failed, because no implementation of that method exists. When it finally understood it was supposed to use getSecret instead, it ended up updating the shim instead of getting rid of it entirely. Add that up over 1000s of sessions/changes (like the one Cursor has shared about letting an agent run until it generated 3M LOC for a browser) and it's likely those codebases will be polluted with tiny papercuts stemming from LLM hallucinations.
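To make the failure mode concrete, here's a hedged reconstruction in TypeScript. The actual Obsidian API shape is an assumption on my part; only the method names come from the session above:

    // Hypothetical shape of the store the plugin was handed.
    interface SecretStore {
      getSecret(key: string): Promise<string | undefined>; // the real method, per the session
    }

    // What the model's obsidian.shim.ts effectively did: merge in a
    // declaration so the hallucinated call compiles, even though
    // nothing anywhere implements it.
    interface SecretStore {
      retrieveSecret(key: string): Promise<string>; // hallucinated, never implemented
    }

    async function loadToken(store: SecretStore) {
      // Compiles fine, throws at runtime: retrieveSecret has no implementation.
      return store.retrieveSecret('my-plugin-token');
    }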
The problem with X is that so many people who have no verifiable expertise are super loud in shouting "$INDUSTRY is cooked!!" every time a new model releases. It's exhausting and untrue. The kind of video generation we see might nail realism but if you want to use it to create something meaningful which involves solving a ton of problems and making difficult choices in order to express an idea, you run into the walls of easy work pretty quickly. It's insulting then for professionals to see manga PFPs on X put some slop together and say "movie industry is cooked!". It betrays a lack of understanding of what it takes to make something good and it gives off a vibe of "the loud ones are just trying to force this objectively meh-by-default thing to happen".
The other day there was that dude loudly arguing about some code they wrote/converted even after a woman with significant expertise in the topic pointed out their errors.
Gen AI has its promise. But when you look at the lack of ethics from the industry, the cacophony of non-experts screaming "this time it's really doom", and the weariness/wariness that set in during the crypto cycle, it's natural that people are going to call it snake oil.
That said, I think the more accurate representation here is that HN as a whole is calling the hype snake oil. There's very little question anymore about the tools being capable of advanced things. But there is annoyance at proclamations that it's beyond what it really is at the moment: still an expertise+motivation multiplier for deterministic areas of work. It's not replacing that facet any time soon on its current trend (which could change wildly in 2026). Not until it starts training itself, I think. Could be famous last words.
I'd put more faith in HN's proclamations if it hadn't been widely wrong about AI in 2023, 2024, and now 2025. Watching the tone shift here has been fascinating. As the saying goes, the only thing moving faster than AI advances right now is the speed at which HN haters move the goalposts…
Mmm. People who make AI their entire personality, brag that other people are too stupid to see what they see, and insist that soon everyone will have to acknowledge the genius they're denying... that does not make me think "oh, wow, what have I missed in AI".
AI has raised the barrier to entry for all but the top, and it's threatening many people's livelihoods. It has significantly increased the cost of computer hardware and is projected to increase the cost of electricity. I can definitely see why there is a tone shift! I'm still rooting for AI in general. I would love to see the end of a lot of diseases; I don't think we humans can cure all disease on our own in any of our lifetimes. Of course, there are all sorts of dystopian consequences that may derive from AI fully comprehending biology. I'm going to continue being naive and hope for the best!
I initially felt a bit offended when I saw this. Then I thought about it and at the end of the day there's a decent amount of infrastructure that goes into displaying the build information, updating it, scanning for secrets and redacting, etc.
I don't know if it's worth the amount they are targeting, but it's definitely not zero either.
You would think the fat monthly per-seat license fee we also pay would be enough to cover the costs of *checks notes* reading some data from the DB and hosting JSON APIs and webpages.
Yeah, I think we’re seeing some fallout from how much developer infrastructure was built out during the era where VCs were subsidizing everything, similar to how a lot of younger people complained about delivery charges going up when they had to pay the full cost. Unfortunately, now a lot of the competition is gone so there isn’t much room to negotiate or try alternate pricing models.
Curious... Why does VPN access disruption suggest the breach may be deeper than initially disclosed?
My understanding is that this prevents anonymous access to servers, which would help during the investigation if any further unauthorized access showed up. But it doesn't confirm that unauthorized access continued. Just curious how you're thinking about this, though.
Time pressure during the Christmas holidays meant the original calendars were becoming too stressful to handle. I've seen several calendars switch to 12-consecutive-day or one-every-two-days challenges.
Yea. I can see what the parent is getting at. However, the linked PRs contain the employee's name, and their username is the same name mentioned in the article. So it would have been the same even if the author had just mentioned the username instead (which would be completely acceptable in all cases). Junior employee or not, it's clear they have the autonomy to check a PR for errors and fix it. So it's very much on them.
Welp. I wish I had read the comments first to discover that this is AI generated. On the other hand, I got to experience the content without bias.
I opted to give it a try instead of reading the comments, and the book is arranged in a super strange way: it discusses concepts that the majority of programmers would never be concerned with when starting out with a new language. Learning some of these concepts makes sense if you are reading a language's docs in order to work on the language itself. But if you want to learn how to use the language, something like:
> Choose between std.debug.print, unbuffered writers, and buffered stdout depending on the output channel and performance needs.
is absolutely never going to be something you dump into chapter 1. I skimmed a few chapters from there, and it's blocks of stuff thrown in randomly. The introduction to the if conditional throws in Zig Intermediate Representation with absolutely no explanation of what it is or why it's even being discussed.
Came here to comment that this is written pretty poorly or just targets a very niche audience, and now I discover it's slop. What a waste of time. The one thing AI was supposed to save.
Also, if you are on Google Workspace, then everything changes there too. Activating the Gemini CLI is a smile-while-crying-emoji kind of activity if you are trying to provide this to an entire organization [1]