Hacker News | lewdwig's comments

A language that is not yet 1.0 and has repeatedly changed its IO implementation in non-backwards-compatible ways is certainly a courageous choice for production code.

So, I'm noodling around with writing a borrow checker for Zig. You don't get to appreciate this working with Zig at a day-to-day level, but the internals of how the Zig compiler works are AMAZING. Also, the IO refactor will (I think) let me implement aliasing checking (alias xor mutable).

In my experience, migrating small-scale projects takes anywhere from minutes to single-digit hours.

The standard library is changing. The core language semantics - not so much. You can update from std.ArrayListUnmanaged to std.array_list.Aligned with two greps.
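
To make that concrete, here's a rough sketch of what the rename looks like in practice, assuming a Zig 0.15-era std (the exact names have shifted between releases, so treat the identifiers as illustrative rather than authoritative):

    const std = @import("std");

    pub fn main() !void {
        const allocator = std.heap.page_allocator;

        // Before: var list = std.ArrayListUnmanaged(u8){};
        // After: std.ArrayList(u8) is now the unmanaged list,
        // backed by std.array_list.Aligned(u8, null) underneath.
        var list: std.ArrayList(u8) = .empty;
        defer list.deinit(allocator);

        try list.append(allocator, 42);
        std.debug.print("{d}\n", .{list.items[0]});
    }

The call sites (append/deinit still taking an explicit allocator) don't change, which is why the migration really is mostly a grep-and-rename job.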


Right? People must really like the design choices in Zig to do that instead of choosing another language. It's very interesting just because of that.

It's certainly not a choice I would have made, but there's sufficient precedent for it now (TigerBeetle, Ghostty, etc) that I can understand it.

also Bun

also Roc

This one is far from prod-ready, however.

the upside is absolutely worth it

One thing that seems to mark most nerds is a tendency towards being utopian about tech in general but deeply sceptical of specific tech.

I don’t hate git either but you’ll meet very few people who will claim its UX is optimal. JJ’s interaction model is much simpler than git’s, and the difficulty I found is that the better you know git, the harder it is to unlearn all its quirks.


To Broadcom you’re not a customer, you’re a mark, a patsy, a stooge, a _victim_. Their aim is to establish exactly what they can get away with, how far they can abuse you before you’ll just walk away.


But this is where all/most “platforms” go. As the product offering flounders over time, your quality talent (engineering and business) boils off to other opportunities. Then the short term value extraction methodologies show up, and everyone looks on in horror as the platform is “destroyed” through “mismanaged” consumer relationships.

Working in agtech, I’ve always wondered if this isn’t just the disenfranchised farmer story.

Give a farmer 1,000 acres to farm, and if they’re playing the long game, they’ll intermix their high value crops with responsible crop rotations. Managed well, this business can go on indefinitely.

But tell them they have 5 years left to farm the ground and that the land will be of no value after that, and they’ll grow the most expensive crop they can every year, soil quality be damned. It makes the most sense from a value extraction point of view.

Broadcom seems to be the kind of farmer that buys up forsaken land and extracts as much value as possible before it finally fails.


I have noticed that LLMs are actually pretty decent at redteaming code, so I’ve made a habit of periodically getting them to do that for the code they generate. A good loop is: (a) generate code, (b) add test coverage for the code (to 70-80%), (c) redteam the code for possible performance/security concerns, (d) add regression tests for the issues uncovered and then fix the code.


The glaring thing most people seem to miss is that LLM-generated code is like a TOS, and unless you work in a more enterprise team setting you are not going to catch 90% of the issues...

If this had been used before the release behind the tea spill fiasco (to name only one), it would never have been a fiasco. Just saying..


I’m sure this’ll be misreported and wilfully misinterpreted given the current fractious state of the AI discourse. But the lawsuit was about piracy, not the copyright compliance of LLMs, and in any case they settled out of court, presumably admitting no wrongdoing, so conveniently no legal precedent is established either way.

I would not be surprised if investors made their last round of funding contingent on settling this matter out of court precisely to ensure no precedents are set.


TBH I’m surprised it’s taken them this long to change their mind on this, because I find it incredibly frustrating that current-gen agentic coding systems are incapable of actually learning anything from their interactions with me - especially when they make the same stupid mistakes over and over.


Okay, they're not going to be learning in real time. It's not like you're getting your data stolen and then getting something back for it - you're not. What you're talking about is context.

Data gathered for training still has to be used in training, i.e. a new model that, presumably, takes months to develop and train.

Not to mention your drop-in-the-bucket contribution will have next to no influence in the next model. It won't catch things specific to YOUR workflow, just common stuff across many users.


> Not to mention your drop-in-the-bucket contribution will have next to no influence in the next model. It won't catch things specific to YOUR workflow, just common stuff across many users.

I wonder about this. In the future, if I correct Claude when it makes fundamental mistakes about some topic like an exotic programming language, wouldn't those corrections be very valuable? It seems like it should consider the signal to noise ratio in these cases (where there are few external resources for it to mine) to be quite high and factor that in during its next training cycle.


They wouldn’t be able to learn much from interactions anyway.

The learning metric won’t be you; it will be some global shitty metric that will make the service mediocre over time.


Or get more value from the users with the same subscription price. I doubt they are giving any discounts.


It's actually pretty clever (albeit shitty/borderline evil): start off by saying you're different from the competitors because you care a lot about privacy and safety, and that's why you're charging higher prices than the rest. Then, once you have a solid user base, slowly turn up the heat, step by step, so you end up with higher prices yet the same benefits as the competitors.


With code, I’m much more interested in it being correct and good rather than creative or novel. I see it as my job to be the arbiter of taste, because the models are equally happy to create code I’d consider excellent and terrible on command.


There are nascent signs of emergent world models in current LLMs; the problem is that they decohere very quickly because they lack any kind of hierarchical long-term memory.

A lot of what the model knows about the structurally important parts of your code gets lost whenever the context gets compressed.

Solving this problem will mark the next big leap in agentic coding, I think.


I use Claude Code almost daily now, and I think I’d rather cut off my own arm than go without it, but I don’t delude myself: current-gen tools have significant limitations, and it is my job to manage those limitations.

So just like any other tool really.

I have discovered this week that Claude is really good at redteaming code (and specs, and ADRs, and test plans) - much better than most human devs, who don’t like doing it because it’s thankless work and they don’t want to be “mean” to colleagues by being overly critical.


Would you share with us what kind of job you do?

I keep seeing people saying how amazing it is to code with these things, and I keep failing at it. I suspect that they're better at some kinds of codebases than others.


> I suspect that they're better at some kinds of codebases than others.

Probably. My work's custom dev agent poops the bed on our front-end monorepo unless you're very careful about context, but then being careful about context is sort of the name of the game anyway...

I'm using them mainly for scaffolding out test boilerplate (but not actual tests - most of their output there is useless) and so on, or for component code structure based on how our codebase works. Basically a way more powerful templating tool, I guess.


Devops/SRE/Platform Engineering

Downside: lots of Python, and Python indentation causes havoc with a lot of agentic coding tools. RooCode in particular seems to mangle diffs all the time, irrespective of model.

