Hacker News | notnullorvoid's comments

Yup, it's a bad feeling hearing that lower performers are getting paid more than you.

The American way.

> Over the years, I've lost the ability to focus on writing code myself

If this weren't the case, how do you think it would affect your usage of coding agents?


I've always been more interested in the computer science and software engineering aspects. I did enjoy writing code occasionally, but overall, I was wishing I had some kind of neural implant to convert my thoughts into code. Coding agents are now good enough that I consider that dream realized, with the added benefit that I don't actually need any implant in my brain. :)

Interesting, I've found AI to be further from the goal of converting my thoughts to code than writing code myself.

English is so ambiguous that by the time I provide sufficient context in the prompt, it's taken more typing than writing the code would have. That's not even accounting for the output often needing further prompting to shape into what was intended, especially if I try to be light on initial context.

I like it for quick proof of concept stuff though, the details don't matter so much then.


I approach it the same way as helping my coworkers be productive. Most of the context is spent on the initial familiarization with the code, and I just double-check that it has the right understanding; there is minimal prompting on my side for this. The next step is to explain the problem I'm trying to solve, and for the simpler ones it gets what needs to happen 8/10 times. I don't need to be detailed, because it already knows the context. For the complex problems, I split them into small tasks myself and only ask it to do the small steps, small enough to fit into the first category. I feel like the worst outcomes happen when you specify the problem first and let it do its own research with that in "mind"; then it just overthinks and comes up with garbage.

Give it the problem first. Then have it generate the context. Make edits and iterate on the context. Then hit go. Finally, have it write down whatever it needs to for next time.

As far as I'm aware oxlint only supports plugins for non type aware rules, and type aware rules themselves aren't fully stable because it relies on a fork of tsgo.

That is correct: every rule with a custom parser (e.g. vue/svelte/astro templates), and every type-aware rule, can't be used as a JS plugin.

Type-aware rules are indeed not marked as stable, but they work like a charm. tsgolint is indeed tsgo + shims + some extra work, and that won't change soon, as tsgo won't have a JS API for a while.
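For context, enabling a type-aware rule would look something like the sketch below. This is an assumption based on oxlint's ESLint-style configuration; the file name and rule identifier are illustrative, so check the oxlint docs for the exact names:

```json
{
  "rules": {
    "typescript/no-floating-promises": "error"
  }
}
```

(Saved as `.oxlintrc.json`; type-aware rules additionally require oxlint's experimental type-aware mode, which is where tsgolint comes in.)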


Yeah, it's a shame that few people realize that running 3 (or more) different programs, each with its own parser and AST, is the bigger problem.

Not just because of perf (though the perf aspect is annoying), but because of how often the three will get out of sync and produce bizarre results.

The act of programming will look very similar. The community of programmers will be smaller, as more and more former programmers decide they don't like programming.

I don't see the utility of async on the GPU.

> Async splits the ecosystem. I see it as the biggest threat to Rust staying a useful tool.

Someone somewhere convinced you there is an async coloring problem. That person was wrong: async is an inherent property of some operations. Adding it as a type-level construct gives visibility to those inherent behaviors, and with that, more freedom in how you compose them.
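The point about type-level visibility can be sketched in TypeScript, where a `Promise<T>` in the signature marks an inherently asynchronous operation, and that visibility is exactly what lets the caller choose between sequential and concurrent composition (the function names here are illustrative, not from the thread):

```typescript
// An inherently async operation: the Promise in the signature makes that visible.
function fetchUser(id: number): Promise<string> {
  return new Promise((resolve) => setTimeout(() => resolve(`user-${id}`), 10));
}

async function main() {
  // Sequential composition: each await is an explicit suspension point.
  const a = await fetchUser(1);
  const b = await fetchUser(2);

  // Concurrent composition: the same operations run in parallel,
  // which is only safe to express because the type told us they were async.
  const [c, d] = await Promise.all([fetchUser(3), fetchUser(4)]);

  console.log(a, b, c, d);
}

main();
```

If `fetchUser` hid its asynchrony (as blocking calls do), the concurrent version could not be written without knowing its internals.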


It'd be interesting to see a setup where there's only async and you have to specify when you actually want to block on a result.

Flip the colouring problem on its head.
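JavaScript's Promise model is already close to this flipped world: every operation yields a promise immediately, and `await` is the explicit "block on a result" step, used only at the point where the value is needed. A small TypeScript sketch (names are illustrative):

```typescript
// Everything is async by default; calling it never blocks.
function compute(n: number): Promise<number> {
  return new Promise((resolve) => setTimeout(() => resolve(n * n), 10));
}

async function main() {
  // Kick off the work immediately; nothing waits yet.
  const pending = compute(7);

  // Unrelated work proceeds while compute runs in the background.
  const unrelated = 1 + 1;

  // Only here do we explicitly synchronize on the result.
  const result = await pending;
  console.log(unrelated, result); // 2 49
}

main();
```

The colouring inversion is visible in where the ceremony lives: starting work is free, and it's *waiting* that requires an explicit keyword.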


Further bloating the web spec with something that won't be used in a couple of years, if at all.

I used Tesseract v3 back in the day, in combination with some custom layout-parsing code. It ended up working quite well. Looking at many of the models coming out today, the lack of accuracy scares me.

There's no need to have software engineering be regulated. It'd be a restriction/deterrent at the wrong level.

In order to fix this we need the individuals in charge to be held legally accountable without hiding behind a corporation.

In the software industry management rarely ever listens to concerns brought up by engineering even if it's technical concerns.


Management not having to listen to engineers is the structural problem. How do managers know which of the concerns engineers bring up are actually relevant? How do engineers know which concerns have real-world consequences (without meeting an incredibly high burden of proof)?

Having regulation or standardisation is a step toward producing a common language to express these problems and have them be taken seriously.

Leadership gets a strong signal - ignoring engineers who surface regulated issues has large costs. The company might be sued, and executives are criminally liable (if discovered to have known about the violation).

Engineering gets the authority and liability to sign off on things - the equivalent of “chartership” in traditional engineering fields, with the same penalties. This gives them a strong personal reason to surface things.

It’s possible that this is harder for software engineering in its entirety, but there is definitely low-hanging fruit (password storage, security, etc.).
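Password storage is a good example of what such a baseline could mandate, because the right answer is well established. A minimal sketch using Node's built-in `scrypt`, with a per-user random salt and constant-time comparison (the 16-byte salt and 32-byte key length are illustrative defaults, not a vetted policy):

```typescript
import { randomBytes, scryptSync, timingSafeEqual } from "node:crypto";

// Hash a password with a per-user random salt; never store the plaintext.
function hashPassword(password: string): string {
  const salt = randomBytes(16);
  const hash = scryptSync(password, salt, 32);
  return `${salt.toString("hex")}:${hash.toString("hex")}`;
}

// Verify by re-deriving with the stored salt and comparing in constant time.
function verifyPassword(password: string, stored: string): boolean {
  const [saltHex, hashHex] = stored.split(":");
  const hash = scryptSync(password, Buffer.from(saltHex, "hex"), 32);
  return timingSafeEqual(hash, Buffer.from(hashHex, "hex"));
}

const record = hashPassword("hunter2");
console.log(verifyPassword("hunter2", record)); // true
console.log(verifyPassword("wrong", record));   // false
```

A regulation that says "use a memory-hard KDF with a unique salt" is cheap to check and rules out the plaintext and unsalted-MD5 storage that still shows up in breach reports.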


> In the software industry management rarely ever listens to concerns brought up by engineering even if it's technical concerns.

Yet they have to listen to a Chartered Accountant or a Chartered Engineer. Maybe it would be as much in the engineers' interest to have a professional body as it would be for the public.

