It’s not too much of an exaggeration to say that everything about using fork() instead of vfork() plus exec() is fundamentally broken in modern osdev without a whole stack of hacks patching individual issues one by one.
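For anyone who hasn't run into this firsthand, here's a minimal sketch of the two patterns, assuming a POSIX system and using Python's bindings to the underlying calls (the /bin/echo bits are just placeholders). The core issue is that fork() duplicates the entire parent image only for exec() to immediately discard it, which is exactly what posix_spawn-style interfaces avoid (they're often built on vfork()/clone() internally, though that's an implementation detail):

    import os

    # fork()+exec(): the child briefly holds a copy-on-write duplicate of the
    # entire parent address space, only to discard it at exec() time.
    pid = os.fork()
    if pid == 0:
        os.execv("/bin/echo", ["/bin/echo", "hello from fork+exec"])
    os.waitpid(pid, 0)

    # The spawn pattern: one call, no intermediate duplicate process image.
    pid = os.posix_spawn("/bin/echo",
                         ["/bin/echo", "hello from posix_spawn"],
                         os.environ)
    os.waitpid(pid, 0)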
Why? I’ve been using dein since it was released and haven’t needed to switch to anything else (though I’ve noticed momentum shifts quickly between the different options).
It is an unfortunate recycling of an existing regime that no doubt offends Stallman to his very core, but I wouldn't call it meaningless.
If you're in a company and need a model, which one do you think you're getting past compliance & legal - the one that says MIT or the one that says "non-commercial use only"?
I feel like this is mixing agendas. Is the goal freeing up /tmp more regularly (so you don’t inadvertently rely on it, to save space, etc.) or is the goal performance? I feel like with modern NVMe (or even just SATA SSDs) the argument for tmpfs out of the box is a hard one to make, and if you’re under special circumstances where it matters (e.g. you actually need RAM speeds or are running on an SD card or eMMC) then you would know to use a tmpfs yourself (a quick way to check what backs your /tmp is sketched below).
(Also, sorry but this article absolutely does not constitute a “deep dive” into anything.)
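If you're curious which camp your system is already in, here's a quick Linux-only sketch (an illustration, not anything from the article: it just reads /proc/mounts, and it relies on /tmp showing up as its own mount point when tmpfs-backed):

    import pathlib

    # Linux-only: report the filesystem type backing /tmp, if it's a mount point.
    for line in pathlib.Path("/proc/mounts").read_text().splitlines():
        device, mountpoint, fstype = line.split()[:3]
        if mountpoint == "/tmp":
            print(f"/tmp is {fstype} (device: {device})")
            break
    else:
        # No dedicated /tmp mount: it's part of the root filesystem (disk-backed).
        print("/tmp is part of the root filesystem")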
I actually don’t have a problem with the SSL changes as they specifically pertain to HTTP servers – it’s largely a solved problem with automated solutions compatible with all the major players on most fronts.
But certs in every other context have become nigh impossible, except in enterprise settings with your own CA and cert servers. That goes for everything from printers and network appliances to entirely non-HTTP applications like VPNs (StrongSwan and OpenVPN both support TLS with signed SSL certs, but place very different constraints on how those work in practice: what identities are supported, how or if wildcards work, etc.).
Very little attention has been paid to non-general-purpose and non-HTTP contexts as things currently stand.
I can't tell if it's a typo, but HTTP-01 contacts your webserver on :80 in order to retrieve a very, very, very specific ACME path, and it does not care at all what you do with your issued TLS certificate afterward, including what port you serve it on.
Also, I know firsthand that DNS-01 validation works perfectly fine too, no HTTP check required.
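To make the distinction concrete, here's a rough sketch of what each challenge type actually verifies, loosely approximating the CA's side of things (example.com and the token are placeholders; RFC 8555 has the real protocol):

    import subprocess
    import urllib.request

    domain = "example.com"   # placeholder
    token = "EXAMPLE_TOKEN"  # the CA supplies the real token per order

    # HTTP-01: the CA fetches one fixed well-known path on port 80. Where you
    # serve TLS afterward (443, 8443, wherever) is irrelevant to issuance.
    url = f"http://{domain}/.well-known/acme-challenge/{token}"
    print(urllib.request.urlopen(url).read())

    # DNS-01: the CA only looks for a TXT record; no webserver required at all.
    # (Shelling out to dig here just to keep this stdlib-only.)
    print(subprocess.run(
        ["dig", "+short", "TXT", f"_acme-challenge.{domain}"],
        capture_output=True, text=True,
    ).stdout)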
Granted, I've only been using Claude for a short time, but in my experience it tends to write code that matches the style of the code it's editing. Which is sort of a "no duh" thing in retrospect, but I hadn't considered it. That is to say, old crappy code I wrote long ago when I was dumber gets old, crappy, dumb code style suggestions from Claude. And newer, better-practices code that is very careful gets Claude to follow that style as well.
It wasn't something I considered at first, but it makes sense if you think about text prediction models, infilling, and training by reading code. The statistics push toward matching the style of what you are doing against similar things. You're not going to paint a photorealistic chunk into a hole in an impressionist painting, ya know?
So in my experience if you give it "code that avoids the common issues" that works like a style it will follow. But if you're working with a codebase that looks like it doesn't "avoid those common issues" I would expect it to follow suit and suggest code that you would expect from codebases that don't "avoid those common issues". If the input code looks like crappy code, I would expect it to statistically predict output code that looks like crappy code. And I'm not talking about formatting (formatting is for formatters), it's things like which functions and steps are used to accomplish whatever. That sort of thing. At least without some sort of specific prompting it's not going to jump streams.
Edit: one amusing thing you can do is ask Claude to predict attributes of the developers of the code and their priorities and development philosophy (i.e. ask Claude to write a README that includes these cultural things). I have a theory it gives you an idea about the overall codesmell Claude is assigning to the project.
Again, I am very new to these tools and have only used claude-code, because the command line interface and workflow didn't make me immediately run for the hills the way other things have. So I have no idea how other systems work, etc., because I immediately bounced off them in the past. My use of claude-code started as an "okay, fine, why not give these things the young guns can't shut up about a shot on the boring shit and maybe clear out some backlog" way to make chores I usually hate doing at least a little interesting, but I've expanded my use significantly after gaining experience with it. And I have noticed it behave very differently in different codebases; the above is how I currently interpret that.
Thanks for sharing; that sounds rather reasonable. But I was under the impression that this new "vibe coding" thing was where you start with a "clean slate" altogether (the LLM itself generates/picks the "initial state" in terms of idiomatic or not-so-idiomatic handling of whatever conditions, rather than copying it from existing code)?
I haven't tried any of that sort of thing yet... but I would expect the prompt to color expectations.
Overall "meta" commands seem to work much more effectively that I expected. I'm still getting used to it and letting it run more freely lately but there's some sort of a loop you can watch as it runs where it will propose code given logic that is dumb and makes you want to stop it and intervene... but on the next step it evaluates what it just wrote and rejects for the same reason I would have rejected it and then tries something else. It's somewhat interesting to watch.
If you asked a new hire "I need you to write XYZ stat!" vs "We care a lot about security, maintainability and best practices. Create a project that XYZ.", you would expect different products. At least that's how I am treating it.
Basically I would give it a sort of job description. And you can even do things like pick a project you like as a model and have it write a file describing development practices used in that project. Then in the new project ask it to refer to that file as guidance and design a plan for writing the program. And then let it implement that plan. That would probably give a good scaffold, but I haven't tried. It seems like how I would approach that right now as an experiment. It's all speculation but I can see how it might work.
Maybe I'll get there and try that, but at the moment I'm just doing things I have wanted to do forever but that represented massive amounts of my time that I couldn't justify. I'm still learning to trust it, and my projects are not large. Also, I am not primarily a programmer (I'm a physicist who builds integrations, new workflows, and tools for QC and data handling at a hospital).
LLMs don't "have taste" - they statistically model human preferences from training data, mapping features to popularity patterns without experiencing the music itself.
Indeed. I'm deeply skeptical of the proposition that LLMs can have taste at all. However, there's an even more fundamental question: is "taste" an objective thing in the first place? If not, then it's meaningless to say anyone's (or anything's) taste is "good" or "bad". The most you can say is how similar another's taste is to yours.
Exactly, I feel like at the very least, you need to have the model listen to the music/song and have it see if it “likes” it etc. I don’t think the OP did that unless I missed it? Also see my other comment.
Saying a non-multimodal LLM (like many of the ones in the article) can have "taste" in music is like asking a blind person to critique the pieces at the Louvre based on written descriptions of them.