Hacker News | zobzu's comments

China's stats reporting is worse than US stats: impossible to trust; it'll say everything and nothing every time.


No change in trend, though.

And cheaper gas is basically Trump's policies.

I think we should ban gas and let other countries take over (with gas). /s


No change in trend is part of the problem; no change, or acceleration, is exactly the whole problem...


Cheaper gas is a thing Trump says, but basically nothing the administration has done is leading to cheap gas. It's the inevitable result of demand declines and decades of domestic production capacity increases.

Interestingly, one policy Trump actually controls: he has cut the rate of adding stocks to the Strategic Petroleum Reserve in half. From Dec '24 to Dec '25 they added only 19 million barrels, compared to the 40 million barrels added in the prior year, despite Trump campaigning on filling the SPR "right to the top". The last, and only, administration to top off the SPR was Obama's.


Well, he has been looking like he's going to invade Venezuela for a little while now. If they do a Syria-esque takeover of the oil-producing regions, there could possibly be cheaper gas wherever that gas gets shipped.


Don't hold your breath. Those fields need investment.


Also IBM: we are fully out of the AI race, btw. Also IBM: we're just an offshoring company now anyway.

So yeah.



"now". They've been doing it since forever.


They don't wanna be Red Hat, but IMO it's their biggest mistake (I worked there 10 years).


Not ICE, but a bunch of similar data at scale for the US https://usdebtclock.org/


I read this a few times but there's no info.


Wrong. If you know nix then you know "leverages the unique way that Flox environments are rendered without performing a nix evaluation" is a very significant statement.


> leverages the unique way that Flox environments are rendered without performing a nix evaluation

I'm curious! and ignorant! help!

Is that via (centrally?) cached eval? or what? there's only so much room for magic in this arena.


Yeah, it's essentially cached eval, the key being where/how that eval is stored.

When you create a Flox environment, we evaluate the Nix expressions once and store the concrete result (i.e. exact store paths) in FloxHub. The k8s node just fetches that pre-rendered manifest and bind-mounts the packages with no evaluation step at pod startup.

It's like the difference between giving the node a recipe to interpret vs. giving it a shopping list of exact items. Faster, safer, and the node doesn't need to know how to cook (evaluate Nix). I don't know, there's a metaphor here somewhere, I'll find it.

Only so much room for magic, for sure, but tons of room for efficiency and optimization.


Correction: we don't eval when you create environments.

Our catalog continuously pre-evaluates nixpkgs in the background. 'flox install' just selects from pre-evaluated packages -- no eval needed, ever. The k8s node fetches the manifest and mounts the packages.

Eval is done once, centrally, continuously. So... even more pre-val'd, so to speak.
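A minimal sketch of that flow, under the caveat that the manifest format, file path, and store paths below are all invented for illustration (Flox's actual format is surely different):

```shell
# Hypothetical sketch: manifest shape and store paths are made up.

# What central, continuous evaluation produces: a pre-rendered
# manifest of concrete /nix/store paths, nothing left to evaluate.
cat > /tmp/flox-manifest.txt <<'EOF'
/nix/store/aaaa-curl-8.5.0
/nix/store/bbbb-jq-1.7
EOF

# What the k8s node does at pod startup: fetch the manifest and
# bind-mount each path. No `nix eval` happens anywhere on this path.
while read -r storepath; do
  echo "bind-mount $storepath"
done < /tmp/flox-manifest.txt
```

That's the shopping-list metaphor in practice: the node only ever sees concrete paths, so startup cost is fetch plus mount, never evaluation.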


Actually, we wrote this many years ago and left Mozilla, and nobody is really updating it other than adding new configs. It's not super useful anymore :)

At the time it made sense to us because you couldn't have good SSL configuration everywhere (it was not well supported), so we had trade-offs and created tiers of configs. TLS was barely coming out, so SSL was still the name of the game.

Nowadays, just use the latest TLS defaults and you're golden.


Thanks, I enjoyed reading it (though a bit lengthy).

What gets me personally is what you describe at https://github.com/little-book-of/c/blob/main/articles/zig-i... - Zig is made to feel easy and modern for people who don't know any better, and it does this well. But as soon as you actually need to do complex stuff, it gets in the way more so than C and its current environment/ecosystem will.

And to be fair, as much as I enjoyed writing C in my younger years, I only use C when I actually need C. And asm when I actually need asm. Most of my code now uses higher-level languages - this puts Zig into such a niche... it feels like Golang to me: the cool language that isn't really solving as much of a need as you'd think.


I don't think Zig is that much more complex than Golang, with a (currently) crappier standard library. The bonus being you leave no performance on the table. I wonder if it would work for devops, where both C++ and Rust fail.


And I want to clarify again, these are just personal notes written with some help from LLMs. They may contain mistakes, so please read them with curiosity, or feel free to skip them altogether.


I mean, if you embed Zig in a larger C++, Rust, or Python project, coordinating the build systems can be difficult. Zig prefers to manage the entire pipeline itself, so mixing it with other compilers and dependency managers can require workarounds. In my opinion, the only practical way to do this is by exposing C interfaces.
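For what it's worth, the C-interface route looks roughly like this. File and library names are invented for illustration, and it assumes a Zig and C toolchain on the path, so treat it as a sketch rather than a verified recipe:

```shell
# Sketch of exposing a C ABI from Zig and linking it from a C build.

cat > add.zig <<'EOF'
// `export` gives the function the C calling convention and an
// unmangled symbol name, so any C linker can find it.
export fn add(a: i32, b: i32) i32 {
    return a + b;
}
EOF

cat > main.c <<'EOF'
#include <stdio.h>
int add(int a, int b);   /* declared by hand, or via a generated header */
int main(void) { printf("%d\n", add(2, 3)); return 0; }
EOF

# Only attempt the build when the toolchains are present.
if command -v zig >/dev/null 2>&1 && command -v cc >/dev/null 2>&1; then
  zig build-lib add.zig        # emits a plain static library, libadd.a
  cc main.c -L. -ladd -o demo  # the C build links it like any C library
  ./demo
fi
```

The same static-library boundary is what you'd link from Rust or load from Python via ctypes/cffi, which is why the C ABI ends up being the practical seam between Zig and everything else.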


In some towns (like SF) folks have used Waymo thousands of times. It's everywhere, for a good reason: while it's not always faster, it's more consistent, pleasant, and safe.


IMO this is a context window issue. Humans are pretty good at memorizing super broad context without great accuracy. Sometimes our "recall" function doesn't even work right ("How do you say 'blah' in German again?"), so the more you specialize (say, 10k hours / mastery), the better you are at recalling a specific set of "skills", but perhaps not other skills.

On the other hand, LLMs have a programmatic context with consistent storage and the ability to have perfect recall; they just don't always generate the expected output in practice, as the cost to go through ALL context is prohibitive in terms of power and time.

Skills... or really just context insertion, is simply a way to prioritize their output generation manually. LLM "thinking mode" is the same, for what it's worth - it really is just reprioritizing context - so not "starting from scratch" per se.

When you start thinking about it that way, it makes sense - and it helps using these tools more effectively too.


I commented here already about deli-gator ( https://github.com/ryancnelson/deli-gator ) , but your summary nailed what I didn’t mention here before: Context.

I’d been re-teaching Claude to craft REST API calls with curl every morning for months before I realized that skills would let me delegate that to cheaper models, reuse cached-token queries, and save my context window for my actual problem-space CONTEXT.


>I’d been re-teaching Claude to craft Rest-api calls with curl every morning for months

what the fuck, there is absolutely no way this was cheaper or more productive than just learning to use curl and writing curl calls yourself. Curl isn't even hard! And if you learn to use it, you get WAY better at working with HTTP!

You're kneecapping yourself to expend more effort than it would take to just write the calls, helping to train a bot to do the job you should be doing


My interpretation of the parent comment was that they were loading specific curl calls into context so that Claude could properly exercise the endpoints after making changes.


He’s likely talking about Claude’s hook system that Anthropic created to provide better control over context.


I know how to use curl (I was a contributor before git existed)… watching Claude iterate to re-learn whether to try application/x-www-form-urlencoded or GET /?foo wastes SO MUCH time and fills your context with "how to curl" that you re-send over and over again until your context compacts.

You are bad at reading comprehension. My comment meant I can tell Claude "update jira with that test outcome in a comment", and Claude can eventually figure that out with just a key and curl, but that's way too low-level.

What I linked to literally explains that, with code and a blog post.
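For context, the kind of call being delegated looks roughly like this. Host, issue key, and credentials are placeholders; the endpoint and body shape follow Jira's public REST API for adding a comment:

```shell
# Placeholders -- substitute your own site, issue, and API token.
JIRA_HOST="https://example.atlassian.net"
ISSUE="PROJ-123"

# `echo` keeps this sketch side-effect free; drop it to actually send.
echo curl -sS -X POST \
  -u "me@example.com:$JIRA_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"body": "Test run passed on commit abc1234"}' \
  "$JIRA_HOST/rest/api/2/issue/$ISSUE/comment"
```

The point of a skill is that one known-good shape like this gets loaded once, instead of being rediscovered (headers, encodings, endpoint paths) in every session.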


> IMO this is a context window issue.

Not really. It's an issue of consequences. No matter how big or small the context window is, LLMs simply do not have the concept of goals and consequences. Thus, it's difficult for them to acquire dynamic and evolving "skills" like humans do.


There are ways to compensate for lack of “continual learning”, but recognizing that underlying missing piece is important.


Worth noting, even though it isn't critical to your argument, that LLMs do not have perfect recall. I go to great lengths to keep agentic tools from relying on memory, because they often get it subtly wrong.

