What's everyone's experience with modern PF in production? Also, not to start a holy war, but what do people think about modern PF vs nftables? I've only ever used nftables (and only in fairly simple scenarios) but I've always been curious about the PF side of the world.
I manage a pf.conf with about 400 rules across a dozen VLANs, I find it intuitive and even enjoyable to work on. It feels kinda like editing source code - there are some host, network, and port declarations at the top, a section for NAT and egress, then a section for each VLAN that contains the pass in/pass out rules.
I tail the pflog0 interface in a tmux session so I can keep an eye on pass/block, and also keep a handy function in my .profile to make it easy to edit the ruleset and reload:
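Roughly like this (paths, `doas` vs `sudo`, and the function name will vary by setup):

```
# watch pass/block decisions live (pflog0 is pf's logging interface)
tcpdump -n -e -ttt -i pflog0

# edit-check-reload helper; pfctl -nf parses the ruleset
# without loading it, so a syntax error aborts the reload
pfe() {
    doas ${EDITOR:-vi} /etc/pf.conf &&
    doas pfctl -nf /etc/pf.conf &&
    doas pfctl -f /etc/pf.conf
}
```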
In my experience, PF operates a LOT more like commercial firewalls in how you think about filtering and NAT.
In Linux, even with nftables you still have the concept of "chains", which goes all the way back to the ipchains days. IME this isn't a particularly helpful way of viewing things. With PF you can simply make your policy decisions on in or out and on which interface(s). Also I'm not sure I ever saw a useful application of why you'd apply a policy on the pre/post-routing chains that wasn't achievable elsewhere in PF and in a simpler way.
Also I've never been a fan of having a command that just inserted or deleted a policy instead of working from a configuration file. (nft "config" files are really just scripts that run the command successively.) I get why some folks would want that (it probably makes programmatic work a lot easier) but for me it was never a benefit.
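To illustrate what I mean (a rough sketch; table/chain names are arbitrary and `nft -f` effectively replays the file as commands):

```
# /etc/nftables.conf - loaded with `nft -f /etc/nftables.conf`
flush ruleset
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        tcp dport 22 accept
    }
}

# roughly equivalent imperative form:
#   nft add table inet filter
#   nft 'add chain inet filter input { type filter hook input priority 0; policy drop; }'
#   nft add rule inet filter input ct state established,related accept
#   nft add rule inet filter input tcp dport 22 accept
```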
Anyhow it's been a long time since I've had to do this kind of thing so maybe I'm out of touch on the details. Happy to hear about how I'm wrong lol.
I haven't used Linux as a gateway in years, so I can only compare pf to iptables. The two biggest differences are the way the rules are applied and the logging.
pf rules work a little backwards compared to iptables. A packet traverses the entire ruleset and the last rule to match wins. You can short-circuit this with a "quick" directive. It takes a bit of getting used to coming from iptables.
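A toy example (interface name hypothetical):

```
# last match wins: the blanket block below is overridden for port 22
block in on em0 all
pass in on em0 proto tcp to port 22

# "quick" short-circuits evaluation; no later rule can pass telnet
block in quick on em0 proto tcp to port 23
```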
The logging on pf doesn't integrate with syslog automatically like iptables does. You're expected to set up a logging system for your particular use case. There are several ways to do it, and for production you'd be doing it regardless, but for homelab setups it's an extra thing you need to worry about.
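For example, pflogd(8) captures logged packets to /var/log/pflog in pcap format, and the usual homelab trick for syslog integration is piping a live capture into logger (both are standard tools, but check your platform's flags):

```
# read back the binary pf log
tcpdump -n -e -ttt -r /var/log/pflog

# feed live pf log entries into syslog
tcpdump -n -e -ttt -l -i pflog0 | logger -t pf -p local0.info
```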
I prefer pf, but I don't recommend it to people new to firewalls.
It's fine if all you need is a packet filter, but in 2026 I question whether many production use cases can get away with just a packet filter.
As a host firewall, it's obviously fine; I assume your question is about using pf as a network firewall. Given the threat landscape, you usually want threat protection. At the very least that means close-to-real-time updates from reputation lists. You can script that with pf, but it's not fun. Really, you want protocol dissection and, quite possibly, the ability to decrypt on the box and do payload analysis. Just doing packet filtering doesn't buy you all that much these days, and anything production that requires compliance, or that you genuinely care about, should be behind what you might also call IPS or layer 7 firewall capabilities.
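For what it's worth, the reputation-list scripting usually looks something like this (URL and file names are placeholders; the table must be declared in pf.conf, e.g. `table <badhosts> persist file "/etc/pf.badhosts"`):

```
# fetch a blocklist and swap it into a pf table atomically; run from cron
ftp -o /etc/pf.badhosts.new https://example.com/blocklist.txt &&
    mv /etc/pf.badhosts.new /etc/pf.badhosts &&
    pfctl -t badhosts -T replace -f /etc/pf.badhosts
```

Workable, but as noted, you're building and babysitting that pipeline yourself.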
pf doesn't do any of that. You don't have to use Palo Alto or Cisco for this, either.
If all you need is packet filtering, it's a good option, though.
I'm just glad we don't have to deal with iptables anymore. That said, due to iptables -A crap being embedded in countless tutorials and LLM FFN-head weights, we'll end up needing to keep it fresh in mind for decades to come.
Their BDFL thinks BC breaks are great ("we'll be in a better place", I remember him saying) and has blessed breaking pf multiple times by changing the rule syntax, so prior versions of this book are suddenly obsolete along with countless tutorials, forum posts, etc.
This is one thing M$ gets right, in business environments you don’t do that. I wouldn’t use pf for anything outside a home lab.
Can you add an email to your profile, so I can reach out to learn more? I'm really into air quality and have been trying to improve conditions in my apartment. You mentioned that
> Generally learn about diffusion in wall construction materials and figure out where organic material is used in your house. If organic material is next to something that limits diffusion (plastic, foam, metal, concrete, cement, paint) it is a possible point of water condensation and mold growth.
which is super interesting — I've found a couple of electrical sockets in my apartment which have a very strange smell, similar to soil/mold (I've confirmed that with other people, just to reduce the chance that I'm crazy). I'm still trying to investigate/fix the issue, and it seems that you know more about that, would love to learn from you.
Thanks for sharing your thoughts, very interesting!
Thanks for your kind words. I've added mail to profile, feel free to reach out.
If you are into air quality monitoring you might like homeassistant either with DIY sensors based on esphome (quite easy if you like very basic tinkering with low voltage) or with some off-the-shelf IOT products. If you just want to have a reliable CO2 sensor I can recommend the aranet4, but unfortunately those are quite expensive.
I had some electrical sockets which were super corroded from the humidity, so that the copper wire turned black even though the plastic wrap of the cable was still on it. The humidity must have moved up the cable for ~10cm. The mold damage that I found a year later was at the same wall, but I didn't mentally connect these two things at the time.
Re AIQ, I've actually built a couple of devices myself (using different sensors, plantower being the most popular one, but I've played with sensiron and others as well) but I've mostly focused on the PM monitoring.
The sockets that have strange "smell" are actually on the (inside) wall that is the building boundary (i.e. not a wall with a neighbour — these sockets don't "smell"). Still, it's a bit shocking to me that this could happen. Do you know how the humidity "got" onto your wall? How were you able to find out? I'm pretty early in my mini "investigation".
Yes, it might've been lost in translation but my socket also was located on the inside of an exterior wall of the building. So one side was the room, the other side was outside. If all your problematic outlets are located like this, then it might be a condensation/insulation problem.
Obviously you should rule out a leaking pipe, especially if someone created a slow leak by putting a nail into a wastewater pipe, and also rule out damage to the outside of the wall where rain could come in.
Maybe you can find out if there was a change to the exterior walls after the house was originally built, for example someone insulating the building by putting foam mats on the exterior walls during the most recent "renovation", or putting insulation wallpaper on the inside of the exterior walls. When houses are originally built, normally experts ensure with calculations that no condensation problems will happen within exterior walls.
But after many decades people think they are clever by putting additional insulation on the exterior walls in order to save some money, or to simply change the style of the building. In the worst case, additional insulation will move the dew point towards the inside of the wall, and then condensation of warm+humid indoor air will happen within your exterior wall. If it is a wooden building, like is common in the US, this can create a mold problem. But it can also be a problem for stone buildings like we have here in Germany, if a wallpaper or wall paint is used that prevents humidity that is trapped within the stone wall from evaporating.
Once you know what materials were used for your exterior wall, you can use a very nice calculator [1] that will show you if the wall has a condensation problem or not. For this you need thickness and material for every single layer of the outside wall.
Curious about the deal value/price — any clues whether it was just to make existing investors even (so say up to $30M) or are we talking some multiple? But if it's a multiple, even 2x sounds a bit crazy.
One option is that the current Bun shareholders didn't see a profitable future, so they didn't even care about being made even, and a return of the remaining cash was adequate.
Another option is that this was an equity deal where Bun shareholders believe there is still a large multiple worth of potential upside in the current Anthropic valuation.
i don’t get it either - bun being the foundation of tons of AI tools is like a best possible outcome, what were they hoping for when they raised the money? Or is this just an admission of “hey, that was silly, we need to land this however we can”? Or do they share major investors and therefore this is just a consolidation? (Edit: indeed, KP did invest $100M in Anthropic this year. I’m also confused - the article states Bun raised $26M but the KP seed round was $7M; did they do the A too but unannounced? Notably, the seed was summer 2022 and ChatGPT was Nov 30, so the world is different, did the hypothesis change?)
Would it make sense to have a similar feature in Codex CLI? I often do "spec-driven development", which is basically a loop of:
research -> implementation plan -> actual implementation (based on research + plan) -> validation
I have multiple subagents that I use for each phase that (based on subjective judgement) improve the output quality (vs keeping everything, every tool use etc. in the "main" context window).
Codex CLI is great and I use it often but I'd like to have more of these convenient features for managing context from CC. I'm super happy that compaction is now available, hopefully we'll get more features for managing context.
I didn't know about these ads, thanks for sharing! Can't imagine how people reacted to that when they aired — the things they described sound so "normal" today, I wonder if it was seen as far fetched, crazy or actually expected.
I was in my late teens at the time. My memory is that I felt like the tech was definitely going to happen in some form, but I rolled my eyes heavily at the idea that AT&T was going to be the company to make it happen.
If you’re unfamiliar, the phone connectivity situation in the 80s and 90s was messy and piecemeal. AT&T had been broken up in 1982 (see https://www.historyfactory.com/insights/this-month-in-busine...), and most people had a local phone provider and AT&T was the default long-distance provider. MCI and Sprint were becoming real competition for AT&T at the time of these commercials.
Anyway, in 1993 AT&T was still the crusty old monopoly in most people’s minds, and the idea that they were going to be the company to bring any of these ideas to the market was laughable. So the commercials were basically an image play. The only thing most people bought from AT&T was long distance service, and the main threat was customers leaving for MCI and Sprint. The ads were memorable for sure, but I don’t think they blew anyone’s mind or made anyone stay with AT&T.
We’re the same age, and I had exactly the same reaction.
AT&T and the baby bells were widely loathed (man I hated Ameritech…), so the idea they would extend their tentacles in this way was the main thing I reacted to. The technology seemed straightforwardly likely with Dennard scaling in full swing.
I thought it would be banks that owned the customer relationship, not telcos or Apple (or non-existent Google), but the tech was just… assume miniaturization’s plateau isn’t coming for a few decades.
In these commercials, it wasn't the technology itself but the ease of access and visualized integration of these technologies into the commoners' everyday lives that was the new idea.
Solid post, thanks for sharing. Zitron occupies his own echo chamber. I've seen some people share links to his articles with a smirk as a "proof" of how "bullshit LLMs are" — and I know for a fact that they have no understanding of LLMs or how to evaluate limitations, saying nothing about unit economics. Sadly, I don't think it's possible to reason with them.
To be clear, I do expect that the bubble will burst at some point (my bet is 2028/2029) — but that's due to dynamics between markets and new tech. The tech itself is solid, even in the current form — but when there's a lot of money to make you tend to observe repeatable social patterns that often lead to overvaluing of the stuff in question.
Does anyone know how skills relate to subagents? Seems that subagents have more capabilities (e.g. can access the internet) but seems that there's a lot of overlap.
I asked Claude and this is what it answered:
Skills = Instructions + resources for the current Claude instance (shared context)
Subagents = Separate AI instances with isolated contexts that can work in parallel (different context windows)
Skills make Claude better at specific tasks. Subagents are like having multiple specialized Claudes working simultaneously on different aspects of a problem.
I imagine we can probably compose them, e.g. invoke subagents (to keep separate context) which could use some skills to in the end summarize the findings/provide output, without "polluting" the main context window.
How this reads to me is that a skill is "just" a bundle of prompts, scripts, and files that can be read into context as a unit.
Having a sub-agent "execute" a skill makes a lot of sense from a context management perspective, but I think the way to think about it is that a sub-agent is an "execution-level" construct, whereas a skill is a "data-level" construct.
Skills can also contain scripts that can be executed in a VM. The Anthropic engineering blog mentions that you can specify in the markdown instructions whether the script should be executed or read into context. One of their examples is a script to extract properties from a PDF file.
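From what I remember of the docs, the on-disk shape is roughly this (a sketch; file and field names should be double-checked against Anthropic's documentation before relying on them):

```
my-skill/
├── SKILL.md        # instructions, plus frontmatter (name, description)
└── scripts/
    └── extract.py  # referenced from SKILL.md; can be run or read into context
```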
Yeah, that's my main bottleneck too. Constantly at 90%+ RAM utilization with my 64GiB (VMs, IDEs etc.). Hoping to go with at least 128GiB (or more) once M5 Max is released.