Now we need to get an enterprise-grade switch - I doubt Cisco would add MACsec to SOHO gear. Along with enterprise-grade intercoms, cameras, doorbells...
And UniFi, beloved by many, is out of the question - they still can't bake in IPv6 support.
So it looks like it's feasible, but the cost would be steep.
I'm well familiar with MACsec. We use it between datacenters and for AWS Direct Connect; it's the de facto standard for this kind of thing. I've even worked on hardware that provided MACsec support.
A couple of years ago I tried to use it inside the datacenter during a FedRAMP implementation. It crashed and burned for a couple of reasons:
- Linux wpa_supplicant was crashing during session establishment
- the switch had a limit on the number of MACsec sessions per port
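For what it's worth, the Linux side doesn't strictly need wpa_supplicant: if you skip MKA and accept static keys, iproute2 can drive the kernel's MACsec support directly. A rough sketch (interface name, peer MAC, and keys are all placeholders):

    # create a MACsec device on top of eth0, encryption on
    ip link add link eth0 macsec0 type macsec encrypt on
    # outbound security association with a static 128-bit key
    ip macsec add macsec0 tx sa 0 pn 1 on key 01 81818181818181818181818181818181
    # inbound channel + SA for the peer
    ip macsec add macsec0 rx port 1 address 00:11:22:33:44:55
    ip macsec add macsec0 rx port 1 address 00:11:22:33:44:55 sa 0 pn 1 on key 02 82828282828282828282828282828282
    ip link set macsec0 up

Static keys obviously never rotate, which is exactly what MKA (and hence wpa_supplicant) is for, so this only sidesteps the first problem, not the second.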
Something about that headline really irks me. I think GitHub is an amazing place for people to share code, and I also think it’s really nice of them to do this.
But the maintainers aren’t “their” maintainers. They are maintainers using GitHub for their projects.
Probably just me overreacting; I just thought I’d mention it.
They're referring to maintainers of projects that GitHub relies on for their own work, so calling them "our maintainers" isn't much of a linguistic stretch.
Without having looked into it too deeply, I feel that they are somewhat “cheating” by using a superserver to launch a new process per connection, thus letting the OS handle the dynamic allocation needed for each connection.
Still a pretty impressive project. Would be fun to take a deeper look at it at some point.
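To make the superserver point concrete, here's a hypothetical inetd-style handler in C: the superserver accepts the TCP connection and hands it to a fresh process with the socket wired to stdin/stdout, so every per-connection allocation is reclaimed by the OS when the process exits. This is a sketch of the general technique, not the project's actual code:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Launched by inetd for each connection; the connected socket
     * is already attached to stdin/stdout. */
    int main(void) {
        char line[1024];
        while (fgets(line, sizeof line, stdin)) {
            char *copy = strdup(line);  /* per-request heap allocation */
            if (!copy) return 1;
            fputs(copy, stdout);        /* echo back to the client */
            fflush(stdout);
            free(copy);                 /* optional: exit frees it all anyway */
        }
        return 0;                       /* process dies, OS reclaims everything */
    }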
> […] thus letting the OS handle the dynamic allocation needed for each connection.
This is what PHK did when designing Varnish (IIRC): instead of dealing with lots of files on its own (like Squid), just create one big file, mmap() it, and let the OS's virtual memory system do the work.
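If I remember the technique right, the core of it is small enough to sketch in C; the file name and size here are made up:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        const size_t size = 1UL << 30;  /* 1 GiB backing store */
        int fd = open("cache.bin", O_RDWR | O_CREAT, 0600);
        if (fd < 0 || ftruncate(fd, size) < 0) { perror("setup"); return 1; }

        /* Map the file and treat it as ordinary memory; the kernel's VM
         * system decides what stays resident and what gets paged out. */
        char *cache = mmap(NULL, size, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
        if (cache == MAP_FAILED) { perror("mmap"); return 1; }

        cache[0] = 'x';                 /* touching a page faults it in */
        munmap(cache, size);
        close(fd);
        return 0;
    }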
I think the original point still stands. When the application tries to handle memory pressure itself by writing data structures to disk, it will hit the case where the kernel has already paged that memory out and has to page it back in, only for the application to write it to disk and free the memory afterwards.
It makes sense though - having no memory management simplifies the codebase. Letting the host deal with it instead means you get the niceties of process-level isolation with less complexity. More eyes are on the OS-level code than would ever be on this project. It seems very clever to me.
I have one of the full-size Ploopy trackballs, and it feels really good in the hand. It is 3D-printed, and it shows, but the texture feels pretty nice in use.
I struggled a bit with accuracy at first, but since lowering the DPI to around 500 it’s become much more usable for me.
It's worth knowing that the watch also takes an HRV reading whenever you use the Breathe app, so if you want consistent HRV readings, the easiest way is to run Breathe once a day (for example, right when you wake up).
> Could I have done all that in a single shell, then had it automatically cleaned up when I was done?
Yes, that is a pretty standard workflow for most Nix users. You either set up a shell.nix for your project with all of its dependencies (a minimal example is below), or, if you need a certain tool just once, you run for example ‘nix-shell -p iotop’ to enter a shell where iotop is on your PATH.
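For the project case, a minimal shell.nix is only a few lines; the packages here are just examples:

    # shell.nix
    { pkgs ? import <nixpkgs> {} }:
    pkgs.mkShell {
      buildInputs = [ pkgs.iotop pkgs.git ];
    }

Running nix-shell in that directory drops you into an environment with those tools on the PATH, and nothing lingers once you exit.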