Projects
Console screen reader infrastructure
Vessel - Integrated Application Containers for FreeBSD
Enable the NFS server to run in a vnet prison
Pytest support for the FreeBSD testing framework
Userland
Base System OpenSSH Update
Kernel
Enabling Snapshots on Filesystems Using Journaled Soft Updates
Wireless updates
Netlink on FreeBSD
Adding basic CTF support to ddb
Architectures
CheriBSD 22.12 release
FreeBSD/riscv64 Improvements
Go on FreeBSD riscv64
FreeBSD/ARM64 on Xen
Cloud
FreeBSD on Microsoft HyperV and Azure
FreeBSD as a Tier 1 cloud-init Platform
OpenStack on FreeBSD
Documentation
Documentation Engineering Team
FreeBSD Presentations and Papers
Ports
FreshPorts - help wanted
PortsDB: Program that imports the ports tree into an SQLite database
KDE on FreeBSD
Xfce on FreeBSD
Pantheon desktop on FreeBSD
Budgie desktop on FreeBSD
GCC on FreeBSD
Another milestone for biology ports
Third Party Projects
Containers and FreeBSD: Pot, Potluck and Potman
That is the exact opposite of what it says. The US government has actually mandated IPv6-only networks in the next few years.
>a. At least 20% of IP-enabled assets on Federal networks are IPv6-only by the end of FY 2023;
>b. At least 50% of IP-enabled assets on Federal networks are IPv6-only by the end of FY 2024;
>c. At least 80% of IP-enabled assets on Federal networks are IPv6-only by the end of FY 2025.
Aren't these comments getting a bit old at this point? Running dual-stack should not be any more difficult than just running IPv4. There is a plethora of automated deployment tools, and I'd hardly think people are DHCP'ing addresses to their servers. You don't have to use SLAAC; you can statically assign addresses just like IPv4. Even for dual-stacked devices, IPv6 addresses obtained via RA can be traced back to their IPv4 DHCP/BOOTP requests.
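For what it's worth, statically assigning a v6 address takes about as much effort as v4. A minimal sketch for a FreeBSD host (the interface name and prefix are made up for illustration):

    # /etc/rc.conf
    ifconfig_em0_ipv6="inet6 2001:db8:10::25 prefixlen 64"
    ipv6_defaultrouter="2001:db8:10::1"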
I'm making the assumption here that anyone concerned about their network attack surface is actively capturing network or netflow data, in which case tools like openargus[1] or Arkime[2] make all of this collectable/searchable. Additionally, most network devices support mirror/monitor ports to offload data if you aren't working at a scale that needs dedicated taps/aggregators.
Realistically though, what information can you glean from a host's IPv6 address that wouldn't already be part of WHOIS? With IPv4 you already know there are only three RFC 1918 reserved ranges. Anyone can use them as they see fit, so seeing a 10/8 address in an email header doesn't automatically mean the company is huge; it's just what they picked. Myself, I've just never really bought into the idea that "dns naming" or discovering private address ranges gives anything away. With existing NAT, device tracking has moved on to more unique features such as browser, screen size, etc., so IP-address tracking is probably not that accurate anyway.
> A transition from IPV4 to IPV6 creates a new per device tracking capability that leaks internal network structure.
I doubt it. Your load balancers will be the only publicly addressable hosts anyway, and your IPv4 load balancers will also be "leaking" IP addresses.
Clients that aren't misconfigured will use random IPv6 addresses that rotate. The usual default is once per day, but that's a mere preference; you can make your computer take a new IP every minute if you want.
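On FreeBSD, for example, that rotation is a couple of sysctls away (the values here are only illustrative; OpenBSD and Linux have equivalent knobs):

    sysctl net.inet6.ip6.use_tempaddr=1     # generate RFC 4941 temporary addresses
    sysctl net.inet6.ip6.prefer_tempaddr=1  # prefer them for outgoing connections
    sysctl net.inet6.ip6.temppltime=3600    # prefer a fresh address every hour instead of daily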
They do feel a bit old, especially considering that is not the "TL;DR" of the paper. The paper makes no statement on whether or not it is a good idea to use IPv6, only that the US Government is transitioning, plus some guidelines on how to do that.
You are getting downvoted, but IPv6 was ratified in 1998. The sunk cost fallacy is real here. At what point or threshold should there be a proposal for a simple address length extension of IPv4? Even cloud providers, who have an army of sysadmins and netadmins, don't support v6 in private networks.
Let's be very honest here: does anyone have a good reason to believe another 25 years would mean IPv6 would displace IPv4, or even solve the address shortage, when CGNAT and other workarounds are profitable to network vendors?
My controversial solution is to stop using numbers for addressing at layer 3. A new IP protocol should have hierarchical domain-name addressing. For google.com, .com would be the top domain; you would have routes for each TLD, with non-ISPs default-routing TLDs like .com, while ISP networks would resolve the route for .google under the .com routing table, and so on. Upper layers would be oblivious, except that you have less code now. On LANs you can create whatever domain hierarchy works for you, so long as the TLD is part of a predefined list. TLDs would have a fixed maximum length of 128 bits for routing performance and such. PKI/TLS would work just fine, except now you have an extra layer of security: ISP routing tables would also have to route to the wrong AS, and they could implement source-route (customer1244.telecast.isp) validation to make MITM only slightly harder and address-spoofing DDoS impossible. So forget about numbers; ASCII is also numbers. You are already doing this with v6 and 2600:: and other prefixes. As for layer-3 translation, I have an even more controversial idea that will also solve Wi-Fi security and LAN-based MITMs for good, but that's for another comment.
While this is true, World IPv6 Launch Day in 2012 is the date most people point to for earnest IPv6 deployment. IPv6 was also not completely ratified until 2017.
> At what point or threshold should there be a proposal for a simple address length extension of IPv4?
If you pass an IPv4v2 packet, it will not be routed. You'll need to replace all networking equipment to support IPv4v2... which is what we've done/are currently doing w.r.t. IPv6. The engineers who wrote the spec were very much aware of how much of a "we've got one shot at this" situation it was.
> another 25 years would mean IPv6 would displace IPv4
We're at over 50% deployment in the US. Again, it's closer to 10 years.
The IETF has a two-level standards system consisting of "Proposed Standard" and "Internet Standard". IPv6 was first published as "Proposed Standard" in 1998 and finally transitioned to Internet Standard in 2017. Although officially Proposed Standards are supposed to be treated as "immature specifications", as a practical matter, people routinely deploy on them. Whether an RFC is advanced to Internet Standard is less a question of whether it is mature than whether the editors and/or WG bother to advance it. Here are a number of examples of widely deployed protocols that never advanced beyond Proposed (1) all versions of TLS (2) HTTP/2 (3) SIP (4) QUIC.
I think choosing 2012 as your start date is pretty generous. Proponents of IPv6 were telling people to start deploying long before that. In fact, the IETF sunsetv4 WG, dedicated to sunsetting IPv4, was formed in 2012 several months before World IPv6 launch day. Arguably, World IPv6 Launch Day was a reaction to the failure of v6 to get large-scale organic deployment 12ish years in.
> If you pass an IPv4v2 packet, it will not be routed. You'll need to replace all networking equipment to support IPv4v2... which is what we've done/are currently doing w.r.t. IPv6
That was never the difficult part. Most core routers and expensive gear supported IPv6 many years ago.
> We're at over 50% deployment in the US. Again, it's closer to 10 years.
That means almost nothing. Even if you have 100% deployment, v6 is more expensive to maintain for server admins, developers, and consumers alike, especially in the not-so-rich countries. It just adds more maintenance cost; it isn't economically practical to expect it to hit critical mass so that everyone stops writing v4-specific code and config. IPv4v2 or whatever ends up being a good solution will be economically viable by requiring the smallest change from end users and producers. V6 was developed by a committee of network engineers that only saw things from a network operator and vendor perspective. The lesson from the sunk cost fallacy is that existing investment cannot be used to justify continued investment, and in this case the problem of v4 shortage has been addressed by other means, in a way that will keep it alive for decades more.
In my opinion, a solution that requires only a firmware update, works with existing ASICs, and is economically viable is possible, but the discussion about that isn't even happening. Billions will be wasted on the hope that decades from now IPv6 can stand on its own.
Yes, the base idea is not that new. For years I have stored every Go-based application I use as a small (few-kB) source-code checkout only, no binary at all. At runtime a wrapper compiles a randomized, individual, one-time, temporary, unique binary via garble [0].
The compiler (go) is part of a static, read-only (compressed, in-memory) root filesystem. Everything is built on an air-gapped build server, touching only signed/verified/reviewed code from offline git mirror snapshots. Go needs no shared libs; everything is static. The resulting runtime-only binaries are completely unique/randomized and dependency-free, straight from (signed) source code.
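A minimal sketch of such a wrapper, assuming garble is installed (the paths, the app name, and the exact garble flags are illustrative and vary by version):

    #!/bin/sh
    set -eu
    SRC=/ro/src/myapp                   # read-only, signed source checkout
    BIN=$(mktemp /tmp/myapp.XXXXXXXX)   # one-time, throwaway binary
    (cd "$SRC" && garble -seed=random build -o "$BIN" .)
    exec "$BIN" "$@"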
Ah, so that will be missing some features of ASLR. Specifically, you can't do this on a read-only root, and it doesn't randomise the stack location as far as I can tell?
I think I've got a better idea now. So openbsd has ASLR which affects data, code/library, and stack positions. Then this solution works on top of it by reordering symbols within the code.
One thing I'm still not sure about is whether the kernel could theoretically do the same reordering at load time using relocatable symbols.
The assembler lays out the code within the sections and generally it's not changed after that (except for targets that do linker relaxation). However, with -ffunction-sections the compiler puts each function in its own section, which can then be independently relocated.
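Easy to see for yourself with a throwaway file (names are illustrative; readelf comes from binutils, objdump -h works too):

    printf 'int foo(void){return 1;}\nint bar(void){return foo()+1;}\n' > demo.c
    cc -c demo.c -o plain.o
    cc -c -ffunction-sections demo.c -o split.o
    readelf -S plain.o | grep '\.text'   # a single .text section
    readelf -S split.o | grep '\.text'   # .text.foo and .text.bar, independently relocatable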
If each function is in its own section, then all function calls would need to be indirected through the PLT/GOT, even function calls within the same translation unit? Ouch.
No - the linker is there to resolve references among sections and can do so without PLT/GOT indirection when creating things like static archives.
There may be a code-size cost on some architectures: since the call destination can be relocated far from the call site, the assembler will need to make sure it allocates enough space to reach the call target instead of using a small PC-relative relocation.
The kernel needs a bit more information than that, since chunks of code can refer to each other, and if you rearrange them those references break, because they're typically emitted as relative offsets.
Please don't throw out the baby with the bathwater.
Fully reproducible builds provide great assurance against supply chain attacks. But 100% reproducibility is in some cases a bit too much. What matters is whether the artifact can be easily proven to be functionally identical to the canonical one.
So I am 100% for a fully predictable sshd random-relink kit, producing unpredictable sshd binaries, but only as long as there are instructions for how to check that the sshd binary that allegedly came from it indeed could have come from it, and was not quietly replaced by some malicious entity.
> So I am 100% for a fully predictable sshd random-relink kit, producing unpredictable sshd binaries, but only as long as there are instructions for how to check that the sshd binary that allegedly came from it indeed could have come from it, and was not quietly replaced by some malicious entity.
You can easily verify the integrity of the object files that are used in the random relinking - they are included in the binary distribution, and are necessary to perform the relinking.
The debate over static vs. dynamic linking is still going on, and a very strong argument against static linking has always been that upgrading vulnerable libraries is made difficult. But think of it: package managers already hold the metadata of what links to what; object files can be distributed just as easily as shared objects; the last necessary step is to move the actual linking step from the runtime loader to the package manager.
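As a sketch of what that could look like (everything here is hypothetical: the helper, the hook, the paths), after a library update the package manager would relink each consumer from its shipped object files instead of swapping in a new .so:

    # hypothetical post-upgrade hook for a statically linked 'libfoo'
    for pkg in $(pkg_consumers_of libfoo); do     # hypothetical query helper
        cd "/var/db/relink/$pkg"                  # object files shipped with $pkg
        cc -o "/usr/local/bin/$pkg" ./*.o /usr/local/lib/libfoo.a
    done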
On OpenBSD all static system binaries are compiled as static PIE, so they already benefit from ASLR. The issue, IIUC, is that ASLR only randomizes entire ELF sections relative to each other. In any executable or library, whether static or dynamic, the code is placed into one giant .text section, so the relative offsets remain static. In a dynamic executable all library dependencies are loaded separately, so at least each section of each library gets a unique base address. A leak that exposes the address of a function only leaks the address of other functions in that library, not every function in the process. But in a static executable all those libraries are also placed into the same .text section as the main program code, so a leak of any function address leaks all function addresses.
In theory all functions, or more realistically groups of functions spanning page-size increments, could be dynamically located. The obvious way to achieve that would be to have multiple .text sections within a main executable or library. But off-hand I don't know if that's actually supported by ELF, or if so whether the standard tool chains and environments could easily support it.
The ELF spec certainly allows for multiple .text sections, and one can also use totally custom sections with the correct attribute too.
Any linker that could not handle multiple identically named sections is simply buggy. That said, it is normal for a linker to prefer to output only a single section of each name, but it is not difficult to get a linker to output multiple .text-equivalent sections, especially if you give them distinct names.
However, sections are not really what you want. PT_LOAD segments are, since those represent regions that get mmap'd contiguously. One can certainly put different executable sections into different segments.
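Quick way to see that on a live system (readelf from binutils; any ELF binary will do):

    # each LOAD line is one contiguous region that gets mmap'd at load time
    readelf -lW /bin/sh | grep LOAD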
I'm not 100% certain about how it works on OpenBSD, but on Linux, neither the kernel loader nor the loader embedded in the dynamic linker randomizes the segments independently. The problem is that for dynamically linked code, the .text needs to be able to reference the GOT and PLT via relative addresses, so those segments must be loaded at a known distance relative to the code.
For simple static PIE executables this should not be needed [1], however if you start introducing multiple chunks of code loaded at random addresses again, then you need to reintroduce similar concepts, as you cannot reference code in those other randomly placed chunks with a relative address.
Assuming things are at all similar on OpenBSD, to do what you are proposing you would need to mark groups of segments that have to be loaded relative to each other, allowing the other segment groups to be randomized with respect to each other. For code in one group to access globals or functions from another, the linker would generate a GOT and PLT per group, similar to how dynamic linking works, but with simplifications, since you know all the code that will be present and so don't need to worry about interposing, etc. In theory each GOT could get away with having as few as one entry per other segment group. [2]
Of course you would need code to initialize these GOT values. Realistically the static ELF loader would need to be augmented to provide the program with information about where it placed each segment group. Then the static PIE libc could include code that reads these offsets and uses them to initialize the GOTs. If you use the one-entry-per-segment-group approach and place the GOTs at, say, the very start of each segment group, with the entries in segment-group order, this would make for really simple initialization code. Of course, a more complicated relocation engine, like a hyper-stripped-down dynamic linker, would also be possible.
Footnotes:
[1]: Apparently on Linux even static PIE executables have some amount of runtime relocation code that is needed (I'm not really sure what/why).
[2]: This is because the linker would know the exact offsets of functions and variables within each segment group, so the code can simply load the other segment group's pointer into a register using relative addressing, and do the load/store/jump with that register plus the already-known displacement into that segment group.
> You can easily verify the integrity of the object files that are used in the random relinking - they are included in the binary distribution, and are necessary to perform the relinking.
I don't understand the full logic here. Yes I can authenticate the object files. But how would you discern, after a possible intrusion, an "sshd" binary that is indeed a random combination of these objects, from a trojaned "sshd" binary?
Limiting the scope of the damage that root can cause is an open problem, orthogonal to verifiable builds. OpenBSD has some basic checks in place (securelevel), but you should still assume that a compromised host is, well, compromised.
The weak link in reproducibility is that you currently have no trivial way of recreating the same random order of the linked object files.
Currently the random relinking is implemented literally through a call to "| sort -R" (-R for random order) on the list of object files, passed as arguments to the linker. I suppose if sort -R took a seed argument that was saved somewhere safe (chmod 400), the linking order can still be reproduced, and the resulting executable checksummed against the state of the system.
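Roughly, the relink step amounts to something like this; this is only a sketch, not the actual OpenBSD script, and the object directory and library list are placeholders:

    cd /usr/share/relink/usr/sbin/sshd         # shipped sshd object files (path assumed)
    LIBS="-lcrypto -lutil -lz"                 # stand-in for the real library list
    cc -o sshd.new $(ls *.o | sort -R) $LIBS   # random link order on every run
    install -m 511 sshd.new /usr/sbin/sshd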
Actually, saving the hashes of the objects into the executable itself, into a new section, would be enough. Then one would need to locate this section, confirm that the hashes there form a permutation of the canonical ones, relink the canonical objects in the same order, and check whether the resulting executable is the same byte-for-byte.
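Concretely, verification could then look something like this (hypothetical file names, assuming the link order recorded at relink time has been extracted to a root-only file):

    cd /usr/share/relink/usr/sbin/sshd
    sha256 *.o | diff - canonical-objects.sha256          # 1. objects are the canonical ones
    LIBS="-lcrypto -lutil -lz"                             #    same stand-in as above
    cc -o sshd.check $(cat /etc/ssh/relink.order) $LIBS    # 2. relink in the recorded order
    cmp /usr/sbin/sshd sshd.check && echo "sshd matches"   # 3. byte-for-byte comparison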
If you save the link order, then you've provided a map to the attacker of the link order used, which defeats the whole point of randomization. No? I must be missing something.
The section with the link order stays only on disk, i.e. not loaded into RAM, and is therefore useless to the one who tries to exploit sshd. Especially because the sshd binary is readable only by root now.
I guess the holy grail would be to combine this with hot patching (https://en.wikipedia.org/wiki/Patch_(computing)#HOT-PATCHING), and relink the kernel every now and then while it is running (currently, a system under attack would have to be rebooted every now and then, and that’s undesirable). That would face ‘a few’ technical hurdles, though.
Yeah I was just thinking this; I've got like years of uptime on my OpenBSD server--don't know how much boot time relinking is helping me. But for like, desktops and laptops, it's fine and a great feature IMO (you probably wade through a lot more muck on a personal machine)
If you have years of uptime on an OpenBSD machine, you are not keeping it up to date.
I have to admit I am guilty of this as well, but any maintained OpenBSD setup should have an uptime of no more than 6 months, and a well-maintained OpenBSD setup will be shorter than that as security patches are applied.
Having said that, one of the things I like about OpenBSD is that if you want to go dark and have an ultra-stable system (no updates ever), all the pieces are there for you (you will want to have the source; I would also make sure I have the ports tree for that release and a copy of the ports dist files).
This is true; my VPS has some kind of problem updating an FDE machine and I've procrastinated doing something about it for years. The answer is probably putting everything on tarsnap and reinstalling.
Netflix contributes. Real code. Rock stable. High quality. For so many years. So it's not only the free-beer license for them. I have never seen a single commit coming back from the others, who have built the most (financially) successful trillion-dollar businesses on top of FreeBSD, yet who come back again and again to (hard-)fork.
Do not go to this site with JavaScript enabled! They spam your upstream DNS provider with thousands of unique, uncacheable (fingerprinting?) 'test' DNS keys without your consent, to identify and track the DNS service you are using!
Take a look at your outbound DNS log yourself!