The big issue I've seen is that there are no clear wins for all the pain of IPv6. And the failure to provide a much, much better IPv4 interop story out of the gate (vs. just trying to yell at everyone to switch) was a huge loss. It means folks ended up having to run IPv4 to end devices in many cases, and now we have dual-stack hell across the network.
For the 90%+ of on-prem network clients that play by the rules, you are often supporting both DHCPv6 and SLAAC on IPv6 networks (unless Android has finally figured out DHCPv6).
Dual-WAN redundancy on IPv6 in SMB settings is actually worse than what you get with NAT and IPv4.
The /64 subnet size is stupidly large, and the number of subnets you can easily get from most upstream providers is annoyingly small.
Firewall filtering has to be modified or you get weird errors: IPv6 depends on ICMPv6 getting through (path MTU discovery, neighbor discovery), and people don't always like letting ICMP through the firewall.
It does lots of things differently, but for what? We do prefix delegation with DHCPv6, but can't use DHCPv6 to assign addresses (we're supposed to use SLAAC), yet still need DHCPv6 for lots of other stuff? It's nonsensical - you end up with just way too much garbage.
The privacy-extensions stuff hits IPv6 hard too: every host ends up with lots of auto-rotating addresses (sketch below).
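For reference, a minimal sketch of the classic (non-privacy) SLAAC derivation, where the stable host suffix comes from the MAC via EUI-64; privacy extensions replace that suffix with random ones that rotate, which is where the address pile-up comes from. The prefix and MAC below are made up:

```python
# Classic SLAAC interface ID via modified EUI-64 (RFC 4291).
# Privacy extensions (RFC 4941) swap this stable suffix for random,
# periodically rotated ones; hence many addresses per host.
import ipaddress

def slaac_eui64(prefix: str, mac: str) -> ipaddress.IPv6Address:
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                               # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe in the middle
    suffix = int.from_bytes(bytes(eui64), "big")
    return ipaddress.IPv6Network(prefix)[suffix]

print(slaac_eui64("2001:db8:abcd:1234::/64", "52:54:00:ab:cd:ef"))
# -> 2001:db8:abcd:1234:5054:ff:feab:cdef
```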
The number of times turning off IPv6 fixes weird glitches / timeouts / stalls etc. is still crazy to me.
AT&T (a massive corp) requires end-user devices to request /64 subnets one by one - which most end-user gear does not support.
Getting static IPv6 addresses from upstream providers (despite claims there are plenty of them) is seriously hard in many cases - while static IPv4 is trivial by comparison.
The list goes on.
I would have made the host part of the subnet 32 bits and expanded the network part. Maybe even reduced the overall size - 128 bits feels dumb when /64 is the smallest practical subnet. Then go all-in on interop / overlay with IPv4, with great reference designs so folks could ship IPv6-only stuff (even an on-device bridge to IPv4 so the outbound interface is IPv6), and add no new concepts unless clearly justified. Rough numbers for that layout below.
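Back-of-the-envelope numbers for that hypothetical layout (my sketch, not any standard):

```python
# Hypothetical 64-bit scheme: 32-bit network part + 32-bit host part.
subnets = 2 ** 32           # ~4.3 billion subnets
hosts_per_subnet = 2 ** 32  # ~4.3 billion hosts in each

print(f"subnets:          {subnets:,}")
print(f"hosts per subnet: {hosts_per_subnet:,}")

# The entire scheme equals a single IPv6 /64, which is the point:
# one /64 alone already holds 2**64 interface IDs.
assert subnets * hosts_per_subnet == 2 ** 64
```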
The vast majority of businesses will never need more than a single-node architecture, and hardware advances keep increasing that percentage.
Spark and its commercial counterpart Databricks are essentially obsolete for these organizations. Whatever justification they may have had in the past no longer holds.
I've recently shut down several in-house Spark clusters and replaced them with single nodes.
In addition to the simpler design and the reduction in cost, there was a massive increase in performance. I expect this to become more common, leaving distributed architectures to a small and increasingly niche group.
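As a toy illustration of the kind of job that moved (not our actual code; the dataset path and column names are invented), here is the shape of a Spark aggregation rewritten for one node with DuckDB over local Parquet:

```python
# Toy single-node replacement for a Spark aggregation job.
# 'events/*.parquet', customer_id, and amount are invented for the example.
import duckdb

top_customers = duckdb.sql("""
    SELECT customer_id, SUM(amount) AS total_spend
    FROM read_parquet('events/*.parquet')
    GROUP BY customer_id
    ORDER BY total_spend DESC
    LIMIT 10
""").df()

print(top_customers)
```

No cluster, no shuffle, no serialization across the network - on a box with plenty of RAM and fast local disks, scans like this are often where the performance win comes from.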
I think it's fine to order blocking in Italy - it's an Italian court, after all. But if they start doing the sort of global blocks folks have tried with X, then just withdraw services.
This is their somewhat muddy response to the “trolls” who might say:
“Changing the license was a mistake, and Elastic is now backtracking on it.”
> We removed a lot of market confusion when we changed our license 3 years ago. And because of our actions, a lot has changed. It’s an entirely different landscape now. We aren’t living in the past. We want to build a better future for our users. It’s because we took action then, that we are in a position to take action now.
This statement is confusing to me. I never found the old situation confusing until Elastic started adding invented licenses and claiming openness while monopolizing the right to host their software. That was confusing.
I thought unlogged tables were used so that crash recovery would truncate / dump the table after a crash, with 100% data loss - but this lets you put stuff on a tmpfs / ramdisk?
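For context, a minimal sketch of the semantics as I understood them (the connection string is a placeholder):

```python
# Unlogged tables skip WAL writes: fast, but crash recovery truncates
# them (a clean shutdown keeps the data). Sketch only; DSN is a placeholder.
import psycopg2

conn = psycopg2.connect("dbname=test")
with conn, conn.cursor() as cur:
    cur.execute("CREATE UNLOGGED TABLE scratch (id int, payload text)")
    cur.execute("INSERT INTO scratch VALUES (1, 'gone after a crash')")
```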
NASA launch coverage has really improved. YouTube access is great, especially as SpaceX has moved off YouTube, and the quality has also improved over the last year. Regardless of what I think about SLS and some other NASA programs, this is very nice to see. Telemetry and video are back too, though still a bit weaker.
School funding is diverted from classrooms but still technically goes to "education": the state superintendent level takes a cut, then county superintendents and programs, then the local school district superintendent / admin / consultants.
If a private school ran 35 kids in a class, each paying $35K (that's about $1.2 million per class), they'd have teacher aides and amazing everything.
The state budget act in CA is about $23K per student; wealthy areas add another $5-10K.
We have an amazing teacher fighting insane classroom ratios and requirements. At roughly $1M a class it shouldn't be like that - rough math below.
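The per-class math behind those numbers:

```python
# Per-class funding math using the figures above.
class_size = 35
private_tuition = 35_000   # per kid, the private-school example
ca_base = 23_000           # CA state budget act, per student
wealthy_extra = 10_000     # upper end of the extra $5-10K

print(f"private class: ${class_size * private_tuition:,}")            # $1,225,000
print(f"CA base class: ${class_size * ca_base:,}")                    # $805,000
print(f"wealthy class: ${class_size * (ca_base + wealthy_extra):,}")  # $1,155,000
```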
> This is false. For profit companies clear and custody billions per day successfully.
> Brokerages, banks, title and escrow companies, clearing companies.
Those are all highly regulated industries. Some of them would love to put stuff in their T&Cs like BountySource’s “if the beneficiary doesn’t withdraw the money after 2 years we get to keep it”, but the regulators would never let them. For a business like BountySource, that level of regulation does not exist.
It is also naive - non-profits have bountiful ways of making money disappear into pockets in all sorts of legal manners. It’s an entire industry - and by being non-profit they can be almost impossible to “control” once the board is captured.
I think you’d want the money to be held by an organisation which is respectable and has some backing and track record - e.g. the OSI, FSF, Linux Foundation, or Software Freedom Conservancy. Orgs like that are unlikely to redirect the funds into something completely unrelated.
There does need to be some flexibility, however - e.g. if a project is defunct and nobody wants to work on it, it is stupid to just leave funding in a bank account forever. But if you give it to another open source project (preferably one in the same area), I think that is fine. Adding it to the coffers of a for-profit company isn’t.
And it might be reasonable for a not-for-profit to contract with a for-profit firm to administer such a funding scheme - but the firm should only hold the funds as a trustee (so if it goes bankrupt, its creditors can’t touch them), and it should only get paid a defined percentage as a fee for service.