Incus is very nice and fully featured, but suffers from a few issues. Onboarding is unintuitive and the defaults are bad, which makes giving people access annoying: you have to teach them first, and they can't just make a VM with a few clicks. Authentication and user-control options are also limited: without external auth, users must exist on the underlying system, and the few (but very strict) external-auth options currently require a full domain and no proxying (this might get partially fixed later).
And finally, it suffers from hardcore tracking of upstream, i.e. Canonical's lxd(-ui), meaning they won't really make any changes that LXD wouldn't, and are thus chained to them : (
Sorry, what? LXD is NOT Incus's upstream. Incus was forked from LXD specifically to allow divergence, and the licenses mean changes rather flow in the other direction. Nor does Canonical consider Incus "upstream"; they're just divergent forks at this point.
I forked Incus from LXD, and I would not describe LXD as Incus's upstream at all. In fact, LXD tends to take patches from Incus these days; on the other hand, we can't take patches from LXD, or even look at them, because they're all AGPLv3 now.
Most of the maintainers of and contributors to LXD have moved to maintaining and contributing to Incus instead, because it is community-oriented. Incus also has a lot of features LXD doesn't have (the list is too long to enumerate, but one notable example is support for OCI images, which is Incus-exclusive).
Stephane was arguably the primary maintainer of LXD before the split and now works exclusively on Incus. AFAIK the only LXD-related thing still shipped by Zabbly is the LXD web UI (which I get the impression Stephane doesn't feel is worth maintaining separately, since it's easy to just swap out the branding, which is what he does). Ultimately, an optional web UI is not a particularly large part of the project...
(To be honest, I only learned about the web UI when someone asked me to package it for openSUSE a few months ago. Maybe I would've forked it too back when I forked Incus, had I known about it, though I'm not a web dev.)
The only time this was even vaguely true, to my recollection, was prior to Incus 0.4, when both were cherry-picking from each other but neither was upstream of the other.
It's very very different between the UI and Incus itself :)
The Incus team are low-level systems engineers who develop in Go or C.
The UI is a pile of typescript which none of us really want to understand/touch any more than strictly needed.
The Incus UI is a soft fork (really just a small overlay) on top of the LXD UI, adjusting for the API differences between LXD and Incus and fixing the few big gripes we had with the LXD UI. Because both projects are under the same license, we can actually just follow what happens in the LXD UI and pull in code from it.
Incus is a very different beast. The whole reason the project had to be started is because of Canonical's series of changes which eventually led to a re-licensing of LXD to AGPLv3. With Incus remaining Apache 2.0, none of us can even look at the LXD code without risking being "tainted". We cannot import any code from LXD since that license change and we never have. However LXD has no problem with importing Apache 2.0 code into an AGPLv3 codebase, which they have quite actively been doing.
In short Incus is a hard fork of LXD, we don't look at any LXD code or even at their issues or release announcements (mostly because it's not useful for those two). That means that everything that happened in Incus since December 2023 has been completely independent of LXD.
The Incus UI is a soft fork of the LXD UI, it's rebased every time they push a new version out and our goal is to keep the delta as small as possible as it's something we want to spend as little time on as we possibly can. It's also why we always package it as "incus-ui-canonical" to make it very clear as to what it is.
There are also other UIs out there that could be used; sadly, last I checked, none came close to the feature coverage of the LXD UI, or they had dependencies on external components (database, active web server, ...), whereas what we want is a UI that's just a static JavaScript bundle which then hits our REST API like any other client.
> I mean sure, it's the UI component only, and not on the lxc repo but the Zabbly one, and maybe they treat things widely differently depending.
This one is fair, since Incus and Zabbly have had no desire to reimplement a web UI; they've instead opted to leave that to the community, resulting in LXConsole, for example.
Zabbly, for additional context, is Stephane Graber's company, which he set up as a consulting service for Linux and Linux Containers (both the Linux Containers organization and the LXC project).
EDIT: I read this part as suggesting that former LXD team members now on the Incus team potentially have the same mindset; if this is wrong, please correct me.
> And I swear I had a similar experience with regards to some other incus issues where there was a similar response about changes.
So far, over two years of interaction on the forums (both as a lurker and a commenter) and of watching the issue tracker, I've yet to see anything that resembles treating Canonical's LXD as any form of upstream, especially since the split.
"Power bank cells are mainly divided into 18650 cells and polymer cells. The most common one on the market is 18650 lithium-ion batteries, with a market share of 70%."
Finns need to mentally evolve beyond this mindset.
Somebody being polite and friendly to you does not mean that the person is inferior to you and that you should therefore despise them.
Likewise somebody being rude and domineering to you does not mean that they are superior to you and should be obeyed and respected.
Politeness is a tool and a lubricant, and Finns probably lose out on a lot of international business and opportunities because of the mentality you're demonstrating. Look at the Japanese for inspiration: an economic miracle, while sharing many positive values with the Finns.
Wow. I lived in Finland for a few months and this does not match my experience with them at all. In case it's relevant, my cultural background is Dutch... maybe you would say the same about us, since we also don't do the fake-smiles thing? I wouldn't say that we see anyone who's polite and friendly as inferior; quite the contrary, it makes me want to work with them more rather than less. And the logical contrary holds for the rude example you give. But that doesn't mean that faking a cheerful mood all the time isn't disingenuous, and it does not inspire confidence.
"I never smile if I can help it. Showing one's teeth is a submission signal in primates. When someone smiles at me, all I see is a chimpanzee begging for its life." While this famous quote from The Office may be quite exaggerated in many ways, this can nonetheless be a very real attitude in some cultures. Smiling too much can make you look goofy and foolish at best, and outright disingenuous at worst.
Yes, globally, cultures fall into one of two categories: those where a smile is a display of weakness, and those where it is a display of strength. The latter are more evolved cultures. Of course, too much is too much.
You know there is a difference between being polite and friendly, and kissing ass, right?
We are also talking about a tool here. I don't want fluff from a tool, I want the thing I'm seeking from the tool, and in this case it's info. Adding fluff just annoys me because it takes more mental power to skip all the irrelevant parts.
Since it's a tool, what does it matter if it's too polite for your liking? It's just a tool, and we've always had these things: every time you started your computer, Windows would show a loading screen saying "Welcome".
I find the hubris of this article absolutely disheartening and toxic, and frankly it just reinforces how Wikipedia isn't a good place, and how people who shouldn't have control over it do.
And it isn't because of the self promoting described, but because of the response to it.
Apart from the fact that this was pure self-promotion, it was also spamming the Wikipedias of small language communities with low-effort autotranslated garbage, which I think is rather insulting.
Once again I voice the only sane option:
Skip IPv6 and the insanity that it is, and do "IPv8": simply double (or quadruple) the address space without introducing other new things.
It'd be objectively worse. IPv6 is at least sort of supported by a non-negligible number of devices, programs, and organizations; this IPv8 would be a whole new protocol that no one out there supports. The fact that version 8 was already defined in an (obsolete) RFC 1621 doesn't help either.
Even if you decide to make it a Frankenstein's monster of a protocol, wrapping two IPv4 packets inside each other to create a v4+v4=v8 address space, you'll need a whole new routing solution for the Internet, as those encapsulations would have issues with NATs. And it'll be far more error-prone (and thus less secure), because it would be possible to accidentally mix up v4 traffic and the inner half of v8 traffic.
Nah, if we can't get enough people to adopt IPv6, there's no chance we'll get even more people to adopt some other IPvX (unless something truly extraordinary happens that would trigger such adoption, of course).
Are you saying you believe it's truly impossible to create a new backwards compatible standard that expands the address space and doesn't require everyone to upgrade for it to work?
It isn't possible to make a backwards-compatible standard that expands the address space. Where are you going to put the extra address bits in the IPv4 header?
It also can't be backwards compatible with IPv4 networking and software: network gear will drop the extra address bits, the OS will ignore them, and software will blow up.
It would be much better to make a new version. And if you're going to make a new protocol, you might as well make the addresses big enough to not need expansion again.
Then you have to update every networking device to support the new standard. And update all the protocols (DHCP, etc) for more address space. That part is what took a lot of the time for IPv6. Then you have to update all of the software to support 64-bit addresses. Luckily, most of the work was already done for IPv6.
Then you have to support a transition mechanism to talk to IPv4, except there isn't enough space in the new address. IPv6, on the other hand, has enough address space to stuff the IPv4 host and port into the IPv6 address for stateless NAT.
> Where are you going to put the extra address bits in the IPv4 header?
The optional part. EIP proposed using 16 bits (minimum) to bump the address space to 40 bits (the EIP extension portion is variable-sized so it can go higher until you reach header option limits): https://archive.org/details/rfc1385/page/4/mode/2up
The effort is a bit smaller because existing stacks can already parse what's in the EIP part, as it disguises itself as an option header. The change is behavioral, not structural.
Also, with the extra octet added, we'd get ~254 clusters of addresses, each the size of today's IPv4 space. If a host inside one of these doesn't really care about the others, it can skip supplying this information entirely, i.e. not all participants need to understand the extension. LANs, internal networks, and residential use come to mind as examples; in those cases only the gateway has to be updated, just like the RFC says.
With IPv6 participation is all or nothing or dual stack, but then this is ~1.1 stack :)
That RFC glosses over a LOT of details. I'm skeptical the effort would be smaller once you consider what's required for routing and the "translation service" (which, by the way, is totally glossed over in the RFC).
Unless you're planning on doing all IP communications in user space (or within your network "cluster"), the OS and IP stack still needs to be updated, you need a new addressing format, applications need to be aware of it, etc. If you want to actually make use of the new address space, it all needs to be updated... just like IPv6.
No, sorry. Very few switches, and not all routers, do that in software. If it's done in an ASIC, then that part simply can't be added to the address without new hardware.
So no, good attempt but it's pretty much still a 'upgrade all the routers and switches' kind of issue just like IPv6.
I'm not going to say it's truly impossible, but it's practically just-about impossible.
There's no straightforward way of getting old hosts to be able to address anything in the new expanded space, or older routers to be able to route to it.
So you have to basically dual-stack it, and, oops, you've created the exact same situation with IPv6...
If it's possible, why has no one done it? Most of the backwards compatible "solutions" that are presented just run into the same issues as IPv6 but with a more quirky design.
This is a pipe dream in the current century. IPv6 adoption has been slow but it’s approaching 50% and absolutely nobody is going to go through the trouble of implementing a new protocol; updating every operating system, network, and security tool; and waiting a decade for users to upgrade without a big advantage. “I don’t want to learn IPv6” is nowhere near that level of advantage.
The reason IPv6 adoption is lacking is that there's no business case for it from consumer-grade ISPs, not that there's an inherent problem with IPv6. Your proposed IPv8 standard would have the exact same adoption issues.
Your IPv8 is what IPv6 should have been. Instead, IPv6 decided to reinvent way too much, which is why we can't have nice things and are stuck with IPv4 and NAT. Just doubling the address width would have given us 90% of the benefit of v6 with far less complexity, and it would have been adopted much, much, much faster.
I just ported some (BSD) kernel code from v4 to v6. If the address width were just twice as big, and not 4x as big, a lot of the fugly stuff you have to deal with in C would never have happened. A sockaddr could have been expanded by a couple of bytes to handle v6. There would not be all these oddball casts between v4/v6, or entirely different IP packet-handling routines driven by data-size differences and by differences in things like route and MAC-address lookup.
Another pet peeve of mine from my days working on hardware is IPv6 extension headers. There is no limit in the protocol to how many extension headers a packet can have. This makes verifying ASICS hard, and leads to very poor support for them. I remember when we were implementing them and we had a nice way to do it, we looked at what our competitors did. We found most of them just disabled any advanced features when more than 1 extension header was present.
IPv6 reinvented hardly anything. It's pretty much IPv4 with longer addresses, plus a handful of trivial things people wished were in IPv4 by consensus (e.g. fragmentation only at end hosts; fewer redundant checksums).
The main disagreements have been about what to do with the new addresses, e.g. some platforms insist on SLAAC (which is good, because it forces your ISP to give you a /64).
Devices operating at the IP layer aren't allowed to care about extension headers other than hop-by-hop, which must be the first header for this reason. Breaking your stupid middlebox is considered a good thing because these middleboxes are constantly breaking everyone's connections.
Your sockaddr complaints WOULD apply at double address length on platforms other than your favorite one. The IETF shouldn't be in charge of making BSD's API slightly more convenient at the expense of literally everything else. And with addresses twice as long, they wouldn't be as effectively infinite. You'd still need to be assigned one from your ISP. They'd still probably only give you one, or worse, charge you based on your number of devices. You'd still probably have NAT.
IPv6 is often simpler to administer than IPv4. Subnetting is simpler for the common cases. SLAAC eliminates the need for DHCP on many local networks. There's no NAT to deal with (a good thing!) Prefix delegation can be annoying if the prefix changes (my /56 hasn't in almost 3 years.) Other than that, it's mostly the same.
I frequently see this claim made, but it simply isn't true. NAT isn't inherent to a protocol; it's something the user does on top of it. You can NAT IPv6 just fine; there just isn't the same pressure to do so.
Technically, you are correct. Practically speaking, NAT is an inherent part of using IPv4 for 99.99% of end users. I haven't seen an end user or business with a public IP on the desktop in nearly 25 years.
You can NAT IPv6 but it is rarely done since there is simply no need.
I NAT IPv6 on one of my servers, because having separate IPv6 addresses for VMs but the same IPv4 caused some issues with running mail servers and certificates. If only I could drop IPv4 completely.
You are completely missing the point, which is that decibels are opaque and carry hidden, field-specific meaning that is impossible to know from the "unit" alone.
This is not a criticism of how useful they are in calculations.
Cryptpad is E2E encrypted. At first glance, this is not.
Which also means its features won't be constrained by the E2EE architecture.
At first glance, it seems the Suite numérique wants to be simpler than full traditional office documents; it seems to compare itself with Notion and Outline.
CryptPad has both very simple modules and more complex, OnlyOffice-based ones.
Ultimately, if the Suite numérique's frontend is able to send editing patches as JSON, it should not be too complicated to make it work on a CryptPad server and make it E2EE, which is exactly what the CryptPad team did with OnlyOffice (the "why not both" option).
It would not really make sense to try to pull all of Docs into CryptPad, as Docs is both client and server code. The client has both an editor and sharing features.
CryptPad integrates editors.
However, Docs is based on BlockNote for its editor, and that editor has been on our watchlist as a replacement for the aging CKEditor used in CryptPad. That would make sense to integrate into CryptPad.
As was said, CryptPad is E2EE, which is a LOT of work. It also has 9 types of document files (Docs has 1). CryptPad additionally has a drive, shared folders, team drives, import and export features, and finally a survey tool with E2EE protection. There are many more details, little and large.
Of course as a part of the Cryptpad team you have a bias, but if I'm looking to replace Google Drive and its functionality, is Cryptpad a good replacement, or is there another one, maybe Nextcloud(?), that could do it better (just user interaction etc wise, not E2EE)?
Would you have a list of things Cryptpad can and can't do compared to other solutions? It would make choosing the best replacement easier.
Hah, I see exactly the same issues as I had when trying to implement a TUI on mobile (autocomplete failing, no arrow keys, the keyboard blocking the view, ...). It's just so painful, and it actually seems almost impossible to leverage the virtual keyboard for input.