Canonical Launches MicroCloud to Deploy Your Own "Fully Functional Cloud" (phoronix.com)
129 points by laktak on Nov 17, 2023 | 89 comments



Mmm. So is MicroCloud essentially a glue between LXD + Ceph?

It's not really clear what problem MicroCloud is trying to solve, though. Considering that LXD already supports multi-node clustering, why does anyone want another cluster manager on top of LXD?


Paraphrased, they are saying LXD is more suited to running VMs or containers and not specifically microservices. It's missing sophisticated cross-container networking, centralized storage management, etc.

So they are pitching a "simpler than K8S, more complicated than LXD" layer on top of LXD for that purpose.

I suppose it also would compete with Proxmox, which is a somewhat different space than K8S. But other things it adds, like the web UI, RBAC, etc., put it in that space as well.
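
For what it's worth, the setup appears to be just the component snaps plus an interactive bootstrap, something along these lines (going from Canonical's docs, so exact names and steps may differ):

    # on every node (snap names as I recall them from the docs)
    sudo snap install lxd microceph microovn microcloud
    # then on one node, bootstrap the cluster; it's supposed to discover the others
    sudo microcloud init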


I got Proxmox running in an on-prem cluster fairly easily. I was never able to figure out k8s. If MicroCloud is as easy to set up and manage as Proxmox, I'll definitely check it out.


This demo is very hard to follow indeed. They are just jumping around, demoing features, but not talking about how it all fits together.


I agree, but as I said in the article, the first stage of the demo failed, and after some frantic attempts to tear down and rebuild the cluster, they gave up, moved on to a later stage of the demo, and then came back to it.

Which is why it's incoherent and doesn't fit together.


This is why you always prepare videos


So basically it's VMware but with more nodes required


Whether that's true or not, given the current circumstances of VMware I think "basically VMware but without Hock Tan swallowing it whole" is a compelling enough pitch to merit serious interest.


It's free. The free tier of VMware doesn't offer any redundancy at all.


I tried very hard to get on board with Ubuntu's new server paradigm; I've been using Ubuntu on and off since 2005. Snap is what turned me away. My research (admittedly several years out of date) told me that it was impossible to disable 'auto-updating' of snaps. Now I see that they're rolling out what is apparently a high-availability service built on services deployed through Snap. I don't see how this is viable if Snap is still dead set on updating things on its own schedule. I certainly wouldn't trust it without some in-depth testing and validation to ensure that Canonical can't remotely DoS me by pushing some new update that I can't opt out of.


This changed a while ago.

See "Pause or stop automatic updates" at https://snapcraft.io/docs/managing-updates


Cheers! A step in the right direction.


Snaps are horrible and a big deterrent. I stopped recommending Ubuntu because of them. Total dead end.


So.. we’re back to self-hosting your own services?

And IT does another cycle.


We never really left.

I work with an enterprisey on-prem product, which we of course tried to replace with a cloud offer. And in all fairness, it's generally gone well - but because we manage the platform, it sits in the cloud we chose. And that's never going to be the right choice for everyone.

So we have some customers who are concerned about national boundaries. We have some customers who offer their own clouds and aren't hugely enthusiastic about using the competition's. And then there's the whole mess of governments having isolated regions within otherwise-public cloud offers.

Between these, it quickly became apparent that replacing our traditional offer with *aaS would leave a lot of money on the table. It's a minority of customers, but it turns out to be a very valuable minority.

Within that context at least, "private clouds" start to feel like they have legs - bringing most of the benefits of the cloud, to places that 'public' can't reach.


> It's a minority of customers, but it turns out to be a very valuable minority.

I've been saying this for years. The Ubuntu vision here is appealing: storage and live migration as first-class things, because many high-value services are never going to be "cloud native", easily scheduled, stateless, ephemeral container apps: the stuff k8s et al. were designed for.

So they have the right model. Now it's down to execution.
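
For reference, in LXD terms that boils down to things like moving an instance between cluster members being a single command (instance and member names here are just placeholders):

    # relocate an instance to another cluster member; with shared (e.g. Ceph)
    # storage and stateful migration enabled, VMs can move live
    lxc move my-service --target node2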


I have the strong impression that most people assume "cloud" means public cloud only. Private clouds are fine and deliver a lot of the flexibility at a big discount compared to a public cloud.

The big gain from clouds is the flexible infrastructure, especially in the microservices world we are in now. In the past, one needed to procure, provision, etc., a new server to run a service (times X per environment). With a cloud, regardless of whether it's public or private, provisioning a VM (or container) to run a new service is a few clicks away.
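
E.g. on a private LXD-based cloud, spinning up a new VM for a service is roughly one command (image and name are placeholders):

    # launches a container by default; --vm asks for a KVM virtual machine
    lxc launch ubuntu:22.04 my-service --vm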


Currently watching a large enterprise migrate to cloud and hosted private cloud (I know), but all most teams are doing is a 1:1 mapping of VMs. I can see the cost going through the roof vs on-prem. Give it another three years and we'll be rolling back a significant % to on-prem.


I'm currently working on planning for a large enterprise migration. We're definitely not doing a 1:1 mapping of VMs, but costs are still going to go through the roof.

There might be a quiet betting pool on just how much more.


Yep. My future is cloud offboarding and cost mitigation consultancy.

Been through the fad cycle three times now and worked out where to make money :)


Some of us never left... I can't justify paying that much money just to execute some code on another person's server.


It can be cheaper to run things in the cloud if your workload is very spiky or short lived.


And some will never leave cloud.


That is where you run the cost calculations and decide if it is worth it or not. At the very start it can make zero sense to run in the cloud. Then a little further in, it becomes very worth it. Then later on, the cost overhang becomes so big that you are better off building your own. Where those boundaries are depends on your business.


I think the opposite: at the start it's a no-brainer. Setting up cloud infra for a simple website or web API is so, so simple, and between free tiers and startup credits, it costs $0.


I think it depends. For example, if I needed machines running some stuff 24/7, then I would eat through those cloud credits fast. Managed k8s is actually pretty cheap in the cloud, for instance, but you can't turn off their logging solution, and even on the lowest setting, storage for those logs will cost you an arm and a leg. However, if you can use something like serverless or even just regular VMs, you can be super cheap at the start. It all just depends on what you're doing.


fair point...


> And IT does another cycle.

People keep saying that as if it was a bad thing


It wouldn't be a bad thing if people learned from the previous iteration.

But they don't.

In fact, what they generally do is layer an implementation of the stuff they forgot last time on top of the preceding iteration which lacked it.

So now, we have clusters of Linux boxes, built with a ton of new tooling on top of Linux because Linux is a UNIX and traditional UNIX doesn't have networking or clustering in the design.

(Go on, then, 2 PCs on a LAN, both running a bare Linux kernel and a shell for init. Call them box1 and box2. Tell me how you mount the filesystem of box2 in a folder on box1.)

Plan 9 is UNIX 2.0 and can do this out of the box, but we didn't adopt Plan 9 so now we have to emulate it in a million lines of Rust or something.

Now we have WASM, which is kinda sorta compiling to the bare bytecode of the JavaScript runtime, for universal app binaries. Only you need a vast infrastructure to run it.

Inferno is UNIX 3.0 and can do this out of the box: it has the Dis VM right in the kernel, and all its components are compiled to Dis bytecode so binaries run on all supported processors.

But we didn't adopt Inferno so now we have to emulate it in a million lines of C++ because Mozilla cancelled Rust.


As much as I love Plan 9 and Inferno, the reason why they never were a commercial success is because they deliberately broke backwards compatibility with UNIX. (Which was an excellent choice for Plan 9 as a research OS, but perhaps should've been reconsidered for a commercial offering.)

They did however accomplish pushing the "UNIX 1.0" into at least 1.1: as awkward as Linux's /proc is, it's still objectively better than sysctl; 9P is a practical choice for sharing files with a VM guest; and let's not forget everything Go brought to the table.

In retrospect, considering which technologies contemporary to Plan 9/Inferno have "won", I'm also grateful that we don't need to deal with an in-kernel JVM.


> Linux's /proc is [...] still objectively better than sysctl

Is it though? AFAIK there is no way to get an atomic snapshot of the contents of /proc so any attempt to traverse the tree will be met with "No such file or directory" errors as processes end. You can reproduce this with a simple:

    doas find /proc -name pid -exec cat {} \;
Whereas sysctl returns a consistent snapshot of the data requested.


My point is about the elegance of using open/read/write/close (the mantra of Plan 9) versus going through a complex interface full of constants defined in a C header file.

A bunch of shell commands stitched together is the wrong level of abstraction for taking atomic snapshots of anything. Even "ls | xargs cat" suffers from the same problem: ls might output the name of a file that gets deleted or renamed before cat can open it. You'd need to mount a filesystem read-only, or take a snapshot (on e.g. ZFS). You can however use openat(2), which should in theory at least guarantee that if dirfd=open("/proc/123", ...) succeeds, then openat(dirfd, "pid", ...) will not race against another process reusing the same pid.

PIDs being racey by nature is why Linux introduced pidfd_open(2) and friends.


> My point is about the elegance of using open/read/write/close

I certainly agree with you that they are more pleasant to use, and I think procfs could one day solve the atomic snapshot issue. For example, there could be a /proc/snapshot directory. Running mkdir inside there could take a snapshot that the calling process would then be free to traverse at its leisure. It could be tied to the process group that called mkdir to make sure the snapshot gets automatically cleaned up when the calling process terminates. I think this would work a lot better with Plan 9's per-process namespaces but it could be hacked onto Linux.

At that point, the only argument I would have against procfs would be that we'd be paying for hundreds to thousands of syscalls compared to sysctl costing us only one syscall (or rather, two syscalls if I'm reading the FreeBSD source for kinfo_getallproc correctly).


Except that Android, WebOS and ChromeOS are closer to Inferno than UNIX, and quite successful.

IBM i and IBM z, have done quite alright for OSes with an in-kernel JIT.

It turns out when a company really has the budget, and the necessary management support, to push a technology no matter what, it happens.


What I don't get about all of this is why one of the big players doesn't invest heavily in Plan 9/Inferno and promote it. Surely having the networking and clustering code baked into the OS makes the whole thing both easier and more efficient to run, which would let them offer a product that significantly undercuts their rivals on price, as well as letting the companies using it shrink their DevOps teams thanks to the reduced complexity. Unlikely that MS would do it, given that they'd prefer everyone to run Windows, but I don't get why AWS or GCP aren't jumping all over it.


When companies are up against that much inertia, they push the boulder towards the nearest local minimum, not a currently known "best" global minimum. Why redo everything for Plan9 and train engineers on a new OS paradigm when you can write Kubernetes at a lower LoE? IMO of course, I'm not an expert.


J2ME, SavageSE, microEJ and Android are basically Inferno, at least in part, though not on general-purpose computing.

Inferno was actually originally started to fight against Sun's ideas regarding Java, hence why, alongside Limbo, there was eventually also Java support.


> Go on, then, 2 PCs on a LAN, both running a bare Linux kernel and a shell for init. Call them box1 and box2. Tell me how you mount the filesystem of box2 in a folder on box1.

Running /bin/sh on a bare kernel isn't a Unix system. A Unix system has daemons, at which point we get enough supporting tooling to make use of the NFS and 9P support that is very much built into the kernel, and mounting files from one machine to another becomes easy.
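
Concretely, with an NFS export (or a userspace 9P server such as diod) on box2, something like this does it from box1:

    # over NFS, assuming box2 exports /srv
    mount -t nfs box2:/srv /mnt/box2
    # or over 9P via the in-kernel v9fs client (needs a 9P server on box2)
    mount -t 9p -o trans=tcp,port=564 box2 /mnt/box2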


> Running /bin/sh on a bare kernel isn't a unix system.

That's true and you're right.

However, what I was specifically discussing here is core kernel functionality versus layering it on top. For that, I had to describe things in an artificially simplistic way, because Linux folks tend to think Linux invented everything and is everything, and start to go on about in-kernel Ethernet drivers, in-kernel NFS and so on; because they get busy counting trees, they miss the fact that they are lost in the forest.

I am not advocating the use of bare kernels here. I am not saying this is a rigorous or fair comparison.

What I am trying to demonstrate is the fundamental difference between having networking and a networked system as core parts of your system design, as opposed to bolted on later.

It's integral in Plan 9. Whereas Linux is a UNIX, and therefore must implement things later on top of a core design which assumes that all machines are standalone multiuser minicomputers with dumb text terminals.

This is such a fundamental part of the design of Unix that it is everywhere and it's very hard to show to Unix users that it's there... because it's the material of the walls, the floor, the ceiling and the door and so it's hard to see.


Features are backed by code. The code has to exist somewhere; in previous systems it ran in ring 0. Is that where you want your privesc?


This is missing the point.

The point is that if functionality is core to what you are doing with your OS, then it ought to be a core part of that OS and not layered on top.

If that would make the kernel of your OS too big and too complex then the design of your OS is wrong.


> Tell me how you mount the filesystem of box2 in a folder on box1

It's not that hard. Just install Ubuntu, then VMware, then Windows, then WSL, then Docker, then a microcloud inside a container, then you can easily run a headless Dropbox client to sync that folder.


I get the sarcasm, but seriously -- did we just forget about NFS?

Also, sshfs will do the job at least up to a point.
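
sshfs in particular needs nothing on the far side beyond sshd, e.g.:

    # FUSE-mount box2's filesystem under /mnt/box2 (paths are illustrative)
    sshfs user@box2:/ /mnt/box2
    # and to detach
    fusermount -u /mnt/box2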


We try to support YCombinator companies where we can :)


> mount the filesystem of box2 in a folder on box1.

This is not possible due to the CAP theorem.

(You will need to severely rethink your concept of "file" at the very least.)


And yet, that's how Plan 9 boxes talk to one another.

I think maybe you are misinterpreting my post as saying "mount the entire disk used by box1 on box2 simultaneously".

Although when it comes to that, I would also note that DEC's AdvFS did more or less that, 20+ years ago, and it's FOSS now:

https://en.wikipedia.org/wiki/AdvFS

And it's also what the DragonflyBSD team are attempting to make happen in HAMMER2.

https://www.dragonflybsd.org/hammer/


Wasn't DEC doing that with VAX/VMS 40 years ago with VAXclusters?

See https://en.wikipedia.org/wiki/VMScluster


I believe I specifically mentioned VMS and VAXclusters in the article, didn't I?


You did, sorry! I did not read your article originally. That's great. So few people these days even know about anything outside of Linux/Unix or Windows...


DEC did it 40 years ago.


"It" doesn't really work in practice, which is why nobody uses it in production settings.


Usually it is great for those of us that sell consulting services, for the others not really.


> So.. we’re back to self-hosting your own services?

It never stopped: plenty of places have on-site VMware and Hyper-V, both of which provide (IIRC) APIs for automated VM creation. If you're more open source, there's OpenStack (which VMware has an API-compat layer for).


Always depends on the use case.

Medical, we do everything in-house to make life easier.

Low value servers, in-house.

High uptime + scalable? You are not doing that in-house.


Yes, but spending money on garbage tech products fuels the economy now. It's not like we make anything else.


It's not so much a cycle as a helix IMO, things are better now.


> snap

> Ubuntu Pro

> driving […] subscription

Think I’ll stick LXC on proxmox


Good choice.

FreeBSD and jailed bhyve cells for me.


It's been a few years since I've looked at bhyve. Is it maturing well? Are there any tools that make managing it easier now?


Very stable. Libvirt is supported. So you can now use a GUI [0] [1]

There's a few web control panels too if that's your thing.

I can't fault it. A bit rough around the edges and manual in some places, but as expected. It has tackled everything I've thrown at it, from Windows Server to Linux, VPNs to routers, all very well.

You won't get the bells and whistles of an enterprise hypervisor, but for a tier-2 hypervisor it's smooth sailing.
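
And for the CLI-inclined, the libvirt driver means plain virsh works too (connection URI as I remember it from the libvirt docs):

    # talk to the local bhyve driver and list defined guests
    virsh -c bhyve:///system list --all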

[0] https://people.freebsd.org/~rodrigc/libvirt/virt-manager.htm...

[1] https://libvirt.org/drvbhyve.html


What's the container story like?

It might be fun running my homelab on FreeBSD 14...


Most call them jails, but others like to call them containers. In comparison to Linux containers, though, they are very different. Some would argue that what you can do in jails you can do in Linux containers, but that's another discussion.

Each jail can have its own network adapter, allowing you to host network applications, firewalls, servers and routers isolated from your main network.

These jails can have firewalls, allowing you to test firewall changes without burning down your network, as well as isolated web servers, music servers and game servers taking no more resources than what you give them.

You can also apply resource limits to jails, individually and to the whole lot: IOPS, CPU cycles, cores, network, quota size.
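
With rctl(8) that looks roughly like this (jail name and numbers are just examples):

    # cap a jail's memory and CPU share
    rctl -a jail:gameserver:memoryuse:deny=2g
    rctl -a jail:gameserver:pcpu:deny=50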

With this you can then create locked-down sandbox environments for applications. Firefox/Chromium can run in their own jail, ensuring that any viruses are isolated from your core system while only having, say, 2 Mbit of network. You can even run Linux within: Ubuntu on FreeBSD, who would have thought. Can you run FreeBSD on Linux?

What I'm currently setting up on my colocation box are what I call cells - zones, if you wish to go the Solaris way. In these you can host bhyve, FreeBSD's native hypervisor, in separate jails, giving you isolation. I host game servers publicly, so knowing they're restricted to their own cell with a set amount of resources means I know my system isn't going to brick if something goes astray. ZFS on top of that also lets me back up in real time. Real-time migration and resource limiting, what more could you want?

There are other third-party attempts at containerizing FreeBSD, but in my opinion jails are the bee's knees.

A feature I've never seen outside of the Unix world: bectl and beadm. A dream - advanced boot environment tools for when you really want to go deep. Imagine compiling your kernel and rebooting, only to find it panics. These tools allow you to revert the OS to its previous state, or even jump into your newly built kernel without having to reboot. Magical.
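
Roughly (bectl syntax from memory):

    # snapshot the current boot environment before a risky kernel build
    bectl create pre-upgrade
    # if the new kernel misbehaves, make the old environment the default again
    bectl activate pre-upgrade
    shutdown -r now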

Resources: https://wiki.freebsd.org/Jails https://wiki.freebsd.org/Containers https://wiki.freebsd.org/LinuxJails https://vermaden.wordpress.com/2022/03/14/zfs-boot-environme...


LXD is better than Proxmox in my opinion, and it's free for a stable version.


Ditto, been running Proxmox 15+ years and it's been rock solid.

The only thing I miss is the kind of automatic deployment and network layers that Kubernetes has.


Yep, Proxmox seems a lot more stable


At home I have a small server farm of cheap Ryzen PCs. Mostly continuous integration and testing for open source projects, but also some LLM.

If I have some idea, I just throw it into the farm and see results a few days later. It is slower than renting in the cloud, but about 4x cheaper.

It also heats my house a bit in winter...


You must have cheap electricity where you live...


Let's assume I have a huge server at home with tons of disks that draws on average 200W 24/7 - most machines idle around 10-50W depending on the peripherals.

I'm in Germany, the country with the most expensive electricity prices - I'm paying 31 cents/kWh at the moment. That's about the current market price for consumers - so that's 543.48€/year, or about 45€/month, if that machine drew 200W on average over the year.
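
Back-of-the-envelope:

    # 200 W around the clock at 0.31 €/kWh
    echo "0.200 * 24 * 365 * 0.31" | bc
    # => 543.120, i.e. roughly 45 €/month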

If you are lucky and have fiber at home - let's say another 50€/month - and add 5€ of electricity for the fiber gear, switches and so on, you pay roughly 100€/month for self-hosting at home.

Using the pricing of the big public clouds like Google, AWS and Azure, you can stuff quite a bit of CPU, SSDs and HDDs into that machine until you hit the 200W limit.

I just looked at Azure, took the cheapest instances for dev, and compared the reserved price for one year (so already discounted).

On Azure that gets me a whopping B4s v2 instance with 4 vCPUs and 16 GiB, with no storage and no bandwidth.

Using Hetzner, a cheap German dedicated-server discounter, you can rent an Intel Core i9-13900 with 64GB memory and 2 x 1.92TB SSD for 100€/month.

Realistically you would buy a Beelink SER6 Max mini PC with an AMD Ryzen 7 7735HS, which probably idles around 15W, and pay only about 40€/year in electricity for it. It's 640€ new here and much faster than that Intel machine. You can almost buy two of these and still only pay about 105€ a month in the first year, and from the second year on it's only the ~80€/year of electricity.

Of course there is no redundancy, no support, no flexibility, your home ISP is probably overbooked, and so on - but if you just need a big machine somewhere that can fail yet works most of the time, it can be a cheaper alternative.


> I'm in Germany, the country with the most expensive electricity prices - I'm paying 31cent/kWh at the moment.

"Most" is an exaggeration, but expensive is true: https://ec.europa.eu/eurostat/statistics-explained/index.php...


It's important to note there are extra charges on top of the electricity price that are also counted per kWh. Here in Poland we have transmission charges that are almost the same amount per kWh, so the total variable cost is about 2x the per-kWh energy cost. Looking at my last bill, I pay 0.36€ per kWh in Poland despite the chart showing about 0.16€.

I wonder if countries like NL, listed as among the most expensive, have all these charges on top too.


Thanks, I guess I read too much local doom news. I also underestimated that Intel box from Hetzner - it's actually twice as fast as the small AMD server, but I guess the idea was clear.


I am in Czechia, pretty expensive, but I heat with electricity anyway (don't ask).

If you want a machine with 8 real CPU cores, 128GB RAM and 8TB of SSD storage... it only takes 110 watts under full load. The electricity cost compared to cloud fees is tiny.


Chiming in from Minnesota, 11 cents per kWh.


Potentially a great move for Ubuntu if economic conditions push companies to invest in reducing their IT opex and getting off the public clouds (and also if Gartner tells them to do so...).

Now I know Ubuntu just launched the product but damn they suck at selling it. The landing page doesn't even give you examples of applications you can run on it.

Also I'm sorry but you need to have the word "docker" a few times on that page if you want to catch any flies. Your CTO/CIO will also want to see some kind of fake enterprise app store to consider it.


It's only "fully functional" if it's running Haskell or OCaml. ducks


I recently tried MicroK8s from Canonical, and at idle (as in not running any of my containers) its CPU usage ranges from 5% to 15%. I hope this product doesn't suffer the same wastage.


I had the same problem (there's a github issue about this: https://github.com/canonical/microk8s/issues/2186).

I swapped to k3s and the usage was half of what microk8s used.


There are issues with the gvfs client getting into a race condition with the disk volumes. It will saturate the max connections and eat resources at idle. Not sure if this is your issue; I have yet to find a real solution other than uninstalling gvfs-backends and bumping the max user instances and connections. I got this workaround added to the Charmed Kubeflow quickstart. It's not the correct solution, though, and I'm pretty spent on the issue.
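
For reference, the sysctl bump in that quickstart is along these lines (from memory, exact values may differ):

    # raise inotify limits that microk8s/kubeflow workloads tend to exhaust
    sudo sysctl fs.inotify.max_user_instances=1280
    sudo sysctl fs.inotify.max_user_watches=655360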


This is going to be very uncomfortable when Supermicro sues Canonical over a trademarked term.

Supermicro has been selling MicroClouds for years, and it's a well-known product line in the industry.


Just like it was a very uncomfortable situation when Anderson sued Microsoft over the term windows?

Canonical is selling a piece of software, supermicro a line of servers.

And based on their own site it isn't trademarked. Notice there's no ® next to the MicroCloud server, but there is one next to MicroBlade.

https://www.supermicro.com/en/products/blade


Interesting, I wonder if Supermicro applied for the TM but was rejected.

Either way, Supermicro will most likely make noise about this. We'll find out if I'm right when Canonical changes the name.


So OpenStack?


But with actually fewer features than OpenStack. Kind of weird that they didn't just package OpenStack - a vendor managing it for you would actually provide business value.


Ubuntu has had OpenStack for years. It sucks. MicroCloud looks like a simpler, saner reboot.


What are the implications of the AGPL license for this (if any)?


my understanding is that you can't sell this cloud as a service (AWS would not be able to use it)


Sure they can, they just have to make the code available that is in what they deploy.


It means all derivative works of microcloud must also be licensed AGPL (if someone else "links" other software into it, or makes modifications and redistributes it, they must provide those modifications, etc.).

Any company can sell microcloud, but it must remain AGPL.

This license choice is a defense against microcloud being assimilated into proprietary products.

(Now I'll guess at the motivation of your original question. Lemme know if I missed your point...) The AGPL doesn't apply to any containers or VMs running in a microcloud any more than Ubuntu's licensing applies to programs you run on Ubuntu.



