
Just because you want something to have a viable business model doesn't mean it does. If you want to get paid to develop open source software, I think you have a few options:

1. Just don't. Work on open source on the weekends, etc.

2. Do it as part of a "commoditize your complements" strategy.

3. Work at a company that is so large they can fund open source development as part of their advertising strategy.

4. Gather together some expertise in existing open source projects and sell consulting. Crucially, you'll probably need to build on top of some existing open source install base or name recognition. Red Hat didn't start the Linux project or the GNU userland, Percona didn't write MySQL, etc. In some sense you are now one of the leeches that posts such as this one complain about.

The fundamental piece in common here is that the open source bit isn't the main value driver for the business.


I imagine that opinions such as this have influenced this recommendation: https://research.swtch.com/deps . I think there is some spirit in Go of not taking on a ton of small dependencies, but that may be a hold-over from before Go had a built-in package manager.

I think as an organization grows to some combination of available resources and severity of an outage this view becomes more and more common.


I agree, with the addition that closing a `chan struct{}` (exposed to receivers as `<-chan struct{}`) is a good way to do broadcast notifications (à la the context package).

As evidence that channels should be used rarely, and only in small scopes, consider how few places chan is used in the standard library.
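
A minimal sketch of that close-to-broadcast pattern, in the spirit of context.Done() (the worker setup here is just for illustration):

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        done := make(chan struct{}) // receivers would see this as <-chan struct{}
        var wg sync.WaitGroup

        for i := 0; i < 3; i++ {
            wg.Add(1)
            go func(id int) {
                defer wg.Done()
                <-done // unblocks for every waiter once done is closed
                fmt.Println("worker", id, "saw the shutdown signal")
            }(i)
        }

        close(done) // a single close notifies all receivers
        wg.Wait()
    }

The nice property is that close works for any number of receivers, including ones that only start waiting after the close has already happened.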


> There are a lot of organizations who want hyperscale style servers but aren't going to start a division to begin making them themselves.

How does this differ from what large players like Dell are offering under the "hyperconverged" moniker? For example, Dell's VxRail[0] appears (from marketing speak, anyway) to be a single rack with integrated networking and storage that you can ask to "just start a VM".

[0]: https://www.dell.com/en-us/dt/converged-infrastructure/vxrai...


So, "hyperscale" and "hyperconverged" are two different things. Names are hard.

"hyperconverged" is a term used by VMware to describe a virtualized all-in-one platform. You get compute, storage, and networking, all virtualized as one appliance rather than as individual ones. VxRail is basically Dell EMC's implementation of this idea: you get one of their servers, vSAN and vSphere all set up and ready to go.

"hyperscale infrastructure" describes an approach to designing servers to begin with. A lot of folks moved toward commodity hardware in the datacenter a decade or two ago. And then you get more and more of them. The hyperscale approach is kind of top-down as opposed to that bottom-up style: how would we design a data center, not just a server. Don't build one server and then stick thousands of them in a building; think about how to build a building full of servers. This is more of an adjective, like RESTful, rather than a standard, like HTTP 1.1. That being said, the Open Compute Project does exist, but I still think it's closer to a way of thinking about things than a spec.

Okay, so all of that is still a bit fuzzy. But it's enough background to start to compare and contrast, so hopefully it makes a bit more sense.

The first difference is the physical construction of the hardware itself. If you buy VxRail, you're still buying 1U or 2U at a time. With Oxide, you're buying an entire rack. The rack isn't built in such a way that you can just pull out a sled and shove it into another rack; the whole thing is built in a cohesive way. This means that not every organization will want to own Oxide; if you don't have a full rack of servers yet, you don't need something like what we offer. But if you're big enough, there are advantages to designing for that scale from the start. This is also what I meant by there not being a place to buy these things; other vendors will sell you a rack, but it's made up of 1U or 2U servers, not designed as a cohesive whole, but as a collection of individual parts. The organizations that are doing it this way are building for themselves, and don't sell their hardware to other organizations. This is also one way in which, in a sense, Oxide and VxRail are similar: you're buying a full implementation of an idea from a vendor. It's just that the ideas are at different scales.

The other side would be software, which of course is tied into the hardware. With VxRail, you're getting the full suite of software from VMware. You may love that, you may hate it, but it's what you're getting. With Oxide, you're getting our own software stack, which is what the article covers in detail. You may love that, you may hate it, but it's what you're getting :). That being said, I haven't actually used a full enterprise implementation of the VMware stack, so I don't know to what degree you can mess with things, but our management software is built on top of an API that we offer to customers too, so you can build your own whatever on top of that if you'd like. Another thing here is that, well... the VMware stack is not open source. All our software will be. That may or may not matter to you.

The last bit about software though, is I think a bit more interesting: even though you are buying a full solution from Dell EMC, you're also sort of not. That is, Dell and VMware are two different organizations. Yes, part of what you're getting is that they say they have pre-tested everything in the factory to make sure it all works together well, but at the end of the day, it's still integrating two different organizations' (and probably more) software together. With Oxide, because we're building the whole thing, we can not only make sure things work well together, but really take responsibility for that. We can build deep integrations across the entire stack, and make sure that it not only works well, but is debuggable. Dell EMC isn't building the hypervisor and VMware isn't writing the firmware. Oxide is writing all of it. We think this really matters for both reliability and efficiency reasons.

So... yeah. That's a summary, even though it's already pretty long. Does that all help contextualize the two?


Contrary to most of the comments here, I found a lot to agree with in this article:

> You have no soul as a company at that point, you’re just trying to make money with other people, rather than help people with a problem that you know. You may think you have an idea, but it’s not good enough yet.

I think this is the crux of the difference. If you want to have a successful startup, you have to believe that you deserve to exist and believe you have something unique to offer. You cannot just "show up and listen" and get paid the big bucks.


One aspect of Lua that stands out to me is how every feature is carefully designed both in isolation and in composition with the others. The language has relatively few features, but none of them are hanging off the side; they all lean on each other to make a cohesive whole.

I think Lua is a bit unique in this for two reasons. First, they have an intentionally open-source but not open-development model. Second, because of the way that Lua is embedded inside other projects, there is more willingness to make backwards-incompatible changes. I'm sure this is a negative for some who want to build a larger, less fractured community, but it has advantages for language cohesiveness.


> The coordinator communicates with each worker using an improvised JSON-based RPC protocol over a pair of pipes. The protocol is pretty basic because we didn't need anything sophisticated like gRPC, and we didn't want to introduce anything new into the standard library.

Interesting that, by these criteria, this does not use `encoding/gob`. I think `encoding/gob` is a nice example of what is possible with reflection, and I've certainly learned techniques from reading its implementation, but I haven't seen very many uses in the wild, and this certainly seems like a vote of no confidence.
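
For what it's worth, a rough sketch of what a single coordinator/worker round trip over a pair of pipes could look like with `encoding/gob`; the Task/Result types and the io.Pipe stand-ins are made up for illustration, not taken from the protocol the article describes:

    package main

    import (
        "encoding/gob"
        "fmt"
        "io"
    )

    // Task and Result are hypothetical message types.
    type Task struct {
        ID   int
        Args []string
    }

    type Result struct {
        ID     int
        Output string
    }

    // worker reads one Task from r and writes one Result to w.
    func worker(r io.Reader, w io.Writer) {
        dec, enc := gob.NewDecoder(r), gob.NewEncoder(w)
        var t Task
        if err := dec.Decode(&t); err != nil {
            return
        }
        enc.Encode(Result{ID: t.ID, Output: "done: " + t.Args[0]})
    }

    func main() {
        // Stand-ins for the pair of pipes to a child process.
        reqR, reqW := io.Pipe()
        respR, respW := io.Pipe()

        go worker(reqR, respW)

        enc, dec := gob.NewEncoder(reqW), gob.NewDecoder(respR)
        enc.Encode(Task{ID: 1, Args: []string{"fuzz"}})

        var res Result
        dec.Decode(&res)
        fmt.Printf("%+v\n", res)
    }

The encoder sends type descriptions on the wire before the first value, which is part of why gob streams are self-describing but also Go-only, unlike JSON.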


While this is true, in the context of alpine climbing where I first heard this statement, the bold alpinists who die young are very much not beginner-intermediates. I've interpreted this differently than just the "Bathtub Curve"[1] applied to dangerous pursuits.

Rather, there is a certain amount of objective risk in alpine environments, and the more time you put yourself in that environment, especially in locations you aren't familiar with, the greater the chance that something will eventually go wrong.

I'm always surprised by the number of famous alpinists who weren't killed on their progressive, headline-capturing attempts but rather on training attempts and lesser objectives.

[1]: https://en.wikipedia.org/wiki/Bathtub_curve


My wife teaches people to ride horses for a living so we talk about the safety of that.

A lot of the people you hear about getting seriously injured riding are professionals or people who ride competitively at a high level. They are doing dangerous things and doing a lot of them.

We don't think it is that dangerous for people who ride at the level we do; in maybe 15 years we've had one broken bone.

The other day I noticed that we had acquired a used horse blanket from another barn in the area, a barn that is a running joke at ours because of its bad safety culture. They are a "better" barn than ours in that they are attached to the show circuit at a higher level than the bottom, but we are always hearing about crazy accidents that happen there. When I was learning to ride there, they had a confusing situation almost like

https://aviation-safety.net/database/record.php?id=19810217-...

with too many lessons going on at once, where I wound up going over a jump by accident after a "near miss" in which I almost did the same. (I never thought I could go over a jump and survive; as it was, I had about two seconds to figure out that I had to trust the horse and hang on, and I did all right...)


Another good analogy is that, in the US Air Force, the flight crews considered most dangerous are those with the highest collective rank. Sure, the young crews are still learning, but the old ones think they know it all and have often forgotten critical details.


(Example) When you go climbing somewhere, you have something like a 40% chance of getting killed that you can mitigate completely by skill, and an additional 0.1% chance that something goes wrong by some fluke, which you can't mitigate at all.

Pretty good if you go climbing 10 times a year. Pretty bad if you go 1000 times.
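
Back-of-the-envelope, assuming that 0.1% fluke risk is independent per outing (the figure is from the comment above, not real accident data):

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        p := 0.001 // per-outing unmitigable risk
        for _, n := range []float64{10, 1000} {
            // cumulative chance of at least one fluke over n outings
            fmt.Printf("%4.0f outings: %.1f%%\n", n, 100*(1-math.Pow(1-p, n)))
        }
    }
    // prints roughly: 10 outings: 1.0%, 1000 outings: 63.2%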


Isn't this somewhat expected?

They wouldn't be famous if they didn't succeed on headline-capturing attempts, and there are only so many of those you can realistically do in a lifetime. They are dead, however, because doing dangerous things often enough will kill a substantial number of practitioners.


You also need to consider that headline-capturing objectives number only a few in a lifetime, while training goes on all the time.


I agree with many of the concerns the author raises, but I'm left with the question:

Given all this, what does layering give us?

It gives some deduplication, but only a crude form. It gives some reproducibility from building off a well-known base and tag, but not full reproducibility. It gives some security benefit from building off a well-known base, but not as large a benefit as standard package managers provide.

I would be excited to see an image distribution system based on something like casync, maybe with an initial rootfs formed through image-focused distributions like yocto[1]. The embedded device ecosystem has been concerned with reproducibility, image signing, and incremental updates for a while, and I think their approaches are very applicable to container images.

[1]: https://www.yoctoproject.org/


Apparently, not a whole lot for image transfer and portability. But layering still gives you something at runtime if a single organization is using the same base image for all of its own containers. And, in practice, I think layer-level deduplication does still save on transfer costs. I'm not sure whether the author just wasn't considering where industry was heading, but with projects that are rebuilt on every commit, the upper layers change far more often than distro base images do. The base may be patched daily, forcing you to re-download the whole thing once a day, but if you're building 40 times a day, that's still better than downloading everything 40 times a day. It's just a lot worse than we could be doing if we could download diffs instead of the entire layer when a single bit changes.
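
To make the diff point concrete, an illustrative sketch of content-addressed chunking, roughly the idea behind tools like casync (this uses naive fixed-size chunks and made-up data; real tools use content-defined boundaries):

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    // 64 KiB fixed-size chunks, purely for illustration.
    const chunkSize = 64 * 1024

    // chunkIDs hashes each chunk; a matching hash means the chunk
    // already exists locally and doesn't need to be transferred.
    func chunkIDs(blob []byte) []string {
        var ids []string
        for off := 0; off < len(blob); off += chunkSize {
            end := off + chunkSize
            if end > len(blob) {
                end = len(blob)
            }
            sum := sha256.Sum256(blob[off:end])
            ids = append(ids, fmt.Sprintf("%x", sum[:8]))
        }
        return ids
    }

    func main() {
        v1 := make([]byte, 4*chunkSize)
        v2 := append([]byte(nil), v1...)
        v2[3*chunkSize+17] ^= 1 // flip one bit in the last chunk

        a, b := chunkIDs(v1), chunkIDs(v2)
        for i := range a {
            fmt.Printf("chunk %d unchanged: %v\n", i, a[i] == b[i])
        }
    }

With content-defined boundaries (a rolling hash), an insertion near the start wouldn't shift every later chunk, which is the case the fixed-size version above handles badly.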

It would be nice to see what, if anything, ever came of the ending tease. Something like git, but for binary files too, is what's called for. Arguably, ClearCase offered this exact feature 27 years ago, but being proprietary and expensive limited its adoption among modern web tooling.


You can have a "layer" build system using a snapshot-style approach. The fact that Dockerfiles and build scripts are written in a layered manner doesn't mean that our storage format needs to use layered tar archives that duplicate data needlessly.

As for the tease, sorry about that -- there were several discussions in the OCI community about my proposals and other issues we might want to fix, but sadly work has stalled.


I agree, and from the subordinate's point of view, I feel that having a regularly planned session lowers the barrier to raising issues early without making things confrontational.

It's the difference between "we talked about many things, including this issue" and "we need to meet to discuss this issue". I find the second starts everyone off on a defensive foot, regardless of people's best intentions.

