
Yeah, we're also pushing for SSE for this kind of thing.


I hadn't previously used SSE in a project and had no idea it existed or that it was so widely supported across browsers. It's a beautiful and straightforward technology, but it doesn't seem to be as widely used as it should be, and more tooling is required.


100% onboard with better tooling.

Starting with out-of-the-box Swagger support: https://github.com/OAI/OpenAPI-Specification/issues/396

Shame they are not interested in adding it.


The really huge issue with SSE, and the reason why it's historically been dismissed, is that it is subject to (and counts towards) the HTTP/1.1 per-domain connection limit.

Sadly that makes it a bit of a pain in the ass to manage for sites served as (or compatible with) HTTP/1.1.


I know SSE works in scenarios with few or no state updates, like dashboard UIs. What about cases where they deal with input data, like a chat or a game?


That’s the beauty: the client just does a normal HTTP request. It really simplifies the system.
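
For what it's worth, here's a minimal sketch of that split in TypeScript: the server-to-client stream comes in over the browser's built-in EventSource, and input goes out as an ordinary fetch POST. The endpoint paths (/events, /messages) and the payload shape are made up for illustration.

    // Server -> client: subscribe to pushed events (e.g. new chat messages).
    const source = new EventSource("/events"); // hypothetical endpoint

    source.onmessage = (event: MessageEvent) => {
      const message = JSON.parse(event.data);
      console.log("new chat message:", message);
    };

    // Client -> server: sending input is just a normal HTTP request.
    async function sendChatMessage(text: string): Promise<void> {
      await fetch("/messages", { // hypothetical endpoint
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ text }),
      });
    }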


Does SSE work reliably in mobile browsers nowadays? A few years back when I tried, it was working OK in Chrome for Android but mobile browser support didn't seem complete.


Sadly, there are very few mobile browsers that aren't Chrome or Safari. In fact, Safari/WebKit is the only mobile browser engine available on iOS at all -- all other browsers are just chrome (pun intended) around iOS WebKit.

To your point, https://caniuse.com/eventsource

Also there are polyfills for very old browsers like IE.
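
If you do need to cover those very old browsers, the usual pattern is feature detection before falling back to a polyfill of your choice. A rough sketch (the /events endpoint is hypothetical):

    // Prefer the native EventSource; only reach for a polyfill when it's missing.
    function subscribe(url: string): EventSource {
      if (typeof EventSource === "undefined") {
        // Load/assign your chosen polyfill here before constructing; omitted.
        throw new Error("EventSource not supported; load a polyfill first");
      }
      return new EventSource(url);
    }

    const source = subscribe("/events"); // hypothetical endpoint
    source.onmessage = (event) => console.log(event.data);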


Having a standard for what events look like is a decent idea, since people generally just make up some random format without thinking too hard, and a standard is generally helpful. But building your own polling system, etc., when there's already a battle-tested one available seems like wasted effort and a bit counterproductive.


The idea is nice and needed. However, the spec is maybe a bit too elaborate for me to adopt it immediately. I've been rolling my own for some time at some clients, for our Kafka-esque/event-sourcing patterns.

What I used there was a simple HTTP stream of newline-delimited JSON, like this (a small sketch follows the list):

- No opening [ ]; just newline-delimited JSON entries, where each new line is a new entry

- Anything can be used as an id (we've been using Redis streams as a lightweight Kafka stand-in, so just 64-bit integers)

- Each event has a type, and versioning is just done by bumping the type; ugly, but easy

- We're considering using SSE at the moment
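
To make that concrete, here's a small sketch of such a stream; the field names (id, type, data) and the sample payloads are made up for illustration:

    // Each line is one independent JSON document; there is no enclosing [].
    const sampleStream = [
      '{"id": 1024, "type": "order_created_v1", "data": {"orderId": "A-1"}}',
      '{"id": 1025, "type": "order_created_v2", "data": {"orderId": "A-2", "currency": "EUR"}}',
    ].join("\n");

    // Consuming it is just: split on newlines and parse each line on its own.
    for (const line of sampleStream.split("\n")) {
      if (line.trim() === "") continue;
      const event = JSON.parse(line);
      console.log(event.id, event.type);
    }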

Compaction is not something I would do in the protocol; I think I would just expose another version of it on a different URL, or put it in a different spec.


How do you recover when there is e.g. some connection error and a client misses some events? Can the client ask to replay the events? Then, how far back can they go?


Hmm, for me a course really did wonders. It allowed me to better express my own needs and feelings and to tune in to the feelings of others.

What's described above I've seen happen, but mostly with beginners, or with people who use these newfound ideas to push an agenda without actually connecting with the other person, expecting miracles or using NVC as a tool for policing. Or with people who were already manipulative in the first place and now just try to use NVC.

It can also come off as manipulative in an already unhealthy situation, where the relationship consists of so much mistrust that bringing anything new to the table is frowned upon and already met with suspicion.

My personal takeaway from it was to avoid destructive communication and blaming phrases like "You should have", because they don't give the other person tools to work with or actually address the problem at hand. For me, the bottom line of the book was that expressing our emotions and needs allows the other person, if willing, to actually address the problem. It also made me see that sometimes effective communication was blocked because I had to deal with my own things first.

Communication, empathy, and taking the time to actually listen are unfortunately not taught as core skills in my culture.


Pretty awesome project, really! As a side note: a bunch of the ML training and edge-deployment magic is done via https://edgeimpulse.com, which seems to make building such a thing much more accessible.


> The depth maps we currently generate come from our hardware and software prototypes. And yes, we’re already at incredibly high levels. That’s because our “deep-learning algorithms” learn from all small errors and inaccuracies.

I like that they put "deep-learning algorithms" in quotes.


Let's outsource social care /s


It's almost like applying DDD bounded contexts after the fact: breaking up your apps along their boundaries and moving them out at the storage level. I do have some questions though:

1. I wonder what happens at the edges of the boundaries when a table does need data from another domain. And what if that domain/cluster is down?

2. How do they physically connect to the cluster? A separate DB connection?


There is no mention of LXC/jails, which are of course the biggest inspiration for Docker; if I am not mistaken, it also used LXC under the hood. That was already a good-ish product, but hard to configure and not for the mainstream at the time. Docker added AUFS and the downloading of images. I always saw Docker as a properly marketed, nicely ribboned LXC with a mediocre implementation, since at the time they removed a lot of super-useful options that standard LXC had, like CPU limiting.


I think Docker itself had plenty of time to innovate. They could have solved the problem long before the mainstream got hold of Kubernetes. They just didn't solve the actual problem users were having at the time, which was container orchestration across multiple servers. They went very, very deep with a lot of drivers and API changes and whatnot. If I am not mistaken, docker-compose wasn't even part of Docker at first before it got bundled; someone else was solving Docker's same-server orchestration problems.

Kubernetes is pretty difficult for a lot of organisations and way too much to start with. We used Rancher to solve our problems, and it did a fine job; Rancher got worse when they made the move to Kubernetes.

In the beginning they (Docker) also removed a lot of options that LXC had from the start, like CPU throttling and related configuration. At some point they also had their own version (a shim) of PID 1, and other things that made it a little painful to containerise properly. I was often very frustrated by Docker and fell back on LXC.

There was also something with the management of Docker images, like pruning and cleaning and other much-needed functionality that just didn't get included.

Also something with the container registries that I can't remember at this point (maybe deleting images, or authentication out of the box, or something else that made them hard to host yourself).

Anyway, I think it failed because it failed to listen to its users and act on their needs. They really had a lot of chances. I think they just made some wrong business decisions. I always felt they had a strong technical CTO who was really deep on the product but not on the whole pipeline of dev->x->prod workflows.


> We used Rancher to solve our problems, and it did a fine job; Rancher got worse when they made the move to Kubernetes.

(Early Rancher employee)

We liked our Cattle orchestration and the ease of use of 1.x as much as the next person. Hell, I still like a lot of it better.

But just as this article talks about with Swarm, embracing K8s was absolutely the right move. We were the smallest of at least 4 major choices.

Picking the right horse early enough and making K8s easier to use led us to a successful exit and continued relevance (now in SUSE), instead of a slow, painful spiral into irrelevance and an eventual fire sale, like Swarm, Mesos, and others.


Ah, nice to hear from a Rancher employee <3. Yeah, I totally understand the move, no blame there. But Cattle was just amazing; it was easy and elegant!


Darn it, I feel excluded. I am Dutch and haven't opened multiple bank accounts yet. I probably should.


Do it! Also, why not schedule periodic circular transfers across a month of, let's say, €100, just for fun! For extra fun, include words like Iran, Kabul, and North Korea in the description :D


My sister does that with transfer apps like Venmo. I’m a little surprised she and her friends haven't been banned yet.

