Nothing? The ruling is that app developers get to choose how they communicate to users, or how they charge in-app fees. The kind of shady developers you describe would simply continue to use Apple, as it benefits them to do so.
- Most legit services move to web-based Apple Pay (note to the unaware reader: this is NOT In-App Purchases and has never had 30% fees) due to the ease of implementation and lower fees (easier to do cross-platform + web)
- Non-legit developers keep the In-App flow
Over time this would skew In-App Purchases to be scammy-only (and therefore, easier to spot). I'm sure people at Apple consider this possibility too – and therefore, now that there's actual competition, IAP flows will probably have to change to prevent this and compete for actual developer preference (and keep it a viable legit-developer choice)
So they should probably just scrap the 30% fee. At the very least, scrap it when the user was linked directly to the app, and just take the (still huge) commission on the payments.
It was in step #1. Most Knative tutorials we found have you set up Istio, which was the component setting the headers. There was separate work to rip out Istio (which didn't scale well either) that we didn't include in the post.
So Istio used to sit between our proxy and Knative's proxy. In order to figure out what headers it was setting, we ran a Caddy container as a sidecar to the activator and had it output the request metadata. We then read the code to confirm.
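For readers who want to try the same trick: the commenter used a Caddy container, but any tiny server that echoes request metadata works. Here's a minimal stdlib-Python stand-in (the class name, port, and output format are illustrative, not what the authors actually ran):

```python
# A debug sidecar: logs every header on incoming requests, so you can see
# what an upstream proxy (Istio, in the post) injects before forwarding.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HeaderDumper(BaseHTTPRequestHandler):
    def do_GET(self):
        # Print each header the proxy attached to the request.
        for name, value in self.headers.items():
            print(f"{name}: {value}")
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

# To run standalone on port 8080:
#   HTTPServer(("", 8080), HeaderDumper).serve_forever()
```

Point your proxy's upstream at it, make one request, and the injected headers show up in the container logs.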
Hi there, my name is on the post!
The CPU graph showed a drop of 500 (CPU-seconds) in the summed CPU usage of the kube-proxy pods alone. I think this is okay to disclose (we took out most y-axes during final edits; this one seems to have been collateral).
You're right about the flattening - since this work, we've extracted just the pieces of Knative we really needed. Right again about embedding some of those pieces in our L7 proxy.
We didn't upstream the changes because we feel our use case is atypical - all KServices (Knative Services) only ever had one pod. This constraint enabled most of the simplifications we were able to make.
On the last point - so do we!
That's assuming a European, GDPR-compliant alternative to Google Analytics wouldn't arise. But of course it will. It's not even a very difficult product to build. If anything, this is both sticking it to Google and creating opportunities for European startups to fill the void.
I'd already be on either fly.io or Render, but my company has a hard requirement for SOC2 and ISO27001 certification. Heroku has it, Render is working on it, but not there yet.
This is such a weird comment. Nothing about the OC suggests they were talking about front-end dev; not to mention that writing for hours before actually running the code is a pretty terrible way to code. TypeScript is also not a workaround for volume binding, which can accommodate any stack.
I read that as "re-run or refresh, as applicable", i.e. re-run a Go app or refresh a Node.js webapp. Besides, it absolutely can be a big issue with non-frontend development, so the point still stands.
Yes, that is why I specifically mentioned "re-run." About 75% of my work is actually API and backend data-engineering work (Python, Go, Node, in that order). I do "save and re-run" less in that type of work, but definitely not so infrequently that frequent docker builds stop being a hassle.
As another commenter mentioned, "refresh" tipped them off as a front end dev.
As a backend dev working in statically typed languages, I will sometimes code for hours without running, and I wouldn't say anything about it is terrible. I haven't worked with TypeScript, but it wouldn't surprise me if it enabled a similar development process.
I wouldn't necessarily recommend infrequent running as a technique worth emulating, but in certain situations it works pretty well.
The majority of my work is actually in Python and Go on the backend. I do save/re-run less than when I'm doing front-end work, but docker builds are still a hassle. Flask will automatically reload on file changes, taking advantage of a docker volume. With Go, I'm using go run during development anyway, right up until deploying with go build.
Maybe I'm doing things wrong, but docker volumes are essential in how I like to dev.
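The reload-on-volume pattern described above boils down to stat-polling source files, which is essentially what Flask's debug reloader does under the hood. A toy sketch of the mechanism (function names are mine, not Flask's actual implementation):

```python
import os
import subprocess
import sys
import time

def needs_restart(path, last_mtime):
    """Stat-poll `path`; return (changed?, current mtime)."""
    mtime = os.stat(path).st_mtime
    return mtime != last_mtime, mtime

def run_with_reload(script, interval=0.5):
    """Keep `script` running, restarting it whenever the file changes.
    With the source directory bind-mounted as a docker volume, an edit on
    the host restarts the process inside the container -- no image rebuild."""
    proc, last = None, None
    while True:
        changed, seen = needs_restart(script, last)
        if changed:
            if proc is not None:
                proc.terminate()
            proc = subprocess.Popen([sys.executable, script])
            last = seen
        time.sleep(interval)
```

This is why the volume mount matters: the rebuild-free loop only works if the files the watcher polls are the same files you're editing on the host.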
Even there it is beneficial to run sooner. E.g., for servers: as soon as the listener is up and dispatching, run it and see that the handler runs. Oftentimes the only time my error-handling code runs at all is during this initial writing phase (for certain annoying-to-test-for types of error). Fast iterations around a known-good baseline.
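Concretely, the point is to poke both the happy path and the error path the moment a handler exists, rather than hours later. A hypothetical handler (names and payload shape are made up for illustration):

```python
import json

def handle(raw: bytes) -> dict:
    """A tiny request handler with exactly the kind of error path that,
    as noted above, often only executes while the code is first written."""
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError:
        return {"status": 400, "error": "bad json"}
    return {"status": 200, "echo": payload}

# Exercise both paths immediately -- a known-good baseline to iterate on.
assert handle(b'{"a": 1}') == {"status": 200, "echo": {"a": 1}}
assert handle(b"not json")["status"] == 400
```

Two asserts like these take seconds to write and mean the error branch has run at least once before it ever sees production traffic.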