I think it's relevant for Hacker News, as Nintendo uses copyright law to drastically restrict how their games can be played in tournaments; for now, it seems to prevent any small tournament from generating any money, even from selling food (!!) during the event, without becoming a fully fledged organisation and acquiring a license from them… with a lot of additional, specific restrictions (no mods allowed, for example, which is not anecdotal for some versions of Smash).
Nintendo wasn't helping the e-sports scene before, but with these moves, I think they are now actively trying to kill it in its current form.
At first, I thought it was a good exercise (and it still is), but going through the result [0] made me more skeptical.
It is... slow? I mean, Internet Explorer slow. Maybe I'm spoiled by the level of responsiveness of application-style web interfaces, but opening an album or returning to the library feels slow. Is it because I'm browsing from Western Europe and the application is hosted in the USA?
I'm used to browsing multi-page apps that don't pretend to be apps, and a 500ms load time after a click is expected and feels right there. But waiting the same amount of time after a click in a page that looks like an app makes me uncomfortable. It's weird - is this the Uncanny Valley again?
Looks like enhance-styles.css doesn't get cached properly and gets requested on every route. The browser then waits ~500ms for a response from the server, likely due to increased traffic.
An issue which could have been avoided by using a SPA :D
the main page loads fast but the interactions are slow, like there's some artificial delay
the 500ms estimate above seems about right... it should be much faster. Navigating from one static page to another should be sub-100ms assuming the server is on the same continent
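The caching hypothesis above is easy to reason about. Here's a rough sketch (not the actual site's configuration, and a deliberate simplification of real browser behaviour) of the freshness check a browser applies to a cached asset - a stylesheet served with a long `max-age` never costs a round-trip on navigation, while one without it does:

```python
def may_reuse_without_revalidation(cache_control: str) -> bool:
    """Simplified browser freshness check for a cached asset.

    Returns True when Cache-Control allows serving the cached copy
    without contacting the server. Ignores Expires, ETag revalidation,
    and heuristic freshness on purpose.
    """
    directives = {}
    for part in cache_control.split(","):
        name, _, value = part.strip().lower().partition("=")
        directives[name] = value

    if "no-store" in directives or "no-cache" in directives:
        return False
    max_age = int(directives.get("max-age", 0) or 0)
    return max_age > 0  # fresh for max-age seconds; no request needed

# A fingerprinted stylesheet can be cached "forever":
assert may_reuse_without_revalidation("public, max-age=31536000, immutable")
# Without a positive max-age, every navigation goes back to the server:
assert not may_reuse_without_revalidation("no-cache")
```

With the second kind of header, each route change pays that ~500ms server round-trip for the CSS, which matches the behaviour described above.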
> Was the Google self-driving car thing vapourware? It never went anywhere.
Well, it did go somewhere: people are actually being driven around Phoenix, Arizona [0] in those cars. It's just that it is not named "Google self-driving cars" any more: it was spun off into its own company, Waymo [1].
And the EU is actively working on creating a new ePrivacy Regulation [0] that would explicitly tackle the explosion of cookie banners.
> Simpler rules on cookies: the cookie provision, which has resulted in an overload of consent requests for internet users, will be streamlined. The new rule will be more user-friendly as browser settings will provide an easy way to accept or refuse tracking cookies and other identifiers. The proposal also clarifies that no consent is needed for non-privacy intrusive cookies that improve internet experience, such as cookies to remember shopping-cart history or to count the number of website visitors.
…in which Cloudflare's CEO directly addresses the first point (a legacy clause in their Terms that used to prohibit benchmarking, which they removed quickly afterwards), and in which one of the main engineers behind Cloudflare Workers points out that Fastly had been making "apples vs. oranges" comparisons for a while before that blogpost.
I really recommend reading those previous discussions; you'll learn a lot about the context of this blogpost and what Cloudflare thinks of it!
> The biggest European provider, Deutsche Telekom OVHcloud, holds 1-2% of the market.
Wait what? It's OVHcloud, not "Deutsche Telekom OVHcloud". There is no mention of any relationship between the two companies besides a common project named "Gaia-X", which is still in a very, very early stage of planning.
This doesn't inspire confidence in the journalist.
OVHcloud is French. Telekom is a German company and they have Open Telekom Cloud. We actually use it on a project (and I wouldn't recommend it to friends).
Gaia-X is a European initiative and consortium to build a standardized cloud ecosystem. The goals are a bit hand-wavy, and since Google, Amazon, and Microsoft are actually part of the consortium, it's not really clear what the goals are at this point. I haven't heard much about it since it was announced a few years ago.
I've used AWS, GCloud, Hetzner, Telekom Cloud, and a few other things. My view on this space is that it needs disruption. Amazon was disruptive when they launched in 2006. It was great and it enabled a lot of companies to stop worrying about maintaining infrastructure. That was 15 years ago. It's a commodity product at this point.
The problems I see:
- Companies overcharge for their services. AWS especially is very expensive to use. To the point that the monthly cost of some of their more expensive offerings nearly pays for the hardware they give you. Of course it delivers a lot of value. But it creates an opportunity for others to offer cheaper services.
- Most providers try to lock you into their platform with a combination of complexity and convenience. Once you are on a platform, switching it out for another becomes really expensive. Things like OpenStack exist, of course, but are both complex and not that comprehensive.
- The services offered by these companies are very hard to use for what they do. You end up micromanaging lots of bits and pieces, and you need expensive DevOps people to do that for you. There are better ways to deploy software to servers these days that feature higher levels of automation and convenience.
To disrupt the big providers, you'd need to be as good but an order of magnitude cheaper. Instead of focusing on the commodity things that everybody has (buckets, VMs, load balancers, etc.), the real value is in the more advanced stuff they offer. Things like Fargate or Google Cloud Run. Basically Kubernetes without all the moving parts to worry about. Don't offer database servers but scalable, elastic databases.
In short, that's where the money is and the reason Amazon is so ridiculously wealthy. They overcharge for this stuff and they don't get any real competition, so they get away with it. If you think about it, a lot of this stuff is very resource efficient. So it should be cheaper than getting a lot of hardware that you then underutilize. Instead, it's more expensive. AWS pockets the difference.
That's what needs disrupting but it will be a race to the bottom once this starts happening. I think that's overdue though. So this Swedish company might be onto something.
Huh. I saw something blocking me from reading halfway down but it is not there now on my phone anyway. Still don’t trust em, but I guess it’s not paywalled.
Does this mean that those iPhone components will be manufactured in the US, then shipped overseas to be incorporated into new iPhones, which will then be shipped back to the US?
I understand that this makes sense, but at the same time, I wonder how many tons of CO₂ will be emitted because of this kind of "onshoring".
Shipping of tiny electronic components around the globe doesn't have any substantial CO2 emissions, as long as they are shipped in bulk (and preferably by sea), simply because each component is so small/light.
Shipping your bags of clothes when you go on holiday has a far bigger impact. Those are shipped by air (~50x the CO₂), are far larger/heavier, and in a week's time will be shipped back, perhaps unused.
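The ~50x figure is consistent with commonly cited ballpark emission factors per tonne-kilometre for container shipping versus long-haul air freight. The exact numbers below are my own assumptions, not from the comment:

```python
# Assumed ballpark emission factors, grams of CO2 per tonne-km:
sea_g_per_tonne_km = 12    # container ship
air_g_per_tonne_km = 560   # long-haul air freight

ratio = air_g_per_tonne_km / sea_g_per_tonne_km
print(f"air freight emits roughly {ratio:.0f}x the CO2 of sea freight per tonne-km")
```

Any figures in these ranges land in the same order of magnitude, which is why bulk sea shipping of tiny components barely registers.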
The big corp I work for has dedicated logistics dudes who fly around the world with crates full of wafers. While FPGAs from Xilinx with a 30-year lifecycle do fine in a container on the open sea, project-specific parts manufactured for a specific client's specific product with a 2-year lifecycle go by plane. Saving $1000 and risking losing the business is not worth it.
And even then, it is normally planned that finished goods are shipped by sea, and it is only when things fall behind schedule that air shipping is used as a way to get goods into the right place in time for release day.
iPhones are shipped by air to Louisville, KY to distribute throughout the central USA. It shuts down UPS for a whole week when iPhones launch. All employees can be required to unload and move iPhones, including engineering.
It's a good question, and a very good thing to think about. It turns out shipping is actually surprisingly low on the list of contributors to carbon emissions. All global shipping together contributes about 1.7% of global emissions. That's in comparison to, say, industrial energy usage (24.2%) or passenger cars (11.9%) or residential buildings (10.9%) or animals raised for food (5.8%). Should carbon emissions of shipping be a factor in decisions like this? Unquestionably, yes. But at worst it's going to be a small tick in the negative column, and could actually come out ahead depending on other factors like local emissions regulations, etc.
One unrelated correction - you mentioned passenger cars are responsible for 11.9% of global emissions, but your source says this figure is for road transport as a whole. Passenger cars AFAIK account for about a third of that.
Yes, good catch. I was getting that mixed up with the US EPA's stats[1]. FWIW they say passenger cars account for 58% of all transportation emissions within the US, more than all trucking, flights, and shipping combined. Again, within the US.
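Putting the correction into numbers (using the figures as quoted in this thread; the one-third share is the parent's approximation):

```python
road_transport_share = 11.9     # % of global emissions, per the article's source
passenger_car_fraction = 1 / 3  # parent's approximate share of road transport

passenger_cars_global = road_transport_share * passenger_car_fraction
print(f"passenger cars: ~{passenger_cars_global:.1f}% of global emissions")
# prints: passenger cars: ~4.0% of global emissions
```

So even after the correction, cars sit well below industrial energy use (24.2%) and residential buildings (10.9%) on the global list, but still dwarf all shipping (1.7%).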
Apparently shipping is slightly unusual in that it produces a greater-than-average amount of sulfur oxides due to bunker fuel's higher sulfur content, though those are not counted as greenhouse gases in the graph. This is where headlines like "One cargo ship emits as much pollutants as 50 million cars" come from, though the calculation is debatable. [1]
For comparison, gasoline or diesel is something like 10-15 ppm sulfur in the US now, while bunker fuel has historically been more like 3-5% sulfur, though that has been decreasing with regulations. [2]
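To make the ppm-vs-percent comparison concrete (using the figures from the comment above, taking 3.5% as a point inside the historical 3-5% range; 1% = 10,000 ppm):

```python
road_fuel_sulfur_ppm = 15                        # US gasoline/diesel today, upper end
bunker_fuel_sulfur_ppm = 3.5 / 100 * 1_000_000   # 3.5% expressed in ppm

print(bunker_fuel_sulfur_ppm)                          # 35000.0 ppm
print(bunker_fuel_sulfur_ppm / road_fuel_sulfur_ppm)   # ~2333x more sulfur per unit of fuel
```

A sulfur content thousands of times higher per unit of fuel is what makes those "50 million cars" headlines arithmetically possible for SOx, even though the same ship is extremely efficient in CO₂ terms.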
This is a fantastic talk on some of the optimizations that Zig makes easy to implement: https://vimeo.com/649009599
Bun is written in Zig, but it takes the same approach that esbuild took to make things fast: a linear, synchronous pipeline fitting as much as possible in the CPU cache with as little allocation and other overhead as possible. Zig has more knobs to turn than Go.
Bun goes one step further and does a great job of offloading work to the kernel when possible, too (i.e. the large file I/O improvement with the Linux kernel)
JSC is a multi-tier JIT with the last stage ultimately being LLVM, so if you want to be pedantic, Bun relies on LLVM’s optimizer which is written in C++.
The transpiling itself is written in Zig, which is the part that has the performance improvement. If Bun relied on JavaScript and JSC for the heavy lifting, it would be no faster than the other JS bundlers.
I really hate to talk about software based on the language it's written in, and I don't mean to imply one language is better or worse, but the upper bound of performance in Zig is likely easier to reach and likely higher than the upper bound of performance in Go. Though it may depend on the workload. (esbuild being written in Go.)