It's because you are not an Average Human, so averaging everyone's humor doesn't really work for you. I think this is precisely why Instagram and TikTok are more addictive: they give you personalized algos powered by your personal engagement stats, whereas Reddit just sorts by other people's opinions.
I mean, they're basically a college student given the YOE and where this was published; sounds like the kid got mugged by reality. But yeah, definitely jaw-droppingly naive.
It's curmudgeonly of me, but he got mugged by MIT, which operates very much like the consultancy he went to work for: promise everything, give the client what they want, ignore the obvious problems in the proposal, and let the client take the hit if that's what they want so desperately. Just give them the rubber stamp.
So, he took his MIT rubber stamp out into the real world and found out how much "double class load" was NOT like "actual paid work."
It's not a common term. They're saying the exact same executable file will run successfully on 5 different CPU types (architectures).
I'm reminded of the Feynman "why" video. The other answers are technically more accurate than mine, and even mine assumes you understand the concept of CPU types. It's difficult to pitch answers at the right level.
Imagine you translate a book into 5 different languages. Then bind the five translations into a single volume. At the front, you put a brief table of contents listing the page number at which each translation begins. Each reader opens the book, checks the table of contents, then jumps to the page containing the translation in their language. All readers are reading the same book, but each reads the translation in their language. Unless you don’t know any of those five languages, in which case the book is unreadable for you.
Technically more accurate and complete than your answer, but an analogy I expect most non-technical people could understand.
It's a very impressive accomplishment. I support retro Mac PPC/32/64, Windows 32 and 64, Linux on Intel and on Raspberry Pi, and signed Mac Intel and Apple Silicon, for every (audio DSP, generic interface, Mac AU and VST2.4) plugin I make. https://www.airwindows.com/
But I do it with lots of different available downloads, not as a single binary. That's what I find impressive about this. Somebody's running that Xcode mod where you can bring in all the libraries from all the previous versions back to the dawn of time. I'm not even going to pretend to try to keep something like that working: I do my retro builds (and original design) on an antique machine dedicated to the purpose, and the modern stuff on a modern laptop dedicated to staying current.
Mach-O[1], the executable file format used by macOS and iOS (and other NeXTSTEP descendants), supports stuffing code slices for multiple architectures in the same file, aka "fat binaries"[2]. This is a fat binary that supports 5 architectures.
> Unlike done in previous project, netop Tiger SDKs (used to build several intermediate binaries) don’t contain 64-bit AppKit versions, and thus ppc64 and x86_64 binaries are excluded from the binary release.
The binary release is only three-architecture; it does not run on current Intel macOS since it's missing an x86_64 slice. (You get a "this app needs to be updated" dialog if you try.)
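As a rough illustration of how the loader picks the right slice, here's a minimal Python sketch that parses a Mach-O fat header (the "table of contents" from the book analogy above). It handles only the classic 32-bit big-endian `FAT_MAGIC` layout, and the CPU-type constants and synthetic header in the example are assumptions for illustration, not data from the actual app.

```python
import struct

FAT_MAGIC = 0xCAFEBABE  # big-endian magic for the classic 32-bit fat header

# A few cputype constants, as found in <mach/machine.h>
CPU_TYPES = {
    0x00000007: "i386",
    0x01000007: "x86_64",
    0x00000012: "ppc",
    0x01000012: "ppc64",
    0x0100000C: "arm64",
}

def list_fat_archs(data: bytes):
    """Return (arch_name, offset, size) for each slice in a fat binary header."""
    magic, nfat_arch = struct.unpack_from(">II", data, 0)
    if magic != FAT_MAGIC:
        raise ValueError("not a fat binary")
    archs = []
    for i in range(nfat_arch):
        # Each fat_arch entry is 5 big-endian uint32s:
        # cputype, cpusubtype, offset, size, align
        cputype, _, offset, size, _ = struct.unpack_from(">5I", data, 8 + 20 * i)
        archs.append((CPU_TYPES.get(cputype, hex(cputype)), offset, size))
    return archs

# Synthetic two-slice header (i386 + arm64), for illustration only
header = struct.pack(">II", FAT_MAGIC, 2)
header += struct.pack(">5I", 0x00000007, 3, 4096, 1000, 12)
header += struct.pack(">5I", 0x0100000C, 0, 8192, 2000, 14)
print(list_fat_archs(header))
```

At load time the kernel/dyld does essentially this: scan the `fat_arch` entries, pick the slice matching the running CPU, and map only that slice, much like a reader jumping straight to their translation. (This is also roughly what `lipo -info` reports on macOS.)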
I see a lot of these retrospectives from companies that (to a certain degree) failed to live up to their hype. I don't want to invalidate the sort of lessons you can only learn by living through these hyper-scaling phases, but I sometimes wonder if some of the lessons folks learned are part of why things went wrong.
"Almost immediately after installation, Akita provided all the endpoints’ requirements, as well as some examples of expected values, which allowed us to better understand the service that the contractors had built. Once we could understand the data flow (including not only the request body, but also headers and authorization), improving the system became a lot easier. After we had the mapping,"
Interesting... could be cool to see a value-add service on top of Akita or another Obs vendor that just inspects req/res payloads and generates OpenAPI specs based on what was observed. I can't count how many times I've dropped into teams and tried to piece together their API contracts just to realize... they don't have specs! Having to then turn around and reconstruct them backwards from code spelunking and maybe some design docs is... frustrating.
Hi, I'm the creator of a tool called AppMap that will record traces of your code (test cases or live interaction). Once you've made AppMaps, there are different tools you can use on top of them, such as visualization and analysis, and extensions for VSCode and JetBrains.
Hi, Jean here from Akita. Yes, it's possible to generate an OpenAPI spec, either by running Akita in production or from a HAR file. Here's a blog post someone wrote about using Akita to generate an OpenAPI spec: https://apisyouwonthate.com/blog/creating-openapi-from-http-...
I see a lot of value in being able to generate the schema based on real-time traffic, vs. static analysis or test cases. In my mind, the traffic to the API is the real source of truth for the API.
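As a toy illustration of the idea being discussed here (not how Akita works internally), this Python sketch walks the entries of a HAR capture and emits a bare-bones OpenAPI 3 path skeleton. The HAR structure follows the HAR 1.2 format; the sample data is invented for illustration.

```python
from urllib.parse import urlparse

def har_to_openapi_skeleton(har: dict) -> dict:
    """Build a minimal OpenAPI 3 'paths' skeleton from observed HAR entries."""
    paths: dict = {}
    for entry in har["log"]["entries"]:
        req, resp = entry["request"], entry["response"]
        path = urlparse(req["url"]).path
        method = req["method"].lower()
        op = paths.setdefault(path, {}).setdefault(method, {"responses": {}})
        # Record each status code actually observed for this operation
        op["responses"].setdefault(str(resp["status"]), {"description": "observed"})
    return {
        "openapi": "3.0.0",
        "info": {"title": "Recovered API", "version": "0.0.1"},
        "paths": paths,
    }

# Invented sample traffic, for illustration only
har = {"log": {"entries": [
    {"request": {"method": "GET", "url": "https://api.example.com/users/42"},
     "response": {"status": 200}},
    {"request": {"method": "POST", "url": "https://api.example.com/users"},
     "response": {"status": 201}},
]}}
print(har_to_openapi_skeleton(har)["paths"])
```

A real tool would go much further: inferring parameter and body schemas from payloads, merging templated paths like `/users/{id}` across many samples, and folding in headers and auth, which is the hard part.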
I really really love how this post touches on the bullshit that is the carbon credit market. Question: what incentive do BigCos have to buy your "high quality" carbon offsets vs the inferior ones you mentioned? Do you price cheaper per ton of CO2 credited? At the end of the day they're just trying to comply at the cheapest price possible, right?
There is definitely a lot of bullshit out there but when companies decide to pay for climate mitigation, even as a marketing ploy, I think that is net good.
Many companies are certainly looking to buy the cheapest credits they can find but there are promising indications that things are changing, led by companies like Stripe and Shopify.
This is a useful business that only exists because of carbon credits.
I winced a bit at their attacks on low quality carbon credits because the very idea has been under sustained attack by climate change deniers for decades.
"Oh, rich people just paying money for carbon credits and still flying around the world, that's not real, it's all fake, like climate change."
Obviously some are better than others, but the concept itself is useful and worth fighting to improve, not write off.
We are 10x cheaper than the high-quality carbon removal that for example Stripe Climate is purchasing. But we are 10x more expensive than the "low quality" credits that I describe in the post. So overall there is a 100x differential between what is allegedly the same ton of carbon, based on perceived quality and other factors.
I was discussing this the other day with colleagues: you google CockroachDB and are immediately served Yugabyte and PlanetScale, which are IMO much "hotter" right now.