That was already the case with the M-series chips, which are shared between Macs and higher-end iPads. The Neo just extends it to the A-series as well.
Yep I know, and now using a last gen A chip, I feel they are really rubbing our faces in it.
Like Apple is saying, "Nice iPhone 17 Pro w/ A19 w/ vapor cooling chip you have there; you know you run a full general purpose OS on it, but we're not gonna let you, nanananana :p"
No, exactly. Apple is playing in our faces, all while people continue to defend the “differences” between device categories and the resulting justification for shipping iPhones and iPads with locked bootloaders.
The belief that people only hold opposing opinions to yours because they have money on the line is such conspiracy-theory nonsense. Some random teenager in middle America couldn't just really like Apple products? It's gotta be some grand conspiracy against you?
It's been done: the ZSNES and Project64 emulators have both had exploits that allowed a malicious ROM to run arbitrary code on the host. ZSNES is written mostly in assembly, so that was kinda asking for trouble, though.
Those speeds on the Pro/Max are impressive though, more in line with Gen5 NVMe drives. Those have been available in desktops for some time but AFAIK the controllers are still much too hot and power hungry for laptops, so I think Apple's custom controller is actually the first to practically hit those speeds on mobile.
In the lawsuits Nintendo won against emulator projects in the past, donation systems were one of the main points, if not the main point, they used to win the case.
From a practical perspective, they "won" in their recent attacks on emulation by shutting big projects down, but we can't know what would have happened at trial because they never got that far.
NoA sued the Yuzu devs and settled out of court, with the devs paying $2.4 million and shutting down the Yuzu and Citra projects. The $2.4 million was noted as being a reasonable estimate of what Nintendo's lawyers would have billed if the case went to trial, not a reflection of Yuzu's collection of donations.
NoA used some combination of carrot-and-stick to get the Ryujinx developers to shut that project down as well, but we won't know what that combination was because they never filed a lawsuit, so there are no public records, and there was likely an NDA.
FWIW, while Dolphin doesn't accept donations, the non-profit foundation behind it has been collecting money for almost 15 years via ads and referrals. All of the financials are transparent: https://opencollective.com/dolphin-emu
I suspect you would quickly attract a lot of the wrong kind of “developers” the moment a financial reward appeared. Especially now that it’s so easy to use AI to make something that looks slightly plausible.
Although I suspect the other sibling comment is the real reason.
It's kind of bizarre that Zoom is still bothering to keep the lights on at Keybase when it's been completely fossilized for six years now. The writing is so obviously on the wall that nobody should be relying on it for anything, and yet they just won't let it die.
It's not fossilized, it's just that no one uses it. Put hot chicks on there or make it mandatory for logging into Slack and suddenly everyone will be using keybase.io. Honestly, I think web of trust is a good idea, and if a webapp can make it seem easy or intuitive, then I'm all for it.
We're scratching our heads wondering why there's no forward motion when it's simply that no one is pushing it.
They haven't added or really changed anything since the acquisition AFAICT, it's just trucking along exactly as it was the day Zoom bought them out. Twitter account proofs were broken by the API changes years ago and nobody is at the wheel to fix or even just deprecate them.
This issue (human attestation) is the product of these AI companies. They are poisoning the well, only to sell the cure. This may not have been the initial plan of many of these companies, but it is the eventual end goal of all of them. Much like war profiteering, selling both the problem and the solution simultaneously has yet to be outlawed, but it has long been masterfully capitalized on, and it will continue to be, vigorously, because nobody will stop it.
Years ago (around 2020, when GPT-2 and GPT-3 became publicly available) I noticed, and was incredibly critical of, how prevalent LLM-generated content was on reddit. I was permanently banned for "abusing reports" after reporting AI-generated comments as spam. Before that, I had posted about how I believed the fight against bots was over because the uncanny valley of text generation had been crossed; prior to the public availability of LLMs, most spam/bot comments were either shotgunned scripts easily blockable by the most rudimentary of spam filters, gibberish generated by Markov chains, or simply old scraped comments being reposted. The landscape of bot operation at the time largely relied on gaming human interaction, which required carefully gaming the temporal relevance of text content, the coherence of text content (in relation to comment chains), and the most basic attempt at appearing to be organic.
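For context, here is a minimal sketch of the kind of Markov-chain text generator those older bots relied on (my own illustration, not any particular bot's code). It picks each next word at random from the words that followed the current one in the training text, which is exactly why the output drifts into gibberish after a few words:

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each `order`-word prefix to the list of words that followed it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    """Walk the chain, picking random successors; coherence decays fast."""
    random.seed(seed)
    key = random.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        successors = chain.get(tuple(out[-len(key):]))
        if not successors:  # dead end: this prefix only appeared at the end
            break
        out.append(random.choice(successors))
    return " ".join(out)
```

Because the chain only knows local word-to-word statistics, a spam filter (or a human) can spot the output easily; LLMs removed that tell.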
After LLMs became publicly available, text content that was temporally, contextually, and coherently relevant could be generated instantly for free. This removed practically every non-platform-imposed friction for a bot to be successful on reddit (and to generalize, anywhere that people interact). Now the onus of determining what is and isn't organic interaction is squarely on the platform, which is a difficult problem because now bot operators have had much of their work freed up, and can solely focus on gaming platform heuristics instead of also having to game human perception.
This is where AI companies come in to monetize the disaster they have created; by offering fingerprinting services for content they generate, detection services for content made by themselves and others, and estimations of human authenticity for content of any form. All while they continue to sell their services that contradict these objectives, and after having stolen literally everything that has ever been on the internet to accomplish this.
These people are evil. Not these companies - they are legal constructions that don't think or feel or act. These people are evil.
You just need to pay someone 1 cent every time they scan their eye for you. You will have people sitting at home and giving their eye scans to AIs to use.
It's not clear to me how this is verifiable without constant hardware supervision. Even that'll get cracked, just like DVD encryption back in the day.
You almost need dedicated hardware that can't run any other software except a mechanical keyboard and make it communicate over an analog medium - something terribly expensive and inconvenient for AI farms to duplicate.
One physical robot with four wheels, a camera, and 101 up/down "fingers" to match the keyboard can roll between physical machines and type on mechanical hardware keyboards. This brings the ceiling of how many accounts you can control down to the number of computers you have, but that's not a high price to pay.
I can't be the only one who remembers the celebration 18 months ago when Apple finally stopped selling Macs with 8GB of memory... only for 8GB to suddenly be excused again when the Neo arrived. Perhaps it's not the same people but the general vibe is giving me whiplash.
Because people can’t differentiate between the cheapest MacBook available, then or now, and what they may need? For some reason they think it’s okay to expect Apple to give them stuff for free.
My money is on 12GB in the second gen since that's what the A19 Pro has, and it would still conveniently differentiate from the other MacBooks with at least 16GB.
Further down they also mention that the requests come from CF's ASN and are branded with identifying headers, so third-party filters could easily block them too if they're so inclined. Seems reasonable enough.
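As a rough sketch of what such a third-party filter could look like. The User-Agent token `"ExampleCrawler"` is a placeholder (the real header branding is whatever CF documents), while AS13335 is Cloudflare's well-known ASN:

```python
def should_block(headers: dict, remote_asn: int,
                 blocked_ua_tokens=("ExampleCrawler",),
                 cloudflare_asn=13335) -> bool:
    """Return True if a request matches both signals described above:
    it originates from Cloudflare's ASN *and* carries an identifying
    token in its User-Agent. Requiring both avoids blocking ordinary
    traffic that merely transits Cloudflare."""
    ua = headers.get("User-Agent", "")
    return remote_asn == cloudflare_asn and any(t in ua for t in blocked_ua_tokens)
```

In practice you'd resolve the client IP to an ASN with a GeoIP/ASN database before calling this; the point is just that two independent, documented signals make the filter trivial to write.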