Just found out about this skin market/casino thing, and also that my teenage son purchased a skin for 100€, but he is still pretty excited and happy about it because «its real value is around 700€».
I am still processing this information.
Most people have no sense of security. They say yes to strangers who ask to plug a USB device into their laptop. When I said no on the train to someone asking to plug in their device "for charging", I was definitely the bad guy.
Just find anything plausible: backup storage, or, say, sharing family photos with the grandparents because it doesn't work on my home wifi since my ISP is blocking ports, whatever.
Ah man, this must be rhetoric; you wouldn't lie to a friend close enough to do you such a favour, would you?
Who the hell is after you guys anyway, to warrant that level of degraded internet speed?
And about 'Warp', is it or is it not a VPN after all? They mentioned they aren't a VPN, but that they build on WireGuard??
In the early 2000s the video field was flooded with fast-paced releases of new codecs and new codec versions; there were codec implementations to download right and left, and people were bundling and releasing them under names that sounded like warez groups. It was a little crazy to watch a video at the time.
This was mitigated by VLC and MPlayer, two video players that integrated most codecs as fast as they could, and it was a breath of fresh air. You just started them and any video would play; no codec issues anymore.
MPlayer has not been updated for some time and lost traction, but VLC, although looking a bit old on the UI side (and a little buggy on ARM Windows), is still here and is solid when someone just wants to watch a video on any platform.
The same thing exists on Windows: developers have to code sign their binaries. It's even worse in my experience because you have to use a token (a USB key with cryptographic signing keys in it), and that's impractical if you want your CI/CD to run in a datacenter. At my company we had a Mac mini with a Windows VM and a code signing token plugged in just for the purpose of signing our macOS and Windows binaries.
Another solution not mentioned in the article: users of both macOS and Windows should be able to easily import the certificate of a third-party publisher, through a process integrated in their OS that explains the risks but can also be understood and trusted, so that publishers can self-sign their own binaries at no cost without needing the approval of the OS vendor. Such a tool should ideally be integrated in the OS, but ultimately it could also be provided by a trusted third party.
I struggled with a similar problem recently. You can use osslsigncode to sign Windows binaries from Linux. It is also possible, with some pissing about, to get everything working hands-off.
In the end we went with Digicert Keylocker to handle the signing, using their CLI tool which we can run on Linux. For our product we generate binaries on the fly when requested and then sign them, and it's all done automatically.
> The same thing exists on Windows, developers have to code sign their binaries.
> Another solution that is not mentioned in the article is that users of both macos and windows
The article is actually about notarization on iOS, which is vastly different from notarization on macOS. On iOS, every app, whether in the App Store or outside the App Store, goes through manual Apple review. But apps distributed outside the App Store have fewer rules.
Yes I don't understand what he means.
On Windows you can tone down the security to the point where you basically tell it to shut up and let you do anything you want, including shooting yourself in the foot (which is fine by me).
On macOS you have to resort to various tricks to be able to run stuff you have decided you want to run, for whatever reason.
Azure Key Vault, even in the ‘premium’ HSM flavour, can’t actually prove the HSM exists or is used, which doesn’t satisfy the CA’s requirements. In theory, it shouldn’t work, but some CAs choose to ignore both the letter and the spirit of the rules.
Even Azure’s $2,400-a-month managed HSM isn’t acceptable, as they don’t run it in FIPS mode.
Nope. Notarization is not code signing. It’s an extra step, after code signing, where you upload your software to Apple’s servers and wait for their system to approve it. It’s more onerous than code signing alone and, with hindsight, doesn’t seem to have been offering any extra protection.
It's not the same, but in practice it's also not so different. Microsoft keeps track of how many times a certain executable has been run and only after a certain threshold does the executable become openable without hunting for tiny buttons. The kicker: this also applies for signed binaries.
Microsoft will upload these executables to the cloud by default if you use their antivirus engine ("sample collection").
In a way, Microsoft is building the same "notarisation database", but it's doing so after executables have been released rather than before. Many vendors and developers will likely add their executables to that "database" simply by running them on a test system.
On the other hand, SmartScreen can be disabled pretty easily, whereas macOS doesn't offer a button to disable notarisation.
Microsoft's notarisation sounds fully automated and transparent, while Apple's is more political and hands-on. Individual apps getting their notarisation slowed to a glacial pace because the platform owner doesn't like them doesn't seem to happen in Microsoft land.
Wasn't there even a story some time ago about how some completely legit, legal, above-board app to virtualize old (pre OS X) versions of Mac OS got rejected by Apple's notarization process?
I'm honestly not even sure it's about denying competitors anything. It feels more like denying their users. Apple has a long history of intentionally denying users the ability to do what they want, LONG before any potential App Store competitors appeared.
Notarization is the same for macOS and iOS AFAIK. Both platforms have a separate app store review process that's even more strict than the notarization process.
> Notarization is the same for macOS and iOS AFAIK.
Assuming the basic facts are straight, the linked story explicitly proves this is false:
> UTM says Apple refused to notarize the app because of the violation of rule 4.7, as that is included in Notarization Review Guidelines. However, the App Review Guidelines page disagrees. It does not annotate rule 4.7 as being part of the Notarization Review Guidelines. Indeed, if you select the “Show Notarization Review Guidelines Only” toggle, rule 4.7 is greyed out as not being applicable.
Rule 4.7 is App Review Guidelines for iOS, so this would be a case of failing notarization for iOS App Review Guidelines, which means the policies (and implementation) are different between platforms.
(Of course there's no such thing as "Notarization Review Guidelines" so maybe this whole story is suspect, but rule 4.7 is the App Review Guidelines rule that prohibits emulators.)
The point is that notarization plays the same role for both platforms: checks whose purpose is to make sure that the software won't harm the user's device, unrelated to the App Store review process. Both platforms have an additional App Store review process which is significantly more strict, and the notarization process isn't supposed to involve App Store review for either platform.
When Apple denies notarization for bullshit reasons on one platform, it makes me highly suspicious of their motivation for notarization on all platforms.
Their decision to use the same word for both is enough for me to treat them as the same. Apple has tried to convince people that notarization exists for the user's benefit; the iOS implementation of notarization has convinced me that that's not the case.
The bigger difference is that Apple isn't just checking for malware, it's checking for conformance with various APIs, manifest requirements and so on. Not as strict as the iOS App Store, maybe, but it will refuse to notarize if it detects use of unsanctioned API calls.
You don't even need signing for Microsoft's system to do what it does - it can operate on unsigned code, it's all hash based.
Is there a concrete example of this? We know this isn't blanket policy, because of a recent story (https://news.ycombinator.com/item?id=45376977) that contradicts it. I can't find a reference to any macOS app failing notarization due to API calls.
Notarization doesn't blanket block all access to private APIs; but the notarization process may look for and block certain known accesses in certain cases. This is because notarization is not intended to be an Apple policy enforcement mechanism. It's intended to block malicious software.
So in other words, using private APIs in and of itself isn't an issue. Neither is it an issue if your application is one that serves up adult content, or is an alternate App Store, or anything else that Apple might reject from its own App Store for policy reasons. It's basically doing what you might expect a virus scanner to do.
Yeah, don't disagree with any of that, but I'm looking for explicit evidence that that is true (right now it sounds like it's just an assumption)? E.g., either examples of apps failing notarization due to API calls, or Apple explicitly saying that they analyze API calls. Without that it sounds like we're just guessing?
I have experienced it myself but this was some years ago, may not be current. Think it was things they were trying to deprecate, which are now fully gone - was around the time they introduced Hardened Runtime, 2018-19 ish.
I have the opposite experience: on macOS you can guarantee what users will see when you distribute your notarized app, while on Windows you cannot for an indefinite period.
How often do you notarize your apps? Why does the speed matter at all? In my cases it takes 2 seconds for the notarization to complete.
The length of time notarization takes depends primarily upon how large and complicated your app is, and how different it is from previous versions of the same application you've previously notarized. The system seems to recognize large blocks of code that it has already analyzed and cleared and doesn't need to re-analyze. How much your binary churns between builds can greatly influence how fast your subsequent notarizations are.
A brand new developer account submitting a brand new application for notarization for the first time can expect the process might take a few days; and it's widely believed that first time notarizations require human confirmation because they do definitely take longer if submitted on a weekend or on a holiday. This is true even for extremely small, trivial applications. (Though I can tell you from personal experience that whatever human confirmation they're doing isn't very deep, because I've had first time notarizations on brand new developer accounts get approved even when notarizing a broken binary that doesn't actually launch.)
And of course sometimes their servers just go to shit and notarizations across the board all take significantly longer than normal, and it's not your fault at all. Apple's developer tooling support is kinda garbage.
“Notarize your macOS software to give users more confidence that the Developer ID-signed software you distribute has been checked by Apple for malicious components. _Notarization of macOS software is not App Review._ The Apple notary service is an automated system that scans your software for malicious content, checks for code-signing issues, and returns the results to you quickly.”
⇒ It seems notarization is static analysis, so they don’t need to launch the process.
Also, in some sense a program that doesn’t launch should pass notarization because, even though it may contain malware, that’s harmless because it won’t run.
The important part is that once you have a code signing certificate, you can sign your executable independently, offline, without involvement from Microsoft, which isn’t possible with Apple’s notarization.
It's more akin to an enforced malware scanner, at least in principle: a kind of mandatory VirusTotal with a stapled certificate.
In practice though they use it to turn the screws on various API compliance topics, and I'm not sure how effective it is realistically in terms of preventing malware exploits.
> doesn’t seem to have been offering any extra protection.
How would this be measured?
Since no one has pointed it out here, it seems obvious to me that the purpose of the notarization system is mainly to have the code signatures of software so that Apple can remotely disable any malware from running. (Kind of unsavory to some, but probably important in today's world, e.g., with Apple's reach with non-technical users especially?)
Not sure how anyone external to Apple would measure the effectiveness of the system (i.e., without knowing what has been disabled and why).
There's a lot of unsubstantiated rumors in this comment thread, e.g., that notarization on macOS has been deliberately used to block software that isn't malware on macOS. I haven't seen a concrete example of that though?
Disabling malware via hash or signature doesn't require the notarization step at all. The server can tell clients not to run anything with hash xxyyzz and to delete it. I mean, just think about it: if disabling stuff required a notarization step beforehand, no anti-malware would have existed before notarization. Nonsense.
I think notarization is just a more automated way to do this; otherwise Apple has to hunt down all the permutations of the binary themselves. It seems like it just simplifies the process? (It makes it a whitelist rather than a blacklist, so it's certainly more aggressive.)
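The hash-based "server tells clients what not to run" approach described above can be sketched in a few lines. This is a toy illustration, not Apple's or Microsoft's actual mechanism; the denylist contents and function names are made up (the one real digest shown is the well-known SHA-256 of an empty file, used as a placeholder):

```python
import hashlib

# Hypothetical denylist of known-bad SHA-256 digests, as pushed by a server.
# The entry below is the digest of the empty file, used purely as a placeholder.
BLOCKED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_blocked(path: str) -> bool:
    """Hash the binary in chunks and check it against the denylist."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() in BLOCKED_HASHES
```

The point of the thread stands either way: nothing in this scheme needs the binary to have been seen by the vendor before release; the denylist can be populated after the fact.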
Highly suggest trying Azure Trusted Signing on a CI system with Windows boxes (I use GitHub). Windows signing was an expensive nightmare before, but is now relatively painless and down to $10/mo (which isn't cheap but is cheaper than the alternatives).
Azure Trusted Signing is a crapshoot. If you can get it running, it's easy and fast and great. But if you run into any problems at all during the setup process (and you very well might, since their onboarding process is held together with duct tape and twine), you're basically left for dead, and unless you're on an enterprise support plan you're not going to get any help from them at all.
Last time I checked it's still US/Canada only. Luckily I only needed code-signing for an internal app, so we just used our own PKI and pushed the certs over MDM.
It’s also limited to companies with a proven lifespan of at least 3 years IIRC (you have to provide a DUNS number). They may have reopened it for individuals, but that means your personal name is attached to every binary.
The main issue with this article is that it claims to be about anonymization, but it rejects HMAC because it's not reversible and promotes IPCrypt because it is.
Except that if it's reversible, it's not anonymization, it's pseudonymization.
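To make the distinction concrete, here is roughly what the HMAC approach looks like. This is a sketch under assumed names (the key and the truncation length are arbitrary choices, not from the article): the mapping is deterministic, so you can still count and correlate addresses, but there is no decrypt operation to get the IP back, which is what makes it anonymization rather than pseudonymization.

```python
import hashlib
import hmac
import ipaddress

# Hypothetical secret; rotating it periodically limits long-term linkability.
SECRET_KEY = b"rotate-me-periodically"

def anonymize_ip(ip: str) -> str:
    """One-way keyed hash of an IP address. Deterministic (same IP maps to
    the same token, so aggregate stats still work), but not invertible."""
    packed = ipaddress.ip_address(ip).packed
    return hmac.new(SECRET_KEY, packed, hashlib.sha256).hexdigest()[:16]
```

(One caveat the article could have raised instead: the IPv4 space is small enough that whoever holds the key can brute-force the mapping, so key handling matters here too.)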
Happened in France too. It was put in place in the late 70s and ended in 2020. Called the «numerus clausus» ("closed number" in Latin), it restricted the number of medical students admitted in the country every year.
The number of students fell by more than half between the early 70s and the mid 90s: 8,500 new students/year in 1972, 3,500 in 1993.
Of course, the number of doctors in France is now far from enough for an aging population, in every specialty, and it will take at least a decade to improve. It's not uncommon to have 1-year waitlists for ophthalmology appointments, and several weeks or even months for dermatology.
Not sure if this is valid for France, but there is a paid healthcare track in Germany. No wait time, and the newest treatment methods are used if you bring your own cash. The same doctor has an appointment the next day if you tell them you're paying out of pocket. If you come as a normal publicly insured patient... well... come in a month, or better in a year, please.
Or maybe just read the commits between now and a reasonable date far enough in the past, so that if some hostile code was injected before that point in time, at least you will share the walk of shame with a lot of people and can play the sound of "who could have guessed?"
There's no point in reading the code in the Git repository or its commit history because that's not the code that you're actually executing. You have to read what's in your node_modules, everything else is irrelevant.
It doesn't index all of npm, only packages referenced by a Linux distribution somehow (e.g. a package-lock.json in a tar file used in an Arch Linux PKGBUILD).
Even if the AI bubble does not pop, your prediction about those servers being available on eBay in 10 years will likely be true, because some datacenters will simply upgrade their hardware and resell the old units to third parties.
Sure, datacenters will get rid of the hardware, but only because it's no longer commercially profitable to run it, presumably because compute demands have eclipsed its abilities.
It's kind of like buying a used GeForce 980 Ti in 2025. Would anyone buy and run one except out of nostalgia or curiosity? The power draw alone makes them uneconomical to run.
Much more likely every single H100 that exists today becomes e-waste in a few years. If you have need for H100-level compute you'd be able to buy it in the form of new hardware for way less money and consuming way less power.
For example if you actually wanted 980Ti-level compute in a desktop today you can just buy a RTX5050, which is ~50% faster, consumes half the power, and can be had for $250 brand new. Oh, and is well-supported by modern software stacks.
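The "power draw makes old cards uneconomical" point is easy to put numbers on. This is back-of-envelope arithmetic with assumed figures (roughly 250 W board power for the 980 Ti vs. roughly 130 W for an RTX 5050, 8 hours/day of load, $0.30/kWh; all four numbers are assumptions, not measurements):

```python
# Annual electricity cost of running a GPU at a given sustained board power.
HOURS_PER_YEAR = 8 * 365   # assumed 8 h/day of load
PRICE_PER_KWH = 0.30       # assumed electricity price, $/kWh

def annual_cost(watts: float) -> float:
    """Convert sustained wattage to yearly electricity cost in dollars."""
    return watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

old_card, new_card = annual_cost(250), annual_cost(130)
print(f"980 Ti: ${old_card:.0f}/yr, RTX 5050: ${new_card:.0f}/yr, "
      f"saving ${old_card - new_card:.0f}/yr")
```

Under those assumptions the newer card saves on the order of $100/year in electricity, so the $250 purchase price pays for itself in a couple of years even before counting the extra performance.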
Off topic, but I bought my (still in active use) 980ti literally 9 years ago for that price. I know, I know, inflation and stuff, but I really expected more than 50% bang for my buck after 9 whole years…
> Sure, datacenters will get rid of the hardware - but only because it's no longer commercially profitable to run them, presumably because compute demands have eclipsed their abilities.
I think the existence of a pretty large secondary market for enterprise servers and such kind of shows that this won't be the case.
Sure, if you're AWS and what you're selling _is_ raw compute, then couple generation old hardware may not be sufficiently profitable for you anymore... but there are a lot of other places that hardware could be applied to with different requirements or higher margins where it may still be.
Even if they're only running models a generation or two out of date, there are a lot of use cases today, with today's models, that will continue to work fine going forward.
And that's assuming it doesn't get replaced for some other reason that only applies when you're trying to sell compute at scale. A small uptick in the failure rate may make a big dent at OpenAI but not for a company that's only running 8 cards in a rack somewhere and has a few spares on hand. A small increase in energy efficiency might offset the capital outlay to upgrade at OpenAI, but not for the company that's only running 8 cards.
I think there's still plenty of room in the market in places where running inference "at cost" would be profitable that are largely untapped right now because we haven't had a bunch of this hardware hit the market at a lower cost yet.
I have around a thousand broadwell cores in 4 socket systems that I got for ~nothing from these sorts of sources... pretty useful. (I mean, I guess literally nothing since I extracted the storage backplanes and sold them for more than the systems cost me). I try to run tasks in low power costs hours on zen3/4 unless it's gonna take weeks just running on those, and if it will I crank up the rest of the cores.
And 40 P40 GPUs that cost very little, which are a bit slow but with 24gb per gpu they're pretty useful for memory bandwidth bound tasks (and not horribly noncompetitive in terms of watts per TB/s).
Given highly variable time of day power it's also pretty useful to just get 2x the computing power (at low cost) and just run it during the low power cost periods.
It's interesting to think about scenarios where that hardware would get used only part of the time, like say when the sun is shining and/or when dwelling heat is needed. The biggest sticking point would seem to be all of the capex for connecting them to do something useful. It's a shame that PLX switch chips are so expensive.
The 5050 doesn't support 32-bit PhysX, so a bunch of games would be missing a ton of stuff. You'd still need the 980 running alongside it for older PhysX games, because Nvidia.
Someone's take on AI was that we're collectively investing billions in data centers that will be utterly worthless in 10 years.
Unlike the investments in railways or telephone cables or roads or any other sort of infrastructure, this investment has a very short lifespan.
Their point was that whatever your take on AI, the present investment in data centres is a ridiculous waste and will always end up as a huge net loss compared to most other investments our societies could spend it on.
Maybe we'll invent AGI and they'll be proven wrong as the data centres pay for themselves many times over, but I suspect they'll ultimately be proved right and it'll all end up as landfill.
The servers may well be worthless (or at least worth a lot less), but that's been pretty much true for a long time. Not many people want to run 10-year-old servers (although I pay $30/month for a dedicated server that's a dual Xeon L5640 or something like that, about 15 years old).
The servers will be replaced, the networking equipment will be replaced. The building will still be useful, the fiber that was pulled to internet exchanges/etc will still be useful, the wiring to the electric utility will still be useful (although I've certainly heard stories of datacenters where much of the floor space is unusable, because power density of racks has increased and the power distribution is maxed out)
I have a server in my office from 2009 that's still far more economical to run than buying any sort of cloud compute. By at least an order of magnitude.
72 gigs of RAM, 4x 15K SCSI drives, I think. Yeah, I mean it's not doing anything crazy: running a lot of virtual machines, random servers; probably the most intense thing is video transcoding. It works well though, and like I said, way way cheaper than running the same stuff on cloud infrastructure. I think I bought it for about $500 some 10 years ago. I started saving about $76 a month just from moving virtual desktops off AWS when I got it, so it easily paid for itself in a year.
If it is all a waste and a bubble, I wonder what the long term impact will be of the infrastructure upgrades around these dcs. A lot of new HV wires and substations are being built out. Cities are expanding around clusters of dcs. Are they setting themselves up for a new rust belt?
There are a lot of examples of former industrial sites (rust belts) that are now redeveloped into data center sites because the infra is already partly there and the environment might be beneficial, politically, environmentally/geographically. For example many old industrial sites relied on water for cooling and transportation. This water can now be used to cool data centers. I think you are onto something though, if you depart from the history of these places and extrapolate into the future.
Sure, but what about the collective investment in smartphones, digital cameras, laptops, even cars? Not much modern technology is useful and practical after 10 years, let alone 20. AI is probably moving a little faster than normal, but technology depreciation is not limited to AI.
They probably are right, but a counter argument could be how people thought going to the moon was pointless and insanely expensive, but the technology to put stuff in space and have GPS and comms satellites probably paid that back 100x
Reality is that we don’t know how much of a trope this statement is.
I think we would get all this technology without going to the moon or Space Shuttle program. GPS, for example, was developed for military applications initially.
I don’t mean to invalidate your point (about genuine value arising from innovations originating from the Apollo program), but GPS and comms satellites (and heck, the Internet) are all products of nuclear weapons programs rather than civilian space exploration programs (ditto the Space Shuttle, and I could go on…).
Yes, and no. The people working on GPS paid very close attention to the papers from JPL researchers describing their timing and ranging techniques for both Apollo and deep-space probes. There was more cross-pollination than meets the eye.
It's not that going to the Moon was pointless, but stopping after we'd done little more than plant a flag was. Wernher von Braun was the chief architect of the Apollo program, and the Moon was intended as little more than a stepping stone toward a permanent colony on Mars. Incidentally this is also the technical and ideological foundation of what would become the Space Shuttle and ISS, which were both also supposed to be little more than small-scale tools on this mission, as opposed to ends in and of themselves.
Imagine if Columbus verified that the New World existed, planted a flag, came back - and then everything was cancelled. Or similarly for literally any colonization effort ever. That was the one downside of the space race - what we did was completely nonsensical, and made sense only because of the context of it being a 'race' and politicians having no greater vision than beyond the tip of their nose.
For All Mankind. I tried getting into that, but the identity politics stuff (at least in the first season) was way too intense for me. I'm not averse to it at all in principle (Deep Space Nine is one of my favorite series of all time) but, for me, it went way over the line from advocacy to preachiness.
This isn’t my original take but if it results in more power buildout, especially restarting nuclear in the US, that’s an investment that would have staying power.
The problem is that the nomenclature and conventions differed, and this many years later people tend to conflate them.
BBS networks like ILink had tearlines, optional taglines, and mandatory origin lines. FidoNet had tearlines and origin lines because it shared roots and sometimes nodes with the BBS networks; so they were there for compatibility. Usenet mainly had signatures, with all of its equivalents to the other stuff in headers.
In echomail they were known as "Origin" because the intended purpose of that line was to identify the originating node. It looked like this:
* Origin: any random text (12:34/56.78)
The text was supposed to be the name or description of the node, but this wasn't mandated by the rules, and the address at the end unambiguously identified the node anyway for anyone who cared, so people quickly repurposed it for taglines.
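Since the trailing address was the only machine-readable part, extracting it is straightforward. A small sketch of a parser for the format shown above (the function and pattern names are my own; the `zone:net/node.point` address shape follows the example in the comment):

```python
import re

# A FidoNet origin line: " * Origin: <free text> (zone:net/node.point)".
# The parenthesized address at the end is the part the network relied on;
# the free text before it is what got repurposed for taglines.
ORIGIN_RE = re.compile(
    r"^\s*\*\s*Origin:\s*"
    r"(?P<tagline>.*?)\s*"
    r"\((?P<address>\d+:\d+/\d+(?:\.\d+)?)\)\s*$"
)

def parse_origin(line: str):
    """Return (tagline, node address) for an origin line, or None."""
    m = ORIGIN_RE.match(line)
    return (m.group("tagline"), m.group("address")) if m else None
```

The lazy `.*?` on the tagline matters: it lets the final parenthesized group claim the node address even when the free text itself contains parentheses earlier in the line.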