Hacker News | universenz's comments

Surely you used several USB Ethernet adapters to rule them out as the source as well, right? Those types of dongles are well known for calling home.


Good observation :) Multiple ethernet adapters: Apple original (ancient USB2 10/100), Tier 1 PC OEM, plus a few random ones. Some USB adapters emit more RF than others.


And you're sure it wasn't some built-in Apple service? I believe they host a ton on GCP.


It excluded the hostnames for services and CDNs (some of which resolved to GCP, Akamai, etc.) that Apple publishes for sysadmins of enterprise networks: https://news.ycombinator.com/item?id=46994394. It's indeed possible that one of the unknown destination IPs was an undocumented Apple service, but some (e.g. OVH) seem unlikely.


tHaTs BeCaUsE wE dOn’T SeLL wIdE ScReeN DiSpLaYs YeT! -Apple Genius


No no no.. go one better for the Mac. It should be whichever devices are next to fall out of Apple's 7-year support window. That way you're actually catering to the lowest common denominator.


Is there a single Apple SoC where they've provided removable RAM? Not that I can recall.


Is there even an existing replaceable memory standard that would meet the current needs of Apple's "Unified Memory" architecture? I'm not an expert but I'd suspect probably not. The bus probably looks a lot more like VRAM on GPUs, and I've never seen a GPU with replaceable RAM.


CAMM2 could kinda work, but each module is only 128-bit, so I think the furthest you could push it is a 512-bit M Max equivalent with CAMM2 modules north, east, west and south of the SoC. There just isn't room to put eight modules right next to the SoC for a 1024-bit bus like the M Ultra.
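As a back-of-the-envelope check on the module counts above (assuming, as stated, 128 bits of width per CAMM2 module — the function name here is just illustrative):

```python
CAMM2_BITS = 128  # per-module width quoted above

def modules_needed(bus_bits: int) -> int:
    """How many 128-bit CAMM2 modules a given total bus width would require."""
    return bus_bits // CAMM2_BITS

print(modules_needed(512))   # M Max-class 512-bit bus -> 4 (one per side of the SoC)
print(modules_needed(1024))  # M Ultra-class 1024-bit bus -> 8
```

Four modules is already one per edge of the package; eight is where the board-area argument above kicks in.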


Framework said that when they built a Strix Halo machine, AMD assigned an engineer to work with them on seeing whether CAMM2 memory could work with it, and after a bunch of back and forth they concluded that CAMM2 still made the traces too long to maintain proper signal integrity on the 256-bit interface.

These machines have a 512-bit interface, so presumably it's even worse.


Current (individual, not counting dual socketed) AMD Epyc CPUs have 576 GB/s over a 768 bit bus using socketed DIMMs.


My understanding is that that works out due to the lower clock speeds of those RAM modules though, right?

It's getting that bandwidth by going very wide across very many channels, rather than trying to push a gigantic amount of bandwidth through only a few channels.


Yeah, "channels" are just a roundabout way of saying "wider bus", and you can't get much past 128 GB/s of memory bandwidth without leaning heavily into a very wide bus (i.e. more than the "standard" 128 bit we're used to on consumer x86), regardless of who's making the chip. Looking at it from the bus width perspective:

- The AI Max+ 395 is a 256 bit bus ("4 channels") of 8000 MHz instead of 128 bits ("2 channels") of 16000 MHz because you can't practically get past 9000 MHz in a consumer device, even if you solder the RAM, at the moment. Max capacity 128 GB.

- 5th Gen Epyc is a 768 bit bus ("12 channels") of 6000 MHz because that lets you use a standard socketed setup. Max capacity 6 TB.

- M3 Ultra is a 1024 bit bus ("16 channels") of "~6266 MHz" as it's 2x the M3 Max (which is 512 bits wide) and we know the final bandwidth is ~800 GB/s. Max capacity 512 GB.

Note: "Channels" is in quotes because the number of bits per channel isn't actually the same per platform (and DDR5 is actually 2x32 bit channels per DIMM instead of 1x64 per DIMM like older DDR... this kind of shit is why just looking at the actual bit width is easier :p).

So really the frequencies aren't that different even though these are completely different products across completely different segments. The overwhelming factor is bus width (channels) and the rest is more or less design choice noise from the perspective of raw performance.
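The arithmetic behind the list above is easy to sanity-check: peak theoretical bandwidth is just bus width in bytes times the transfer rate (the "MHz" figures quoted for DDR are really transfer rates, MT/s). A quick sketch using the numbers from this thread:

```python
def peak_bandwidth_gbs(bus_bits: int, mt_per_s: int) -> float:
    """Peak theoretical bandwidth in GB/s: (bytes per transfer) x (transfers per second)."""
    return bus_bits / 8 * mt_per_s * 1e6 / 1e9

# Figures quoted in the thread:
platforms = {
    "AI Max+ 395 (256-bit @ 8000 MT/s)": (256, 8000),
    "5th Gen Epyc (768-bit @ 6000 MT/s)": (768, 6000),
    "M3 Ultra (1024-bit @ ~6266 MT/s)": (1024, 6266),
}
for name, (bits, rate) in platforms.items():
    print(f"{name}: ~{peak_bandwidth_gbs(bits, rate):.0f} GB/s")
# -> ~256 GB/s, ~576 GB/s, ~802 GB/s respectively
```

The Epyc result lands exactly on the 576 GB/s figure quoted upthread, and the M3 Ultra result on the ~800 GB/s figure, which is consistent with bus width dominating and per-pin speed being a secondary design choice.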


Yeah, but AMD's memory controllers are really finicky. That might have been more of a Strix Halo issue than a CAMM2 issue.


Entirely possible. Obviously Apple wouldn't have been interested in letting you upgrade the RAM even if it was doable.

I'd love to have more points of comparison available, but Strix Halo is the most analogous chip to an M-series chip on the market right now from a memory point of view, so it's hard to really know anything.

I very much hope CAMM2 or something else can be made to work with a Strix-like setup in the future, but I have my doubts.


I thought so too when they launched the M1, but I soon got corrected.

The memory bus is the same as for modules, it's just very short. The higher end SoCs have more memory bandwidth because the bus is wider (i.e. more modules in parallel).

You could blame DDR5 (who thought a speed negotiation that can take over a minute at boot was a good idea?), but I blame the obsession with thinness and the ability to overcharge your customers.

> I've never seen a GPU with replaceable RAM

I still have one :) It's an ISA Trident TVGA 8900 that I personally upgraded from 512k VRAM to one full megabyte!!!


It's really unfortunate that GPUs aren't fully customizable daughterboards, isn't it?


96 GB on the baseline M3 Ultra with a max of 512 GB! Looks like they're leaning hard into the AI crowd.


A fellow HN Kiwi. Nice work on this Mike. Keen to see how it develops!


Thanks! Yeah I always like seeing other Kiwis' work on here, so had to share.

(and y'know, for SEO purposes etc. cough cough)


Right? Dude, please provide some additional context around your web browser stack, because something you're using is triggering their system. That or your neighbours are using your wifi.


Why do you assume LinkedIn is correct? I've seen multiple social networks pull the scam where they claim you've violated their ToS only to demand your phone number and other personal information for "verification", and then after they've slurped your data, they don't seem to care about the so-called "violations" anymore.


Twitter pulled that stunt on me: within minutes after creating an account, having done precisely nothing with it yet, they locked me out with some vague complaint about security and suspicious behavior, demanding a phone number. I refused to comply, and several weeks of daily complaints to customer support eventually got the account unlocked, with no explanation; they are clearly just harvesting numbers for the sake of it.


I have Private Relay enabled via iCloud on iOS. I have no extensions installed. "Preload Top Hit" is enabled in Safari by default (not sure if this could be a factor). I only use Safari across my devices. 2FA is enabled for LinkedIn, with a saved password. That is it. I access LinkedIn frequently throughout the day.


> I have private relay enabled via iOS iCloud

I can see that might be it, because it means LinkedIn would see you logging in from different IP addresses in different geolocations every time (though Apple doesn't let you virtually change country, I understand in the US it can make it look like you've moved state).


I also use iCloud Private Relay and LinkedIn seems happy in my case. There has to be more to it and I wish they’d make these kinds of guards more transparent. You can’t just ban people because some crappy algo thinks you’re a bot.


Thank you. I recently left my full-time job to start a consulting business. I needed LinkedIn for crucial networking during my early stages. Now I am completely banned without knowing why.


Good point. I checked my Private Relay settings in iCloud on iOS. It is set to "maintain general location", along with the description: "Maintain your general location to receive localized content, or enhance your privacy by using a broader IP address based on your country and time zone. Safari Private Browsing always uses an IP location from your country and time zone."


Also it appears the new v2 licensing approach does not support Family Purchases, which is a step backward in my view.


While he may not have penned it himself, he is certainly funding 50% of the rather generous severance, and that should count for something by comparison.


If you are thinking this is him and not the corporate strategy team, then you are mistaken.


The corporate strategy team leads the implementation, but the final go/no-go is on the CEO for sure. Or at least, that's what I would expect.


The CEO is told by the CFO that they need to lay off. The CEO agrees, then the corporate strategy team devises a strategy with the PR team. After the strategy is devised, the severance and headcount numbers are sent to the CFO for approval. The CFO should be the final go/no-go. Of course, the CEO can come in and change his mind, but that wouldn't be wise, since the CFO has the best understanding of the economic situation and the company's financial health.


Correction: They /attempted/ to add an /optional/ feature for your /children's/ account(s).

Do you have kids? Daughters? Would you like better control over what they're exposed to? Being given tools to track screen time and purchases, and to control, where possible, exposure to content objectionable for immature minds isn't a bad thing.. provided it is opt-in. Which the feature always was, before the news cycle chose its own narrative.

