
64GB of 512-bit unified memory is REALLY fast/huge. This will be better than many training GPUs for ML...
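Back-of-the-envelope, assuming LPDDR5 at 6400 MT/s behind that 512-bit bus:

    512 bits / 8 = 64 bytes per transfer
    64 B x 6400 MT/s ~= 409.6 GB/s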

Better than dual socket servers...

I wonder if the Mac Pro will be dual proc...




So now we know that LPDDR5 will come with at least 16GB per die stack, a doubling from LPDDR4. One package = 128 bits, double the I/O width of a regular DIMM.
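A quick sanity check, assuming four packages: 4 x 128 bit = 512 bit, and 4 x 16 GB = 64 GB, which lines up with the 64GB/512-bit config mentioned above.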

I see; it's not too far behind even HBM2, of which we may never see a mobile variant.

I've long been pointing out to people who make laptops that LPDDR4 is much cheaper than DIMMs overall, despite the nominal per-GB cost being higher.

The elimination of manual assembly, termination, and extra through-hole parts, along with LPDDR taking less PCB area, needing fewer layers, and being less demanding of the PCB material, easily compensates for the higher chip cost.


Yeah, but can it run CUDA pipelines?


Considering that CUDA is a proprietary technology from Nvidia, how could it?


There are a bunch of CUDA translation shims being worked on.


Not holding my breath; it's been almost 5 years without a working CUDA shim. Hopefully this will push that work over the finish line, though. If I had the relevant skills I'd contribute...


I think they want everyone to move to Metal Performance Shaders. I've done some work with them, but they're nowhere near as developed as CUDA.
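For anyone who hasn't touched MPS, a minimal matrix-multiply sketch in Swift looks something like this (a toy example with force-unwraps instead of real error handling, not production code):

    import Metal
    import MetalPerformanceShaders

    // Default GPU and a queue to submit work on.
    let device = MTLCreateSystemDefaultDevice()!
    let queue = device.makeCommandQueue()!

    // Describe 4x4 float32 matrices.
    let n = 4
    let rowBytes = n * MemoryLayout<Float>.stride
    let desc = MPSMatrixDescriptor(rows: n, columns: n,
                                   rowBytes: rowBytes, dataType: .float32)

    // Wrap a Float array in an MPSMatrix; shared storage so the CPU
    // can read the result back directly (unified memory).
    func matrix(_ values: [Float]) -> MPSMatrix {
        let buf = device.makeBuffer(bytes: values, length: n * rowBytes,
                                    options: .storageModeShared)!
        return MPSMatrix(buffer: buf, descriptor: desc)
    }

    let a = matrix((0..<n*n).map(Float.init))                    // 0,1,2,...,15
    let b = matrix((0..<n*n).map { $0 % (n + 1) == 0 ? 1 : 0 })  // identity
    let c = MPSMatrix(buffer: device.makeBuffer(length: n * rowBytes,
                                                options: .storageModeShared)!,
                      descriptor: desc)

    // Encodes C = 1.0 * A * B + 0.0 * C.
    let matmul = MPSMatrixMultiplication(device: device,
                                         transposeLeft: false, transposeRight: false,
                                         resultRows: n, resultColumns: n,
                                         interiorColumns: n, alpha: 1.0, beta: 0.0)

    let cmd = queue.makeCommandBuffer()!
    matmul.encode(commandBuffer: cmd, leftMatrix: a, rightMatrix: b, resultMatrix: c)
    cmd.commit()
    cmd.waitUntilCompleted()

    // B is the identity, so this should print A's contents.
    let out = c.data.contents().bindMemory(to: Float.self, capacity: n * n)
    print((0..<n*n).map { out[$0] })

The nice part on these chips is that .storageModeShared buffer: the GPU writes the result into the same unified memory the CPU reads, with no copy back.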


Any idea how the Neural Engine does vs. the GPU?


doubt it



