
Maybe add Chimera as well?

https://arxiv.org/pdf/2107.06925


It looks as if Chimera has marginally fewer bubbles than DualPipe?


Oh, more nice pictures :)


I heard that it is possible to achieve better performance than cuBLAS using CUTLASS? I thought they would choose the better of cuBLAS and CUTLASS as the baseline.


> FFMA SASS interleaving

> We observe a performance improvement in the CUTLASS FP8 kernel between NVCC 12.2 and 12.3. By comparing the compiled SASS, we discover that one bit in a series of FADD instructions is flipped in an interleaving pattern. After referencing some open-source CUDA assembler implementations, we identified that this bit controls yield, which may enhance warp-level parallelism (just a guess, yielding the current warp and let other warps work).

> To leverage this, we develop a similar script to modify the FFMA instructions in the compiled binary. Besides simply modifying the yield bit, we also flip the reuse bit (registers cannot be reused if the warp is yielded). This adjustment improves performance (10%+ in some cases) for fine-grained scaling FP8 GEMMs by creating more opportunities to overlap MMA instructions with promotion FFMA instructions.

I would say it is really mind-blowing.
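
For the curious: mechanically, this kind of patching is just rewriting bits in the compiled instruction stream. Here's a rough sketch of the idea in host-side C++. Since Volta, each SASS instruction is 128 bits, with scheduling control bits packed into the upper word; every mask and bit position below is an illustrative placeholder, since the real encoding comes from reverse engineering, not documentation:

    #include <cstdint>
    #include <vector>

    struct SassInst { uint64_t lo, hi; };           // one 128-bit Volta+ instruction

    constexpr uint64_t OPCODE_MASK = 0xfff;         // placeholder opcode field
    constexpr uint64_t FFMA_OPCODE = 0x223;         // placeholder FFMA value
    constexpr uint64_t YIELD_BIT   = 1ull << 41;    // placeholder yield position
    constexpr uint64_t REUSE_BITS  = 0xfull << 58;  // placeholder reuse flags

    void interleave_ffma_yield(std::vector<SassInst>& code) {
        bool toggle = false;
        for (auto& inst : code) {
            if ((inst.lo & OPCODE_MASK) != FFMA_OPCODE) continue;  // skip non-FFMA
            if (toggle) {
                inst.hi |= YIELD_BIT;    // hint the scheduler to yield this warp
                inst.hi &= ~REUSE_BITS;  // operand reuse is invalid across a yield
            }
            toggle = !toggle;            // alternate: the "interleaving pattern"
        }
    }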


From what I read elsewhere, this is the typical kind of performance optimization for matrix math that you see when performance is critical. It just hadn't been applied to this specific problem yet, since it wasn't a necessity for the other AI players. But eventually everyone would probably have ended up here regardless.


How many people does it take to implement this? A 10% gain in performance could pay for a lot of people's salaries when your company is spending hundreds of millions on GPU clusters.


If you count how many people looked at this and failed to find the optimization in the community's earlier performance work, you could argue for quite a big number.


Uh, three? I worked at $CORP, where we had a three-person sub-team. They reverse-engineered most of Volta's SASS instruction encoding and built a working SASS assembler (before the open-source one, of course), with the ultimate goal of making GEMM / conv faster. And they did it. Though it wasn't applied to a high-profile enough big picture, so the world never heard about it :>

If you don't believe me: previous open-source SASS assemblers came mostly from universities, and those teams surely didn't have that many people.


Did $CORP also release the implementation to make it trivial for others to replicate their work?


I think we did release some of the optimized kernels, but I don't think we released any with the SASS black magic, at least not before I left. Having already been sanctioned by the BIS, better not to annoy NVIDIA any further.


Actually, a number of them did. Even Google did.


I mean, it's not a significant change, so one? But that isn't to say just anyone could do it.


Just a reminder: this is the third of the many open-source releases DeepSeek is willing to put out, and finding optimizations like this when needed is a fairly low bar for them.

I guess, since the majority here are blown away by the very low-level code involved, that most people are either not ready to use it or have been stuck on very high-level tools that abstract all of this away.


I'll tell you a secret: most devs get something wrong when they start rolling their own linear algebra library. That's why people use LAPACK, BLAS, etc.
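
A concrete example of the kind of mistake meant here: BLAS (and cuBLAS) is column-major, so computing a row-major C = A*B means asking for the swapped product. A sketch against the standard cublasSgemm signature (handle creation and error checking omitted):

    #include <cublas_v2.h>

    // Row-major C (m x n) = A (m x k) * B (k x n).
    // Trick: a row-major matrix reinterpreted as column-major is its transpose,
    // so we ask cuBLAS for the column-major product B * A with swapped dims.
    void gemm_row_major(cublasHandle_t h, int m, int n, int k,
                        const float* A, const float* B, float* C) {
        const float alpha = 1.0f, beta = 0.0f;
        cublasSgemm(h, CUBLAS_OP_N, CUBLAS_OP_N,
                    n, m, k,
                    &alpha,
                    B, n,    // ldb: row length of B
                    A, k,    // lda: row length of A
                    &beta,
                    C, n);   // ldc: row length of C
    }

Botch the operand swap or the leading dimensions and you silently get transposed or garbage results, which is exactly why wrapping a battle-tested BLAS beats re-deriving the math yourself.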


The thing is, most people don't use LAPACK or BLAS directly. Most people work at higher levels of abstraction than even torch.matmul.


Just a few highly skilled people.


I think most AI players rely on high-performance GEMM. But most of them would be satisfied with CUTLASS or cuBLAS, and the rest implement GEMM themselves, though not necessarily using undocumented features?


Using undocumented features is not rare. People reverse-engineered Apple's undocumented AMX instructions on its CPUs, and I know of people using undocumented/private extensions on several different kinds of GPUs.


I've only seen it done by hedge funds so far. What were you referring to?


Scott Gray figured out this exact thing, and more, back in 2015 for Maxwell, and it's been written about many times since by other people.


It is not literally mind-blowing.


I think he might have meant it hyperbolically/figuratively.


Literally literally means not literally.

I love it when words turn into their opposites!


I edited it.


orthogonally


The PTX instructions they talked about in the tech report should be the ones in the code here?


"For extreme performance, we discover and use a behavior-out-of-doc PTX instruction: ld.global.nc.L1::no_allocate.L2::256B. This instruction will lead to an undefined behavior: accessing volatile GPU memory with non-coherent read-only PTX modifiers .nc. But the correctness is tested to be guaranteed with .L1::no_allocate on Hopper architectures, and performance will be much better. If you find kernels not working on some other platforms, you may add DISABLE_AGGRESSIVE_PTX_INSTRS=1 to setup.py and disable this, or file an issue."


So non-coherent refers to bypassing cache coherency, i.e., not caring about what other units might have written to that address? And the L1/L2 modifiers are there to avoid L1 thrashing, keeping the value in L2 only?

Or did I get that wrong?


My understanding of the L2 part is that it asks for a 256-byte prefetch (only available on some platforms, it seems). But they use vectors of at most four 32-bit signed ints, so I'm not sure why only the 256B variant would work, or whether fetching the next 128 bytes actually helps.


Yeah that's about right



How is it different from eliminating y?


As a Chinese person, I want to mention that foreigners don't usually call it "PPT".


Let's see how the US reacts to this incident at the Munich meeting.


This wouldn't happen if Russia had better access to state-of-the-art military-grade electronics. Sanctions lifted!


Increase drone traffic controller headcount?


> If GPUs Are So Good

Who ever claimed that?


> In Shanghai, the Huangpu River divides old and new.

Pudong is new, but Puxi is not simply "the old one". It mixes historical sites with modern buildings. It lives with its history while still growing actively; it doesn't have to destroy all the old stuff, but it also doesn't have to slow its pace for it. I believe the same is true of any city with a long history (e.g., in Europe).


button^2

