Hacker News | dspwizard's comments

VLIW works great when memory-access latencies are short and predictable - you will not find DSP designs that do not use TCM (core-local SRAM). So you program a DMA to bring input data into the core's TCM and work on it from there. GPUs, on the other hand, hide memory-access latencies by switching threads when stalled.


TI C2000 is one example


Thank you. I assume you're correct, though for some reason I can't find references claiming C++20 is supported with some cursory searches.


Cadence DSPs have a C++17-compatible compiler and will get C++20 soon; new CEVA cores likewise (both are Clang-based). TI's C7x is still C++14 (C6000 is an ancient core, yet it still got C++14 support, as you mentioned). AFAIR the Cadence ASIP generator will give you a C++17 toolchain, and C++20 is on the roadmap, but I'm not 100% sure.

But for those devices you use a limited subset of language features, and you would be better off not linking the C++ stdlib, or even the C stdlib at all (so junior developers don't have room to do stupid things ;))


Most vRAN sites will still be D-RAN (distributed RAN - server on site). C-RAN (centralized RAN) is a fairy tale so far and works in very few scenarios, like very dense urban ones. Still, we are talking about 200/400G NICs being standard.

The L1 interface is also more efficient than it was in previous Gs, where the BTS sent time-domain data; modern fronthaul sends only the allocated parts of the spectrum in the frequency domain.


The most amusing part is that Huawei developed 5G tech on its own - Nokia has shit and E/// is no better. Even with regard to 4G massive MIMO tech, Huawei is years ahead of the others; they couldn't steal this tech from anyone, because no one had it.


It's far easier to leapfrog an existing technology when you're able to steal and learn from all of the IP it's based on and then target your R&D efforts accordingly. You don't have to suffer the organizational difficulties of sunk costs, path dependency, legacy systems, etc. So even if their equipment is more advanced, they still got there on the back of massive IP theft.


Isn't this why most people on HN and in the tech community writ large oppose patents in tech? Faster innovation happens when IP protections aren't holding innovation back.


I would myself advocate for looser patent grants. It seems wasteful that separate companies must redundantly spend resources treading down the same worn pathways. However, a system that circumvents that redundancy would require radically different & collaborative R&D arrangements to function equitably. In its absence, and in the presence of a system where single entities spend their own resources on development, IP theft is damaging. It assists in undercutting the bottom line and even driving out of business those companies that have been stolen from.


A level playing field is more important than any harebrained theories I might (and do) have about IP law. Far worse than tech patents and copyright are tech patents and copyright that apply only to some companies but not others.


By this point, the US patent system can be considered a self-inflicted wound. It's not as if there haven't been decades of activists trying to take it down.

Maybe the US could have a cheap 5G vendor if it had abolished patents decades ago, but now it's too late; the cat is out of the bag.


Part of me hopes we will see some IP reform. I can’t imagine China sees the patent trolls and high price of drugs and thinks “yea that’s the system of the future”


Are you suggesting that we let their continued violation of a system we both agreed to (while following it ourselves) slide, because it kinda resembles your philosophy from a hazy distance?


If some of us were trying to reform a defective system in the first place, then yes, we'd rather the world didn't just settle on hamstringing the pace of innovation for the future.


> Huawei is years ahead of the others; they couldn't steal this tech from anyone, because no one had it.

Sources? I just spent 10+ minutes reading into this, and only 2 of the 10 articles I looked at suggested that, and they were dubious at best. This one [0] had a decent overview of the various companies, but it didn't say anything about who was leading, which isn't too surprising, since I also cannot find any sources showing any company whatsoever actually deploying 5G en masse. Mostly just companies deploying lone cell towers to run tests and claim they have 5G.

[0]: https://www.greyb.com/companies-working-on-5g-technology/


That may partly be because a lot of the R&D departments that used to exist are now long gone, because Huawei put them out of business - not just by stealing, but because it was nearly impossible to compete with a company that was backed again and again by the Chinese government despite failing many times.


What is wrong with being backed by the government? Also, the Wikipedia article on ECHELON mentions a case of alleged industrial espionage by the US: [1]

[1] https://en.wikipedia.org/wiki/ECHELON#Examples_of_industrial...


Huawei is far ahead of the others in mMIMO technology and has this implemented in its BTS boards, although right now it is done in FPGAs, not ASICs.


I had to register to comment about TI "quality" and friendliness ;-).

Their SoCs are riddled with HW bugs, and TI will not put every HW bug into the errata - e.g. their infrastructure pktdma will hang if you use chained descriptors, but you will not find that in the silicon errata. TI's response was "just don't use it", and they refused to verify it on their side.

The ISA of the C66x DSP is just stupid - you have a quad-SP multiply but only a double-SP addition, so shuffling registers back and forth into quads will often result in MV instructions (because the compiler is not that smart). There are no real vector registers; "vector" instructions take 4 or 2 32-bit registers.

Want to compute the power of each individual complex int16 in a vector? You are out of luck - DDOTP4H will add everything together.

There is even no way to fully utilize the multipliers, since load/store is 2x64-bit while the multiplier can perform 8 SP multiplies per cycle.

Moreover, memory access to L2 is so slow that you will sit in memory stalls (there is no HW prefetch from L2 into the L1D cache), and L1 SRAM is way too small to do anything serious (32 kB).

After trying other DSPs, like the Ceva-XC or VSPA, TI looks like a poor joke. Their C7000, which was supposed to address some of those problems, is long overdue and will be well underpowered compared to recent Ceva DSPs or NXP's VSPA.

