09 F9 11 02 9D 74 E3 5B D8 41 56 C5 63 56 88 C0


That's an illegal number mate. Straight to the slammer!

(for those missing out: https://en.wikipedia.org/wiki/AACS_encryption_key_controvers...)


Thank you so much, this is a beautiful rabbit hole to go down


plz stop! my hddvds...


Patents and licensing, usually

https://news.ycombinator.com/item?id=39543291


Ah so HDMI is one part of it, that's really unfortunate. Thank you for this insight.



After wrecking the suspension of 2 e-scooters downtown from craptastic roads, I can say nothing of value would be lost from avoiding downtown ATX.


I see 15g CO2(eq)/km in the bottom left-hand corner


Terminating HDCP is difficult; you’d have to downgrade it to HDCP 1.4 and then have a 1.4 ‘compliant’ device on the end acting as a dummy monitor. If you need anything newer than HDCP 1.4, it’s likely not possible.


I did a tear down of this Monoprice dongle: https://tomverbeure.github.io/2023/11/26/Monoprice-Blackbird....

It terminates as an HDCP 2.0 endpoint and converts to HDCP 1.4. You’d still need an HDCP 1.4 sink to make it work though.


I'm using the Monoprice multiviewer. It negotiates HDCP without a display attached. Other than being a bit big and expensive, and being unable to strip HDCP, it's a good solution.

I found the same device in generic packaging on AliExpress, but haven't had the chance to order that version, yet.

There are lots of professional SDI converters and such, but they are either $3k+ or "call for price".


That was written by you?

I don't agree with this section:

> The HDCP converter simply announces itself as a final video endpoint… yet still repeats the content to its output port. Without a very expensive HDMI protocol analyzer, we can’t check if the source is tagging the content as type 0 or type 1, but there is no reason now to think that it’s not type 1.

There's no magic in the HDMI protocol that says type 1 vs type 0. It's just another HDCP message over DDC, but it is only sent to repeaters. In this case, since the HDCP repeater is lying about not being a repeater, it isn't getting sent the StreamID Type information.
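A tiny sketch of the source-side logic described here. The function and message strings are simplified stand-ins (the real HDCP 2.x stream-management message is RepeaterAuth_Stream_Manage); the point is just that the Type tag is only ever sent to devices that advertise themselves as repeaters:

```python
# Hypothetical sketch: a source only sends the StreamID/Type tag to
# sinks that report the repeater bit, so a repeater lying about being
# a final endpoint simply never receives it.
def hdcp_messages(sink_reports_repeater: bool, content_type: int) -> list:
    msgs = ["AKE_Init", "AKE_Send_Cert"]  # authentication & key exchange
    if sink_reports_repeater:
        # Only repeaters get the stream-management message carrying Type 0/1
        msgs.append(f"RepeaterAuth_Stream_Manage(type={content_type})")
    return msgs

print(hdcp_messages(sink_reports_repeater=False, content_type=1))
# The lying "endpoint" sees only the key-exchange messages, never the Type tag
```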


You’re probably right.


Great teardown. Can these things remove HDCP altogether? It seems like if it can report that the sink is HDCP 2.x, then it can do so even if it has no compliance at all, right? So that would mean it streams an encrypted stream to something that then still needs to do the decryption? These devices seem like they'd be underpowered to do that in real time at 18 Gb/s.


I assume the silicon can do it, but it’s not exposed to the user, because that would almost certainly be a license violation.


Try reading a 40+ page document with track changes enabled (and 100+ changes) - it pins a full CPU core for 5 seconds when you go to the next page!


Gray text on a black background is an awful colour choice for this website


I got black text on a white background


the site uses the CSS prefers-color-scheme media query to see if your system has a light or dark theme selected and chooses the colors based on that.


indeed. the off-white headline color should also be set on the body text.


I tried Portal RTX on a 9070 XT and got 20 FPS at full resolution (no frame generation). There are no driver limitations, but I have no idea what the expected FPS is


yikes that's dismal, I wonder what a 5070 gets


Depends if you count real or fake frames and if it fits in what little VRAM Nvidia gives their captive customer base.


To be more precise, four CPUs - two ARM and two RISC-V. There is just a mux for the instruction and data buses - see chapter 3 of the [datasheet](https://datasheets.raspberrypi.com/rp2350/rp2350-datasheet.p...).

It’s space-inefficient as half of the CPUs are shut down, but architecturally it’s all on the same bus.


> It’s space-inefficient as half of the CPUs are shut down

In practice it doesn't matter very much for a design like this. The die size is already limited to a certain minimum to provide enough perimeter for wire-bonding pads for all of the pins, so they can fill up the middle with whatever they want.


They should have filled it with more SRAM instead - 520KB is far too little.


What difference would the extra 16KiB or whatever instead of the 2 RISC-V cores make? If 520KB is far too little for you, you're likely better off adding an 8 MiB PSRAM chip.


Just 16KB? Couldn’t a lot more be fitted?

PSRAM has huge latency.


SRAM takes up a tremendous amount of space compared to logic. Usually at least six transistors per bit, plus passives, plus management logic.


SRAM is big in gate count - typically 6 transistors per bit.

The i386, a 32-bit chip already dragging around a couple of generations of legacy architecture, came in at 275,000 transistors. I would imagine the Hazard3 would be quite a bit more efficient in transistor usage due to its architecture.

16K is 16384 (bytes) × 8 (bits per byte) × 6 (transistors per bit) = 786,432
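The same arithmetic, generalized (a rough sketch assuming the classic 6T cell and ignoring decoders, sense amps, and other array overhead):

```python
def sram_transistors(capacity_bytes: int, transistors_per_bit: int = 6) -> int:
    """Back-of-the-envelope transistor count for a 6T-cell SRAM array."""
    return capacity_bytes * 8 * transistors_per_bit

print(sram_transistors(16 * 1024))   # 16 KiB -> 786432, roughly 3x an i386
print(sram_transistors(520 * 1024))  # an RP2350-sized 520 KB array -> ~25.6M
```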


It was the first CPU on my desk! 80386SX 25MHz.

(this one, only 32-bit internally)


Thanks for the explanations - was not aware.

…vertically stacking a slab of SRAM above or beneath the CPU die does come to mind ;)


This is way too expensive for something like a microcontroller. AMD calls this 3D V-Cache and uses it on their top end SKUs.


But doesn't the ESP32-S3-WROOM have some large on-chip RAM?

For the Pico, say, something along the lines of the approach taken by many smartphone SoCs that package memory and processor together.


The ESP32-S3 has 512 KB of SRAM, and the RP2350 has 520 KB of SRAM. The ESP32-S3-WROOM does indeed come in configurations with additional PSRAM, but that would be comparing apples and pears. The WROOM is an entire module complete with program flash, PSRAM, crystal oscillator etc. It comes in a much larger footprint than the actual ESP32-S3, and it is entirely conceivable that one could create a similar module with the same amount of PSRAM using the RP2350.

Furthermore, the added RAM in both cases is indeed PSRAM. That being said, the ESP32-S3 supports octal PSRAM, not just quad PSRAM, which does make a difference for the throughput.


> "some"

And go cellphone style: Package-on-Package or Multi-Chip Module of some sort.

Wouldn't the massive increase in capabilities from adding 8MB-16MB of closely-integrated, fast RAM far outweigh the modest price increase for many applications that are currently memory-constrained on the Pico?


> But doesn't the ESP32-S3-WROOM have some large on-chip RAM?

They use the same PSRAM chips with the relatively bad latency you complained about higher up in the thread. There are boards, like those from Pimoroni, that even have them on the PCB from the factory.

> For the Pico, say, something in the line of the approach taken by many smartphone SoCs that package memory and processor together.

What for? This only saves you PCB space; the latency is not going to be affected by it. There probably won't be enough people ordering those to justify the additional inventory overhead of (at least) 2 more SKUs.


I believe there's already a separate Flash die in the same package. Probably not possible to add yet another die for DRAM.

(for various chemistry reasons, it's much more efficient to manufacture Flash, DRAM, and regular logic on separate wafers with different processing)


It may be technically space inefficient but they only added the RISC-V cores because they had area to spare. It didn't cost them much.


Source for the RISC-V cores being essentially free (Luke Wren is the creator of the RISC-V core design used):

> The final die size would likely have been exactly the same with the Hazard3 removed, as std cell logic is compressible, and there is some rounding on the die dimensions due to constraints on the pad ring design.

https://nitter.space/wren6991/status/1821582405188350417


Funny thing is that it cost them more than you might think. It was the ability to switch to the RISC-V cores which made it (much) easier to glitch. See the "Hazardous threes" exploit [1]

[1] https://www.raspberrypi.com/news/security-through-transparen...


I wonder if they're using the same die for one or more microprocessor products that are RISC-V-only or ARM-only? They could be binning dies that fail testing on one or the other cores that way. Such a product might be getting sold under an entirely different brand name too.


They're not currently doing that but there is a documented way to permanently disable the ARM cores, so they could sell a cheaper RISC-V-only version of the same silicon if there's enough demand to justify another SKU.


That may be the plan for the future. Right now, this is a hedge / leverage against negotiations with ARM. For developers looking to test their code against a new architecture and compare it to known good code/behavior, it doesn’t get any easier than rebooting into the other core!


I find this whole concept remarkable, and somewhat puzzling.

Have seen the same (ARM + RISC-V cores) even at larger scales before (Milk-V Duo @1GHz-ish). But how is this economical? Is die space that cheap? Could you not market the same thing as quadcore with just minor design changes, or would that be too hard because of power budget/bus bandwidth reasons?


SRAM is very area intensive. What you're asking for is very greedy. The RISC-V core they are using is absolutely tiny.


That's also a good point. For the big Milk-V systems I mentioned, they use external DRAM - but cache might still be a die-space issue (I'd assume that it's always shared completely between the ARM/RISC-V cores, and would need to be scaled up for true multicore operation).

But I'm still amazed that this is a thing, and you can apparently just throw a full core for a different architecture on a microcontroller at basically no cost :O


two things:

1) it needs a certain perimeter to allow all the pins to go from the silicon to the package, which mandates a certain-sized square-ish die

2) only the cores are duplicated (and some switching thing is added)

so yes, there is enough space to just add another two cores without any worries, since they don't need more IO or pins or memory or anything.


They already do - most of the components buck the 12V down to the 1.3ish volts that the GPU core needs


They are not transformers, though. The coils/chokes are not galvanically isolated, which makes them (more) efficient. Stepping down from 48V to 0.8V (with massive transient spikes) is generally way harder than doing it from 12V. So they may have ended up with multi-step converters, but that would mean more PCB area with more passives.


3.3V from 48V is a standard application for PoE. (A 12V intermediate is more common, though.) The duty cycle does get a bit extreme. But yes, most step-down controllers can't cover both a 0.8V output voltage and a 48-60V input voltage. (TI Webench gives me one - and only one - suggested circuit, using an LM5185. With an atrocious efficiency estimate.)

You'd probably use an intermediate 12V rail especially since that means you just reuse the existing 0.8V regulator designs.
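To put a number on "a bit extreme": for an idealized lossless buck converter in continuous conduction, the duty cycle is roughly D = Vout / Vin (real converters differ, but the ratio shows why one stage is hard):

```python
def buck_duty_cycle(v_out: float, v_in: float) -> float:
    """Ideal buck converter duty cycle: D = Vout / Vin."""
    return v_out / v_in

print(f"48V -> 0.8V: D = {buck_duty_cycle(0.8, 48):.1%}")  # ~1.7%, extreme
print(f"12V -> 0.8V: D = {buck_duty_cycle(0.8, 12):.1%}")  # ~6.7%
print(f"48V -> 12V:  D = {buck_duty_cycle(12, 48):.1%}")   # 25%, comfortable
```

At a few hundred kHz of switching frequency, a 1.7% duty cycle leaves on-times of only tens of nanoseconds, which is part of why an intermediate 12V rail is attractive.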


Aside from the step-down itself, the transients can be quite crazy, which might make the power consumption higher (due to load-line calibration). 48V FETs would have much worse RDS(on) compared to lower-voltage-spec'd ones, so it makes sense that no single smart power stage has such transistors (presently).

There are other issues, too. 48V would fry the GPU for sure; 12V often does not, even with a single power-stage failure.

In the end we are talking about a stupid design (seriously: 6 conductors in parallel, no balancing, no positive preload, lag connectors, no crimping, no solder), and the attempted fix is a much more sophisticated PCB design and passives.


So then it would need to be significantly larger.


Likely smaller actually.


This isn’t how it works.

Your SMPS needs sub-2V output, cool. That means it only needs to accept a small portion of the incoming voltage.

But if the incoming is 48V, it needs 48V-tolerant parts: all your caps, your inductor (optional, typically), your diodes, and the SMPS itself.

Maybe there isn't a size difference between a 50V 0603 capacitor and a 10V 0603 capacitor, but there is a cost difference. And it definitely doesn't get smaller just because.

Your traces at 48V likely need more space/separation or routing considerations than they would at 24V, but this should be a quickly resolved problem as your SMPS is likely right next to your connector.


Yes. And it also doesn't need to handle 40+ amps on the input, with the associated large bus bars, large input wires, etc.

Extra insulation is likely only a mm or two, those other components are big and heavy, and have to be.

It’s the same reason inverters have been moving away from 12V to 48V. Larger currents require physically larger and heavier parts in a difficult-to-manage way. Larger voltages don’t start being problematic until either >48V or >1000V (depending on the power band).
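The current scaling behind this is just I = P/V arithmetic; a quick sketch with a hypothetical 600 W load:

```python
def load_current(power_w: float, bus_voltage_v: float) -> float:
    """Current a load draws from the bus: I = P / V."""
    return power_w / bus_voltage_v

for v in (12, 48):
    print(f"600 W at {v} V -> {load_current(600, v):.1f} A")
# 12 V needs 50 A of copper; 48 V needs only 12.5 A for the same power,
# so conductors, connectors, and bus bars can be far smaller.
```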

