
You can probably already get that if you order a somewhat significant amount of chips directly from Raspberry Pi. They seem to already have everything required for it, it's literally just setting a bit differently during factory programming.

But I'm assuming you're talking about for consumer use, in which case my question is why? There is absolutely no way you're ever benefiting from them spinning up an extra SKU with significantly less volume (most people want the ARM cores).

Even if they decide to eat the costs for the benefit of consumers, at most the chip would be what, 15 cents cheaper? I really struggle to see how that's a meaningful difference for hobbyist use.


> You can probably already get that if you order a somewhat significant amount of chips directly from Raspberry Pi.

It's not currently possible. As of the A3 stepping, the ARM_DISABLE OTP bit is ignored as a security mitigation - changing that would require a new mask revision.


The TPM itself can actually be discrete, as long as you have a root-of-trust inside the CPU with a unique secret. Derive a secret from the unique device secret and the hash of the initial bootcode the CPU is running, e.g. HMAC(UDS, hash(program)), and derive a public/private key pair from that. Now you can just do normal Diffie-Hellman to negotiate encryption keys with the TPM and you're safe from any future interposers.
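A minimal sketch of that derivation (all function names below are hypothetical placeholders for whatever primitives the boot ROM or crypto library actually provides):

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical primitives, stand-ins for the boot ROM's crypto. */
    void sha256(const uint8_t *msg, size_t len, uint8_t out[32]);
    void hmac_sha256(const uint8_t key[32], const uint8_t *msg, size_t len,
                     uint8_t out[32]);
    void x25519_keypair_from_seed(const uint8_t seed[32],
                                  uint8_t pub[32], uint8_t priv[32]);

    /* UDS = unique device secret fused into the CPU, readable only by the
       boot ROM before it locks itself out. */
    void derive_boot_identity(const uint8_t uds[32],
                              const uint8_t *bootcode, size_t bootcode_len,
                              uint8_t pub[32], uint8_t priv[32])
    {
        uint8_t code_hash[32], cdi[32];

        sha256(bootcode, bootcode_len, code_hash);   /* measure the code   */
        hmac_sha256(uds, code_hash, 32, cdi);        /* CDI = HMAC(UDS, H) */
        x25519_keypair_from_seed(cdi, pub, priv);    /* keypair bound to this firmware */
        /* The keypair then anchors the Diffie-Hellman exchange with the TPM;
           different bootcode yields a different keypair, so an interposer
           can't impersonate the real firmware. */
    }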

This matters because for some functionality you really want tamper-resistant persistent storage, for example "delete the disk encryption keys if I enter the wrong password 10 times". That's fairly easy to do on a TPM, which can be made on a process node that supports flash, vs. a general-purpose CPU where that just isn't an option.


If that's a concern, you can lock the OTP either permanently or with a password, before you hand them out. Or just use the older RP2040.

But I don't think that "targeting the education market" is accurate in the first place. They certainly make sure to serve that market with their very nicely priced Pico boards but it hardly seems to be their only goal. You don't go through the effort of spinning up a new revision to fix security holes if there aren't at least some industry customers.


It's worth noting that strcpy() isn't just bad from a security perspective; on any CPU that's not completely ancient it's bad from a performance perspective as well.

Take the best case scenario, copying a string where the precise length is unknown but we know it will always fit in, say, 64 bytes.

In earlier days, I would always have used strcpy() for this task, avoiding the "wasteful" extra copies memcpy() would make. It felt efficient; after all, you only replace an i < len check with a buf[i] != '\0' check inside your loop, right?

But of course it doesn't actually work that way, copying one byte at a time is inefficient so instead we copy as many as possible at once, which is easy to do with just a length check but not so easy if you need to find the null byte. And on top of that you're asking the CPU to predict a branch that depends completely on input data.
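Sketched out very roughly (simplified, not what an actual libc does internally):

    #include <stddef.h>
    #include <string.h>

    /* Byte-at-a-time copy: every iteration branches on the input data, and
       the compiler can't widen the loads much because it must not read past
       the (unknown) terminator. */
    char *copy_scalar(char *dst, const char *src)
    {
        size_t i = 0;
        while ((dst[i] = src[i]) != '\0')   /* data-dependent branch */
            i++;
        return dst;
    }

    /* Known length: one check up front, then the libc/compiler is free to
       move 16/32/64 bytes per iteration with SIMD or rep movs. */
    char *copy_counted(char *dst, const char *src, size_t len)
    {
        memcpy(dst, src, len);
        dst[len] = '\0';
        return dst;
    }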


We should just move away from null-terminated strings, where we can, as fast as we can.


We have. C is basically the only language in any sort of widespread use where null-terminated strings are a thing.

Which of course causes issues when languages with more proper strings interact with C but there you go.


Given that the C ABI is basically the standard for how arbitrary languages interact, I wouldn't characterize all of the headaches this can cause as just when other languages interact with C; arguably it can come up when any two languages interact at all, even if neither are C.


Arguably the C ABI was one of those Worse is Better problems, like the C language itself. Better languages already existed, but C was basically free and easy to implement, so now there's C everywhere. It seems likely that, if not for this ABI, we might have an ABI today where all languages which want to offer FFI can agree on how to represent, say, an immutable slice reference type (Rust's &[T], C++'s std::span).

Just an agreed ABI for slices would be enough that language A's growable array type (Rust's Vec, C++'s std::vector, but equally an ArrayList; some languages even call this just "list") of, say, 32-bit signed integers can give a (read-only) reference to language B to look at all those 32-bit signed integers, without languages A and B having to agree on how growable arrays work at all. In C today you have to go wrestle with the ABI pig for much less.
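A hypothetical sketch of what such an agreed slice ABI could look like from the C side (the exact layout here is made up for illustration):

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical common slice representation: pointer + element count.
       Rust's &[i32] and C++'s std::span<const int32_t> could both lower to
       exactly this pair, without either side knowing how the other's
       growable array works. */
    typedef struct {
        const int32_t *ptr;
        size_t         len;
    } i32_slice;

    /* Language B only ever sees the slice, never Vec/vector/ArrayList. */
    int64_t sum_i32(i32_slice s)
    {
        int64_t total = 0;
        for (size_t i = 0; i < s.len; i++)
            total += s.ptr[i];
        return total;
    }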


From a historical perspective, my guess is that C interop in some fashion has basically been table stakes for any language of the past few decades, and when you want to plug two arbitrary languages together, if that's the one common API they both speak, it's the most obvious way to do it. I'm not sure I'd consider this "worse is better" as much as just self-reinforcing emergent behavior. I'm not even sure I can come up with any example of an explicitly designed format for arbitrary language interop other than maybe WASM (which of course is a lot more than just that, but it does try to tackle the problem of letting languages interact in an agnostic way).


We should move away from it in C usage as well.

Ideally, the standard would include a type that packages a string with its length, and have functions that use that type and/or take the length as an argument. But even without that it is possible to avoid using null-terminated strings in a lot of places.
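As a sketch of what such a type could look like (the names here are invented, this isn't anything the standard actually provides):

    #include <stddef.h>
    #include <string.h>

    /* A string that carries its length: no terminator needed, and embedded
       zero bytes are perfectly fine. */
    typedef struct {
        size_t      len;
        const char *data;
    } str;

    #define STR(lit) ((str){ sizeof(lit) - 1, (lit) })

    /* Bounded copy into a caller-provided buffer; returns bytes written. */
    size_t str_copy(char *dst, size_t cap, str s)
    {
        size_t n = s.len < cap ? s.len : cap;
        memcpy(dst, s.data, n);
        return n;
    }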


The standard C library can’t even manipulate NUL terminated strings for common use cases…

Simple things aren’t simple - want to append a formatted string to an existing buffer? Good luck! Now do it with UTF-8!
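With just the standard library that turns into manually juggling offsets and snprintf return values at every call site, something like (a sketch, error handling barely covered):

    #include <stdio.h>
    #include <stddef.h>

    /* Append a formatted field to buf, tracking how much is already used.
       Every caller has to repeat this offset/truncation dance by hand. */
    int append_port(char *buf, size_t cap, size_t *used, int port)
    {
        int n = snprintf(buf + *used, cap - *used, "port=%d", port);
        if (n < 0 || (size_t)n >= cap - *used)
            return -1;          /* encoding error or truncated */
        *used += (size_t)n;
        return 0;
    }

And that's before worrying about not splitting a multi-byte UTF-8 sequence when you truncate.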

I truly feel the standard library design did more disservice to C than the language definition itself.


Doesn't C++'s std::string also use a null terminated char* string internally? Do you count that also?


Since C++11 it is required to be null-terminated, you can access the terminator with (e.g.) operator[], and the string can contain non-terminator null characters.


This doesn't count because it's implemented in a way where, if you don't need a null-terminated string, you won't see it.


It has nul-termination for compatibility with C, so you can call c_str and get a C string. With the caveat that an std::string can have nuls anywhere, which breaks C semantics. But C++ does not use that itself.


>Which of course causes issues when languages with more proper strings interact with C but there you go.

Is it an issue of "more proper strings", or just languages trying to have their cake and eat it too - have their own sense of a string and C interoperability? I think this is where we see the strength of Zig: its strings are designed around and extend the C idea of a string, instead of just saying our way is better and we are just going to blame C for any friction.

My standard disclaimer comes into play here, I am not a programmer and very much a humanities sort, I could be completely missing what is obvious. Just trying to understand better.

Edit: That was not quite right, Zig has its string literal for C compatibility. There is something I am missing here in my understanding of strings in the broader sense.


Yes

And maybe even have an (arch-dependent) string buffer zone where the actual memory length is a multiple of 4 or even 8


I haven't seen a strcpy use a scalar loop in ages. Is this an ARM thing?


Modern x86 CPUs have actual instructions for strcpy that work fairly well. There were several false starts along the way, but the performance is fine now.


They have instructions for memcpy/memmove (i.e. rep movs), not for strcpy.

They also have instructions for strlen (i.e. rep scasb), so you could implement strcpy with very few instructions by finding the length and then copying the string.

Executing strlen first, then validating the sizes and then copying with memcpy if possible is actually the recommended way to implement a replacement for strcpy, including in the parent article.
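In C that replacement looks roughly like this (a sketch with strlcpy-style semantics, not a drop-in from any particular libc or from the article):

    #include <string.h>
    #include <stddef.h>

    /* Copy src into dst[cap]; returns the full source length so the caller
       can detect truncation. */
    size_t bounded_strcpy(char *dst, size_t cap, const char *src)
    {
        size_t len = strlen(src);           /* one pass to find the length */
        if (cap != 0) {
            size_t n = len < cap - 1 ? len : cap - 1;
            memcpy(dst, src, n);            /* then a plain, fast memcpy   */
            dst[n] = '\0';
        }
        return len;
    }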

On modern Intel/AMD CPUs, "rep movs" is usually the optimal way to implement memcpy above some threshold of data size, e.g. on older AMD Zen 3 CPUs the threshold was 2 kB. I have not tested more recent CPUs to see if the threshold has diminished.

On the old AMD Zen 3 there was also a certain size range above 2 kB, at sizes comparable with the L3 cache size, where their implementation interacted badly with the cache and "non-temporal" vector register transfers outperformed "rep movs". Despite that performance bug for certain string lengths, using "rep movs" for any size above 2 kB gave good enough performance.

More recent CPUs might be better than that.


Whoops, this proves I’m not really a userspace assembly programmer…

But you can indeed safely read past the end of a buffer if you don't cross a page boundary and you aren't bound by the rules of, say, C.


x86-64 has the REP prefix for string operations. When combined with the MOVS instruction, that is pretty much an instruction for strcpy.


No, it's an instruction for memcpy. You still need to compute the string length first, which means touching every byte individually because you can't use SIMD due to alignment assumptions (or lack thereof) and the potential to touch uninitialized or unmapped memory (when the string crosses a page boundary).


You do aligned reads, which can't crash.

Not even musl uses a scalar loop, if it can do aligned reads/writes: https://git.musl-libc.org/cgit/musl/tree/src/string/stpcpy.c

And you don't need to worry about C UB if you do it in ASM.
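The usual trick (which that musl stpcpy also relies on) is aligned word-sized reads plus a bit of bit twiddling to spot a zero byte; roughly, for strlen (technically UB in portable C because of the aliasing and out-of-bounds read, which is exactly why doing it in asm, or inside the libc itself, sidesteps the problem):

    #include <stdint.h>
    #include <stddef.h>

    /* Nonzero iff any byte in w is zero (the classic "haszero" trick). */
    #define HASZERO(w) (((w) - 0x01010101u) & ~(w) & 0x80808080u)

    size_t word_strlen(const char *s)
    {
        const char *p = s;
        /* go byte by byte until aligned */
        while ((uintptr_t)p % sizeof(uint32_t)) {
            if (*p == '\0') return (size_t)(p - s);
            p++;
        }
        /* aligned 32-bit reads can't cross a page boundary, so they never
           fault even if they run a few bytes past the terminator */
        const uint32_t *w = (const uint32_t *)p;
        while (!HASZERO(*w)) w++;
        for (p = (const char *)w; *p; p++)
            ;
        return (size_t)(p - s);
    }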


The spec and some sanitizers use a scalar loop (because they need to avoid mistakenly detecting UB), but a real-world libc seems unlikely to use one.


Not true, see "Extensions from Lua 5.2" here: https://luajit.org/extensions.html


Android kernels 4.19 and higher should all have support included for WireGuard unless the OEM specifically disables it: [0]. The Pixel 8 ships with the Android 14 6.1 kernel, so it most definitely should have WireGuard kernel support. You can check this in the WireGuard app BTW; if you go to settings it will show the backend that's in use.

[0] https://android-review.googlesource.com/c/kernel/common/+/14...


Kernel support should have no bearing as the apps are purely userspace apps. You can use the kernel mode if you root the phone, but that's not a typical scenario.


Well, the issue isn't kernel vs. user space, but you are correct that you still need a custom ROM and/or root, unfortunately. I had assumed Android also allowed netlink sockets for WireGuard, but alas it does not. So the app can't communicate with the kernel module, bummer.


You're looking at the wrong thing; WireGuard doesn't use AES, it uses ChaCha20. AES is really, really painful to implement securely in software only, and the result performs poorly. But ChaCha only uses addition, rotation, and XOR on 32-bit numbers, and that makes it pretty performant even on fairly computationally limited devices.

For reference, I have an implementation of ChaCha20 running on the RP2350 at 100 MBit/s on a single core at 150 MHz (910/64 = ~14.22 cycles per byte). That's a lot for a cheap microcontroller costing around 1.5 bucks total. And that's not even taking into account using the other core the RP2350 has, or overclocking (it runs fine at 300 MHz as well, at double the speed).
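For context, the core of ChaCha is just the quarter round, which is why it maps so nicely onto a small 32-bit core like the RP2350's Cortex-M33 (sketch below; a full implementation still needs the state setup, counter and output handling):

    #include <stdint.h>

    #define ROTL32(v, n) (((v) << (n)) | ((v) >> (32 - (n))))

    /* ChaCha quarter round: nothing but 32-bit add, rotate and XOR,
       all cheap single-word ops on a Cortex-M33. */
    static void quarter_round(uint32_t x[16], int a, int b, int c, int d)
    {
        x[a] += x[b]; x[d] ^= x[a]; x[d] = ROTL32(x[d], 16);
        x[c] += x[d]; x[b] ^= x[c]; x[b] = ROTL32(x[b], 12);
        x[a] += x[b]; x[d] ^= x[a]; x[d] = ROTL32(x[d],  8);
        x[c] += x[d]; x[b] ^= x[c]; x[b] = ROTL32(x[b],  7);
    }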


You're totally right; I got myself spun around thinking AES instead of ChaCha because the product I work on (ZeroTier) started with it initially and moved to AES later. I honestly just plain forgot that WireGuard hadn't followed the same path.

An embarrassing slip, TBH. I’m gonna blame pre-holiday brain fog.



That now yields a "429 Too Many Requests"

I didn't know HN could bury a site like that.

Edit: I had to turn off my VPN and now it works.


As others have already mentioned, the bit banging part is mostly handled by the PIO, so you mostly just spend CPU cycles on 4b5b encoding and scrambling. The more immediate practical problem though is that this is transmit only, no receive.

Combined with RMII Ethernet PHYs only costing around 30 cents even at single quantities, this definitely makes it just a fun project, though an impressive one at that.
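For reference, the 4b5b step itself is just a nibble-to-5-bit table lookup (the table below is the standard 4B5B data symbol set; exact nibble/bit ordering on the wire depends on the implementation):

    #include <stdint.h>

    /* Standard 4B5B data symbols: each nibble maps to a 5-bit code group
       chosen to keep enough transitions for clock recovery. */
    static const uint8_t fourb5b[16] = {
        0x1E, 0x09, 0x14, 0x15, 0x0A, 0x0B, 0x0E, 0x0F,
        0x12, 0x13, 0x16, 0x17, 0x1A, 0x1B, 0x1C, 0x1D,
    };

    /* Encode one byte (low nibble first) into 10 bits of line code;
       the result then goes through the scrambler and out via the PIO. */
    static inline uint16_t encode_4b5b(uint8_t byte)
    {
        return (uint16_t)fourb5b[byte & 0x0F]
             | ((uint16_t)fourb5b[byte >> 4] << 5);
    }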


> Guarantee? Not by law but it will be hard for them to take that away.

Last year they removed the ability to register[0] YubiKey FIDO2 tokens affected by the EUCLEAK 'vulnerability', despite it not posing any security risk even by their own admission, and nobody seems to have cared. The whole thing screams security theater: they require the much more expensive FIDO2 Level 2 keys for no reason (which limited you to just Trustkeys at the time, after YubiKeys got banned), while their own site crashes[1] if you give it a secure password.

At the end of the day, if it's not required by law, the only other guarantee you have is a broad userbase that will complain if it's taken away, and at least at the moment it's clear that no such userbase exists.

[0] https://www.a-trust.at/de/%C3%BCber_uns/newsbereich/20240905...

[1] https://imgur.com/a/Uyjaoa7


You don't have to tell me, I absolutely hate that passkeys support attestation. But there is pressure to support a non-smartphone-based sign-in, and it does exist.

