Hacker News | sigmaris's comments

An NPU with no driver support in the main Linux kernel, only in a vendor-provided fork containing dubious-quality drivers:

https://forum.radxa.com/t/lack-of-concern-for-security-in-bs...

https://blog.tomeuvizoso.net/2024/03/rockchip-npu-update-1-w...


At least as of last week, people (mainly at Collabora) are working on open-source support for Mali “Valhall”, including the GPU in the Orange Pi 5. It’s just sufficiently different from previous generations to need a new driver, which is taking a while to develop.


This is higher-level than the Hypervisor framework; the Virtualization framework provides an entire VM with virtio peripherals, including a display.

https://github.com/lima-vm/lima can use the Virtualization framework for creating VMs. There's also https://github.com/gyf304/vmcli, a very simple CLI utility for running VMs, though it's not very actively maintained.


The ClockworkPi products, particularly uConsole, look like a reasonable modern equivalent: https://news.ycombinator.com/item?id=33331139


Those aren't nearly as cool-looking, although I'd still want one. Unfortunately, these kinds of devices are more expensive than they should be, considering they're marketed as toys/experimental devices. Once you go over $100, it's more cost-effective to just pick up a cheap second-hand laptop instead, though those aren't pocketable and are certainly a lot more boring.

Also, it bugs me that the ClockworkPi seems to be RISC-V powered - they should change the name for those models and fully get on the RISC-V bandwagon.


My Zipit Z2 cost far less, and it was an amazing tiny serial console (with an 80x25 CLI using a tiny font) and an SDL-based GUI. I could emulate the NES, play NetHack/SLASH'EM, interactive fiction, and MP3s, join MUDs, connect to WiFi to browse the web with Lynx and Links, read EPUBs with a custom script built on unzip and elinks...


the zipit z2 cost (inflation-adjusted) around $225 in 2008, more than the clockworkpi uconsole does today


I bought a second-hand one for less than $50.


The actual SoCs used on each type of core module are somewhat well hidden (no mention on the main website as far as I can see) but can be found through a bit of research:

RPI-CM4 Lite: Broadcom BCM2711

A-06: Rockchip RK3399

A-04: Allwinner H6

R-01: Allwinner D1


Thanks. I wonder why they would hide it?


The boot ROM in the RK3399 in the PinePhone Pro has a hardcoded boot order, and doesn't use the special boot partition of the eMMC - it only looks for a bootloader on the data partition at a fixed sector (64).
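To make the fixed offset concrete, here's a small sketch using a scratch file standing in for the raw eMMC (the file name and loader bytes are placeholders; a real flash would be a `dd` of the bootloader image to the device at `seek=64 bs=512`):

```python
# Sketch: the RK3399 boot ROM reads the bootloader from a fixed sector
# (64) of the eMMC, i.e. byte offset 64 * 512 = 32768. A scratch file
# stands in for the raw eMMC device here; the loader bytes are fake.
SECTOR_SIZE = 512
BOOT_SECTOR = 64

with open("emmc.img", "wb") as f:
    f.truncate(8 * 1024 * 1024)          # 8 MiB stand-in for the eMMC
    f.seek(BOOT_SECTOR * SECTOR_SIZE)    # same effect as dd seek=64 bs=512
    f.write(b"FAKE-LOADER")

# The boot ROM would start reading from exactly this byte offset:
with open("emmc.img", "rb") as f:
    f.seek(BOOT_SECTOR * SECTOR_SIZE)
    print(f.read(11))                    # b'FAKE-LOADER'
```

This is also why wiping the start of the data area clobbers the bootloader: sector 64 sits in the first 32 KiB of the device.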


Would it be possible to flash Tow-Boot to that sector and use only the rest of the partition as actual data storage? It seems like it would effectively work the same as having a separate SPI flash chip on the board, while also restoring distribution independence. The downside is that this might require some special-case support, though it could likely be implemented most conveniently using e.g. dm-linear, so only configuration changes would be needed, not new kernel code.
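As a sketch of the dm-linear idea (the device name, reserved size, and partition size below are all hypothetical; a real setup would query the size with `blockdev --getsz` and pass the generated table to `dmsetup create` as root):

```python
# Hypothetical dm-linear table: reserve the first 32768 sectors (16 MiB)
# of the partition for Tow-Boot, and expose the remainder as a clean
# "userdata" block device. All sizes are in 512-byte sectors.
RESERVED = 32768                 # sectors kept for the bootloader
TOTAL = 1_048_576                # example partition size (512 MiB)
device = "/dev/mmcblk2p1"        # placeholder device name

# Table format: <start> <length> linear <device> <offset-into-device>
table = f"0 {TOTAL - RESERVED} linear {device} {RESERVED}"
print(table)                     # 0 1015808 linear /dev/mmcblk2p1 32768
```

The OS would then use `/dev/mapper/userdata` and never touch the bootloader area at the start of the partition.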


The issue with that is it's really easily wiped or changed when installing an OS. In the end the SPI hardware did get added and it just works; it's by far the simplest and most reliable solution.


> In the end the SPI hardware did get added and it just works

What about their new PineBook Pro, though? Is that situation still in flux?

> The issue with that is it's really easily wiped or changed when installing an OS.

I'm not really seeing this. If the OS supports this special scheme, they need only deal with the soft block device that's created by mapping "the rest" of the partition. And if they don't, then all bets are off anyway.


It seems like in the end the PineBook Pro did get the SPI chip on it, but then they flashed a closed-source U-Boot to the eMMC instead, which does not allow booting from SD. So it's again a complete pain for the other distributions to help these users.

For the wiping issue: a lot of the time I see suggestions to wipe the first few MB of the storage when there are booting or flashing issues, to get rid of an old U-Boot - which is exactly what you don't want in the Tow-Boot case.


>a closed source U-Boot

U-Boot is GPL. How is this possible?


So the SPI chip is effectively useless? Is this something that could be fixed in a newer hardware revision?


Not useless, but users will need to manually flash Tow-Boot when that should just be the default. If Pine64 had made better moves, their customers would never have had to worry about firmware or bootloaders, only about which (standard) distro install media to use.


I assume by "new API" for VMs on macOS you mean Hypervisor.framework, which is supported by qemu 4.0.0 and xhyve, both open-source. I wrote a blog article about using qemu with Hypervisor.framework to install Debian in a VM here: https://sigmaris.info/blog/2019/04/automating-debian-install...


Excellent! Thank you very much for sharing, this is exactly what I was looking for.


Generally the SoCs used will have a separate video encoder IP-block that's not part of the Mali GPU. For example the "Cedar" video engine on Allwinner SoCs, Rockchip's VPU, Samsung's Multi-Function Codec, Qualcomm's Venus.


Instead of hiring enough humans to review the content they make money from, Google's approach to catching this stuff seems to be doing a global search for videos with "CP" in related text and auto-banning the channels (https://www.bbc.co.uk/news/technology-47278362), and then claiming that they're making use of "artificial intelligence" to moderate the platform.


I'd be really interested: how many humans would you consider "enough" to actually make an impact?

Because it's quite easy to underestimate just how much content is constantly uploaded to YouTube, and how much of it is watched and reported, on a per-second basis [0].

With numbers like that, you could probably fully employ a small nation of people and still wouldn't be able to review and catch everything. That is the main reason why they try to automate all their solutions, they need to work at a massive scale, manual reviews simply can't do that.

[0] http://www.everysecond.io/youtube


Also they will not involve humans because it removes their ability to blame the system. If a human employee fails to correctly flag something (which is likely given the sheer volume of incoming content to review) and it slips through, YouTube could potentially be held liable for its effects (whatever they may be). Whereas if the system misses something it's easier to pass it off as just a bug that needs fixing or an obscure edge case that wasn't handled.


I doubt that plays any actual role, because as long as it's properly outsourced, nobody can claim it was one of their employees who censored something. Which is exactly what happens in the Philippines [0]:

> There are two ways the content is forwarded to the Philippines. The first is a pre-filter, an algorithm, a machine that can analyze the shape of, say, a sexual organ, or the color of blood or certain skin color. So whenever the pre-filter is analyzing and it picks up on something that is inappropriate, the machine will send that content to the Philippines and the content moderators will double check if the machine was right. The second route is when the user flags the content as being inappropriate.

Afaik Facebook, Twitter, and Google all participate in this kind of "moderation outsourcing" but the only way this is even possible is if they have AI/ML pre-select content for moderation.

[0] https://www.vice.com/en_us/article/ywe7gb/the-companies-clea...


Not totally certain, but I believe it'd be used if a userspace program used the Linux Kernel Crypto API, using a socket of type AF_ALG.

https://www.kernel.org/doc/html/v4.11/crypto/userspace-if.ht...

