Nice work! I'm also a fan of the runtime-only frameworks. They might not be as efficient as their compiled counterparts on paper, but in my experience the performance loss is almost negligible compared to the rest of the stack.
Thanks! Totally agree — in most real-world apps the “runtime overhead” ends up dwarfed by everything else in the stack, so the build-free ergonomics can be a bigger win.
And nice, Mancha looks cool — I’ll give it a read and a star. It looks similar to dagger in some features; for instance, Mancha uses :data to create scope while dagger uses the +load directive. Always great to see other takes on the runtime/compile spectrum; I think having multiple approaches out there is healthy for the ecosystem!
I built my own frontend framework for similar reasons: https://github.com/fresho-dev/mancha. It was meant to address the lack of lightweight solutions that work on both the frontend and the backend. The main goal was to start with client-side rendering and, only if you reach the point where you need it, switch to server-side rendering. It also includes a drop-in replacement for TailwindCSS, except it won't yell at you for doing everything client side.
What I really wanted was a better-maintained version of PetiteVue. But that highlights another problem: I simply can't trust anyone in the frontend JavaScript ecosystem; I've been burned too many times. It took a while to get to the point of it being usable, but now I know no one can pull the rug from under me. I use only the most basic APIs possible, only 1-2 third-party dependencies, and as few hacks as possible.
It still has a few warts here and there but I hope to be able to call it a 1.0 stable version soon enough.
Corollary to your statement: of the very small (<1%) group of users running such ancient versions of Android, 100% read HN and will be responding to your comment. As if it invalidated the stats on actual usage: https://apilevels.com/
What hardware are you using? I recently finished a project myself and I'd like to do more with e-ink, but the hardware "driver" that plugs into the Pi costs as much as the e-ink display.
Yes, it can get pricey. I'm using a Raspberry Pi Zero W with a Waveshare 7.5" e-ink panel, which came with its own Raspberry Pi driver. I also added a PiSugar2 battery. Waveshare seemed to be the most affordable option out there.
I find the costs to be reasonable, but I just bought a Waveshare e-ink ESP32 controller to try boosting battery life, since I don't need a fully-fledged OS. All I'm doing on the frame is pulling an image from a URL, or displaying a local image if the connection fails.
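Roughly, that fetch logic boils down to something like this (just a sketch; the URL, paths, and function name are placeholders, not the real project code):

```python
import requests  # assumed available on the Pi; not part of the original project

IMAGE_URL = "https://example.com/frame.png"   # placeholder URL
FALLBACK_PATH = "/home/pi/fallback.png"       # placeholder local image

def fetch_frame() -> bytes:
    """Pull the latest frame from the server, or fall back to a local image."""
    try:
        resp = requests.get(IMAGE_URL, timeout=10)
        resp.raise_for_status()
        return resp.content
    except requests.RequestException:
        # No connection (or server error): show the locally cached image instead.
        with open(FALLBACK_PATH, "rb") as f:
            return f.read()
```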
This sounds like it could port well to an RP2040-based display, though I am forever battling against its very constrained RAM for network stuff (not helped by MicroPython needing a chunk for buffers, a GC bitmap for mark/sweep, and some other network mystery meat). That said, our (Pimoroni) larger Pico W-based Inky has an 8Mbit PSRAM to act as a back buffer for the display. It’s “slow”, but when updates take 30s+ anyway that’s kind of moot. Very little of that 8Mbit is actually used, so with a little tinkering it might be able to cache multiple images for sequential display without having to faff about with SD cards.
Currently I transmute some images (XKCD and NASA APOD) via a scheduled GitHub action into something fit for the various display sizes. An even more extreme approach would be to convert into the packed (4bpp for 7-colour E-ink) framebuffer format server-side. Less network efficient, but more predictable memory usage.
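A rough sketch of what that server-side packing step could look like with Pillow (the 7-colour palette values and the nibble order here are assumptions; they would need to match whatever the panel actually expects):

```python
from PIL import Image

# Assumed 7-colour ACeP-style palette (black, white, green, blue, red, yellow, orange);
# the real panel's palette and index order may differ.
PALETTE = [
    (0, 0, 0), (255, 255, 255), (0, 255, 0), (0, 0, 255),
    (255, 0, 0), (255, 255, 0), (255, 128, 0),
]

def pack_4bpp(path, size=(800, 480)):
    """Quantize an image to the 7-colour palette and pack two 4-bit indices per byte."""
    # Build a palette image Pillow can quantize against (padded to 256 entries).
    flat = [c for rgb in PALETTE for c in rgb]
    pal_img = Image.new("P", (1, 1))
    pal_img.putpalette(flat + [0, 0, 0] * (256 - len(PALETTE)))

    img = Image.open(path).convert("RGB").resize(size)
    indexed = img.quantize(palette=pal_img)      # one palette index per pixel
    px = list(indexed.getdata())
    if max(px) >= 16:
        raise ValueError("palette index exceeds 4 bits; check palette padding")

    packed = bytearray()
    for i in range(0, len(px), 2):
        packed.append((px[i] << 4) | px[i + 1])  # high nibble first (assumption)
    return bytes(packed)
```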
We’ve had JPEG support for a while, but I brought up a PNG decoder (Larry Bank’s PNGdec) recently(ish) and it’s a much better fit than JPEG for palette-based images. It uses a 32K sliding window, however, which can get spicy if you’re not careful.
You can use Wikidata and its SPARQL query language for this. It's not straightforward for all countries, though; e.g. some have special properties for regional breakdowns that are not defined as ISO-standard subdivisions.
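For context, a minimal sketch of that kind of query against the public Wikidata endpoint (wd:Q29 is Spain, wdt:P150 is "contains the administrative territorial entity"; here just listing first-level subdivisions of one country):

```python
import requests

# Wikidata's public SPARQL endpoint.
ENDPOINT = "https://query.wikidata.org/sparql"

# First-level administrative subdivisions of Spain (wd:Q29) via P150.
QUERY = """
SELECT ?subdivision ?subdivisionLabel WHERE {
  wd:Q29 wdt:P150 ?subdivision .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

resp = requests.get(ENDPOINT,
                    params={"query": QUERY, "format": "json"},
                    headers={"User-Agent": "subdivision-demo/0.1"})
for row in resp.json()["results"]["bindings"]:
    print(row["subdivisionLabel"]["value"])
```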
Thanks for the suggestion, but I already tried it in the past and it's almost impossible to get correct results; it only takes one missing category on a country/state/province/city to make it all fall apart.
For example, it's common to get back historical entities that are mislabeled, or entities that no longer exist because they were merged... It's fine for a side project but not suitable for production.
> Does it work with Raspberry Pis and the CSI cameras? What about my Logitech USB camera?
I don't know the details of the Raspberry Pi cameras, but if it's a UVC camera (quite likely) then it should be supported by this library according to the documentation. A Logitech USB camera should also be supported. The gotcha is that unless those cameras expose advanced functionality (3A, multiple stream support, etc.), you don't get any benefit over using the standard V4L drivers.
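To illustrate the "no benefit over the standard V4L drivers" point: for a plain UVC webcam, the basic capture path is already covered by something as simple as OpenCV on top of V4L2 (just a sketch, unrelated to this library):

```python
import cv2  # OpenCV uses the standard V4L2 driver on Linux for UVC cameras

cap = cv2.VideoCapture(0)  # /dev/video0, e.g. a Logitech USB webcam
if not cap.isOpened():
    raise RuntimeError("No camera found on /dev/video0")

ok, frame = cap.read()     # grab a single frame as a numpy array (BGR)
if ok:
    cv2.imwrite("frame.jpg", frame)
cap.release()
```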
> Would it support receiving a picture from eg a flatbed scanner over a parallel port?
I don't think scanners identify themselves as camera devices, but someone can correct me if I'm wrong.
I'm surprised to see such a big emphasis on support for Android. Most camera modules come with their own drivers that already provide Android support. One benefit could be the licensing, but after a quick inspection it is unclear to me what license this library is under -- there is a licenses folder with 4 different licenses in addition to a developer agreement.
The design seems to be heavily inspired by the Android camera API: per-frame configuration, 3A, multiple stream support, device enumeration, etc.
> The HAL will implement internally features required by Android and missing from libcamera, such as JPEG encoding support.
That is interesting, since most camera modules will have a hardware-accelerated path to encode frames directly to JPEG. If it's done in software internally, it will be much slower than the other implementations I'm aware of.
Maybe someone can take a second look at the paper. I couldn't find the published version, just a manuscript[1] (kudos to the authors for making it available under CC license). But... The only reference I could find about the sample size says:
> "To prepare a single RNA injection, the pleural-pedal and abdominal ganglia were removed from 4-5 sensitization-trained animals—or from 4-5 untrained controls—immediately after the 48-h posttest"
4-5??? I really hope that I'm missing something here; otherwise I find it truly depressing how low the bar is for scientific journals.
Figure legend 1D says: “Control RNA (5.4 ± 3.9 s, n = 7) and Trained RNA (38.0 ± 4.6 s, n = 7)”
That’s a relatively low n but it might be sufficient. However, they don’t explain how the number was reduced from ~30 donor animals to 7 test animals. This might be entirely reasonable though (I know nothing about working with Aplysia).
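A quick back-of-the-envelope check of the "might be sufficient" part, assuming the ± values in the legend are SEMs (Welch's t-test from summary statistics, not anything taken from the paper itself):

```python
import math
from scipy import stats

# Summary stats from the figure legend; assuming the ± values are SEMs.
mean_c, sem_c, n_c = 5.4, 3.9, 7    # Control RNA
mean_t, sem_t, n_t = 38.0, 4.6, 7   # Trained RNA

t = (mean_t - mean_c) / math.sqrt(sem_c**2 + sem_t**2)
# Welch-Satterthwaite approximation for the degrees of freedom
df = (sem_c**2 + sem_t**2)**2 / (sem_c**4 / (n_c - 1) + sem_t**4 / (n_t - 1))
p = 2 * stats.t.sf(abs(t), df)
print(f"t = {t:.2f}, df = {df:.1f}, p = {p:.4f}")  # roughly t ≈ 5.4, p < 0.001
```

With an effect that large relative to the spread, n = 7 per group does look adequate for detecting the difference, though that still leaves the donor-to-recipient accounting unexplained.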
I'm sorry, but Sturgeon's Law is in full effect, and the current system encourages spamming out low-tier research for grants/etc. So the bar sits on a completely different metric from what a normal person would expect.
Shameless plug for my own runtime (and compile-time) micro framework: https://github.com/fresho-dev/mancha