Also, is that mainline Linux, or are we going down the hacked-kernel route of the ARM boards, where someone is going to have to pick through the source to get the patches or it will remain on an ancient, unsupported kernel?
> Also, is that mainline Linux, or are we going down the hacked-kernel route of the ARM boards, where someone is going to have to pick through the source to get the patches or it will remain on an ancient, unsupported kernel?
I think getting the full sources alone is sufficient. It might not be desirable for the Linux team to accept patches for every standalone piece of hardware. As long as the patches for the actual ISA are mainlined, I consider that a win.
[1] Maybe there are; I didn't look too closely at the repo, but it looks to me like it's all source code.
I own a small mountain of SBCs, and I rather disagree. 5.10 is old as-is, and often the patches aren't submitted upstream due to their quality. I've made it a rule to only buy boards where there is a recent fork of u-boot and plans to upstream the u-boot/Linux patches. Life is short and I'm tired of doing work for companies pushing this stuff out. I can't back this up, but I believe mainlining prevents e-waste by keeping boards serviceable and relevant longer.
Ah, the obligatory "WhAt If No InTeRnet" comment that happens every time someone complains that software support for a board is shit.
For one, to not have to keep ancient dev environments around when you need to make a small change. Also when you decide to repurpose it for something else, which is far more likely for a hobbyist board like this.
> For most embedded stuff, your choices are leave it in the field or recall.
Sure, if you want to get fired. "Sir, you need to pack up the $200k, 300 kg CNC machine and send it to us so we can upgrade your firmware."
The "we can fly one of our field techs to you to upgrade your firmware" option seems to still be somewhat popular in industrial settings. Though I guess that strongly depends on the specific industry.
Of course, but the poster above talked about "recall or throw away", which are the two worst possible outcomes for both sides.
And for a technician to be able to do that, you need someone who develops the fixes anyway, and it's far cheaper in the long term if you don't need to dig up software versions from 10 years ago just to run a compile...
> Ah, the obligatory "WhAt If No InTeRnet" comment that happens every time someone complains that software support for a board is shit.
There's no what-if: internet is not on that board. Sure, it's available as an add-on board, but by that standard everything has internet via an add-on i2c/USB device.
> Sure, if you want to get fired.
I don't think you work in the field.
> "Sir, you need to pack the $200k worth, 300kg CNC and send it to us so we can upgrade your firmware".
You honestly think that every single factory has every single automation controller internet-connected?
I mean, I dunno what else that snark was supposed to mean, but I can assure you that factories don't let their equipment talk to the internet. Anything the equipment needs gets shipped out to the deployment site.
You aren't working in this field, that much is clear.
> You honestly think that every single factory has every single automation controller internet-connected?
It's not about being internet-connected; that's the point you seem to consistently miss or ignore on purpose, and one I already commented on. Learn to read before you start flailing nonsense at the keyboard: I wrote ONLY about software-stack problems.
It's about having a software stack you can continue developing without wasting time keeping some legacy crap around just so you can compile a code change. But I already wrote that in my previous comment. I wrote nothing about internet connectivity aside from criticizing people like you who use it as a logic-bankrupt argument to excuse a shit software stack.
> I mean, I dunno what else that snark was supposed to mean, but I can assure you that factories don't let their equipment talk to the internet. Anything the equipment needs gets shipped out to the deployment site.
Multiple incidents of people accessing unsecured, SCADA-enabled control systems over the internet prove your baseless assumptions wrong.
> You aren't working in this field, that much is clear.
You seem like a guy who needs 6 follow-ups to explain how to set up an electronic torque wrench and then still breaks the bolt, so I'm glad I'm not.
See, if you were working in the field (any field, really), you'd know that the software stack is usually certified for that particular field.
For EMV, for example, it actually doesn't matter if you have an internet-connected device; you aren't loading a new kernel that hasn't been certified onto the device, and considering that the cost of recertification can sometimes be more than simply moving the customer to new hardware, it's very rarely done.
So, yeah, for hobbyists, having updates is important.
For industry, once something is certified they aren't going to want to eat the cost of recertification unless there's a really good reason.
PS. You shouldn't be this arrogant in an area where you yourself claim to have no expertise. I actually am an expert experienced in large-scale deployments of devices, and I'm keeping my tone as level as possible. You should do the same.
There are lots of reasons you may need updates anyway. We've seen components that die when an internal counter overflows after some number of days. (Was that SSDs?) Customers will find issues you didn't know about. You'll need to get extra metrics out of remote systems. And many others.
Shipping hardware is costly in transport and worker-hours. Being able to update things on site is way better, even if it's "put this on a USB drive, plug in, restart".
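Even the USB-drive flow is trivially scriptable. A minimal sketch of the idea; every path and filename here is made up, and a real updater would verify a signature rather than just a checksum:

```python
#!/usr/bin/env python3
"""Sketch of the "update from a USB stick" pattern; all paths and
filenames are made up for illustration."""
import hashlib
import pathlib
import shutil
import subprocess

BUNDLE = pathlib.Path("/media/usb/firmware.img")
CHECKSUM = pathlib.Path("/media/usb/firmware.img.sha256")
TARGET = pathlib.Path("/opt/app/firmware.img")

if BUNDLE.exists() and CHECKSUM.exists():
    digest = hashlib.sha256(BUNDLE.read_bytes()).hexdigest()
    if digest == CHECKSUM.read_text().split()[0]:
        shutil.copy2(BUNDLE, TARGET)            # stage the new image
        subprocess.run(["reboot"], check=True)  # boot into it
    else:
        print("checksum mismatch, refusing to update")
```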
> Just to point out, the sales page for this board says in big letters that it can do 10/100 ethernet with an add-on board.
So? If we lower the bar to "can do internet with an add-on", then everything can be internet-connected, because there's a network add-on (interface card/module, i2c, USB) that can be used with almost every single device out there.
It's different when a SoC comes with networking built in: that's obviously targeting always-connected devices.
But I think GP does have a good point about ease of serviceability allowing for a longer product life, even on something that you do not expect to require updates.
All of these things can be affected by a kernel bug, though.
So revision 1.2 of my product gets an updated kernel, because the board is supported, and I can do this without a full rework of the hardware, because it is only a software update.
Updates can be applied prior to shipping the hardware. You seem to be advocating that the hardware get no support because you can't think of how things could be updated, which is a strange position to take.
> It might not be desirable for the Linux team to accept patches for every standalone piece of hardware.
I believe this may warrant an "extended mainline" kernel project that has all those extra modules and that works to consolidate them into as few variants as possible, all while providing some continuous hardware testing. It could be maintained in part by the manufacturers themselves, who'd need to provide hardware and high-quality code at the very least to be able to claim support from this extended mainline.
Sounds like my second business idea for the day. I'd love it if someone would run with it.
I have a non-ThinkPad laptop, and it seems like nearly every kernel release brings some regression. Touchpad not working... Then the GPU driver not working... Then suspend-to-disk broken... Regressions that, for a regular non-technical user, would be a big headache.
I would like to 'lend' hardware to someone to prevent this. I'd happily leave my laptop/desktop/SBC overnight compiling a kernel, rebooting into it, running tests, and hopefully finding regressions before they get into a release.
Unfortunately, there doesn't seem to be any project to make this easy. I'd like to just "apt install volunteer-my-hardware-for-overnight-testing". A few thousand people doing that would soon find issues that only affect rare hardware.
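The nightly half of it wouldn't even be much code. Roughly something like this, where the checkout path and the marker-file handshake are made up, and a separate boot-time unit would run the actual tests and report the results:

```python
#!/usr/bin/env python3
"""Hypothetical nightly job: build the latest kernel, install it,
and reboot into it. A separate boot-time unit (not shown) would run
a test suite and upload results. All paths are illustrative."""
import pathlib
import subprocess

SRC = pathlib.Path("/opt/linux")  # assumed kernel git checkout

def run(*cmd):
    subprocess.run(cmd, cwd=SRC, check=True)

run("git", "fetch", "origin")
run("git", "checkout", "origin/master")
run("make", "olddefconfig")  # reuse the known-good .config
run("make", "-j8")
run("sudo", "make", "modules_install", "install")

# Leave a marker so the post-boot unit knows this is a test boot.
marker = pathlib.Path("/var/lib/kerntest/pending")
marker.parent.mkdir(parents=True, exist_ok=True)
marker.write_text("test-boot\n")
subprocess.run(["sudo", "reboot"], check=True)
```

The hard part is recovering when the new kernel doesn't come back up.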
For this kind of testing, you frequently want a 'watchdog': hardware that can auto-reboot a computer if the software locks up or malfunctions. The way it works is that software 'pets' the watchdog every few minutes; if the software malfunctions, it stops petting the watchdog, and the system gets hard-reset and boots a known-good kernel.
Unfortunately, while nearly all computer hardware has a watchdog, the Linux kernel frequently doesn't implement support for it. That in turn means that automatically testing buggy kernels on real hardware frequently ends with the hardware getting stuck and a human needing to reset things manually.
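When the driver does exist, the userspace side is tiny: the kernel exposes /dev/watchdog, any write to it counts as a pet, and writing 'V' right before closing disarms the timer (unless the driver was built no-way-out). A minimal sketch, assuming a hardware timeout longer than the 30-second pet interval:

```python
import os
import time

# Opening /dev/watchdog arms the hardware timer; from this point on
# the board hard-resets unless we keep writing ("petting") in time.
fd = os.open("/dev/watchdog", os.O_WRONLY)

try:
    for _ in range(10):        # pet for ~5 minutes, then disarm
        os.write(fd, b"\0")    # any write counts as a pet
        time.sleep(30)         # must stay under the hardware timeout
finally:
    # "Magic close": a 'V' written just before close() tells the
    # driver to disarm cleanly instead of treating it as a crash.
    os.write(fd, b"V")
    os.close(fd)
```

On a test rig, the trick is to arm the watchdog early and have the test harness do the petting, so a hung kernel under test gets the board hard-reset back to a known-good one.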
Unless it's so bad that the board becomes unbootable and no safe boot option exists that can be engaged via, say, the serial console, being able to power-cycle the board should be more or less enough. If the bootloader can be controlled via a serial console, then this can be automated.
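With a smart plug or relay on the power line and something like pyserial watching the console, the whole recovery loop can be scripted. A rough sketch; the "=>" prompt, baud rate, and the bootcmd_fallback environment script are U-Boot-flavoured guesses that will differ per board:

```python
import time

import serial  # pyserial; assumes a USB-serial adapter on the console

def power_cycle():
    # Placeholder: in practice this would toggle a smart plug or relay.
    input("Power-cycle the board, then press Enter...")

def boot_known_good(port="/dev/ttyUSB0"):
    ser = serial.Serial(port, 115200, timeout=1)
    power_cycle()
    # Send keystrokes so "Hit any key to stop autoboot" catches one.
    buf = b""
    while b"=>" not in buf:        # "=>" is U-Boot's default prompt
        ser.write(b"\n")
        buf += ser.read(256)
        time.sleep(0.1)
    # Boot the known-good image instead of the one under test.
    ser.write(b"run bootcmd_fallback\n")  # hypothetical env script
    ser.close()

if __name__ == "__main__":
    boot_known_good()
```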